WorldWideScience

Sample records for response calculations based

  1. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.
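
    As a rough illustration of the two-phase idea (a toy sketch, not the authors' implementation; the slab model, scattering law and all parameters below are invented for demonstration, and the tabulated PDF is assumed independent of energy), one can precompute an exit-energy PDF for a thin region from an analog simulation and then transport by sampling that stored distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_slab(e_in, n_collisions=50):
    """Toy analog walk through one thin slab: many small-angle,
    small-energy-loss 'collisions'. Returns the exit energy."""
    e = e_in
    for _ in range(n_collisions):
        e *= 1.0 - rng.uniform(0.0, 0.002)   # tiny fractional loss each time
    return e

# Phase 1 (offline): run the expensive analog simulation once over a small
# region and tabulate the exit-state PDF (here: fractional exit energy).
fractions = np.array([analog_slab(1.0) for _ in range(20000)])
hist, edges = np.histogram(fractions, bins=100, range=(0.9, 1.0))
cdf = np.cumsum(hist) / hist.sum()

def sample_exit_fraction():
    """Sample a fractional exit energy from the tabulated PDF."""
    k = np.searchsorted(cdf, rng.random())
    return rng.uniform(edges[k], edges[k + 1])

# Phase 2 (online): RMMC-style transport through many slabs by sampling
# the stored distribution instead of re-simulating every collision.
e = 1.0   # one arbitrary energy unit entering the stack
for _ in range(200):
    e *= sample_exit_fraction()
print(f"energy after 200 slabs: {e:.2e}")
```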

  2. Final disposal room structural response calculations

    International Nuclear Information System (INIS)

    Stone, C.M.

    1997-08-01

    Finite element calculations have been performed to determine the structural response of waste-filled disposal rooms at the WIPP for a period of 10,000 years after emplacement of the waste. The calculations were performed to generate the porosity surface data for the final set of compliance calculations. The most recent reference data for the stratigraphy, waste characterization, gas generation potential, and nonlinear material response have been brought together for this final set of calculations

  3. Site response calculations for nuclear power plants

    International Nuclear Information System (INIS)

    Wight, L.H.

    1975-01-01

    Six typical sites consisting of three soil profiles with average shear wave velocities of 800, 1800, and 5000 ft/sec as well as two soil depths of 200 and 400 ft were considered. Seismic input to these sites was a synthetic accelerogram applied at the surface and corresponding to a statistically representative response spectrum. The response of each of these six sites to this input was calculated with the SHAKE program. The results of these calculations are presented

  4. Calculation of ex-core detector responses

    Energy Technology Data Exchange (ETDEWEB)

    Wouters, R. de; Haedens, M. [Tractebel Engineering, Brussels (Belgium); Baenst, H. de [Electrabel, Brussels (Belgium)]

    2005-07-01

    The purpose of this work, carried out by Tractebel Engineering, is to develop and validate a method for predicting the ex-core detector responses in the NPPs operated by Electrabel. Practical applications are: prediction of ex-core calibration coefficients for the startup power ascension, replacement of xenon transients by theoretical predictions, and analysis of a Rod Drop Accident. The neutron diffusion program PANTHER calculates node-integrated fission sources, which are combined with nodal importances representing the contribution of a neutron born in a node to the ex-core response. These importances are computed with the Monte Carlo program MCBEND in adjoint mode, with a model of the whole core at full power. Other core conditions are treated using sensitivities of the ex-core responses to water densities, computed with forward Monte Carlo. The Scaling Factors (SF), or ratios of the measured currents to the calculated response, have been established on a total of 550 in-core flux maps taken in four NPPs. The method has been applied to 15 startup transients, using the average SF obtained from previous cycles, and to 28 xenon transients, using the SF obtained from the in-core map immediately preceding the transient. The values of power (P) and axial offset (AOi) reconstructed with the theoretical calibration agree well with the measured values. The ex-core responses calculated during a rod drop transient have been successfully compared with available measurements, and with theoretical data obtained by alternative methods. In conclusion, the method is adequate for the practical applications previously listed. (authors)
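
    In compact form, the folding step described here is (our notation, not necessarily the paper's):

```latex
R \;=\; \sum_{n} S_n\, I_n , \qquad \mathrm{SF} \;=\; \frac{J^{\mathrm{meas}}}{R},
```

    where S_n is the node-integrated fission source from PANTHER, I_n the adjoint importance of node n computed with MCBEND, and SF the scaling factor that calibrates the calculated response against the measured detector current.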

  5. Calculation of integrated biological response in brachytherapy

    International Nuclear Information System (INIS)

    Dale, Roger G.; Coles, Ian P.; Deehan, Charles; O'Donoghue, Joseph A.

    1997-01-01

    Purpose: To present analytical methods for calculating or estimating the integrated biological response in brachytherapy applications which allow for the presence of dose gradients. Methods and Materials: The approach uses linear-quadratic (LQ) formulations to identify an equivalent biologically effective dose (BED_eq) which, if applied to a specified tissue volume, would produce the same biological effect as that achieved by a given brachytherapy application. For simple geometrical cases, BED multiplying factors have been derived which allow the equivalent BED for tumors to be estimated from a single BED value calculated at a dose reference point. For more complex brachytherapy applications a voxel-by-voxel determination of the equivalent BED will be more accurate. Equations are derived which, when incorporated into brachytherapy software, would facilitate such a process. Results: At both high and low dose rates, the BEDs calculated at the dose reference point are shown to be lower than the true values by an amount which depends primarily on the magnitude of the prescribed dose; the BED multiplying factors are higher for smaller prescribed doses. The multiplying factors are less dependent on the assumed radiobiological parameters. In most clinical applications involving multiple sources, particularly those in multiplanar arrays, the multiplying factors are likely to be smaller than those derived here for single sources. The overall suggestion is that the radiobiological consequences of dose gradients in well-designed brachytherapy treatments, although important, may be less significant than is sometimes supposed. The modeling exercise also demonstrates that the integrated biological effect associated with fractionated high-dose-rate (FHDR) brachytherapy will usually be different from that for an 'equivalent' continuous low-dose-rate (CLDR) regime. For practical FHDR regimes involving relatively small numbers of fractions, the integrated biological effect to
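
    For context, the LQ biologically effective dose expressions that such calculations build on are (standard radiobiology results, quoted here for orientation rather than reproduced from the paper):

```latex
\mathrm{BED}_{\mathrm{FHDR}} = n d \left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{BED}_{\mathrm{CLDR}} = R T \left(1 + \frac{2R}{\mu\,(\alpha/\beta)}
\left[1 - \frac{1 - e^{-\mu T}}{\mu T}\right]\right),
```

    where n is the number of fractions of dose d, R the dose rate, T the treatment time, μ the sublethal-damage repair constant and α/β the tissue-specific LQ ratio; BED_eq is then the BED of the uniform dose that reproduces the effect of the actual, gradient-containing distribution.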

  6. Groebner bases in perturbative calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2004-10-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations.
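
    As a toy illustration of the underlying algebraic tool (not of the integration-by-parts reduction itself), a Groebner basis and a reduction modulo it can be computed with sympy:

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Two polynomial relations standing in for recurrence relations; the
# Groebner basis is a canonical set against which elements are reduced.
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')
print(G.exprs)                  # the basis polynomials
print(G.reduce(x**3 + y**3))    # coefficients and remainder of a reduction
```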

  7. Groebner bases in perturbative calculations

    International Nuclear Information System (INIS)

    Gerdt, Vladimir P.

    2004-01-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations.

  8. Calculating the Responses of Self-Powered Radiation Detectors.

    Science.gov (United States)

    Thornton, D. A.

    Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model, based on the Explicit Representation of Radiation Sources and Transport (ERRST), is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field, as well as the internal charge deposition effects of the transport of photons and electrons, have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of an SPD is evaluated as the sum of contributions from individual

  9. Calculation of reactivity using a finite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Suescun Diaz, Daniel [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil); Senra Martinez, Aquilino [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)], E-mail: aquilino@lmp.ufrj.br; Carvalho Da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)

    2008-03-15

    A new formulation is presented in this paper to solve the inverse kinetics equation. The method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. Reactivity can be written as a sum of convolutions with impulse responses, characteristic of a linear system. For its digital form the Z-transform, the discrete version of the Laplace transform, is used. This new method of reactivity calculation has special features; in particular, the linear part is characterized by a finite impulse response (FIR) filter. The FIR filter is always stable and time-invariant and, moreover, can be implemented in non-recursive form. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way.
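
    A minimal sketch of the idea (the paper's exact discretization is not reproduced; the six-group constants and generation time below are typical textbook placeholder values): in the inverse point kinetics equation the delayed-neutron term is a convolution of the power history with decaying-exponential kernels, which becomes a non-recursive FIR filter once the kernels are truncated.

```python
import numpy as np

# Inverse point kinetics: rho(t) = beta + Lambda*p'(t)/p(t)
#   - (1/p(t)) * sum_i beta_i*lam_i * integral p(t') exp(-lam_i(t-t')) dt'.
# Typical textbook six-group delayed-neutron data (placeholder values).
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
beta = beta_i.sum()
LAMBDA = 2.0e-5   # prompt neutron generation time, s (assumed)

dt = 0.01
t = np.arange(0.0, 60.0, dt)
p = np.exp(0.01 * t)   # example power history; p = 0 assumed before t = 0

rho = LAMBDA * np.gradient(p, dt) / p + beta
kernel_len = int(50.0 / dt)   # FIR truncation; must be long vs 1/lam_i
tk = np.arange(kernel_len) * dt
for bi, li in zip(beta_i, lam_i):
    h = bi * li * np.exp(-li * tk) * dt     # non-recursive FIR coefficients
    rho -= np.convolve(p, h)[: len(p)] / p  # causal convolution with history
print(f"reactivity at t = 50 s: {rho[int(50.0/dt)]:.2e} dk/k")
```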

  10. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.

  11. Proportional counter response calculations for gallium solar neutrino detectors

    International Nuclear Information System (INIS)

    Kouzes, R.T.; Reynolds, D.

    1989-01-01

    Gallium-based solar neutrino detectors are sensitive to the primary pp reaction in the sun. Two experiments using gallium, SAGE in the Soviet Union and GALLEX in Europe, are under construction and will produce data by 1989. The radioactive 71Ge, produced by neutrinos interacting with the gallium detector material, is chemically extracted and counted in miniature proportional counters. A number of calculations have been carried out to simulate the response of these counters to the decay of 71Ge and to background events.

  12. Exact-exchange-based quasiparticle calculations

    International Nuclear Information System (INIS)

    Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas

    2000-01-01

    One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society

  13. A new approach to calculating spatial impulse responses

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1997-01-01

    Using linear acoustics the emitted and scattered ultrasound field can be found by using spatial impulse responses as developed by Tupholme (1969) and Stepanishen (1971). The impulse response is calculated by the Rayleigh integral by summing the spherical waves emitted from all of the aperture...
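
    The spatial impulse response referred to is the standard Rayleigh-integral result:

```latex
h(\vec{r}_1, t) \;=\; \int_{S} \frac{\delta\!\left(t - |\vec{r}_1 - \vec{r}_2|/c\right)}{2\pi\,|\vec{r}_1 - \vec{r}_2|}\; dS,
```

    where r1 is the field point, r2 runs over the aperture surface S and c is the speed of sound; the emitted pressure then follows by convolving the surface velocity with h and differentiating in time.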

  14. Dose-Response Calculator for ArcGIS

    Science.gov (United States)

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
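
    A minimal numpy stand-in for what the tool does conceptually (the actual ArcGIS add-in and its API are not shown here; the two rasters and the model surface are synthetic): extract paired values from a "dose" raster and a model-prediction "response" raster, bin by dose, and plot the mean response per bin.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Stand-ins for two co-registered rasters (e.g. % habitat cover and a
# predicted occurrence probability from a spatially explicit model).
dose = rng.uniform(0, 100, size=(200, 200))
response = 1 / (1 + np.exp(-(dose - 50) / 10)) + rng.normal(0, 0.05, dose.shape)

d, r = dose.ravel(), response.ravel()
bins = np.linspace(0, 100, 21)
idx = np.digitize(d, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
mean_r = [r[idx == k].mean() for k in range(1, len(bins))]

plt.plot(centers, mean_r, marker='o')
plt.xlabel('dose (e.g. % habitat cover)')
plt.ylabel('mean predicted response')
plt.title('Dose-response curve from paired rasters')
plt.show()
```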

  15. Calculated energy response of lithium fluoride finger-tip dosimeters

    International Nuclear Information System (INIS)

    Johns, T.F.

    1965-07-01

    Calculations have been made of the energy response of the lithium fluoride thermoluminescent dosimeters being used at A.E.E. Winfrith for the measurement of radiation doses to the finger-tips of people handling radioactive materials. It is shown that the energy response is likely to be materially affected if the sachet in which the powder is held contains elements with atomic numbers much higher than 9 (e.g. if the sachet is made from polyvinyl chloride). (author)

  16. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    Science.gov (United States)

    Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
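
    Schematically, the electric-enthalpy functional referred to has the form used in the Berry-phase polarization literature (notation ours, quoted for orientation):

```latex
E[\Psi] \;=\; \langle \Psi | \hat{H} | \Psi \rangle \;-\; \boldsymbol{\mathcal{E}} \cdot \mathbf{P}[\Psi],
\qquad
P[\Psi] \;\propto\; -\frac{L}{2\pi}\, \mathrm{Im}\,\ln \langle \Psi |\, e^{\,i \frac{2\pi}{L}\hat{X}} \,| \Psi \rangle,
```

    where L is the supercell length and X̂ = Σ_j x̂_j the many-body position operator; the polarization enters through the single-point Berry phase rather than an ill-defined expectation value of position under periodic boundary conditions.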

  17. MONTE CARLO CALCULATION OF THE ENERGY RESPONSE OF THE NARF HURST-TYPE FAST-NEUTRON DOSIMETER

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, T. W.

    1963-06-15

    The response function for the fast-neutron dosimeter was calculated by the Monte Carlo technique (Code K-52) and compared with a calculation based on the Bragg-Gray principle. The energy deposition spectra so obtained show that the response spectra become softer with increased incident neutron energy above 3 MeV. The K-52 calculated total response is more nearly constant with energy than the Bragg-Gray response. The former increases 70 percent from 1 MeV to 14 MeV while the latter increases 135 percent over this energy range. (auth)

  18. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU) accelerated simulations of first principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated about 10 times. As a result, the total computation time of a first principles calculation is reduced to 15 percent of that of the CPU.

  19. Evaluation bases for calculation methods in radioecology

    International Nuclear Information System (INIS)

    Bleck-Neuhaus, J.; Boikat, U.; Franke, B.; Hinrichsen, K.; Hoepfner, U.; Ratka, R.; Steinhilber-Schwab, B.; Teufel, D.; Urbach, M.

    1982-03-01

    The seven contributions in this book deal with the state and problems of radioecology. In particular, they analyze: the propagation of radioactive materials in the atmosphere; the transfer of radioactive substances from the soil into plants, and from animal feed into meat; the exposure pathways for, and high-risk groups of, the population; the uncertainties and bandwidth of the ingestion factor; and the treatment of questions of radioecology in practice. The calculation model is assessed, and the difficulty of laying down data in the general calculation basis is evaluated. (DG)

  20. The giant resonances in hot nuclei. Linear response calculations

    International Nuclear Information System (INIS)

    Braghin, F.L.; Vautherin, D.; Abada, A.

    1995-01-01

    The isovector response function of hot nuclear matter is calculated using various effective Skyrme interactions. For Skyrme forces with a small effective mass the strength distribution is found to be nearly independent of temperature and shows little collective effect. In contrast, effective forces with an effective mass close to unity produce at zero temperature sizeable collective effects which disappear at temperatures of a few MeV. The relevance of these results to the saturation, observed in recent experiments beyond T = 3 MeV, of the multiplicity of photons emitted by the giant dipole resonance in hot nuclei is discussed. (authors). 12 refs., 3 figs.

  1. Calculating Traffic based on Road Sensor Data

    NARCIS (Netherlands)

    Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Rata, Debanshu; Sikora, Monika

    2014-01-01

    Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for

  2. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  3. Calculation of Lightning Transient Responses on Wind Turbine Towers

    Directory of Open Access Journals (Sweden)

    Xiaoqing Zhang

    2013-01-01

    An efficient method is proposed in this paper for calculating lightning transient responses on wind turbine towers. In the proposed method, the actual tower body is simplified as a multiconductor grid in the shape of a cylinder. A set of formulas is given for evaluating the circuit parameters of the branches in the multiconductor grid. On the basis of the circuit parameters, the multiconductor grid is further converted into an equivalent circuit. The circuit equation is built in the frequency domain to take into account the effect of the frequency-dependent resistances and inductances on lightning transients. The lightning transient responses can be obtained by using the discrete Fourier transform with exponential sampling to invert the frequency-domain solution of the circuit equation. A numerical example is given to examine the applicability of the proposed method.
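
    A minimal sketch of the frequency-domain-then-inverse-transform step on a stand-in circuit (a single series R-L branch rather than the paper's multiconductor grid; all component values are assumed, and the damping constant c implements the usual exponential-sampling trick for numerical inverse Laplace transforms):

```python
import numpy as np

# Double-exponential lightning-type source i(t) = I0*(exp(-a t) - exp(-b t)),
# whose analytic Laplace transform is used as the frequency-domain input.
I0, a, b = 10e3, 1.4e4, 6.0e6     # amplitude (A) and waveshape constants (1/s)
R, L = 10.0, 5e-6                 # stand-in series R-L branch (assumed values)

T, N = 1e-4, 4096                 # observation window (s) and sample count
dt = T / N
c = 6.0 / T                       # exponential-sampling (damping) constant
w = 2 * np.pi * np.fft.rfftfreq(N, dt)
s = c + 1j * w                    # shifted Laplace variable s = c + j*omega

Isrc = I0 * (1.0 / (s + a) - 1.0 / (s + b))   # source current in s-domain
V = (R + s * L) * Isrc                        # branch voltage response

t = np.arange(N) * dt
# x(t) ~ (e^{ct}/T) * sum_k X(c + j w_k) e^{j w_k t}; irfft supplies the sum.
v = np.fft.irfft(V, n=N) * (N / T) * np.exp(c * t)
print(f"peak voltage ~ {v.max()/1e3:.1f} kV at t = {t[np.argmax(v)]*1e6:.2f} us")
```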

  4. SCINFUL-QMD: Monte Carlo based computer code to calculate response function and detection efficiency of a liquid organic scintillator for neutron energies up to 3 GeV

    International Nuclear Information System (INIS)

    Satoh, Daiki; Sato, Tatsuhiko; Shigyo, Nobuhiro; Ishibashi, Kenji

    2006-11-01

    The Monte Carlo based computer code SCINFUL-QMD has been developed to evaluate the response function and detection efficiency of a liquid organic scintillator for neutrons from 0.1 MeV to 3 GeV. This code is a modified version of SCINFUL, which was developed at Oak Ridge National Laboratory in 1988 to provide a calculated full response anticipated for neutron interactions in a scintillator. The upper limit of the applicable energy was extended from 80 MeV to 3 GeV by introducing the quantum molecular dynamics model incorporated with the statistical decay model (QMD+SDM) in the high-energy nuclear reaction part. The particles generated in QMD+SDM are the neutron, proton, deuteron, triton, 3He nucleus, alpha particle, and charged pion. Secondary reactions by neutrons, protons, and pions inside the scintillator are also taken into account. With the extension of the applicable energy, the database of total cross sections for hydrogen and carbon nuclei was upgraded. This report describes the physical model, the computational flow and how to use the code. (author)

  5. The calculated neutron response of a sphere with the multi-counters

    International Nuclear Information System (INIS)

    Li Taosheng; Yang Lianzhen; Li Dongyu

    2004-01-01

    Based on the differences in the neutron distribution within the moderator, three position-sensitive proportional counters, perpendicular to each other, are inserted into the moderator. The energy responses for six spherical moderators and six incidence directions have been calculated with the MCNP4A code. The calculated results for two methods of dividing the spherical moderator into radial regions have been analyzed and compared. (authors)

  6. [Biometric bases: basic concepts of probability calculation].

    Science.gov (United States)

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.

  7. The ripple electromagnetic calculation: accuracy demand and possible responses

    International Nuclear Information System (INIS)

    Cocilovo, V.; Ramogida, G.; Formisano, A.; Martone, R.; Portone, A.; Roccella, M.; Roccella, R.

    2006-01-01

    Due to a number of causes (the finite number of toroidal field coils or the presence of concentrated blocks of magnetic materials, such as the neutral beam shielding), the actual magnetic configuration in a tokamak differs from the desired one. For example, a ripple is added to the ideal axisymmetric toroidal field, impacting the equilibrium and stability of the plasma column; as a further example, the magnetic field outside the plasma affects the operation of a number of critical components, including the diagnostic system and the neutral beam. Therefore the actual magnetic field has to be suitably calculated and its shape controlled within the required limits. Due to the complexity of its design, the problem is quite critical for the ITER project. In this paper the problem is discussed from both the mathematical and the numerical point of view. In particular, a complete formulation is proposed, taking into account both the presence of non-linear magnetic materials and the fully 3D geometry. Then the quality level requirements are discussed, including the accuracy of the calculations and the spatial resolution. As a consequence, numerical tools able to fulfil the quality needs while requiring a reasonable computational burden are considered. In particular, possible tools based on numerical FEM schemes are considered; in addition, in spite of the presence of non-linear materials, the practical possibility of using Biot-Savart based approaches as cross-check tools is also discussed. The paper also analyses possible simplifications of the geometry able to make the actual calculation feasible while guaranteeing the required accuracy. Finally, the characteristics required for a correction system able to effectively counteract the magnetic field degradation are presented. A number of examples are also reported and commented on. (author)
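
    As a much-simplified example of the Biot-Savart cross-check mentioned (straight-filament model, not the paper's FEM formulation; coil count, currents and radii below are assumed): model each coil's inner and outer legs as infinite vertical filaments and evaluate the toroidal-field ripple on the midplane.

```python
import numpy as np

mu0 = 4e-7 * np.pi
N_coils, I = 18, 9.0e6            # assumed coil count and current per coil (A)
R_in, R_out = 3.0, 12.0           # assumed radii of inner/outer coil legs (m)

phi_c = 2 * np.pi * np.arange(N_coils) / N_coils
legs = [(R_in * np.array([np.cos(p), np.sin(p)]), +I) for p in phi_c] + \
       [(R_out * np.array([np.cos(p), np.sin(p)]), -I) for p in phi_c]

def B_toroidal(R, phi):
    """Toroidal field component at midplane point (R, phi); each coil leg
    is an infinite vertical filament (2D Biot-Savart)."""
    pt = np.array([R * np.cos(phi), R * np.sin(phi)])
    e_tor = np.array([-np.sin(phi), np.cos(phi)])
    B = np.zeros(2)
    for pos, cur in legs:
        d = pt - pos
        # infinite wire along z: B = mu0*I/(2 pi) * (z_hat x d) / |d|^2
        B += mu0 * cur / (2 * np.pi * (d @ d)) * np.array([-d[1], d[0]])
    return B @ e_tor

R_eval = 8.0                                     # evaluation radius (m)
phis = np.linspace(0.0, 2 * np.pi / N_coils, 60) # one ripple period
Bt = np.array([B_toroidal(R_eval, p) for p in phis])
ripple = (Bt.max() - Bt.min()) / (Bt.max() + Bt.min())
print(f"B_t ~ {Bt.mean():.2f} T, ripple ~ {100 * ripple:.3f} %")
```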

  8. Goal based mesh adaptivity for fixed source radiation transport calculations

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.

    2013-01-01

    Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved which solves the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach: the two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed.

  9. Calculations of dosimetric parameters and REM meter response for Be(d, n) sources

    International Nuclear Information System (INIS)

    Chen Changmao

    1988-01-01

    Based on recent data on neutron spectra, the average energy, effective energy and fluence-to-dose-equivalent conversion coefficients are calculated for some Be(α, n) neutron sources of different types and structures. The responses of the 2202D and 0075 REM meters to these neutron spectra are also estimated. The results indicate that the relationships between average energy and conversion coefficient or REM meter response can be described by simple functions.

  10. CO2 impulse response curves for GWP calculations

    International Nuclear Information System (INIS)

    Jain, A.K.; Wuebbles, D.J.

    1993-01-01

    The primary purpose of Global Warming Potentials (GWPs) is to compare the effectiveness of emission strategies for various greenhouse gases to those for CO2. GWPs are quite sensitive to the amount of CO2. Unlike all other gases emitted in the atmosphere, CO2 does not have a chemical or photochemical sink within the atmosphere. Removal of CO2 is therefore dependent on exchanges with other carbon reservoirs, namely the ocean and the terrestrial biosphere. Climate-induced changes in ocean circulation or marine biological productivity could significantly alter the atmospheric CO2 lifetime. Moreover, continuing forest destruction, nutrient limitations or temperature-induced increases of respiration could also dramatically change the lifetime of CO2 in the atmosphere. Determination of the current CO2 sinks, and of how these sinks are likely to change with increasing CO2 emissions, is crucial to the calculation of GWPs. It is interesting to note that the impulse response function is sensitive to the initial state of the ocean-atmosphere system into which CO2 is emitted. This is due to the fact that in our model the CO2 flux from the atmosphere to the mixed layer is a nonlinear function of ocean surface total carbon.
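
    For reference, the GWP definition these impulse response functions feed into is the standard IPCC form:

```latex
\mathrm{GWP}_x(H) \;=\; \frac{\int_0^H a_x\,R_x(t)\,dt}{\int_0^H a_{\mathrm{CO}_2}\,R_{\mathrm{CO}_2}(t)\,dt},
\qquad
R_{\mathrm{CO}_2}(t) \;\approx\; a_0 + \sum_i a_i\, e^{-t/\tau_i},
```

    where H is the time horizon, a_x the radiative efficiency of gas x and R_x(t) its impulse response (the fraction of an emitted pulse remaining at time t); the multi-exponential fit for CO2 reflects exchange with several ocean and biosphere reservoirs rather than a single atmospheric lifetime.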

  11. Earthquake accelerations estimation for construction calculating with different responsibility degrees

    International Nuclear Information System (INIS)

    Dolgaya, A.A.; Uzdin, A.M.; Indeykin, A.V.

    1993-01-01

    The object of investigation is the design amplitude of accelerograms used in the evaluation of the seismic stability of critical structures, first and foremost NPS. The amplitude level is established depending on the degree of responsibility of the structure and on the prevailing period of earthquake action at the construction site. The investigation procedure is based on a statistical analysis of 310 earthquakes. At the first stage of statistical data processing, the correlation dependence of both the mathematical expectation and the root-mean-square deviation of the peak acceleration of an earthquake on its prevailing period was established. At the second stage, the most suitable law for the distribution of acceleration about the mean was chosen. To determine the parameters of this distribution, the maximum conceivable acceleration, which must not be exceeded, was specified; the other distribution parameters are determined from the statistical data. At the third stage, the dependencies of design amplitude on the prevailing period of the seismic effect were established for different structures and equipment. The data obtained make it possible to recommend levels of the safe-shutdown earthquake (SSE) and operating basis earthquake (OBE) for objects of various responsibility categories when designing NPS. (author)

  12. Numerical calculation models of the elastoplastic response of a structure under seismic action

    International Nuclear Information System (INIS)

    Edjtemai, Nima.

    1982-06-01

    Two digital calculation models developed in this work make it possible to analyze the exact dynamic behaviour of ductile structures with one or several degrees of freedom during earthquakes. With the first model, response spectra were built in the linear and non-linear domains for different damping and ductility values and two types of seismic accelerograms. The comparative study of these spectra made it possible to check the validity of certain hypotheses suggested for the construction of elastoplastic spectra from the corresponding linear spectra. A simplified method of non-linear seismic calculation, based on modal analysis and elastoplastic response spectra, was then applied to structures with a varying number of degrees of freedom. The results obtained in this manner were compared with those provided by an exact calculation using the second digital model developed by us.

  13. Uncertain hybrid model for the response calculation of an alternator

    International Nuclear Information System (INIS)

    Kuczkowiak, Antoine

    2014-01-01

    The complex structural dynamic behavior of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision-support tools to assess the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision-support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity-to-data and robustness-to-uncertainties, which expresses that robustness improves as fidelity deteriorates, is illustrated on an industrial structure by using both reduced-order model and surrogate model techniques. (author)

  14. CLEAR (Calculates Logical Evacuation And Response): A Generic Transportation Network Model for the Calculation of Evacuation Time Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
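
    Conceptually, the core of such a simulation is a speed-density relation per road segment plus queue handling at intersections. A minimal sketch using the classical Greenshields relation as a stand-in (CLEAR's actual flow model is not reproduced here; the free-flow speed and jam density are assumed values):

```python
import numpy as np

V_FREE = 80.0    # free-flow speed, km/h (assumed)
K_JAM = 120.0    # jam density, vehicles/km (assumed)

def speed(density):
    """Greenshields: speed falls linearly with density, reaching 0 at jam."""
    return V_FREE * np.clip(1.0 - density / K_JAM, 0.0, 1.0)

def travel_time_h(length_km, density):
    v = speed(density)
    return np.inf if v <= 0 else length_km / v

# Example: time to traverse a 5 km segment as the evacuation load grows.
for k in (10, 40, 80, 110):
    print(f"density {k:3d} veh/km -> {60 * travel_time_h(5.0, k):6.1f} min")
```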

  15. CLEAR (Calculates Logical Evacuation And Response): A generic transportation network model for the calculation of evacuation time estimates

    International Nuclear Information System (INIS)

    Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)

  16. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two different interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the measured electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use interpolation algorithms to calculate EM parameters from limited samples. • Interpolation methods can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculated RL based on interpolation is consistent with RL from experiment.
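
    A small sketch of the comparison on synthetic data (the measured permittivity curves from the paper are not reproduced): Lagrange interpolation uses sample values only, while Hermite interpolation also matches derivatives at the sample points.

```python
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

# Synthetic stand-in for a measured EM parameter vs. frequency (GHz).
f = np.linspace(2.0, 18.0, 7)                  # sparse sample points
eps = 5.0 + 3.0 / (1.0 + (f / 8.0) ** 2)       # smooth 'permittivity' curve
deps = np.gradient(eps, f)                     # derivative estimates

lag = lagrange(f, eps)                         # Lagrange polynomial
her = CubicHermiteSpline(f, eps, deps)         # Hermite: values + slopes

f_fine = np.linspace(2.0, 18.0, 200)
truth = 5.0 + 3.0 / (1.0 + (f_fine / 8.0) ** 2)
print("max |error| Lagrange:", np.abs(lag(f_fine) - truth).max())
print("max |error| Hermite :", np.abs(her(f_fine) - truth).max())
```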

  17. Double sliding-window technique: a new method to calculate the neuronal response onset latency.

    Science.gov (United States)

    Berényi, Antal; Benedek, György; Nagy, Attila

    2007-10-31

    Neuronal response onset latency provides important data on the information processing within the central nervous system. In order to enhance the quality of onset latency estimation, we have developed a 'double sliding-window' technique, which combines the advantages of mathematical methods with the reliability of standard statistical processes. This method is based on repetitive series of statistical probes between two virtual time windows. The layout of the significance curve reveals the starting points of changes in neuronal activity in the form of break-points between linear segments. A second-order difference function is applied to determine the position of maximum slope change, which corresponds to the onset of the response. In comparison with Poisson spike-train analysis, the cumulative sum technique and the method of Falzett et al., this 'double sliding-window' technique seems to be a more accurate automated procedure for calculating the response onset latency over a broad range of neuronal response characteristics.
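
    A schematic reimplementation of the procedure as described (window length, the statistical test used and the break-point rule are our guesses at reasonable choices, not the authors' exact settings):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic pooled spike counts in 1 ms bins: baseline firing up to the
# true onset at 120 ms, elevated firing afterwards (50 trials pooled).
n_bins, onset_true = 300, 120
rate_hz = np.where(np.arange(n_bins) < onset_true, 10.0, 40.0)
counts = rng.poisson(rate_hz / 1000.0 * 50)

win = 20   # length of each sliding window, in bins
sig = np.zeros(n_bins)
for i in range(win, n_bins - win):
    before = counts[i - win:i]   # window trailing bin i
    after = counts[i:i + win]    # window leading bin i
    p = stats.mannwhitneyu(before, after, alternative='two-sided').pvalue
    sig[i] = -np.log10(p)        # significance curve

# The significance curve peaks where the two windows straddle the onset;
# the sharpest break (most negative second-order difference) marks it.
sig_s = np.convolve(sig, np.ones(5) / 5.0, mode='same')   # light smoothing
onset_est = np.argmin(np.diff(sig_s, 2)) + 1
print(f"estimated onset: {onset_est} ms (true: {onset_true} ms)")
```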

  18. Programmable calculator: alternative to minicomputer-based analyzer

    International Nuclear Information System (INIS)

    Hochel, R.C.

    1979-01-01

    Described are a number of typical field and laboratory counting systems that use standard stand-alone multichannel analyzers (MCAs) interfaced to a Hewlett-Packard Company (HP 9830) programmable calculator. Such systems can offer significant advantages in cost and flexibility over a minicomputer-based system. Because most laboratories tend to accumulate MCAs over the years, the programmable calculator also offers an easy way to upgrade the laboratory while making optimum use of existing systems. Software programs are easily tailored to fit a variety of general or specific applications. The only disadvantage of the calculator versus a computer-based system is the speed of analysis; however, for most applications this handicap is minimal. The applications discussed give a brief overview of the power and flexibility of the MCA-calculator approach to automated counting and data reduction.

  19. Global calculation of PWR reactor core using the two group energy solution by the response matrix method

    International Nuclear Information System (INIS)

    Conti, C.F.S.; Watson, F.V.

    1991-01-01

    A computational code to solve the two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. That method solves the global problem of a PWR core without using the cross-section homogenization process; it is thus equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by a first-order perturbative method and considers expansions in arbitrary-order Fourier series for the boundary and interior fluxes. (author)
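
    Schematically, the Response Matrix Method couples nodes through partial currents on their interfaces (generic form, our notation):

```latex
\mathbf{J}^{\mathrm{out}}_n \;=\; \mathbf{R}_n\,\mathbf{J}^{\mathrm{in}}_n \;+\; \mathbf{S}_n,
```

    where, for each node n, the vector of outgoing partial currents (expanded here in Fourier series on the node faces) is obtained from the incoming currents via the node's response matrix R_n plus a source term; the global flux solution follows by iterating on the interface currents, each node's outgoing current becoming its neighbor's incoming one.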

  20. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
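
    In outline (following the mean-action-time idea the abstract describes, in simplified notation):

```latex
F(t;x) \;=\; 1 - \frac{u(x,t) - u_\infty(x)}{u_0(x) - u_\infty(x)},
\qquad
M_k(x) \;=\; k \int_0^\infty t^{\,k-1}\,\bigl[1 - F(t;x)\bigr]\,dt,
```

    where F behaves like a cumulative distribution function for the transition from the initial state u_0 to the steady state u_∞, and the raw moments M_k can be evaluated without time-stepping the transient equation; the first moment M_1 recovers the classical mean action time, which for a homogeneous aquifer scales as L²/D.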

  1. Analytical predictions of SGEMP response and comparisons with computer calculations

    International Nuclear Information System (INIS)

    de Plomb, E.P.

    1976-01-01

    An analytical formulation for the prediction of SGEMP surface current response is presented. Only two independent dimensionless parameters are required to predict the peak magnitude and rise time of SGEMP-induced surface currents. The analysis applies to limited (high-fluence) emission as well as unlimited (low-fluence) emission. Cause-effect relationships for SGEMP response are treated quantitatively and yield simple power-law dependencies between several physical variables. Analytical predictions for a large matrix of SGEMP cases are compared with an array of about thirty-five computer solutions of similar SGEMP problems, collected from three independent research groups. The theoretical solutions generally agree with the computer solutions as well as the computer solutions agree with one another; such comparisons typically show variations of less than a factor of two.

  2. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Accounting for post-employment benefits based on actuarial calculations at present remains a subject studied in Moldova only theoretically. The application of actuarial calculations in accounting in fact reflects its evolving character. Because national accounting standards have been adapted to international ones, which in turn require the valuation of assets and debts at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in the entity's financial statements and provided to its internal and external users. Hence arises the need to present highly reliable information, which can be provided by applying actuarial calculations.

  3. A calculation model for a HTR core seismic response

    International Nuclear Information System (INIS)

    Buland, P.; Berriaud, C.; Cebe, E.; Livolant, M.

    1975-01-01

    The paper presents the experimental results obtained at Saclay on an HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal tests and time-history response tests. The accelerations of the graphite blocks, the forces on the boundaries, the relative displacement of the core and the PCRV model, and the impact velocities of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core. The resonant frequency of the core depends on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped-mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. The time-history displacement at the level of the PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP)

  4. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    International Nuclear Information System (INIS)

    M. Gross

    2004-01-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficients for analysis of waste package and drip shield damage due to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, the number of fracture patterns and the rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

  5. Data base for terrestrial food pathways dose commitment calculations

    International Nuclear Information System (INIS)

    Bailey, C.E.

    1979-01-01

    A computer program is under development to allow calculation of the dose-to-man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain and provides growth and consumption factors for a variety of radionuclides.

  6. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

    The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself a nonstationary random vibration, and calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate a random response, and the integral of the mean square value corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
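
    To make the quantity concrete: for a lightly damped single-degree-of-freedom oscillator driven by white noise of two-sided spectral density S0 modulated by an envelope g(t), a quasi-stationary estimate (a standard random-vibration result quoted for orientation, not the paper's exact method) is

```latex
\ddot{x} + 2\zeta\omega_0\dot{x} + \omega_0^2 x = g(t)\,w(t),
\qquad
\int_0^\infty E\bigl[x^2(t)\bigr]\,dt \;\approx\; \frac{\pi S_0}{2\zeta\omega_0^3}\int_0^\infty g^2(t)\,dt,
```

    i.e. the integral of the mean square response (the total-energy measure) follows from the stationary variance πS0/(2ζω0³) weighted by the squared envelope, which is accurate when the envelope varies slowly compared with the system's correlation time.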

  7. Software-Based Visual Loan Calculator For Banking Industry

    Science.gov (United States)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    A loan calculator for the banking industry is very necessary in the modern-day banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .Net (VB.Net). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.Net operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.Net programming was done and implemented, and the software proved satisfactory.
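
    The interest calculation such a tool performs is typically the standard amortization formula; a minimal Python equivalent of the arithmetic behind the GUI (the paper's VB.Net code is not reproduced, and the example figures are illustrative):

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized loan payment: A = P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12.0
    if r == 0.0:
        return principal / months
    return principal * r / (1.0 - (1.0 + r) ** -months)

P, rate, n = 500_000.0, 0.12, 60        # example loan: 12% a year over 5 years
A = monthly_payment(P, rate, n)
total_interest = A * n - P
print(f"monthly payment: {A:,.2f}")
print(f"total interest : {total_interest:,.2f}")
```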

  8. Calculation of the spin-isospin response functions in an extended semi-classical theory

    International Nuclear Information System (INIS)

    Chanfray, G.

    1987-01-01

    We present a semi-classical calculation of the spin-isospin response functions beyond Thomas-Fermi theory. We show that surface-peaked ℏ² corrections reduce the collective effects predicted by Thomas-Fermi calculations. These effects, small for a volume response, become important for surface responses probed by hadrons. This yields a considerable improvement in the agreement with the (p, p') Los Alamos data.

  9. Calculations of the characteristics of accelerator-based neutron sources

    International Nuclear Information System (INIS)

    Tertytchnyi, R.G.; Shorin, V.S.

    2000-01-01

    Accelerator-based quasi-monoenergetic neutron sources (the T(p,n), D(d,n), T(d,n) and 7Li(p,n) reactions) are widely used in experiments measuring the interaction cross-sections of fast neutrons with nuclei. The present work describes a code for calculating the yields and spectra of neutrons generated in (p,n)- and (d,n)-reactions on targets of light nuclei (D, T, 7Li). The peculiarities of the stopping processes of charged particles (with incident energies up to 15 MeV) in multilayer and multicomponent targets are taken into account. The code is implemented as 'SOURCE', a subroutine for the well-known MCNP code. Some calculation results for the most popular accelerator-based neutron sources are given. (authors)

  10. New Products and Technologies Based on Calculations of Developed Areas

    Directory of Open Access Journals (Sweden)

    Gheorghe Vertan

    2013-09-01

    According to statistics, the only currently prosperous countries with a high GDP per capita are those that possess and intensively exploit large natural resources and/or mass-produce and export products based on corresponding patented inventions. Without great natural wealth and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from the top experience in the country, some of it patented, new and competitive technologies and patentable, exportable products can be developed, based on exact calculations of developed areas, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.

  11. Plasma density calculation based on the HCN waveform data

    International Nuclear Information System (INIS)

    Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao

    2004-01-01

    A method to improve the plasma density calculation is introduced, using the base voltage and the phase zero points obtained from the HCN interferometer waveform data. The method includes improving the signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply, excluding the effect of noise according to the possible rate of change of the signal's phase, and making the base voltage more accurate by dynamic data processing. (authors)
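
    The underlying relation between the interferometer phase and the density is the standard plasma-interferometry result (not spelled out in the abstract):

```latex
\Delta\varphi \;=\; \frac{e^2\,\lambda}{4\pi\,c^2\,\varepsilon_0\,m_e}\int n_e\,dl,
```

    so each fringe (phase zero crossing) of the HCN waveform corresponds to a fixed increment of line-integrated electron density; an accurate base voltage and clean zero-point detection therefore translate directly into an accurate line-averaged density.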

  12. Modeling and Calculation of Dent Based on Pipeline Bending Strain

    Directory of Open Access Journals (Sweden)

    Qingshan Feng

    2016-01-01

    Full Text Available The bending strain of long-distance oil and gas pipelines can be calculated from data collected by an in-line inspection tool that uses an inertial measurement unit (IMU). The bending strain is used to evaluate the strain and displacement of the pipeline. During a bending strain inspection, dents in the pipeline affect the bending strain data as well. This paper presents a novel method to model and calculate pipeline dents based on the bending strain. The technique takes inertial mapping data from in-line inspection and calculates the depth of a dent in the pipeline using Bayesian statistical theory and a neural network. To verify the accuracy of the proposed method, an in-line inspection tool was used to inspect a pipeline and gather data. The calculation shows that the method is accurate for dents, with a mean relative error of 2.44%. The new method provides not only the strain at the pipeline dent but also the dent depth, benefiting pipeline integrity management and safety.
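
    The bending-strain relation underlying the inspection is ε = κD/2: surface strain equals curvature times half the outer diameter. A minimal sketch, assuming the curvature profile has already been derived from the IMU data (the Bayesian/neural-network dent-depth model itself is not reproduced):

    ```python
    import numpy as np

    def bending_strain(curvature, outer_diameter):
        """Surface bending strain: eps = kappa * D / 2."""
        return curvature * outer_diameter / 2.0

    # hypothetical IMU-derived curvature profile along the pipeline, in 1/m
    curvature = np.array([1.2e-4, 3.5e-4, 9.8e-4, 3.1e-4, 1.0e-4])
    strain = bending_strain(curvature, outer_diameter=1.016)  # 40-inch pipe
    print(strain)   # a local spike in strain flags a candidate dent location
    ```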

  13. The internal radiation dose calculations based on Chinese mathematical phantom

    International Nuclear Information System (INIS)

    Wang Haiyan; Li Junli; Cheng Jianping; Fan Jiajin

    2006-01-01

    Internal radiation dose calculations built on Chinese data become more and more important with the development of nuclear medicine. The MIRD method, developed and refined by the Society of Nuclear Medicine (America), is based on European and American mathematical phantoms and does not fit the Chinese population well. The transport of γ-rays in the Chinese mathematical phantom was simulated with the Monte Carlo method in programs such as MCNP4C, the specific absorbed fractions (Φ) for the Chinese phantom were calculated, and a Chinese Φ database was created. The results were compared with the values recommended by ORNL. The method was shown to be correct by the agreement obtained when the target organ was the same as the source organ; otherwise, the differences were due to the different phantoms and the choice of different physical models. (authors)
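
    In the MIRD scheme referenced here, the mean dose to a target organ is the cumulated activity in each source organ weighted by an S value built from the specific absorbed fractions Φ. A minimal sketch with hypothetical numbers (real S values would be derived from the phantom-specific Φ database described):

    ```python
    # D(target) = sum over source organs of A_tilde(source) * S(target <- source)
    def organ_dose(cumulated_activity_bq_s, s_values_gy_per_bq_s):
        return sum(a * s_values_gy_per_bq_s[organ]
                   for organ, a in cumulated_activity_bq_s.items())

    a_tilde = {"liver": 4.2e9, "kidneys": 1.1e9}          # Bq·s, hypothetical
    s_to_marrow = {"liver": 1.6e-16, "kidneys": 2.9e-16}  # Gy/(Bq·s), hypothetical
    print(f"red-marrow dose: {organ_dose(a_tilde, s_to_marrow):.3e} Gy")
    ```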

  14. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...

  15. Determining dose rate with a semiconductor detector - Monte Carlo calculations of the detector response

    Energy Technology Data Exchange (ETDEWEB)

    Nordenfors, C

    1999-02-01

    To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector. The detector is normally used in the in-situ measurements that are carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center; it is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy is comparable to that of other methods already in use to measure dose rate. Bearing in mind that this method provides nuclide-specific doses, it is useful in radiation protection, where knowing the relations between different nuclides and how they change is very important when estimating risks.
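
    Once the response matrix R is known, unfolding means solving R·φ ≈ m for the incident spectrum φ given the measured pulse-height spectrum m. A minimal non-negative least-squares sketch with synthetic data (the report's own unfolding algorithm may differ):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_channels, n_bins = 64, 16

    # Hypothetical response matrix: column j is the pulse-height spectrum
    # produced by unit fluence in energy bin j (in the report it comes from EGS4).
    R = rng.uniform(0.0, 1.0, (n_channels, n_bins))
    true_flux = rng.uniform(0.0, 5.0, n_bins)
    measured = R @ true_flux + rng.normal(0.0, 0.5, n_channels)  # noisy spectrum

    # Unfold: non-negative least squares keeps the fluence physical (phi >= 0).
    unfolded, _ = nnls(R, measured)
    print(unfolded.round(2))
    ```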

  16. Online detector response calculations for high-resolution PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Pratx, Guillem [Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Levin, Craig, E-mail: cslevin@stanford.edu [Departments of Radiology, Physics and Electrical Engineering, and Molecular Imaging Program at Stanford, Stanford University, Stanford, CA 94305 (United States)

    2011-07-07

    Positron emission tomography systems are best described by a linear shift-varying model. However, image reconstruction often assumes simplified shift-invariant models, to the detriment of image quality and quantitative accuracy. We investigated a shift-varying model of the geometrical system response based on an analytical formulation. The model was incorporated within a list-mode, fully 3D iterative reconstruction process in which the system response coefficients are calculated online on a graphics processing unit (GPU). The implementation requires less than 512 MB of GPU memory and can process two million events per minute (forward and backprojection). For small detector volume elements, the analytical model compared well to reference calculations. Images reconstructed with the shift-varying model achieved higher quality and quantitative accuracy than those that used a simpler shift-invariant model. For an 8 mm sphere in a warm background, the contrast recovery was 95.8% for the shift-varying model versus 85.9% for the shift-invariant model. In addition, the spatial resolution was more uniform across the field-of-view: for an array of 1.75 mm hot spheres in air, the variation in reconstructed sphere size was 0.5 mm RMS for the shift-invariant model, compared to 0.07 mm RMS for the shift-varying model.

  17. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurements with an ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
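
    The gamma test used for the equivalency evaluation combines a dose-difference criterion with a distance-to-agreement criterion; a voxel passes when Γ ≤ 1. A minimal 1D sketch with 3%/3 mm criteria and synthetic profiles (commercial implementations are more elaborate):

    ```python
    import numpy as np

    def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_tol=0.03, dta_mm=3.0):
        """Gamma index per reference point over a 1D dose profile.
        The dose criterion is relative to the reference maximum (global norm)."""
        x = np.arange(len(ref_dose)) * spacing_mm
        d_norm = dose_tol * ref_dose.max()
        gamma = np.empty(len(ref_dose))
        for i, (xi, di) in enumerate(zip(x, ref_dose)):
            dist2 = ((x - xi) / dta_mm) ** 2
            dose2 = ((eval_dose - di) / d_norm) ** 2
            gamma[i] = np.sqrt((dist2 + dose2).min())
        return gamma

    ref = np.exp(-((np.arange(100) - 50.0) / 15.0) ** 2)  # synthetic profile
    ev = np.roll(ref, 1) * 1.01                           # shifted, rescaled copy
    g = gamma_1d(ref, ev, spacing_mm=1.0)
    print(f"pass rate: {100.0 * (g <= 1.0).mean():.1f}%")
    ```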

  18. Metric for Calculation of System Complexity based on its Connections

    Directory of Open Access Journals (Sweden)

    João Ricardo Braga de Paiva

    2017-02-01

    Full Text Available This paper proposes a methodology based on system connections to calculate system complexity. Two case studies are presented: the dining philosophers problem and a distribution center. Both are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results show (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of system performance, and (iii) the output sensitivity to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.

  19. Design of software for shielding calculations in radiodiagnostics based on various standards

    International Nuclear Information System (INIS)

    Falero, B.; Bueno, P.; Chaves, M. A.; Ordiales, J. M.; Villafana, O.; Gonzalez, M. J.

    2013-01-01

    The aim of this study was to develop a software application that performs shielding calculations for radiology rooms depending on the type of equipment. The calculation is done with the method selected by the user: the method proposed in Guide 5.11, in Reports 144 and 147, or the methodology given by the Portuguese Health Ministry. (Author)

  20. Jet identification based on probability calculations using Bayes' theorem

    International Nuclear Information System (INIS)

    Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.

    1994-11-01

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. For the fragmentation-based identification a neural network was used, and the dependence on the jet production process and the fragmentation model was tested. Instead of working with the separation variables directly, these were used to calculate the probability of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
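
    The Bayesian step described is simply the posterior probability of a jet flavour given its separation variables. A minimal two-class sketch with hypothetical likelihood values:

    ```python
    def jet_posterior(likelihood_quark, likelihood_gluon, prior_quark=0.5):
        """P(quark | x) via Bayes' theorem for a two-class (quark/gluon) problem."""
        prior_gluon = 1.0 - prior_quark
        num = likelihood_quark * prior_quark
        den = num + likelihood_gluon * prior_gluon
        return num / den

    # hypothetical likelihood values p(x|quark) and p(x|gluon) for one jet,
    # e.g. from a neural network trained on fragmentation variables
    print(jet_posterior(0.8, 0.3, prior_quark=0.4))
    ```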

  1. Semi-classical calculation of the spin-isospin response functions

    International Nuclear Information System (INIS)

    Chanfray, G.

    1987-03-01

    We present a semi-classical calculation of the nuclear response functions beyond the Thomas-Fermi approximation. We apply our formalism to the spin-isospin responses and show that the surface-peaked ℏ corrections considerably decrease the longitudinal/transverse ratio obtained through hadronic probes.

  2. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    DEFF Research Database (Denmark)

    Rinker, Jennifer M.

    2016-01-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project...

  3. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Andersoon, K.

    1987-01-01

    Compilation of a thermodynamic data base of actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested with a version of the PHREEQE code run on an IBM XT computer. The work so far has shown that the PHREEQE code mostly gives satisfactory results for speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and the surrounding geosphere, more data are needed, and it is necessary to evaluate the influence of the large uncertainties in some data. Quality assurance and a consistency check of the data base are also needed. Further work with the data base should include: an extension to fission products; an extension to engineering materials; an extension to ligands other than hydroxide and carbonate; inclusion of more mineral phases; inclusion of enthalpy data; a check of primary references to decide whether values from different compilations are taken from the same primary reference; and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and the IAEA. (author)

  4. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    International Nuclear Information System (INIS)

    Rinker, Jennifer M.

    2016-01-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of the convergence. The Sobol SIs are calculated using the calibrated response surface, and the convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity. (paper)
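
    As a concrete illustration of the pipeline — calibrate a polynomial response surface on simulation input/output data, then estimate total Sobol indices cheaply from the surrogate — here is a minimal two-parameter sketch (the paper uses four turbulence parameters and its own surface; the model and ranges below are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulation(u, ti):
        """Stand-in for the aeroelastic load calculation (hypothetical)."""
        return 3.0 * u**2 + 10.0 * ti + 0.5 * u * ti

    def basis(u, ti):
        """Quadratic polynomial basis for the response surface."""
        return np.column_stack([np.ones_like(u), u, ti, u**2, ti**2, u * ti])

    # --- calibrate the response surface on a small set of simulation runs ---
    u = rng.uniform(5.0, 25.0, 200)          # reference mean wind speed, m/s
    ti = rng.uniform(0.05, 0.25, 200)        # reference turbulence intensity
    coef, *_ = np.linalg.lstsq(basis(u, ti), simulation(u, ti), rcond=None)
    surrogate = lambda u, ti: basis(u, ti) @ coef

    # --- total Sobol indices via Jansen's estimator on the cheap surrogate ---
    n = 100_000
    a = np.column_stack([rng.uniform(5, 25, n), rng.uniform(0.05, 0.25, n)])
    b = np.column_stack([rng.uniform(5, 25, n), rng.uniform(0.05, 0.25, n)])
    fa = surrogate(a[:, 0], a[:, 1])
    for i, name in enumerate(["wind speed", "turbulence intensity"]):
        ab = a.copy()
        ab[:, i] = b[:, i]                   # resample only input i
        st = 0.5 * np.mean((fa - surrogate(ab[:, 0], ab[:, 1])) ** 2) / fa.var()
        print(f"total Sobol index, {name}: {st:.3f}")
    ```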

  5. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage-data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during production. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, which produces both gas and water, the gas-water production ratio is introduced to eliminate the effects of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. By derivation, the relationship between permeability and cumulative surface liquid production can be described as a single-variable cubic equation. The trend shows that permeability initially declines and then increases, based on ten wells in the southern Qinshui coalbed methane field. The results show an exponential relationship between permeability and cumulative water production; the relationship between permeability and cumulative gas production is linear, and that between permeability and cumulative surface liquid production is a cubic polynomial. The regression result for permeability and cumulative surface liquid production agrees with the theoretical mathematical relationship. (paper)

  6. Nonperturbative non-Markovian quantum master equation: Validity and limitation to calculate nonlinear response functions

    Science.gov (United States)

    Ishizaki, Akihito; Tanimura, Yoshitaka

    2008-05-01

    Based on the influence functional formalism, we have derived a nonperturbative equation of motion for a reduced system coupled to a harmonic bath with colored noise in which the system-bath coupling operator does not necessarily commute with the system Hamiltonian. The resultant expression coincides with the time-convolutionless quantum master equation derived from the second-order perturbative approximation, which is also equivalent to a generalized Redfield equation. This agreement occurs because, in the nonperturbative case, the relaxation operators arise from the higher-order system-bath interaction that can be incorporated into the reduced density matrix as the influence operator; while the second-order interaction remains as a relaxation operator in the equation of motion. While the equation describes the exact dynamics of the density matrix beyond weak system-bath interactions, it does not have the capability to calculate nonlinear response functions appropriately. This is because the equation cannot describe memory effects which straddle the external system interactions due to the reduced description of the bath. To illustrate this point, we have calculated the third-order two-dimensional (2D) spectra for a two-level system from the present approach and the hierarchically coupled equations approach that can handle quantal system-bath coherence thanks to its hierarchical formalism. The numerical demonstration clearly indicates the lack of the system-bath correlation in the present formalism as fast dephasing profiles of the 2D spectra.

  7. Linear response calculation using the canonical-basis TDHFB with a schematic pairing functional

    International Nuclear Information System (INIS)

    Ebata, Shuichiro; Nakatsukasa, Takashi; Yabana, Kazuhiro

    2011-01-01

    A canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory is obtained with an approximation that the pair potential is assumed to be diagonal in the time-dependent canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even nuclei. E1 strength distributions for proton-rich Mg isotopes are systematically calculated. The calculation suggests strong Landau damping of giant dipole resonance for drip-line nuclei.

  8. Volume-based geometric modeling for radiation transport calculations

    International Nuclear Information System (INIS)

    Li, Z.; Williamson, J.F.

    1992-01-01

    Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for the generation of basic dose calculation data that are inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which describe complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as excluding common boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of the boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes while accommodating shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes upon Siddon's approach. Two 2-D graphic display tools were developed to help debug a given geometric model. In this paper, the mathematical basis of our system is described and contrasted with other approaches, and examples are discussed.
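
    The set-theoretic region description can be illustrated by a simple point-classification scheme in which each primitive reports inside/outside and composite regions combine the answers. A minimal sketch (the paper's FORTRAN routines additionally handle shared boundaries and ray tracing):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Sphere:
        cx: float; cy: float; cz: float; r: float
        def contains(self, p):
            x, y, z = p
            return (x - self.cx)**2 + (y - self.cy)**2 + (z - self.cz)**2 <= self.r**2

    class Union:
        def __init__(self, a, b): self.a, self.b = a, b
        def contains(self, p): return self.a.contains(p) or self.b.contains(p)

    class Difference:
        def __init__(self, a, b): self.a, self.b = a, b
        def contains(self, p): return self.a.contains(p) and not self.b.contains(p)

    # a spherical shell: big sphere minus small sphere
    shell = Difference(Sphere(0, 0, 0, 2.0), Sphere(0, 0, 0, 1.0))
    print(shell.contains((1.5, 0, 0)), shell.contains((0.5, 0, 0)))  # True False
    ```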

  9. Calculation and applications of the frequency dependent neutron detector response functions

    International Nuclear Information System (INIS)

    Van Dam, H.; Van Hagen, T.H.J.J. der; Hoogenboom, J.E.; Keijzer, J.

    1994-01-01

    The theoretical basis is presented for the evaluation of the frequency dependent function that enables calculation of the response of a neutron detector to parametric fluctuations ('noise') or oscillations in a reactor core. This function describes the 'field of view' of a detector and can be calculated with a static transport code under certain conditions, which are discussed. Two applications are presented: the response of an ex-core detector to void fraction fluctuations in a BWR, and the response of both in-core and ex-core detectors to a rotating neutron absorber near or inside a research reactor core. (authors). 7 refs., 4 figs

  10. Base response arising from free-field motions

    International Nuclear Information System (INIS)

    Whitley, J.R.; Morgan, J.R.; Hall, W.J.; Newmark, N.M.

    1977-01-01

    A procedure is illustrated in this paper for deriving (estimating) from a free-field record the horizontal base motions of a building, including horizontal rotation and translation. More specifically, the goal was to compare the results of response calculations based on derived accelerations with the results of calculations based on recorded accelerations. The motions are determined by assuming that an actual recorded ground wave transits a rigid base of a given dimension. The calculations given in the paper were made using the earthquake acceleration time histories of the Hollywood storage building and the adjacent P.E. lot for the Kern County (1952) and San Fernando (1971) earthquakes. (Auth.)

  11. NFAP calculation of the response of a 1/6 scale reinforced concrete containment model

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1989-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  12. NFAP calculation of pressure response of 1/6th scale model containment structure

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1988-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  13. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method based on the MOC code MCCG3D has been developed. Sensitivity calculations of fission intensity for the international PWR benchmark are performed. (authors)

  14. Calculation of Multisphere Neutron Spectrometer Response Functions in Energy Range up to 20 MeV

    CERN Document Server

    Martinkovic, J

    2005-01-01

    The multisphere neutron spectrometer is a basic instrument for neutron measurements in the scattered radiation fields at charged-particle accelerators for radiation protection and dosimetry purposes. Precise calculation of the spectrometer response functions is a necessary condition for proper neutron spectrum unfolding. Results are given of the response function calculations for the JINR spectrometer with a LiI(Eu) detector (a set of 6 homogeneous and 1 heterogeneous moderators, plus the "bare" detector with and without a cadmium cover) for two irradiation geometries - uniform monodirectional and uniform isotropic neutron fields. The calculation was carried out with the MCNP code in the neutron energy range 10^-8 to 20 MeV.

  15. Time delays between core power production and external detector response from Monte Carlo calculations

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1996-01-01

    One primary concern for design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between the core power production and the external detector response using Monte Carlo calculations and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between the core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor

  16. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, a combined response surface and importance sampling method was applied to calculate the parameter failure probability of a thermodynamic system. A mathematical model of parameter failure of the physical process in the thermodynamic system was presented, from which the combined response surface and importance sampling algorithm was established; the performance degradation model of the components and the simulation process for parameter failure in the physical process were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)
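
    The efficiency gain of importance sampling comes from drawing samples near the failure region and reweighting by the likelihood ratio p(x)/q(x). A minimal sketch for a scalar limit state (the thermodynamic performance-degradation models themselves are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def failure(x):
        """Hypothetical limit state: fail when x > 4.5."""
        return x > 4.5

    # nominal distribution N(0,1); importance density N(4.5,1) shifted toward failure
    n = 20_000
    x = rng.normal(4.5, 1.0, n)
    weights = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - 4.5)**2)  # p(x)/q(x)
    p_fail = np.mean(failure(x) * weights)
    print(f"P_fail ~ {p_fail:.2e} (exact tail probability for N(0,1): 3.4e-06)")
    ```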

  17. Comparison of calculated and measured spectral response and intrinsic efficiency for a boron-loaded plastic neutron detector

    Energy Technology Data Exchange (ETDEWEB)

    Kamykowski, E.A. (Grumman Corporate Research Center, Bethpage, NY (United States))

    1992-07-15

    Boron-loaded scintillators offer the potential for neutron spectrometers with a simplified, peak-shaped response. The Monte Carlo code MCNP has been used to calculate the detector characteristics of a scintillator made of a boron-loaded plastic, BC454, for neutrons between 1 and 7 MeV. Comparisons with measurements are made of the spectral response for neutron energies between 4 and 6 MeV and of intrinsic efficiencies for neutrons up to 7 MeV. In order to compare the calculated spectra with measured data, enhancements to MCNP were introduced to generate tallies of light-output spectra for recoil events terminating in a final capture by ¹⁰B. The comparison of measured and calculated spectra shows agreement in response shape, full width at half maximum, and recoil energy deposition. Intrinsic efficiencies measured up to 7 MeV are also in agreement with the MCNP calculations. These results validate the code predictions and affirm the value of MCNP as a useful tool for the development of sensor concepts based on boron-loaded plastics. (orig.)

  18. Knowledge-based dynamic network safety calculations. Wissensbasierte dynamische Netzsicherheitsberechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Kulicke, B [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany); Schlegel, S [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany)

    1993-06-28

    An important part of network operation management is the assessment and maintenance of the security of supply. So far, control personnel have been supported only by static network analyses and safety calculations. The authors describe an expert system for dynamic network safety calculations, coupled to a transputer-based real-time simulation program. They also introduce the system concept and the most important functions of the expert system. (orig.)

  19. Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations

    OpenAIRE

    Boyd, Steven J

    2006-01-01

    Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For an HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...

  20. Calculation of seismic response of a flexible rotor by complex modal method, 1

    International Nuclear Information System (INIS)

    Azuma, Takao; Saito, Shinobu

    1984-01-01

    In rotating machines, problems during earthquakes include whether the rotating and stationary parts touch and whether the bearings and seals are damaged. In order to examine these problems, it is necessary to analyze the seismic response of the rotor shaft, and sometimes of the casing system, but conventional analysis methods are unsatisfactory. For a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in the book by Lancaster; however, when the method is applied to the seismic response of rotor shafts, the calculation time differs considerably according to the method of final integration. In this study, good results were obtained with an approach that did not depend on numerical integration. The equation of motion and its solution, the displacement vector of the foundation, the verification of the calculation program, and an example of calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)

  1. Application of γ field theory based calculation method to the monitoring of mine nuclear radiation environment

    International Nuclear Information System (INIS)

    Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu

    2009-01-01

    In order to assess the feasibility of calculating mine radiation doses based on γ field theory, this paper calculates the γ radiation dose in a mine by means of a γ field theory based calculation method. The results show that the calculated radiation dose has a small error and can be used to monitor the mine's nuclear radiation environment. (authors)

  2. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results with accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k_eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.

  3. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing radionuclide transfer to man following a chronic gaseous release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we consider 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte-Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of the important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computational cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the total Sobol sensitivity indices by the Monte-Carlo method. We demonstrate the application of this method to one study site and one reference group near the Cadarache nuclear research centre (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the goat's-milk food chain, in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's-milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity, and the transfer factor to the goat's milk.

  4. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine

    International Nuclear Information System (INIS)

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer program was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by one planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose of up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing

  5. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of accurate head models and extending to the integration of the models in the numerical calculations. These problems have substantially limited a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized head models, and demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  6. Application of CFD based wave loads in aeroelastic calculations

    DEFF Research Database (Denmark)

    Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik

    2014-01-01

    Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver OceanWave3D and in a coupled domain-decomposed potential-flow CFD solver. The surface elevations of the calculated wave realizations are compared, and the loads from the coupled domain-decomposed potential-flow CFD solver result in different dynamic forces in the tower and monopile, despite the static forces on a fixed monopile being similar. The changes are due to differences in the force profiles and wave steepness in the two solvers. The results indicate that an accurate...

  7. Calculation of Excore Detector Responses upon Control Rods Movement in PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Pham Nhu Viet; Lee, Min Jae; Kang, Chang Moo; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The Prototype Generation-IV Sodium-cooled Fast Reactor (PGSFR) safety design concept, which aims at achieving the IAEA's safety objectives and GIF's safety goals for Generation-IV reactor systems, is mainly focused on defense in depth for accident detection, prevention, control, mitigation and termination. In practice, excore neutron detectors are widely used to determine the spatial power distribution and power level in a nuclear reactor core. Based on the excore detector signals, the reactor control and protection systems infer the corresponding core power and then provide appropriate actions for safe and reliable reactor operation. To this end, robust reactor power monitoring, control and core protection systems are indispensable to prevent accidents and to reduce their detrimental effects should one occur. To design such power monitoring and control systems, numerical investigation of excore neutron detector responses upon various changes in the core power level/distribution and reactor conditions is required in advance. In this study, a numerical analysis of excore neutron detector responses (DRs) upon control rod (CR) movement in the PGSFR was carried out. The objective is to examine the sensitivity of excore neutron detectors to the core power change induced by moving CRs and thereby recommend appropriate locations for excore neutron detectors in the design of the PGSFR power monitoring systems. Section 2 describes the PGSFR core model and calculation method, as well as the numerical results for the excore detector spatial weighting functions, core power changes and detector responses for various scenarios of CR movement in the PGSFR. The top detector is conservatively safe because it overestimated the core power level. However, the lower and bottom detectors still functioned well in this case because they exhibited a minor underestimation of core power of less than ∼0.5%. As a secondary CR was dropped into the core, the lower detector was
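
    Numerically, an excore detector response is the nodal power distribution folded with the detector's spatial weighting function, DR = Σᵢ wᵢ·Pᵢ. A minimal sketch with hypothetical axial shapes and weights, showing how detectors at different elevations indicate different power changes for the same rod movement:

    ```python
    import numpy as np

    z = np.linspace(0.0, 1.0, 20)                 # normalized core height (20 nodes)
    power_ref = np.sin(np.pi * z + np.pi / 40.0)  # hypothetical reference axial shape
    power_cr = power_ref * (1.0 - 0.3 * z)        # rods inserted from the top

    # hypothetical axial weighting functions for detectors at different elevations
    detectors = {"top": np.exp(-5.0 * (1.0 - z)), "bottom": np.exp(-5.0 * z)}

    true_change = power_cr.sum() / power_ref.sum() - 1.0
    print(f"true power change: {100.0 * true_change:+.1f}%")
    for name, w in detectors.items():
        indicated = (w * power_cr).sum() / (w * power_ref).sum() - 1.0
        print(f"{name} detector indicates: {100.0 * indicated:+.1f}%")
    ```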

  8. Calculation of foundation response to spatially varying ground motion by finite element method

    International Nuclear Information System (INIS)

    Wang, F.; Gantenbein, F.

    1995-01-01

    This paper presents a general method to compute the response of a rigid foundation of arbitrary shape resting on a homogeneous or multilayered elastic soil when subjected to a spatially varying ground motion. The foundation response is calculated from the free-field ground motion and the contact tractions between the foundation and the soil. The spatial variation of ground motion in this study is introduced by a coherence function and the contact tractions are obtained numerically using the Finite Element Method in the process of calculating the dynamic compliance of the foundation. Applications of this method to a massless rigid disc supported on an elastic half space and to that founded on an elastic medium consisting of a layer of constant thickness supported on an elastic half space are described. The numerical results obtained are in very good agreement with analytical solutions published in the literature. (authors). 5 refs., 8 figs

  9. Inverse boundary element calculations based on structural modes

    DEFF Research Database (Denmark)

    Juhl, Peter Møller

    2007-01-01

    The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...

  10. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy-ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. The experimental results nevertheless allow an empirical rule to be defined for identifying the transient currents that are responsible for an SEU. Then, the SEU cross section can be calculated for any kind of particle and any energy with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  11. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming to perform reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to burnup problems on coarse meshes, where the burnup varies within a single coarse mesh, making the cross sections vary spatially with the evolution. (E.G.)

  12. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    Science.gov (United States)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures in an enhanced version of the system are also discussed.

  13. Improved response function calculations for scintillation detectors using an extended version of the MCNP code

    CERN Document Server

    Schweda, K

    2002-01-01

    The analysis of (e,e'n) experiments at the Darmstadt superconducting electron linear accelerator S-DALINAC required the calculation of neutron response functions for the NE213 liquid scintillation detectors used. In an open geometry, these response functions can be obtained using the Monte Carlo codes NRESP7 and NEFF7. However, for more complex geometries, an extended version of the Monte Carlo code MCNP exists. This extended version of the MCNP code was improved upon by adding individual light-output functions for charged particles. In addition, more than one volume can be defined as a scintillator, thus allowing the simultaneous calculation of the response for multiple detector setups. With the implementation of ¹²C(n,n'3α) reactions, all relevant reactions for neutron energies E_n < 20 MeV are now taken into consideration. The results of these calculations were compared to experimental data using monoenergetic neutrons in an open geometry and a ²⁵²Cf neutron source in th...

  14. Calculation laboratory: game based learning in exact discipline

    Directory of Open Access Journals (Sweden)

    André Felipe de Almeida Xavier

    2017-12-01

    Full Text Available The Calculation Laboratory arose from the need to give meaning to the learning of students entering Engineering courses, in the Differential Calculus discipline, in semester 1/2016. After good results were obtained, the activity was extended to the Analytical Geometry and Linear Algebra (GAAL) and Integral Calculus classes, so that these incoming students could continue the process. Historically, students have some difficulty with these contents, and it is necessary to give meaning to their learning. Given this situation, the Calculation Laboratory aims to give meaning to the contents covered, granting students autonomy, with the teacher acting as a tutor, an intermediary between the student and the knowledge, and creating various practical, playful and innovative activities to assist in this process. This article reports on the activities created to facilitate the running of the Calculation Laboratory, in addition to presenting the results obtained and measured after its application. Through these proposed activities, it can be seen that students gradually gain autonomy in the search for knowledge.

  15. Calculation of crack stress density of cement-based materials

    Directory of Open Access Journals (Sweden)

    Chun-e Sui

    2018-01-01

    Full Text Available In this paper, the fracture loads of cement pastes with different water-cement ratios and different mineral admixtures, including fly ash, silica fume and slag, are obtained through experiments. The three-dimensional fracture surface is reconstructed and its three-dimensional effective area is calculated, from which the effective fracture stress density of the different cement pastes is obtained. The results show that a polynomial function can accurately describe the relationship between the three-dimensional total area and the tensile strength.

  16. Freeway travel speed calculation model based on ETC transaction data.

    Science.gov (United States)

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    Real-time traffic flow conditions on freeways are gradually becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzes the structure of ETC transaction data and presents the data preprocessing procedure. Then, a dual-level travel speed calculation model is established for different sample sizes. In order to ensure a sufficient sample size, ETC data from enter-leave toll plaza pairs spanning more than one road segment are used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speeds are introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrate that the average relative error was about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for improving freeway operation monitoring and freeway management, as well as for providing useful information for freeway travelers.
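
    The basic estimator behind the model is segment length divided by the enter-leave timestamp difference, averaged over sample vehicles. A minimal sketch (the single correction factor below merely stands in for the paper's reduction coefficient α and reliability weight θ; its value is hypothetical):

    ```python
    from datetime import datetime

    def segment_speed_kmh(records, segment_km, alpha=0.95):
        """Mean travel speed from ETC enter/leave timestamp pairs.
        alpha stands in for the paper's correction factors (hypothetical value)."""
        speeds = []
        for enter, leave in records:
            hours = (leave - enter).total_seconds() / 3600.0
            if hours > 0:
                speeds.append(segment_km / hours)
        return alpha * sum(speeds) / len(speeds)

    trips = [(datetime(2014, 5, 1, 8, 0, 0), datetime(2014, 5, 1, 8, 21, 30)),
             (datetime(2014, 5, 1, 8, 2, 0), datetime(2014, 5, 1, 8, 24, 10))]
    print(f"{segment_speed_kmh(trips, segment_km=35.0):.1f} km/h")
    ```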

  17. Response matrix calculation of a Bonner Sphere Spectrometer using ENDF/B-VII libraries

    Energy Technology Data Exchange (ETDEWEB)

    Morató, Sergio; Juste, Belén; Miró, Rafael; Verdú, Gumersindo [Instituto de Seguridad Industrial, Radiofísica y Medioambiental (ISIRYM), Universitat Politècnica de València (Spain); Guardia, Vicent, E-mail: bejusvi@iqn.upv.es [GD Energy Services, Valencia (Spain). Grupo dominguis

    2017-07-01

    The present work is focused on the reconstruction of a neutron spectrum using a multisphere spectrometer, also called a Bonner Sphere System (BSS). For that, the determination of the detector response curves is necessary; therefore we have obtained the response matrix of a neutron detector by Monte Carlo (MC) simulation with MCNP6, where the use of unstructured mesh geometries is introduced as a novelty. The aim of these curves was to study the theoretical response of a widespread neutron spectrometer exposed to neutron radiation. The neutron detector device used in this work is a multisphere spectrometer (BSS) with 6 high-density polyethylene spheres of different diameters. The BSS consists of a set of 0.95 g/cm³ high-density polyethylene spheres. The detector is a ⁶LiI cylindrical scintillator crystal of 4 mm × 4 mm (LUDLUM Model 42) coupled to a photomultiplier tube. Thermal scattering tables are required to include the polyethylene cross sections in the simulation; these data are essential to obtain correct and accurate results in problems involving neutron thermalization. The currently available literature presents response matrices calculated with ENDF/B-V cross-section libraries (V. Mares et al 1993) or with ENDF/B-VI (R. Vega Carrillo et al 2007). This work introduces two novelties in calculating the response matrix: on the one hand, the use of unstructured meshes to model the geometry of the detector and the Bonner spheres, and on the other hand, the use of the updated ENDF/B-VII cross-section libraries. A set of simulations was performed to obtain the detector response matrix: 29 monoenergetic neutron beams between 10 keV and 20 MeV were used as sources for each moderator sphere, for a total of 174 simulations. Each monoenergetic source was defined with the same diameter as the moderating sphere used in its corresponding simulation, and the spheres were uniformly irradiated from the top of the photomultiplier tube. Some

  18. Particle-hole calculation of the longitudinal response function of 12C

    International Nuclear Information System (INIS)

    Dellafiore, A.; Lenz, F.; Brieva, F.A.

    1985-01-01

    The longitudinal response function of ¹²C in the range of momentum transfers 200 MeV/c ≤ q ≤ 550 MeV/c is calculated in the Tamm-Dancoff approximation. The particle-hole Green's function is evaluated by means of a doorway-state expansion. This method allows us to take into account finite-range residual interactions in the continuum, including exchange processes. At low momentum transfers, the calculations agree qualitatively with the data. The data cannot be reproduced at momentum transfers around 450 MeV/c. This discrepancy can be accounted for neither by uncertainties in the residual interaction, nor by more complicated processes in the nuclear final states.

  19. Many-body calculations with deuteron based single-particle bases and their associated natural orbits

    Science.gov (United States)

    Puddu, G.

    2018-06-01

    We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for 10B, 16O and 24Mg, employing the bare NNLOopt nucleon-nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
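
    Generically, natural orbits are obtained by diagonalizing the one-body density matrix from a preliminary many-body calculation; a minimal sketch with a random stand-in density matrix follows (the actual matrix would come from the deuteron-based calculation).

        import numpy as np

        def natural_orbits(rho):
            """Diagonalize a Hermitian one-body density matrix rho and return
            occupation numbers and natural-orbital coefficients, sorted by
            occupation (largest first)."""
            occ, C = np.linalg.eigh(rho)
            order = np.argsort(occ)[::-1]
            return occ[order], C[:, order]

        # Stand-in: a small symmetric matrix, not from a real calculation.
        rng = np.random.default_rng(1)
        A = rng.random((8, 8))
        rho = (A + A.T) / 2.0
        occ, C = natural_orbits(rho)
        print(occ[:3])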

  20. Gender Responsive Community Based Planning and Budgeting ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Responsive Community Based Planning and Budgeting Tool for Local Governance ... in data collection, and another module that facilitates gender responsive and ... In partnership with UNESCO's Organization for Women in Science for the ...

  1. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  2. QED Based Calculation of the Fine Structure Constant

    Energy Technology Data Exchange (ETDEWEB)

    Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-13

    Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics, and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ{sup 2}. This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.
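
    For orientation, the leading QED result that the quoted ~0.00116 echoes is Schwinger's term a_e = α/2π; a one-line numerical check using standard constants (not the paper's near-field model):

        import math

        alpha = 1 / 137.035999            # fine structure constant (standard value)
        a_e = alpha / (2 * math.pi)       # Schwinger's leading-order anomalous moment
        print(f"a_e = {a_e:.5f}")         # 0.00116, matching the quoted magnitude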

  3. Applying the universal neutron transport codes to the calculation of well-logging probe response at different rock porosities

    International Nuclear Information System (INIS)

    Bogacz, J.; Loskiewicz, J.; Zazula, J.M.

    1991-01-01

    The use of universal neutron transport codes to calculate the parameters of well-logging probes is a new approach, first tried in the U.S.A. and the U.K. in the eighties. This paper describes the first such attempt in Poland. The work is based on the MORSE code developed at Oak Ridge National Laboratory in the U.S.A. Using the CG MORSE code we calculated the neutron detector response when surrounded with sandstone of porosities 19% and 38%. During the work it became clear that different methods of estimating the neutron flux had to be investigated. The stochastic estimation method used in the original MORSE code (next-collision approximation) cannot be used because of the slow convergence of its variance. Using an analog type of estimation (calculating the sum of track lengths inside the detector) we obtained results of acceptable variance (∼20%) for source-detector spacings smaller than 40 cm. The influence of porosity on detector response is correctly described for a detector positioned at 27 cm from the source. At the moment the variances are quite large. (author). 33 refs, 8 figs, 8 tabs
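
    A sketch of the track-length flux estimator the authors adopted: each history contributes (weight × path length inside the detector) / detector volume, which converges better than next-collision estimators in small detectors. Toy numbers only; this is not the MORSE implementation.

        # Toy track-length tally: flux = sum(w_i * l_i) / V over particle
        # histories, where l_i is the chord length of history i inside the
        # detector volume V (cm^3).
        def track_length_flux(histories, detector_volume):
            """histories: iterable of (weight, track_length_in_detector_cm) pairs."""
            return sum(w * l for w, l in histories) / detector_volume

        tracks = [(1.0, 2.3), (0.8, 0.0), (1.0, 4.1)]   # hypothetical histories
        print(track_length_flux(tracks, detector_volume=50.0))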

  4. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192{sup 3}, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
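
    A minimal illustration of the storage payoff of low-rank 3-D representations, using a plain truncated HOSVD (the paper's solver uses more elaborate tensor formats): a rank-r Tucker form stores 3nr + r^3 numbers instead of n^3.

        import numpy as np

        def hosvd_truncate(A, r):
            """Truncated HOSVD of a 3-D array A to multilinear rank (r, r, r)."""
            factors = []
            for mode in range(3):
                # Unfold A along `mode`; keep the r leading left singular vectors.
                M = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
                U, _, _ = np.linalg.svd(M, full_matrices=False)
                factors.append(U[:, :r])
            core = np.einsum('ijk,ia,jb,kc->abc', A, *factors)
            return core, factors

        n, r = 64, 8
        A = np.fromfunction(lambda i, j, k: np.exp(-(i + j + k) / n), (n, n, n))
        core, (U1, U2, U3) = hosvd_truncate(A, r)
        A_hat = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
        print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # tiny for smooth fields
        # Storage: 3*n*r + r**3 numbers versus n**3 -- linear, not cubic, in n.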

  5. UAV-based NDVI calculation over grassland: An alternative approach

    Science.gov (United States)

    Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc

    2016-04-01

    The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate ground resolution down to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly suitable for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. We therefore propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a Ricoh GR camera modified to acquire the NIR spectrum by removing the internal infrared filter, with a mounted optical filter additionally blocking all wavelengths below 700 nm; (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: first, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
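
    The index itself is simple to compute once co-registered NIR and red reflectance maps exist; a minimal sketch with numpy arrays standing in for the two camera bands:

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """NDVI = (NIR - Red) / (NIR + Red), computed pixelwise on
            co-registered reflectance maps; eps guards against zero denominators."""
            nir = nir.astype(float)
            red = red.astype(float)
            return (nir - red) / (nir + red + eps)

        # Stand-in 2x2 reflectance maps (real inputs come from the camera pair).
        nir = np.array([[0.45, 0.50], [0.40, 0.48]])
        red = np.array([[0.08, 0.10], [0.30, 0.09]])
        print(ndvi(nir, red))   # dense vegetation yields values approaching +1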

  6. Simulation and analysis of main steam control system based on heat transfer calculation

    Science.gov (United States)

    Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai

    2018-05-01

    In this paper, the 300 MW boiler of a thermal power plant was studied. MATLAB was used to write a program calculating the heat transfer between the main steam and the boiler flue gas, and the amount of water required to keep the main steam temperature at its target value was calculated. The heat transfer calculation program was then introduced into the Simulink simulation platform to build a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis of the main steam temperature, but also adapts to boiler load changes.

  7. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-01-01

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  8. Glass viscosity calculation based on a global statistical modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation; this includes data by Lakatos et al. (1972) and the recently published high temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.

  9. Development of NRESP98 Monte Carlo codes for the calculation of neutron response functions of neutron detectors. Calculation of the response function of spherical BF{sub 3} proportional counter

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-05-01

    A method to calculate the response function of a spherical BF{sub 3} proportional counter, which is commonly used as a neutron dose rate meter and as a neutron spectrometer with a multi-moderator system, has been developed. The existing NRESP code series, Monte Carlo codes for calculating the response functions of neutron detectors, was selected as the basis. However, since the application scope of the existing NRESP is restricted, NRESP98 was tuned as a generally applicable code, with expanded geometrical conditions, applicable elements, etc. NRESP98 was tested against the response function of the spherical BF{sub 3} proportional counter. Including the effect of the gas amplification factor distribution, the detailed evaluation of charged-particle transport, and the effect of the statistical distribution, the NRESP98 results fit the experimental data within {+-}10%. (author)

  10. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach consisting of assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies featuring burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  11. Sensor response time calculation with no stationary signals from a Nuclear Power Plant

    International Nuclear Information System (INIS)

    Vela, O.; Vallejo, I.

    1998-01-01

    Protection systems in a Nuclear Power Plant have to respond within a specific time fixed by design requirements. This time includes the event detection (sensor delay) and the system actuation time. It is traditionally obtained during refuelling by simulating the physical event which triggers the protection system with an electrical signal and measuring the protection system actuation time. Nowadays the sensor delay is calculated with noise analysis techniques. The signals are measured in the Control Room during normal operation of the Plant, decreasing both the time cost and the personnel radiation exposure. Noise analysis techniques require stationary signals, but the collected data are normally mixed with process signals that are non-stationary. This work shows the signal processing used to remove non-stationary components, using conventional filters and new wavelet analysis. (Author) 2 refs
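
    A sketch of one common way to suppress a slow non-stationary component before noise analysis, using a zero-phase high-pass Butterworth filter from SciPy (the cutoff is chosen for illustration only, not taken from the paper):

        import numpy as np
        from scipy import signal

        fs = 100.0                                   # sampling frequency, Hz
        t = np.arange(0, 60, 1 / fs)
        drift = 0.5 * t / 60                         # slow non-stationary trend
        noise = 0.1 * np.random.default_rng(2).standard_normal(t.size)
        x = drift + noise                            # plant-like signal: trend + noise

        # Zero-phase high-pass filtering removes the trend, keeping the
        # stationary noise whose spectrum carries the response-time information.
        sos = signal.butter(4, 0.1, btype='highpass', fs=fs, output='sos')
        x_stat = signal.sosfiltfilt(sos, x)
        print(x_stat.std())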

  12. Response surface methodology to simplify calculation of wood energy potency from tropical short rotation coppice species

    Science.gov (United States)

    Haqiqi, M. T.; Yuliansyah; Suwinarti, W.; Amirta, R.

    2018-04-01

    The Short Rotation Coppice (SRC) system is an option for providing renewable and sustainable feedstock for generating electricity in rural areas. Here, we focused on applying Response Surface Methodology (RSM) to simplify the calculation protocols for estimating wood chip production and energy potency of several tropical SRC species: Bauhinia purpurea, Bridelia tomentosa, Calliandra calothyrsus, Fagraea racemosa, Gliricidia sepium, Melastoma malabathricum, Piper aduncum, Vernonia amygdalina, Vernonia arborea and Vitex pinnata. The results showed that the highest calorific value was obtained from V. pinnata wood (19.97 MJ kg^-1) due to its high lignin content (29.84%, w/w). Our findings also indicated that the RSM estimate of the energy-electricity of SRC wood had a significant quadratic model term (R^2 = 0.953), whereas the solid-chip ratio prediction was accurate (R^2 = 1.000). In the near future, this simple formula promises easy calculation of energy production from woody biomass, especially from SRC species.

  13. Consolidating duodenal and small bowel toxicity data via isoeffective dose calculations based on compiled clinical data.

    Science.gov (United States)

    Prior, Phillip; Tai, An; Erickson, Beth; Li, X Allen

    2014-01-01

    To consolidate duodenum and small bowel toxicity data from clinical studies with different dose fractionation schedules using the modified linear quadratic (MLQ) model. A methodology for adjusting the dose-volume (D,v) parameters to different levels of normal tissue complication probability (NTCP) was presented. A set of NTCP model parameters for duodenum toxicity was estimated by the χ^2 fitting method using literature-based tolerance dose and generalized equivalent uniform dose (gEUD) data. These model parameters were then used to convert (D,v) data into the isoeffective dose in 2 Gy per fraction, (D(MLQED2),v), and to convert these parameters to an isoeffective dose at another NTCP, (D(MLQED2'),v). The literature search yielded 5 reports useful in making estimates of duodenum and small bowel toxicity. The NTCP model parameters were found to be TD50(1)(model) = 60.9 ± 7.9 Gy, m = 0.21 ± 0.05, and δ = 0.09 ± 0.03 Gy^-1. Isoeffective dose calculations and toxicity rates associated with hypofractionated radiation therapy reports were found to be consistent with clinical data having different fractionation schedules. Values of (D(MLQED2'),v) between different NTCP levels remain consistent over a range of 5%-20%. MLQ-based isoeffective calculations of dose-response data corresponding to grade ≥2 duodenum toxicity were found to be consistent with one another within the calculation uncertainty. The (D(MLQED2),v) data could be used to determine duodenum and small bowel dose-volume constraints for new dose escalation strategies. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
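
    For context, the standard (unmodified) linear-quadratic conversion to an isoeffective dose in 2 Gy fractions, which the MLQ model extends at high doses per fraction, is EQD2 = D (d + α/β)/(2 + α/β); a sketch with an illustrative α/β value, not the paper's fitted parameters:

        def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=3.0):
            """Standard LQ isoeffective dose in 2 Gy fractions. The MLQ model
            used in the paper modifies LQ at high doses per fraction; alpha/beta
            here is an illustrative late-effect value, not a fitted parameter."""
            d = dose_per_fraction_gy
            return total_dose_gy * (d + alpha_beta_gy) / (2.0 + alpha_beta_gy)

        # Example: 30 Gy in 5 fractions (6 Gy/fraction) for alpha/beta = 3 Gy.
        print(eqd2(30.0, 6.0))   # 54.0 Gy EQD2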

  14. A program to calculate pulse transmission responses through transversely isotropic media

    Science.gov (United States)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model the responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built on the distributed point source method, which treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters, including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and models the wave fields, static wave beam, and the observed signals of pulse transmission measurements, considering the material's elastic stiffnesses and orientation, the sample dimensions, and the size and positions of the transmitters and receivers. The program can be used to exhibit ultrasonic beam behaviour in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. The program is a useful tool for designing the experimental configuration and interpreting the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.

  15. Base response arising from free-field motions

    International Nuclear Information System (INIS)

    Whitley, J.R.; Morgan, J.R.; Hall, W.J.; Newmark, N.M.

    1977-01-01

    A procedure is illustrated in this paper for deriving (estimating) from a free-field record the horizontal base motions of a building, including horizontal rotation and translation. More specifically, the goal was to compare the results of response calculations based on derived accelerations with the results of calculations based on recorded accelerations. The motions are determined by assuming that an actual recorded ground wave transits a rigid base of a given dimension. Calculations given in the paper were made employing the earthquake acceleration time histories of the Hollywood storage building and the adjacent P.E. lot for the Kern County (1952) and San Fernando (1971) earthquakes. For the Kern County earthquake, the spectra computed from the derived base corner accelerations, including the effect of rotation, show generally fair agreement with the spectra computed from the Hollywood storage corner record. For the San Fernando earthquake, the agreement between the spectra computed from the derived base corner accelerations and those computed from the actual basement corner record is not as good as for the Kern County earthquake. These limited studies are admittedly hardly a sufficient basis on which to form a judgment, but the differences noted can probably be attributed in part to foundation distortion, building feedback, distance between measurement points, and soil-structure interaction; it was not possible to take any of these factors into account in these particular calculations

  16. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research on calculating a coefficient for decommissioning work-unit productivity, used to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we developed a program that calculates decommissioning costs using the decommissioning experience of KRR-2, UCP, and other countries through the mapping of similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for decommissioning target facilities is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning facilities such as NPPs because of the number of variables involved, such as the material of the facility to be decommissioned, its size, and its radiological conditions.

  17. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research on calculating a coefficient for decommissioning work-unit productivity, used to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we developed a program that calculates decommissioning costs using the decommissioning experience of KRR-2, UCP, and other countries through the mapping of similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for decommissioning target facilities is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning facilities such as NPPs because of the number of variables involved, such as the material of the facility to be decommissioned, its size, and its radiological conditions

  18. Using risk based tools in emergency response

    International Nuclear Information System (INIS)

    Dixon, B.W.; Ferns, K.G.

    1987-01-01

    Probabilistic Risk Assessment (PRA) techniques are used by the nuclear industry to model the potential response of a reactor subjected to unusual conditions. The knowledge contained in these models can aid in emergency response decision making. This paper presents the requirements developed to date for a PRA-based emergency response support system. A brief discussion of published work provides background for a detailed description of recent developments, including a rapid deep-assessment capability for specific portions of full plant models. The program uses a screening rule base to control search-space expansion in a combinatorial algorithm

  19. THEXSYST - a knowledge based system for the control and analysis of technical simulation calculations

    International Nuclear Information System (INIS)

    Burger, B.

    1991-07-01

    This system (THEXSYST) is used for the control, analysis and presentation of thermal-hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with user interface, a data base, and a simulation program, and uses techniques available in RSYST. A knowledge base, created to control the simulation calculations of pressurized water reactors, covers both the steady-state calculation and the transient calculation in the depressurization domain following a small-break loss-of-coolant accident. The methods developed are tested using a simulation calculation with RELAP5/Mod2. It is seen that knowledge-based techniques may be a helpful tool to support existing solutions, especially in graphical analysis. (orig./HP) [de

  20. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Power flow calculation and short-circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short-circuit calculations, are analyzed in this paper. An integrated power flow and short-circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method represents the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low-voltage ride-through capability of inverter-based distributed generation is considered as well. Finally, power flow and short-circuit current calculations are performed on a 33-bus distribution network. The results of the proposed method are contrasted with those of the traditional method and a simulation method, which verifies the effectiveness of the integrated method suggested in this paper.

  1. Calculation of the Energy Dependence of Dosimeter Response to Ionizing Photons

    DEFF Research Database (Denmark)

    Miller, Arne; McLaughlin, W. L.

    1982-01-01

    Using a BASIC program on a desk-top calculator, simplified calculations provide approximate energy-dependence correction factors for dosimeter readings of absorbed dose according to Bragg-Gray cavity theories. Burlin's general cavity theory is applied in the present calculations, and ce...

  2. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    OpenAIRE

    Wan'e, Wu; Zuoming, Zhu

    2012-01-01

    A practical scheme for selecting the characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant, based on a backpropagation neural network, was established, validated, and then used to predict the primary combustion characteristics of boron-based fuel-rich propellants. The results show that the calculation error of the burning rate is less than ±7.3%; in the formulation rang...
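
    A minimal stand-in for such a backpropagation regression model using scikit-learn, with synthetic data and illustrative feature names (the paper's inputs are its selected formulation parameters, not these):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        # Synthetic stand-in: formulation parameters (e.g. boron content,
        # oxidizer ratio, pressure) -> burning rate; real data come from firings.
        X = rng.random((200, 3))
        y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] ** 2

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        model.fit(X[:150], y[:150])                        # train on 150 formulations
        rel_err = np.abs(model.predict(X[150:]) - y[150:]) / y[150:]
        print(f"max relative error: {rel_err.max():.3f}")  # analogous to the ±7.3% bound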

  3. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value based on constructed value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and...

  4. Calculations of the resonant response of carbon nanotubes to binding of DNA

    International Nuclear Information System (INIS)

    Zheng Meng; Ke Changhong; Eom, Kilho

    2009-01-01

    We theoretically study the dynamical response of carbon nanotubes (CNTs) to the binding of DNA in an aqueous environment by considering two major interactions in DNA helical binding to the CNT side surface: adhesion between DNA nucleobases and CNT surfaces and electrostatic interactions between negative charges on DNA backbones. The equilibrium DNA helical wrapping angle is obtained using the minimum potential energy method. Our results show that the preferred DNA wrapping angle in the equilibrium binding to CNT is dependent on both DNA length and DNA base. The equilibrium wrapping angle for a poly(dT) chain is larger than a comparable poly(dA) chain as a result of dT in a homopolymer chain having a higher effective binding energy to CNT than dA. Our results also interestingly reveal a sharp transition in the wrapping angle-DNA length profile for both homopolymers, implying that the equilibrium helical wrapping configuration does not exist for a certain range of wrapping angles. Furthermore, the resonant response of the DNA-CNT complex is analysed based on the variational method with a Hamiltonian which takes into account the CNT bending energy as well as DNA-CNT interactions. The closed-form analytical solution for predicting the resonant frequency of the DNA-CNT complex is presented. Our results show that the hydrodynamic loading on the oscillating CNT in aqueous environments has profound impacts on the resonance behaviour of DNA-CNT complexes. Our results suggest that detection of DNA molecules using CNT resonators based on DNA-CNT interactions through frequency measurements should be conducted in media with low hydrodynamic loading on CNTs. Our theoretical framework provides a fundamental principle for label-free detection using CNT resonators based on DNA-CNT interactions.

  5. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by the FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy...

  6. MARIOLA: A model for calculating the response of mediterranean bush ecosystem to climatic variations

    Energy Technology Data Exchange (ETDEWEB)

    Uso-Domenech, J.L.; Ramo, M.P. [Department of Mathematics, Campus de Penyeta Roja, University Jaume I, Castellon (Spain); Villacampa-Esteve, Y. [Department of Analysis and Applied Mathematics, University of Alicante (Spain); Stuebing-Martinez, G. [Department of Botany, University of Valencia (Spain); Karjalainen, T. [Faculty of Forestry, University of Joensuu (Finland)

    1995-07-01

    The paper summarizes a bush ecosystem model developed for assessing the effects of climatic change on the behaviour of Mediterranean bushes, assuming that temperature, humidity and rainfall are the basic dimensions of the niche occupied by shrub species. In this context, changes in the monthly weather pattern serve only to outline the growth conditions, owing to the nonlinearity of the shrubs' response to climatic factors. The plant-soil-atmosphere system is described by means of ordinary non-linear differential equations for the state variables: green biomass, woody biomass, the residues of green and woody biomass, faecal detritus of mammals on the soil, and the total organic matter of the soil. The behaviour of the flow variables is described by equations obtained from non-linear multiple regressions on the state variables and the input variables. The model has been applied with success to the behaviour of Cistus albidus in two zones of the Province of Alicante (Spain). The data base for parametrization (zone 1) and validation (zone 2) is based upon measurements taken weekly over a 2-year period. The model is used to simulate the response of this shrub to a decreasing tendency in precipitation combined with a simultaneous rise in temperature. A period of 10 years is simulated, and it is observed that plants with woody biomass smaller than 85 g die between the first and the third month, while the biomass of the other plants decreases during this period, and strongly thereafter

  7. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs

  8. A clustering approach to segmenting users of internet-based risk calculators.

    Science.gov (United States)

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

    Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information, a secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
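
    A sketch of the segmentation step with scikit-learn, using hypothetical two-feature rows (perceived risk, objective risk) in the spirit of the best-performing variable set described above:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        # Hypothetical consumers: column 0 = pre-intervention perceived risk
        # (0-1), column 1 = objective model-estimated risk (0-1).
        X = np.vstack([
            rng.normal([0.7, 0.3], 0.05, (50, 2)),   # overestimators
            rng.normal([0.2, 0.6], 0.05, (50, 2)),   # underestimators
        ])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        for k in range(2):
            print(k, X[labels == k].mean(axis=0))    # approximate cluster centroids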

  9. An Analysis on the Calculation Efficiency of the Responses Caused by the Biased Adjoint Fluxes in Hybrid Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho

    2015-01-01

    This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method, and it is implemented in the SCALE code system. In the CADIS method, an adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it has been noted that bias in the adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the deterministic solution, and inaccurate multi-group cross-section libraries. In this paper, a study analyzing the influence of biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and FOMs were evaluated with the SCALE code system, applying the adjoint fluxes. The results show that biased adjoint fluxes significantly affect the calculation efficiency
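
    For reference, the standard CADIS relations (as commonly stated for SCALE's implementation; LaTeX notation) build the biased source and the weight-window targets from the adjoint flux and the response:

        \hat{q}(\vec{r},E) = \frac{\phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)}{R},
        \qquad
        R = \iint \phi^{\dagger}(\vec{r},E)\, q(\vec{r},E)\, d\vec{r}\, dE,
        \qquad
        w(\vec{r},E) = \frac{R}{\phi^{\dagger}(\vec{r},E)}

    A biased (over- or underestimated) adjoint flux therefore distorts both the sampling distribution and the particle weights, which is the efficiency effect the study quantifies.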

  10. Calculations of the response functions of Bonner spheres with a spherical 3He proportional counter using a realistic detector model

    International Nuclear Information System (INIS)

    Wiegel, B.; Alevra, A.V.; Siebert, B.R.L.

    1994-11-01

    A realistic geometry model of a Bonner sphere system with a spherical 3He-filled proportional counter and 12 polyethylene moderating spheres with diameters ranging from 7.62 cm (3'') to 45.72 cm (18'') is introduced. The MCNP Monte Carlo computer code is used to calculate the responses of this Bonner sphere system to monoenergetic neutrons in the energy range between 1 meV and 20 MeV. The relative uncertainties of the responses due to the Monte Carlo calculations are less than 1% for spheres up to 30.48 cm (12'') in diameter and less than 2% for the 15'' and 18'' spheres. Resonances in the carbon cross section are seen as significant structures in the response functions. Additional calculations were made to study the influence of the 3He number density and the polyethylene mass density on the response, as well as the angular dependence of the Bonner sphere system. The calculated responses can be adjusted to a large set of calibration measurements with only a single fit factor common to all sphere diameters and energies. (orig.) [de

  11. Calculations of risk: regulation and responsibility for asbestos in social housing.

    Science.gov (United States)

    Waldman, Linda; Williams, Heather

    2013-01-01

    This paper examines questions of risk, regulation, and responsibility in relation to asbestos lodged in UK social housing. Despite extensive health and safety legislation protecting against industrial exposure, very little regulatory attention is given to asbestos present in domestic homes. The paper argues that this lack of regulatory oversight, combined with the informal, contractual, and small-scale work undertaken in domestic homes weakens the basic premise of occupational health and safety, namely that rational decision-making, technical measures, and individual safety behavior lead concerned parties (workers, employers, and others) to minimize risk and exposure. The paper focuses on UK council or social housing, examining how local housing authorities - as landlords - have a duty to provide housing, to protect and to care for residents, but points out that these obligations do not extend to health and safety legislation in relation to DIY undertaken by residents. At the same time, only conventional occupational health and safety, based on rationality, identification, containment, and protective measures, cover itinerant workmen entering these homes. Focusing on asbestos and the way things work in reality, this paper thus explores the degree to which official health and safety regulation can safeguard maintenance and other workers in council homes. It simultaneously examines how councils advise and protect tenants as they occupy and shape their homes. In so doing, this paper challenges the notion of risk as an objective, scientific, and effective measure. In contrast, it demonstrates the ways in which occupational risk - and the choice of appropriate response - is more likely situational and determined by wide-ranging and often contradictory factors.

  12. A RTS-based method for direct and consistent calculating intermittent peak cooling loads

    International Nuclear Information System (INIS)

    Chen Tingyao; Cui, Mingxian

    2010-01-01

    The RTS method currently recommended by the ASHRAE Handbook is based on continuous operation. However, most air-conditioning systems in commercial buildings, if not all, are operated intermittently in practice. Applying the current RTS method to intermittent air-conditioning in nonresidential buildings can result in largely underestimated design cooling loads and inconsistently sized air-conditioning systems. Improperly sized systems can seriously deteriorate the performance of system operation and management. Therefore, a new method based on both the current RTS method and the principles of heat transfer has been developed. The first part of the new method is the same as the current RTS method in principle, but its calculation procedure is simplified by the derived equations in closed form. The technical data available for the current RTS method can be used to compute zone responses to a change in space air temperature, so no effort is needed to regenerate new technical data. Both the overall RTS coefficients and the hourly cooling loads computed in the first part are used to estimate the additional peak cooling load due to a change from continuous to intermittent operation. Only one more step after the current RTS method is needed to determine the intermittent peak cooling load. The new RTS-based method has been validated by EnergyPlus simulations. The root mean square deviation (RMSD) between the relative additional peak cooling loads (RAPCLs) computed by the two methods is 1.8%. The deviation of the RAPCL varies from -3.0% to 5.0%, and the mean deviation is 1.35%.
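
    The core RTS operation is a 24-term convolution of hourly radiant heat gains with the radiant time series; a sketch with a made-up decaying coefficient series (not ASHRAE's tabulated values):

        import numpy as np

        def rts_cooling_load(radiant_gain_24h, rts_coeffs):
            """Hourly radiant cooling load Q(t) = sum_i r_i * q(t - i), where the
            24 RTS coefficients r_i sum to 1 (illustrative values here)."""
            q = np.asarray(radiant_gain_24h)
            r = np.asarray(rts_coeffs)
            return np.array([sum(r[i] * q[(t - i) % 24] for i in range(24))
                             for t in range(24)])

        r = 0.6 ** np.arange(24); r /= r.sum()   # illustrative radiant time series
        hours = np.arange(24)
        q = np.where((hours >= 8) & (hours < 18), 1000.0, 0.0)  # daytime gain, W
        print(rts_cooling_load(q, r).max())      # the peak load lags the peak gain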

  13. Applications of thermodynamic calculations to Mg alloy design: Mg-Sn based alloy development

    International Nuclear Information System (INIS)

    Jung, In-Ho; Park, Woo-Jin; Ahn, Sang Ho; Kang, Dae Hoon; Kim, Nack J.

    2007-01-01

    Recently, the Mg-Sn based alloy system has been actively investigated in order to develop new magnesium alloys which have a stable structure and good mechanical properties at high temperatures. Thermodynamic modeling of the Mg-Al-Mn-Sb-Si-Sn-Zn system was performed based on available thermodynamic, phase equilibria and phase diagram data. Using the optimized database, the phase relationships of Mg-Sn-Al-Zn alloys with additions of Si and Sb were calculated and compared with their experimental microstructures. It is shown that the calculated results are in good agreement with the experimental microstructures, which proves the applicability of thermodynamic calculations to new Mg alloy design. All calculations were performed using the FactSage thermochemical software. (orig.)

  14. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, noise, error, or uncertainty in the PIV measurements eventually propagates to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between physical properties of the flow and numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
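
    The vorticity half of the comparison is a simple finite-difference operation on the measured velocity field; a sketch for a 2-D PIV grid, where any measurement noise on (u, v) is amplified by the derivatives:

        import numpy as np

        def vorticity_2d(u, v, dx, dy):
            """Out-of-plane vorticity w_z = dv/dx - du/dy on a uniform PIV grid,
            via central differences (np.gradient); noise in u, v is amplified."""
            dvdx = np.gradient(v, dx, axis=1)
            dudy = np.gradient(u, dy, axis=0)
            return dvdx - dudy

        y, x = np.mgrid[0:32, 0:32] * 0.001          # 1 mm grid spacing, in m
        u = -(y - 0.016); v = (x - 0.016)            # solid-body-like vortex
        print(vorticity_2d(u, v, 0.001, 0.001).mean())  # ~2, twice the rotation rate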

  15. Calculating evidence-based renal replacement therapy - Introducing an excel-based calculator to improve prescribing and delivery in renal replacement therapy - A before and after study.

    Science.gov (United States)

    Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin

    2016-02-01

    Transferring the theoretical aspects of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as the effluent flow rate in ml kg^-1 h^-1. Unfortunately, most machines require other information when initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment and deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, while prolonging filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg^-1 h^-1 whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose fell from 41.0 ml kg^-1 h^-1 to 26.8 ml kg^-1 h^-1, with reduced variability, significantly closer to the aim of 25 ml kg^-1 h^-1 (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
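
    A sketch of the two constraints such a calculator enforces, using commonly cited textbook definitions rather than the authors' spreadsheet: effluent dose = (dialysate + replacement + net fluid removal) / weight, and filtration fraction = convective flow / plasma water flow.

        def crrt_check(weight_kg, blood_ml_min, hct, dialysate_ml_h,
                       replacement_ml_h, removal_ml_h):
            """Effluent dose (ml/kg/h) and filtration fraction for post-dilution
            CVVHDF, with common textbook definitions (not the paper's tool)."""
            effluent = dialysate_ml_h + replacement_ml_h + removal_ml_h
            dose = effluent / weight_kg
            plasma_ml_h = blood_ml_min * 60.0 * (1.0 - hct)   # plasma flow, ml/h
            ff = (replacement_ml_h + removal_ml_h) / plasma_ml_h
            return dose, ff

        dose, ff = crrt_check(80, 180, 0.30, 1200, 800, 100)
        print(f"dose = {dose:.1f} ml/kg/h, filtration fraction = {ff:.1%}")
        # Target: dose near 25 ml/kg/h with filtration fraction kept below 15%.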

  16. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations, and (2) matching optimization of the vaned diffuser with the impeller based on the required throat area. A low-pressure-stage centrifugal compressor in an MW-level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned: the outlet diameter of the vaneless diffuser has been reduced, and the original single-stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  17. Promoting Culturally Responsive Standards-Based Teaching

    Science.gov (United States)

    Saifer, Steffen; Barton, Rhonda

    2007-01-01

    Culturally responsive standards-based (CRSB) teaching can help bring diverse school communities together and make learning meaningful. Unlike multicultural education--which is an important way to incorporate the world's cultural and ethnic diversity into lessons--CRSB teaching draws on the experiences, understanding, views, concepts, and ways of…

  18. Calculation of parameters of radial-piston reducer based on the use of functional semantic networks

    Directory of Open Access Journals (Sweden)

    Pashkevich V.M.

    2016-12-01

    This article considers the calculation of the parameters of a radial-piston reducer. The approach used is based on functional semantic network technologies. The possibility of applying functional semantic networks to the calculation of radial-piston reducer parameters is examined, and semantic networks for calculating the mass of the radial-piston reducer are given.

  19. Evaluation of RSG-GAS Core Management Based on Burnup Calculation

    International Nuclear Information System (INIS)

    Lily Suparlina; Jati Susilo

    2009-01-01

    Evaluation of RSG-GAS Core Management Based on Burn-up Calculation. Presently, U3Si2-Al dispersion fuel is used in the RSG-GAS core, which has passed its 60th cycle. At the beginning of each cycle the 5/1 fuel reshuffling pattern is used. Since the 52nd core, operators have not used the core fuel management computer code provided by the vendor for this activity; instead, they perform the calculation manually using Excel. To assess the accuracy of this calculation, core calculations were carried out using two 2-dimensional diffusion codes, Batan-2DIFF and SRAC. The beginning-of-cycle burn-up fraction data were calculated for the 51st to 60th cores using Batan-EQUIL and SRAC COREBN. The analysis results showed a disparity in the reactivity values of the two calculation methods. For the 60th core critical position, the Batan-2DIFF calculation gives a positive reactivity reduction of 1.84 % Δk/k, while the manual calculation gives a positive reactivity increase of 2.19 % Δk/k. The minimum shutdown margins for the stuck-rod condition are -3.35 % Δk/k for the manual calculation and -1.13 % Δk/k for the Batan-3DIFF calculation, meaning that both values meet the safety criterion of < -0.5 % Δk/k. The Excel program can be used for burn-up calculation, but a core management code is needed to reach higher accuracy. (author)

  20. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank, in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global maximum dose or 1 mm distance to agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing the dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. The IDC-calculated dose is found to be up to 5.6% lower than the TPS dose in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
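
    A simplified sketch of a global gamma (2%/2 mm) comparison between two co-registered dose grids, using a brute-force neighbourhood search; edge wrap-around and sub-pixel interpolation are ignored, so this is illustrative only, not the validated QA tool:

        import numpy as np

        def gamma_pass_rate(ref, ev, dx_mm, dta_mm=2.0, dd_pct=2.0):
            """Fraction of points with gamma <= 1 for global dd_pct/dta_mm criteria
            on co-registered 2-D dose grids with pixel spacing dx_mm."""
            norm = dd_pct / 100.0 * ref.max()          # global dose criterion
            search = int(np.ceil(dta_mm / dx_mm)) + 1  # search window in pixels
            gamma = np.full(ref.shape, np.inf)
            for sy in range(-search, search + 1):
                for sx in range(-search, search + 1):
                    dist2 = (sy * dx_mm) ** 2 + (sx * dx_mm) ** 2
                    shifted = np.roll(ev, (sy, sx), axis=(0, 1))  # wraps at edges
                    g2 = ((shifted - ref) / norm) ** 2 + dist2 / dta_mm ** 2
                    gamma = np.minimum(gamma, np.sqrt(g2))
            return np.mean(gamma <= 1.0)

        ref = np.ones((50, 50)); ev = ref * 1.01       # uniform 1% difference
        print(gamma_pass_rate(ref, ev, dx_mm=1.0))     # passes everywhere -> 1.0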

  1. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni(001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance

  2. Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques

    International Nuclear Information System (INIS)

    Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar

    2012-01-01

    Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches, phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In the ROI mapping approach, all structures were overridden with average HUs from the planning CT. All techniques were benchmarked against the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in the PTV median dose below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases, PTV coverage was underestimated by the Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head-neck IMRT plans resulted in differences in PTV coverage of up to 5%. Dose calculation with the WAB and ROI techniques showed better agreement with pCT than the conversion-curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion-curve-based methods for dose calculation on CBCT images.

  3. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysiuk Svitlana V.

    2017-05-01

    The article is aimed at highlighting features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods to implement this. The objective of the study is to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision for the procedure of providing fee-based services by library institutions has been substantiated. Such a Provision would protect a library institution from errors in fixing the fee for a paid service and would serve as an information source explaining it. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, the combination of calculation leverages with the development of a system of internal accounting, together with use of its methodology, provides another equally efficient way of improving the efficiency of library institutions' activity.

  4. Calculation of marine propeller static strength based on coupled BEM/FEM

    Directory of Open Access Journals (Sweden)

    YE Liyu

    2017-10-01

    [Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low-order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transferred to the finite element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results with different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distribution are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into the code of theoretical and optimized propeller designs, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.

  5. Development and validation of a criticality calculation scheme based on French deterministic transport codes

    International Nuclear Information System (INIS)

    Santamarina, A.

    1991-01-01

    A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and was always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)

  6. Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Razaie Rayeni Nejad, M. R.

    2012-01-01

    CR-39 detectors are widely used for radon and progeny measurement in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and progeny in water is investigated. Assuming random positions and angles of the alpha particles emitted by radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable depth of chemical etching of CR-39 in air and water were calculated. In this simulation, a range of data were obtained from the SRIM2008 software. The calibration factor of CR-39 in water is calculated as 6.6 (kBq·d/m³)/(track/cm²), which corresponds to the EPA standard level of radon concentration in water (10-11 kBq/m³). Replacing the CR-39 with skin, the volume affected by radon and progeny was determined to be 2.51 mm³ per m² of skin area. The annual dose conversion factor for radon and progeny was calculated to be between 8.8-58.8 nSv/(Bq·h/m³). Using CR-39 for radon measurement in water can be beneficial.

  7. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    In internal dosimetry, intake retention and excretion functions are essential to estimate intake activity from bioassay samples such as whole body counter, lung counter, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides the functions in some ICRP publications, it is necessary to calculate them because the published functions are provided only for a very limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and estimate intake activity. OIR (Occupational Intakes of Radionuclides) will be published soon by the ICRP, and it totally replaces the existing internal dosimetry models and relevant data, including intake retention and excretion functions. Thus, a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.

  8. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki; Lee, Jong-Il; Kim, Jang-Lyul

    2014-01-01

    In internal dosimetry, intake retention and excretion functions are essential to estimate intake activity from bioassay samples such as whole body counter, lung counter, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides the functions in some ICRP publications, it is necessary to calculate them because the published functions are provided only for a very limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and estimate intake activity. OIR (Occupational Intakes of Radionuclides) will be published soon by the ICRP, and it totally replaces the existing internal dosimetry models and relevant data, including intake retention and excretion functions. Thus, a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.

  9. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.

    1995-01-01

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying the code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) No. 35. This is a blind exercise, simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of the application of this methodology to nineteen selected variables calculated by ten participants are summarized here, and the comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)

  10. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, while the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services.

  11. Development of SCINFUL-CG code to calculate response functions of scintillators in various shapes used for neutron measurement

    Energy Technology Data Exchange (ETDEWEB)

    Endo, Akira; Kim, Eunjoo; Yamaguchi, Yasuhiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-10-01

    The Monte Carlo code SCINFUL has been utilized for calculating response functions of organic scintillators for high-energy neutron spectroscopy. However, the applicability of SCINFUL is limited to calculations for cylindrical NE213 and NE110 scintillators. In the present study, SCINFUL-CG was developed by introducing a geometry-specifying function and high-energy neutron cross section data into SCINFUL. The geometry package MARS-CG, the extended version of CG (Combinatorial Geometry), was programmed into SCINFUL-CG to express various detector geometries. Neutron spectra in the regions specified by the CG can be evaluated by the track length estimator. The cross section data of silicon, oxygen and aluminum for the neutron transport calculation were incorporated up to 100 MeV using the data of the LA150 library. The validity of SCINFUL-CG was examined by comparing calculated results with those from SCINFUL and MCNP and with experimental data measured in high-energy neutron fields. SCINFUL-CG can be used for calculations of the response functions and neutron spectra of organic scintillators in various shapes. The computer code will be applicable to the design of high-energy neutron spectrometers and neutron monitors using organic scintillators. The present report describes the new features of SCINFUL-CG and explains how to use the code. (author)

  12. Calculation of Coupled Vibroacoustics Response Estimates from a Library of Available Uncoupled Transfer Function Sets

    Science.gov (United States)

    Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett

    2012-01-01

    The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specifications to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing the vibration response of a bare panel, designated H^s, and the second set representing the response of the free-free component equipment by itself, designated H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
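
    The workflow described above can be sketched in a few lines: select H^s and H^c from the library, couple them into H^(s+c), and multiply by the input excitation. The coupling rule below is a simple series placeholder, not the combination derived in the paper, and all numerical values are invented for illustration.

      import numpy as np

      freqs = np.linspace(20.0, 2000.0, 1000)        # Hz
      H_s = 1.0 / (1.0 + 1j * freqs / 300.0)         # bare-panel TF (illustrative)
      H_c = 1.0 / (1.0 + 1j * freqs / 800.0)         # free-free component TF (illustrative)

      H_sc = H_s * H_c / (H_s + H_c)                 # placeholder coupling rule only

      input_psd = np.full_like(freqs, 0.01)          # acoustic input PSD, g^2/Hz (assumed)
      response_psd = np.abs(H_sc)**2 * input_psd     # estimated vibration response PSD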

  13. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wu Wan'e

    2012-01-01

    A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant based on a backpropagation neural network was established, validated, and then used to predict the primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculated data is consistent with the experimental results.
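
    A minimal sketch of the backpropagation idea is given below, regressing burning rate on formulation fractions with a small neural network; the architecture, training data and values are invented for illustration and are not the authors' model.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # columns: HTPB, AP, magnalium, catocene mass fractions (boron fixed at 30%)
      X = np.array([[0.30, 0.32, 0.06, 0.02],
                    [0.28, 0.35, 0.04, 0.03],
                    [0.32, 0.30, 0.08, 0.00]])
      y = np.array([6.1, 7.4, 5.2])                  # burning rate, mm/s (made-up values)

      model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      model.fit(X, y)
      print(model.predict([[0.30, 0.33, 0.05, 0.02]]))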

  14. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-01-01

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  15. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
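
    The estimator itself can be sketched as follows; a simple analytic surrogate g_hat stands in for the Kriging metamodel, and the kernel density h is built from assumed points near the limit state, so every numerical detail here is an illustration rather than the paper's procedure.

      import numpy as np
      from scipy import stats

      def g_hat(x):                                  # surrogate limit state (g <= 0: failure)
          return 4.0 - x.sum(axis=1)

      rng = np.random.default_rng(0)
      f = stats.multivariate_normal(mean=[0.0, 0.0])           # original input density
      centers = rng.normal(loc=2.0, size=(50, 2))              # assumed points near limit state
      h_cov = 0.5 * np.eye(2)                                  # kernel bandwidth (assumed)

      n = 10_000
      x = centers[rng.integers(len(centers), size=n)] + \
          rng.multivariate_normal([0.0, 0.0], h_cov, size=n)   # sample from kernel density h
      h_pdf = np.mean([stats.multivariate_normal(c, h_cov).pdf(x) for c in centers], axis=0)
      w = f.pdf(x) / h_pdf                                     # importance weights f/h
      p_f = np.mean((g_hat(x) <= 0) * w)                       # failure probability estimate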

  16. A New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    Science.gov (United States)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely used for the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has always been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between fractal dimensions and pore size distribution, which can be used directly to calculate the fractal dimensions. The published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also demonstrate a self-similar fractal range in sandstone when smaller pores are excluded.
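
    For intuition, a pore-space fractal dimension can be estimated from a cumulative pore-size distribution via the scaling N(>r) ~ r^(-D); the sketch below uses invented data and a generic log-log fit, not the exact expression derived in the paper.

      import numpy as np

      r = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # pore radius, micrometres (invented)
      N = np.array([10000, 1800, 320, 56, 10, 2])      # number of pores with radius > r

      slope, _ = np.polyfit(np.log(r), np.log(N), 1)   # fit log N = -D log r + const
      D = -slope                                       # fractal dimension estimate, ~2.5 here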

  17. Smart Demand Response Based on Smart Homes

    Directory of Open Access Journals (Sweden)

    Jingang Lai

    2015-01-01

    Smart homes (SHs) are crucial parts of demand response management (DRM) of the smart grid (SG). The aim of SH-based demand response (DR) is to provide flexible two-way energy feedback whilst (or shortly after) the consumption occurs. It can potentially persuade end-users to achieve energy savings and cooperate with the electricity producer or supplier to maintain balance between electricity supply and demand through peak shaving and valley filling. However, existing solutions are challenged by the lack of consideration of the wide application of fiber power cable to the home (FPCTTH) and related users' behaviors. Based on the new network infrastructure, the design and development of smart DR systems based on SHs are related not only to functionalities such as security, convenience, and comfort, but also to energy savings. A new multirouting protocol based on Kruskal's algorithm is designed for the reliability and safety of the SH distribution network. The benefits of FPCTTH-based SHs are summarized at the end of the paper.

  18. Shear and Turbulence Estimates for Calculation of Wind Turbine Loads and Responses Under Hurricane Strength Winds

    Science.gov (United States)

    Kosovic, B.; Bryan, G. H.; Haupt, S. E.

    2012-12-01

    Schwartz et al. (2010) recently reported that the total gross energy-generating offshore wind resource in the United States in waters less than 30 m deep is approximately 1000 GW. The estimated offshore generating capacity is thus equivalent to the current generating capacity in the United States. Offshore wind power can therefore play an important role in electricity production in the United States. However, most of this resource is located along the East Coast of the United States and in the Gulf of Mexico, areas frequently affected by tropical cyclones including hurricanes. Hurricane-strength winds and the associated shear and turbulence can affect the performance and structural integrity of wind turbines. In a recent study, Rose et al. (2012) attempted to estimate the risk to offshore wind turbines from hurricane-strength winds over the lifetime of a wind farm (i.e. 20 years). According to Rose et al., turbine tower buckling has been observed in typhoons. They concluded that there is "substantial risk that Category 3 and higher hurricanes can destroy half or more of the turbines at some locations." More robust designs including appropriate controls can mitigate the risk of wind turbine damage. To develop such designs, good estimates of turbine loads under hurricane-strength winds are essential. We use output from a large-eddy simulation of a hurricane to estimate shear and turbulence intensity over the first couple of hundred meters above the sea surface. We compute power spectra of the three velocity components at several distances from the eye of the hurricane. Based on these spectra, analytical spectral forms are developed and included in TurbSim, a stochastic inflow turbulence code developed by the National Renewable Energy Laboratory (NREL, http://wind.nrel.gov/designcodes/preprocessors/turbsim/). TurbSim provides a numerical simulation including bursts of coherent turbulence associated with organized turbulent structures. It can generate realistic flow conditions that an operating turbine

  19. A simple method for calculating power based on a prior trial.

    NARCIS (Netherlands)

    Borm, G.F.; Bloem, B.R.; Munneke, M.; Teerenstra, S.

    2010-01-01

    OBJECTIVE: When an investigator wants to base the power of a planned clinical trial on the outcome of another trial, the latter study may not have been reported in sufficient detail to allow this. For example, when the outcome is a change from baseline, the power calculation requires the standard deviation of the change.
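
    A normal-approximation version of such a power calculation is sketched below; the effect size and standard deviation are placeholders standing in for values read off a prior trial, not figures from the cited study.

      from scipy.stats import norm

      delta, sd = 4.0, 10.0                 # assumed mean difference and SD from a prior trial
      n_per_arm = 64
      alpha = 0.05

      z_a = norm.ppf(1 - alpha / 2)
      z = delta / (sd * (2 / n_per_arm) ** 0.5) - z_a
      power = norm.cdf(z)                   # ~0.62 for these inputs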

  20. Slope excavation quality assessment and excavated volume calculation in hydraulic projects based on laser scanning technology

    Directory of Open Access Journals (Sweden)

    Chao Hu

    2015-04-01

    Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause errors. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow is presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional and holistic quality assessment methods were proposed to assess slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of the holistic quality assessment are consistent with those of the cross-sectional quality assessment. In addition, the time consumed in excavation quality assessment with laser scanning technology can be reduced by 70%–90% compared with the traditional method. The excavated volume calculated with the scanned data differs only slightly from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.

  1. Fragment-based quantum mechanical calculation of protein-protein binding affinities.

    Science.gov (United States)

    Wang, Yaqian; Liu, Jinfeng; Li, Jinjin; He, Xiao

    2018-04-29

    The electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method has been successfully utilized for efficient linear-scaling quantum mechanical (QM) calculation of protein energies. In this work, we applied the EE-GMFCC method to the calculation of the binding affinity of the Endonuclease colicin-immunity protein complex. The binding free energy changes between the wild-type and mutants of the complex calculated by EE-GMFCC are in good agreement with experimental results. The correlation coefficient (R) between the predicted binding energy changes and experimental values is 0.906 at the B3LYP/6-31G*-D level, based on the snapshot whose binding affinity is closest to the average result from the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) calculation. The inclusion of the QM effects is important for accurate prediction of protein-protein binding affinities. Moreover, the self-consistent calculation of PB solvation energy is required for accurate calculations of protein-protein binding free energies. This study demonstrates that the EE-GMFCC method is capable of providing reliable prediction of relative binding affinities for protein-protein complexes. © 2018 Wiley Periodicals, Inc.

  2. Radial electromagnetic force calculation of induction motor based on multi-loop theory

    Directory of Open Access Journals (Sweden)

    HE Haibo

    2017-12-01

    [Objectives] In order to study the vibration and noise of induction motors, a method for radial electromagnetic force calculation is established on the basis of the multi-loop model. [Methods] Starting from the method of calculating the air-gap magnetomotive force from the stator and rotor fundamental wave currents, analytic formulas are deduced for the air-gap magnetomotive force and radial electromagnetic force generated by any stator winding and rotor conducting bar current. The multi-loop theory and the calculation method for the electromagnetic parameters of a motor are introduced, and a dynamic simulation model of an induction motor is built to obtain the currents of the stator winding and rotor conducting bars and the calculation formula of the radial electromagnetic force. The radial electromagnetic force and vibration are then estimated. [Results] The calculated vibration acceleration frequency and amplitude of the motor are consistent with the experimental results. [Conclusions] The results and calculation method can support the low-noise design of converters.

  3. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    Science.gov (United States)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  4. Time Analysis of Building Dynamic Response Under Seismic Action. Part 2: Example of Calculation

    Science.gov (United States)

    Ufimtcev, E. M.

    2017-11-01

    The second part of the article illustrates the use of the time analysis method (TAM) by the example of the calculation of a 3-storey building, the design dynamic model (DDM) of which is adopted in the form of a flat vertical cantilever rod with 3 horizontal degrees of freedom associated with floor and coverage levels. The parameters of natural oscillations (frequencies and modes) are determined, and the elastic forced oscillations of the building's DDM are calculated as oscillograms of the response parameters on the time interval t ∈ [0; 131.25] s. The obtained results are analyzed on the basis of the computed values of the residual of the DDM motion equation and a comparison of the results calculated by the numerical approach (FEM) and by the normative method set out in SP 14.13330.2014 "Construction in Seismic Regions". The analysis testifies to the correctness of the computational model as well as the high accuracy of the results obtained. In conclusion, it is shown that the use of the TAM will improve the strength of buildings and structures subject to seismic influence when designing them.

  5. Medication calculation: the potential role of digital game-based learning in nurse education.

    Science.gov (United States)

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.

  6. Calculation of fluid (steam) hammer loading to piping systems by the response spectrum method

    International Nuclear Information System (INIS)

    Krause, G.; Schrader, W.; Leimbach, K.R.

    1983-01-01

    Today, computations of fluid and steam hammer loading on piping systems are usually performed as a time-history analysis in which the transient pressure forces act as external excitations. For practical purposes it is desirable to be able to treat fluid hammer loading using the response spectrum method, similarly to loads from external events. Two advantages arise from the use of spectra in the analysis of piping systems subjected to dynamic force excitations. Firstly, the response spectrum method is much less sensitive to model idealization than the time-history method. Secondly, computational effort is reduced. In this paper the algorithm for the treatment of force excitations through the modal response spectrum method is briefly presented. The effect of the residuum accounting for higher modes which are not part of the modal decomposition is considered. In particular, various methods of superposition of the responses of the dynamic forces and of the modes are investigated. Results and comparisons of several response spectrum analyses and time-history analyses are presented. (orig.)
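
    The modal combination step described above can be illustrated with a small SRSS sketch; the participation factors, spectral values and residual term below are invented, and the actual superposition rules compared in the paper may differ.

      import numpy as np

      gamma = np.array([1.20, 0.45, 0.18])   # modal participation factors (invented)
      S = np.array([3.0, 1.6, 0.9])          # force-response spectrum values at modal frequencies
      phi = np.array([0.9, -0.6, 0.3])       # mode shape ordinates at the response point

      r_modes = gamma * S * phi              # per-mode peak responses
      r_residual = 0.2                       # static correction for truncated higher modes
      r_peak = np.sqrt(np.sum(r_modes**2) + r_residual**2)   # SRSS combination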

  7. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  8. Two-dimensional core calculation research for fuel management optimization based on CPACT code

    International Nuclear Information System (INIS)

    Chen Xiaosong; Peng Lianghui; Gang Zhi

    2013-01-01

    The fuel management optimization process requires rapid assessment of candidate core loading patterns; commonly used methods include the two-dimensional diffusion nodal method, the perturbation method, the neural network method, etc. A two-dimensional loading pattern evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. The axial buckling, introduced to simulate axial leakage, was searched in burnup subsections to correct the two-dimensional core diffusion calculation results. Meanwhile, in order to obtain better accuracy, the weighted equivalent volume method for the control rod assembly cross-sections was improved. (authors)

  9. Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink

    Directory of Open Access Journals (Sweden)

    Shi Xiu Feng

    2016-01-01

    If the stability of the power system is lost, a large number of users will suffer power outages, and the whole system may even collapse, with extremely serious consequences. Taking a single-machine infinite-bus system as an example, a two-phase ground fault at point f is analyzed, with the circuit breakers on either side of the faulted line tripping simultaneously to clear it. The transient stability of the system is analyzed by two methods, calculation and simulation, and the conclusions are consistent; moreover, the simulation analysis is superior to the calculation analysis.

  10. Core physics design calculation of mini-type fast reactor based on Monte Carlo method

    International Nuclear Information System (INIS)

    He Keyu; Han Weishi

    2007-01-01

    An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR can satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to provide a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)

  11. General Method for Calculating the Response and Noise Spectra of Active Fabry-Perot Semiconductor Waveguides With External Optical Injection

    DEFF Research Database (Denmark)

    Blaaberg, Søren; Mørk, Jesper

    2009-01-01

    We present a theoretical method for calculating small-signal modulation responses and noise spectra of active Fabry-Perot semiconductor waveguides with external light injection. Small-signal responses due to either a modulation of the pump current or due to an optical amplitude or phase modulatio...... amplifiers and an injection-locked laser. We also demonstrate the applicability of the method to analyze slow and fast light effects in semiconductor waveguides. Finite reflectivities of the facets are found to influence the phase changes of the injected microwave-modulated light....

  12. Dielectric Response at THz Frequencies of Fe Water Complexes and Their Interaction with O3 Calculated by Density Functional Theory

    Science.gov (United States)

    2012-10-24

    Density functional theory (DFT) calculations are reported of the dielectric response at THz frequencies of Fe water complexes and their interaction with O3. The transition state, i.e. the geometric arrangement of the atoms in a chemical system at the maximal peak of the energy surface separating reactants from products, is characterized, and the calculation of the ground state resonance structure is used for the construction of parameterized dielectric response functions for excitation.

  13. Detector response calculated with libamtrack compared with data for different solid state detectors

    DEFF Research Database (Denmark)

    Herrmann, Rochus; Greilich, Steffen; Grzanka, Leszek

    ... Greilich, S., et al., "Amorphous track models: A numerical comparison study", Radiat. Meas., in press; doi:10.1016/j.radmeas.2010.05.039. [3] Palmans, H., "Effect of alanine energy response and phantom materials on depth dose measurements in ocular proton beams", Technol. Cancer Res. Treat. 2(6), 579-586 (2003). [4...

  14. Summary of calculations of dynamic response characteristics and design stress of the 1/5 scale PSE torus

    International Nuclear Information System (INIS)

    Arthur, D.

    1977-01-01

    The Lawrence Livermore Laboratory is currently involved in a 1/5 scale testing program on the Mark I BWR pressure suppression system. A key element of the test setup is a pressure vessel that is a 90° sector of a torus. Proper performance of the 90° torus depends on its structural integrity and structural dynamic characteristics. It must sustain the internal pressurization of the planned tests, and its dynamic response to the transient test loads should be minimal. If the structural vibrations are too great, interpretation of important load cell and pressure transducer data will be difficult. The purpose of the report is to bring together under one cover calculations pertaining to the structural dynamic characteristics and structural integrity of the 90° torus. The report is divided into the following sections: (1) system description, in which the torus and associated hardware are briefly described; (2) structural dynamics, in which calculations of natural frequency and dynamic response are presented; and (3) structural integrity, in which stress calculations for design purposes are presented; and an appendix which contains an LLL internal report comparing the expected load cell response for a three- and four-point supported torus

  15. Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms

    Directory of Open Access Journals (Sweden)

    Gregorio de Miguel Casado

    2007-01-01

    This paper presents a method to improve the calculation of functions which demand a particularly great amount of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.

  16. Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    Part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data; this annex is not a part of the code.

  17. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single-detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry-standard gamma-spectrometry software package. Additional benefits, including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.

  18. Generation of input parameters for OSPM calculations. Sensitivity analysis of a method based on a questionnaire

    Energy Technology Data Exchange (ETDEWEB)

    Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)

    1997-05-01

    The method for generating the input data for calculations with OSPM is presented in this report. The described method, which is based on information provided by a questionnaire, will be used for model calculations of long-term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed on a few locations for which detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long-term averages, whereas the uncertainties are high when short-term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short- and long-term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimation of street width based on traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NOx and 30% for NO2). (3) The street orientation and geometry are important for the prediction of short-term concentrations, but this importance diminishes for longer-term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short-term averages, but are less important for long-term averages. The correlation between modelled and measured concentrations is good when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the comparison between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.

  19. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is quite unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).

  20. An independent dose calculation algorithm for MLC-based stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr

    2007-01-01

    We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC), for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static MLC field based inverse planning. The dose calculation utilizes a pencil-beam kernel. However, an explicit analytical integration results in a closed form for rectangular-shaped beamlets, defined by single leaf pairs. This approach reduces spatial integration to summation, and leads to a simple method of determining the model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which speeds up the calculation of two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS were used as input data for the algorithm. The calculated results were compared with the TPS and with measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm × 0.6 cm (<6%) and some ion chamber measurements in a high-gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments.
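
    To illustrate the closed-form idea, for a single-Gaussian pencil-beam kernel the integral over a rectangular beamlet factorises into error functions, as sketched below; the published algorithm's kernel, parameters and summation details will differ.

      import numpy as np
      from scipy.special import erf

      def rect_beamlet_dose(x, y, x1, x2, y1, y2, sigma, d0=1.0):
          """Dose at (x, y) from a uniform rectangular beamlet [x1,x2]x[y1,y2]."""
          s = sigma * np.sqrt(2.0)
          fx = 0.5 * (erf((x2 - x) / s) - erf((x1 - x) / s))
          fy = 0.5 * (erf((y2 - y) / s) - erf((y1 - y) / s))
          return d0 * fx * fy    # total field dose = sum over all leaf-pair rectangles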

  1. Dose-response regressions for algal growth and similar continuous endpoints: Calculation of effective concentrations

    DEFF Research Database (Denmark)

    Christensen, Erik R.; Kusk, Kresten Ole; Nyholm, Niels

    2009-01-01

    We derive equations for the effective concentration giving 10% inhibition (EC10), with 95% confidence limits, for probit (log-normal), Weibull, and logistic dose-response models on the basis of experimentally derived median effective concentrations (EC50s) and the curve slope at the central point (50% inhibition). For illustration, data from closed, freshwater algal assays are analyzed using the green alga Pseudokirchneriella subcapitata with growth rate as the response parameter. Dose-response regressions for four test chemicals (tetraethylammonium bromide, musculamine, benzonitrile, and 4-4-(trifluoromethyl)phenoxy-phenol) ... regression program with variance weighting and proper inverse estimation. The Weibull model provides the best fit to the data for all four chemicals. Predicted EC10s (95% confidence limits) from our derived equations are quite accurate; for example, with 4-4-(trifluoromethyl)phenoxy-phenol and the probit
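
    For the logistic model only (the article derives analogous closed forms for probit and Weibull as well), EC10 follows from EC50 and the central slope as sketched below; the numbers are placeholders.

      import math

      EC50 = 1.0        # concentration at 50% inhibition (arbitrary units)
      slope = 0.6       # dI/d(ln c) at 50% inhibition, read off the fitted curve

      b = 4.0 * slope                          # logistic steepness, since dI/d(ln c) = b/4 at EC50
      EC10 = EC50 * (1.0 / 9.0) ** (1.0 / b)   # 10% inhibition: (EC10/EC50)^b = 1/9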

  2. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though microprocessors become more powerful each day. It is usual for Real Time Operating Systems for embedded systems to have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
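
    The classic fixed-point iteration for the Worst Case Response Time under fixed-priority preemptive scheduling is sketched below as background; the paper's contribution is reducing the cost of this computation, not this textbook formulation.

      import math

      def wcrt(C, T, i):
          """WCRT of task i; tasks 0..i-1 have higher priority; deadline = period."""
          R = C[i]
          while True:
              R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
              if R_next == R:
                  return R
              if R_next > T[i]:          # unschedulable within its period
                  return None
              R = R_next

      C = [1, 2, 3]                      # worst-case execution times
      T = [4, 8, 16]                     # periods, rate-monotonic priority order
      print([wcrt(C, T, i) for i in range(3)])   # [1, 3, 7]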

  3. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    Science.gov (United States)

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution to the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS_Total = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω_U/Ω_N). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS_helix−sheet = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  4. Calculation of t8/5 by response surface methodology for electric arc welding applications

    Directory of Open Access Journals (Sweden)

    Meseguer-Valdenebro José Luis

    2014-01-01

    One of the greatest difficulties traditionally found in stainless steel construction has been the execution of welded joints. At the present time, the available technology allows us to use arc welding processes for that application without disadvantage. Response surface methodology is used to optimise a process in which the variables that take part are not related to each other by a mathematical law; therefore, an empirical model must be formulated. With this methodology, the optimisation of one selected variable may be carried out. In this work, the cooling time from 800 to 500°C, t8/5, after a TIG welding operation is modelled by the response surface method. The arc power, the welding velocity and the thermal efficiency factor are considered as the variables that influence the t8/5 value. Different cooling times, t8/5, for different combinations of values of the variables are first determined by a numerical method. The input values for the variables have been established experimentally. The results indicate that response surface methodology may be considered a valid technique for these purposes.
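
    A generic quadratic response-surface fit of the kind described above is sketched below; the design points and t8/5 values are invented, standing in for the numerically determined cooling times.

      import numpy as np
      from itertools import combinations_with_replacement

      X = np.array([[2.0, 3.0, 0.6], [2.5, 3.0, 0.6], [2.0, 4.0, 0.6],
                    [2.5, 4.0, 0.7], [3.0, 5.0, 0.7], [3.0, 3.0, 0.8],
                    [2.0, 5.0, 0.8], [2.5, 5.0, 0.8], [3.0, 4.0, 0.6],
                    [2.8, 4.5, 0.75]])                                  # kW, mm/s, efficiency
      t85 = np.array([4.1, 5.3, 3.2, 4.0, 4.6, 8.1, 2.6, 3.5, 5.9, 4.4])  # t8/5 in s (invented)

      def quad_features(v):              # 1, v_i and v_i*v_j terms of the quadratic surface
          feats = [1.0] + list(v)
          feats += [v[i] * v[j] for i, j in combinations_with_replacement(range(3), 2)]
          return np.array(feats)

      A = np.vstack([quad_features(v) for v in X])
      coef, *_ = np.linalg.lstsq(A, t85, rcond=None)       # least-squares surface fit
      t85_pred = quad_features([2.2, 4.5, 0.65]) @ coef    # predict t8/5 at a new point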

  5. Dielectric Response and Born Dynamic Charge of BN Nanotubes from Ab Initio Finite Electric Field Calculations

    Science.gov (United States)

    Guo, Guang-Yu; Ishibashi, Shoji; Tamura, Tomoyuki; Terakura, Kiyoyuki

    2007-03-01

    Since the discovery of carbon nanotubes (CNTs) in 1991 by Iijima, carbon and other nanotubes have attracted considerable interest worldwide because of their unusual properties and great potential for technological applications. Though CNTs continue to attract great interest, other nanotubes such as BN nanotubes (BN-NTs) may offer opportunities that CNTs cannot provide. In this contribution, we present the results of our recent systematic ab initio calculations of the static dielectric constant, electric polarizability, Born dynamical charge, electrostriction coefficient and piezoelectric constant of BN-NTs using the latest crystalline finite electric field theory [1]. [1] I. Souza, J. Iniguez, and D. Vanderbilt, Phys. Rev. Lett. 89, 117602 (2002); P. Umari and A. Pasquarello, Phys. Rev. Lett. 89, 157602 (2002).

  6. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    Science.gov (United States)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of the generated acoustic field is valuable for studying ultrasonic transducers, providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field subject to the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further validated experimentally with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account.

  7. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • Propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid generating large-scale eigenvalue problems and thereby reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.

  8. Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation

    Directory of Open Access Journals (Sweden)

    Shengzhi Huang

    2014-01-01

    Full Text Available It is of great significance for watershed management departments to allocate water resources reasonably and ensure the sustainable development of river ecosystems; accurately calculating the instream ecological flow is therefore a key issue. In order to compute the instream ecological flow precisely, flow variation is taken into account in this study. Moreover, the heuristic segmentation algorithm, which is suitable for detecting mutation points in flow series, is employed to identify the change points. In addition, based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow under a fitted GEV distribution, which is well suited to the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems under adverse conditions, a minimum instream ecological flow is calculated by a modified Tennant method, in which the average flow is replaced by the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
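
    The two flow statistics can be sketched as follows, assuming scipy's genextreme parameterisation and synthetic monthly flows; the 10% fraction is one conventional Tennant choice, not necessarily the paper's.

```python
# Sketch: modal (highest-frequency) monthly flow under a fitted GEV distribution,
# and a modified-Tennant minimum flow as a fraction of that mode. Synthetic data.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

flows = stats.genextreme.rvs(-0.1, loc=50.0, scale=15.0, size=360, random_state=1)

c, loc, scale = stats.genextreme.fit(flows)
res = minimize_scalar(lambda q: -stats.genextreme.pdf(q, c, loc=loc, scale=scale),
                      bounds=(flows.min(), flows.max()), method="bounded")
q_mode = res.x                   # maximum instream ecological flow (modal flow)
q_min = 0.10 * q_mode            # modified Tennant: fraction of the modal flow
print(f"modal flow {q_mode:.1f} m^3/s, minimum ecological flow {q_min:.1f} m^3/s")
```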

  9. A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Marco Gonzalez

    Full Text Available Abstract The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs. The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
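
    For orientation, one common displacement-based scheme extracts the mode-I SIF from the crack-opening displacement a small distance behind the tip; the plane-strain formula below is standard LEFM and is given for illustration, not as the paper's exact correlation.

```python
# Sketch: mode-I SIF from the crack-opening displacement delta_uy at distance r
# behind the tip, plane strain: delta_uy = 8*K_I/E' * sqrt(r/(2*pi)), E' = E/(1-nu^2).
import math

def sif_mode_I(delta_uy, r, E, nu):
    E_eff = E / (1.0 - nu**2)
    return E_eff * delta_uy / 8.0 * math.sqrt(2.0 * math.pi / r)

# Illustrative numbers: a BEM-computed COD of 2.0e-6 m at r = 1 mm in steel.
K_I = sif_mode_I(delta_uy=2.0e-6, r=1.0e-3, E=210e9, nu=0.3)
print(f"K_I = {K_I / 1e6:.2f} MPa*sqrt(m)")
```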

  10. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.
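
    A toy illustration of the summing-out effect such factors correct for, reduced to a single two-gamma cascade; the actual algorithm assembles terms like this from ENSDF data over all γ–γ, γ–X, γ–511 and γ–e⁻ coincidences.

```python
# Sketch: summing-out correction for the full-energy peak of a gamma in prompt
# coincidence with one partner gamma. Simplified two-gamma illustration only.
def summing_out_factor(p_coinc, eff_total_partner):
    """p_coinc: probability the partner is emitted in coincidence;
    eff_total_partner: total detection efficiency at the partner's energy.
    Returns the factor by which the measured peak area must be multiplied."""
    return 1.0 / (1.0 - p_coinc * eff_total_partner)

# Example: a 60Co-like cascade with the partner seen at 25% total efficiency.
print(summing_out_factor(1.0, 0.25))   # -> 1.33..., the peak is suppressed by 25%
```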

  11. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
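
    A minimal sketch of the trapezoidal double rule on a Gauss synthetic surface, for which the analytic volume is known, mirrors the paper's strategy of isolating truncation error; the grid size and surface parameters are illustrative.

```python
# Sketch: volume of a regular-grid DEM by the trapezoidal double rule (TDR),
# checked against the analytic volume of a Gauss synthetic surface.
import math
import numpy as np
from scipy.integrate import trapezoid

x = np.linspace(-3.0, 3.0, 101)
y = np.linspace(-3.0, 3.0, 101)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))                       # Gauss synthetic surface

v_tdr = trapezoid(trapezoid(Z, x, axis=1), y)    # composite trapezoid on both axes
v_true = math.pi * math.erf(3.0) ** 2            # analytic volume over [-3, 3]^2
print(f"TDR volume {v_tdr:.6f}, analytic {v_true:.6f}, TE {v_true - v_tdr:+.2e}")
```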

  12. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error

  13. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children …

  14. Software design to calculate and simulate the mechanical response of electromechanical lifts

    Science.gov (United States)

    Herrera, I.; Romero, E.

    2016-05-01

    Lift engineers and lift companies involved in the design of new products or in the research and development of improved components demand a tool for predicting the response of the slender lift system before testing expensive prototypes. A method for solving the movement of any specified lift system by means of a computer program is presented. The mechanical response of the lift operating in a user-defined installation and configuration is derived for a given excitation and for the configuration parameters of real electric motors and their control systems. A mechanical model with 6 degrees of freedom is used. The governing equations are integrated step by step with a Runge-Kutta algorithm on the MATLAB platform. Input data consist of the set-point speed for a standard trip and the control parameters of a number of controllers and lift drive machines. The computer program computes and plots very accurately the vertical displacement, velocity, instantaneous acceleration and jerk time histories of the car, counterweight, frame, passengers/loads and lift drive in a standard trip between any two floors of the desired installation. The resulting torque, rope tension and deviation of the velocity plot with respect to the set-point speed are shown. The software design is implemented in a demo release of the computer program called ElevaCAD. Furthermore, the program offers the possibility of selecting the configuration of the lift system and the performance parameters of each component. In addition to the overall system response, detailed information on transients, vibrations of the lift components, ride quality levels, modal analysis and the frequency spectrum (FFT) is plotted.
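
    A much-reduced sketch of this kind of simulation (a single suspended mass instead of the 6-DOF model, illustrative parameter values, and scipy's Runge-Kutta integrator in place of the MATLAB implementation):

```python
# Sketch: car of mass m on a rope of stiffness k and damping c whose upper end
# follows the drive's set-point motion. Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 800.0, 2.0e5, 2.0e3               # kg, N/m, N*s/m

def setpoint(t, v_max=1.0, t_ramp=1.5, t_trip=8.0):
    """Trapezoidal speed profile: ramp up, cruise, ramp down."""
    if t < t_ramp:
        return v_max * t / t_ramp
    if t < t_trip - t_ramp:
        return v_max
    if t < t_trip:
        return v_max * (t_trip - t) / t_ramp
    return 0.0

def rhs(t, y):
    x, v, x_d = y                            # car position/velocity, drive-end position
    v_d = setpoint(t)
    a = (k * (x_d - x) + c * (v_d - v)) / m  # static gravity stretch omitted
    return [v, a, v_d]

sol = solve_ivp(rhs, (0.0, 9.0), [0.0, 0.0, 0.0], max_step=0.01)  # RK45 by default
jerk = np.gradient(np.gradient(sol.y[1], sol.t), sol.t)           # numerical jerk
print(f"car travel: {sol.y[0, -1]:.3f} m, peak jerk: {np.abs(jerk).max():.2f} m/s^3")
```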

  15. CT-based dose calculations and in vivo dosimetry for lung cancer treatment

    International Nuclear Information System (INIS)

    Essers, M.; Lanson, J.H.; Leunens, G.; Schnabel, T.; Mijnheer, B.J.

    1995-01-01

    Reliable CT-based dose calculations and dosimetric quality control are essential for the introduction of new conformal techniques for the treatment of lung cancer. The first aim of this study was therefore to check the accuracy of dose calculations based on CT-densities, using a simple inhomogeneity correction model, for lung cancer patients irradiated with an AP-PA treatment technique. Second, the use of diodes for absolute exit dose measurements and an Electronic Portal Imaging Device (EPID) for relative transmission dose verification was investigated for 22 and 12 patients, respectively. The measured dose values were compared with calculations performed using our 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD). Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. An uncertainty in exit dose calculation of 2.5% (1 SD) could be explained by organ motion, resulting from the ventilatory or cardiac cycle. The most important reason for the large overall spread was, however, the uncertainty involved in performing point measurements: about 4% (1 SD). This difference resulted from the systematic and random deviation in patient set-up and therefore in diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good agreement with calculated exit dose profiles. Our study shows that, for this treatment technique, the dose in the thorax region is quite accurately predicted using CT-based dose calculations, even if a simple inhomogeneity correction model is used. Point detectors such as diodes are not suitable for exit

  16. A NRESPG Monte Carlo code for the calculation of neutron response functions for gas counters

    Energy Technology Data Exchange (ETDEWEB)

    Kudo, K; Takeda, N; Fukuda, A [Electrotechnical Lab., Tsukuba, Ibaraki (Japan); Torii, T; Hashimoto, M; Sugita, T; Yang, X; Dietze, G

    1996-07-01

    In this paper, we outline the NRESPG code and show some typical results for the response functions and efficiencies of several kinds of gas counters. The cross section data for the various filling gases and for the stainless steel or aluminum wall materials are taken mainly from ENDF/B-IV. ENDF/B-V data for stainless steel are also used, to investigate the influence of differences between nuclear data files on the pulse height spectra of the gas counters. (J.P.N.)

  17. Calculation of passive earth pressure of cohesive soil based on Culmann's method

    Directory of Open Access Journals (Sweden)

    Hai-feng Lu

    2011-03-01

    Full Text Available Based on the sliding plane hypothesis of Coulomb earth pressure theory, a new method for calculating the passive earth pressure of cohesive soil was constructed with Culmann's graphical construction. The influences of the cohesive force, the adhesive force, and the form of the fill surface were considered in this method. In order to obtain the passive earth pressure and the sliding plane angle, a program based on the sliding surface assumption was developed in the VB.NET programming language. The results calculated with this method are basically the same as those from the Rankine theory and Coulomb theory formulas. This method is conceptually clear, and the corresponding formulas given in this paper are simple and convenient for application when the form of the fill surface is complex.
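
    The numerical core of a Culmann-style search is a scan over trial slip planes; the sketch below shows that scan for the classical cohesionless simplification (vertical smooth wall, horizontal fill), whereas the paper's program additionally handles cohesion, adhesion and complex fill surfaces.

```python
# Sketch: passive earth pressure as the minimum wedge resistance over trial
# slip-plane angles (cohesionless Coulomb wedge, vertical smooth wall).
import numpy as np

gamma, H, phi = 18.0, 5.0, np.radians(30.0)      # kN/m^3, m, friction angle

theta = np.radians(np.linspace(5.0, 55.0, 1001)) # trial slip-plane angles
W = 0.5 * gamma * H**2 / np.tan(theta)           # wedge weight per unit length
P = W * np.tan(theta + phi)                      # resistance of each trial wedge

i = np.argmin(P)                                 # passive force = minimum over wedges
Kp = P[i] / (0.5 * gamma * H**2)
print(f"critical plane {np.degrees(theta[i]):.1f} deg, Kp = {Kp:.3f}")  # ~30 deg, ~3.0
```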

  18. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    Science.gov (United States)

    Ertl, P

    1998-02-01

    Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.

  19. The calculation of surface free energy based on embedded atom method for solid nickel

    International Nuclear Information System (INIS)

    Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng

    2013-01-01

    Highlights: ► A new solution for the accurate prediction of surface free energy based on the embedded atom method is proposed. ► The temperature-dependent anisotropic surface energy of solid nickel is obtained. ► In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded-atom-method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature-dependent anisotropic surface energy of bulk nickel and the surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data.

  20. Application of shielding calculation of high-energy linear accelerators based on the NCRP-151 protocol

    International Nuclear Information System (INIS)

    Torres Pozas, S.; Monja Rey, P. de la; Sanchez Carrasca, M.; Yanez Lopez, D.; Macias Verde, D.; Martin Oliva, R.

    2011-01-01

    In recent years, progress in cancer treatment with ionizing radiation has made it possible to deliver higher doses to smaller and better-shaped volumes, making it necessary to take new aspects into account in the calculation of structural barriers. Furthermore, given that forecasts suggest that a large number of accelerators will be installed, or existing ones modified, in the near future, we believe a tool for estimating the thickness of the structural barriers of treatment rooms is useful. The shielding calculation methods are based on the DIN 6847-2 standard and the recommendations given in NCRP Report 151. In our experience we have found only estimates originating from the DIN standard. We therefore considered it interesting to develop an application that incorporates the formulation suggested by the NCRP; together with previous work based on the DIN rules, this allows us to establish a comparison between the results of both methods. (Author)
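
    As an illustration of the NCRP 151 formulation the application incorporates, a primary-barrier thickness can be estimated as sketched below; the workload, use and occupancy factors and the concrete TVL values are illustrative examples and should be taken from the report for a real design.

```python
# Sketch: primary-barrier thickness per NCRP Report No. 151. The required
# transmission is B = P*d^2/(W*U*T); the thickness follows from the TVLs.
import math

def primary_barrier_thickness(P, W, U, T, d, tvl1, tvle):
    """P: shielding design goal [Sv/week]; W: workload [Gy/week at 1 m];
    U: use factor; T: occupancy factor; d: target-to-point distance [m];
    tvl1/tvle: first and equilibrium tenth-value layers [m of material]."""
    B = P * d**2 / (W * U * T)        # required barrier transmission
    n = math.log10(1.0 / B)           # number of tenth-value layers
    return tvl1 + (n - 1.0) * tvle

# Illustrative 18 MV example with concrete TVLs of roughly 0.45/0.43 m.
t = primary_barrier_thickness(P=1e-4, W=500.0, U=0.25, T=1.0, d=6.0,
                              tvl1=0.45, tvle=0.43)
print(f"required concrete thickness: {t:.2f} m")
```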

  1. Correction of the calculation of beam loading based in the RF power diffusion equation

    International Nuclear Information System (INIS)

    Silva, R. da.

    1980-01-01

    An empirical correction, based upon experimental data of other authors at the ORELA, GELINA and SLAC accelerators, is described for the calculation of the energy loss due to the beam loading effect as stated by the RF power diffusion equation theory for an accelerating structure. A dependence of this correction on the electron pulse full width at half maximum is obtained, independent of the electron energy. (author) [pt

  2. A Cultural Study of a Science Classroom and Graphing Calculator-based Technology

    OpenAIRE

    Casey, Dennis Alan

    2001-01-01

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology...

  3. Research on trust calculation of wireless sensor networks based on time segmentation

    Science.gov (United States)

    Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin

    2017-05-01

    Because wireless sensor networks differ from traditional networks in their characteristics, they are vulnerable to intrusion from compromised nodes. A trust mechanism is the most effective way to defend against internal attacks. Aiming at the shortcomings of existing trust mechanisms, a method of calculating trust in wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the network's lifetime.

  4. Monte carlo calculation of energy-dependent response of high-sensitive neutron monitor, HISENS

    International Nuclear Information System (INIS)

    Imanaka, Tetsuji; Ebisawa, Tohru; Kobayashi, Keiji; Koide, Hiroaki; Seo, Takeshi; Kawano, Shinji

    1988-01-01

    A highly sensitive neutron monitor system, HISENS, has been developed to measure leakage neutrons from nuclear facilities. The counter system of HISENS contains a detector bank which consists of ten cylindrical proportional counters filled with 10 atm ³He gas and a paraffin moderator mounted in an aluminum case. The size of the detector bank is 56 cm high, 66 cm wide and 10 cm thick. A calibration experiment using an ²⁴¹Am-Be neutron source revealed that the sensitivity of HISENS is about 2000 times that of a typical commercial rem-counter. Since HISENS is designed to have a high sensitivity over a wide range of neutron energies, the shape of its energy-dependent response curve cannot be matched to that of the dose equivalent conversion factor. To estimate dose equivalent values from neutron counts by HISENS, it is necessary to know the energy and angular characteristics of both HISENS and the neutron field. The area of one side of the detector bank is 3700 cm² and the detection efficiency in the constant region of the response curve is about 30%. Thus, the sensitivity of HISENS for this energy range is 740 cps/(n/cm²/s). This value indicates the extremely high sensitivity of HISENS compared with existing highly sensitive neutron monitors. (Nogami, K.)

  5. Calculating the knowledge-based similarity of functional groups using crystallographic data

    Science.gov (United States)

    Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.

    2001-09-01

    A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the `central group') and functional group B (the `contact group' or `probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated showing the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.

  6. Calculation of generalized Lorenz-Mie theory based on the localized beam models

    International Nuclear Information System (INIS)

    Jia, Xiaowei; Shen, Jianqi; Yu, Haitao

    2017-01-01

    It has been proved that the localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of the relevant physical quantities is a challenge for practical applications due to the limits of computer resources. This study presents an improved algorithm for the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them with pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust. - Highlights: • In this work, we introduce the proper pre-factors to the Bessel functions, BSCs and the angular functions. With this improvement, all the quantities involved in the numerical calculation are scaled into a reasonable range of values so that the algorithm can be used for computing the physical quantities of the GLMT. • The algorithm is not only an improvement in numerical technique; it also implies that the set of basic functions involved in electromagnetic scattering (and sonic scattering) can be reasonably chosen. • The algorithms of the GLMT computations introduced in previous references suggested that the order of the n and m sums be interchanged. In this work, the sum over azimuth modes is performed for each partial wave. This offers the possibility to speed up the computation, since the sum over partial waves can be optimized according to the illumination conditions and the sum over azimuth modes can be truncated by selecting a criterion discussed in . • Numerical results show that the algorithm is efficient, reliable and robust, even in very exotic cases. The algorithm presented in this paper is based on the original localized approximation and it can also be used for the

  7. Predicting response to incretin-based therapy

    Directory of Open Access Journals (Sweden)

    Agrawal N

    2011-04-01

    Full Text Available Sanjay Kalra (1), Bharti Kalra (2), Rakesh Sahay (3), Navneet Agrawal (4); 1 Department of Endocrinology and 2 Department of Diabetology, Bharti Hospital, Karnal, India; 3 Department of Endocrinology, Osmania Medical College, Hyderabad, India; 4 Department of Medicine, GR Medical College, Gwalior, India. Abstract: There are two important incretin hormones, glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). The biological activities of GLP-1 include stimulation of glucose-dependent insulin secretion and insulin biosynthesis, inhibition of glucagon secretion and gastric emptying, and inhibition of food intake. GLP-1 appears to have a number of additional effects in the gastrointestinal tract and central nervous system. Incretin-based therapy includes GLP-1 receptor agonists such as human GLP-1 analogs (liraglutide) and exendin-4 based molecules (exenatide), as well as DPP-4 inhibitors such as sitagliptin, vildagliptin and saxagliptin. Most of the published studies showed a significant reduction in HbA1c using these drugs. A critical analysis of the reported data shows that the response rate of these drugs, in terms of patients achieving targets, is average. One of the first actions identified for GLP-1 was the glucose-dependent stimulation of insulin secretion from islet cell lines. Following the detection of GLP-1 receptors on islet β cells, a large body of evidence has accumulated illustrating that GLP-1 exerts multiple actions on various signaling pathways and gene products in the β cell. GLP-1 controls glucose homeostasis through well-defined actions on the islet β cell via stimulation of insulin secretion and preservation and expansion of β cell mass. In summary, there are several factors determining the response rate to incretin therapy, and currently minimal clinical data are available from which to draw conclusions. Key factors appear to be duration of diabetes, obesity, presence of autonomic neuropathy, resting energy expenditure, plasma glucagon levels and

  8. Application of CFD dispersion calculation in risk based inspection for release of H2S

    International Nuclear Information System (INIS)

    Sharma, Pavan K.; Vinod, Gopika; Singh, R.K.; Rao, V.V.S.S.; Vaze, K.K.

    2011-01-01

    In atmospheric dispersion, both deterministic and probabilistic approaches have been used to address design and regulatory concerns. In the context of deterministic calculations, the dispersion of pollutants in the atmosphere is an important area in which different approaches are followed in the development of good analytical models. Analyses based on Computational Fluid Dynamics (CFD) codes offer the opportunity of model development from first principles of physics, and hence such models have an edge over existing models. In the context of probabilistic methods, approaches applying risk-based inspection (wherein the consequence of failure of each component needs to be assessed) are becoming popular. Consequence evaluation in a process plant is a crucial task: the number of components considered for life management is often very large, and consequence evaluation for all of them proves laborious. The present paper is the result of joint collaborative work by deterministic and probabilistic modelling groups working in the field of atmospheric dispersion. Even though API 581 has a simplified qualitative approach, regulators find some of its factors, in particular the quantity factor, unsuitable for process plants. Dispersion calculations for heavy gases are often done with very simple models that cannot account for density-driven atmospheric dispersion. This necessitates a new approach, for which a CFD-based technical basis is proposed, so that the range of quantities considered, along with the factors used, can be justified. The present paper is aimed at bringing out some of the distinct merits and demerits of CFD-based models. A brief account of the applications of such CFD codes reported in the literature is also presented in the paper. This paper describes the approach devised and demonstrated for the said issue, with emphasis on the CFD calculations. (author)

  9. Calculation of effect of burnup history on spent fuel reactivity based on CASMO5

    International Nuclear Information System (INIS)

    Li Xiaobo; Xia Zhaodong; Zhu Qingfu

    2015-01-01

    Based on the burnup credit for actinides + fission products (APU-2) that is usually considered for spent fuel packages, the effect of power density and operating history on k_∞ was studied. All burnup calculations are based on the two-dimensional fuel assembly burnup program CASMO5. The results show that taking the core average power density at the specified power plus a bounding margin of 0.0023 on k_∞, and taking the operating history at the specified power without shutdowns during or between cycles plus a bounding margin of 0.0045 on k_∞, meets the bounding principle of burnup credit. (authors)

  10. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

    In this paper we present the results of computer simulations of the band structure of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to be run on computing clusters. Application of this method significantly reduces the demands on computing resources compared to traditional approaches based on ab-initio techniques and provides adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  11. Substituent effect on redox potential of nitrido technetium complexes with Schiff base ligand. Theoretical calculations

    International Nuclear Information System (INIS)

    Takayama, T.; Sekine, T.; Kudo, H.

    2003-01-01

    Theoretical calculations based on the density functional theory (DFT) were performed to understand the effect of substituents on the molecular and electronic structures of technetium nitrido complexes with salen type Schiff base ligands. Optimized structures of these complexes are square pyramidal. The electron density on a Tc atom of the complex with electron withdrawing substituents is lower than that of the complex with electron donating substituents. The HOMO energy is lower in the complex with electron withdrawing substituents than that in the complex with electron donating substituents. The charge on Tc atoms is a good measure that reflects the redox potential of [TcN(L)] complex. (author)

  12. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

    Full Text Available This paper presents the implementation by students of a complex calculator in hardware. This project meets hardware design goals and also highly motivates them to use competences learned in other subjects. The learning process associated with system design is hard enough because the students have to deal with parallel execution, signal delay, synchronization … Therefore, to strengthen the knowledge of hardware design, a project-based learning (PBL) methodology is proposed. Moreover, it is also used to reinforce cross subjects such as math and software programming. This methodology creates a course dynamic that is closer to a professional environment, where students work with software and mathematics to solve hardware design problems. The students design the functionality of the calculator from scratch. They decide which mathematical operations it is able to perform, as well as the operand format and how to enter a complex equation into the calculator. This increases the students' intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to implement in software the decisions about how the selected mathematical algorithm is realised. Although math and hardware design are two tough subjects for students, the perception they have at the end of the course is quite positive.

  13. Wave resistance calculation method combining Green functions based on Rankine and Kelvin source

    Directory of Open Access Journals (Sweden)

    LI Jingyu

    2017-12-01

    Full Text Available [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However, this model of wave-making resistance is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance, using the Rankine source Green function to solve for the hull surface's source density and combining it with the Lagally theorem for the source point force calculation based on the Kelvin source Green function, so as to solve for the wave resistance. A case for the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to a method that relies entirely on the Kelvin source Green function, this method has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.

  14. Calculating acid-base and oxygenation status during COPD exacerbation using mathematically arterialised venous blood

    DEFF Research Database (Denmark)

    Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F

    2012-01-01

    Abstract Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates this method for monitoring acid-base and oxygenation status during admission for exacerbation of chronic obstructive pulmonary disease (COPD). Methods: Simultaneous arterial and peripheral venous blood samples were analysed. Venous values were used to calculate arterial pH, PCO2 and PO2, and these were compared to measured values using Bland-Altman analysis and scatter plots. Calculated values of pH, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules …

  15. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts

  16. Absorbed doses behind bones with MR image-based dose calculations for radiotherapy treatment planning.

    Science.gov (United States)

    Korhonen, Juha; Kapanen, Mika; Keyrilainen, Jani; Seppala, Tiina; Tuomikoski, Laura; Tenhunen, Mikko

    2013-01-01

    Magnetic resonance (MR) images are used increasingly in external radiotherapy target delineation because of their superior soft tissue contrast compared to computed tomography (CT) images. Nevertheless, radiotherapy treatment planning has traditionally been based on the use of CT images, due to restrictive features of MR images such as the lack of electron density information. This research aimed to measure absorbed radiation doses in material behind different bone parts, and to evaluate dose calculation errors in two pseudo-CT images; first, by assuming a single electron density value for the bones, and second, by converting the electron density values inside bones from T1/T2*-weighted MR image intensity values. A dedicated phantom was constructed using fresh deer bones and gelatine. The effect of different bone parts on the absorbed dose behind them was investigated with a single open field at 6 and 15 MV, measuring clinically detectable dose deviations with an ionization chamber matrix. Dose calculation deviations in a conversion-based pseudo-CT image and in a bulk density pseudo-CT image, where the relative electron density to water for the bones was set to 1.3, were quantified by comparing the calculation results with those obtained in a standard CT image by superposition and Monte Carlo algorithms. The calculations revealed that the applied bulk density pseudo-CT image causes deviations of up to 2.7% (6 MV) and 2.0% (15 MV) in the dose behind the examined bones. The corresponding values in the conversion-based pseudo-CT image were 1.3% (6 MV) and 1.0% (15 MV). The examinations illustrated that representing the heterogeneous femoral bone (cortex denser compared to core) by a bulk density for the whole bone causes dose deviations of up to 2% both behind the bone edge and behind the middle part of the bone (diameter bones). This study indicates that the decrease in absorbed dose is not dependent on the bone diameter with all types of bones. Thus

  17. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three glycine subunits. We selected glycine labelled with two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% ¹³C₂-glycine. Second, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
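
    The regression idea can be sketched compactly, assuming a simplified isotopomer basis (natural abundance collapsed to a single peak) and synthetic data; the published procedure operates on full isotopomer distributions in a spreadsheet.

```python
# Sketch: simultaneous estimation of precursor enrichment p and fractional
# synthesis by linear regression. A peptide with n = 3 glycines gains 2 Da per
# incorporated 13C2-glycine; natural abundance is collapsed to one peak here.
import numpy as np
from scipy.stats import binom

def basis(p, n=3):
    old = np.zeros(2 * n + 1); old[0] = 1.0     # pre-existing (unlabelled) peptide
    new = np.zeros(2 * n + 1)
    for k in range(n + 1):
        new[2 * k] = binom.pmf(k, n, p)         # k labelled glycines -> +2k Da
    return np.column_stack([old, new])

measured = basis(0.20) @ np.array([0.6, 0.4])   # synthetic observation, f_new = 0.4

def residual(p):
    return np.linalg.lstsq(basis(p), measured, rcond=None)[1].sum()

p_hat = min(np.linspace(0.01, 0.50, 50), key=residual)   # grid scan over p
f_old, f_new = np.linalg.lstsq(basis(p_hat), measured, rcond=None)[0]
print(f"precursor enrichment = {p_hat:.2f}, fractional synthesis = {f_new:.2f}")
```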

  18. Continuous energy Monte Carlo method based homogenization multi-group constants calculation

    International Nuclear Information System (INIS)

    Li Mancang; Wang Kan; Yao Dong

    2012-01-01

    The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to the traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, giving the Monte Carlo approach to homogenization its versatility. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques; the Scattering Event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B_N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At core level, a PWR prototype core was examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)

  19. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    Science.gov (United States)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  20. Prediction of fission mass-yield distributions based on cross section calculations

    International Nuclear Information System (INIS)

    Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.

    2005-01-01

    For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode, are taken into account. De-convoluted fission cross sections for the S1, S2 and SL modes for ²³⁵,²³⁸U(n,f) and ²³⁷Np(n,f), based on experimental branching ratios, were calculated for the first time in the incident neutron energy range from 0.01 to 5.5 MeV, providing good agreement with the experimental fission cross section data. The branching ratios obtained from the modal fission cross section calculations have been used to deduce the corresponding fission yield distributions, including mean values, also for incident neutron energies hitherto not accessible to experiment

  1. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    International Nuclear Information System (INIS)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron

    2016-01-01

    Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  2. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron [Cross Cancer Institute, Edmonton, AB, and University of Alberta, Edmonton, AB (Canada)]

    2016-08-15

    Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  3. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT

    International Nuclear Information System (INIS)

    Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K

    2014-01-01

    Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) images using histograms of the pixel values in the simulation CT (sim-CT) and CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images acquired immediately before treatment of 10 prostate cancer patients were used. Because of insufficient calibration of the pixel values in the CBCT, the images are difficult to use directly for dose calculation. The pixel values in the CBCT images were therefore converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: After conversion of the pixel values in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. For the calculated doses, the mean differences of the prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively, and as a whole, the difference in prescription dose was 1.54±0.4%. Conclusion: Dose calculation on the CBCT images achieves an accuracy of <2% when this pixel-value conversion program is used. This may enable implementation of efficient adaptive radiotherapy
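
    The in-house conversion program is not described in detail; one plausible histogram-based mapping is quantile matching of the CBCT values to the sim-CT histogram, sketched below on synthetic images.

```python
# Sketch: histogram (quantile) matching of CBCT pixel values to the sim-CT
# distribution. The images below are random stand-ins for real HU data.
import numpy as np

def match_histogram(cbct, sim_ct):
    """Map CBCT values so their cumulative histogram matches the sim-CT's."""
    src_vals, src_counts = np.unique(cbct.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(sim_ct.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / cbct.size
    ref_cdf = np.cumsum(ref_counts) / sim_ct.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)   # quantile -> sim-CT value
    return np.interp(cbct, src_vals, mapped)         # per-pixel lookup

rng = np.random.default_rng(0)
sim = rng.normal(40.0, 120.0, (64, 64))              # stand-in for sim-CT values
cbct = 0.8 * sim + rng.normal(60.0, 20.0, sim.shape) # miscalibrated CBCT stand-in
print(match_histogram(cbct, sim).mean(), sim.mean()) # means agree after mapping
```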

  4. A massively-parallel electronic-structure calculations based on real-space density functional theory

    International Nuclear Information System (INIS)

    Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro

    2010-01-01

    Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N³) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.

  5. A theoretical study of blue phosphorene nanoribbons based on first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Jiafeng; Si, M. S., E-mail: sims@lzu.edu.cn; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China)

    2014-08-21

    Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both the two types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon's width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters to describe quantum confinement are obtained by fitting the calculated band gaps with respect to their widths. The results show that the quantum confinement in armchair nanoribbons is stronger than that in zigzag ones. This study provides an efficient approach to tune the band gap in BPNRs.

  6. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for phase-only holograms encoded from a complex distribution. Both simulation and optical experiment results demonstrate that our proposed method gives higher quality reconstructions than the traditional method. (special topic)
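
    A minimal sketch of the GS iteration used to encode a phase-only hologram from one 2D view, with an FFT standing in for the actual propagation operator and a synthetic target image:

```python
# Sketch: Gerchberg-Saxton loop alternating between the hologram plane (unit
# amplitude, free phase) and the image plane (target amplitude, free phase).
import numpy as np

def gs_phase_hologram(target_amp, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)  # random start phase
    for _ in range(iters):
        field = np.fft.ifft2(target_amp * np.exp(1j * phase))
        holo_phase = np.angle(field)                 # keep phase only at the SLM
        image = np.fft.fft2(np.exp(1j * holo_phase))
        phase = np.angle(image)                      # re-impose target amplitude
    return holo_phase

target = np.zeros((128, 128)); target[40:90, 50:80] = 1.0   # toy perspective view
holo = gs_phase_hologram(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))
print(np.corrcoef(recon.ravel(), target.ravel())[0, 1])     # reconstruction quality
```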

  7. Poker-camp: a program for calculating detector responses and phantom organ doses in environmental gamma fields

    International Nuclear Information System (INIS)

    Koblinger, L.

    1981-09-01

    A general description, user's manual and a sample problem are given in this report for the POKER-CAMP adjoint Monte Carlo photon transport program. The code simulates the gamma fields of different environmental sources: uniformly or exponentially distributed sources, or plane sources, located in the air, in the soil, or in an intermediate layer placed between them. Calculations can be made of the flux, kerma and spectra of photons at any point, of the responses of point-like, cylindrical or spherical detectors, and of the doses absorbed in anthropomorphic phantoms. (author)

  8. Technical Work Plan For: Calculation of Waste Package and Drip Shield Response to Vibratory Ground Motion and Revision of the Seismic Consequence Abstraction

    International Nuclear Information System (INIS)

    M. Gross

    2006-01-01

    The overall objective of the work scope covered by this technical work plan (TWP) is to develop new damage abstractions for the seismic scenario class in total system performance assessment (TSPA). The new abstractions will be based on a new set of waste package and drip shield damage calculations in response to vibratory ground motion and fault displacement. The new damage calculations, which are collectively referred to as damage models in this TWP, are required to represent recent changes in waste form packaging and in the regulatory time frame. The new damage models also respond to comments from the Independent Validation Review Team (IVRT) post-validation review of the draft TSPA model regarding performance of the drip shield and to an Additional Information Need (AIN) from the U.S. Nuclear Regulatory Commission (NRC).

  9. Technical Work Plan For: Calculation of Waste Package and Drip Shield Response to Vibratory Ground Motion and Revision of the Seismic Consequence Abstraction

    Energy Technology Data Exchange (ETDEWEB)

    M. Gross

    2006-12-08

    The overall objective of the work scope covered by this technical work plan (TWP) is to develop new damage abstractions for the seismic scenario class in total system performance assessment (TSPA). The new abstractions will be based on a new set of waste package and drip shield damage calculations in response to vibratory ground motion and fault displacement. The new damage calculations, which are collectively referred to as damage models in this TWP, are required to represent recent changes in waste form packaging and in the regulatory time frame. The new damage models also respond to comments from the Independent Validation Review Team (IVRT) post-validation review of the draft TSPA model regarding performance of the drip shield and to an Additional Information Need (AIN) from the U.S. Nuclear Regulatory Commission (NRC).

  10. Theoretical calculation on CR-39 response for radon measurements and optimum diffusion chambers dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Askari, H.R.; Ghandi, Kh. [Department of Physics, Faculty of Science, Vali-e-Asr University, Rafsanjan 7713936417 (Iran, Islamic Republic of); Rahimi, M. [Department of Physics, Faculty of Science, Vali-e-Asr University, Rafsanjan 7713936417 (Iran, Islamic Republic of)], E-mail: rahimi_bam@yahoo.com; Negarestani, A. [International Center for Science and High Technology and Environmental Sciences, Kerman (Iran, Islamic Republic of)

    2008-11-11

    One method of measuring radon gas concentration in air over long exposure times is the chemical track-etching technique. The number of tracks on solid-state nuclear track detectors (SSNTDs) is directly proportional to the radon activity concentration. In this paper, the calibration constant for a cylindrical chamber with a CR-39 detector is derived analytically. Using this result, track-density curves as a function of concentration have been drawn for chambers with different heights and radii. The results show that, to measure radon gas concentration, the optimum chamber should have a height between 3.5 and 4 cm and a radius between 2.5 and 3.2 cm.

  11. Theoretical calculation on CR-39 response for radon measurements and optimum diffusion chambers dimensions

    International Nuclear Information System (INIS)

    Askari, H.R.; Ghandi, Kh.; Rahimi, M.; Negarestani, A.

    2008-01-01

    One method of measuring radon gas concentration in air over long exposure times is the chemical track-etching technique. The number of tracks on solid-state nuclear track detectors (SSNTDs) is directly proportional to the radon activity concentration. In this paper, the calibration constant for a cylindrical chamber with a CR-39 detector is derived analytically. Using this result, track-density curves as a function of concentration have been drawn for chambers with different heights and radii. The results show that, to measure radon gas concentration, the optimum chamber should have a height between 3.5 and 4 cm and a radius between 2.5 and 3.2 cm.

  12. Calculation of HPGe Detector Response for NRF Photons Scattered from Threat Materials

    International Nuclear Information System (INIS)

    Park, B. G.; Choi, H. D.

    2009-01-01

    Nuclear Resonance Fluorescence (NRF) is a process of resonant nuclear absorption of photons, followed by de-excitation with the emission of fluorescence photons. The peak cross section of the NRF process is given by σ_i^max = 2π(λ/2π)² · (2J+1)/(2J_0+1) · Γ_0Γ_i/Γ_tot², where λ is the wavelength of the photon, J_0 and J are the nuclear spins of the ground state and the excited state, respectively, and Γ_0, Γ_i and Γ_tot are the decay width for de-excitation to the ground state, the decay width to the i-th mode state, and the total decay width, respectively. NRF-based security inspection techniques use the signature resonance energies of the fluorescence photons scattered from nuclides of illicit materials in a cargo container. NRF can be used to identify the material type, quantity and location. The inspection is performed by measuring the fluorescence-photon and transmitted-photon spectra while irradiating the sample with a bremsstrahlung photon beam.
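
    As a numerical illustration of this expression, the peak cross section can be evaluated directly; the level parameters below are made up for demonstration only:

        import numpy as np

        HBARC_MEV_FM = 197.327  # hbar*c in MeV*fm

        def nrf_peak_cross_section(e_level_mev, j0, j, gamma0, gamma_i, gamma_tot):
            """Peak NRF cross section in barns, per the expression above."""
            lam_bar = HBARC_MEV_FM / e_level_mev            # reduced wavelength lambda/2pi, fm
            spin_factor = (2 * j + 1) / (2 * j0 + 1)
            sigma_fm2 = 2 * np.pi * lam_bar**2 * spin_factor * gamma0 * gamma_i / gamma_tot**2
            return sigma_fm2 / 100.0                        # 1 barn = 100 fm^2

        # Hypothetical 2.2 MeV level, J0 = 0, J = 1, decaying only to the ground state
        print(nrf_peak_cross_section(2.2, 0, 1, 1.0, 1.0, 1.0), "b")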

  13. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    Science.gov (United States)

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  14. Digestibility Is Similar between Commercial Diets That Provide Ingredients with Different Perceived Glycemic Responses and the Inaccuracy of Using the Modified Atwater Calculation to Calculate Metabolizable Energy

    Science.gov (United States)

    Asaro, Natalie J.; Guevara, Marcial A.; Berendt, Kimberley; Zijlstra, Ruurd; Shoveller, Anna K.

    2017-01-01

    Dietary starch is required for a dry, extruded kibble; the most common diet type for domesticated felines in North America. However, the amount and source of dietary starch may affect digestibility and metabolism of other macronutrients. The objectives of this study were to evaluate the effects of 3 commercial cat diets on in vivo and in vitro energy and macronutrient digestibility, and to analyze the accuracy of the modified Atwater equation. Dietary treatments differed in their perceived glycemic response (PGR) based on ingredient composition and carbohydrate content (34.1, 29.5, and 23.6% nitrogen-free extract for High, Medium, and LowPGR, respectively). A replicated 3 × 3 Latin square design was used, with 3 diets and 3 periods. In vivo apparent protein, fat, and organic matter digestibility differed among diets, while apparent dry matter digestibility did not. Cats were able to efficiently digest and absorb macronutrients from all diets. Furthermore, the modified Atwater equation underestimated measured metabolizable energy by approximately 12%. Thus, the modified Atwater equation does not accurately determine the metabolizable energy of high quality feline diets. Further research should focus on understanding carbohydrate metabolism in cats, and establishing an equation that accurately predicts the metabolizable energy of feline diets. PMID:29117110

  15. Digestibility Is Similar between Commercial Diets That Provide Ingredients with Different Perceived Glycemic Responses and the Inaccuracy of Using the Modified Atwater Calculation to Calculate Metabolizable Energy

    Directory of Open Access Journals (Sweden)

    Natalie J. Asaro

    2017-11-01

    Full Text Available Dietary starch is required for a dry, extruded kibble; the most common diet type for domesticated felines in North America. However, the amount and source of dietary starch may affect digestibility and metabolism of other macronutrients. The objectives of this study were to evaluate the effects of 3 commercial cat diets on in vivo and in vitro energy and macronutrient digestibility, and to analyze the accuracy of the modified Atwater equation. Dietary treatments differed in their perceived glycemic response (PGR) based on ingredient composition and carbohydrate content (34.1, 29.5, and 23.6% nitrogen-free extract for High, Medium, and LowPGR, respectively). A replicated 3 × 3 Latin square design was used, with 3 diets and 3 periods. In vivo apparent protein, fat, and organic matter digestibility differed among diets, while apparent dry matter digestibility did not. Cats were able to efficiently digest and absorb macronutrients from all diets. Furthermore, the modified Atwater equation underestimated measured metabolizable energy by approximately 12%. Thus, the modified Atwater equation does not accurately determine the metabolizable energy of high quality feline diets. Further research should focus on understanding carbohydrate metabolism in cats, and establishing an equation that accurately predicts the metabolizable energy of feline diets.
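
    The modified Atwater calculation discussed here is, under the usual pet-food convention of 3.5, 8.5 and 3.5 kcal/g for protein, fat and nitrogen-free extract (treat the factors and the sample numbers as assumptions of this sketch), a one-line computation:

        def modified_atwater_me(protein_pct, fat_pct, nfe_pct):
            """Predicted metabolizable energy, kcal per 100 g diet (as-fed basis)."""
            return 3.5 * protein_pct + 8.5 * fat_pct + 3.5 * nfe_pct

        predicted = modified_atwater_me(32.0, 18.0, 29.5)  # hypothetical diet composition, %
        measured = 430.0                                   # hypothetical measured ME, kcal/100 g
        print("underestimation: %.1f%%" % (100 * (measured - predicted) / measured))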

  16. Short-Term Wind Power Forecasting Based on Clustering Pre-Calculated CFD Method

    Directory of Open Access Journals (Sweden)

    Yimei Wang

    2018-04-01

    Full Text Available To meet the increasing wind power forecasting (WPF) demands of newly built wind farms without historical data, physical WPF methods are widely used. The computational fluid dynamics (CFD) pre-calculated flow fields (CPFF)-based WPF is a promising physical approach, which can balance well the competing demands of computational efficiency and accuracy. To enhance its adaptability for wind farms in complex terrain, a WPF method combining wind turbine clustering with CPFF is first proposed where the wind turbines in the wind farm are clustered and a forecasting is undertaken for each cluster. K-means, hierarchical agglomerative and spectral analysis methods are used to establish the wind turbine clustering models. The Silhouette Coefficient, Calinski-Harabaz index and within-between index are proposed as criteria to evaluate the effectiveness of the established clustering models. Based on different clustering methods and schemes, various clustering databases are built for clustering pre-calculated CFD (CPCC)-based short-term WPF. For the wind farm case studied, clustering evaluation criteria show that hierarchical agglomerative clustering has reasonable results, spectral clustering is better and K-means gives the best performance. The WPF results produced by different clustering databases also prove the effectiveness of the three evaluation criteria in turn. The newly developed CPCC model has a much higher WPF accuracy than the CPFF model without using clustering techniques, both on temporal and spatial scales. The research provides supports for both the development and improvement of short-term physical WPF systems.
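
    Two of the three clustering-evaluation criteria named above are available directly in scikit-learn; a minimal sketch of the model-selection step (the turbine layout is synthetic, and the within-between index would have to be implemented separately):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score, calinski_harabasz_score

        rng = np.random.default_rng(0)
        turbine_positions = rng.random((40, 2))  # synthetic (x, y) turbine layout

        for k in range(2, 6):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(turbine_positions)
            print(k, silhouette_score(turbine_positions, labels),
                  calinski_harabasz_score(turbine_positions, labels))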

  17. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  18. An Analysis on the Characteristic of Multi-response CADIS Method for the Monte Carlo Radiation Shielding Calculation

    International Nuclear Information System (INIS)

    Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun

    2014-01-01

    It uses the deterministic method to calculate adjoint fluxes for deciding the parameters used in the variance reduction; this is called a hybrid Monte Carlo method. The CADIS method, however, has a limitation in reducing the stochastic errors of all responses. The Forward-Weighted CADIS (FW-CADIS) method was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with the FW-CADIS method, analyzing how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors over the tally regions to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. Using the FW-CADIS method, the average of the relative errors was minimized; however, the MR-CADIS method gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative error of a multi-response problem more efficiently and uniformly than the FW-CADIS method.

  19. Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation

    International Nuclear Information System (INIS)

    Sanuki, T.; Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.

    2007-01-01

    We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing with simulations. We find the atmospheric muon and neutrino fluxes respond to errors in the π-production of the hadronic interaction similarly, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).] code with experimental measurements. The μ⁺+μ⁻ data show good agreement in the 1∼30 GeV/c range, but a large disagreement above 30 GeV/c. The μ⁺/μ⁻ ratio shows sizable differences at lower and higher momenta for opposite directions. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007).]

  20. FragIt: a tool to prepare input files for fragment based quantum chemical calculations.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

    Full Text Available Near-linear-scaling fragment-based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy, and they are an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools are needed that help to set up reasonable input files, lowering the barrier of entry for non-experts. Previous tools rely on specific annotations in structure files, such as residues in PDB files, for automatic and successful fragmentation. We present a general fragmentation methodology and an accompanying tool called FragIt to help set up these calculations. FragIt uses the SMARTS language to locate chemically appropriate fragments in large structures and is applicable to the fragmentation of any molecular system given suitable SMARTS patterns. We present SMARTS fragmentation patterns for proteins, DNA and polysaccharides, specifically for D-galactopyranose for use in cyclodextrins. FragIt is used to prepare input files for the Fragment Molecular Orbital method in the GAMESS program package, but can easily be extended to other computational methods.
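
    The core idea of locating fragmentation points with SMARTS can be illustrated with RDKit; the pattern below, matching amide (peptide) bonds, is an illustrative stand-in rather than FragIt's actual pattern set:

        from rdkit import Chem

        # Toy peptide-like molecule and a SMARTS pattern for amide bonds
        mol = Chem.MolFromSmiles("CC(=O)NC(C)C(=O)NC(C)C(=O)O")
        amide = Chem.MolFromSmarts("[CX3](=O)[NX3]")

        # Each match marks a candidate cut point between fragments
        for match in mol.GetSubstructMatches(amide):
            print("cut at atoms", match)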

  1. Data to calculate emissions intensity for individual beef cattle reared on pasture-based production systems

    Directory of Open Access Journals (Sweden)

    G.A. McAuliffe

    2018-04-01

    Full Text Available With increasing concern about environmental burdens originating from livestock production, the importance of farming system evaluation has never been greater. In order to form a basis for trade-off analysis of pasture-based cattle production systems, liveweight data from 90 Charolais × Hereford-Friesian calves were collected at a high temporal resolution at the North Wyke Farm Platform (NWFP) in Devon, UK. These data were then applied to the Intergovernmental Panel on Climate Change (IPCC) modelling framework to estimate on-farm methane emissions under three different pasture management strategies, completing a foreground dataset required to calculate emissions intensity of individual beef cattle.

  2. User interface tool based on the MCCM for the calculation of dpa distributions

    International Nuclear Information System (INIS)

    Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.

    2009-01-01

    The Monte Carlo assisted Classical Method (MCCM) was introduced by the authors to calculate the displacements per atom (dpa) distributions in solid materials, making use of the standard outputs of the MCNP simulation code system and the classical theories of electron elastic scattering. Based on this method, a new DLL with several user-interface functions was implemented. An application running on Windows systems was then developed to allow easy handling of the different useful functionalities included in it. In the present work this application is presented and some examples of its successful use on different materials of interest are shown. (Author)

  3. Interest of thermochemical data bases linked to complex equilibria calculation codes for practical applications

    International Nuclear Information System (INIS)

    Cenerino, G.; Marbeuf, A.; Vahlas, C.

    1992-01-01

    Since 1974, Thermodata has been working on developing an Integrated Information System in Inorganic Chemistry. A major effort was carried out on the thermochemical data assessment of both pure substances and multicomponent solution phases. The available data bases are connected to powerful calculation codes (GEMINI: Gibbs Energy Minimizer), which make it possible to determine the thermodynamic equilibrium state in multicomponent systems. The high interest of such an approach is illustrated by recent applications in fields as various as semiconductors, chemical vapor deposition, hard alloys and nuclear safety. (author). 26 refs., 6 figs

  4. Method for calculating thermal properties of lightweight floor heating panels based on an experimental setup

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    , radiation and conduction of the heat transfer between pipe and surrounding materials. The European Standard for floor heating, EN1264, does not cover lightweight systems, while the supplemental Nordtest Method VVS127 is aimed at lightweight systems. The thermal properties can be found using tabulated values...... simulation model. It has been shown that the method is accurate with an error on the heat fluxes of less than 5% for different supply temperatures. An error of around 5% is also recorded when comparing measurements to calculated heat flows using the Nordtest VVS 127 method based on the experimental setup...

  5. Comparison of CT number calibration techniques for CBCT-based dose calculation

    International Nuclear Information System (INIS)

    Dunlop, Alex; McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe; Murray, Julia; Bhide, Shreerang; Harrington, Kevin; Poludniowski, Gavin; Nutting, Christopher; Newbold, Kate

    2015-01-01

    The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCTr); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RSauto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5%. RSauto provided larger than average errors for pelvic treatments for patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0%, with CBCTr (0.5%) and RSauto (0.6%) performing best. For lung cases, WL and RSauto methods generated dose distributions most similar to the ground truth. The RSauto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RSauto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.)

  6. A brief comparison between grid based real space algorithms and spectrum algorithms for electronic structure calculations

    International Nuclear Information System (INIS)

    Wang, Lin-Wang

    2006-01-01

    Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide what future computer architecture will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computers and the size of the simulated physical systems increase, there is a renewed interest in using the real-space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real-space grid method is more suitable for parallel computation because of its limited communication requirements, compared with spectrum methods where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N³) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real-space basis. In this report, the author compares the real-space methods with the traditional plane wave (PW) spectrum methods, regarding their technical pros and cons and the possible future trends. For the real-space methods, the author focuses on the regular-grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the

  7. Accelerating atomic orbital-based electronic structure calculation via pole expansion and selected inversion

    International Nuclear Information System (INIS)

    Lin, Lin; Yang, Chao; Chen, Mohan; He, Lixin

    2013-01-01

    We describe how to apply the recently developed pole expansion and selected inversion (PEXSI) technique to Kohn–Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces (including both the Hellmann–Feynman force and the Pulay force) without using the eigenvalues and eigenvectors of the Kohn–Sham Hamiltonian. We also show how to update the chemical potential without using Kohn–Sham eigenvalues. The advantage of using PEXSI is that it has a computational complexity much lower than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEXSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEXSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundreds. Both the wall clock time and the memory requirement of PEXSI are modest. This even makes it possible to perform Kohn–Sham DFT calculations for 10 000-atom nanotubes with a sequential implementation of the selected inversion algorithm. We also perform an accurate geometry optimization calculation on a truncated (8, 0) boron nitride nanotube system containing 1024 atoms. Numerical results indicate that the use of PEXSI does not lead to loss of the accuracy required in a practical DFT calculation. (paper)

  8. Damage identification in beams by a response surface based technique

    Directory of Open Access Journals (Sweden)

    Teidj S.

    2014-01-01

    Full Text Available In this work, identification of damage in uniform homogeneous metallic beams was considered through the propagation of non-dispersive elastic torsional waves. The proposed damage detection procedure consisted of the following sequence. Given a localized torque excitation having the form of a short half-sine pulse, the first step was to calculate the transient solution of the resulting torsional wave. This torque could be generated in practice by means of asymmetric laser irradiation of the beam surface. Then, a localized defect, assumed to be characterized by an abrupt reduction of the beam section area with a given height and extent, was placed at a known location of the beam. Next, the response in terms of transverse-section rotation rate was obtained at a point situated beyond the defect, where the sensor was positioned. The latter could in practice be based on laser vibrometry. A parametric study was then conducted by using a full-factorial design-of-experiments table and numerical simulations based on a finite-difference characteristic scheme. This enabled the derivation of a response surface model that was shown to represent adequately the response of the system in terms of the following factors: defect extent and severity. The final step was solving the inverse problem in order to identify the defect characteristics from the measurements.
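
    A response surface of the kind described, a low-order polynomial in the two factors (defect extent and severity), can be fitted by ordinary least squares; the design points and response values below are placeholders:

        import numpy as np

        # Full-factorial design: defect extent (x1) and severity (x2), coded levels
        x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
        x1, x2 = x1.ravel(), x2.ravel()
        y = np.array([1.0, 1.2, 1.5, 1.4, 1.7, 2.1, 1.9, 2.4, 3.0])  # simulated responses

        # Quadratic surface: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coeffs)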

  9. Calculation of the Fission Product Release for the HTR-10 based on its Operation History

    International Nuclear Information System (INIS)

    Xhonneux, A.; Druska, C.; Struth, S.; Allelein, H.-J.

    2014-01-01

    Since the first criticality of the HTR-10 test reactor in 2000, a rather complex operation history has accumulated. As the HTR-10 is the only pebble bed reactor in operation today delivering experimental data for HTR simulation codes, an attempt was made to simulate the whole reactor operation up to the present. Special emphasis was put on the fission product release behaviour, as it is an important safety aspect of such a reactor. The operation history has to be simulated with respect to the neutronics, fluid mechanics and depletion to get detailed knowledge of the time-dependent nuclide inventory. In this paper we report on such a simulation with VSOP 99/11 and our new fission product release code STACY. While STACY (Source Term Analysis Code System) so far was able to calculate the fission product release rates for an equilibrium core and during transients, it can now also be applied to running-in phases. This coupling demonstrates a first step towards an HCP Prototype. Based on the published power histogram of the HTR-10 and additional information about the fuel loading and shuffling, a coupled neutronics, fluid dynamics and depletion calculation was performed. Special emphasis was put on the complex fuel-shuffling scheme within both VSOP and STACY. The simulations have shown that the HTR-10 has up to now generated about 2580 MWd while reshuffling the core about 2.3 times. Within this paper, STACY results for the equilibrium core are compared with FRESCO-II results published by INET. Compared to these release rates, which are based on a few user-defined life histories, in this new approach the fission product release rates of Ag-110m, Cs-137, Sr-90 and I-131 have been simulated for about 4000 tracer pebbles with STACY. Time-dependent release rates over the HTR-10 operation history are presented as well. (author)

  10. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
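
    The central quantity, pairwise identity from an independent pairwise alignment, is simple to compute once an alignment is in hand; how gap columns are counted is precisely the methodological choice the authors highlight. In this sketch gaps are excluded from the denominator, which is only one of several possible conventions:

        def pairwise_identity(seq_a, seq_b):
            """Fraction of identical residues over aligned, non-gap columns."""
            assert len(seq_a) == len(seq_b), "sequences must be aligned"
            matches = compared = 0
            for a, b in zip(seq_a, seq_b):
                if a == "-" or b == "-":
                    continue  # convention choice: gap columns are skipped entirely
                compared += 1
                matches += (a == b)
            return matches / compared if compared else 0.0

        print(pairwise_identity("ACGT-ACGT", "ACTTTAC-T"))  # -> 6/7 = 0.857...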

  11. Evaluation of signal energy calculation methods for a light-sharing SiPM-based PET detector

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qingyang [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China); Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083 (China); Ma, Tianyu; Xu, Tianpeng; Liu, Yaqiang; Wang, Shi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Gu, Yu, E-mail: guyu@ustb.edu.cn [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China)

    2017-03-11

    Signals of a light-sharing positron emission tomography (PET) detector are commonly multiplexed into three analog pulses (E, X, and Y) and then digitally sampled. From this procedure, the signal energies that are critical to detector performance are obtained. In this paper, different signal-energy calculation strategies for a self-developed SiPM-based PET detector, including pulse height and different integration methods, are evaluated in terms of energy resolution and the spread of the crystal response in the flood histogram, using a root-mean-squared (RMS) index. Results show that the integration methods outperform the pulse height. Integration using the maximum-derivative value of pulse E as the landmark point, with 28 integrated points (448 ns), has the best performance among the evaluated methods for our detector. Detector performance in terms of energy and position is improved with this integration method. The proposed methodology is expected to be applicable to other light-sharing PET detectors.
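
    The winning strategy, integrating a fixed window anchored at the maximum-derivative point of the E pulse, can be sketched in a few lines (the 28-point/448 ns figures above imply 16 ns sampling; the waveform here is synthetic):

        import numpy as np

        def integrate_pulse(samples, n_points=28):
            """Integrate n_points samples starting at the maximum-derivative landmark."""
            landmark = int(np.argmax(np.diff(samples)))  # steepest rising edge
            return float(np.sum(samples[landmark:landmark + n_points]))

        # Synthetic pulse sampled at 16 ns: fast rise, exponential decay
        t = np.arange(64, dtype=float)
        pulse = np.where(t < 5, 0.0, (1 - np.exp(-(t - 5) / 1.5)) * np.exp(-(t - 5) / 12.0))
        print(integrate_pulse(pulse))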

  12. An analytical method for calculating stresses and strains of ATF cladding based on thick walled theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, an analytical method based on thick-walled theory has been studied to calculate the stress and strain of ATF cladding. In order to prescribe the boundary conditions of the analytical method, two algorithms were employed, namely the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established, and the stress components of the method were compared with those of the equivalent FE model. One of the promising ATF concepts is the coated cladding, which offers advantages such as a high melting point, a high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of the coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for the simulation of stress and strain in the coated cladding should be developed, because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory, in which the radial stress is taken to be zero; this assumption is not suitable for ATF cladding, because the radial stress is not negligible in that case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed, and FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained from the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
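
    For a single homogeneous tube, thick-walled (Lamé) theory gives the radial and hoop stresses in closed form; the sketch below evaluates those textbook expressions for a pressurized cylinder (dimensions and pressures are illustrative, and a real coated-cladding model would couple several such layers):

        import numpy as np

        def lame_stresses(r, a, b, p_in, p_out):
            """Radial and hoop stresses at radius r in a thick-walled cylinder
            with inner radius a, outer radius b, inner/outer pressures p_in/p_out."""
            k = (p_in * a**2 - p_out * b**2) / (b**2 - a**2)
            m = (p_in - p_out) * a**2 * b**2 / (b**2 - a**2)
            return k - m / r**2, k + m / r**2   # (sigma_r, sigma_theta)

        r = np.linspace(4.1e-3, 4.75e-3, 5)                  # radii across the wall, m
        print(lame_stresses(r, 4.1e-3, 4.75e-3, 15.5, 0.1))  # PWR-like pressures, MPa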

  13. Towards SSVEP-based, portable, responsive Brain-Computer Interface.

    Science.gov (United States)

    Kaczmarek, Piotr; Salomon, Pawel

    2015-08-01

    A Brain-Computer Interface in a motion control application requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The obtained results suggest that the T-H classifier significantly increases classifier performance (resulting in an accuracy of 76%, while maintaining an average false-positive detection rate for stimuli other than the observed one of 2-13%, depending on stimulus frequency). It was shown that the parameters of the T-H classifier maximizing the true-positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to that obtained with a user-trained classifier.
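
    Canonical-correlation SSVEP recognition typically correlates the EEG window against sine/cosine references at each stimulus frequency and picks the best-scoring frequency; a common sketch of that step (assumed here, not the authors' exact pipeline) using scikit-learn:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def ssvep_score(eeg, freq, fs, n_harmonics=2):
            """Max canonical correlation between an EEG window and sin/cos references."""
            t = np.arange(eeg.shape[0]) / fs
            refs = np.column_stack([f(2 * np.pi * h * freq * t)
                                    for h in range(1, n_harmonics + 1)
                                    for f in (np.sin, np.cos)])
            u, v = CCA(n_components=1).fit_transform(eeg, refs)
            return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

        fs = 256
        eeg = np.random.default_rng(0).standard_normal((fs, 2))  # placeholder 1 s, 2-channel EEG
        print(max((ssvep_score(eeg, f, fs), f) for f in (8.0, 10.0, 12.0, 15.0)))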

  14. BaTiO3-based nanolayers and nanotubes: first-principles calculations.

    Science.gov (United States)

    Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D

    2013-01-30

    The first-principles calculations using a hybrid exchange-correlation functional and a localized atomic basis set are performed for BaTiO3 (BTO) nanolayers and nanotubes (NTs) with structure optimization. Both the cubic and the ferroelectric BTO phases are used for modeling the nanolayers and NTs. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of the (0 1 0) and (0 1 1̄) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depend on the original bulk crystal phase and the wall thickness. The majority of the considered NTs with low formation and strain energies have a mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1̄) layers may show an antiferroelectric arrangement of Ti-O bonds. Comparison of the stability of BTO-based and SrTiO3-based NTs shows that the former are more stable than the latter.

  15. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    Science.gov (United States)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) from adjustments to the process settings. Burn rates were calculated from chamber pressures and then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed a significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion with mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
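
    Normalizing burn rates to a common pressure usually relies on Saint-Robert's law, r = a·P^n; the sketch below applies that standard relation (the exponent and the data are placeholders, and the paper does not state its exact normalization law):

        def normalize_burn_rate(r_meas, p_meas, p_ref, n=0.35):
            """Scale a measured burn rate to a reference pressure via r = a*P**n."""
            return r_meas * (p_ref / p_meas) ** n

        # Two hypothetical motors fired at slightly different average pressures (MPa)
        print(normalize_burn_rate(0.310, 6.55, 6.89))
        print(normalize_burn_rate(0.318, 7.10, 6.89))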

  16. Calculations of helium separation via uniform pores of stanene-based membranes

    Directory of Open Access Journals (Sweden)

    Guoping Gao

    2015-12-01

    Full Text Available The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized, two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (i.e., the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as a superior membrane over traditionally used, porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by application of strain to optimize the He purification properties by taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting new interesting materials for helium separation for future experimental validation.
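
    In such membrane studies, the He selectivity over a competing gas is commonly estimated from the calculated diffusion barriers with an Arrhenius expression, S ≈ exp((E_gas − E_He)/k_BT); the barrier values below are placeholders, not the paper's numbers:

        import numpy as np

        KB_EV = 8.617333e-5  # Boltzmann constant, eV/K

        def selectivity(e_he_ev, e_other_ev, temperature_k=298.0):
            """Arrhenius estimate of He selectivity over a competing gas."""
            return np.exp((e_other_ev - e_he_ev) / (KB_EV * temperature_k))

        print(selectivity(0.30, 0.90))  # placeholder He and Ne barriers, eV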

  17. Development of a Carbon Emission Calculations System for Optimizing Building Plan Based on the LCA Framework

    Directory of Open Access Journals (Sweden)

    Feifei Fu

    2014-01-01

    Full Text Available Life cycle thinking has become widely applied in the assessment of building environmental performance. Various tools have been developed to support the application of the life cycle assessment (LCA) method. This paper focuses on the carbon emissions during the building construction stage. A partial LCA framework is established to assess the carbon emissions in this phase. Furthermore, five typical LCA tool programs have been compared and analyzed to demonstrate the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and the sustainability demands in building, a new computer calculation system has been developed to calculate carbon emissions and optimize sustainability during the construction stage. The system structure and detailed functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and the system functions. The case is based on a typical building in the UK with alternative plans using a masonry wall and a timber frame for comparison. The final results show that a timber-frame wall has a lower embodied carbon emission than a similar masonry structure; a 16% reduction was found in this study.
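
    At its core, an embodied-carbon calculation of this kind multiplies material quantities by emission factors and sums over the bill of materials; a minimal sketch with illustrative placeholder data (not the paper's factors):

        # kgCO2e per kg of material (placeholder factors) and quantities in kg
        emission_factors = {"brick": 0.23, "concrete_block": 0.10, "timber": 0.11}
        bill_of_materials = {"brick": 12000.0, "timber": 3000.0}

        embodied = sum(qty * emission_factors[material]
                       for material, qty in bill_of_materials.items())
        print("embodied carbon: %.0f kgCO2e" % embodied)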

  18. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as a phase interface or a crack tip, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore, MD simulations for the material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation used to calculate the stress at each material point is performed on a GPU using CUDA to accelerate the

  19. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented.

  20. Assessment of Calculation Procedures for Piles in Clay Based on Static Loading Tests

    DEFF Research Database (Denmark)

    Augustesen, Anders; Andersen, Lars

    2008-01-01

    Numerous methods are available for the prediction of the axial capacity of piles in clay. In this paper, two well-known models are considered, namely the current API-RP2A (1987 to present) and the recently developed ICP method. The latter was developed by Jardine and his co-workers at Imperial College in London. The calculation procedures are assessed based on an established database of static loading tests. To make a consistent evaluation of the design methods, corrections related to undrained shear strength and time between pile driving and testing have been employed. The study indicates that the interpretation of the field tests is of paramount importance, both with regard to the soil profile and the loading conditions. Based on analyses of 253 static pile loading tests distributed on 111 sites, API-RP2A provides the better description of the data. However, it should be emphasised that some input...

  1. Quasiparticle properties of DNA bases from GW calculations in a Wannier basis

    Science.gov (United States)

    Qian, Xiaofeng; Marzari, Nicola; Umari, Paolo

    2009-03-01

    The quasiparticle GW-Wannier (GWW) approach [1] has been recently developed to overcome the size limitations of conventional planewave GW calculations. By taking advantage of the localization properties of the maximally-localized Wannier functions and choosing a small polarization basis set, we reduce the number of Bloch wavefunction products required for the evaluation of the dynamical polarizabilities, and in turn greatly reduce memory requirements and computational cost. We apply GWW to study the quasiparticle properties of different DNA bases and base pairs, and solvation effects on the energy gap, demonstrating in the process the key advantages of this approach. [1] P. Umari, G. Stenuit, and S. Baroni, cond-mat/0811.1453

  2. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    International Nuclear Information System (INIS)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.

    2014-08-01

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second composed of bone, the third composed of lung, and the fourth composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for the transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which the photon does not need to stop at voxel boundaries, the material changes along the photon's path being accounted for instead. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
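
    Woodcock (delta) tracking samples free paths against the majorant cross section of the whole phantom and accepts a collision as real with probability μ(x)/μ_max, so the flight is never interrupted at voxel boundaries; a one-dimensional toy version (illustrative, not the CUBMC implementation):

        import numpy as np

        rng = np.random.default_rng(1)

        def woodcock_step(x, direction, mu_of, mu_max):
            """Advance a photon to its next real collision site via delta tracking."""
            while True:
                x += direction * (-np.log(rng.random()) / mu_max)  # majorant free path
                if rng.random() < mu_of(x) / mu_max:               # real vs. virtual collision
                    return x

        # Toy 1D phantom: water-like (mu = 0.07/mm) for x < 50 mm, bone-like beyond
        mu = lambda x: 0.07 if x < 50.0 else 0.20
        print(woodcock_step(0.0, 1.0, mu, 0.20))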

  3. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second composed of bone, the third composed of lung, and the fourth composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for the transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which the photon does not need to stop at voxel boundaries, the material changes along the photon's path being accounted for instead. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)

  4. Calculation of elastic elements in the construction of electrical connectors based on flexible printed cables

    Directory of Open Access Journals (Sweden)

    Yefimenko A. A.

    2016-05-01

    connectors. We obtained an analytic dependence that can be used to find the Young's modulus for a known value of hardness on the Shore A scale. We give examples of calculating the amount of compression in the elastomeric liner needed to provide reliable contact for specified values of the transition resistance, for removable and permanent connectors based on flexible printed cable.

  5. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model, and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
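
    The bootstrap step described, resampling the validation set and recomputing the discrimination and calibration measures at each candidate sample size, looks roughly like the sketch below (shown for the AUC only; the data, sizes and stopping rule are placeholders for the authors' full algorithm):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        y = rng.integers(0, 2, 2000)              # placeholder outcomes
        score = 0.4 * y + rng.random(2000)        # placeholder risk scores

        for n in (100, 200, 400, 800):            # candidate validation-set sizes
            aucs = []
            for _ in range(500):                  # bootstrap replicates
                idx = rng.choice(len(y), size=n, replace=True)
                if y[idx].min() == y[idx].max():
                    continue                      # AUC needs both classes present
                aucs.append(roc_auc_score(y[idx], score[idx]))
            print(n, np.mean(aucs), np.std(aucs))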

  6. Dose-volume histograms based on serial intravascular ultrasound: a calculation model for radioactive stents

    International Nuclear Information System (INIS)

    Kirisits, Christian; Wexberg, Paul; Gottsauner-Wolf, Michael; Pokrajac, Boris; Ortmann, Elisabeth; Aiginger, Hannes; Glogar, Dietmar; Poetter, Richard

    2001-01-01

    Background and purpose: Radioactive stents are under investigation for reduction of coronary restenosis. However, the actual dose delivered to specific parts of the coronary artery wall based on the individual vessel anatomy has not been determined so far. Dose-volume histograms (DVHs) permit an estimation of the actual dose absorbed by the target volume. We present a method to calculate DVHs based on intravascular ultrasound (IVUS) measurements to determine the dose distribution within the vessel wall. Materials and methods: Ten patients were studied by intravascular ultrasound after radioactive stenting (BX Stent, P-32, 15-mm length) to obtain tomographic cross-sections of the treated segments. We developed a computer algorithm using the actual dose distribution of the stent to calculate differential and cumulative DVHs. The minimal target dose, the mean target dose, the minimal doses delivered to 10 and 90% of the adventitia (DV10, DV90), and the percentage of volume receiving a reference dose at 0.5 mm from the stent surface cumulated over 28 days were derived from the DVH plots. Results were expressed as mean±SD. Results: The mean activity of the stents was 438±140 kBq at implantation. The mean reference dose was 111±35 Gy, whereas the calculated mean target dose within the adventitia along the stent was 68±20 Gy. On average, DV90 and DV10 were 33±9 Gy and 117±41 Gy, respectively. Expanding the target volume to include 2.5-mm-long segments at the proximal and distal ends of the stent, the calculated mean target dose decreased to 55±17 Gy, and DV90 and DV10 were 6.4±2.4 Gy and 107±36 Gy, respectively. Conclusions: The assessment of DVHs seems in principle to be a valuable tool for both prospective and retrospective analysis of dose-distribution of radioactive stents. It may provide the basis to adapt treatment planning in coronary brachytherapy to the common standards of radiotherapy.
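
    A cumulative DVH of the kind used here reduces to counting, for each dose level, the fraction of target-volume voxels receiving at least that dose; a generic sketch with synthetic voxel doses (not stent dosimetry):

        import numpy as np

        def cumulative_dvh(dose_voxels, dose_axis):
            """Fraction of the volume receiving at least each dose level."""
            d = np.asarray(dose_voxels)
            return np.array([(d >= level).mean() for level in dose_axis])

        doses = np.random.default_rng(7).gamma(2.0, 30.0, 10000)  # synthetic voxel doses, Gy
        axis = np.linspace(0.0, 200.0, 21)
        dvh = cumulative_dvh(doses, axis)
        dv90 = np.interp(0.90, dvh[::-1], axis[::-1])  # dose covering 90% of the volume
        print(dvh, dv90)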

  7. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    Science.gov (United States)

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters in volume) was decoupled from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two

  8. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters in volume) was decoupled from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  9. Neutron spectra calculation and doses in a subcritical nuclear reactor based on thorium

    International Nuclear Information System (INIS)

    Medina C, D.; Hernandez A, P. L.; Hernandez D, V. M.; Vega C, H. R.; Sajo B, L.

    2015-10-01

    This paper describes a heterogeneous subcritical nuclear reactor with molten salts based on thorium, with a graphite moderator and a Cf-252 source, whose dose levels at the periphery allow its use in teaching and research activities. The design was done by the Monte Carlo method with the code MCNP5, where the geometry, the dimensions and the fuel were varied in order to obtain the best design. The result is a cubic reactor of 110 cm side with graphite moderator and reflector. The central part has 9 ducts placed in the direction of the Y axis. The central duct contains the Cf-252 source; of the other 8 ducts, two are irradiation ducts and the other six contain a molten salt (7LiF-BeF2-ThF4-UF4) as fuel. For the design, the keff, the neutron spectra and the ambient dose equivalent were calculated. This calculation was first done for virgin fuel (case 1); then a percentage of U-233 was used and the percentage of Th was decreased (case 2), in order to compare two different fuels working inside the reactor. A keff of 0.13 was obtained in case 1 and of 0.28 in case 2, maintaining subcriticality in both cases. Regarding dose levels, the highest value occurs in case 2 along the Y axis, with a value of 3.31e-3 ± 1.6% pSv/Q. With this, the exposure time of personnel working in the reactor can be calculated. (Author)

  10. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital

    Science.gov (United States)

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2016-01-01

    Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. Objective: This study aimed to compare the ABC with the TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of the activities were allocated to cost objects by using cost drivers. After determining the cost of the objects, the cost price of medical services was calculated and compared with those obtained from the TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupancy bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, showing a 50.34 USD higher unit cost with the ABC method. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974

  11. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.

    Science.gov (United States)

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2015-05-17

    Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. This study aimed to compare the ABC with the TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of the activities were allocated to cost objects by using cost drivers. After determining the cost of the objects, the cost price of medical services was calculated and compared with those obtained from the TCS. The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupancy bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, showing a 50.34 USD higher unit cost with the ABC method. The ABC method provided more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
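    A minimal sketch of the two-phase ABC allocation described in the two records above, with illustrative figures only (the hospital's actual cost centers, cost factors and drivers are not given in the abstracts):

        # Phase 1: assign cost-center totals to activities via cost factors
        # (each center's factors sum to 1).
        cost_centers = {"nursing": 500_000.0, "overhead": 200_000.0}   # annual costs, USD
        cost_factors = {
            "patient_care":   {"nursing": 0.8, "overhead": 0.3},
            "administration": {"nursing": 0.2, "overhead": 0.7},
        }
        activity_cost = {a: sum(f * cost_centers[c] for c, f in fs.items())
                         for a, fs in cost_factors.items()}

        # Phase 2: divide activity costs among cost objects via driver volumes.
        drivers = {"bed_day":          {"patient_care": 9_000, "administration": 1_000},
                   "outpatient_visit": {"patient_care": 3_000, "administration": 4_000}}
        driver_totals = {a: sum(d[a] for d in drivers.values()) for a in activity_cost}
        allocated = {obj: sum(activity_cost[a] / driver_totals[a] * v for a, v in ds.items())
                     for obj, ds in drivers.items()}

        # Unit cost = allocated cost divided by the service volume of each object.
        service_volume = {"bed_day": 9_000, "outpatient_visit": 4_000}
        unit_cost = {obj: allocated[obj] / service_volume[obj] for obj in allocated}
        print(unit_cost)   # e.g. {'bed_day': 43.67, 'outpatient_visit': 76.75}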

  12. Determination of structural fluctuations of proteins from structure-based calculations of residual dipolar couplings

    International Nuclear Information System (INIS)

    Montalvao, Rinaldo W.; De Simone, Alfonso; Vendruscolo, Michele

    2012-01-01

    Residual dipolar couplings (RDCs) have the potential of providing detailed information about the conformational fluctuations of proteins. It is very challenging, however, to extract such information because of the complex relationship between RDCs and protein structures. A promising approach to decoding this relationship involves structure-based calculations of the alignment tensors of protein conformations. By implementing this strategy to generate structural restraints in molecular dynamics simulations, we show that it is possible to effectively extract the information provided by RDCs about the conformational fluctuations in the native states of proteins. The approach that we present can be used in a wide range of alignment media, including Pf1, charged bicelles and gels. The accuracy of the method is demonstrated by the analysis of the Q factors for RDCs not used as restraints in the calculations, which are significantly lower than those corresponding to existing high-resolution structures and structural ensembles, showing that the contributions to RDCs from conformational fluctuations are captured effectively.

  13. Calculation of color difference and measurement of the spectrum of aerosol based on human visual system

    Science.gov (United States)

    Dai, Mengyan; Liu, Jianghai; Cui, Jianlin; Chen, Chunsheng; Jia, Peng

    2017-10-01

    In order to solve the problem of quantitative testing of the spectrum and color of aerosols, a measurement method for aerosol spectra based on the human visual system was proposed. The spectral characteristics and color parameters of three different aerosols were tested, and the color differences were calculated according to the CIE1976 L*a*b* color-difference formula. Three test powders (No. 1#, No. 2# and No. 3#) were dispersed in a plexiglass box and turned into aerosol. The powder sample was released by an injector with a different dosage in each experiment. The spectrum and color of the aerosol were measured by the PRO 6500 Fiber Optic Spectrometer. The experimental results showed that the extinction performance of the aerosol became stronger with increasing aerosol concentration. Since the chromaticity value differences of the aerosols in the experiment were very small, luminance was verified to be the main factor influencing human visual perception, contributing the most of the three factors in the color-difference calculation. The extinction effect of the No. 3# aerosol was the strongest of all and caused the biggest change in luminance and color difference, which would arouse the strongest human visual perception. According to the color sensation levels of Chinese observers, a recognizable color difference would be produced when the dosage of No. 1# powder was more than 0.10 gram, the dosage of No. 2# powder more than 0.15 gram, and the dosage of No. 3# powder more than 0.05 gram.
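    For reference, the CIE1976 color difference used in the study is ΔE*ab = sqrt(ΔL*^2 + Δa*^2 + Δb*^2); a minimal sketch with illustrative values (the study's measured L*a*b* data are not reproduced in the abstract):

        import math

        def delta_e_cie76(lab1, lab2):
            """CIE76 color difference between two (L*, a*, b*) triples."""
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

        # Illustrative values only: a luminance-dominated difference, as the
        # study found luminance to be the main contributor for aerosols.
        print(delta_e_cie76((52.0, 1.2, -0.5), (45.5, 1.0, -0.8)))  # ~ 6.5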

  14. Acoustical contribution calculation and analysis of compressor shell based on acoustic transfer vector method

    Science.gov (United States)

    Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang

    2017-08-01

    Nowadays, people are paying more and more attention to the noise reduction of household refrigerator compressors. This paper established a sound field bounded by the compressor shell and ISO 3744 standard field points. The Acoustic Transfer Vectors (ATVs) in the sound field radiated by a refrigerator compressor shell were calculated and fit the test results well. The compressor shell surface was then divided into several parts. Based on the Acoustic Transfer Vector approach, the sound pressure contribution to the field points and the sound power contribution to the sound field of each part were calculated. To characterize the noise radiation in the sound field, the sound pressure cloud charts were analyzed, and the contribution curves of each part at different frequencies were acquired. Meanwhile, the sound power contribution of each part at different frequencies was analyzed to identify the parts contributing the largest sound power. Through the analysis of acoustic contributions, the parts of the compressor shell radiating the largest noise were determined. This paper provides a credible and effective approach for the structural optimization of refrigerator compressor shells, which is meaningful for noise and vibration reduction.
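    A minimal sketch of the ATV idea for one field point at one frequency: the acoustic pressure is a linear combination of the surface element normal velocities, so each shell part's contribution is a partial sum over its elements (random illustrative data, not the paper's model):

        import numpy as np

        rng = np.random.default_rng(0)
        n_elem = 120
        atv = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)  # ATV entries per element
        v = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)    # element normal velocities

        # The slices partition the shell surface into named parts.
        parts = {"top": slice(0, 40), "side": slice(40, 90), "bottom": slice(90, 120)}
        p_total = atv @ v                                   # p = ATV . v at the field point
        contrib = {name: atv[s] @ v[s] for name, s in parts.items()}
        # Complex part contributions sum to the total field-point pressure.
        print(abs(p_total), {k: abs(c) for k, c in contrib.items()})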

  15. Extension of the COSYMA-ECONOMICS module - cost calculations based on different economic sectors

    International Nuclear Information System (INIS)

    Faude, D.

    1994-12-01

    The COSYMA program system for evaluating the off-site consequences of accidental releases of radioactive material to the atmosphere includes an ECONOMICS module for assessing economic consequences. The aim of this module is to convert the various consequences of an accident (radiation-induced health effects and impacts resulting from countermeasures) into the common framework of economic costs; this allows different effects to be expressed in the same terms and thus makes them comparable. With respect to the countermeasure 'movement of people', the dominant cost categories are 'loss-of-income costs' and 'costs of lost capital services'. In the original version of the ECONOMICS module these costs are calculated on the basis of the total number of people moved. In order to also take into account regional or local economic peculiarities of a nuclear site, the ECONOMICS module has been extended: calculation of the above-mentioned cost categories is now based on the number of employees in different economic sectors in the affected area. This extension of the COSYMA ECONOMICS module is described in more detail. (orig.)

  16. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    International Nuclear Information System (INIS)

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-01-01

    This study characterizes the source mechanism of tsunamigenic earthquakes in the Java region based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these parameters by teleseismic signal processing of the initial P-wave phase with a bandpass filter from 0.001 Hz to 5 Hz, using 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with Mw=7.8 and the 17 July 2006 Pangandaran earthquake with Mw=7.7 meet the criteria for a tsunami earthquake, with a ratio of about Θ=−6.1, long rupture durations To>100 s and high tsunamis H>7 m. The 2 September 2009 Tasikmalaya earthquake, with Mw=7.2, Θ=−5.1 and To=27 s, is characterized as a small tsunamigenic earthquake.
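    For orientation, the slowness ratio above is Θ = log10(E/Mo); a minimal sketch, where the −5.5 screening threshold is a commonly cited value (Newman and Okal) taken here as an assumption rather than from this record:

        import math

        def moment_magnitude(m0_nm):
            """Mw from seismic moment in N·m (Hanks and Kanamori relation)."""
            return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

        def theta(energy_j, m0_nm):
            """Slowness ratio Θ = log10(E / M0), E and M0 in SI units."""
            return math.log10(energy_j / m0_nm)

        m0 = 5.0e20                                   # N·m, illustrative
        print(round(moment_magnitude(m0), 1))         # ~ 7.7
        print(round(theta(4.0e14, m0), 1))            # ~ -6.1
        print(theta(4.0e14, m0) <= -5.5)              # slow, tsunami-earthquake-like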

  17. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    Science.gov (United States)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization part. Here we introduce Metadyn View, a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences, and for data/image export.
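    A minimal sketch of the client-side computation such a viewer performs for ordinary (non-well-tempered) metadynamics: summing the deposited Gaussian hills into the bias potential and negating it. The usual 1-CV HILLS column layout (time, center, sigma, height, bias factor) is assumed; synthetic hills are generated so the sketch runs without an input file:

        import numpy as np

        def bias_on_grid(centers, sigmas, heights, grid):
            """Bias potential V(s) on a 1-D grid from deposited Gaussian hills."""
            v = np.zeros_like(grid)
            for c, s, h in zip(centers, sigmas, heights):
                v += h * np.exp(-((grid - c) ** 2) / (2.0 * s ** 2))
            return v

        # Synthetic stand-ins for the center/sigma/height columns of a HILLS file.
        centers = np.random.default_rng(0).normal(1.0, 0.4, size=300)
        sigmas = np.full(300, 0.1)
        heights = np.full(300, 0.05)                  # kJ/mol

        grid = np.linspace(-0.5, 2.5, 500)
        fes = -bias_on_grid(centers, sigmas, heights, grid)   # F(s) ~ -V(s)
        fes -= fes.min()   # shift minimum to zero; well-tempered runs also need the (T+dT)/dT factor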

  18. Implementation and validation of an implant-based coordinate system for RSA migration calculation.

    Science.gov (United States)

    Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J

    2009-10-16

    An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of implementing two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component as an implant marker and (ii) defining an implant-based coordinate system, constructed from implant landmarks, for the calculation of migration results. The motivations for these two modifications were (i) to improve the representation of the implant by the markers by including the stem tip marker, which increases the marker distribution, (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene, and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision of better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods, intended to improve the interpretability, relevance, and standardization of the results, do not compromise precision and accuracy, and are suitable for application to clinical data.

  19. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com [Badan Meteorologi Klimatologi Geofisika, Jl Angkasa I No. 2 Jakarta (Indonesia); Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan [Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2014-03-24

    This study characterizes the source mechanism of tsunamigenic earthquakes in the Java region based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (Mw), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these parameters by teleseismic signal processing of the initial P-wave phase with a bandpass filter from 0.001 Hz to 5 Hz, using 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with Mw=7.8 and the 17 July 2006 Pangandaran earthquake with Mw=7.7 meet the criteria for a tsunami earthquake, with a ratio of about Θ=−6.1, long rupture durations To>100 s and high tsunamis H>7 m. The 2 September 2009 Tasikmalaya earthquake, with Mw=7.2, Θ=−5.1 and To=27 s, is characterized as a small tsunamigenic earthquake.

  20. Microcontroller-based network for meteorological sensing and weather forecast calculations

    Directory of Open Access Journals (Sweden)

    A. Vas

    2012-06-01

    Full Text Available Weather forecasting needs a lot of computing power. It is generally accomplished using supercomputers, which are expensive to rent and to maintain. In addition, weather services also have to maintain radars and balloons, and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters; to do that, the computer uses the data of nearby and more distant neighboring points. Accordingly, if small weather stations capable of measurement, calculation and communication are connected through the Internet, they can be used to run weather forecasting calculations the way a supercomputer does. No central server is needed, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.

  1. Method to Calculate the Electricity Generated by a Photovoltaic Cell, Based on Its Mathematical Model Simulations in MATLAB

    Directory of Open Access Journals (Sweden)

    Carlos Morcillo-Herrera

    2015-01-01

    Full Text Available This paper presents a practical method for calculating the electrical energy generated by a PV panel (kWh) through MATLAB simulations based on the mathematical model of the cell, which obtains the "Mean Maximum Power Point" (MMPP) on the characteristic V-P curve in response to historical climate data at a specific location. This five-step method calculates, through the MMPP per day, month, or year, the power yield per unit area and the electrical energy generated by the PV panel, as well as its real conversion efficiency. To validate the method, it was applied to a sewage treatment plant for the Group of Drinking Water and Sewerage of Yucatan (JAPAY), México, testing 250 Wp photovoltaic panels from five different manufacturers. As a result, the performance, the real conversion efficiency, and the electricity generated by the five different PV panels under evaluation were obtained, showing the best technical-economic option for developing the PV generation project.
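    A minimal sketch of the energy estimate under stated assumptions: a simple linear power model stands in for the full cell model solved in MATLAB, and the hourly climate arrays are illustrative, not JAPAY data:

        import numpy as np

        P_STC, GAMMA = 250.0, -0.004      # Wp rating and power temperature coefficient, assumed

        def mpp_power(g_wm2, t_cell_c):
            """Illustrative MPP power: linear in irradiance, derated with temperature."""
            return P_STC * (g_wm2 / 1000.0) * (1.0 + GAMMA * (t_cell_c - 25.0))

        # One day of hourly historical climate samples (illustrative values).
        irradiance = np.array([0, 150, 420, 780, 950, 820, 510, 180, 0], dtype=float)
        cell_temp = np.array([22, 24, 29, 36, 41, 40, 35, 28, 24], dtype=float)

        hourly_power = mpp_power(irradiance, cell_temp)   # W, one value per hour
        energy_kwh_day = hourly_power.sum() / 1000.0      # each sample spans 1 h
        print(energy_kwh_day)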

  2. Effects of B site doping on electronic structures of InNbO4 based on hybrid density functional calculations

    Science.gov (United States)

    Lu, M. F.; Zhou, C. P.; Li, Q. Q.; Zhang, C. L.; Shi, H. F.

    2018-01-01

    In order to improve the photocatalytic activity under visible-light irradiation, we adopted first-principles calculations based on density functional theory (DFT) to calculate the electronic structures of InNbO4 doped at the B site with transition metal elements. The results indicated that the complete hybridization of Nb 4d states and some Ti 3d states contributed to the new conduction band of Ti-doped InNbO4, barely changing the position of the band edge. For Cr doping, some localized Cr 3d states were introduced into the band gap; nonetheless, the potential of the localized levels was too positive to enable a visible-light response. For Cu doping, the band gap was almost the same as that of InNbO4, and some localized Cu 3d states appeared above the top of the valence band (VB). The introduction of localized energy levels helps electrons migrate from the VB to the conduction band (CB) by absorbing lower-energy photons, realizing a visible-light response.

  3. SU-C-204-03: DFT Calculations of the Stability of DOTA-Based-Radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Khabibullin, A.R.; Woods, L.M. [University of South Florida, Tampa, Florida (United States); Karolak, A.; Budzevich, M.M.; Martinez, M.V. [H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States); McLaughlin, M.L.; Morse, D.L. [University of South Florida, Tampa, Florida (United States); H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States)

    2016-06-15

    Purpose: Application of density functional theory (DFT) to investigate the structural stability of complexes applied in cancer therapy, consisting of 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelated to Ac225, Fr221, At217, Bi213, and Gd68 radionuclides. Methods: The possibility of delivering a toxic payload directly to tumor cells is a highly desirable aim in targeted alpha-particle therapy. The estimation of the bond stability between radioactive atoms and the DOTA chelating agent is the key element in understanding the foundations of this delivery process. Thus, we adapted the Vienna Ab-initio Simulation Package (VASP) with the projector-augmented wave method and a plane-wave basis set in order to study the stability and electronic properties of the DOTA ligand chelated to radioactive isotopes. In order to account for the relativistic effects of the radioactive isotopes, we included spin-orbit coupling (SOC) in the DFT calculations. Five DOTA complex structures were represented as unit cells, each containing 58 atoms. Energy optimization was performed for all structures prior to the calculation of electronic properties. Binding energies, electron localization functions and bond lengths between atoms were estimated. Results: The calculated binding energies for the DOTA-radioactive atom systems were −17.792, −5.784, −8.872, −13.305 and −18.467 eV for the Ac, Fr, At, Bi and Gd complexes, respectively. The displacements of the isotopes in the DOTA cages were estimated from the variations in bond lengths, which were within 2.32-3.75 angstroms. A detailed representation of the chemical bonding in all complexes was obtained with the electron localization function (ELF). Conclusion: DOTA-Gd, DOTA-Ac and DOTA-Bi were the most stable structures in the group. Inclusion of SOC had a significant role in improving the accuracy of the DFT calculations for heavy radioactive atoms. Our approach is found to be proper for the investigation of structures with DOTA-based

  4. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent an initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using a logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. The SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). Predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). Clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with the SNUPC-RC than with the other calculators. CONCLUSIONS: The SNUPC-RC has higher predictive accuracy and clinical benefit than the Western risk calculators. Furthermore, it is easy

  5. Development of 3-D FBR heterogeneous core calculation method based on characteristics method

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro

    2002-01-01

    A new 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method and the nodal transport method. In the axial direction the nodal transport method is applied, and the characteristics method is applied to take into account the radial heterogeneity of fuel assemblies. Numerical calculations have been performed to verify the method for 2-D radial calculations of FBR assemblies and for partial core calculations. The results are compared with reference Monte Carlo calculations, and good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region.

  6. Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies

    Science.gov (United States)

    Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.

    2017-04-01

    Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol, consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF-based PET reconstruction. Plane-by-plane scaling was performed for the MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different from that obtained using CTAC, and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. The mean relative error in the standard regions of interest was less than 5% for both methods, and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than the CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a

  7. Relativistic many-body perturbation-theory calculations based on Dirac-Fock-Breit wave functions

    International Nuclear Information System (INIS)

    Ishikawa, Y.; Quiney, H.M.

    1993-01-01

    A relativistic many-body perturbation theory based on the Dirac-Fock-Breit wave functions has been developed and implemented by employing analytic basis sets of Gaussian-type functions. The instantaneous Coulomb and low-frequency Breit interactions are treated using a unified formalism in both the construction of the Dirac-Fock-Breit self-consistent-field atomic potential and in the evaluation of many-body perturbation-theory diagrams. The relativistic many-body perturbation-theory calculations have been performed on the helium atom and ions of the helium isoelectronic sequence up to Z=50. The contribution of the low-frequency Breit interaction to the relativistic correlation energy is examined for the helium isoelectronic sequence

  8. Status of CINDER and ENDF/B-V based libraries for transmutation calculations

    International Nuclear Information System (INIS)

    Wilson, W.B.; England, T.R.; LaBauve, R.J.; Battat, M.E.; Wessol, D.E.; Perry, R.T.

    1980-01-01

    The CINDER codes and their data libraries are described, and their range of calculational capabilities is illustrated using documented applications. The importance of ENDF/B data and the features of the ENDF/B-IV and ENDF/B-V fission-product and actinide data files are emphasized. The actinide decay data of ENDF/B-V, augmented by additional data from available sources, are used to produce average decay energy values and neutron source values from spontaneous fission, (α,n) reactions and delayed neutron emission for 144 actinide nuclides that are formed in reactor fuel. The status and characteristics of the CINDER-2 code are described, along with a brief description of the better-known code versions; a review of the status of the new ENDF/B-V based libraries for all versions is presented.

  9. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with a low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
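    A minimal sketch of the central point for the single-user, single-path case: with chaotic spreading the bit energy Eb is a random variable, so the BER is the average of the Gaussian tail probability over the Eb distribution rather than its value at the mean energy (the energy distribution used here is illustrative):

        import math, random

        def q_func(x):
            """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        def ber_energy_averaged(eb_samples, n0):
            """BER = E[ Q( sqrt(2*Eb/N0) ) ] over the bit energy distribution."""
            return sum(q_func(math.sqrt(2.0 * eb / n0)) for eb in eb_samples) / len(eb_samples)

        random.seed(1)
        mean_eb, n0 = 1.0, 0.25                        # illustrative values
        eb = [max(random.gauss(mean_eb, 0.2), 1e-9)    # energies are non-negative
              for _ in range(10_000)]
        print(ber_energy_averaged(eb, n0))             # typically exceeds the value at mean Eb
        print(q_func(math.sqrt(2.0 * mean_eb / n0)))   # Gaussian-approximation baseline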

  10. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near-real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  11. A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks

    Directory of Open Access Journals (Sweden)

    Rui Fan

    2017-12-01

    Full Text Available Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC) for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load-imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
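    For orientation, a single-threaded reference of the computation the paper parallelizes: Brandes' betweenness algorithm with Dijkstra as the SSSP kernel for positively weighted graphs. This is a sketch for understanding, not the authors' CUDA code; every vertex must appear as a key of the adjacency dict:

        import heapq

        def betweenness(adj):
            """Brandes' algorithm with Dijkstra SSSP. adj: {u: [(v, weight), ...]}."""
            bc = {v: 0.0 for v in adj}
            for s in adj:
                dist = {v: float("inf") for v in adj}
                sigma = {v: 0.0 for v in adj}       # number of shortest s-v paths
                preds = {v: [] for v in adj}
                dist[s], sigma[s] = 0.0, 1.0
                order, pq, settled = [], [(0.0, s)], set()
                while pq:
                    d, u = heapq.heappop(pq)
                    if u in settled:                # lazy-deletion duplicate
                        continue
                    settled.add(u)
                    order.append(u)
                    for v, w in adj[u]:
                        nd = d + w
                        if nd < dist[v]:
                            dist[v], sigma[v], preds[v] = nd, sigma[u], [u]
                            heapq.heappush(pq, (nd, v))
                        elif nd == dist[v]:
                            sigma[v] += sigma[u]
                            preds[v].append(u)
                delta = {v: 0.0 for v in adj}       # dependency accumulation
                for v in reversed(order):           # non-increasing distance from s
                    for u in preds[v]:
                        delta[u] += sigma[u] / sigma[v] * (1.0 + delta[v])
                    if v != s:
                        bc[v] += delta[v]
            return bc

        g = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.0)], 2: []}
        print(betweenness(g))   # node 1 carries the shortest 0->2 path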

  12. A cultural study of a science classroom and graphing calculator-based technology

    Science.gov (United States)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  13. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear, even in the Paris region. The effects of the stress ratio on the fatigue crack growth rate also differ among materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic-algorithm-optimized back propagation network (GABP). The MLA-based method is validated using testing data for different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (the K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM-based algorithm shows overall the best agreement with the experimental data of the three MLAs, owing to its global optimization and extrapolation ability.
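    For contrast with the MLAs, a minimal sketch of the classical starting point they improve on: a log-log linear fit of the Paris law da/dN = C·(ΔK)^m. The data values are illustrative; the paper's K* approach and the ELM/RBFN/GABP models are more elaborate:

        import numpy as np

        delta_k = np.array([12.0, 16.0, 20.0, 25.0, 32.0])          # MPa*sqrt(m), illustrative
        dadn = np.array([2.1e-8, 6.5e-8, 1.6e-7, 3.9e-7, 1.1e-6])   # m/cycle, illustrative

        # Fit log(da/dN) = log(C) + m*log(dK) by least squares.
        m, log_c = np.polyfit(np.log(delta_k), np.log(dadn), 1)
        c = np.exp(log_c)
        print(f"da/dN ~ {c:.3g} * (dK)^{m:.2f}")
        predicted = c * delta_k ** m   # linear in log-log space, by construction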

  14. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report clarifies the base data for the looking-up tables of calculation errors cited in the 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made with the JACS code system, and there are two kinds: one for fuel systems in general geometry with a reflector and another for fuel systems in simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration (homogeneous or heterogeneous) and the fuel kind (uranium, plutonium and their mixtures, etc.). The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987; however, the data in the group named homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes that selection. As a project has been organized by the OECD/NEA for the evaluation of criticality safety benchmark experiments, the results are also described. (author)

  15. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms

    International Nuclear Information System (INIS)

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-01-01

    The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when advanced dose calculation algorithms that take into account electron transport (type B algorithms) are used. As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment is performed, presenting different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case, 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of the secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV doses among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  16. Development and Validation of a Clinically Based Risk Calculator for the Transdiagnostic Prediction of Psychosis

    Science.gov (United States)

    Rutigliano, Grazia; Stahl, Daniel; Davies, Cathy; Bonoldi, Ilaria; Reilly, Thomas; McGuire, Philip

    2017-01-01

    Importance The overall effect of At Risk Mental State (ARMS) services for the detection of individuals who will develop psychosis in secondary mental health care is undetermined. Objective To measure the proportion of individuals with a first episode of psychosis detected by ARMS services in secondary mental health services, and to develop and externally validate a practical web-based individualized risk calculator tool for the transdiagnostic prediction of psychosis in secondary mental health care. Design, Setting, and Participants Clinical register-based cohort study. Patients were drawn from electronic, real-world, real-time clinical records relating to 2008 to 2015 routine secondary mental health care in the South London and the Maudsley National Health Service Foundation Trust. The study included all patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within the South London and the Maudsley National Health Service Foundation Trust in the period between January 1, 2008, and December 31, 2015. Data analysis began on September 1, 2016. Main Outcomes and Measures Risk of development of nonorganic International Statistical Classification of Diseases and Related Health Problems, Tenth Revision psychotic disorders. Results A total of 91 199 patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within South London and the Maudsley National Health Service Foundation Trust were included in the derivation (n = 33 820) or external validation (n = 54 716) data sets. The mean age was 32.97 years, 50.88% were men, and 61.05% were white race/ethnicity. The mean follow-up was 1588 days. The overall 6-year risk of psychosis in secondary mental health care was 3.02 (95% CI, 2.88-3.15), which is higher than the 6-year risk in the local general population (0.62). Compared with the ARMS designation, all of the International Statistical Classification of Diseases and Related Health Problems

  17. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    In the sampling-based method, uncertainty is evaluated by repeating transport calculations with a number of cross-section sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of responses such as keff, reaction rates, flux and power distribution can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. It was verified on the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to one active cycle group of a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of keff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
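    A toy illustration of the conventional sampling-based method that the proposed approach accelerates: sample many cross-section sets from the covariance data, evaluate the transport calculation (here a trivial stand-in) once per set, and take the spread of the response as its nuclear-data uncertainty. All numbers are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        mean_xs = np.array([1.20, 0.35])              # illustrative group cross sections
        cov = np.array([[4e-4, 1e-4], [1e-4, 9e-5]])  # illustrative covariance data

        def toy_response(xs):
            """Trivial stand-in for a transport calculation returning e.g. keff."""
            return xs[0] / (xs[0] + 2.0 * xs[1])

        samples = rng.multivariate_normal(mean_xs, cov, size=1000)
        k = np.array([toy_response(x) for x in samples])
        print(k.mean(), k.std(ddof=1))   # response estimate and its uncertainty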

  18. Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn.

    Directory of Open Access Journals (Sweden)

    Ingo Gräff

    Full Text Available To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Patients classified in the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, for orange MTS category patients (n = 118), nursing staff were required for 85.07 min; for patients in the yellow MTS category (n = 181), 40.95 min; while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. When extrapolating this to the 21,899 (2010) emergency patients, 67-123 emergency patients (50th-95th percentile) per month can be seen by one nurse. The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments.
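    A minimal sketch of the staffing arithmetic, using the per-category engagement times and the 20.87% shortfall quoted above; the case-mix split, contract hours and resulting figure are illustrative assumptions (the paper additionally folds workload-fluctuation percentiles into its 14.8-27.1 range):

        minutes_per_patient = {"red": 97.93, "orange": 85.07, "yellow": 40.95,
                               "green": 23.18, "blue": 14.99}
        annual_patients = {"red": 1500, "orange": 5100, "yellow": 7900,
                           "green": 5650, "blue": 1749}   # assumed split of the 21,899 patients

        workload_min = sum(minutes_per_patient[c] * n for c, n in annual_patients.items())

        gross_min_per_fte = 40 * 60 * 52                  # assumed 40 h/week contract
        net_min_per_fte = gross_min_per_fte * (1 - 0.2087)  # subtract individual shortfall
        print(workload_min / net_min_per_fte)             # full-time equivalents needed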

  19. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    International Nuclear Information System (INIS)

    Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang

    2014-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations a particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated and measured dose. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
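    A minimal sketch of the commissioning idea under stated assumptions: with per-PSL doses precomputed, the beam dose is a weighted sum, so the PSL weights can be fitted to measurements by regularized non-negative least squares. A first-difference smoothness term stands in for the paper's symmetry/smoothness regularization, and the augmented Lagrangian solver is not reproduced:

        import numpy as np
        from scipy.optimize import nnls

        n_points, n_psl = 200, 30
        psl_dose = np.random.default_rng(0).random((n_points, n_psl))  # D[i, j]: dose at i from PSL j
        measured = psl_dose @ np.linspace(0.5, 1.5, n_psl)             # synthetic "measurement"

        lam = 0.1                                  # smoothness weight, illustrative
        diff = np.diff(np.eye(n_psl), axis=0)      # first-difference operator on weights
        a = np.vstack([psl_dose, lam * diff])
        b = np.concatenate([measured, np.zeros(n_psl - 1)])
        weights, _ = nnls(a, b)                    # PSL weighting factors >= 0
        calc = psl_dose @ weights                  # commissioned beam dose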

  20. Actinide-lanthanide separation by bipyridyl-based ligands. DFT calculations and experimental results

    International Nuclear Information System (INIS)

    Borisova, Nataliya E.; Eroshkina, Elizaveta A.; Korotkov, Leonid A.; Ustynyuk, Yuri A.; Alyapyshev, Mikhail Yu.; Eliseev, Ivan I.; Babain, Vasily A.

    2011-01-01

    In order to gain insight into the effect of substituents on the selectivity of Am/Eu separation, synthesis and extraction tests were undertaken on a series of bipyridyl-based ligands (amides of 2,2'-bipyridyl-6,6'-dicarboxylic acid: L Ph - N,N'-diethyl-N,N'-diphenyl amide; L Bu2 - tetrabutyl amide; L Oct2 - tetraoctyl amide; L 3FPh - N,N'-diethyl-N,N'-bis-(3-fluorophenyl) amide; as well as the N,N'-diethyl-N,N'-diphenyl amides of 4,4'-dibromo-2,2'-bipyridyl-6,6'-dicarboxylic acid and of 4,4'-dinitro-2,2'-bipyridyl-6,6'-dicarboxylic acid), and the structure and stability of their complexes with lanthanides and actinides were studied. The extraction tests were performed for Am, the lanthanide series and transition metals in polar diluents in the presence of chlorinated cobalt dicarbollide and showed high distribution coefficients for Am. It was also found that the type of substituent on the amidic nitrogen exerts a great influence on the extraction of light lanthanides. To understand the nature of this effect we performed quantum-chemical calculations at the DFT level, binding constant determinations and X-ray structure determinations of the complexes. UV/VIS titrations show that the composition of all complexes of the amides with lanthanides in solution is 1:1. Although the binding constants are high (lg β about 6-7 in acetonitrile solution), lanthanide ions have binding constants of the same order of magnitude for the dialkyl-substituted extractants. The X-ray structures of the complexes of the bipyridyl-based amides show a 1:1 composition with a coordination number of 10 for the ions. The DFT-optimized structures of the compounds are in good agreement with those obtained by X-ray diffraction. The gas-phase affinity of the amides for lanthanides shows a strong correlation with the distribution ratios. We can infer that the bipyridyl-based amides form complexes with metal nitrates which have a similar structure in the solid and gas phases and in solution, and the DFT

  1. Using 3d Bim Model for the Value-Based Land Share Calculations

    Science.gov (United States)

    Çelik Şimşek, N.; Uzun, B.

    2017-11-01

    According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, the 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3). Thus, the land share of each condominium unit has to be determined in a way that includes the value differences among the condominium units. The main problem, however, is that the land share has often been determined on an area basis from the project documents before construction of the building. The objective of this study is to propose a new approach to value-based land share calculations for condominium units subject to condominium ownership. The current approaches and their failures in determining land shares are examined, and the factors that affect the values of the condominium units are determined according to legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of land shares.
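
    The value-based allocation idea is easy to state numerically: each unit's land share is its appraised value divided by the total value of all units, rather than its floor area divided by the total area. The sketch below uses invented unit names and values purely for illustration.

        from fractions import Fraction

        units = {"flat_1": 180_000, "flat_2": 240_000, "shop_1": 380_000}  # appraised values
        total_value = sum(units.values())

        # Land share of each unit = unit value / total value, expressed as an
        # exact fraction, as condominium registers typically require.
        shares = {name: Fraction(value, total_value) for name, value in units.items()}

        for name, share in shares.items():
            print(f"{name}: land share = {share} ({float(share):.3f})")

        assert sum(shares.values()) == 1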

  2. Weight Calculation for Cases Generated by Tacit Knowledge Explicit Based on RS-FAHP

    Directory of Open Access Journals (Sweden)

    Cao Yue

    2017-01-01

    Full Text Available In the knowledge economy, effectively organizing and managing tacit knowledge has become a core competence of individuals, groups and organizations, affecting their sustainable development. Making tacit knowledge explicit through cases is an effective way to improve its clarity and to improve management efficiency. The validity of a case view depends on legitimately calculating the weights of the case aspects or attributes, which in turn affects the benefit of applying the explicit knowledge. A case view obtained via traditional direct weighting methods is seriously affected by subjectivity, and the objectivity of the result is not strong. On the other hand, a purely objective weight configuration not only ignores expert knowledge but also leads to acceptance barriers when the knowledge bearers are confronted with the result. Therefore, in this paper, relying on rough set (RS) theory, an algorithm integrating two objective weight configurations, based on conditional entropy and attribute dependence, is analyzed systematically. Simultaneously, the Fuzzy Analytic Hierarchy Process (FAHP) is studied to take into account the operational experience and knowledge of experts in the field. A comprehensive RS-FAHP case-attribute weighting algorithm is then designed, based on the integration of subjective and objective perspectives. The work mentioned above can improve and perfect the traditional configuration of weights and support the effective application and management of explicit tacit-knowledge cases.
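
    A minimal sketch of the subjective/objective integration idea follows, assuming an entropy-style objective weighting as a stand-in for the paper's rough-set conditional-entropy weights and a fixed vector as a stand-in for the FAHP expert weights; the multiplicative combination rule is a common choice, not necessarily the paper's exact formula.

        import numpy as np

        rng = np.random.default_rng(1)
        table = rng.random((30, 4))                  # 30 cases x 4 attributes

        # Entropy-based objective weights: attributes whose normalized values are
        # less evenly spread (lower entropy) are considered more informative.
        p = table / table.sum(axis=0)
        entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(table))
        w_obj = (1 - entropy) / (1 - entropy).sum()

        w_subj = np.array([0.40, 0.30, 0.20, 0.10])  # hypothetical FAHP weights

        w = w_subj * w_obj
        w /= w.sum()                                 # normalized combined weights
        print("objective:", np.round(w_obj, 3))
        print("combined: ", np.round(w, 3))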

  3. Design of Pd-Based Bimetallic Catalysts for ORR: A DFT Calculation Study

    Directory of Open Access Journals (Sweden)

    Lihui Ou

    2015-01-01

    Full Text Available Developing Pd-lean catalysts for the oxygen reduction reaction (ORR) is key for the large-scale application of proton exchange membrane fuel cells (PEMFCs). In the present paper, we have proposed a multiple-descriptor strategy for designing efficient and durable ORR Pd-based alloy catalysts. We demonstrated that an ideal Pd-based bimetallic alloy catalyst for ORR should simultaneously possess a negative alloy formation energy, a negative surface segregation energy of Pd, and a lower oxygen binding ability than pure Pt. By performing detailed DFT calculations on the thermodynamics, surface chemistry and electronic properties of Pd-M alloys, Pd-V, Pd-Fe, Pd-Zn, Pd-Nb, and Pd-Ta are identified theoretically as having a stable Pd-segregated surface and improved ORR activity. Factors affecting these properties are analyzed. The alloy formation energy of Pd with transition metals M is mainly determined by their electronic interaction, which may be the origin of the negative alloy formation energy of the Pd-M alloys. The surface segregation energy of Pd is primarily determined by the surface energy and the atomic radius of M: metals M with a smaller atomic radius and higher surface energy tend to favor surface segregation of Pd in the corresponding Pd-M alloys.
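
    The multiple-descriptor strategy reduces to a simultaneous screen on three computed quantities. The toy filter below illustrates that logic with invented placeholder values, not the paper's DFT data.

        # M: (formation energy eV, Pd segregation energy eV, O binding vs Pt eV)
        candidates = {
            "V":  (-0.45, -0.20, +0.15),
            "Fe": (-0.30, -0.15, +0.10),
            "Cu": (+0.05, -0.10, +0.20),   # fails: positive formation energy
            "Mo": (-0.25, +0.08, +0.05),   # fails: Pd does not segregate
        }

        def passes(e_form: float, e_seg: float, d_o: float) -> bool:
            """All three descriptors must be satisfied simultaneously."""
            return e_form < 0 and e_seg < 0 and d_o > 0  # d_o > 0: binds O weaker than Pt

        selected = [m for m, vals in candidates.items() if passes(*vals)]
        print("promising Pd-M alloys:", selected)       # -> ['V', 'Fe']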

  4. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lin Li

    2013-01-01

    Full Text Available The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after the operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after the operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. Through the established relationship between the IOL tilt (decentration) and the scanned angle at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given; moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration in the method employed by the AS-OCT instrument were identified. Based on 6-12 AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of surface fitting to the IOL surface can accurately analyze the IOL's location, and six AS-OCT images in three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.

  5. Research on calculation of the IOL tilt and decentration based on surface fitting.

    Science.gov (United States)

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

    The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after the operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after the operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. Through the established relationship between the IOL tilt (decentration) and the scanned angle at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given; moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration in the method employed by the AS-OCT instrument were identified. Based on 6-12 AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of surface fitting to the IOL surface can accurately analyze the IOL's location, and six AS-OCT images in three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.
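
    The core numerical step shared by both records above is fitting a spherical equation to surface points extracted from the AS-OCT images. A standard linear least-squares sphere fit, sketched below on synthetic data, shows how center and radius are recovered; the papers' subsequent tilt/decentration geometry is not reproduced here. Writing the sphere as x^2+y^2+z^2 = 2ax + 2by + 2cz + d makes the problem linear in (a, b, c, d), with radius r = sqrt(d + a^2 + b^2 + c^2).

        import numpy as np

        def fit_sphere(pts: np.ndarray):
            """pts: (N, 3) array of surface points; returns (center, radius)."""
            A = np.hstack([2 * pts, np.ones((len(pts), 1))])
            b = (pts ** 2).sum(axis=1)
            (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
            center = np.array([cx, cy, cz])
            radius = np.sqrt(d + center @ center)
            return center, radius

        # Synthetic test: noisy points on a sphere of radius 8.5 centred at (1, -2, 3).
        rng = np.random.default_rng(2)
        u = rng.normal(size=(500, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        pts = np.array([1.0, -2.0, 3.0]) + 8.5 * u + rng.normal(0, 0.02, (500, 3))

        center, radius = fit_sphere(pts)
        print("center:", np.round(center, 3), "radius:", round(radius, 3))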

  6. GIS supported calculations of 137Cs deposition in Sweden based on precipitation data

    International Nuclear Information System (INIS)

    Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats

    2006-01-01

    It is of interest to know the spatial variation and the amount of 137Cs, e.g., in case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly 137Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of 137Cs deposition density per unit precipitation at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF 137Cs deposition density were noted in the north-eastern and eastern parts of Sweden and the highest levels in the western parts of Sweden. Furthermore, the deposition density of 137Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl 137Cs in western Sweden were found in the western parts of the area along the coast and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with measured values and other studies.
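
    The deposition bookkeeping described above can be illustrated compactly: quarterly deposition at a station is the reference-site deposition density per unit precipitation multiplied by the station's quarterly precipitation, accumulated either as-is (integrated) or decay-corrected to a reference date (cumulative). The ratios and precipitation amounts below are invented; only the 137Cs half-life is a physical constant.

        import math

        HALF_LIFE_Y = 30.05                      # 137Cs half-life in years
        LAMBDA = math.log(2) / HALF_LIFE_Y

        # (year, quarter, Bq m-2 per mm precipitation, precipitation mm)
        quarters = [
            (1962, 2, 1.9, 120.0),
            (1962, 3, 2.4, 150.0),
            (1963, 2, 4.1, 110.0),
            (1963, 3, 3.6, 140.0),
        ]

        ref_year = 1966.0
        integrated = 0.0
        cumulative = 0.0
        for year, q, ratio, precip in quarters:
            dep = ratio * precip                     # Bq m-2 deposited that quarter
            t = ref_year - (year + (q - 0.5) / 4)    # years of decay to reference
            integrated += dep
            cumulative += dep * math.exp(-LAMBDA * t)

        print(f"integrated deposition: {integrated:.0f} Bq/m2")
        print(f"decay-corrected to {ref_year:.0f}: {cumulative:.0f} Bq/m2")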

  7. Development of sustainable water treatment technology using scientifically based calculated indexes of source water quality indicators

    Directory of Open Access Journals (Sweden)

    А. С. Трякина

    2017-10-01

    Full Text Available The article describes the selection of a sustainable technological flow chart for water treatment, developed on the basis of scientifically grounded calculated indexes of the quality indicators of the water supplied to the treatment facilities. In accordance with the previously calculated values of the source water quality indicators, the main purification facilities are selected. A more sustainable flow chart for the current water quality of the Seversky Donets-Donbass channel is two-stage filtering with contact prefilters and high-rate filters. The article proposes a set of measures to reduce permanganate oxidation as a water quality indicator; the most suitable for this purpose is sorption purification, using granular activated carbon for water filtering. Increased water hardness is also quite topical, and the method of ion exchange on sodium-cation filters was chosen to reduce it. Reagents for the disinfection of water were also evaluated; as a result, sodium hypochlorite was selected, which has several advantages over chlorine and, unlike ozone, retains the necessary after-effect. A technological flow chart is proposed with two-stage purification on contact prefilters and two-layer high-rate filters (granular activated carbon - quartz sand), with disinfection by sodium hypochlorite and softening of a part of the water on sodium-cation exchange filters. This technological flow chart is able, under any fluctuations in the quality of the source water, to provide purified water that meets the requirements of the current sanitary-hygienic standards. In accordance with the developed flow chart, guidelines and activities for the reconstruction of the existing Makeevka Filtering Station were identified. The recommended flow chart uses more compact and less costly facilities, as well as additional measures to reduce those water quality indicators whose values previously were in

  8. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    Science.gov (United States)

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province has caused a rapid transition of the coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The order of the contribution rates of each ecological function value, from high to low, was nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. By 2081 ecological security will reach the bottom line and the ecological system, in which humans are the subject, will be on the verge of collapse. According to its ecological security status, Huanghua can be divided into 4 zones, i.e., an ecological core protection zone, an ecological buffer zone, an ecological restoration zone and a human activity core zone.

  9. Multi-scale calculation of the electric properties of organic-based devices from the molecular structure

    KAUST Repository

    Li, Haoyuan; Qiu, Yong; Duan, Lian

    2016-01-01

    A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using

  10. [Development and effectiveness of a drug dosage calculation training program using cognitive loading theory based on smartphone application].

    Science.gov (United States)

    Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon

    2012-10-01

    This study was done to develop and evaluate a drug dosage calculation training program using cognitive loading theory based on a smartphone application. Calculation ability, dosage-calculation-related self-efficacy and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, and only a handout was provided for the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed more 'self-efficacy for drug dosage calculation' than the control group (t=3.82, p<.001). The smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.

  11. Improvement of calculation method for temperature coefficient of HTTR by neutronics calculation code based on diffusion theory. Analysis for temperature coefficient by SRAC code system

    International Nuclear Information System (INIS)

    Goto, Minoru; Takamatsu, Kuniyoshi

    2007-03-01

    The HTTR temperature coefficients required for core dynamics calculations had previously been obtained from HTTR core calculations with a diffusion code, corrected using core calculation results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved, so the method was improved such that the temperature coefficients could be obtained without corrections by the Monte Carlo code. Specifically, from the point of view of the neutron spectrum calculated by lattice calculations, the lattice model that had been used for the calculations of the temperature coefficients was revised. The HTTR core calculations were performed with the diffusion code using group constants generated by lattice calculations with the improved lattice model; both the core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficient obtained from the core calculation results. Consequently, the core dynamics calculation result showed good agreement with the experimental data, and a valid temperature coefficient could be calculated with the diffusion code alone, without corrections by the Monte Carlo code. (author)

  12. Calculation of the clearance requirements for the development of a hemodialysis-based wearable artificial kidney.

    Science.gov (United States)

    Kim, Dong Ki; Lee, Jung Chan; Lee, Hajeong; Joo, Kwon Wook; Oh, Kook-Hwan; Kim, Yon Su; Yoon, Hyung-Jin; Kim, Hee Chan

    2016-04-01

    Wearable artificial kidney (WAK) systems have been considered an alternative to standard hemodialysis (HD) for many years. Although various novel WAK systems have recently been developed for clinical applications, the target performance or standard dose of dialysis has not yet been determined. To calculate the appropriate clearance for an HD-based WAK system for the treatment of patients with end-stage renal disease under various dialysis conditions, a classic variable-volume two-compartment kinetic model was used to simulate an anuric patient with a variable target time-averaged creatinine concentration (TAC), daily water intake volume, daily dialysis pause time, and patient body weight. A 70-kg anuric patient with an HD-based WAK system operating for 24 h required dialysis clearances of creatinine of at least 100, 50, and 25 mL/min to achieve TACs of 1.0, 2.0, and 4.0 mg/dL, respectively. The daily water intake volume did not affect the required dialysis clearance under the various conditions. As the pause time per day for dialysis increased, higher dialysis clearances were required to maintain the target TAC. The present study provided theoretical dialysis doses for an HD-based WAK system to achieve various target TACs through relevant mathematical kinetic modeling. The theoretical results may contribute to the determination of the technical specifications required for the development of a WAK system. © 2015 The Authors. Hemodialysis International published by Wiley Periodicals, Inc. on behalf of International Society for Hemodialysis.
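
    The reported clearances are consistent with a simple steady-state mass balance, which makes the calculation easy to sketch: a continuously running device must clear creatinine at K = G/TAC, where G is the generation rate, and a daily pause scales the required running clearance by 24/(24 - pause hours). The paper itself uses a variable-volume two-compartment model; the generation rate below is a typical literature-style assumption for a 70-kg anuric adult, so treat this as a back-of-the-envelope check rather than the paper's method.

        G_MG_PER_DAY = 1_440.0       # assumed creatinine generation, mg/day

        def required_clearance(tac_mg_dl: float, pause_h: float = 0.0) -> float:
            """Return required clearance in mL/min for a target TAC (mg/dL)."""
            g_mg_per_min = G_MG_PER_DAY / (24 * 60)
            tac_mg_ml = tac_mg_dl / 100        # mg/dL -> mg/mL
            k = g_mg_per_min / tac_mg_ml       # mL/min, continuous operation
            return k * 24 / (24 - pause_h)     # compensate for the daily pause

        for tac in (1.0, 2.0, 4.0):
            print(f"TAC {tac} mg/dL: {required_clearance(tac):.0f} mL/min "
                  f"(with 4 h pause: {required_clearance(tac, 4):.0f} mL/min)")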

  13. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    Science.gov (United States)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short-circuit calculation method, based on pre-computed surfaces, to determine the short-circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short-circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under a grid fault. However, existing methods have difficulty calculating the short-circuit current of a DFIG in engineering practice due to its complexity. A short-circuit calculation method based on pre-computed surfaces was therefore proposed, by developing the surface of the short-circuit current as a function of the calculating impedance and the open-circuit voltage. The short-circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of the short-circuit current at different times were established, and the procedure of DFIG short-circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
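
    The pre-computed-surface idea separates the expensive DFIG simulation from the fault study: the short-circuit current is tabulated offline on a grid of (calculating impedance, open-circuit voltage) and merely interpolated online. The sketch below uses a made-up analytic surface as a stand-in for the simulated LVRT response.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        z = np.linspace(0.1, 2.0, 20)           # calculating impedance (p.u.)
        v = np.linspace(0.2, 1.0, 17)           # open-circuit voltage (p.u.)
        Z, V = np.meshgrid(z, v, indexing="ij")
        I_surface = V / np.sqrt(Z**2 + 0.05)    # hypothetical short-circuit current (p.u.)

        lookup = RegularGridInterpolator((z, v), I_surface)

        # During a fault study, each DFIG's injection is read off the surface.
        query = np.array([[0.37, 0.65], [1.42, 0.30]])
        print("interpolated short-circuit currents (p.u.):", lookup(query))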

  14. Comparison of CT number calibration techniques for CBCT-based dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Dunlop, Alex [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); Murray, Julia; Bhide, Shreerang; Harrington, Kevin [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); The Institute of Cancer Research, London (United Kingdom); Poludniowski, Gavin [Karolinska University Hospital, Department of Medical Physics, Stockholm (Sweden); Nutting, Christopher [The Institute of Cancer Research, London (United Kingdom); Newbold, Kate [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom)

    2015-12-15

    The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCT{sub r}); (2) density override approaches, including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RS{sub auto}), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CT number calibration methods resulted in average dose-volume deviations below 1.5%. RS{sub auto} provided larger than average errors for pelvic treatments of patients with large amounts of adipose tissue. For H and N cases, all CT number calibration methods resulted in average dose-volume differences below 1.0%, with CBCT{sub r} (0.5%) and RS{sub auto} (0.6%) performing best. For lung cases, the WL and RS{sub auto} methods generated dose distributions most similar to the ground truth. The RS{sub auto} density override approach is an attractive option for CT number adjustments for a variety of anatomical sites. The RS{sub auto} methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.)
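
    The density-override approaches compared above share one simple kernel: raw CBCT voxel values are binned into a handful of bulk densities before dose calculation. The sketch below illustrates that mapping with invented bin edges and density levels; they are not the commercial system's calibration.

        import numpy as np

        BIN_EDGES = np.array([-1000, -850, -200, 50, 300, 1200, 3000])  # voxel value edges
        DENSITIES = np.array([0.0, 0.26, 0.90, 1.00, 1.10, 1.85])       # g/cm^3 per bin

        def assign_densities(cbct: np.ndarray) -> np.ndarray:
            """Map raw CBCT voxel values to one of six bulk densities."""
            idx = np.clip(np.digitize(cbct, BIN_EDGES) - 1, 0, len(DENSITIES) - 1)
            return DENSITIES[idx]

        volume = np.array([[-980, -700, -30], [120, 500, 1500]])
        print(assign_densities(volume))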

  15. A flow-based methodology for the calculation of TSO to TSO compensations for cross-border flows

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.; Lekane, Th.; Marien, A.; Mees, E.; Naef, U.

    2004-01-01

    In the context of the development of the European internal electricity market, several methods for the tarification of cross-border flows have been proposed. This paper presents a flow-based method for the calculation of TSO to TSO compensations for cross-border flows. The basic principle of this approach is the allocation of the costs of cross-border flows to the TSOs who are responsible for these flows. This method is cost reflective, non-transaction based and compatible with domestic tariffs. It can be applied when limited data are available. Each internal transmission network is then modelled as an aggregated node, called 'supernode', and the European network is synthesized by a graph of supernodes and arcs, each arc representing all cross-border lines between two adjacent countries. When detailed data are available, the proposed methodology is also applicable to all the nodes and lines of the transmission network. Costs associated with flows transiting through supernodes or network elements are forwarded through the network in a way reflecting how the flows make use of the network. The costs can be charged either towards loads and exports or towards generations and imports. Combination of the two charging directions can also be considered. (author)

  16. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    Science.gov (United States)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
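
    The dual-wavelength absorbance-ratio method can be sketched with the standard ratiometric indicator equation, pH = pKa + log10((R - R_acid)/(R_base - R)) + log10(S), where R is the absorbance ratio at the two analysis wavelengths and S corrects for the absorptivity of the two indicator forms at the reference wavelength. The calibration constants below are invented, not phenol red literature values.

        import math

        PKA, R_ACID, R_BASE, S = 7.6, 0.12, 2.35, 0.85   # hypothetical calibration

        def ph_from_ratio(a_l1: float, a_l2: float) -> float:
            """Compute pH from absorbances at the two analysis wavelengths."""
            r = a_l1 / a_l2
            if not R_ACID < r < R_BASE:
                raise ValueError("ratio outside calibrated range")
            return PKA + math.log10((r - R_ACID) / (R_BASE - r)) + math.log10(S)

        print(f"pH = {ph_from_ratio(0.48, 0.52):.2f}")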

  17. Calculating the Efficiency of Steam Boilers Based on Its Most Effecting Factors: A Case Study

    OpenAIRE

    Nabil M. Muhaisen; Rajab Abdullah Hokoma

    2012-01-01

    This paper is concerned with calculating boiler efficiency, one of the most important performance measurements in any steam power plant, which has a key role in determining the overall effectiveness of the whole system within the power station. For this calculation, a Visual Basic program was developed, and a steam power plant known as the El-Khmus power plant, Libya, was selected as a case study. The calculation of the boiler efficiency was applied by using heating ...
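
    The direct (input-output) method is the simplest form of the boiler efficiency calculation such a program automates: useful heat transferred to the steam divided by heat supplied by the fuel. The operating figures below are invented illustration values, not data from the El-Khmus plant.

        def boiler_efficiency_direct(steam_kg_h: float, h_steam_kj_kg: float,
                                     h_feed_kj_kg: float, fuel_kg_h: float,
                                     gcv_kj_kg: float) -> float:
            """Return boiler efficiency (%) by the direct (input-output) method."""
            heat_out = steam_kg_h * (h_steam_kj_kg - h_feed_kj_kg)  # kJ/h to steam
            heat_in = fuel_kg_h * gcv_kj_kg                         # kJ/h from fuel
            return 100.0 * heat_out / heat_in

        eff = boiler_efficiency_direct(
            steam_kg_h=120_000, h_steam_kj_kg=3_230,  # superheated steam enthalpy
            h_feed_kj_kg=460,                         # feedwater enthalpy
            fuel_kg_h=8_600, gcv_kj_kg=42_500,        # fuel flow and gross CV
        )
        print(f"boiler efficiency ~= {eff:.1f} %")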

  18. Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on the Steenbeck Model

    International Nuclear Information System (INIS)

    Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.

    2006-01-01

    The work is devoted to the problem of determining plasma torch parameters and power source parameters (working voltage and current of the plasma torch) at the pre-design stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiments.

  19. Enzymatic logic calculation systems based on solid-state electrochemiluminescence and molecularly imprinted polymer film electrodes.

    Science.gov (United States)

    Lian, Wenjing; Liang, Jiying; Shen, Li; Jin, Yue; Liu, Hongyun

    2018-02-15

    The molecularly imprinted polymer (MIP) films were electropolymerized on the surface of Au electrodes with luminol and pyrrole (PY) as the two monomers and ampicillin (AM) as the template molecule. The electrochemiluminescence (ECL) intensity peak of polyluminol (PL) in the AM-free MIP films at 0.7 V vs. Ag/AgCl could be greatly enhanced by AM rebinding. In addition, the ECL signals of the MIP films could also be enhanced by the addition of glucose oxidase (GOD)/glucose and/or ferrocenedicarboxylic acid (Fc(COOH)2) in the testing solution. Moreover, Fc(COOH)2 exhibited a cyclic voltammetric (CV) response at the AM-free MIP film electrodes. Based on these results, a binary 3-input/6-output biomolecular logic gate system was established with AM, GOD and Fc(COOH)2 as inputs and the ECL responses at different levels and the CV signal as outputs. Some functional non-Boolean logic devices such as an encoder, a decoder and a demultiplexer were also constructed on the same platform. In particular, on the basis of the same system, a ternary AND logic gate was established. The present work combined MIP film electrodes, solid-state ECL, and enzymatic reactions, and various types of biomolecular logic circuits and devices were developed, which opened a novel avenue to constructing more complicated bio-logic gate systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    Science.gov (United States)

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 cm, 0.25 cm, and 0.125 cm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between the calculation and the measurement were evaluated by the percentage error for ion chamber dose and the γ>1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm is closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses are within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate is significantly reduced (within 5%) with AAA-based treatment planning compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates are typically within 5% for both AAA and PB-based treatment planning (grid size = 2.5 mm). For both PB and AAA-based treatment planning, improvements of dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
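
    The γ>1 failure rate quoted above comes from the gamma test, which combines a dose-difference criterion and a distance-to-agreement criterion into a single index. A brute-force 1-D version on synthetic profiles, sketched below, shows the computation; clinical analyses are 2-D or 3-D and use measured films.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
            """Return the gamma value at each reference (measured) point."""
            d_norm = dd_frac * d_ref.max()        # global dose criterion
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist2 = ((x_eval - xr) / dta_mm) ** 2
                dose2 = ((d_eval - dr) / d_norm) ** 2
                gammas[i] = np.sqrt((dist2 + dose2).min())
            return gammas

        x = np.linspace(-50, 50, 501)                  # position, mm
        measured = np.exp(-(x / 30.0) ** 4)            # synthetic profile
        calculated = np.exp(-((x - 0.8) / 30.5) ** 4)  # slightly shifted/scaled

        g = gamma_1d(x, measured, x, calculated)
        print(f"gamma>1 failure rate: {100 * (g > 1).mean():.1f} %")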

  1. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom.

    Science.gov (United States)

    Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M

    2014-02-01

    To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with (125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model

  2. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    International Nuclear Information System (INIS)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.

    2014-01-01

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up

  3. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)

    2009-01-15

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. A discussion about some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) already successfully used for a long time in other scientific or industrial applications, and not only in radiation protection or medical dosimetry. (authors)

  4. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on the exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interest (e.g., bridges) for a very large number of locations, enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. A Python script evaluates these response spectra for exceedance of one or more
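
    The heart of the pipeline is computing damped response spectra from an incoming acceleration stream and checking them against thresholds; since the system itself evaluates exceedance in Python, a compact sketch is natural. The Newmark average-acceleration integrator below is a standard single-degree-of-freedom solver; the record and alarm thresholds are synthetic.

        import numpy as np

        def psa(acc, dt, period, zeta=0.05):
            """Pseudo-spectral acceleration of a ground-acceleration record."""
            w = 2 * np.pi / period
            k, c = w**2, 2 * zeta * w             # per unit mass
            kh = k + 2 * c / dt + 4 / dt**2       # effective stiffness (m = 1)
            u = v = 0.0
            a = -acc[0]                           # relative acceleration at t = 0
            umax = 0.0
            for ag in acc[1:]:
                # Newmark average-acceleration step for u'' + c u' + k u = -ag
                ph = -ag + u * (4 / dt**2 + 2 * c / dt) + v * (4 / dt + c) + a
                un = ph / kh
                vn = 2 * (un - u) / dt - v
                a = -ag - c * vn - k * un
                u, v = un, vn
                umax = max(umax, abs(u))
            return w**2 * umax

        rng = np.random.default_rng(3)
        dt = 0.01
        record = 0.8 * rng.normal(size=2000) * np.hanning(2000)  # synthetic accel, m/s^2

        thresholds = {0.3: 2.0, 1.0: 1.0}    # period (s) -> alarm PSA (m/s^2)
        for T, limit in thresholds.items():
            val = psa(record, dt, T)
            print(f"T={T:.1f}s  PSA={val:.2f} m/s^2  alarm={'YES' if val > limit else 'no'}")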

  5. Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.

    Science.gov (United States)

    Bandura, A V; Evarestov, R A; Lukyanov, S I

    2014-07-28

    A new method of theoretical modelling of polyhedral single-walled nanotubes based on the consolidation of walls in the rolled-up multi-walled nanotubes is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of two methods allows us to simulate the structures which are difficult to find only by ab initio calculations. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than the symmetry of initial coaxial cylindrical double- or triple-walled nanotubes. These merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that the merged nanotubes can integrate the two different crystalline phases in one and the same wall structure.

  6. Shapley Value-Based Payment Calculation for Energy Exchange between Micro- and Utility Grids

    Directory of Open Access Journals (Sweden)

    Robin Pilling

    2017-10-01

    Full Text Available In recent years, microgrids have developed as important parts of power systems and have provided affordable, reliable, and sustainable supplies of electricity. Each microgrid is managed as a single controllable entity with respect to the existing power system, but there is demand for joint operation and for sharing the benefits between a microgrid and its hosting utility. This paper is focused on the joint operation of a microgrid and its hosting utility, which cooperatively minimize daily generation costs through energy exchange, and presents a payment calculation scheme for power transactions based on a fair allocation of the reduced generation costs. To fairly compensate for energy exchange between the micro- and utility grids, we adopt the cooperative game theoretic solution concept of the Shapley value. We design a case study for a fictitious interconnection model between the Mueller microgrid in Austin, Texas and the utility grid in Taiwan. Our case study shows that, when compared to standalone generation, both the micro- and utility grids are better off when they collaborate in power exchange, regardless of their individual contributions to the power exchange coalition.
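
    The Shapley split averages each participant's marginal contribution to the cost savings over all orders in which the coalition could form. The sketch below uses an invented three-player characteristic function (two microgrids and a utility) so the averaging is visible; the paper's case study has its own cost data.

        from itertools import permutations

        players = ("micro_A", "micro_B", "utility")
        v = {                                   # hypothetical daily savings, $
            frozenset(): 0,
            frozenset({"micro_A"}): 0,
            frozenset({"micro_B"}): 0,
            frozenset({"utility"}): 0,
            frozenset({"micro_A", "micro_B"}): 40,
            frozenset({"micro_A", "utility"}): 120,
            frozenset({"micro_B", "utility"}): 100,
            frozenset({"micro_A", "micro_B", "utility"}): 200,
        }

        def shapley(players, v):
            """Average each player's marginal contribution over all join orders."""
            phi = dict.fromkeys(players, 0.0)
            orders = list(permutations(players))
            for order in orders:
                coalition = frozenset()
                for p in order:
                    phi[p] += v[coalition | {p}] - v[coalition]
                    coalition = coalition | {p}
            return {p: x / len(orders) for p, x in phi.items()}

        print(shapley(players, v))   # shares sum to v(grand coalition) = 200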

  7. Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2014-01-01

    Full Text Available This paper presents an improvement in power flow calculation based on the current injection method, achieved by introducing an optimization factor. In the method proposed in this paper, the PQ buses are represented by current mismatches while the PV buses are represented by power mismatches, which differs from the representations in conventional current injection power flow equations. By using the combined power and current injection mismatch method, the number of equations required can be decreased to only one for each PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify the effectiveness of the method, the IEEE test systems were solved with the conventional current injection method and with the improved method separately, and the results were compared. The comparisons show that the optimization factor improves the convergence character effectively; in particular, when the system is at a high loading level and R/X ratio, the iteration number is one or two less than with the conventional current injection method. When the overloading condition of the system is serious, the iteration number in this paper is 4 less than with the conventional current injection method.
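
    The role of such an optimization factor can be illustrated with a generic damped Newton iteration: the full Newton step is scaled back until the mismatch norm decreases, which is what keeps ill-conditioned or heavily loaded cases from diverging. The toy two-equation system below is a generic stand-in, not the paper's current-injection formulation.

        import numpy as np

        def F(x):                    # toy mismatch equations F(x) = 0
            return np.array([x[0]**2 + x[1]**2 - 4.0,
                             np.exp(x[0]) + x[1] - 1.0])

        def J(x):                    # Jacobian of F
            return np.array([[2 * x[0], 2 * x[1]],
                             [np.exp(x[0]), 1.0]])

        x = np.array([1.0, 1.0])
        for it in range(50):
            f = F(x)
            if np.linalg.norm(f) < 1e-10:
                break
            dx = np.linalg.solve(J(x), -f)
            # Optimization factor: shrink the step until the mismatch norm decreases.
            mu = 1.0
            while np.linalg.norm(F(x + mu * dx)) >= np.linalg.norm(f) and mu > 1e-4:
                mu /= 2
            x = x + mu * dx

        print(f"converged in {it} iterations to x = {np.round(x, 6)}")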

  8. First-principles calculations of orientation dependence of Si thermal oxidation based on Si emission model

    Science.gov (United States)

    Nagura, Takuya; Kawachi, Shingo; Chokawa, Kenta; Shirakawa, Hiroki; Araidai, Masaaki; Kageshima, Hiroyuki; Endoh, Tetsuo; Shiraishi, Kenji

    2018-04-01

    It is expected that the off-state leakage current of MOSFETs can be reduced by employing vertical body channel MOSFETs (V-MOSFETs). However, in fabricating these devices, the structure of the Si pillars sometimes cannot be maintained during oxidation, since Si atoms sometimes disappear from the Si/oxide interface (Si missing). Thus, in this study, we used first-principles calculations based on the density functional theory, and investigated the Si emission behavior at the various interfaces on the basis of the Si emission model including its atomistic structure and dependence on Si crystal orientation. The results show that the order in which Si atoms are more likely to be emitted during thermal oxidation is (111) > (110) > (310) > (100). Moreover, the emission of Si atoms is enhanced as the compressive strain increases. Therefore, the emission of Si atoms occurs more easily in V-MOSFETs than in planar MOSFETs. To reduce Si missing in V-MOSFETs, oxidation processes that induce less strain, such as wet or pyrogenic oxidation, are necessary.

  9. SAR Imagery Simulation of Ship Based on Electromagnetic Calculations and Sea Clutter Modelling for Classification Applications

    International Nuclear Information System (INIS)

    Ji, K F; Zhao, Z; Xing, X W; Zou, H X; Zhou, S L

    2014-01-01

    Ship detection and classification with space-borne SAR has many potential applications within maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons may be ascribed to the difficulty of acquiring the required quantities of real data of vessels under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computation costs is compulsory for ship classification algorithm development. However, the simulation of SAR imagery of a ship over the sea surface is challenging; though great efforts have been devoted to tackling this difficult problem, it is far from being conquered. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented based on the high-frequency electromagnetic calculation methods of PO, MEC, PTD and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. Then, the simulated SAR imagery of the ship can be produced by inserting the simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene.
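
    K-distributed clutter of the kind used above is conveniently generated as a compound process: exponential speckle power modulated by a unit-mean gamma texture, whose shape parameter controls the spikiness. The sketch below uses illustrative parameters, not values fitted to a particular sea state.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        nu = 1.5                                  # K-distribution shape parameter

        texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n)  # mean-1 gamma texture
        speckle = rng.exponential(scale=1.0, size=n)           # unit-mean speckle power
        amplitude = np.sqrt(texture * speckle)                 # K-distributed amplitude

        print(f"mean intensity: {np.mean(amplitude**2):.3f} (expected 1.0)")
        print(f"amplitude 99.9th percentile: {np.percentile(amplitude, 99.9):.2f}")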

  10. Fission yield calculation using toy model based on Monte Carlo simulation

    International Nuclear Information System (INIS)

    Jubaidah; Kurniadi, Rizal

    2015-01-01

    Toy model is a new approximation for predicting fission yield distributions. The toy model assumes the nucleus to be an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments, and these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL and μR), and the deviations of the left and right curves (σL and σR). The fission yield distribution is analysed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly move the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
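
    The two-Gaussian picture lends itself to a very short Monte Carlo sketch: draw the first fragment's mass from one of two intersecting Gaussians and give the partner the remaining nucleons. The parameter values below are invented for a 236U-like compound nucleus and do not reproduce the paper's toy-model dynamics, only the sampling idea.

        import numpy as np

        rng = np.random.default_rng(5)
        A_CN = 236                        # compound nucleus mass number
        mu_L, mu_R = 96.0, 140.0          # means of the light / heavy peaks
        sigma_L, sigma_R = 5.5, 5.5       # widths of the two peaks
        n_events = 100_000

        # Each fission event: pick a peak, sample the first fragment's mass from
        # it; the partner fragment carries the remaining nucleons.
        peak = rng.random(n_events) < 0.5
        a1 = np.where(peak,
                      rng.normal(mu_L, sigma_L, n_events),
                      rng.normal(mu_R, sigma_R, n_events))
        a1 = np.clip(np.rint(a1), 1, A_CN - 1)
        fragments = np.concatenate([a1, A_CN - a1])

        masses, counts = np.unique(fragments, return_counts=True)
        yields = 200.0 * counts / counts.sum()     # yields in %, summing to 200%
        print(f"peak yield: {yields.max():.2f}% at A={int(masses[yields.argmax()])}")
        print(f"average light-fragment mass: {fragments[fragments < A_CN / 2].mean():.1f}")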

  11. Experimental verification of internal dosimetry calculations: Construction of a heterogeneous phantom based on human organs

    International Nuclear Information System (INIS)

    Lauridsen, B.; Hedemann Jensen, P.

    1987-01-01

    The basic dosimetric quantity in ICRP Publication No. 30 is the absorbed fraction AF(T<-S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T<-S), from which the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit on intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T<-S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells; the same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dose meters in a matrix so the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in the density and composition of the liquid was determined. Preliminary results of the experiments are presented. (orig.)

  12. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    Toy model is a new approximation for predicting fission yield distributions. The toy model assumes the nucleus to be an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments, and these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analysed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly move the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90

  13. Poster - 20: Detector selection for commissioning of a Monte Carlo based electron dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Anusionwu, Princess [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Alpuche Aviles, Jorge E. [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Pistorius, Stephen [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Department of Radiology, University of Manitoba, Winnipeg (Canada)

    2016-08-15

    Objective: Commissioning of a Monte Carlo based electron dose calculation algorithm requires percentage depth doses (PDDs) and beam profiles which can be measured with multiple detectors. Electron dosimetry is commonly performed with cylindrical chambers but parallel plate chambers and diodes can also be used. The purpose of this study was to determine the most appropriate detector to perform the commissioning measurements. Methods: PDDs and beam profiles were measured for beams with energies ranging from 6 MeV to 15 MeV and field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Detectors used included diodes, cylindrical and parallel plate ionization chambers. Beam profiles were measured in water (100 cm source to surface distance) and in air (95 cm source to detector distance). Results: PDDs for the cylindrical chambers were shallower (1.3 mm averaged over all energies and field sizes) than those measured with the parallel plate chambers and diodes. Surface doses measured with the diode and cylindrical chamber were on average larger by 1.6 % and 3% respectively than those of the parallel plate chamber. Profiles measured with a diode resulted in penumbra values smaller than those measured with the cylindrical chamber by 2 mm. Conclusion: The diode was selected as the most appropriate detector since PDDs agreed with those measured with parallel plate chambers (typically recommended for low energies) and results in sharper profiles. Unlike ion chambers, no corrections are needed to measure PDDs, making it more convenient to use.

  14. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for the prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows for rapid obtainment of replicate parameter estimates, without error due to exhaustive calculations. (author)
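
    The replicate-estimate idea is straightforward to sketch: evaluate the indirect Fick equation, Qp = VO2 / (Cpv - Cpa), once per prediction model and report the spread. The three prediction formulas below are hypothetical stand-ins for the paper's five models, and the blood-gas values are illustrative.

        def o2_content(hb_g_dl: float, sat_frac: float) -> float:
            """O2 content in mL O2 per liter of blood (1.36 mL O2 per g Hb)."""
            return 1.36 * hb_g_dl * sat_frac * 10

        vo2_models = {                   # hypothetical predicted VO2, mL/min
            "model_A": lambda bsa: 160 * bsa,
            "model_B": lambda bsa: 175 * bsa,
            "model_C": lambda bsa: 145 * bsa,
        }

        bsa = 0.62                       # body surface area, m^2 (example child)
        cpv = o2_content(hb_g_dl=13.0, sat_frac=0.98)   # pulmonary venous content
        cpa = o2_content(hb_g_dl=13.0, sat_frac=0.82)   # pulmonary arterial content

        flows = {name: f(bsa) / (cpv - cpa) for name, f in vo2_models.items()}
        lo, hi = min(flows.values()), max(flows.values())
        print({k: round(v, 2) for k, v in flows.items()})
        print(f"likely Qp range: {lo:.2f} - {hi:.2f} L/min")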

  15. Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.

    Science.gov (United States)

    Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark

    2013-05-21

    Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to a phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally.

  16. Neutronic calculations of AFPR-100 reactor based on Spherical Cermet Fuel particles

    International Nuclear Information System (INIS)

    Benchrif, A.; Chetaine, A.; Amsil, H.

    2013-01-01

    Highlights: • AFPR-100 is considered a small nuclear reactor without on-site refueling, originally based on the TRISO micro-fuel element. • The AFPR-100 reactor was re-designed using the new Spherical Cermet fuel element. • The adoption of the Cermet fuel instead of the TRISO fuel reduces the core lifetime by 3.1 equivalent full power years. • We discuss the new micro-fuel element candidate for small and medium sized reactors. - Abstract: The Atoms For Peace Reactor (AFPR-100), a 100 MW(e) design without the need for on-site refueling, was originally based on UO2 TRISO coated fuel particles embedded in a carbon matrix and directly cooled by light water. AFPR-100 is considered a small nuclear reactor without open-vessel refueling and is proposed by Pacific Northwest National Laboratory (PNNL). On account of significant irradiation swelling in the silicon carbide fission product barrier coating layer of the TRISO fuel element, a Spherical Cermet fuel element has been proposed. The new fuel concept, also developed by PNNL, consists of replacing the pyro-carbon and ceramic coatings, which are incompatible with low-temperature operation, with zirconium; zirconium was chosen to avoid any potential Wigner energy issues present in the TRISO fuel element. The purpose of this study is to assess the AFPR-100 concept with the Cermet fuel, in particular whether the core fuel lifetime can be extended for a reasonably long period without on-site refueling. We investigated the neutronic parameters of the reactor core with the calculation code SRAC95. The results suggest that a core fuel lifetime beyond 12 equivalent full power years (EFPYs) is possible; however, the adoption of the Cermet fuel concept shows a core lifetime decrease of about 3.1 EFPY.

  17. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).

    Science.gov (United States)

    Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M

    2007-02-21

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
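
    The tabulated S-factors follow the standard MIRD-style definition S = Σ_i y_i E_i φ_i / m. A minimal sketch of that post-processing step, with purely illustrative numbers in place of the paper's Monte Carlo tables, might look as follows.

        import numpy as np

        # Hypothetical monoenergetic absorbed fractions phi(target <- source)
        # and target mass; in the paper these come from the MCNP4c2 runs.
        energies_mev = np.array([0.1, 0.5, 1.0, 2.0])
        phi = np.array([0.98, 0.80, 0.60, 0.42])
        target_mass_kg = 1.5e-3            # ~1.5 g organ, illustrative
        MEV_TO_J = 1.602e-13

        def s_factor(decay_energies_mev, yields):
            """S-factor (Gy Bq-1 s-1) for emissions given as (energy, yield)
            pairs; absorbed fractions interpolated from the tables above."""
            phi_i = np.interp(decay_energies_mev, energies_mev, phi)
            e_abs = np.sum(yields * decay_energies_mev * MEV_TO_J * phi_i)
            return e_abs / target_mass_kg

        # crude two-line approximation of a beta spectrum
        print(s_factor(np.array([0.2, 0.6]), np.array([0.7, 0.3])))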

  18. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    Science.gov (United States)

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  19. Review of theoretical calculations of hydrogen storage in carbon-based materials

    Energy Technology Data Exchange (ETDEWEB)

    Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)

    2001-02-01

    In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)

  20. Volumetric Arterial Wall Shear Stress Calculation Based on Cine Phase Contrast MRI

    NARCIS (Netherlands)

    Potters, Wouter V.; van Ooij, Pim; Marquering, Henk; VanBavel, Ed; Nederveen, Aart J.

    2015-01-01

    Purpose: To assess the accuracy and precision of a volumetric wall shear stress (WSS) calculation method applied to cine phase contrast magnetic resonance imaging (PC-MRI) data. Materials and Methods: Volumetric WSS vectors were calculated in software phantoms. WSS algorithm parameters were optimized

  1. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study presents a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and the CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of their thermal performance are similar under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and its superiority would be amplified as the scale of the water system increases.
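
    As a rough illustration of the parameter identification step (not the paper's model, whose exact two-parameter form is not reproduced here), the sketch below fits an assumed two-parameter cooling capacity expression to synthetic data with SciPy, whose unbounded least-squares fit uses the Levenberg-Marquardt algorithm.

        import numpy as np
        from scipy.optimize import curve_fit

        # Assumed two-parameter capacity model: Q = c1 * m_air**c2 * dh
        def cooling_capacity(X, c1, c2):
            m_air, dh = X                  # air mass flow, enthalpy difference
            return c1 * m_air**c2 * dh

        # Illustrative measurements: (m_air [kg/s], dh [kJ/kg]) -> Q [kW]
        m_air = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
        dh = np.array([30.0, 32.0, 35.0, 33.0, 36.0])
        Q_meas = np.array([21.0, 33.0, 46.0, 52.0, 63.0])

        # method="lm" selects the Levenberg-Marquardt algorithm
        (c1, c2), _ = curve_fit(cooling_capacity, (m_air, dh), Q_meas,
                                p0=(0.5, 1.0), method="lm")
        print(f"fitted parameters: c1={c1:.3f}, c2={c2:.3f}")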

  2. Bending Moment Calculations for Piles Based on the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-xin Jie

    2013-01-01

    Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, a pile, and a sheet pile wall were made to investigate methods of computing bending moments. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil, so higher-order elements are not always necessary in the computation. The number of grid divisions across the pile section is important for bending moments calculated from stress, and less significant for those calculated from displacement. Although computing the bending moment from displacement requires fewer grid divisions across the pile section, it sometimes yields fluctuating results. For displacement calculations, a pile row can be suitably represented by an equivalent sheet pile wall, although the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis. A simple illustration of the two computational routes is sketched below.
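
    A minimal numerical illustration of the two routes, with an assumed section and deflection curve rather than ABAQUS output:

        import numpy as np

        E, b, h = 30e9, 0.25, 0.25        # Young's modulus [Pa], section [m]
        I = b*h**3/12.0                   # second moment of area [m^4]

        # (1) stress route: M = integral of sigma*y over the cross-section
        y = np.linspace(-h/2, h/2, 9)     # "grids across the pile section"
        sigma = 4.0e6*(y/(h/2))           # assumed linear bending stress [Pa]
        M_stress = np.trapz(sigma*y*b, y)

        # (2) displacement route: M = E*I*w'' from the deflection curve w(z)
        z = np.linspace(0.0, 10.0, 101)
        w = 1.0e-4*z**2                   # assumed lateral deflection [m]
        M_disp = E*I*np.gradient(np.gradient(w, z), z)

        print(M_stress, M_disp[50])

    The two data sets are unrelated; the point is only that the stress route integrates over the section (and so is sensitive to the number of grid divisions across it), while the displacement route differentiates the deflection curve twice.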

  3. Nonlinear optimization method of ship floating condition calculation in wave based on vector

    Science.gov (United States)

    Ding, Ning; Yu, Jian-xing

    2014-08-01

    The ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of vector operations. The resulting formulation is a nonlinear optimization problem, which can be solved using the penalty function method with constant coefficients, and the solution process is accelerated by dichotomy. During the solution process, the ship's displacement and buoyant centre are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord-length theory in order to determine the displacement, the buoyancy centre and the waterline. The draught forming the waterline at each station is found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of the results.
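
    A minimal sketch of the optimization formulation, with a toy hydrostatic model standing in for the paper's hull-surface integration and dichotomy acceleration (all numbers and functional forms are assumptions):

        import numpy as np
        from scipy.optimize import minimize

        W, LCG = 5.0e6, 1.0               # ship weight [N], longitudinal CG [m]

        def hydrostatics(d, t):
            """Toy model: buoyant force [N] and longitudinal buoyant centre [m]
            as smooth functions of draught d [m] and trim t [deg]."""
            B = 1.2e6*d*(1.0 + 0.03*t)
            LCB = 0.8*t + 0.1*d
            return B, LCB

        def objective(x, mu=100.0):
            d, t = x
            B, LCB = hydrostatics(d, t)
            # penalty-style sum of squared equilibrium residuals
            return (B - W)**2/W**2 + mu*(LCB - LCG)**2

        sol = minimize(objective, x0=np.array([4.0, 0.0]), method="Nelder-Mead")
        print("draught %.3f m, trim %.3f deg" % tuple(sol.x))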

  4. Humidity Response of Polyaniline Based Sensor

    Directory of Open Access Journals (Sweden)

    Mamta PANDEY

    2010-02-01

    This paper presents the hitherto unreported humidity sensing capacity of the emeraldine salt form of polyaniline. Humidity plays a major role in many industrial processes, from food to electronic goods, besides affecting human comfort, and its monitoring is therefore an essential requirement in various processes. Polyaniline is widely used for making sensors, as it can be easily synthesized and has long-term stability. Polyaniline was synthesized here by a chemical route and is found to sense humidity, showing a variation in electrical resistance with relative humidity. Results are presented for the range 15 to 90% RH. The resistance falls from 5.8 to 0.72 GΩ as the RH varies from 15 to 65%, and then falls to 13.9 MΩ as the RH approaches 90%. The response and recovery times are also measured.

  5. MACK-IV, a new version of MACK: a program to calculate nuclear response functions from data in ENDF/B format

    International Nuclear Information System (INIS)

    Abdou, M.A.; Gohar, Y.; Wright, R.Q.

    1978-07-01

    MACK-IV calculates nuclear response functions important to the neutronics analysis of nuclear and fusion systems. A central part of the code deals with the calculation of the nuclear response function for nuclear heating more commonly known as the kerma factor. Pointwise and multigroup neutron kerma factors, individual reactions, helium, hydrogen, and tritium production response functions are calculated from any basic nuclear data library in ENDF/B format. The program processes all reactions in the energy range of 0 to 20 MeV for fissionable and nonfissionable materials. The program also calculates the gamma production cross sections and the gamma production energy matrix. A built-in computational capability permits the code to calculate the cross sections in the resolved and unresolved resonance regions from resonance parameters in ENDF/B with an option for Doppler broadening. All energy pointwise and multigroup data calculated by the code can be punched, printed and/or written on tape files. Multigroup response functions (e.g., kerma factors, reaction cross sections, gas production, atomic displacements, etc.) can be outputted in the format of MACK-ACTIVITY-Table suitable for direct use with current neutron (and photon) transport codes

  6. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
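
    The essential trick is that dose evaluation inside the optimization loop reduces to a weighted sum over precomputed per-beamlet kernels. A minimal sketch, with random numbers standing in for a real dose engine:

        import numpy as np

        n_beamlets, n_voxels = 50, 10_000
        rng = np.random.default_rng(0)

        # One-time, expensive step (here faked): kernel[i, j] is the dose to
        # voxel j per unit weight of beamlet i, from the accurate engine.
        kernel = rng.random((n_beamlets, n_voxels))*1e-2

        def dose(weights):
            """Fast lookup-based dose used at every optimizer iteration."""
            return weights @ kernel

        w = np.ones(n_beamlets)
        for it in range(100):             # stand-in for the optimizer loop
            d = dose(w)
            w *= 1.0 + 0.01*(1.0 - d.mean())  # toy update; a real objective goes here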

  7. A Lagrangian parcel based mixing plane method for calculating water based mixed phase particle flows in turbo-machinery

    Science.gov (United States)

    Bidwell, Colin S.

    2015-05-01

    A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine. This method allows the prediction of the temperature and phase change of water-based particles along their path, as well as the impingement efficiency and particle impact properties on various components in the engine. The methodology was incorporated into the LEWICE3D V3.5 software and used to predict particle transport in the low pressure compressor of the engine. The engine was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high-bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m, assuming a standard warm day, for ice particle sizes of 5, 20 and 100 microns and a fixed free-stream particle concentration. The impingement efficiency results showed that, as particle size increased, average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size, because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The phase change analysis showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest, 5 micron, particles were warmed enough to melt, with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6%).

  8. Method for stability analysis based on the Floquet theory and Vidyn calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ganander, Hans

    2005-03-01

    This report presents activity 3.7 of the STEM project Aerobig and deals with the aeroelastic stability of the complete wind turbine structure in operation. As wind turbine sizes increase, dynamic couplings become more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy by using optimisation methods lowers design margins, which in turn makes questions about the stability of the turbines more important. The main objective of the project is to develop a general stability analysis tool based on the VIDYN methodology for the turbine dynamic equations and on Floquet theory for the stability analysis. The reason for selecting Floquet theory is that it is independent of the number of blades and can thus be used for two-bladed as well as three-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of reducing fatigue also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed, the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. It is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version. Later versions would include state-space formulations of these forces, as well as of the controllers of turbine rotation speed, yaw direction and pitch angle.
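
    The Floquet machinery itself is compact: integrate the fundamental matrix of the periodic linear system over one period and inspect the eigenvalues of the resulting monodromy matrix. The sketch below applies it to a damped Mathieu equation standing in for the turbine equations; all parameters are assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        T = 2.0*np.pi                      # period of the coefficients

        def A(t):                          # x' = A(t) x, with A(t+T) = A(t)
            delta, eps, c = 1.0, 0.3, 0.05
            return np.array([[0.0, 1.0],
                             [-(delta + eps*np.cos(t)), -c]])

        def rhs(t, phi_flat):
            phi = phi_flat.reshape(2, 2)
            return (A(t) @ phi).ravel()

        sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
        monodromy = sol.y[:, -1].reshape(2, 2)
        multipliers = np.linalg.eigvals(monodromy)
        print("Floquet multipliers:", np.abs(multipliers))
        print("stable:", bool(np.all(np.abs(multipliers) < 1.0)))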

  9. Calculation of the yearly energy performance of heating systems based on the European Building Energy Directive and related CEN Standards

    DEFF Research Database (Denmark)

    Olesen, Bjarne W.; de Carli, Michele

    2011-01-01

    According to the Energy Performance of Buildings Directive (EPBD), all new European buildings (residential, commercial, industrial, etc.) must since 2006 have an energy declaration based on the calculated energy performance of the building, including the heating, ventilating, cooling and lighting systems. This energy declaration must refer to the primary energy use or CO2 emissions. The European Organization for Standardization (CEN) has prepared a series of standards for energy performance calculations for buildings and systems; this paper presents the standards related to heating systems. The additional losses of the heating system can amount to up to 20% of the building energy demand, depending on the type of heat emitter, type of control, pump and boiler. Keywords: Heating systems; CEN standards; Energy performance; Calculation methods

  10. Analysis of Bi-directional Effects on the Response of a Seismic Base Isolation System

    International Nuclear Information System (INIS)

    Park, Hyung-Kui; Kim, Jung-Han; Kim, Min Kyu; Choi, In-Kil

    2014-01-01

    The floor response spectrum (FRS) depends on the height of the floor in the structure and on the characteristics of the seismic base isolation system, such as its natural frequency and damping ratio. In a previous study, the floor response spectrum of a base-isolated structure was calculated for each axis without considering the bi-directional effect. However, the shear behaviors of the seismic base isolation system in the two horizontal directions are correlated with each other through bi-directional effects, and if the shear behavior of the isolation system changes, it can influence the floor response spectrum and the displacement response of the isolators. In this study, the response of the seismic base isolation system including bi-directional effects was analyzed. The time history results confirm that, while the maximum shear force of the seismic base isolation system is unchanged, the shear force is generally lower in the two-directional analysis than in the one-directional analysis. Owing to this overall decrease in shear force, the floor response spectrum is more reduced in the two-directional case than in the one-directional case.

  11. Energetics and performance of a microscopic heat engine based on exact calculations of work and heat distributions

    International Nuclear Information System (INIS)

    Chvosta, Petr; Holubec, Viktor; Ryabov, Artem; Einax, Mario; Maass, Philipp

    2010-01-01

    We investigate a microscopic motor based on an externally controlled two-level system. One cycle of the motor operation consists of two strokes. Within each stroke, the two-level system is in contact with a given thermal bath and its energy levels are driven at a constant rate. The time evolutions of the occupation probabilities of the two states are controlled by one rate equation and represent the system's response with respect to the external driving. We give the exact solution of the rate equation for the limit cycle and discuss the emerging thermodynamics: the work done on the environment, the heat exchanged with the baths, the entropy production, the motor's efficiency, and the power output. Furthermore we introduce an augmented stochastic process which reflects, at a given time, both the occupation probabilities for the two states and the time spent in the individual states during the previous evolution. The exact calculation of the evolution operator for the augmented process allows us to discuss in detail the probability density for the work performed during the limit cycle. In the strongly irreversible regime, the density exhibits important qualitative differences with respect to the more common Gaussian shape in the regime of weak irreversibility
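
    The driven rate equation and the stroke-wise work and heat integrals can be reproduced numerically in a few lines. The sketch below iterates cycles of a two-stroke protocol to the limit cycle; the Glauber-type rates and all parameter values are assumptions, not the paper's exact model.

        import numpy as np
        from scipy.integrate import solve_ivp

        r0, E_lo, E_hi, tau = 1.0, 0.5, 2.0, 4.0   # base rate, splittings, stroke time
        strokes = [(2.0, E_lo, E_hi), (0.5, E_hi, E_lo)]  # (bath T, E start, E end)

        def run_cycle(p1):
            work = heat = 0.0
            for T, Ea, Eb in strokes:
                rate = (Eb - Ea)/tau
                E = lambda t: Ea + rate*t
                def dp(t, p):    # detailed-balance (Glauber-type) rates
                    w_up = r0/(1.0 + np.exp(E(t)/T))
                    w_dn = r0/(1.0 + np.exp(-E(t)/T))
                    return [w_up*(1.0 - p[0]) - w_dn*p[0]]
                sol = solve_ivp(dp, (0.0, tau), [p1], dense_output=True, rtol=1e-9)
                ts = np.linspace(0.0, tau, 2001)
                ps = sol.sol(ts)[0]
                work += np.trapz(ps*rate, ts)                    # dW = p1 dE
                heat += np.trapz(E(ts)*np.gradient(ps, ts), ts)  # dQ = E dp1
                p1 = ps[-1]
            return p1, work, heat

        p1 = 0.5
        for _ in range(200):               # relax onto the limit cycle
            p1_new, W, Q = run_cycle(p1)
            if abs(p1_new - p1) < 1e-12:
                break
            p1 = p1_new
        print(f"work on environment per cycle: {-W:.4f}, heat absorbed: {Q:.4f}")

    Over the limit cycle the internal energy change vanishes, so -W should equal Q; checking that equality is a quick sanity test of the integration.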

  12. Electronic, Magnetic, and Transport Properties of Polyacrylonitrile-Based Carbon Nanofibers of Various Widths: Density-Functional Theory Calculations

    Science.gov (United States)

    Partovi-Azar, P.; Panahian Jand, S.; Kaghazchi, P.

    2018-01-01

    Edge termination of graphene nanoribbons is a key factor in determination of their physical and chemical properties. Here, we focus on nitrogen-terminated zigzag graphene nanoribbons resembling polyacrylonitrile-based carbon nanofibers (CNFs) which are widely studied in energy research. In particular, we investigate magnetic, electronic, and transport properties of these CNFs as functions of their widths using density-functional theory calculations together with the nonequilibrium Green's function method. We report on metallic behavior of all the CNFs considered in this study and demonstrate that the narrow CNFs show finite magnetic moments. The spin-polarized electronic states in these fibers exhibit similar spin configurations on both edges and result in spin-dependent transport channels in the narrow CNFs. We show that the partially filled nitrogen dangling-bond bands are mainly responsible for the ferromagnetic spin ordering in the narrow samples. However, the magnetic moment becomes vanishingly small in the case of wide CNFs where the dangling-bond bands fall below the Fermi level and graphenelike transport properties arising from the π orbitals are recovered. The magnetic properties of the CNFs as well as their stability have also been discussed in the presence of water molecules and the hexagonal boron nitride substrate.

  13. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k{sub eff} estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  14. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  15. [Calculation and analysis of arc temperature field of pulsed TIG welding based on Fowler-Milne method].

    Science.gov (United States)

    Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang

    2012-09-01

    Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analyzing the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, and the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was then derived. Arc images at 794.8 nm were captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.

  16. Simulation Of Premi Calculation Claims Insurance Base On Web; Case Study PT. Sinarmas Insurance Padang

    OpenAIRE

    Rohendi, Keukeu; Putra, Ilham Eka

    2016-01-01

    PT. Sinarmas Insurance currently offers several featured insurance services. To fulfil its function as a good insurance company, its services need reform: insurance premiums are calculated by marketing staff using a hand calculator, which interferes with marketing activities; printing of insurance policies is slow; the automobile claims process requires the customer to come to the ASM office; printing of Work Orders (SPK) is slow; and it is difficult to recap custome...

  17. Study on the acceleration of the neutronics calculation based on GPGPU

    International Nuclear Information System (INIS)

    Ohoka, Y.; Tatsumi, M.

    2007-01-01

    The cost of reactor physics calculations tends to become higher with more detailed treatment of the physics models and computational algorithms. For example, SCOPE2 requires considerably high computational costs for multi-group transport calculation in 3-D pin-by-pin geometry. In this paper, the applicability of GPGPU to the acceleration of neutronics calculations is discussed. First, the performance and accuracy of basic matrix calculations with fundamental arithmetic operators and the exponential function are studied. The calculations were performed on a machine with a 3.2 GHz Pentium 4 and an nVIDIA GeForce 7800 GTX GPU, using a test program written in C++, OpenGL and GLSL on Linux. When the matrix size becomes large, the calculation on the GPU is 10-50 times faster than that on the CPU for the fundamental arithmetic operators. For the exponential function, the calculation on the GPU is 270-370 times faster than on the CPU. The precision in all cases is equivalent to that on the CPU, meeting the IEEE 754 single-precision criterion (10⁻⁶). Next, GPGPU is applied to a functional module in SCOPE2; as a first step, calculations on a small geometry are tested. The performance gain from GPGPU in this application was relatively modest, approximately 15%, compared to the feasibility study. This is because the part to which GPGPU was applied had a structure appropriate for GPGPU implementation but carried only a small fraction of the computational load. For more substantial acceleration, it is important to consider factors such as ease of implementation, the fraction of the computational load, and the data-transfer bottleneck between GPU and CPU. (authors)

  18. Understanding the interfacial properties of graphene-based materials/BiOI heterostructures by DFT calculations

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Wen-Wu [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Zhao, Zong-Yan, E-mail: zzy@kmust.edu.cn [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Jiangsu Provincial Key Laboratory for Nanotechnology, Nanjing University, Nanjing 210093 (China)

    2017-06-01

    Highlights: • Heterostructure construction is an effective way to enhance photocatalytic performance. • Graphene-like materials and BiOI were brought into contact and formed van der Waals heterostructures. • Band edge positions of GO/g-C{sub 3}N{sub 4} and BiOI changed to form standard type-II heterojunctions. • 2D materials can promote the separation of photo-generated electron-hole pairs in BiOI. - Abstract: Heterostructure construction is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and to couple the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of the conduction band. To address this problem, it is worthwhile to find materials that possess a suitable band gap, a proper band edge position, and high carrier mobility to combine with BiOI into heterostructures. In this study, graphene-based materials (graphene, graphene oxide, and g-C{sub 3}N{sub 4}) were chosen as candidates for this purpose. The charge transfer, interface interaction, and band offsets are analyzed in detail by DFT calculations. Results indicated that the graphene-based materials and BiOI come into contact and form van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C{sub 3}N{sub 4} and BiOI shift with the Fermi level and form standard type-II heterojunctions. In addition, the overall analysis of the charge density difference, Mulliken populations, and band offsets indicated that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, combining BiOI with 2D materials to construct heterostructures not only makes use of their uniquely high electron mobility, but also can adjust the position of the energy bands.

  19. Calculation and analysis of the source term of the reactor core based on different data libraries

    International Nuclear Information System (INIS)

    Chen Haiying; Zhang Chunming; Wang Shaowei; Lan Bing; Liu Qiaofeng; Han Jingru

    2014-01-01

    The nuclear fuel in the reactor core produces a large amount of radioactive nuclides in the fission process. ORIGEN-S can calculate the accumulation and decay of radioactive nuclides in the core using various forms of data libraries, including card-image libraries, binary libraries, and ORIGEN-S cross-section libraries generated by ARP through interpolation. In this paper, the contents of each data library are described, and the reactor core inventory was calculated using the card-image library and the ARP library. The radioactivity concentrations of typical nuclides as functions of fuel burnup were analyzed. The results showed that the influence of the data library on the calculated nuclide radioactivity varies. Compared with the card-image library, the ARP library gave larger radioactivities for a small fraction of the nuclides, while the calculated radioactivities of 134Cs and 136Cs were smaller by about 15%. For some typical nuclides, the difference between the radioactivities calculated with the two libraries increased with fuel burnup, although the ratios of the radioactivities evolved differently. (authors)

  20. Effectiveness of a computer based medication calculation education and testing programme for nurses.

    Science.gov (United States)

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne

    2012-01-01

    The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time, and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  1. Environmentally responsible behavior of nature-based tourists: A review

    Directory of Open Access Journals (Sweden)

    Lee, T.H.

    2013-03-01

    This study assesses the conceptualization of environmentally responsible behavior and methods for measuring such behavior, based on a review of previous studies. Four major scales for measuring the extent to which an individual's behavior is environmentally responsible are discussed. Various theoretical backgrounds and cultures provide diverse conceptualizations of environmentally responsible behavior. Both general and site-specific environmentally responsible behaviors have been identified in past studies. This study also discusses the antecedents of environmentally responsible behavior and, with a general overview, provides insight for improving future research on this subject.

  2. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor

    Science.gov (United States)

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-01

    An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor, 3T-2CN, is reported. Sensor 3T-2CN showed both naked-eye recognition and a ratiometric fluorescence response for CN- with excellent selectivity and high sensitivity. The sensing mechanism, based on the nucleophilic attack of CN- on the vinyl C=C bond, has been successfully confirmed by optical measurements, 1H NMR titration and FT-IR spectra, as well as by DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, which is much lower than the maximum permitted concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3T-2CN were fabricated, which can act as a practical and efficient solid-state optical sensor for CN- in field measurements.

  3. Monte Carlo based electron treatment planning and cutout output factor calculations

    Science.gov (United States)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions, due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients, as well as to calculate cutout output factors, reducing the need for clinical measurements. The present work is incorporated into a research MC calculation system, the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.

  4. Python-based framework for coupled MC-TH reactor calculations

    International Nuclear Information System (INIS)

    Travleev, A.A.; Molitor, R.; Sanchez, V.

    2013-01-01

    We have developed a set of Python packages to provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be grouped by functionality into three layers: low-level interfaces, general model classes and high-level interfaces. A low-level interface handles the communication between Python and a particular code. General model classes are used to describe the calculation geometry and the meshes that represent system variables. High-level interface classes convert geometry described with the general model classes into instances of the low-level interface classes, and put the results of code calculations (read by the low-level interface classes) back into the general model. The implementation of Python interfaces to the Monte Carlo neutronics code MCNP and the thermal-hydraulics code SCF allows efficient description of calculation models and provides a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces.
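
    The three-layer split can be illustrated with a minimal, self-contained sketch. The class and method names below are illustrative stand-ins, not the package's real API, and the two code interfaces are stubs rather than actual MCNP/SCF wrappers.

        from dataclasses import dataclass, field

        @dataclass
        class GeneralModel:                    # general model layer
            """Code-agnostic description of the pin model."""
            mesh: list                         # axial mesh boundaries [cm]
            power: list = field(default_factory=list)      # from neutronics
            t_coolant: list = field(default_factory=list)  # from T-H

        class MCNPInterface:                   # low-level interface (stub)
            def run(self, model):
                # would write an MCNP input, run the code, parse tallies
                return [1.0 for _ in model.mesh]

        class SCFInterface:                    # low-level interface (stub)
            def run(self, model):
                # would write an SCF input, run the code, parse temperatures
                return [560.0 + 5.0*p for p in model.power]

        def couple(model, neutronics, th, n_iter=5):   # high-level driver
            for _ in range(n_iter):            # fixed-point (Picard) iteration
                model.power = neutronics.run(model)
                model.t_coolant = th.run(model)
            return model

        pin = couple(GeneralModel(mesh=[0.0, 50.0, 100.0]),
                     MCNPInterface(), SCFInterface())
        print(pin.t_coolant)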

  5. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    International Nuclear Information System (INIS)

    Puigdomenech, I.; Bruno, J.

    1995-04-01

    Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar, using estimated ΔrC°p,m values for the mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of the entropy and heat capacity values in the data base have also been estimated, and therefore the temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs

  6. Analytic models of spectral responses of fiber-grating-based interferometers on FMC theory.

    Science.gov (United States)

    Zeng, Xiangkai; Wei, Lai; Pan, Yingjun; Liu, Shengping; Shi, Xiaohui

    2012-02-13

    In this paper, analytic models (AMs) of the spectral responses of fiber-grating-based interferometers are derived from the recently proposed Fourier mode coupling (FMC) theory. The interferometers include Fabry-Perot cavities, Mach-Zehnder and Michelson interferometers, constructed from uniform fiber Bragg gratings and long-period fiber gratings, as well as from Gaussian-apodized ones. Spectra calculated from the analytic models are compared with measured spectra and with spectra obtained by the transfer matrix (TM) method. The calculations and comparisons confirm that the AM-based spectrum is in excellent agreement with the TM-based one and with the measured case, while the computational efficiency is improved by up to ~2990 times relative to the TM method for non-uniform-grating-based in-fiber interferometers.

  7. Semi-empirical Calculation of Detection Efficiency for Voluminous Source Based on Effective Solid Angle Concept

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D.; Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To calculate the full energy (FE) absorption peak efficiency for arbitrary volume samples, we developed and verified the Effective Solid Angle (ESA) code. The procedure for the semi-empirical determination of the FE efficiency for arbitrary volume sources, together with the calculation principles and processes of the ESA code, was described previously, and the code was validated with an HPGe detector (relative efficiency 32%, n-type) in earlier studies. In this study, we use HPGe detectors of different types and efficiencies in order to verify the performance of the ESA code for various detectors. We calculated the efficiency curves of voluminous sources and compared them with experimental data. We will carry out additional validation by measuring CRM volume sources of various media, volumes and shapes with detectors of different efficiencies and types. In the near future we will also account for the effect of the dead layer of the p-type HPGe detector and include a coincidence summing correction technique.

  8. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

    The parallel burnup calculation program MCBMPI was developed. The program was modularized: the parallel MCNP5 program MCNP5MPI was employed as the neutron transport calculation module, and a composite of three solution methods was used to solve the burnup equations, i.e. the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI parallel zone decomposition strategy was included in the program. The program system consists only of MCNP5MPI and a burnup subroutine; the latter performs three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program is capable of burnup calculations over multiple zones, and that the computational efficiency can be significantly improved as computer hardware develops. (authors)
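
    Of the three solution methods, the matrix exponential technique is the most compact to illustrate: over a step of length dt with constant reaction rates, the nuclide vector advances as N(dt) = exp(A dt) N(0). The sketch below uses a toy three-nuclide chain with assumed rates, not MCBMPI data.

        import numpy as np
        from scipy.linalg import expm

        lam1, lam2 = 1.0e-5, 2.0e-6     # decay constants [1/s], assumed
        sigphi = 3.0e-9                 # transmutation rate sigma*phi [1/s], assumed

        # Columns sum to zero, so total atoms are conserved in this toy chain.
        A = np.array([[-(lam1 + sigphi), 0.0,   0.0],
                      [lam1,            -lam2,  0.0],
                      [sigphi,           lam2,  0.0]])

        N0 = np.array([1.0e24, 0.0, 0.0])   # initial atom densities
        dt = 30*24*3600.0                   # one-month burnup step [s]
        N = expm(A*dt) @ N0
        print(N)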

  9. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with that using a conventional CPU, even a native GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
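
    The leapfrog update pattern that such work ports to the GPU is easiest to see in one dimension. A minimal normalized-units sketch (the 3-D scheme updates six staggered field components with the same structure):

        import numpy as np

        nx, nt = 400, 600
        ez = np.zeros(nx)                # electric field
        hy = np.zeros(nx - 1)            # magnetic field, staggered grid
        S = 0.5                          # Courant number

        for n in range(nt):
            hy += S*np.diff(ez)                       # H update (curl of E)
            ez[1:-1] += S*np.diff(hy)                 # E update (curl of H)
            ez[nx//4] += np.exp(-((n - 30)/10.0)**2)  # soft Gaussian source

        print(float(np.abs(ez).max()))

    Within each half-step every grid point's update is independent of the others, which is what makes the method map so well onto thousands of GPU threads.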

  10. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  11. Characterization of Ferrofluid-based Stimuli-responsive Elastomers

    OpenAIRE

    Sandra dePedro; Xavier Munoz-Berbel; Rosalia Rodríguez-Rodríguez; Jordi Sort; Jose Antonio Plaza; Juergen Brugger; Andreu Llobera; Victor J Cadarso

    2016-01-01

    Stimuli-responsive materials undergo physicochemical and/or structural changes when a specific actuation is applied. They are heterogeneous composites, consisting of a non-responsive matrix in which functionality is provided by the filler. Surprisingly, the synthesis of polydimethylsiloxane (PDMS)-based stimuli-responsive elastomers (SRE) has seldom been presented. Here, we present the structural, biological, optical, magnetic, and mechanical properties of several magnetic SRE (M-SRE) obtained...

  12. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without visiting the affected locations. However, this can be laborious if the polls are not properly automated. Since the answers given to these polls are subjective, and a number of them have already been classified for past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used: a classifier based on supervised learning techniques, namely a decision tree algorithm, trained on a group of polls based on the MMI and EMS-98 scales. The model summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), with each node splitting the space depending on the possible answers. It was implemented with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of the polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, basing it on just one scale, for instance the MMI. Furthermore, integrating this automatic seismic intensity methodology, once it reaches a low error probability, with a basic georeferencing system will allow preliminary isoseismal maps to be generated.
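
    The classifier in question is Weka's J48 (C4.5); a scikit-learn decision tree with the entropy criterion is a close stand-in and makes the workflow easy to sketch. The poll data below are synthetic placeholders.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        # each row: one poll's categorical answers coded as integers
        X = rng.integers(0, 4, size=(300, 8))      # 8 questions, 4 answer codes
        y = (X[:, 0] + X[:, 3] > 3).astype(int) + (X[:, 5] > 2)  # toy intensity

        clf = DecisionTreeClassifier(criterion="entropy")  # C4.5-like splits
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"10-fold accuracy: {scores.mean():.2f}")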

  13. Calculation of intercepted runoff depth based on stormwater quality and environmental capacity of receiving waters for initial stormwater pollution management.

    Science.gov (United States)

    Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi

    2017-11-01

    While point source pollution has gradually been brought under control in recent years, the non-point source pollution problem has become increasingly prominent. Receiving waters are frequently polluted by initial stormwater from separate stormwater systems and by wastewater entering stormwater pipes from sewage pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved urgently for initial stormwater pollution management. Accurate calculation of the intercepted runoff depth provides a solid foundation for selecting appropriately sized intercepting facilities in drainage and interception projects. This study establishes a model of a separate stormwater system for the Yishan Building watershed of Fuzhou City using InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the discharge at the outlet after each rainfall. The intercepted runoff depth is calculated from the stormwater quality and the environmental capacity of the receiving waters. The average intercepted runoff depth over six rainfall events is calculated as 4.1 mm based on stormwater quality, and as 4.4 mm based on the environmental capacity of the receiving waters. The intercepted runoff depth thus differs depending on the aspect considered; its selection depends on the goal of water quality control, the self-purification capacity of the water bodies, and other factors of the region.

  14. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    International Nuclear Information System (INIS)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H.

    2016-01-01

    A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(p-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10⁻² to 1.0 × 10⁻⁵ M, with a detection limit of 8.5 × 10⁻⁶ M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product, 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in a pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigation of the degradation pathway of phenazopyridine with sufficient confirmation scans. • To avoid time-consuming experimental trials, computational studies have been applied. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.

  15. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    Energy Technology Data Exchange (ETDEWEB)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H., E-mail: Ahmed.hussienabdelazim@hotmil.com

    2016-04-01

    A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(p-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10{sup −2} to 1.0 × 10{sup −5} M, with a detection limit of 8.5 × 10{sup −6} M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product, 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in a pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigation of the degradation pathway of phenazopyridine with sufficient confirmation scans. • To avoid time-consuming experimental trials, computational studies have been applied. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.

  16. The Effect of Indium Concentration on the Structure and Properties of Zirconium Based Intermetallics: First-Principles Calculations

    Directory of Open Access Journals (Sweden)

    Fuda Guo

    2016-01-01

    Full Text Available The phase stability, mechanical, electronic, and thermodynamic properties of In-Zr compounds have been explored using first-principles calculations based on density functional theory (DFT). The calculated formation enthalpies show that these compounds are all thermodynamically stable. Information on the electronic structure indicates that they possess metallic characteristics, and there is a common hybridization between In-p and Zr-d states near the Fermi level. Elastic properties have also been taken into consideration. The calculated results on the ratio of the bulk to shear modulus (B/G) validate that InZr3 has the strongest deformation resistance. The increase of indium content results in a roughly linear decrease of the bulk modulus and Young's modulus. The calculated theoretical hardness of α-In3Zr is higher than that of the other In-Zr compounds.
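
    Where the record above screens deformation resistance with the bulk-to-shear-modulus ratio B/G, the short sketch below illustrates that step for a cubic crystal using Voigt-averaged moduli and Pugh's empirical ductility threshold of about 1.75; the elastic constants used are placeholder values, not data from the paper.

        # Sketch: Pugh's B/G screening for cubic crystals (Voigt averages).
        # The elastic constants below are illustrative placeholders, not
        # values from the In-Zr study.

        def voigt_moduli_cubic(c11, c12, c44):
            """Voigt-average bulk and shear moduli (GPa) for a cubic crystal."""
            bulk = (c11 + 2.0 * c12) / 3.0
            shear = (c11 - c12 + 3.0 * c44) / 5.0
            return bulk, shear

        def pugh_classification(bulk, shear, threshold=1.75):
            """Pugh criterion: B/G above ~1.75 suggests ductile behaviour."""
            ratio = bulk / shear
            return ratio, ("ductile" if ratio > threshold else "brittle")

        if __name__ == "__main__":
            b, g = voigt_moduli_cubic(c11=140.0, c12=90.0, c44=50.0)  # placeholder GPa
            ratio, label = pugh_classification(b, g)
            print(f"B = {b:.1f} GPa, G = {g:.1f} GPa, B/G = {ratio:.2f} ({label})")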

  17. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    Science.gov (United States)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each node of a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in the self-consistent field iterations. Our method enables DFT calculations of the CMOs of large molecules to be performed very efficiently on massively distributed-memory parallel computers.
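
    As a rough illustration of the low-rank pivoted Cholesky step named above (a generic textbook algorithm, not the authors' implementation), the following numpy sketch factorizes a symmetric positive semidefinite matrix to a given tolerance.

        import numpy as np

        def pivoted_cholesky(a, tol=1e-8):
            """Low-rank pivoted Cholesky of a symmetric PSD matrix a.

            Returns L (n x k) with a ~ L @ L.T, stopping when the largest
            remaining diagonal element falls below tol.
            """
            a = np.array(a, dtype=float, copy=True)
            n = a.shape[0]
            d = np.diag(a).copy()
            vectors = []
            for _ in range(n):
                p = int(np.argmax(d))
                if d[p] <= tol:
                    break
                col = a[:, p].copy()
                for v in vectors:          # subtract contributions of earlier vectors
                    col -= v * v[p]
                v = col / np.sqrt(d[p])
                vectors.append(v)
                d -= v * v                 # update the remaining diagonal
            return np.column_stack(vectors) if vectors else np.zeros((n, 0))

        # Usage: factorize a random rank-10 PSD matrix, check the reconstruction.
        rng = np.random.default_rng(0)
        m = rng.standard_normal((50, 10))
        a = m @ m.T
        L = pivoted_cholesky(a)
        print(L.shape, np.max(np.abs(a - L @ L.T)))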

  18. A nodal method based on the matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  19. Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling

    International Nuclear Information System (INIS)

    Xu Wei-Xin; Li Yang; Zhang, John Z. H.

    2012-01-01

    Potential of mean force (PMF) with respect to localized reaction coordinates (RCs) such as distance is often applied to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculation of PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method and the umbrella sampling method for the calculation of PMF as a function of a global RC from the coarse-grained Langevin dynamics simulations for a model protein. The method yields the folding free energy profile projected onto a global RC, which is in accord with benchmark results. With this method rare global events would be sufficiently sampled because the biased potential can be used for restricting the global conformation to specific regions during free energy calculations. The strategy presented can also be utilized in calculating the global intra- and intermolecular PMF at more detailed levels. (cross-disciplinary physics and related areas of science and technology)
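
    For readers unfamiliar with the combination described above, the sketch below shows the standard self-consistent WHAM iteration for merging umbrella-sampling windows with known bias energies into one unbiased 1-D free energy profile; the histograms, bias energies and units are illustrative assumptions, not the paper's data.

        import numpy as np

        kT = 1.0                                   # energies in units of kT

        def wham_1d(hists, bias, n_samples, n_iter=2000, tol=1e-10):
            """Standard WHAM iteration for umbrella sampling.

            hists:      (n_windows, n_bins) histogram counts per window
            bias:       (n_windows, n_bins) bias energy at the bin centers
            n_samples:  (n_windows,) total samples per window
            Bins must be covered by sampling in at least one window.
            Returns the unbiased PMF on the bin grid (up to a constant).
            """
            f = np.zeros(len(hists))               # per-window free-energy shifts
            c = np.exp(-bias / kT)                 # Boltzmann factors of the biases
            for _ in range(n_iter):
                # unbiased probability estimate assembled from all windows
                num = hists.sum(axis=0)
                den = (n_samples[:, None] * np.exp(f / kT)[:, None] * c).sum(axis=0)
                p = num / den
                f_new = -kT * np.log((c * p).sum(axis=1))
                f_new -= f_new[0]                  # fix the arbitrary constant
                if np.max(np.abs(f_new - f)) < tol:
                    f = f_new
                    break
                f = f_new
            return -kT * np.log(p / p.max())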

  20. Performance of SOPPA-based methods in the calculation of vertical excitation energies and oscillator strengths

    DEFF Research Database (Denmark)

    Sauer, Stephan P. A.; Pitzner-Frydendahl, Henrik Frank; Buse, Mogens

    2015-01-01

    The performance of three SOPPA-based methods, the original SOPPA method as well as SOPPA(CCSD) and RPA(D), in the calculation of vertical electronic excitation energies and oscillator strengths is investigated for a large benchmark set of 28 medium-size molecules with 139 singlet and 71 triplet excited states. The results are compared...

  1. Simulation of Space Charge Effects in Electron Optical System Based on the Calculations of Current Density

    Czech Academy of Sciences Publication Activity Database

    Zelinka, Jiří; Oral, Martin; Radlička, Tomáš

    2015-01-01

    Vol. 21, S4 (2015), pp. 246-251. ISSN 1431-9276 R&D Projects: GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords: electron optical system * calculations of current density Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering Impact factor: 1.730, year: 2015

  2. Development of a risk-based mine closure cost calculation model

    CSIR Research Space (South Africa)

    Du Plessis, A

    2006-06-01

    Full Text Available This research is important because currently there are a number of mines that do not have sufficient financial provision to close and rehabilitate the mines. The magnitude of the lack of funds could be reduced or eliminated if the closure cost calculation...

  3. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals

    Czech Academy of Sciences Publication Activity Database

    Červinka, C.; Fulem, Michal; Růžička, K.

    2016-01-01

    Vol. 144, No. 6 (2016), pp. 1-15, Article No. 064505. ISSN 0021-9606 Institutional support: RVO:68378271 Keywords: density-functional theory * organic oxygen compounds * quantum-mechanical calculations Subject RIV: BJ - Thermodynamics Impact factor: 2.965, year: 2016

  4. APPLICATION OF THE SPECTROMETRIC METHOD FOR CALCULATING THE DOSE RATE FOR CREATING CALIBRATION HIGHLY SENSITIVE INSTRUMENTS BASED ON SCINTILLATION DETECTION UNITS

    Directory of Open Access Journals (Sweden)

    R. V. Lukashevich

    2017-01-01

    Full Text Available Devices based on scintillation detectors are highly sensitive to photon radiation and are widely used to measure the environmental dose rate. Modernization of the measuring path to minimize the error in measuring the detector response to gamma radiation has already reached its technological ceiling and no longer gives the desired effect. More promising for this purpose are new methods of processing the obtained spectrometric information. The purpose of this work is the development of highly sensitive instruments based on scintillation detection units using a spectrometric method for calculating dose rate. In this paper we consider the spectrometric method of gamma radiation dosimetry based on the transformation of the measured instrumental spectrum. Using predetermined or measured functions of the detector response to gamma radiation of a given energy and flux density, a certain function of the energy, G(E), is determined. Using this function as the kernel of the integral transformation from the field characteristic to the dose characteristic, the dose value can be obtained directly from the current instrumental spectrum. Applying the function G(E) to the energy distribution of the photon radiation fluence in the environment, the total dose rate can be determined without information on the distribution of radioisotopes in the environment. To determine G(E), the instrumental response functions of the scintillation detector to monoenergetic photon radiation sources, as well as other characteristics, are calculated by the Monte Carlo method. The whole energy range is then divided into sub-ranges for which the function G(E) is calculated using linear interpolation. The article thus considers a spectrometric method for dose calculation using the function G(E), which allows the use of scintillation detection units for a wide range of dosimetry applications, and describes the method of calculating this function by Monte Carlo methods.
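
    A minimal sketch of the final step described above: once a G(E) table exists, the dose rate follows from a channel-by-channel weighting of the measured spectrum. The energy calibration, G(E) node values, and synthetic spectrum below are illustrative placeholders, not instrument data.

        import numpy as np

        # Illustrative inputs: a synthetic pulse-height spectrum (counts per
        # channel per second) and a linear energy calibration of the channels.
        rng = np.random.default_rng(0)
        counts_per_s = rng.poisson(50.0, size=1024).astype(float)   # placeholder
        energies_keV = 3.0 + 2.0 * np.arange(counts_per_s.size)    # placeholder

        # G(E) given as a coarse table (keV -> dose-rate weight), linearly
        # interpolated over the instrument's energy range, as in the record above.
        g_nodes_keV = np.array([50, 100, 300, 662, 1250, 3000])
        g_values = np.array([0.02, 0.05, 0.2, 0.6, 1.3, 3.1])      # placeholder units
        g_of_e = np.interp(energies_keV, g_nodes_keV, g_values)

        # Dose rate as the G(E)-weighted sum over the instrumental spectrum.
        dose_rate = np.sum(counts_per_s * g_of_e)
        print(f"dose rate ~ {dose_rate:.3g} (units set by G(E))")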

  5. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the effective dose in patients accurately. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system by comparing it to Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for dose calculation.

  6. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    International Nuclear Information System (INIS)

    Xu, H; Guerrero, M; Prado, K; Yi, B

    2016-01-01

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to the applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs, which were then converted to air-gap factors for SSDs of 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.

  7. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H; Guerrero, M; Prado, K; Yi, B [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to the applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs, which were then converted to air-gap factors for SSDs of 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
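
    As a rough illustration of the fitting step described in the record above (the record does not give the actual fit orders or data), the sketch below fills in "missing" cutout factors from a few measured ones with a low-order polynomial.

        import numpy as np

        # Measured cutout factors at a few square cutout sizes (cm); the values
        # below are illustrative placeholders, not the study's data.
        measured_sizes = np.array([3.0, 4.0, 6.0, 10.0])
        measured_factors = np.array([0.962, 0.978, 0.991, 1.000])

        # Fit a low-order polynomial to the measured points ...
        coeffs = np.polyfit(measured_sizes, measured_factors, deg=2)

        # ... and approximate cutout factors excluded from the minimum data
        # set (here, hypothetical 5 and 8 cm cutouts inside the fit range).
        missing_sizes = np.array([5.0, 8.0])
        approx_factors = np.polyval(coeffs, missing_sizes)
        for s, f in zip(missing_sizes, approx_factors):
            print(f"cutout {s:4.1f} cm: approximated factor {f:.3f}")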

  8. Application of perturbation theory to lattice calculations based on method of cyclic characteristics

    Science.gov (United States)

    Assawaroongruengchot, Monchai

    computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and k_eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for the CVR-BOC and k_eff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and k_eff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR

  9. Application of perturbation theory to lattice calculations based on method of cyclic characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Assawaroongruengchot, M

    2007-07-01

    computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and k_eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for the CVR-BOC and k_eff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and k_eff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these

  10. Application of perturbation theory to lattice calculations based on method of cyclic characteristics

    International Nuclear Information System (INIS)

    Assawaroongruengchot, M.

    2007-01-01

    computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and k_eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for the CVR-BOC and k_eff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and k_eff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR

  11. PID Controller Settings Based on a Transient Response Experiment

    Science.gov (United States)

    Silva, Carlos M.; Lito, Patricia F.; Neves, Patricia S.; Da Silva, Francisco A.

    2008-01-01

    An experimental work on controller tuning for chemical engineering undergraduate students is proposed using a small heat exchange unit. Based upon process reaction curves in an open-loop configuration, the system gain and time constant are determined for a first-order model with time delay with excellent accuracy. Afterwards students calculate PID…
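
    A sketch of what such a calculation might look like: given the gain K, time constant tau, and delay theta identified from the process reaction curve, classical open-loop tuning rules yield the PID settings. The Ziegler-Nichols rule shown here is one common choice; the record does not say which rule the students use, and the example numbers are placeholders.

        def ziegler_nichols_pid(K, tau, theta):
            """Classical Ziegler-Nichols open-loop PID settings for a
            first-order-plus-dead-time model G(s) = K e^{-theta s} / (tau s + 1).

            Returns (Kc, Ti, Td): controller gain, integral and derivative times.
            """
            Kc = 1.2 * tau / (K * theta)
            Ti = 2.0 * theta
            Td = 0.5 * theta
            return Kc, Ti, Td

        # Placeholder reaction-curve estimates for a small heat exchanger.
        K, tau, theta = 0.8, 120.0, 15.0       # gain (degC/%), seconds, seconds
        Kc, Ti, Td = ziegler_nichols_pid(K, tau, theta)
        print(f"Kc = {Kc:.2f}, Ti = {Ti:.0f} s, Td = {Td:.1f} s")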

  12. Calculations of reactivity based on the solution of the neutron transport equation in X-Y geometry and linear perturbation theory

    International Nuclear Information System (INIS)

    Valle G, E. del; Mugica R, C.A.

    2005-01-01

    In our country, at recent congresses, Gomez et al. carried out reactivity calculations based on the solution of the diffusion equation for one energy group using nodal methods in one dimension and the TPL approach (Linear Perturbation Theory). Later on, Mugica extended the application to the multigroup case, in one as well as in two dimensions (X-Y geometry), with excellent results. In the present work similar calculations are carried out, this time based on the solution of the neutron transport equation in X-Y geometry using nodal methods and again the TPL approximation. The idea is to provide a calculation method that allows the reactivity to be obtained quickly, by solving the direct problem as well as the adjoint problem of the unperturbed system. A test problem is described, for which results for the effective multiplication factor are provided, and some conclusions are offered. (Author)
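
    For context, the first-order (linear) perturbation estimate that such methods evaluate has the standard generic form below; this is the textbook expression, not necessarily the paper's exact formulation.

        % First-order perturbation estimate of reactivity. Here \phi is the
        % unperturbed forward flux, \phi^{\dagger} the adjoint flux of the
        % unperturbed problem, A and F the loss and fission production
        % operators, k the unperturbed multiplication factor, and \delta A,
        % \delta F the perturbations of the operators.
        \Delta\rho \approx
          \frac{\left\langle \phi^{\dagger},
                \left( \frac{1}{k}\,\delta F - \delta A \right) \phi \right\rangle}
               {\left\langle \phi^{\dagger},\; \frac{1}{k}\, F \phi \right\rangle}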

  13. The Fundamentals of a Business Model Based on Responsible Investments

    Directory of Open Access Journals (Sweden)

    Vadim Dumitrascu

    2016-03-01

    Full Text Available The harmonization of profitability and social responsibility is possible if companies adopt and practice adequate business models. “Responsible profitability” must also benefit from management tools that guide the business, based on objective decision-making criteria, towards sustainable economic behaviors. The simultaneous increase of the specific economic added value generated by a socially responsible investment (SRI) project and of the responsible intensity of economic employment reflects the company's strong commitment to the authentic sustainable development path.

  14. Seismic Response Analysis and Design of Structure with Base Isolation

    International Nuclear Information System (INIS)

    Rosko, Peter

    2010-01-01

    The paper reports a study on the seismic response and energy distribution of a multi-story civil structure. The nonlinear analysis used the 2003 Bam earthquake acceleration record as the excitation input to the structural model. The displacement response was analyzed in the time domain and in the frequency domain. The displacement and its derivatives yield the energy components. The energy distribution in each story provides useful information for a structural upgrade with the help of added devices. The objective is the minimization of the structural displacement response. The application of this structural seismic response research is presented in a base-isolation example.

  15. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)

    2014-06-15

    Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic proton and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations to ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
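
    A rough sketch of the tissue-characterization quantities involved, using generic textbook definitions rather than the authors' calibration: the electron density relative to water and a Mayneord-type effective atomic number computed from elemental mass fractions. The exponent m ~ 3.1 and the tissue composition are assumptions for illustration.

        # Electron densities are proportional to sum_i w_i * Z_i / A_i.
        Z = {"H": 1, "C": 6, "N": 7, "O": 8}
        A = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}
        WATER = {"H": 0.112, "O": 0.888}

        def electrons_per_gram(mass_fractions):
            return sum(w * Z[el] / A[el] for el, w in mass_fractions.items())

        def relative_electron_density(rho, mass_fractions, rho_water=1.0):
            """Electron density relative to water."""
            return (rho * electrons_per_gram(mass_fractions)
                    / (rho_water * electrons_per_gram(WATER)))

        def z_eff(mass_fractions, m=3.1):
            """Mayneord-type effective atomic number; the exponent m is a
            common assumption in the DECT literature, not the paper's value."""
            lam = {el: w * Z[el] / A[el] for el, w in mass_fractions.items()}
            total = sum(lam.values())
            return sum(l / total * Z[el] ** m for el, l in lam.items()) ** (1.0 / m)

        # Placeholder soft-tissue composition (mass fractions).
        tissue = {"H": 0.10, "C": 0.20, "N": 0.04, "O": 0.66}
        print(relative_electron_density(1.04, tissue), z_eff(tissue))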

  16. Modelling of nonhomogeneous atmosphere in NPP containment using lumped-parameter model based on CFD calculations

    International Nuclear Information System (INIS)

    Kljenak, I.; Mavko, B.; Babic, M.

    2005-01-01

    Full text of publication follows: The modelling and simulation of atmosphere mixing and stratification in nuclear power plant containments is a topic which is currently being intensely investigated. With the increase of computer power, it has now become possible to model these phenomena with a local instantaneous description, using so-called Computational Fluid Dynamics (CFD) codes. However, calculations with these codes still take relatively long times. An alternative, faster approach, which is also being applied, is to model a nonhomogeneous atmosphere with lumped-parameter codes by dividing larger control volumes into smaller volumes in which conditions are modelled as homogeneous. The flow between smaller volumes is modelled using one-dimensional approaches, which include the prescription of flow loss coefficients. However, some authors have questioned this approach, as it appears that atmosphere stratification may sometimes be well simulated only by adjusting the flow loss coefficients to adequate 'artificial' values that are case-dependent. To start resolving this issue, a modelling of the nonhomogeneous atmosphere with a lumped-parameter code is proposed in which the subdivision of a large volume into smaller volumes is based on the results of CFD simulations. The basic idea is to use the results of a CFD simulation to define regions in which the flow velocities have roughly the same direction. These regions are then modelled as control volumes in a lumped-parameter model. In the proposed work, this procedure was applied to the simulation of an experiment on atmosphere mixing and stratification performed in the TOSQAN facility. The facility is located at the Institut de Radioprotection et de Surete Nucleaire (IRSN) in Saclay (France) and consists of a cylindrical vessel (volume: 7 m³) into which gases are injected. In the experiment, which was also proposed as the OECD/NEA International Standard Problem No. 47, air was initially present in the vessel, and

  17. Calculation and experimental technique for forecasting the bipolar digital integrated circuit response; Raschetno-ehksperimental`nyj metod prognozirovaniya reaktsii bipolyarnykh TsIS

    Energy Technology Data Exchange (ETDEWEB)

    Butin, V I; Trofimov, Eh N

    1994-12-31

    Typical responses of bipolar digital integrated circuits (DIC) of the combination type under the action of pulsed gamma radiation are presented. An analysis of the DIC transients is carried out. A calculation-experimental method for forecasting the temporary loss of serviceability of bipolar DICs is proposed. The reliability of the method is confirmed experimentally. 1 fig.

  18. Development of internal dose calculation model and the data base updated IDES (Internal Dose Estimation System)

    International Nuclear Information System (INIS)

    Hongo, Shozo; Yamaguchi, Hiroshi; Takeshita, Hiroshi; Iwai, Satoshi.

    1994-01-01

    A computer program named IDES was developed in BASIC for personal computers and translated into C for an engineering workstation. IDES carries out the internal dose calculations described in ICRP Publication 30 and implements the transformation method, an empirical method to estimate absorbed fractions for physiques different from the ICRP Reference Man. The program consists of three tasks: production of SAFs (specific absorbed fractions) for Japanese subjects, including children; production of SEEs (Specific Effective Energies); and calculation of effective dose equivalents. Each task and its corresponding data file appear as a module so as to meet future requirements for revisions of the related data. The usefulness of IDES is discussed by exemplifying the case in which five age groups of Japanese orally ingest Co-60 or Mn-54. (author)

  19. Navier-Stokes calculations on multi-element airfoils using a chimera-based solver

    Science.gov (United States)

    Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.

    1993-01-01

    A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids, which allow great flexibility of grid arrangement and simplify grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two-element case. Solutions are obtained using the thin-layer form of the Reynolds-averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one-equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good agreement with experimental data, which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.

  20. Calculation of T_C in a normal-superconductor bilayer using the microscopic-based Usadel theory

    International Nuclear Information System (INIS)

    Martinis, John M.; Hilton, G.C.; Irwin, K.D.; Wollman, D.A.

    2000-01-01

    The Usadel equations give a theory of superconductivity, valid in the diffusive limit, that is a generalization of the microscopic equations of the BCS theory. Because the theory is expressed in a tractable and physical form, even experimentalists can analytically and numerically calculate detailed properties of superconductors in physically relevant geometries. Here, we describe the Usadel equations and review their solution in the case of predicting the transition temperature T_C of a thin normal-superconductor bilayer. We also extend this calculation to thicker bilayers to show the dependence on the resistivity of the films. These results, which show a dependence on both the interface resistance and the heat capacity of the films, provide important guidance on fabricating bilayers with reproducible transition temperatures.

  1. Analysis of the computational methods on the equipment shock response based on ANSYS environments

    International Nuclear Information System (INIS)

    Wang Yu; Li Zhaojun

    2005-01-01

    With the development of equipment shock and vibration theory, mathematical calculation methods, simulation techniques and other aspects, equipment shock calculation methods are gradually developing from static to dynamic and from linear to non-linear. Now, the equipment shock calculation methods applied worldwide in engineering practice mostly include the equivalent static force method, the Dynamic Design Analysis Method (abbreviated to DDAM) and the real-time simulation method. The DDAM is a method based on modal analysis theory, which inputs the shock design spectrum as the shock load and obtains the shock response of the integrated system by applying a separate cross-modal integration method in the frequency domain. The real-time simulation method carries out the computational analysis of the equipment shock response in the time domain; it uses time-history curves obtained from real-time measurement or spectrum transformation as the equipment shock load and finds an iterative solution of the differential equation of the system motion by using a computational procedure in the time domain. Conclusions: Using DDAM and the real-time simulation method separately, this paper carried out the shock analysis of a three-dimensional frame floating raft in the ANSYS environment, analyzed the results, and drew the following conclusions. Because DDAM does not account for damping, non-linear effects, or phase differences between modal responses, its result is much larger than that of the real-time simulation method. The coupling response is more complex when the modal result of a three-dimensional structure is calculated, and the coupling response in the non-shock direction is also much larger than that of the real-time simulation method when DDAM is applied. Both DDAM and the real-time simulation method have their strong points and scopes of application. Designers should select the design method that is economical and appropriate according to the features and anti
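
    A minimal sketch of the time-domain ("real-time simulation") approach for a single-degree-of-freedom idealization, using the standard Newmark-beta average-acceleration scheme; the mass, damping, stiffness, and half-sine shock pulse below are illustrative assumptions, not the paper's model.

        import numpy as np

        # SDOF system: m*a + c*v + k*x = f(t), integrated with Newmark-beta
        # (average acceleration: beta = 1/4, gamma = 1/2).
        m, c, k = 100.0, 50.0, 4.0e5          # placeholder kg, N*s/m, N/m
        dt, n = 1e-4, 5000
        beta, gamma = 0.25, 0.5

        t = np.arange(n) * dt
        # Half-sine shock pulse, 10 ms duration, 5 kN peak (illustrative).
        f = np.where(t < 0.01, 5e3 * np.sin(np.pi * t / 0.01), 0.0)

        x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
        a[0] = (f[0] - c * v[0] - k * x[0]) / m
        k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
        for i in range(n - 1):
            # effective load assembled from the known state at step i
            p = (f[i+1]
                 + m * (x[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (1/(2*beta) - 1) * a[i])
                 + c * (gamma * x[i] / (beta * dt) + (gamma/beta - 1) * v[i]
                        + dt * (gamma/(2*beta) - 1) * a[i]))
            x[i+1] = p / k_eff
            a[i+1] = ((x[i+1] - x[i]) / (beta * dt**2) - v[i] / (beta * dt)
                      - (1/(2*beta) - 1) * a[i])
            v[i+1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i+1])

        print(f"peak displacement: {np.max(np.abs(x))*1e3:.2f} mm")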

  2. Calculations of the hurricane eye motion based on singularity propagation theory

    Directory of Open Access Journals (Sweden)

    Vladimir Danilov

    2002-02-01

    Full Text Available We discuss the possibility of using singularity calculations to forecast the dynamics of hurricanes. Our basic model is the shallow-water system. By treating the hurricane eye as a vortex-type singularity and truncating the corresponding sequence of Hugoniot-type conditions, we carry out many numerical experiments. The comparison of our results with the tracks of three actual hurricanes shows that our approach is rather fruitful.

  3. Dose reconstruction in radioactively contaminated areas based on radiation transport calculations and measurements

    International Nuclear Information System (INIS)

    Hiller, Mauritius Michael

    2015-01-01

    The external radiation exposure at the former village of Metlino, Russia, was reconstructed. The Techa river in Metlino was contaminated by water from the Majak plant. The village was evacuated in 1956 and a reservoir lake was created. Absorbed doses in bricks were measured, and a model of present-day and historic Metlino was created for Monte Carlo calculations. By combining both, the air kerma at the shoreline could be reconstructed to evaluate the Techa River Dosimetry System.

  4. A generalized approach for the calculation and automation of potentiometric titrations. Part 1. Acid-Base Titrations

    NARCIS (Netherlands)

    Stur, J.; Bos, M.; van der Linden, W.E.

    1984-01-01

    Fast and accurate calculation procedures for pH and redox potentials are required for optimum control of automatic titrations. The procedure suggested is based on a three-dimensional titration curve V = f(pH, redox potential). All possible interactions between species in the solution, e.g., changes

  5. Accurate pKa Calculation of the Conjugate Acids of Alkanolamines, Alkaloids and Nucleotide Bases by Quantum Chemical Methods

    NARCIS (Netherlands)

    Gangarapu, S.; Marcelis, A.T.M.; Zuilhof, H.

    2013-01-01

    The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum

  6. GIS supported calculations of 137Cs deposition in Sweden based on precipitation data

    International Nuclear Information System (INIS)

    Almgren, S.; Nilsson, E.; Isaksson, M.; Erlandsson, B.

    2005-01-01

    137Cs deposition maps were made using Kriging interpolation in a Geographical Information System (GIS). Quarterly values of 137Cs deposition density per unit precipitation (Bq/m²/mm) at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations of Nuclear Weapons Fallout (NWF). The deposition density of 137Cs resulting from the Chernobyl accident was calculated for western Sweden using precipitation data from 46 stations. The lowest levels of NWF 137Cs deposition density were noted in northeastern and eastern Sweden and the highest levels in the western parts of Sweden. The Chernobyl 137Cs deposition density is highest along the coast in the selected area and lowest in the southeastern part and along the middle. The sum of the calculated deposition densities from NWF and Chernobyl in western Sweden was compared to accumulated activities in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values

  7. Mathematical Calculations Of Heat Transfer For The CNC Deposition Platform Based On Chemical Thermal Method

    Science.gov (United States)

    Essa, Mohammed Sh.; Chiad, Bahaa T.; Hussein, Khalil A.

    2018-05-01

    Chemical thermal deposition techniques depend strongly on the deposition platform temperature as well as on the substrate surface temperature, so in this research the thermal distribution and heat transfer were calculated to optimize the deposition platform temperature distribution, determine the power required for the heating element, and improve thermal homogeneity. Furthermore, the thermal power dissipated from the deposition platform was calculated. A thermal imager (thermal camera) was used to estimate the thermal distribution, as well as the temperature distribution over the 400 cm² heated plate area. To reach a plate temperature of 500 °C, the plate was supported with an electrical heater of 2000 W power. A stainless steel plate of 12 mm thickness was used as the heated plate and deposition platform; laboratory tests with an X-ray fluorescence (XRF) element analyzer confirmed its elemental composition and identified the stainless steel grade as 316L. The total heat loss calculated at this temperature was 612 W. A homemade heating element was used to heat the plate and reached 450 °C in less than 15 min, as recorded by the system; temperatures were recorded and monitored using an Arduino UNO microcontroller with a MAX6675 cold-junction-compensated K-thermocouple-to-digital converter.
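
    A rough sketch of the kind of steady-state loss estimate quoted above, using a generic convection-plus-radiation balance; the heat transfer coefficient, emissivity, and ambient temperature are assumptions, not the paper's boundary conditions, so the output compares to the quoted 612 W only loosely.

        SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

        def plate_heat_loss(T_plate_C, T_amb_C, area_m2, h_conv=10.0, emissivity=0.6):
            """Steady-state convective + radiative loss from a hot plate surface.

            h_conv (W/m^2/K) and emissivity are illustrative assumptions.
            """
            T_p = T_plate_C + 273.15
            T_a = T_amb_C + 273.15
            q_conv = h_conv * area_m2 * (T_p - T_a)
            q_rad = emissivity * SIGMA * area_m2 * (T_p**4 - T_a**4)
            return q_conv + q_rad

        # 400 cm^2 = 0.04 m^2 plate face at 500 degC in 25 degC surroundings.
        print(f"estimated loss ~ {plate_heat_loss(500.0, 25.0, 0.04):.0f} W")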

  8. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks. Neural network A is used to learn the integrand function, and neural network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, normalization of the performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second-moment method have demonstrated that the proposed method is an efficient and accurate method for structural reliability problems.
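
    For reference, the Monte Carlo baseline such methods are compared against is simple to state: the failure probability is the fraction of random samples for which the performance function g(x) falls at or below zero. The limit-state function and distributions below are illustrative assumptions, not the paper's examples.

        import numpy as np

        def mc_failure_probability(g, sample, n=1_000_000, seed=0):
            """Estimate P_f = P(g(X) <= 0) by direct Monte Carlo sampling."""
            rng = np.random.default_rng(seed)
            x = sample(rng, n)
            return np.count_nonzero(g(x) <= 0.0) / n

        # Illustrative limit state: resistance R minus load effect S, both normal.
        def sample_rs(rng, n):
            r = rng.normal(200.0, 20.0, n)   # resistance, placeholder units
            s = rng.normal(150.0, 15.0, n)   # load effect
            return np.stack([r, s])

        g = lambda x: x[0] - x[1]            # failure when load exceeds resistance
        pf = mc_failure_probability(g, sample_rs)
        # For this linear-normal case the exact reliability index is
        # beta = (200 - 150) / sqrt(20**2 + 15**2) = 2.0, i.e. P_f ~ 2.28e-2,
        # which the estimate should reproduce.
        print(f"P_f ~ {pf:.3e}")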

  9. Basic Research about Calculation of the Decommissioning Unit Cost based on The KRR-2 Decommissioning Project

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Hee-Seong; Ha, Jea-Hyun; Jin, Hyung-Gon; Park, Seung-Kook

    2015-01-01

    KAERI calculates decommissioning costs and manages decommissioning experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). Some countries, such as Japan and the United States, have information on NPP decommissioning experience and publish reports on decommissioning cost analysis. These reports are valuable data against which decommissioning unit costs can be compared. In particular, Korea needs a method to estimate NPP decommissioning costs because it has no NPP decommissioning experience; such a method would make more precise prediction of the decommissioning unit cost possible. Still, there are many differences between domestic and foreign calculations of the decommissioning unit cost. Typically, it is difficult to compare data because the published reports are not detailed. Therefore, the field of decommissioning cost estimation has to use a unified framework so that accurate decommissioning costs can be provided.

  10. Basic Research about Calculation of the Decommissioning Unit Cost based on The KRR-2 Decommissioning Project

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Hee-Seong; Ha, Jea-Hyun; Jin, Hyung-Gon; Park, Seung-Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    KAERI calculates decommissioning costs and manages decommissioning experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). Some countries, such as Japan and the United States, have information on NPP decommissioning experience and publish reports on decommissioning cost analysis. These reports are valuable data against which decommissioning unit costs can be compared. In particular, Korea needs a method to estimate NPP decommissioning costs because it has no NPP decommissioning experience; such a method would make more precise prediction of the decommissioning unit cost possible. Still, there are many differences between domestic and foreign calculations of the decommissioning unit cost. Typically, it is difficult to compare data because the published reports are not detailed. Therefore, the field of decommissioning cost estimation has to use a unified framework so that accurate decommissioning costs can be provided.

  11. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    Science.gov (United States)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET_ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET_lys). The measured data were compared with ET_ref calculations. Daily values differed slightly over the year: ET_ref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is supported by other studies as well. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET_ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing the deviations.
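
    For orientation, the daily standardized reference evapotranspiration equation the record refers to has the well-known FAO-56/ASCE form; a sketch for the short (grass) reference surface is below, assuming all meteorological inputs are already available in the listed units. The example input values are placeholders.

        def et_ref_daily(delta, Rn, G, gamma, T_mean, u2, es, ea):
            """Standardized daily reference ET (mm/day) for the short (grass)
            reference surface, FAO-56 / ASCE-EWRI form.

            delta: slope of the saturation vapour pressure curve (kPa/degC)
            Rn, G: net radiation and soil heat flux (MJ/m^2/day)
            gamma: psychrometric constant (kPa/degC)
            T_mean: mean daily air temperature (degC)
            u2:    wind speed at 2 m (m/s)
            es, ea: saturation and actual vapour pressure (kPa)
            """
            num = (0.408 * delta * (Rn - G)
                   + gamma * (900.0 / (T_mean + 273.0)) * u2 * (es - ea))
            den = delta + gamma * (1.0 + 0.34 * u2)
            return num / den

        # Illustrative mid-summer day for a subhumid site (placeholder values).
        print(f"ET_ref ~ {et_ref_daily(0.18, 15.0, 0.0, 0.066, 22.0, 2.0, 2.64, 1.80):.2f} mm/day")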

  12. Critical and subcritical mass calculations of fissionable nuclides based on JENDL-3.2+

    International Nuclear Information System (INIS)

    Okuno, H.

    2002-01-01

    We calculated critical and subcritical masses of 10 fissionable actinides (233U, 235U, 238Pu, 239Pu, 241Pu, 242mAm, 243Cm, 244Cm, 249Cf and 251Cf) in metal and in metal-water mixtures (except 238Pu and 244Cm). The calculation was made with a combination of a continuous energy Monte Carlo neutron transport code, MCNP-4B2, and the latest released version of the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI, JEF-2.2, and JENDL-3.3 in its preliminary version, were also applied to find differences in results originating from different nuclear data files. For the so-called big three fissiles (233U, 235U and 239Pu), the code-library combination was validated by analyzing the criticality experiments cited in the ICSBEP Handbook, and calculation errors were consequently evaluated. Estimated critical and lower-limit critical masses of the big three in a sphere with/without a water or SS-304 reflector were supplied, and they were compared with the subcritical mass limits of ANS-8.1. (author)

  13. First-principles calculations of bulk and interfacial thermodynamic properties for fcc-based Al-Sc alloys

    International Nuclear Information System (INIS)

    Asta, M.; Foiles, S.M.; Quong, A.A.

    1998-01-01

    The configurational thermodynamic properties of fcc-based Al-Sc alloys and coherent Al/Al3Sc interphase-boundary interfaces have been calculated from first principles. The computational approach used in this study combines the results of pseudopotential total-energy calculations with a cluster-expansion description of the alloy energetics. Bulk and interface configurational-thermodynamic properties are computed using a low-temperature-expansion technique. Calculated values of the {100} and {111} Al/Al3Sc interfacial energies at zero temperature are, respectively, 192 and 226 mJ/m². The temperature dependence of the calculated interfacial free energies is found to be very weak for {100} and more appreciable for {111} orientations; the primary effect of configurational disordering at finite temperature is to reduce the degree of crystallographic anisotropy associated with the calculated interfacial free energies. The first-principles-computed solid-solubility limits for Sc in bulk fcc Al are found to be significantly underestimated in comparison with experimental measurements. It is argued that this discrepancy can be largely attributed to nonconfigurational contributions to the entropy which have been neglected in the present thermodynamic calculations. copyright 1998 The American Physical Society

  14. Rose-like I-doped Bi₂O₂CO₃ microspheres with enhanced visible light response: DFT calculation, synthesis and photocatalytic performance

    International Nuclear Information System (INIS)

    Zai, Jiantao; Cao, Fenglei; Liang, Na; Yu, Ke; Tian, Yuan; Sun, Huai; Qian, Xuefeng

    2017-01-01

    Highlights: • DFT reveals that I⁻ can partially substitute CO₃²⁻ to narrow the bandgap of Bi₂O₂CO₃. • Sodium citrate plays a key role in the formation of rose-like I-doped Bi₂O₂CO₃. • Rose-like I-doped Bi₂O₂CO₃ shows an enhanced visible light response. • The catalyst shows enhanced photocatalytic activity towards organic and Cr(VI) pollutants. - Abstract: Based on the crystal structure and DFT calculations of Bi₂O₂CO₃, I⁻ can partly replace the CO₃²⁻ in Bi₂O₂CO₃ to narrow its bandgap and to enhance its visible light absorption. With this in mind, rose-like I-doped Bi₂O₂CO₃ microspheres were prepared via a hydrothermal process. This method can also be extended to synthesize rose-like Cl- or Br-doped Bi₂O₂CO₃ microspheres. Photoelectrochemical tests support the DFT calculation result that I⁻ doping narrows the bandgap of Bi₂O₂CO₃ by forming two intermediate levels in its forbidden band. Further study reveals that I-doped Bi₂O₂CO₃ microspheres with optimized composition exhibit the best photocatalytic activity: Rhodamine B can be completely degraded within 6 min and about 90% of Cr(VI) can be reduced after 25 min under irradiation with visible light (λ > 400 nm).

  15. Alternate approach for calculating hardness based on residual indentation depth: Comparison with experiments

    Science.gov (United States)

    Ananthakrishna, G.; K, Srikanth

    2018-03-01

    It is well known that plastic deformation is a highly nonlinear, dissipative, irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by computing the residual indentation depth and then taking the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach has the ability to adopt experimental parameters such as the indentation rate and the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with the experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths and therefore avoids the divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that for a range of parameter values that physically represent different materials, the model predicts the three characteristic

  16. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    International Nuclear Information System (INIS)

    Beres, D.A.; Hull, A.P.

    1991-12-01

    DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate the committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section. These selections are available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to provide the user with easy access for reviewing data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer
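
    A sketch of the core arithmetic such a program performs (a generic decay-integral form, not DEPDOSE's actual algorithm or data): the committed dose from a ground deposit is the deposition density times a dose-rate conversion factor, integrated over the exposure period with radioactive decay. All numeric values below are illustrative placeholders.

        import math

        def committed_dose(deposition_Bq_m2, dose_rate_factor, half_life_d, period_d):
            """Committed dose from ground deposition, radioactive decay only
            (weathering and shielding ignored for simplicity).

            dose_rate_factor: dose rate per unit deposition, Sv/h per Bq/m^2;
            the decay integral converts it to an effective exposure time.
            """
            lam = math.log(2.0) / half_life_d                     # 1/day
            effective_days = (1.0 - math.exp(-lam * period_d)) / lam
            return deposition_Bq_m2 * dose_rate_factor * effective_days * 24.0

        # E.g. Cs-137 (half-life ~11020 d) over 50 years, placeholder factor.
        dose = committed_dose(1.0e4, 5.0e-12, 11020.0, 50 * 365.25)
        print(f"committed dose ~ {dose*1e3:.1f} mSv")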

  17. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    Science.gov (United States)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

    A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low-energy) configurations from a large configurational space before the DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g^(2)(r) and g^(3)(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains the full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with the composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m⁻² is in good agreement with the value inferred from the precipitation model fit to experimental data. The proposed new ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
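
    A rough sketch of the pair-correlation "fingerprint" idea described above: a histogram of interatomic distances per species pair, concatenated into one feature vector. The binning, cutoff, and minimum-image convention for a cubic cell are assumptions for illustration, not the authors' exact featurization.

        import numpy as np
        from itertools import combinations_with_replacement

        def pair_histogram_features(positions, species, box, r_max=6.0, n_bins=30):
            """Concatenated pair-distance histograms, one block per species pair.

            positions: (n, 3) Cartesian coordinates in a cubic box
            species:   length-n list of chemical symbols
            box:       cubic cell edge length (minimum-image convention)
            Like-species pairs are counted twice; fine for a fingerprint.
            """
            kinds = sorted(set(species))
            edges = np.linspace(0.0, r_max, n_bins + 1)
            feats = []
            for a, b in combinations_with_replacement(kinds, 2):
                ia = np.array([i for i, s in enumerate(species) if s == a])
                ib = np.array([i for i, s in enumerate(species) if s == b])
                d = positions[ia][:, None, :] - positions[ib][None, :, :]
                d -= box * np.round(d / box)              # minimum image
                r = np.linalg.norm(d, axis=-1).ravel()
                r = r[r > 1e-8]                           # drop self-distances
                hist, _ = np.histogram(r, bins=edges)
                feats.append(hist)
            return np.concatenate(feats)

        # Usage: features for a random 4-species configuration in a 10 A box.
        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 10.0, size=(32, 3))
        spc = rng.choice(["Ni", "Cr", "Co", "Mo"], size=32).tolist()
        print(pair_histogram_features(pos, spc, box=10.0).shape)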

  18. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    International Nuclear Information System (INIS)

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-01-01

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  19. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
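
    The range metrics used in both records above (R90, R50, and the R80-R20 falloff) are read off a depth-dose curve by locating where the distal falloff crosses a given fraction of the maximum dose. A small sketch follows, with a toy Gaussian standing in for a proton depth-dose curve; it is not the authors' analysis code.

        import numpy as np

        def distal_range(depth, dose, level=0.9):
            """Distal depth (same units as `depth`) where dose falls to `level`*max."""
            depth = np.asarray(depth, dtype=float)
            dose = np.asarray(dose, dtype=float)
            target = level * dose.max()
            # Walk the distal falloff and interpolate the crossing linearly.
            for i in range(dose.argmax(), len(dose) - 1):
                if dose[i] >= target > dose[i + 1]:
                    frac = (dose[i] - target) / (dose[i] - dose[i + 1])
                    return depth[i] + frac * (depth[i + 1] - depth[i])
            return None

        z = np.linspace(0.0, 20.0, 401)                  # depth in cm
        d = np.exp(-((z - 12.0) ** 2) / 4.0)             # toy Bragg-like peak
        r90, r50 = distal_range(z, d, 0.9), distal_range(z, d, 0.5)
        print(r90, r50)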

  20. Model-based calculations of off-axis ratio of conic beams for a dedicated 6 MV radiosurgery unit

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J. N.; Ding, X.; Du, W.; Pino, R. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, Methodist Hospital, Houston, Texas 77030 (United States)

    2010-10-15

    Purpose: Because the small-radius photon beams shaped by cones in stereotactic radiosurgery (SRS) lack lateral electronic equilibrium, and because of a detector's finite cross section, direct experimental measurement of dosimetric data for these beams can be subject to large uncertainties. As the dose calculation accuracy of a treatment planning system largely depends on how well the dosimetric data are measured during the machine's commissioning, there is a critical need for an independent method to validate measured results. Therefore, the authors studied model-based calculation as an approach to validate measured off-axis ratios (OARs). Methods: The authors previously used a two-component analytical model to calculate central axis dose and associated dosimetric data (e.g., scatter factors and tissue-maximum ratio) in a water phantom and found excellent agreement between the calculated and the measured central axis doses for small 6 MV SRS conic beams. The model was based on that of Nizin and Mooij [''An approximation of central-axis absorbed dose in narrow photon beams,'' Med. Phys. 24, 1775-1780 (1997)] but was extended to account for apparent attenuation, spectral differences between broad and narrow beams, and the need for stricter scatter dose calculations for clinical beams. In this study, the authors applied Clarkson integration to this model to calculate OARs for conic beams. OARs were calculated for selected cones with radii from 0.2 to 1.0 cm. To allow comparisons, the authors also directly measured OARs using stereotactic diode (SFD), microchamber, and film dosimetry techniques. The calculated results were machine-specific and independent of direct measurement data for these beams. Results: For these conic beams, the calculated OARs were in excellent agreement with the data measured using an SFD. The discrepancies in radius and in the 80%-20% penumbra width were each within 0.01 cm. Using SFD-measured OARs as the reference data, the
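
    In the spirit of the two-component model cited above, the relative central-axis dose of a narrow conic beam can be written as an exponentially attenuating primary term multiplied by a lateral-equilibrium factor that grows with cone radius as 1 - exp(-beta*r). The sketch below illustrates only this schematic form; the attenuation and buildup constants are illustrative, not the commissioned values of the paper.

        import math

        MU = 0.049      # effective linear attenuation, 1/cm (illustrative, 6 MV)
        BETA = 0.9      # scatter buildup constant, 1/cm (illustrative)

        def central_axis_dose(depth_cm, radius_cm, d0=1.0, dmax_cm=1.5):
            """Relative central-axis dose past dmax for a cone of radius r."""
            primary = d0 * math.exp(-MU * (depth_cm - dmax_cm))
            lateral = 1.0 - math.exp(-BETA * radius_cm)   # lateral-equilibrium factor
            return primary * lateral

        for r in (0.2, 0.5, 1.0):
            print(r, central_axis_dose(10.0, r))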

  1. Analysis of calculated neutron flux response at detectors of G.A. Siwabessy multipurpose reactor (RSG-GAS Reactor)

    International Nuclear Information System (INIS)

    Taryo, Taswanda

    2002-01-01

    The core of the G.A. Siwabessy Multipurpose Reactor (RSG-GAS) is equipped with four fission-chamber detectors that measure the intermediate power level of the reactor. A further fission-chamber detector is intended to measure the reactor power level. The influence of detector position on the neutron flux value registered by each detector measuring the intermediate and power levels was investigated. The calculation was carried out using a combination of the WIMS/D4 and CITATION-3D codes and focused on the neutron flux at the different detector locations of the typical RSG-GAS working core for various scenarios. For all scenarios, the results showed that detectors located at different positions in the RSG-GAS reactor core register different neutron fluxes because of spatial effects.

  2. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    International Nuclear Information System (INIS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2007-01-01

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach
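
    The confidence limits quoted in the abstract translate directly into a pass/fail rule for a point-dose check. A hedged sketch of such a rule follows; the function and its thresholds simply restate the limits in the text (3% of the prescribed dose or 6 cGy, relaxed to 5% or 10 cGy off-axis and in low-dose regions) and are not code from the study.

        def verify_point_dose(d_tps, d_indep, d_prescribed, off_axis=False):
            """Flag an IMRT point-dose check against the stated confidence limits.
            Doses in cGy; returns (passed, percent deviation, absolute deviation)."""
            pct_limit, abs_limit = (5.0, 10.0) if off_axis else (3.0, 6.0)
            diff = d_indep - d_tps
            pct = 100.0 * abs(diff) / d_prescribed
            passed = pct <= pct_limit or abs(diff) <= abs_limit
            return passed, pct, diff

        print(verify_point_dose(d_tps=198.0, d_indep=202.5, d_prescribed=200.0))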

  3. Comparison of Conductor-Temperature Calculations Based on Different Radial-Position-Temperature Detections for High-Voltage Power Cable

    Directory of Open Access Journals (Sweden)

    Lin Yang

    2018-01-01

    Full Text Available In this paper, the calculation of the conductor temperature is related to the position of the temperature sensor in high-voltage power cables, and four thermal circuits, based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, are established to calculate the conductor temperature. To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath were built, and thermocouples were placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures of the four positions. In the measurements, six cases of current heating tests under three laying environments (duct, water, and backfilled soil) were carried out. The errors of both the conductor temperature calculation and the simulation based on the temperature of the insulation shield were significantly smaller than the others under all laying environments. These results are explained by the uncertainty of the thermal resistivity, together with differences in the initial temperature at each radial position caused by solar radiation. The thermal capacitance of the air has little impact on the errors; the thermal resistance of the air gap is the largest error source. Compromising between temperature-estimation accuracy and insulation-damage risk, the waterproof compound is the recommended sensor position for improving the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a recommended sensor position besides the waterproof compound.
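
    In steady state, the temperature measured at any radial position can be projected back to the conductor by adding the temperature drop across the intervening layers, i.e. the conductor loss times the thermal resistance between conductor and sensor. The one-function sketch below shows this single-loss ladder form; it neglects dielectric and sheath losses, and the numbers are illustrative only.

        def conductor_temperature(t_sensor, joule_loss_w_per_m, r_thermal_km_per_w):
            """Steady-state conductor temperature (deg C) from a sensor temperature
            measured at some radial position, given the thermal resistance between
            conductor and sensor (K*m/W) and the conductor Joule loss (W/m)."""
            return t_sensor + joule_loss_w_per_m * r_thermal_km_per_w

        # Illustrative numbers: 45 deg C at the insulation shield, 30 W/m of
        # conductor loss, 0.35 K*m/W of insulation thermal resistance in between.
        print(conductor_temperature(45.0, 30.0, 0.35))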

  4. Very fast mass balance and other fuel cycle response calculations for studying back end of fuel cycle scenarios

    International Nuclear Information System (INIS)

    Dekens, O.; Marguet, S.; Risch, P.

    1997-01-01

    In order to optimize nuclear fuel utilization, as far as irradiation and storage are concerned, the Research and Development Division of Electricite de France (EDF) developed a fast and accurate software tool, STRAPONTIN, that simulates the life of a fuel assembly from its stay inside the reactor to the final repository. The discrepancies between reference calculations and STRAPONTIN are generally smaller than 5%. Moreover, the low calculation time makes it possible to couple STRAPONTIN to any large code, widening its scope without impairing its CPU time. (authors)

  5. Web-Based Applications in Calculation of Family Heritage (Science of Faroidh)

    OpenAIRE

    Zufria, M. Hasan Azhari, Ilka

    2017-01-01

    In the division of inheritance according to the science of faroidh, each heir does not receive the same share; the share depends on the heir's relationship to the deceased. This is because the needs of each heir differ: male and female heirs, for example, receive different shares, the reasoning being that a son bears a greater financial responsibility and, once married, is obliged to provide for his family, while a daughter, once she has a family, is provided for. In addition, in the science of faroidh there i...

  6. A Review of Solid-Solution Models of High-Entropy Alloys Based on Ab Initio Calculations

    Directory of Open Access Journals (Sweden)

    Fuyang Tian

    2017-11-01

    Full Text Available Similar to the importance of XRD in experiments, ab initio calculations have been applied as a powerful tool to predict potential new materials and to investigate the intrinsic properties of materials in theory. As a typical solid-solution material, high-entropy alloys (HEAs) involve a large degree of configurational uncertainty, which makes the application of ab initio calculations to HEAs difficult. The present review focuses on the available ab initio based solid-solution models (virtual lattice approximation, coherent potential approximation, special quasirandom structure, similar local atomic environment, maximum-entropy method, and hybrid Monte Carlo/molecular dynamics) and their applications and limits in single-phase HEAs.

  7. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    Science.gov (United States)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly above the geometric center of the relevant ring plane at a distance of 1.2 Å. The results were compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI and CTED, and were generally found to be in agreement with them. It is therefore proposed that the calculation of the average g-factor, Δg, can be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetism-based aromaticity index.

  8. Calculation of temperature rise for cable conductor of DCS cabinet power based on theory of numerical thermal transfer

    International Nuclear Information System (INIS)

    Tian Yong; Zhang Longqiang; Yang Zhen; Yu Bin

    2014-01-01

    In order to ensure long-term reliable operation of the DCS cabinet's 220 V AC power cable, it was necessary to confirm whether the conductor temperature rise of the power cable meets the requirement of the cable specification. Based on actual site data and the theory of numerical heat transfer, a conservative model was established and the conductor temperature was calculated. The calculation results show that the cable arrangement on the cable tray will not cause the conductor temperature rise of the power cable to exceed the temperature required by the technical specification. (authors)

  9. Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object.

    Science.gov (United States)

    Kim, Hak Gu; Man Ro, Yong

    2017-11-27

    In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the hologram fringe pattern in each 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe pattern at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves 10-20 ms for 1024x1024 pixels while providing visually plausible results.
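
    The speedup claimed above comes from replacing a full-plane fringe evaluation per object point with the accumulation of a small precomputed template at each point position. The sketch below shows that accumulation pattern, using a Fresnel zone-plate patch as a stand-in for the authors' sparse template; the optical parameters are illustrative.

        import numpy as np

        def precompute_template(size=64, wavelength=532e-9, pitch=8e-6, z=0.1):
            """Small Fresnel zone-plate patch used as the per-point fringe template."""
            k = 2.0 * np.pi / wavelength
            ax = (np.arange(size) - size // 2) * pitch
            xx, yy = np.meshgrid(ax, ax)
            return np.exp(1j * k * (xx ** 2 + yy ** 2) / (2.0 * z))

        def layer_hologram(points, plane=1024, template=None):
            """Accumulate the sparse template at each object-point position.
            Points are assumed to lie at least a template-size from the border."""
            t = precompute_template() if template is None else template
            h = np.zeros((plane, plane), dtype=complex)
            s = t.shape[0]
            for (px, py, amp) in points:
                h[py:py + s, px:px + s] += amp * t   # cheap add, no full-plane math
            return h

        pts = [(100, 200, 1.0), (500, 640, 0.7), (800, 300, 0.5)]
        print(np.abs(layer_hologram(pts)).max())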

  10. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    Science.gov (United States)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strong environment-dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.

  11. Convergence study of isogeometric analysis based on Bezier extraction in electronic structure calculations

    Czech Academy of Sciences Publication Activity Database

    Cimrman, R.; Novák, Matyáš; Kolman, Radek; Tůma, Miroslav; Plešek, Jiří; Vackář, Jiří

    2018-01-01

    Vol. 319, Feb (2018), pp. 138-152 ISSN 0096-3003 R&D Projects: GA ČR GA17-12925S; GA ČR(CZ) GAP108/11/0853; GA MŠk(CZ) EF15_003/0000493 Institutional support: RVO:68378271; RVO:61388998; RVO:67985807 Keywords: electronic structure calculation * density functional theory * finite element method * isogeometric analysis OECD field: Condensed matter physics (including formerly solid state physics, supercond.); Materials engineering (UT-L); Applied mathematics (UIVT-O) Impact factor: 1.738, year: 2016

  12. A camera based calculation of 99mTc-MAG-3 clearance using the conjugate views method

    International Nuclear Information System (INIS)

    Hojabr, M.; Rajabi, H.; Eftekhari, M.

    2004-01-01

    Background: Measurement of absolute or differential renal function using radiotracers plays an important role in the clinical management of various renal diseases. Gamma camera quantitative methods, which approximate renal clearance, may potentially be as accurate as plasma clearance methods. However, some critical factors such as kidney depth and background counts remain troublesome in the use of this technique. In this study the conjugate-view method, along with a background correction technique, was used for the measurement of renal activity in 99mTc-MAG3 renography. Transmission data were used for attenuation correction, and the source volume was considered for accurate background subtraction. Materials and methods: The study was performed in 35 adult patients referred to our department for conventional renography and ERPF calculation. Depending on the patient's weight, approximately 10-15 mCi of 99mTc-MAG3 was injected as a sharp bolus, and 60 frames of 1 second followed by 174 frames of 10 seconds were acquired for each patient. Imaging was performed on a dual-head gamma camera (SOLUS; SunSpark10, ADAC Laboratories, Milpitas, CA); anterior and posterior views were acquired simultaneously. A LEHR collimator was used to correct the scatter for the emission and transmission images. The Buijs factor was applied to background counts before background correction (Rutland-Patlak equation). Gamma camera clearance was calculated using renal uptake at 1-2, 1.5-2.5 and 2-3 min. The same procedure was repeated for both renograms obtained from the posterior projection and from conjugate views. The plasma clearance was also directly calculated from three blood samples obtained at 40, 80 and 120 min after injection. Results: 99mTc-MAG3 clearance values from the direct sampling method were used as reference and compared to the results obtained from the renograms. The maximum correlation was found for conjugate-view clearance at 2-3 min (R=0.99, R²=0.98, SE=15). Conventional
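
    The conjugate-view estimate itself is the geometric mean of the background-corrected anterior and posterior counts, corrected for attenuation through the body using the measured transmission. A minimal sketch of that formula follows; the count rates, transmission and camera sensitivity are illustrative values, not data from the study.

        import math

        def conjugate_view_activity(counts_ant, counts_post, transmission, sensitivity):
            """Activity estimate (MBq) from anterior/posterior count rates.
            `transmission` is the fraction of photons crossing the body thickness
            (from a transmission scan, or exp(-mu*d)); `sensitivity` is counts/s
            per MBq."""
            geometric_mean = math.sqrt(counts_ant * counts_post)
            return geometric_mean / (math.sqrt(transmission) * sensitivity)

        # Illustrative numbers: background-corrected kidney ROI count rates.
        print(conjugate_view_activity(1500.0, 900.0, transmission=0.25,
                                      sensitivity=120.0))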

  13. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
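
    A one-compartment model of this kind reduces to dB/dt = intake(t) - λB, with λ set by the biological half-life and the contributions from food, air, smoking and occupation entering through intake(t). The sketch below integrates this with yearly steps; it is not the report's FORTRAN, and the half-life and intake values are placeholders.

        import math

        T_HALF_YEARS = 16.0                      # illustrative biological half-life
        LAM = math.log(2.0) / T_HALF_YEARS       # elimination constant, 1/year

        def body_burden(ages, intake_ug_per_year):
            """Body burden (ug) vs age, one-compartment model, yearly Euler steps:
            dB/dt = intake(t) - LAM * B."""
            burden, out = 0.0, []
            for age in ages:
                burden += intake_ug_per_year(age) - LAM * burden
                out.append(burden)
            return out

        # Placeholder intake: diet plus smoking from age 18.
        intake = lambda age: 15.0 + (20.0 if age >= 18 else 0.0)
        print(body_burden(range(91), intake)[-1])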

  14. REITP3-Hazard evaluation program for heat release based on thermochemical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Akutsu, Yoshiaki.; Tamura, Masamitsu. [The University of Tokyo, Tokyo (Japan). School of Engineering; Kawakatsu, Yuichi. [Oji Paper Corp., Tokyo (Japan); Wada, Yuji. [National Institute for Resources and Environment, Tsukuba (Japan); Yoshida, Tadao. [Hosei University, Tokyo (Japan). College of Engineering

    1999-06-30

    REITP3, a hazard evaluation program for heat release based on thermochemical calculation, has been developed by modifying REITP2 (Revised Estimation of Incompatibility from Thermochemical Properties). The main modifications are as follows. (1) Reactants are retrieved from the database by chemical formula. (2) As products are listed in an external file, products can easily be added and the order of production changed. (3) Part of the program has been changed to allow its use on a personal computer or workstation. These modifications will promote the usefulness of the program for energy hazard evaluation. (author)

  15. Reference quantum chemical calculations on RNA base pairs directly involving the 2'-OH group of ribose

    Czech Academy of Sciences Publication Activity Database

    Šponer, Jiří; Zgarbová, M.; Jurečka, Petr; Riley, K.E.; Šponer, Judit E.; Hobza, Pavel

    2009-01-01

    Vol. 5, No. 4 (2009), pp. 1166-1179 ISSN 1549-9618 R&D Projects: GA AV ČR(CZ) IAA400040802; GA AV ČR(CZ) IAA400550701; GA MŠk(CZ) LC06030; GA MŠk(CZ) LC512 Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702; CEZ:AV0Z40550506 Keywords: RNA * ribose * quantum calculations Subject RIV: BO - Biophysics Impact factor: 4.804, year: 2009

  16. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jing-yong, E-mail: www053991@126.com [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Li, Xiao-ming [Guangdong Testing Institute of Product Quality Supervision, Guangzhou 510330 (China); Chen, Tao [State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Luo, Guang-qian [State Key Laboratory of Coal Combustion, Huazhong University of Science and Technology, Wuhan 430074 (China); Xie, Wu-ming; Wang, Yu-jie; Zhuo, Zhong-xu; Fu, Jie-wen [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China)

    2015-04-15

    Highlights: • A thermodynamic equilibrium calculation was carried out. • Effects of three types of sulfur on Pb distribution were investigated. • The mechanisms by which three types of sulfur act on Pb partitioning were proposed. • Lead partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na₂S and Na₂SO₄) added to the sludge could facilitate the volatilization of Pb in the gas phase (fly ash and flue gas) into metal sulfates displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na₂SO₄ and Na₂S was superior to that of the addition of S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO₄(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO₂, CaO, TiO₂, and Al₂O₃ containing materials function as condensed phase solids in the temperature range of 800–1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the

  17. Microscopic calculations of elastic scattering between light nuclei based on a realistic nuclear interaction

    Energy Technology Data Exchange (ETDEWEB)

    Dohet-Eraly, Jeremy [F.R.S.-FNRS (Belgium); Sparenberg, Jean-Marc; Baye, Daniel, E-mail: jdoheter@ulb.ac.be, E-mail: jmspar@ulb.ac.be, E-mail: dbaye@ulb.ac.be [Physique Nucleaire et Physique Quantique, CP229, Universite Libre de Bruxelles (ULB), B-1050 Brussels (Belgium)

    2011-09-16

    The elastic phase shifts for the α + α and α + ³He collisions are calculated in a cluster approach by the Generator Coordinate Method coupled with the Microscopic R-matrix Method. Two interactions are derived from the realistic Argonne potentials AV8' and AV18 with the Unitary Correlation Operator Method. With a specific adjustment of correlations on the α + α collision, the phase shifts for the α + α and α + ³He collisions agree rather well with experimental data.

  18. Photon and electron data bases and their use in radiation transport calculations

    International Nuclear Information System (INIS)

    Cullen, D.E.; Perkins, S.T.; Seltzer, S.M.

    1992-02-01

    The ENDF/B-VI photon interaction library includes data to describe the interaction of photons with the elements Z=1 to 100 over the energy range 10 eV to 100 MeV. This library has been designed to meet the traditional needs of users to model the interaction and transport of primary photons. However, this library contains additional information which, in combination with our other data libraries, can be used to perform much more detailed calculations, e.g., the emission of secondary fluorescence photons. This paper describes both traditional and more detailed uses of this library

  19. A Microsoft Excel® 2010 Based Tool for Calculating Interobserver Agreement

    Science.gov (United States)

    Azulay, Richard L

    2011-01-01

    This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work. PMID:22649578

  20. A Microsoft Excel® 2010 based tool for calculating interobserver agreement.

    Science.gov (United States)

    Reed, Derek D; Azulay, Richard L

    2011-01-01

    This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel(®)) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
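
    The total count and interval-by-interval algorithms named in both records reduce to a few lines of arithmetic; a sketch in Python (rather than Excel formulas) is given below for readers who want to check a spreadsheet implementation. The example data are invented.

        def total_count_agreement(obs1, obs2):
            """Traditional total count IOA: smaller total over larger total, as %."""
            t1, t2 = sum(obs1), sum(obs2)
            if t1 == t2 == 0:
                return 100.0
            return 100.0 * min(t1, t2) / max(t1, t2)

        def interval_by_interval_agreement(obs1, obs2):
            """Interval-by-interval IOA: percent of intervals scored identically."""
            agree = sum(1 for a, b in zip(obs1, obs2) if a == b)
            return 100.0 * agree / len(obs1)

        print(total_count_agreement([3, 2, 4], [3, 1, 4]))                   # 88.9
        print(interval_by_interval_agreement([1, 0, 1, 1], [1, 0, 0, 1]))    # 75.0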

  1. Lattice dynamics calculations based on density-functional perturbation theory in real space

    Science.gov (United States)

    Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias

    2017-06-01

    A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, the scaling tests and scalability tests on massively parallel computer systems demonstrate the computational efficiency.

  2. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more important and more challenging to ensure. The structural reliability method must therefore be applied in ocean engineering design, such as offshore platform design. If the performance function is known in a structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is commonly used instead, because its reasoning is clear and it is simple to program. However, the traditional response surface method fits a quadratic polynomial whose accuracy cannot be guaranteed, because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed for situations where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole area is constructed first, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Calculated examples and comparative analysis show that the proposed method is much better than the traditional response surface method with quadratic polynomials: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
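
    The overall pattern, fitting a cheap surrogate of the performance function from a limited number of expensive evaluations sampled over the whole area and then estimating the failure probability by sampling the surrogate, can be sketched as follows. The sketch keeps a quadratic polynomial surrogate and plain Monte Carlo for brevity; the paper replaces these with a fuzzy neural network and a genetic algorithm.

        import numpy as np

        rng = np.random.default_rng(1)

        def g(x):                       # performance function; failure when g < 0
            return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

        # Fit a quadratic response surface to a handful of "expensive" evaluations
        # (stand-ins for finite element runs), sampled over the whole input space.
        X = rng.uniform(-3.0, 3.0, size=(60, 2))
        y = g(X)
        F = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, 0] * X[:, 1]])
        coef, *_ = np.linalg.lstsq(F, y, rcond=None)

        def g_hat(x):                   # cheap surrogate of the limit state
            f = np.column_stack([np.ones(len(x)), x, x ** 2, x[:, 0] * x[:, 1]])
            return f @ coef

        # Failure probability estimated by Monte Carlo on the surrogate.
        samples = rng.normal(0.0, 1.0, size=(200000, 2))
        print((g_hat(samples) < 0.0).mean())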

  3. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  4. Opportunity costs calculation in agent-based vehicle routing and scheduling

    NARCIS (Netherlands)

    Mes, Martijn R.K.; van der Heijden, Matthijs C.; Schuur, Peter

    2006-01-01

    In this paper we consider a real-time, dynamic pickup and delivery problem with time windows, where orders should be assigned to one of a set of competing transportation companies. Our approach decomposes the problem into a multi-agent structure where vehicle agents are responsible for the routing and

  5. Calculation of the uncertainty in complication probability for various dose-response models, applied to the parotid gland

    International Nuclear Information System (INIS)

    Schilstra, C.; Meertens, H.

    2001-01-01

    Purpose: Usually, models that predict normal tissue complication probability (NTCP) are fitted to clinical data with the maximum likelihood (ML) method. This method inevitably causes a loss of information contained in the data. In this study, an alternative method is investigated that calculates the parameter probability distribution (PD) and thus conserves all information. The PD method also allows the calculation of the uncertainty in the NTCP, which is an (often neglected) prerequisite for the intercomparison of both treatment plans and NTCP models. The PD and ML methods are applied to parotid gland data, and the results are compared. Methods and Materials: The drop in salivary flow due to radiotherapy was measured in 25 parotid glands of 15 patients. Together with the parotid gland dose-volume histograms (DVH), this enabled the calculation of the parameter PDs for three different NTCP models (Lyman, relative seriality, and critical volume). From these PDs, the NTCP and its uncertainty could be calculated for arbitrary parotid gland DVHs. ML parameters and resulting NTCP values were calculated also. Results: All models fitted equally well. The parameter PDs turned out to have nonnormal shapes and long tails. The NTCP predictions of the ML and PD methods usually differed considerably, depending on the NTCP model and the nature of irradiation. NTCP curves and ML parameters suggested a highly parallel organization of the parotid gland. Conclusions: Considering the substantial differences between the NTCP predictions of the ML and PD methods, the use of the PD method is preferred, because this is the only method that takes all information contained in the clinical data into account. Furthermore, the PD method gives a true measure of the uncertainty in the NTCP

  6. Three-Phase Short-Circuit Current Calculation of Power Systems with High Penetration of VSC-Based Renewable Energy

    Directory of Open Access Journals (Sweden)

    Niancheng Zhou

    2018-03-01

    Full Text Available The short-circuit current level of a power grid will increase with high penetration of VSC-based renewable energy, and the strong coupling between the transient fault process and the control strategy will change the fault features. The full current expression of VSC-based renewable energy was obtained according to the transient characteristics of the short-circuit current. Further, by analyzing the closed-loop transfer function model of the controller and the current-source characteristics presented in steady state during a fault, equivalent circuits of VSC-based renewable energy for the transient and steady fault states were proposed, and the correctness of the theory was verified by experimental tests. In addition, for a power grid with VSC-based renewable energy, the superposition theorem was used to calculate the AC and DC components of the short-circuit current separately, so that the peak value of the short-circuit current could be evaluated effectively. The calculated results can be used for grid planning and design, short-circuit current management, as well as the adjustment of relay protection. Based on a comparison of calculation and simulation results for the 6-node 500 kV Huainan power grid and the 35-node 220 kV Huaisu power grid, the effectiveness of the proposed method was verified.

  7. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to clearly identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for rates and a mean-reverting model for default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing bibliography, where several copula functions are adopted to describe the dependence of the two first-to-default times.
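
    Once an expected exposure profile EE(t) is available (in the paper it comes from least square Monte Carlo under the Hull-White model), the unilateral CVA is the loss given default times the sum of discounted exposures weighted by the marginal default probabilities. A sketch of that final aggregation step, with a flat hazard rate and invented numbers, is shown below.

        import math

        def unilateral_cva(expected_exposure, times, hazard_rate, recovery, rate):
            """CVA = (1 - R) * sum of discounted EE(t_i) times the default
            probability in (t_{i-1}, t_i], with a flat counterparty hazard rate."""
            cva, prev_surv = 0.0, 1.0
            for ee, t in zip(expected_exposure, times):
                surv = math.exp(-hazard_rate * t)
                pd_slice = prev_surv - surv                 # marginal default prob.
                cva += (1.0 - recovery) * math.exp(-rate * t) * ee * pd_slice
                prev_surv = surv
            return cva

        times = [1.0, 2.0, 3.0, 4.0, 5.0]
        ee = [120.0, 180.0, 190.0, 150.0, 80.0]   # expected exposure of a swap
        print(unilateral_cva(ee, times, hazard_rate=0.02, recovery=0.4, rate=0.03))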

  8. Consideration of relativistic effects in band structure calculations based on the empirical tight-binding method

    International Nuclear Information System (INIS)

    Hanke, M.; Hennig, D.; Kaschte, A.; Koeppen, M.

    1988-01-01

    The energy band structure of cadmium telluride and mercury telluride is investigated by means of the tight-binding (TB) method, considering relativistic effects and the spin-orbit interaction. Taking relativistic effects into account in the method is rather simple, though the size of the Hamiltonian matrix doubles. Such considerations are necessary for the interesting narrow-gap semiconductors, and the experimental results are reflected correctly in the band structures. The transformation behaviour of the eigenvectors within the Brillouin zone becomes more complicated but remains theoretically controllable. If, however, the matrix elements of the Green operator are to be calculated, one has to use formula manipulation programs, in particular for the off-diagonal elements. For defect calculations using the Koster-Slater theory of scattering it is necessary to know these matrix elements. Knowledge of the transformation behaviour of the eigenfunctions avoids frequent diagonalization of the Hamiltonian matrix and thus permits a numerical solution of the problem. Corresponding results for the sp³ basis are available

  9. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations.

    Science.gov (United States)

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-01

    Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb in the gas phase (fly ash and flue gas) into metal sulfates displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of the addition of S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si, Ca and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. REVIEW OF ADVANCES IN COBB ANGLE CALCULATION AND IMAGE-BASED MODELLING TECHNIQUES FOR SPINAL DEFORMITIES

    Directory of Open Access Journals (Sweden)

    V. Giannoglou

    2016-06-01

    Full Text Available Scoliosis is a 3D deformity of the human spinal column caused by bending of the spine, leading to pain and to aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work, the first part of a post-doctoral research project, presents the most important research in the field of scoliosis concerning its digital visualisation, with the aim of providing more precise and robust identification and monitoring. The research is divided into four fields: X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine to provide a more accurate representation of the trunk, and the reduction of X-ray radiation exposure during the monitoring of scoliosis. Although many researchers have worked in this field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and perform proper 3D modelling of the spinal column that would support more accurate detection and monitoring of scoliosis.

  11. Evaluation of Accuracy of Calculational Prediction of Criticality Based on ICSBEP Handbook Experiments

    International Nuclear Information System (INIS)

    Golovko, Yury; Rozhikhin, Yevgeniy; Tsibulya, Anatoly; Koscheev, Vladimir

    2008-01-01

    Experiments with plutonium, low enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those that seem most relevant to evaluating the uncertainty of the critical mass of mixtures of plutonium, low enriched uranium or uranium-233 with light water were selected. All selected experiments were examined, covariance matrices of the criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The evaluation of the accuracy of the calculational prediction of criticality was performed using the internally consistent set of experiments with plutonium, low enriched uranium and uranium-233 remaining after the statistical analysis. The application objects for the evaluation were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low enriched uranium or uranium-233 at different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k-eff caused by the uncertainties in the neutron cross sections. It is also shown that the results are practically independent of the initial covariance matrices of the nuclear data uncertainties. (authors)

  12. Surface energy budget and thermal inertia at Gale Crater: Calculations from ground-based measurements.

    Science.gov (United States)

    Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M

    2014-08-01

    The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10⁴ m² to ∼10⁷ m². Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10² m². We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m⁻² K⁻¹ s⁻¹/² (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
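
    Thermal inertia itself is the material property I = sqrt(k*ρ*c); in practice it is retrieved by fitting modeled ground temperatures from the SEB to the rover measurements, but the defining relation is a one-liner. The regolith-like values below are illustrative only, not the paper's retrievals.

        import math

        def thermal_inertia(conductivity, density, heat_capacity):
            """I = sqrt(k * rho * c), in J m^-2 K^-1 s^-1/2."""
            return math.sqrt(conductivity * density * heat_capacity)

        # Illustrative regolith-like values: k in W/m/K, rho in kg/m^3, c in J/kg/K.
        print(thermal_inertia(0.08, 1300.0, 650.0))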

  13. Risk Analysis of Reservoir Flood Routing Calculation Based on Inflow Forecast Uncertainty

    Directory of Open Access Journals (Sweden)

    Binquan Li

    2016-10-01

    Full Text Available Possible risks in reservoir flood control and regulation cannot be objectively assessed by deterministic flood forecasts, which leaves the probability of reservoir failure unquantified. We demonstrate a risk analysis of reservoir flood routing calculation accounting for inflow forecast uncertainty in a sub-basin of the Huaihe River, China. The Xinanjiang model was used to provide deterministic flood forecasts and was combined with the Hydrologic Uncertainty Processor (HUP) to quantify reservoir inflow uncertainty in the form of a probability density function (PDF). Furthermore, the PDFs of the reservoir water level (RWL) and the risk rate of the RWL exceeding a defined safety control level could be obtained. The results suggest that the median forecast (50th percentile) of HUP agrees better with the observed inflows than the Xinanjiang model does, in terms of performance measures of flood process, peak, and volume. In addition, most observations (77.2%) were bracketed by the uncertainty band of the 90% confidence interval, with some small exceptions at high flows. The results prove that this risk-analysis framework can provide not only the deterministic forecasts of inflow and RWL, but also the fundamental uncertainty information (e.g., the 90% confidence band) for the reservoir flood routing calculation.

  14. Identifying the Interaction of Vancomycin With Novel pH-Responsive Lipids as Antibacterial Biomaterials Via Accelerated Molecular Dynamics and Binding Free Energy Calculations.

    Science.gov (United States)

    Ahmed, Shaimaa; Vepuri, Suresh B; Jadhav, Mahantesh; Kalhapure, Rahul S; Govender, Thirumala

    2018-06-01

    Nano-drug delivery systems have proven to be an efficient formulation tool to overcome the challenges of current antibiotic therapy and resistance. A series of pH-responsive lipid molecules were designed and synthesized for future liposomal formulation as a nano-drug delivery system for vancomycin at the infection site. The structures of these lipids differ from each other in their hydrocarbon tails: Lipid1, 2, 3 and 4 have stearic, oleic, linoleic, and linolenic acid hydrocarbon chains, respectively. The impact of the variation in the hydrocarbon chain of the lipid structure on drug encapsulation and release profile, as well as on the mode of drug interaction, was investigated using molecular modeling analyses. A wide range of computational tools, including accelerated molecular dynamics, normal molecular dynamics, binding free energy calculations and principal component analysis, were applied to provide comprehensive insight into the interaction landscape between vancomycin and the designed lipid molecules. Interestingly, both MM-GBSA and MM-PBSA binding affinity calculations using normal molecular dynamics and accelerated molecular dynamics trajectories showed a very consistent trend, where the order of binding affinity towards vancomycin was lipid4 > lipid1 > lipid2 > lipid3. From both normal molecular dynamics and accelerated molecular dynamics, the interaction of lipid3 with vancomycin is demonstrated to be the weakest (ΔG_binding = -2.17 and -11.57 for normal molecular dynamics and accelerated molecular dynamics, respectively) when compared to the other complexes. We believe that the degree of unsaturation of the hydrocarbon chain in the lipid molecules may affect the overall conformational behavior, interaction mode and encapsulation (wrapping) of the lipid molecules around the vancomycin molecule. This thorough computational analysis prior to the experimental investigation is a valuable approach to guide for predicting the encapsulation

  15. TO THE SOLUTION OF PROBLEMS ABOUT THE RAILWAYS CALCULATION FOR STRENGTH TAKING INTO ACCOUNT UNEQUAL ELASTICITY OF THE SUBRAIL BASE

    Directory of Open Access Journals (Sweden)

    D. M. Kurhan

    2014-11-01

    Full Text Available Purpose. The modulus of elasticity of the subrail base is one of the main characteristics for assessing the stress-strain state of a track. The need to take the unequal elasticity of the subrail base into account in different cases has been considered repeatedly; however, the published results involved rather complex mathematical approaches, and the solutions obtained did not fit within the framework of the standard engineering calculation of railway strength. The purpose of this work is therefore to obtain a solution within that framework. Methodology. It is proposed to model the rail as a beam carrying a distributed load whose outline corresponds to the value of the modulus of elasticity, such that it gives an equivalent deflection for free seating on supports. Findings. A method was obtained for taking gradual change of the modulus of elasticity of the subrail base into account by means of a correcting coefficient in the engineering calculation of track strength. The existing railway strength calculation was extended to account for abrupt change of the modulus of elasticity of the subrail base (for example, at the transition from a ballasted track structure onto a bridge). The characteristic change of the forces acting from the rail on the base, as a function of the distance to the bridge along the approach from a ballasted track structure, was obtained. The redistribution of forces after a sudden change in the elastic modulus of the base under the rail explains the formation of vertical irregularities in front of the bridge. Originality. The technique of engineering calculation of railway strength was improved to perform calculations taking the unequal elasticity of the subrail base into account. Practical value. The obtained results allow engineering calculations to assess the strength of a railway at places of unequal elasticity caused by the condition of the track or by features of its design. The solution of the inverse problem on
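
    The uniform-base case that the paper generalizes is the classical beam on an elastic (Winkler) foundation, whose deflection under a point load has a closed form. The sketch below evaluates that baseline solution only; the rail stiffness, base modulus and wheel load are illustrative, and the varying-modulus correction described in the abstract is not included.

        import math

        def winkler_deflection(x, p_newton, ei, k_modulus):
            """Deflection (m) of an infinite rail on a uniform elastic base under
            a point load P: y(x) = P*b/(2k) * exp(-b|x|) * (cos b|x| + sin b|x|),
            with b = (k / (4*EI))**0.25."""
            b = (k_modulus / (4.0 * ei)) ** 0.25
            bx = b * abs(x)
            return (p_newton * b / (2.0 * k_modulus)
                    * math.exp(-bx) * (math.cos(bx) + math.sin(bx)))

        EI = 6.4e6    # rail bending stiffness, N*m^2 (illustrative)
        K = 5.0e7     # subrail base modulus, N/m^2 (N/m per m of rail, illustrative)
        P = 1.0e5     # wheel load, N (illustrative)
        for x in (0.0, 0.5, 1.0, 2.0):
            print(x, winkler_deflection(x, P, EI, K))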

  16. Thermal neutron dose calculations in a brain phantom from 7Li(p,n) reaction based BNCT setup

    International Nuclear Information System (INIS)

    Elshahat, B.A.; Naqvi, A.A.; Maalej, N.; Abdallah, Khalid

    2006-01-01

    Monte Carlo simulations were carried out to calculate the neutron dose in a brain phantom from a ⁷Li(p,n) reaction based setup utilizing a high-density polyethylene moderator with a graphite reflector. The dimensions of the moderator and the reflector were optimized through optimization of the epithermal/(fast + thermal) neutron intensity ratio as a function of the geometric parameters of the setup. The results of our calculation showed the capability of the setup to treat tumors within 4 cm of the head surface. The calculated peak therapeutic ratio for the setup was found to be 2.15. With further improvement in the moderator design and the brain phantom irradiation arrangement, the setup's capabilities can be improved to reach deeper-seated tumors. (author)

  17. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Song, T; Jia, X; Gu, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Doses for four field sizes (6x6 cm², 10x10 cm², 15x15 cm² and 20x20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
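
    The profile-parameter half of the commissioning, fitting a kernel width to the penumbra of a reference broad-beam profile with Levenberg-Marquardt, can be sketched as below (in Python rather than MATLAB). The error-function edge model and the synthetic "measured" profile are assumptions for illustration; the 2D scaling factors would be obtained in a separate optimization, as described in the abstract.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def penumbra_model(x, edge, sigma):
            """Broad-beam lateral profile edge modeled as an error function;
            `sigma` plays the role of the kernel width being commissioned."""
            return 0.5 * (1.0 - erf((x - edge) / (np.sqrt(2.0) * sigma)))

        # Reference broad-beam profile (would come from the TPS/measurement export).
        x = np.linspace(4.0, 6.0, 81)
        measured = (penumbra_model(x, 5.0, 0.35)
                    + np.random.default_rng(2).normal(0.0, 0.002, x.size))

        # curve_fit's default unbounded method is Levenberg-Marquardt.
        (edge, sigma), _ = curve_fit(penumbra_model, x, measured, p0=[4.8, 0.2])
        print(edge, sigma)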

  18. Web-based Tsunami Early Warning System with instant Tsunami Propagation Calculations in the GPU Cloud

    Science.gov (United States)

    Hammitzsch, M.; Spazier, J.; Reißland, S.

    2014-12-01

    Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, the use of concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) has to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype that opens up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the

  19. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    International Nuclear Information System (INIS)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-01-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm2). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. A gamma index criterion of 2%/2 mm was used to evaluate the accuracy of the MC calculations. The MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm2. Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criterion for these fields, respectively. For smaller fields (12x12 and 6x6 mm2) only 92% of the data meet the criterion. Total scatter factors show good agreement as well, except for the smallest fields, which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm2. Special care must be taken for smaller fields.
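
    For readers unfamiliar with the 2%/2 mm criterion used above, the sketch below shows a minimal 1D global gamma-index computation in Python; the profile data are synthetic and the implementation is a bare-bones illustration, not the evaluation code used in the study.

```python
import numpy as np

def gamma_index_1d(x_eval, d_eval, x_ref, d_ref, dd=0.02, dta=2.0):
    """1D global gamma index: for each evaluated point, minimize over all
    reference points the combined dose-difference / distance-to-agreement
    metric. dd is the dose criterion (fraction of max reference dose),
    dta is the distance criterion in the units of x (here mm)."""
    d_norm = dd * d_ref.max()
    gammas = np.empty_like(d_eval)
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        g2 = ((xe - x_ref) / dta) ** 2 + ((de - d_ref) / d_norm) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

# Toy example: two slightly shifted Gaussian profiles (hypothetical data).
x = np.linspace(-20.0, 20.0, 401)
ref = np.exp(-x**2 / 50.0)
ev = np.exp(-(x - 0.5)**2 / 50.0)
g = gamma_index_1d(x, ev, x, ref)
print(f"pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```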

  20. A Scientific Calculator for Exact Real Number Computation Based on LRT, GMP and FC++

    Directory of Open Access Journals (Sweden)

    J. A. Hernández

    2012-03-01

    Full Text Available Language for Redundant Test (LRT) is a programming language for exact real number computation. Its lazy evaluation mechanism (also called call-by-need) and its requirement for infinite lists make the language appropriate for implementation in a functional programming language such as Haskell. However, a direct translation of the operational semantics of LRT into Haskell, together with the algorithms implementing the basic operations (addition, subtraction, multiplication, division) and the trigonometric functions (sine, cosine, tangent, etc.), makes the resulting scientific calculator time consuming and inefficient. In this paper, we present an alternative implementation of the scientific calculator using FC++ and GMP. FC++ is a functional C++ library, while GMP is the GNU multiple precision library. We show that a direct translation of LRT into FC++ results in a faster scientific calculator than the one implemented in Haskell.

  1. Reliability Analysis-Based Numerical Calculation of Metal Structure of Bridge Crane

    Directory of Open Access Journals (Sweden)

    Wenjun Meng

    2013-01-01

    Full Text Available The study introduced a finite element model of the DQ75t-28m bridge crane metal structure and performed a finite element static analysis to obtain the stress response at the dangerous point of the metal structure under the most extreme condition. Simulated samples of the random variables and of the stress at the dangerous point were successfully obtained through an orthogonal design. We then trained a BP neural network as a nonlinear mapping function to obtain an explicit expression of the stress in terms of the random variables. Combined with random perturbation theory and the first-order second-moment (FOSM) method, the study analyzed the reliability of the metal structure and its sensitivity. In conclusion, we established a novel method for accurate quantitative analysis and design of bridge crane metal structures.
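
    As a rough illustration of the FOSM step, the following Python sketch computes a reliability index and failure probability by linearizing a limit-state function about the variable means; the limit-state function and all numbers are hypothetical, and the BP-network surrogate of the study is replaced here by a simple closed-form function.

```python
import numpy as np
from scipy.stats import norm

def fosm(g, mu, sigma, h=1e-6):
    """Mean-value first-order second-moment method: linearize the limit-state
    function g about the means of the (assumed independent) random variables.
    Returns the reliability index beta, the failure probability, and the
    sensitivity factors of each variable."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    g0 = g(mu)
    # Numerical gradient of g at the mean point (forward differences).
    grad = np.array([(g(mu + h * e) - g0) / h for e in np.eye(len(mu))])
    sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))
    beta = g0 / sigma_g
    pf = norm.cdf(-beta)
    alpha = (grad * sigma) / sigma_g   # relative contribution of each variable
    return beta, pf, alpha

# Toy limit state: stress margin g = R - S(x1, x2), hypothetical numbers [MPa].
g = lambda v: 300.0 - (v[0] + 2.0 * v[1])
beta, pf, alpha = fosm(g, mu=[150.0, 40.0], sigma=[15.0, 8.0])
print(f"beta = {beta:.2f}, Pf = {pf:.2e}, sensitivities = {alpha}")
```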

  2. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate considerable audible noise, which can exceed 100 dB. Existing noise level prediction methods are not sufficiently accurate. In this paper, a new noise level prediction method is proposed based on a frequency response function that considers both the electrical and the mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations was also measured in the laboratory, and the results were compared with the prediction. The comparison shows that the noise prediction method is effective.
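
    A minimal numerical sketch of the EMFRF idea, with synthetic signals standing in for the impulse-current test data, might look as follows in Python; the structure-acoustic coupling step is omitted and all signal parameters are assumed.

```python
import numpy as np

def emfrf(voltage, vibration, fs):
    """Estimate the electro-mechanical frequency response function as the
    frequency-domain quotient of the vibration response and the squared
    capacitor voltage (the electric force on the elements scales with u^2)."""
    u2 = voltage ** 2                       # squared voltage = excitation proxy
    U2 = np.fft.rfft(u2)
    Y = np.fft.rfft(vibration)
    eps = 1e-12 * np.abs(U2).max()          # guard against near-zero bins
    H = Y / (U2 + eps)
    freqs = np.fft.rfftfreq(len(u2), d=1.0 / fs)
    return freqs, H

def predict_vibration(H, voltage):
    # Under a given excitation, the response is the product of the EMFRF and
    # the squared voltage in the frequency domain.
    U2 = np.fft.rfft(voltage ** 2)
    return np.fft.irfft(H * U2, n=len(voltage))

# Toy impulse-current test (synthetic decaying oscillations, illustration only).
fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
u_test = np.exp(-50 * t) * np.sin(2 * np.pi * 300 * t)
y_test = 0.5 * np.exp(-30 * t) * np.sin(2 * np.pi * 600 * t)
freqs, H = emfrf(u_test, y_test, fs)
y_service = predict_vibration(H, np.sin(2 * np.pi * 50 * t))  # service excitation
```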

  3. Molecular interactions of nucleic acid bases. From ab initio calculations to molecular dynamics simulations

    Czech Academy of Sciences Publication Activity Database

    Šponer, Jiří

    2002-01-01

    Roč. 223, - (2002), s. 212 ISSN 0065-7727. [Annual Meeting of the American Chemistry Society /223./. 07.04.2002-11.04.2002, Orlando ] Institutional research plan: CEZ:AV0Z5004920 Keywords : quantum chemistry * base pairing * base stacking Subject RIV: BO - Biophysics

  4. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
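
    The probability-weighted HU assignment can be sketched compactly. The Python fragment below implements the standard fuzzy c-means membership formula and the weighted summation of class CT centroids; the five paired centroids and the CBCT slice are hypothetical stand-ins for values obtained in the training phase.

```python
import numpy as np

def fcm_memberships(values, centroids, m=2.0):
    """Fuzzy c-means membership of each voxel value to each class centroid:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)). Returns (n_voxels, n_classes)."""
    d = np.abs(values[:, None] - centroids[None, :]) + 1e-9
    power = 2.0 / (m - 1.0)
    ratio = (d[:, :, None] / d[:, None, :]) ** power
    return 1.0 / ratio.sum(axis=2)

def synthesize_ct(cbct_hu, cbct_centroids, ct_centroids):
    # Probability-weighted summation of the classes' CT centroids.
    u = fcm_memberships(cbct_hu.ravel(), cbct_centroids)
    return (u @ ct_centroids).reshape(cbct_hu.shape)

# Hypothetical paired centroids for five "tissue" classes (CBCT HU -> CT HU),
# standing in for values learned from a planning CT / day-1 CBCT pair.
cbct_c = np.array([-950.0, -400.0, -60.0, 80.0, 700.0])
ct_c   = np.array([-1000.0, -500.0, -80.0, 40.0, 900.0])
cbct_slice = np.random.default_rng(0).uniform(-1000, 900, size=(64, 64))
sct = synthesize_ct(cbct_slice, cbct_c, ct_c)
```

    Because each voxel receives a membership-weighted blend of CT centroids rather than a hard class label, isolated noisy CBCT voxels produce only gradual HU errors, which is the robustness property the abstract alludes to.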

  5. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    International Nuclear Information System (INIS)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K

    2016-01-01

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.

  6. Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses

    International Nuclear Information System (INIS)

    Sauer, G.

    1998-01-01

    Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)

  7. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure behaviour can be fitted by several distributions when the test data are scarce. In order to decide the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)

  8. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (241Am, 133Ba, 22Na, 60Co, 57Co, 137Cs and 152Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.

  9. An approach to calculating metal particle detection in lubrication oil based on a micro inductive sensor

    Science.gov (United States)

    Wu, Yu; Zhang, Hongpeng

    2017-12-01

    A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to calculating the coil inductance change is introduced for metal particle detection in lubrication oil. Electromagnetic theory is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression for the coil inductance change is obtained via the magnetic vector potential. Experimental verification is carried out. The results show that copper particles 50-52 µm in diameter have been detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% for particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for inductive sensors in metal particle detection in oil and other areas of application.

  10. Calculation of the Strip Foundation on Solid Elastic Base, Taking into Account the Karst Collapse

    Science.gov (United States)

    Sharapov, R.; Lodigina, N.

    2017-07-01

    Karst processes greatly complicate the construction and operation of buildings and structures. Karst-induced deformations have caused several major accidents at different times; their analysis showed that in all cases fundamental errors were committed at some stage of development: site selection, engineering survey, design, construction or operation of the facilities. The theory of beams on an elastic foundation is essential in building practice, and engineers designing such facilities often have to resort to repeated design iterations to find efficient structural forms. In this work, the stresses in cross-sections of a strip foundation under a uniformly distributed load are calculated in the event of karst collapse. The extreme stresses with and without karst are compared, treating the strip foundation as a beam on an elastic foundation.

  11. CALCULATION METHOD OF ELECTRIC POWER LINES MAGNETIC FIELD STRENGTH BASED ON CYLINDRICAL SPATIAL HARMONICS

    Directory of Open Access Journals (Sweden)

    A.V. Erisov

    2016-05-01

    Full Text Available Purpose. To simplify the relations used to determine the magnetic field strength of electric power lines and to assess their environmental safety. Methodology. The magnetic field of transmission lines is described using spatial harmonic analysis in a cylindrical coordinate system. Results. For engineering calculations, the magnetic field of electric power lines is described with sufficient accuracy by its first spatial harmonic. Originality. The determination of the influence of transmission line tower design on the magnitude of the magnetic field and on the size of the land alienation strip is substantially simplified. Practical value. Electric power lines can be designed to be environmentally safe with respect to the magnetic field level.

  12. Shielding property of bismuth glass based on MCNP 5 and WINXCOM simulated calculation

    International Nuclear Information System (INIS)

    Zhang Zhicheng; Zhang Jinzhao; Liu Ze; Lu Chunhai; Chen Min

    2013-01-01

    Background: Currently, lead glass is widely used for observation windows, but lead is a toxic heavy metal. Purpose: Non-toxic materials and their shielding effects are investigated in order to find a new material to replace lead-containing materials. Methods: The mass attenuation coefficients of bismuth silicate glass were investigated at gamma-ray energies of 0.662 MeV, 1.17 MeV and 1.33 MeV by the MCNP 5 (Monte Carlo) and WINXCOM programs, and compared with those of lead glass. Results: With the attenuation factor K and the shielding and mechanical properties taken into consideration, bismuth glass containing 50% bismuth oxide may be selected as a suitable material. Dose rate distributions in a water phantom were calculated behind 2-cm and 10-cm thick glass, irradiated by 137Cs and 60Co in turn. Conclusion: The results show that bismuth glass may replace lead glass for radiation shielding at appropriate energies. (authors)
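
    As a reminder of how mass attenuation coefficients translate into shielding performance, the sketch below applies the narrow-beam attenuation law I/I0 = exp(-(mu/rho)*rho*t); the coefficient and density values are assumed for illustration only and would in practice come from WINXCOM or MCNP runs.

```python
import math

def transmitted_fraction(mu_over_rho_cm2_g, density_g_cm3, thickness_cm):
    """Narrow-beam attenuation: I/I0 = exp(-(mu/rho) * rho * t)."""
    return math.exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative (hypothetical) values for a bismuth glass at 0.662 MeV.
mu_rho = 0.09      # mass attenuation coefficient [cm^2/g], assumed
rho = 5.0          # glass density [g/cm^3], assumed
for t in (2.0, 10.0):
    f = transmitted_fraction(mu_rho, rho, t)
    print(f"{t:4.0f} cm glass: transmitted fraction = {f:.3e}")
```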

  13. KBERG: KnowledgeBase for Estrogen Responsive Genes

    DEFF Research Database (Denmark)

    Tang, Suisheng; Zhang, Zhuo; Tan, Sin Lam

    2007-01-01

    Estrogen has a profound impact on human physiology affecting transcription of numerous genes. To decipher functional characteristics of estrogen responsive genes, we developed KnowledgeBase for Estrogen Responsive Genes (KBERG). Genes in KBERG were derived from the Estrogen Responsive Gene Database (ERGDB) and were analyzed from multiple aspects. We explored the possible transcription regulation mechanism by capturing highly conserved promoter motifs across orthologous genes, using promoter regions that cover the range of [-1200, +500] relative to the transcription start sites. The motif detection is based on ab initio discovery of common cis-elements from the orthologous gene cluster from human, mouse and rat, thus reflecting a degree of promoter sequence preservation during evolution. The identified motifs are linked to transcription factor binding sites based on the TRANSFAC database. In addition...

  14. Improvement in MFTF data base system response times

    International Nuclear Information System (INIS)

    Lang, N.C.; Nelson, B.C.

    1983-01-01

    The Supervisory Control and Diagnostic System for the Mirror Fusion Test Facility (MFTF) has been designed as an event driven system. To this end we have designed a data base notification facility in which a task can request that it be loaded and started whenever an element in the data base is changed beyond some user defined range. Our initial implementation of the notify facility exhibited marginal response times whenever a data base table with a large number of outstanding notifies was written into. In this paper we discuss the sources of the slow response and describe in detail a new structure for the list of notifies which minimizes search time, resulting in significantly faster response times.

  15. Dielectric Response at THz Frequencies of Mg Water Complexes Interacting with O3 Calculated by Density Functional Theory

    Science.gov (United States)

    2012-10-24

    Only fragments of this record's abstract are recoverable. The report describes DFT calculations of the ground state resonance structure associated with water complexes of Mg and the interaction of these complexes with ozone; the transition state is characterized as the configuration of the atoms, at the maximal peak of the energy surface, separating reactants from products.

  16. Equipment response spectra for base-isolated shear beam structures

    International Nuclear Information System (INIS)

    Ahmadi, G.; Su, L.

    1992-01-01

    Equipment response spectra in base-isolated structures under seismic ground excitation are studied. The equipment is treated as a single-degree-of-freedom system attached to a nonuniform elastic shear beam structural model. Several leading base isolation systems are considered, including the laminated rubber bearing, the resilient-friction base isolator with and without a sliding upper plate, and the EDF system. Deflection and acceleration response spectra for the equipment and the shear beam structure are evaluated for a sinusoidal excitation and for the accelerogram of the N00W component of the El Centro 1940 earthquake. Primary-secondary interaction effects are included in the analysis. Several numerical parametric studies are carried out and the effectiveness of different base isolation systems in protecting the nonstructural components is studied. It is shown that the use of properly designed base isolation systems provides considerable protection for secondary systems, as well as the structure, against severe seismic loading. (orig.)

  17. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user's positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with physical measurements. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.

  18. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log) which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained nominal power in simulation studies and compared favourably with a Mann-Whitney U test and with a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
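
    A sketch of this style of calculation is given below, assuming the standard log-normal relations between the median, the untransformed variance, and the log-scale standard deviation; the paper's exact formulas and adjustments may differ, and the trial numbers are hypothetical.

```python
import math
from scipy.stats import norm

def sigma_log_from_median_var(median, var):
    """For a log-normal variable with median m = exp(mu) and untransformed
    variance v = (exp(s^2) - 1) * exp(2*mu + s^2), solve for the log-scale
    standard deviation s. With u = exp(s^2): v / m^2 = u^2 - u."""
    r = var / median ** 2
    u = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * r))
    return math.sqrt(math.log(u))

def n_per_group(median1, median2, var, alpha=0.05, power=0.9):
    # Two-sample comparison on the log scale: the difference in log-medians
    # equals the log of the median ratio; normal-approximation sample size.
    s = sigma_log_from_median_var(median1, var)
    delta = abs(math.log(median2 / median1))
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2.0 * (z * s / delta) ** 2)

# Hypothetical trial: control median 10 days, treatment median 7 days,
# untransformed variance 60 (illustrative numbers only).
print(n_per_group(10.0, 7.0, 60.0))   # sample size per group
```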

  19. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    Energy Technology Data Exchange (ETDEWEB)

    Puigdomenech, I [Studsvik AB, Nykoeping (Sweden); Bruno, J [Intera Information Technologies SL, Cerdanyola (Spain)

    1995-04-01

    Thermodynamic data has been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar for T<100 deg C and at the steam saturated pressure at higher temperatures. For aqueous species, the revised Helgeson-Kirkham-Flowers model is used for temperature extrapolations. The data base contains a large amount of estimated data, and the methods used for these estimations are described in detail. A new equation is presented that allows the estimation of {Delta}{sub r}Cdeg{sub pm} values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the data base have also been estimated, and therefore temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs.

  20. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
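
    A bare-bones sketch of such a controller is given below; the plant, horizon, and regularization weight are assumed, the free response of past inputs is neglected, and the input-rate constraint is handled by clipping rather than by the constrained QP a full implementation would solve.

```python
import numpy as np

def prediction_matrix(h, N):
    """Lower-triangular (Toeplitz) matrix mapping future input moves to
    predicted outputs, built from the step-response coefficients obtained
    by cumulatively summing the FIR impulse-response coefficients h."""
    s = np.cumsum(h)
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            if i - j < len(s):
                G[i, j] = s[i - j]
    return G

def fir_mpc_move(h, y_meas, y_model, r, N=20, reg=0.1, du_max=0.5):
    # Constant output-disturbance feedback: the model-plant mismatch seen
    # now is assumed to persist over the whole horizon.
    d = y_meas - y_model
    G = prediction_matrix(h, N)
    e = np.full(N, r) - d
    # Regularized l2 problem: min ||G du - e||^2 + reg * ||du||^2.
    du = np.linalg.solve(G.T @ G + reg * np.eye(N), G.T @ e)
    # Crude input-rate constraint handling by clipping the first move only.
    return float(np.clip(du[0], -du_max, du_max))

# Impulse response of a hypothetical first-order plant, sampled at Ts = 1 s.
k, tau = 2.0, 5.0
t = np.arange(50)
h = k * (np.exp(-t / tau) - np.exp(-(t + 1) / tau))
print(fir_mpc_move(h, y_meas=0.1, y_model=0.0, r=1.0))
```

    Only the first computed move is applied at each sample (receding horizon); the regularization weight plays the role of the l2 penalty on input moves that the abstract refers to.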

  1. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
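
    The mechanism under study can be reproduced in a few lines: perturb the source term of a Poisson problem (standing in for noisy PIV-derived velocity data) and observe the resulting perturbation of the solution. The Python sketch below uses a 5-point Laplacian with homogeneous Dirichlet boundaries and Jacobi iteration; the grid size and noise level are arbitrary choices.

```python
import numpy as np

def solve_poisson_dirichlet(f, h, n_iter=5000):
    """Jacobi iteration for the 2D Poisson equation lap(p) = f on a square
    grid with homogeneous Dirichlet boundaries (5-point Laplacian)."""
    p = np.zeros_like(f)
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h * h * f[1:-1, 1:-1])
    return p

# Error propagation: compare the pressure fields obtained from a clean and
# a perturbed source term on the unit square.
n = 65
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing='ij')
f = -2 * np.pi ** 2 * np.sin(np.pi * x) * np.sin(np.pi * y)
rng = np.random.default_rng(1)
p_clean = solve_poisson_dirichlet(f, h)
p_noisy = solve_poisson_dirichlet(f + 0.05 * rng.standard_normal(f.shape), h)
print(f"max pressure error: {np.abs(p_noisy - p_clean).max():.4f}")
```

    Repeating the experiment with different boundary conditions or domain sizes illustrates the dependence of the propagated error that the paper quantifies analytically.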

  2. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  3. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587

  4. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    Science.gov (United States)

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  5. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    Science.gov (United States)

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including the vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift
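
    Of the three wake models, the Kutta-Joukowski theorem is the simplest to state, L = rho * U * Gamma * b, and the sketch below illustrates the sensitivity to the assumed vortex span that the study reports; all numbers are hypothetical, parrotlet-scale values rather than the study's measurements.

```python
def kutta_joukowski_lift(rho, U, gamma, span):
    """Quasi-steady lift from the Kutta-Joukowski theorem,
    L = rho * U * Gamma * b, one of the three wake models compared in the
    study. The result is directly proportional to the assumed vortex span b
    and to the circulation Gamma extracted from the PIV field."""
    return rho * U * gamma * span

rho = 1.2          # air density [kg/m^3]
U = 2.0            # flight speed [m/s], assumed
gamma = 0.12       # circulation from the wake [m^2/s], assumed
for b in (0.18, 0.20, 0.22):   # plausible vortex spans [m]
    L = kutta_joukowski_lift(rho, U, gamma, b)
    print(f"b = {b:.2f} m -> L = {L:.3f} N")
```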

  6. Calculations of radiation fields and monkey mid-head and mid-thorax responses in AFRRI-TRIGA reactor facility experiments

    International Nuclear Information System (INIS)

    Johnson, J.O.; Emmett, M.B.; Pace, J.V. III.

    1983-07-01

    A computational study was performed to characterize the radiation exposure fields and the mid-head and mid-thorax response functions for monkeys irradiated in the Armed Forces Radiobiological Research Institute (AFRRI) reactor exposure facilities. Discrete ordinates radiation transport calculations were performed in one-dimensional spherical geometry to obtain the energy spectra of the neutrons and gamma rays entering the room through various spectrum modifiers and reaching the irradiation position. Adjoint calculations performed in two-dimensional cylindrical geometry yielded the mid-head and mid-thorax response functions, which were then folded with flux spectra to obtain the monkey mid-head and mid-thorax doses (kerma rates) received at the irradiation position. The results of the study are presented both as graphs and as tables. The resulting spectral shapes compared favorably with previous work; however, the magnitudes of the fluxes did not. The differences in the magnitudes may be due to the normalization factor used

  7. One-velocity neutron diffusion calculations based on a two-group reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Bingulac, S; Radanovic, L; Lazarevic, B; Matausek, M; Pop-Jordanov, J [Boris Kidric Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)

    1965-07-01

    Many processes in reactor physics are described by the energy dependent neutron diffusion equations, which for many practical purposes can often be reduced to one-dimensional two-group equations. Though such two-group models are satisfactory from the standpoint of accuracy, they require rather extensive computations which are usually iterative and involve the use of digital computers. In many applications, however, and particularly in dynamic analyses, where the studies are performed on analogue computers, it is preferable to avoid iterative calculations. The usual practice in such situations is to resort to one-group models, which allow the solution to be expressed analytically. However, the loss in accuracy is rather great, particularly when several media of different properties are involved. This paper describes a procedure by which the solution of the two-group neutron diffusion equations can be expressed analytically in a form which, from the computational standpoint, is as simple as the one-group model but retains the accuracy of the two-group treatment. In describing the procedure, the case of a multi-region nuclear reactor of cylindrical geometry is treated, but the method applied and the results obtained are of more general application. Another approach to the approximate solution of the diffusion equations, suggested by Galanin, is applicable only in special ideal cases.

  8. Calculation of benefit reserves based on true m-thly benefit premiums

    Science.gov (United States)

    Riaman; Susanti, Dwi; Supriatna, Agus; Nurani Ruchjana, Budi

    2017-10-01

    Life insurance is a form of insurance that provides mitigation of the risk associated with the life or death of a person. One of its forms is term life insurance, for which insurance companies must set aside a sum of money as reserves for the policyholders. The benefit reserve is an alternative calculation which involves net and cost premiums. An insured may pay a series of benefit premiums to an insurer equivalent, at the date of policy issue, to the sum to be paid on the death of the insured, or on survival of the insured to the maturity date. A balancing item is required, and this item is a liability for one of the parties and an asset for the other. The balancing item in a loan is the outstanding principal: an asset for the lender and a liability for the borrower. In this paper we examine the benefit reserve formulas corresponding to the formulas for true m-thly benefit premiums using the prospective method. This method specifies that the reserves at the end of the first year are zero. Several principles can be used for the determination of benefit premiums; an equivalence relation is established in our discussion.

  9. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. A discussion of other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - and other computer science techniques) already and successfully used for a long time in other scientific and industrial applications, and not only in radiation protection or medical dosimetry. (authors)

  10. Dispersion calculation method based on S-transform and coordinate rotation for Love channel waves with two components

    Science.gov (United States)

    Feng, Lei; Zhang, Yugui

    2017-08-01

    Dispersion analysis is an important part of in-seam seismic data processing, and the calculation accuracy of the dispersion curve directly influences the pickup errors of channel wave travel time. To extract an accurate channel wave dispersion curve from in-seam seismic two-component signals, we propose a time-frequency analysis method based on single-trace signal processing; in addition, we formulate a dispersion calculation equation, based on the S-transform, with a freely adjustable filter window width. To unify the azimuth of seismic wave propagation received by a two-component geophone, the original in-seam seismic data undergo coordinate rotation. The rotation angle can be calculated from P-wave characteristics: high energy in the wave propagation direction and weak energy in the vertical direction. With this angle acquired, a two-component signal can be converted to the horizontal and vertical directions. Because Love channel waves have a particle vibration track perpendicular to the wave propagation direction, the signal in the horizontal and vertical directions is mainly Love channel waves. More accurate dispersion characteristics of Love channel waves can be extracted after the coordinate rotation of two-component signals.
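
    The coordinate rotation step can be sketched as follows: the rotation angle is taken from the dominant polarization axis of the P-wave window (high energy along propagation), and the two components are then rotated with a standard 2D rotation matrix. The Python fragment uses synthetic data and illustrates the principle, not the authors' implementation.

```python
import numpy as np

def rotation_angle_from_p_wave(cx, cy):
    """Estimate the propagation azimuth from the P-wave window of a
    two-component record: the P wave polarizes along propagation, so the
    principal axis of the motion covariance gives the rotation angle."""
    cov = np.cov(np.vstack([cx, cy]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]      # dominant polarization axis
    return np.arctan2(major[1], major[0])

def rotate_components(cx, cy, theta):
    # Rotate into the component along propagation and the transverse
    # component; Love channel waves concentrate on the transverse one.
    along = np.cos(theta) * cx + np.sin(theta) * cy
    transverse = -np.sin(theta) * cx + np.cos(theta) * cy
    return along, transverse

# Synthetic two-component P-wave window polarized at 30 degrees (illustration).
rng = np.random.default_rng(2)
s = rng.standard_normal(500)
cx, cy = np.cos(np.pi / 6) * s, np.sin(np.pi / 6) * s
theta = rotation_angle_from_p_wave(cx, cy)
along, transverse = rotate_components(cx, cy, theta)
```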

  11. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    International Nuclear Information System (INIS)

    Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C

    2015-01-01

    Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present for the first time, to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water, modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm3. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.
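
    A serial sketch of the analytical kernel described above is given below in Python rather than Matlab; the depth-dose curve, SSD, and lateral sigma are placeholders, and the GPU parallelization is omitted since the kernel evaluation is embarrassingly parallel over voxels.

```python
import numpy as np

def pencil_beam_dose(depth_mm, r_mm, ddc, ssd_mm=2000.0, sigma_mm=5.0):
    """Analytical pencil-beam dose: a central-axis depth dose 'ddc'
    (callable, depth -> dose) with an inverse-square correction, multiplied
    by a normalized Gaussian off-axis term (all parameters assumed)."""
    central = ddc(depth_mm) * (ssd_mm / (ssd_mm + depth_mm)) ** 2
    lateral = (np.exp(-r_mm ** 2 / (2 * sigma_mm ** 2))
               / (2 * np.pi * sigma_mm ** 2))
    return central * lateral

# Toy depth-dose curve with a Bragg-peak-like bump near 150 mm (illustrative).
ddc = lambda z: z ** 2 * np.exp(-0.5 * ((z - 150.0) / 12.0) ** 2) / 1e4 + 0.3

z = np.linspace(0.0, 200.0, 201)[:, None]   # depths [mm]
r = np.linspace(-20.0, 20.0, 41)[None, :]   # off-axis distances [mm]
dose = pencil_beam_dose(z, r, ddc)          # broadcasting gives a 2D dose grid
```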

  12. Dementia caregivers' responses to 2 Internet-based intervention programs.

    Science.gov (United States)

    Marziali, Elsa; Garcia, Linda J

    2011-02-01

    The aim of this study was to examine the impact on dementia caregivers' experienced stress and health status of 2 Internet-based intervention programs. Ninety-one dementia caregivers were given the choice of being involved in either an Internet-based chat support group or an Internet-based video conferencing support group. Pre-post outcome measures focused on distress, health status, social support, and service utilization. In contrast to the Chat Group, the Video Group showed significantly greater improvement in mental health status. Also, for the Video Group, improvements in self-efficacy, neuroticism, and social support were associated with lower stress response to coping with the care recipient's cognitive impairment and decline in function. The results show that, of 2 Internet-based intervention programs for dementia caregivers, the video conferencing intervention program was more effective in improving mental health status and improvement in personal characteristics were associated with lower caregiver stress response.

  13. Ship motion-based wave estimation using a spectral residual-calculation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; H. Brodtkorb, Astrid

    2018-01-01

    This paper presents a study focused on a newly developed procedure for wave spectrum estimation using wave-induced motion recordings from a ship. The particular procedure stands out from other existing, similar ship motion-based procedures by its computational efficiency and - at the same time - ...

  14. Calculating the Entropy of Solid and Liquid Metals, Based on Acoustic Data

    Science.gov (United States)

    Tekuchev, V. V.; Kalinkin, D. P.; Ivanova, I. V.

    2018-05-01

    The entropies of iron, cobalt, rhodium, and platinum are studied for the first time, based on acoustic data and using the Debye theory and rigid-sphere model, from 298 K up to the boiling point. A formula for the melting entropy of metals is validated. Good agreement between the research results and the literature data is obtained.

  15. Experience with a modular application system with central data base for scientific-technical calculations

    International Nuclear Information System (INIS)

    Ruehle, R.; Wohland, H.; Reyer, G.

    1976-01-01

    The basic principles of the application system RSYST are presented. This programme system is developed and used at the Institut fuer Kernenergetik (IKE). The data base, the structure and linkage of data, and the application language are described. The process of problem formulation typical for RSYST is discussed. The system is currently being extended to dialogue operation and remote data processing. (orig.)

  16. Automatic hearing loss detection system based on auditory brainstem response

    International Nuclear Information System (INIS)

    Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R

    2007-01-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory

  17. Biological Bases for Radiation Adaptive Responses in the Lung

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bobby R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Lin, Yong [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Wilder, Julie [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States); Belinsky, Steven [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States)

    2015-03-01

    Our main research objective was to determine the biological bases for low-dose, radiation-induced adaptive responses in the lung, and use the knowledge gained to produce an improved risk model for radiation-induced lung cancer that accounts for activated natural protection, genetic influences, and the role of epigenetic regulation (epiregulation). Currently, low-dose radiation risk assessment is based on the linear-no-threshold hypothesis, which now is known to be unsupported by a large volume of data.

  18. Considering Affective Responses towards Environments for Enhancing Location Based Services

    Science.gov (United States)

    Huang, H.; Gartner, G.; Klettner, S.; Schmidt, M.

    2014-04-01

    A number of studies in the field of environmental psychology show that humans perceive and evaluate their surroundings affectively. Some places are experienced as unsafe, while some others as attractive and interesting. Experiences from daily life show that many of our daily behaviours and decision-making are often influenced by this kind of affective responses towards environments. Location based services (LBS) are often designed to assist and support people's behaviours and decision-making in space. In order to provide services with high usefulness (usability and utility), LBS should consider these kinds of affective responses towards environments. This paper reports on the results of a research project, which studies how people's affective responses towards environments can be modelled and acquired, as well as how LBS can benefit by considering these affective responses. As one of the most popular LBS applications, mobile pedestrian navigation systems are used as an example for illustration.

  19. Analytic energy derivatives for the calculation of the first-order molecular properties using the domain-based local pair-natural orbital coupled-cluster theory

    Science.gov (United States)

    Datta, Dipayan; Kossmann, Simone; Neese, Frank

    2016-09-01

    The domain-based local pair-natural orbital coupled-cluster (DLPNO-CC) theory has recently emerged as an efficient and powerful quantum-chemical method for the calculation of energies of molecules comprised of several hundred atoms. It has been demonstrated that the DLPNO-CC approach attains the accuracy of a standard canonical coupled-cluster calculation to about 99.9% of the basis set correlation energy while realizing linear scaling of the computational cost with respect to system size. This is achieved by combining (a) localized occupied orbitals, (b) large virtual orbital correlation domains spanned by the projected atomic orbitals (PAOs), and (c) compaction of the virtual space through a truncated pair natural orbital (PNO) basis. In this paper, we report on the implementation of an analytic scheme for the calculation of the first derivatives of the DLPNO-CC energy for basis set independent perturbations within the singles and doubles approximation (DLPNO-CCSD) for closed-shell molecules. Perturbation-independent one-particle density matrices have been implemented in order to account for the response of the CC wave function to the external perturbation. Orbital-relaxation effects due to external perturbation are not taken into account in the current implementation. We investigate in detail the dependence of the computed first-order electrical properties (e.g., dipole moment) on the three major truncation parameters used in a DLPNO-CC calculation, namely, the natural orbital occupation number cutoff used for the construction of the PNOs, the weak electron-pair cutoff, and the domain size cutoff. No additional truncation parameter has been introduced for property calculation. We present benchmark calculations on dipole moments for a set of 10 molecules consisting of 20-40 atoms. We demonstrate that 98%-99% accuracy relative to the canonical CCSD results can be consistently achieved in these calculations. However, this comes with the price of tightening the

  20. DP-THOT - a calculational tool for bundle-specific decay power based on actual irradiation history

    International Nuclear Information System (INIS)

    Johnston, S.; Morrison, C.A.; Albasha, H.; Arguner, D.

    2005-01-01

    A tool has been created for calculating the decay power of an individual fuel bundle to take account of its actual irradiation history, as tracked by the fuel management code SORO. The DP-THOT tool was developed in two phases: first as a standalone executable code for decay power calculation, which could accept as input an entirely arbitrary irradiation history; then as a module integrated with SORO auxiliary codes, which directly accesses SORO history files to retrieve the operating power history of the bundle since it first entered the core. The methodology implemented in the standalone code is based on the ANSI/ANS-5.1-1994 formulation, which has been specifically adapted for calculating decay power in irradiated CANDU reactor fuel, by making use of fuel type specific parameters derived from WIMS lattice cell simulations for both 37 element and 28 element CANDU fuel bundle types. The approach also yields estimates of uncertainty in the calculated decay power quantities, based on the evaluated error in the decay heat correlations built in for each fissile isotope, in combination with the estimated uncertainty in user-supplied inputs. The method was first implemented in the form of a spreadsheet, and following successful testing against decay powers estimated using the code ORIGEN-S, the algorithm was coded in FORTRAN to create an executable program. The resulting standalone code, DP-THOT, accepts an arbitrary irradiation history and provides the calculated decay power and estimated uncertainty over any user-specified range of cooling times, for either 37 element or 28 element fuel bundles. The overall objective was to produce an integrated tool which could be used to find the decay power associated with any identified fuel bundle or channel in the core, taking into account the actual operating history of the bundles involved. The benefit is that the tool would allow a more realistic calculation of bundle and channel decay powers for outage heat sink planning.
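
    The superposition underlying such a calculation can be sketched as follows: a segment of constant power P and duration T that ended a time dt before shutdown contributes P*(F(t+dt) - F(t+dt+T)) at cooling time t, where F is the decay-power fraction after an infinitely long irradiation. The two-term kernel below is a placeholder, not the ANSI/ANS-5.1-1994 coefficients, and the bundle history is hypothetical.

```python
import numpy as np

# Illustrative decay-heat kernel: fraction of operating power appearing as
# decay power at cooling time t after an infinitely long irradiation. Real
# applications would use the ANSI/ANS-5.1-1994 fits per fissile isotope;
# these two exponential terms are placeholders, not standard coefficients.
A = np.array([0.04, 0.02])          # amplitudes (assumed)
LAM = np.array([1e-3, 1e-5])        # decay constants [1/s] (assumed)

def f_inf(t_cool):
    return np.sum(A * np.exp(-LAM * t_cool))

def decay_power(history, t_cool):
    """Superpose constant-power segments of the bundle's irradiation history.
    'history' is a list of (power_W, duration_s) tuples ending at shutdown."""
    pd, dt = 0.0, 0.0
    for power, duration in reversed(history):   # walk back from shutdown
        pd += power * (f_inf(t_cool + dt) - f_inf(t_cool + dt + duration))
        dt += duration
    return pd

# Hypothetical bundle history: 200 days at 600 kW, then 100 days at 450 kW.
history = [(600e3, 200 * 86400.0), (450e3, 100 * 86400.0)]
print(f"decay power 1 day after shutdown: {decay_power(history, 86400.0):.1f} W")
```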