Photon Splitting in a Strong Magnetic Field: Recalculation and Comparison with Previous Calculations
International Nuclear Information System (INIS)
Adler, S.L.; Schubert, C.
1996-01-01
We recalculate the amplitude for photon splitting in a strong magnetic field below the pair production threshold, using the world line path integral variant of the Bern-Kosower formalism. Numerical comparison (using programs that we have made available for public access on the Internet) shows that the results of the recalculation are identical to the earlier calculations of Adler and later of Stoneham, and to the recent recalculation by Baier, Milstein, and Shaisultanov. copyright 1996 The American Physical Society
Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History
Directory of Open Access Journals (Sweden)
Danping Wang
2017-01-01
A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based and evolutionary algorithms in terms of solution accuracy and statistical tests.
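The nonrevisit idea above can be illustrated with a much simpler stand-in for the Binary Space Partitioning fitness tree: discretize the search space into cells and skip fitness evaluations for cells already seen. This is a hedged sketch of the concept only; the cell size, function names, and set-based storage are our illustrative assumptions, not the paper's data structure.

```python
visited = set()  # stands in for the paper's BSP fitness tree

def cell_key(position, resolution=0.1):
    """Discretize a continuous position into a cell identifier."""
    return tuple(round(x / resolution) for x in position)

def should_evaluate(position):
    """Return True only the first time a cell is encountered,
    suppressing revisits the way the BSP history tree does."""
    key = cell_key(position)
    if key in visited:
        return False
    visited.add(key)
    return True
```

In the real algorithm the tree also stores fitness values so near-duplicate particles can reuse results instead of being discarded.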
Groebner bases in perturbative calculations
Energy Technology Data Exchange (ETDEWEB)
Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)
2004-10-01
In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations.
Data base to compare calculations and observations
International Nuclear Information System (INIS)
Tichler, J.L.
1985-01-01
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.
Exact-exchange-based quasiparticle calculations
International Nuclear Information System (INIS)
Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas
2000-01-01
One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society
Attribute and topology based change detection in a constellation of previously detected objects
Paglieroni, David W.; Beer, Reginald N.
2016-01-19
A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
GPU based acceleration of first principles calculation
International Nuclear Information System (INIS)
Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T
2010-01-01
We present Graphics Processing Unit (GPU)-accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated about 10 times. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that on the CPU.
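The two figures quoted (a 10x FFT speedup and a total runtime reduced to 15%) are mutually consistent under Amdahl's law only if the FFT dominates the CPU run. A quick sanity check; the function names are ours, and only the 10x and 15% values come from the abstract:

```python
def amdahl_total(fraction, speedup):
    """Remaining fraction of runtime when `fraction` of the work is
    accelerated by `speedup` and the rest is left unchanged."""
    return (1.0 - fraction) + fraction / speedup

def fft_fraction(total_after, speedup):
    """Invert Amdahl's law: what fraction must the accelerated part
    occupy so the run shrinks to `total_after` of its original time?"""
    return (1.0 - total_after) / (1.0 - 1.0 / speedup)

# With a 10x FFT speedup and a 15% total, the FFT must be ~94% of CPU time.
f = fft_fraction(0.15, 10.0)
```

So the abstract's numbers imply the FFT accounts for roughly 94% of the original CPU time, which matches the claim that it is "the most time-consuming part".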
DEFF Research Database (Denmark)
Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten
2014-01-01
The human gut microbiota is linked to a variety of human health issues and implicated in antibiotic resistance gene dissemination. Most of these associations rely on culture-independent methods, since it is commonly believed that gut microbiota cannot be easily or sufficiently cultured. Here, we...... microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter; uncultivated Oscillibacter...
Evaluation bases for calculation methods in radioecology
International Nuclear Information System (INIS)
Bleck-Neuhaus, J.; Boikat, U.; Franke, B.; Hinrichsen, K.; Hoepfner, U.; Ratka, R.; Steinhilber-Schwab, B.; Teufel, D.; Urbach, M.
1982-03-01
The seven contributions in this book deal with the state and problems of radioecology. In particular, they analyse: the propagation of radioactive materials in the atmosphere; the transfer of radioactive substances from the soil into plants and from animal feed into meat; the exposure pathways for, and high-risk groups of, the population; the uncertainties and the band width of the ingestion factor; and the treatment of questions of radioecology in practice. The calculation model is assessed, and the difficulty of laying down data in the general calculation basis is evaluated. (DG) [de]
Energy Technology Data Exchange (ETDEWEB)
Tedeschi, Enrico; Canna, Antonietta; Cocozza, Sirio; Russo, Carmela; Angelini, Valentina; Brunetti, Arturo [University ' ' Federico II' ' , Neuroradiology, Department of Advanced Biomedical Sciences, Naples (Italy); Palma, Giuseppe; Quarantelli, Mario [National Research Council, Institute of Biostructure and Bioimaging, Naples (Italy); Borrelli, Pasquale; Salvatore, Marco [IRCCS SDN, Naples (Italy); Lanzillo, Roberta; Postiglione, Emanuela; Morra, Vincenzo Brescia [University ' ' Federico II' ' , Department of Neurosciences, Reproductive and Odontostomatological Sciences, Naples (Italy)
2016-12-15
To evaluate changes in T1 and T2* relaxometry of the dentate nuclei (DN) with respect to the number of previous administrations of gadolinium-based contrast agents (GBCA). In 74 relapsing-remitting multiple sclerosis (RR-MS) patients with variable disease duration (9.8±6.8 years) and severity (Expanded Disability Status Scale score: 3.1±0.9), the DN R1 (1/T1) and R2* (1/T2*) relaxation rates were measured using two unenhanced 3D dual-echo spoiled gradient-echo sequences with different flip angles. Correlations of the number of previous GBCA administrations with DN R1 and R2* relaxation rates were tested, including gender and age effects, in a multivariate regression analysis. The DN R1 (normalized by brainstem) significantly correlated with the number of GBCA administrations (p<0.001), maintaining the same significance even when including MS-related factors. Instead, the DN R2* values correlated only with age (p=0.003), and not with GBCA administrations (p=0.67). In a subgroup of 35 patients for whom the administered GBCA subtype was known, the effect of GBCA on DN R1 appeared mainly related to linear GBCA. In RR-MS patients, the number of previous GBCA administrations correlates with the R1 relaxation rates of the DN, while R2* values remain unaffected, suggesting that the T1-shortening in these patients is related to the amount of gadolinium given. (orig.)
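The relaxation rates analyzed above are simply reciprocals of the relaxation times, with the dentate-nucleus value normalized by the brainstem. A minimal sketch of that arithmetic; the example T1 values in the test are invented for illustration, not taken from the study:

```python
def relaxation_rate(t_ms):
    """R = 1/T in 1/s, from a relaxation time given in milliseconds."""
    return 1000.0 / t_ms

def normalized_r1(t1_dn_ms, t1_brainstem_ms):
    """DN R1 divided by brainstem R1, as in the study's normalization."""
    return relaxation_rate(t1_dn_ms) / relaxation_rate(t1_brainstem_ms)
```

A shorter T1 in the dentate nucleus (gadolinium retention) raises R1 and hence this normalized ratio.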
Calculating Traffic based on Road Sensor Data
Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Rata, Debanshu; Sikora, Monika
2014-01-01
Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for
Criticality criteria for submissions based on calculations
International Nuclear Information System (INIS)
Burgess, M.H.
1975-06-01
Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations on its application are made. (author)
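A common way to "make allowance" for calculational errors in criticality safety is to pad the computed k-effective with a validation bias and a combined uncertainty before comparing against an administrative limit. The recipe below (a 0.95 limit and two-sigma padding) is a widely used convention sketched for illustration; the report itself does not specify these numbers:

```python
def is_subcritical_safe(k_calc, bias, sigma_bias, sigma_calc,
                        k_limit=0.95, n_sigma=2.0):
    """Check that the calculated k-effective, corrected for validation
    bias and padded by the combined (quadrature) uncertainty, stays
    below the administrative subcritical limit."""
    combined = (sigma_bias ** 2 + sigma_calc ** 2) ** 0.5
    k_adjusted = k_calc + bias + n_sigma * combined
    return k_adjusted < k_limit
```

Each error source the report enumerates would contribute either to the bias term or to one of the uncertainty terms.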
[Biometric bases: basic concepts of probability calculation].
Dinya, E
1998-04-26
The author gives an outline of the basic concepts of probability theory. The basics of event algebra, the definition of probability, the classical probability model, and the random variable are presented.
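The classical probability model mentioned above assigns P(A) = |A| / |Omega| when all outcomes are equally likely, which is easy to demonstrate directly; the two-dice example is ours, not from the article:

```python
from fractions import Fraction
from itertools import product

def classical_probability(event, sample_space):
    """Classical model: P(A) = |A| / |Omega| for equally likely outcomes."""
    favorable = sum(1 for outcome in sample_space if event(outcome))
    return Fraction(favorable, len(sample_space))

# P(sum of two fair dice equals 7): 6 favorable outcomes out of 36.
dice = list(product(range(1, 7), repeat=2))
p = classical_probability(lambda roll: sum(roll) == 7, dice)
```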
Kwon, Ji-Sun; Yoon, Jungsoon; Kim, Yeon-Jung; Kang, Kyuho; Woo, Sunje; Jung, Dea-Im; Song, Man Ki; Kim, Eun-Ha; Kwon, Hyeok-Il; Choi, Young Ki; Kim, Jihye; Lee, Jeewon; Yoon, Yeup; Shin, Eui-Cheol; Youn, Jin-Won
2014-08-01
Growing concerns about unpredictable influenza pandemics require a broadly protective vaccine against diverse influenza strains. One promising approach is a T cell-based vaccine, but the narrow breadth of T-cell immunity, due to the immunodominance hierarchy established by previous influenza infection, and efficacy demonstrated only under mild challenge conditions are important hurdles to overcome. To model the human T-cell immunodominance hierarchy in an experimental setting, influenza-primed C57BL/6 mice were boosted with a mixture of vaccinia recombinants, individually expressing consensus sequences from avian, swine, and human isolates of influenza internal proteins. As determined by IFN-γ ELISPOT and polyfunctional cytokine secretion, the vaccinia recombinants of influenza expanded the breadth of T-cell responses to include subdominant and even minor epitopes. Vaccine groups were successfully protected against 100 LD50 challenges with PR/8/34 and highly pathogenic avian influenza H5N1, which contained the identical dominant NP366 epitope. Interestingly, in challenge with pandemic A/Cal/04/2009 containing mutations in the dominant epitope, only the group vaccinated with rVV-NP + PA showed improved protection. Taken together, a vaccinia-based influenza vaccine expressing conserved internal proteins improved the breadth of influenza-specific T-cell immunity and provided heterosubtypic protection against immunologically close as well as distant influenza strains. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The impact of previous knee injury on force plate and field-based measures of balance.
Baltich, Jennifer; Whittaker, Jackie; Von Tscharner, Vinzenz; Nettel-Aguirre, Alberto; Nigg, Benno M; Emery, Carolyn
2015-10-01
Individuals with post-traumatic osteoarthritis demonstrate increased sway during quiet stance. The prospective association between balance and disease onset is unknown. Improved understanding of balance in the period between joint injury and disease onset could inform secondary prevention strategies to prevent or delay the disease. This study examines the association between youth sport-related knee injury and balance, 3-10 years post-injury. Participants included 50 individuals (ages 15-26 years) with a sport-related intra-articular knee injury sustained 3-10 years previously and 50 uninjured age-, sex- and sport-matched controls. Force-plate measures during single-limb stance (center-of-pressure 95% ellipse area, path length, excursion, entropic half-life) and field-based balance scores (triple single-leg hop, star excursion, unipedal dynamic balance) were collected. Descriptive statistics (mean within-pair difference; 95% confidence intervals) were used to compare groups. Linear regression (adjusted for injury history) was used to assess the relationship between ellipse area and field-based scores. Injured participants on average demonstrated greater medio-lateral excursion [mean within-pair difference (95% confidence interval); 2.8 mm (1.0, 4.5)], more regular medio-lateral position [10 ms (2, 18)], and shorter triple single-leg hop distances [-30.9% (-8.1, -53.7)] than controls, while no between-group differences existed for the remaining outcomes. After taking into consideration injury history, triple single-leg hop scores demonstrated a linear association with ellipse area (β = 0.52, 95% confidence interval 0.01, 1.01). On average the injured participants adjusted their position less frequently and demonstrated a larger magnitude of movement during single-limb stance compared to controls. These findings support the evaluation of balance outcomes in the period between knee injury and post-traumatic osteoarthritis onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Late preterm birth and previous cesarean section: a population-based cohort study.
Yasseen Iii, Abdool S; Bassil, Kate; Sprague, Ann; Urquia, Marcelo; Maguire, Jonathon L
2018-02-21
Late preterm birth (LPB) is increasingly common and associated with higher morbidity and mortality than term birth. Yet, little is known about the influence of previous cesarean section (PCS) and the occurrence of LPB in subsequent pregnancies. We aim to evaluate this association along with the potential mediation by cesarean sections in the current pregnancy. We use population-based birth registry data (2005-2012) to establish a cohort of live born singleton infants born between 34 and 41 gestational weeks to multiparous mothers. PCS was the primary exposure, LPB (34-36 weeks) was the primary outcome, and an unplanned or emergency cesarean section in the current pregnancy was the potential mediator. Associations were quantified using propensity weighted multivariable Poisson regression, and mediating associations were explored using the Baron-Kenny approach. The cohort included 481,531 births, 21,893 (4.5%) were LPB, and 119,983 (24.9%) were predated by at least one PCS. Among mothers with at least one PCS, 6307 (5.26%) were LPB. There was increased risk of LPB among women with at least one PCS (adjusted Relative Risk (aRR): 1.20 (95%CI [1.16, 1.23]). Unplanned or emergency cesarean section in the current pregnancy was identified as a strong mediator to this relationship (mediation ratio = 97%). PCS was associated with higher risk of LPB in subsequent pregnancies. This may be due to an increased risk of subsequent unplanned or emergency preterm cesarean sections. Efforts to minimize index cesarean sections may reduce the risk of LPB in subsequent pregnancies.
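The crude (unadjusted) relative risk can be recomputed from the counts given in the abstract; it comes out near 1.22, close to but not identical to the reported aRR of 1.20, which was obtained with propensity-weighted Poisson regression rather than this simple ratio. Only the counts are from the abstract; the function is our sketch:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Crude relative risk: ratio of the outcome risk in the exposed
    group to the risk in the unexposed group."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# From the abstract: 6307 LPB among 119,983 births with a previous cesarean;
# the remaining LPB cases and births form the unexposed group.
rr = relative_risk(6307, 119_983, 21_893 - 6307, 481_531 - 119_983)
```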
International Nuclear Information System (INIS)
Shaikh, S.; Devrajani, B.R.; Kalhoro, M.
2012-01-01
Objective: To determine the efficacy of peg-interferon-based therapy in patients refractory to previous conventional interferon-based treatment, and the factors predicting sustained viral response (SVR). Study Design: Analytical study. Place and Duration of Study: Medical Unit IV, Liaquat University Hospital, Jamshoro, from July 2009 to June 2011. Methodology: This study included consecutive patients with hepatitis C who were previously treated with conventional interferon-based treatment for 6 months but were either non-responders, relapsed or had virologic breakthrough, and who had stage ≥ 2 fibrosis on liver biopsy. All eligible patients received peg-interferon at a dosage of 180 μg weekly with ribavirin thrice a day for 6 months. Sustained viral response (SVR) was defined as absence of HCV RNA at twenty-four weeks after treatment. All data was processed on SPSS version 16. Results: Out of 450 patients enrolled in the study, 192 were excluded on the basis of minimal fibrosis (stages 0 and 1). Two hundred and fifty-eight patients fulfilled the inclusion criteria and 247 completed the course of peg-interferon treatment. One hundred and sixty-one (62.4%) were males and 97 (37.6%) were females. The mean age was 39.9 ± 6.1 years, haemoglobin was 11.49 ± 2.45 g/dl, platelet count was 127.2 ± 50.6 ×10³/mm³, and ALT was 99 ± 65 IU/L. SVR was achieved in 84 (32.6%). A strong association was found between SVR and the pattern of response (p = 0.001), the degree of fibrosis, and early viral response (p = 0.001). Conclusion: Peg-interferon-based treatment is an effective and safe treatment option for patients refractory to conventional interferon-based treatment. (author)
Volume-based geometric modeling for radiation transport calculations
International Nuclear Information System (INIS)
Li, Z.; Williamson, J.F.
1992-01-01
Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for generation of basic dose calculation data that are inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which involve describing complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as excluding common boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of the boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes that accommodate shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes upon Siddon's approach. Two 2-D graphic display tools have been developed to help debug a given geometric model. In this paper, the mathematical basis of our system is described, contrasted with other approaches, and illustrated with examples.
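The set-theoretic core of such a solid modeler can be sketched by representing each region as a membership predicate and combining predicates with Boolean operators. This toy version (the names and the `<=` boundary convention are ours) deliberately ignores the shared-boundary and topological-consistency issues that the paper's system is specifically designed to handle:

```python
from typing import Callable, Tuple

Point = Tuple[float, float, float]
Region = Callable[[Point], bool]  # membership predicate: point -> bool

def sphere(cx, cy, cz, r) -> Region:
    """Closed ball of radius r centered at (cx, cy, cz)."""
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def union(a: Region, b: Region) -> Region:
    return lambda p: a(p) or b(p)

def intersection(a: Region, b: Region) -> Region:
    return lambda p: a(p) and b(p)

def difference(a: Region, b: Region) -> Region:
    return lambda p: a(p) and not b(p)

# A spherical shell: radius-2 ball with a concentric radius-1 core removed.
shell = difference(sphere(0, 0, 0, 2), sphere(0, 0, 0, 1))
```

Point classification like this is exactly what a ray tracer needs when assigning trajectory segments to regions.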
Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments
Directory of Open Access Journals (Sweden)
Neil Aldrin
2017-06-01
The aims of the present research are: (1) to establish that product buying decisions can occur; (2) to understand how product buying decisions occur among Lazada e-commerce customers; and (3) to examine how previous buyers' comments can increase product buying decisions on Lazada e-commerce. This research uses a qualitative method, which investigates prior studies and develops assumptions and discussion from them so that further analyses can widen ideas and opinions. The results show that a product with many ratings and reviews will trigger other buyers to purchase that product. The conclusion is that a product buying decision may occur because of several stages that precede the decision: recognizing and searching for problems, identifying needs, collecting information, evaluating alternatives, and evaluating after buying. In those stages, the buying decision on Lazada e-commerce is supported by price, promotion, service, and brand.
Calculation of electromagnetic parameter based on interpolation algorithm
International Nuclear Information System (INIS)
Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan
2015-01-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixed in a paraffin base, this paper studied two different interpolation methods for electromagnetic parameters: Lagrange interpolation and Hermite interpolation. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters with limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
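Lagrange interpolation, one of the two methods compared, fits a polynomial through the sample points using values only; Hermite interpolation additionally matches derivatives at the samples, which is why it can track smoothly varying electromagnetic parameters more accurately. A minimal Lagrange evaluator, our illustration rather than the paper's code:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at the abscissa x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial factor
        total += term
    return total
```

For measured permittivity/permeability samples, `xs` would be the filler fractions (or frequencies) and `ys` the measured parameter values.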
Programmable calculator: alternative to minicomputer-based analyzer
International Nuclear Information System (INIS)
Hochel, R.C.
1979-01-01
Described are a number of typical field and laboratory counting systems that use standard stand-alone multichannel analyzers (MCA) interfaced to a Hewlett-Packard Company (HP 9830) programmable calculator. Such systems can offer significant advantages in cost and flexibility over a minicomputer-based system. Because most laboratories tend to accumulate MCAs over the years, the programmable calculator also offers an easy way to upgrade the laboratory while making optimum use of existing systems. Software programs are easily tailored to fit a variety of general or specific applications. The only disadvantage of the calculator versus a computer-based system is in speed of analysis; however, for most applications this handicap is minimal. The applications discussed give a brief overview of the power and flexibility of the MCA-calculator approach to automated counting and data reduction.
Moran, E M L; French, R A; Kennedy, R R
2011-09-01
Predicting workforce requirements is a difficult but necessary part of health resource planning. A 'snapshot' workforce survey undertaken in 2002 examined issues that New Zealand anaesthesia trainees expected would influence their choice of future workplace. We have restudied the same cohort to see if that workforce survey was a good predictor of outcome. Seventy (51%) of 138 surveys were completed in 2009 compared with 100 (80%) of 138 in the 2002 survey. Eighty percent of the 2002 respondents planned consultant positions in New Zealand. We found 64% of respondents were working in New Zealand (P New Zealand based respondents but only 40% of those living outside New Zealand agreed or strongly agreed with this statement (P New Zealand but was important for only 2% of those resident in New Zealand (P New Zealand were predominantly between NZ$150,000 and $200,000 while those overseas received between NZ$300,000 and $400,000. Of those that are resident in New Zealand, 84% had studied in a New Zealand medical school compared with 52% of those currently working overseas (P < 0.01). Our study shows that stated career intentions in a group do not predict the actual group outcomes. We suggest that 'snapshot' studies examining workforce intentions are of little value for workforce planning. However we believe an ongoing program matching career aspirations against career outcomes would be a useful tool in workforce planning.
THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS
Directory of Open Access Journals (Sweden)
Anna CEBOTARI
2017-11-01
Accounting for post-employment benefits based on actuarial calculations at present remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting is, in fact, an evolving practice. Because national accounting standards have been adapted to international ones, which in turn require the valuation of assets and liabilities at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in the entity's financial statements and provided to its internal and external users. Hence arises the need to report highly reliable information, which can be provided by applying actuarial calculations.
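The kind of actuarial calculation the article refers to typically combines discounting with survival (or continued-employment) probabilities. A hedged sketch: the level annual benefit, the discount rate, and the flat per-year survival probability are simplifying assumptions of ours, not figures from the article:

```python
def present_value(payment, years, discount_rate):
    """Present value of a level benefit paid at the end of each year."""
    return sum(payment / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

def expected_benefit_obligation(payment, years, discount_rate, survival_prob):
    """Weight each year's payment by the probability it is actually paid
    (a flat per-year survival probability, for illustration)."""
    return sum(payment * survival_prob ** t / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))
```

Real valuations replace the flat survival probability with mortality-table rates and add salary growth and vesting assumptions.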
Fischer, Alexander H.; Wang, Timothy S.; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L.
2016-01-01
Background: Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit UV exposure. Objective: To determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. Methods: We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (95% CI), taking into account the complex survey design. Results: Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% versus 27.0%; aPOR = 1.41; 95% CI 1.16–1.71), long sleeves (20.5% versus 7.7%; aPOR = 1.55; 95% CI 1.21–1.98), a wide-brimmed hat (26.1% versus 10.5%; aPOR = 1.52; 95% CI 1.24–1.87), and sunscreen (53.7% versus 33.1%; aPOR = 2.11; 95% CI 1.73–2.59), but did not have significantly lower odds of recent sunburn (29.7% versus 40.7%; aPOR = 0.95; 95% CI 0.77–1.17). Among subjects with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Limitations: Self-reported cross-sectional data and unavailable information quantifying regular sun exposure. Conclusion: Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. PMID:27198078
Data base for terrestrial food pathways dose commitment calculations
International Nuclear Information System (INIS)
Bailey, C.E.
1979-01-01
A computer program is under development to allow calculation of the dose-to-man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain, growth and consumption factors for a variety of radionuclides
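In Regulatory Guide 1.109-style models, the ingestion dose from one terrestrial food pathway reduces to concentration x annual intake x dose coefficient. A minimal sketch; all numeric values below are illustrative rather than taken from the data base described, and the 1.3e-8 Sv/Bq figure is the commonly tabulated adult ingestion dose coefficient for Cs-137:

```python
def ingestion_dose(concentration_bq_per_kg, intake_kg_per_year,
                   dose_coeff_sv_per_bq):
    """Committed dose (Sv/y) from one food pathway: activity ingested
    per year times the ingestion dose coefficient."""
    return concentration_bq_per_kg * intake_kg_per_year * dose_coeff_sv_per_bq

# Illustrative: 50 Bq/kg in vegetables, 80 kg/y consumed, Cs-137 coefficient.
dose = ingestion_dose(50.0, 80.0, 1.3e-8)  # Sv per year
```

A full pathway model would first compute the food concentration from deposition, transfer factors, and holdup times; this sketch covers only the final step.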
Software-Based Visual Loan Calculator For Banking Industry
Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A visual loan calculator for the banking industry is very necessary in a modern banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET programming was done and implemented, and the software proved satisfactory.
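The interest calculation at the core of such a loan calculator is the standard level-payment annuity formula. The paper's implementation is in VB.NET; the Python transcription below is our illustrative equivalent, not the authors' code:

```python
def monthly_payment(principal, annual_rate, months):
    """Level monthly payment for an amortized loan (annuity formula)."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months  # zero-interest edge case
    return principal * r / (1.0 - (1.0 + r) ** -months)

def total_interest(principal, annual_rate, months):
    """Interest paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, months) * months - principal
```

For example, a 100,000 loan at 6% annual interest over 30 years requires a payment just under 600 per month.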
Calculations of accelerator-based neutron sources characteristics
International Nuclear Information System (INIS)
Tertytchnyi, R.G.; Shorin, V.S.
2000-01-01
Accelerator-based quasi-monoenergetic neutron sources (the T(p,n), D(d,n), T(d,n) and ⁷Li(p,n) reactions) are widely used in experiments measuring the interaction cross-sections of fast neutrons with nuclei. The present work describes a code for calculating the yields and spectra of neutrons generated in (p,n)- and (d,n)-reactions on targets of light nuclei (D, T, ⁷Li). The peculiarities of the stopping processes of charged particles (with incident energy up to 15 MeV) in multilayer and multicomponent targets are taken into account. The code is implemented as 'SOURCE', a subroutine for the well-known MCNP code. Some calculation results for the most popular accelerator-based neutron sources are given. (authors)
New Products and Technologies, Based on Calculations Developed Areas
Directory of Open Access Journals (Sweden)
Gheorghe Vertan
2013-09-01
Statistics show that, at present, the only prosperous countries with a high GDP per capita are those that possess and intensively exploit large natural resources and/or mass-produce and export products based on corresponding patented inventions. Without great natural wealth, and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from top experience in the country, some of it patented, new and competitive technologies and patentable, exportable products can be developed based on exact calculations of developed areas, such as the double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.
Plasma density calculation based on the HCN waveform data
International Nuclear Information System (INIS)
Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao
2004-01-01
A method to improve the plasma density calculation is introduced, using the base voltage and the phase zero points obtained from the HCN interference waveform data. The method improves signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply, excludes the effect of noise according to the possible rate of change of the signal's phase, and makes the base voltage more accurate by dynamic data processing. (authors)
Modeling and Calculation of Dent Based on Pipeline Bending Strain
Directory of Open Access Journals (Sweden)
Qingshan Feng
2016-01-01
The bending strain of long-distance oil and gas pipelines can be calculated from in-line inspection data gathered with an inertial measurement unit (IMU). The bending strain is used to evaluate the strain and displacement of the pipeline. During bending strain inspection, a dent in the pipeline can affect the bending strain data as well. This paper presents a novel method to model and calculate pipeline dents based on the bending strain. The technique takes inertial mapping data from in-line inspection and calculates the depth of a dent in the pipeline using Bayesian statistical theory and a neural network. To verify the accuracy of the proposed method, an in-line inspection tool was used to inspect a pipeline and gather data. The dent calculation shows that the method is accurate, with a mean relative error of 2.44%. The new method provides not only the strain at the pipeline dent but also the depth of the dent, which benefits pipeline integrity management and safety.
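The 2.44% figure quoted is a mean relative error between predicted and measured dent depths. For reference, that metric is computed as follows; the function is our sketch and the depth values in the test are made up, not the paper's inspection data:

```python
def mean_relative_error(predicted, actual):
    """Mean of |predicted - actual| / |actual| over paired samples."""
    if len(predicted) != len(actual) or not actual:
        raise ValueError("need equal-length, non-empty sequences")
    return sum(abs(p - a) / abs(a)
               for p, a in zip(predicted, actual)) / len(actual)
```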
DEFF Research Database (Denmark)
Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms
2009-01-01
PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify...
The internal radiation dose calculations based on Chinese mathematical phantom
International Nuclear Information System (INIS)
Wang Haiyan; Li Junli; Cheng Jianping; Fan Jiajin
2006-01-01
Internal radiation dose calculations based on Chinese data are becoming increasingly important with the development of nuclear medicine. The MIRD method, developed and refined by the Society of Nuclear Medicine (USA), is based on European and American mathematical phantoms and does not fit the Chinese population well. The transport of γ-rays in the Chinese mathematical phantom was simulated with the Monte Carlo method using the MCNP4C program. The specific absorbed fractions (Φ) for the Chinese phantom were calculated and a Chinese Φ database was created. The results were compared with the values recommended by ORNL. The method was validated by the agreement obtained when the target organ was the same as the source organ; otherwise, the differences were due to the different phantoms and the choice of different physical models. (authors)
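For one source-target organ pair, the MIRD schema referred to in the abstract reduces to multiplying the cumulated activity, the energy emitted per decay, and the specific absorbed fraction Φ. A minimal sketch, with purely illustrative numbers (the cumulated activity, photon energy, and absorbed fraction below are invented, not values from the Chinese phantom database):

```python
# MIRD-style mean absorbed dose for one source-target organ pair (sketch).
# All organ data below are illustrative assumptions, not phantom values.

MEV_TO_J = 1.602e-13  # joules per MeV

def mird_dose(cumulated_activity_bq_s, energy_per_decay_mev, saf_per_kg):
    """Mean absorbed dose (Gy) to a target organ from one source organ.

    saf_per_kg: specific absorbed fraction Phi = absorbed fraction / target mass (1/kg).
    """
    return cumulated_activity_bq_s * energy_per_decay_mev * MEV_TO_J * saf_per_kg

# Example: self-irradiation of a 1.8 kg organ, illustrative numbers.
A_tilde = 3.0e9          # cumulated activity, Bq*s
delta = 0.35             # mean gamma energy emitted per decay, MeV
phi = 0.30 / 1.8         # absorbed fraction 0.30 in a 1.8 kg organ -> SAF in 1/kg
dose = mird_dose(A_tilde, delta, phi)
```

The organ-level Φ values are exactly what the abstract's Chinese database tabulates; summing this product over all source organs gives the total dose to a target.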
A density gradient theory based method for surface tension calculations
DEFF Research Database (Denmark)
Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios
2016-01-01
The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods together with the full density gradient theory have been used to calculate the surface tension of various...
Validation of GPU based TomoTherapy dose calculation engine.
Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond
2012-04-01
The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurements with an ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in a heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.
Metric for Calculation of System Complexity based on its Connections
Directory of Open Access Journals (Sweden)
João Ricardo Braga de Paiva
2017-02-01
Full Text Available This paper proposes a methodology based on system connections to calculate its complexity. Two case studies are proposed: the dining Chinese philosophers' problem and the distribution center. Both are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results present (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of the system performance, and (iii) the output sensitivity to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.
International Nuclear Information System (INIS)
Falero, B.; Bueno, P.; Chaves, M. A.; Ordiales, J. M.; Villafana, O.; Gonzalez, M. J.
2013-01-01
The aim of this study was to develop a software application that performs shielding calculations for radiology rooms depending on the type of equipment. The calculation is done by the user selecting among the method proposed in Guide 5.11, the methods of Reports 144 and 147, and the methodology given by the Portuguese Health Ministry. (Author)
Jet identification based on probability calculations using Bayes' theorem
International Nuclear Information System (INIS)
Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.
1994-11-01
The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification a neural network was used, and the dependence on the jet production process and on the fragmentation model was tested. Instead of working with the separation variables directly, they have been used to calculate the probability of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
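The Bayes'-theorem step described above can be sketched as follows; the likelihood and prior values are invented for illustration (the actual analysis derives them from fragmentation variables and quark/gluon production fractions):

```python
# Posterior probability of a jet class from per-class likelihoods via Bayes'
# theorem. Numbers are illustrative assumptions, not LEP/HERA measurements.

def jet_posteriors(likelihoods, priors):
    """Posterior P(class | x) from P(x | class) and priors P(class)."""
    joint = {c: likelihoods[c] * priors[c] for c in likelihoods}
    norm = sum(joint.values())  # total probability P(x)
    return {c: j / norm for c, j in joint.items()}

# Example: the observed separation variable is 3x as likely for a gluon jet,
# while quark jets are assumed to be produced more often.
post = jet_posteriors({"quark": 0.1, "gluon": 0.3}, {"quark": 0.6, "gluon": 0.4})
```

Because the output is a normalized probability per jet type, combining the energy- and fragmentation-based identifications amounts to multiplying their likelihood factors before normalizing.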
The PHREEQE Geochemical equilibrium code data base and calculations
International Nuclear Information System (INIS)
Andersoon, K.
1987-01-01
Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested with the PHREEQE code in a version run on an IBM XT computer. The work so far has shown that the PHREEQE code mostly gives satisfactory results for the speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties in some data. Quality assurance and a check on the consistency of the data base are also needed. Further work with the data base should include: an extension to fission products; an extension to engineering materials; an extension to ligands other than hydroxide and carbonate; inclusion of more mineral phases; inclusion of enthalpy data; a control of primary references in order to decide whether values from different compilations are taken from the same primary reference; and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)
A drainage data-based calculation method for coalbed permeability
International Nuclear Information System (INIS)
Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao
2013-01-01
This paper establishes a drainage-data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during production. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, in which both gas and water are produced, the gas-water production ratio is introduced to eliminate the effect of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. By derivation, the relationship between permeability and surface cumulative liquid production can be described as a single-variable cubic equation. The trend for ten wells in the southern Qinshui coalbed methane field shows that the permeability initially declines and then increases. The results show an exponential relationship between permeability and cumulative water production. The relationship between permeability and cumulative gas production is represented by a linear curve, and that between permeability and surface cumulative liquid production by a cubic polynomial curve. The regression result for permeability and surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)
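The single-variable cubic relation between permeability and surface cumulative liquid production can be recovered by ordinary polynomial regression; a sketch with synthetic data generated from a known cubic (not Qinshui field data):

```python
# Fit the cubic relation k = a3*Q^3 + a2*Q^2 + a1*Q + a0 between permeability k
# and cumulative liquid production Q. Data are synthetic, from a known cubic.
import numpy as np

def fit_cubic(cum_liquid, permeability):
    """Least-squares cubic fit; returns coefficients [a3, a2, a1, a0]."""
    return np.polyfit(cum_liquid, permeability, deg=3)

Q = np.linspace(0.0, 10.0, 20)                      # cumulative liquid (arbitrary units)
k_true = 0.02 * Q**3 - 0.3 * Q**2 + 0.5 * Q + 5.0   # decline-then-rise trend
coeffs = fit_cubic(Q, k_true)
k_fit = np.polyval(coeffs, Q)
```

With field data, the quality of this regression against the derived cubic is exactly the agreement check the abstract describes.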
Goal based mesh adaptivity for fixed source radiation transport calculations
International Nuclear Information System (INIS)
Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.
2013-01-01
Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved which solves the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers, however, the derivation of the residuals are dependent on the underlying finite element scheme which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed they are combined using a novel approach. The two metrics are combined by forming the minimum ellipsoid that covers both the error metrics rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed
Tartaglione, Luciana; Gambuti, Angelita; De Cicco, Paola; Ercolano, Giuseppe; Ianaro, Angela; Taglialatela-Scafati, Orazio; Moio, Luigi; Forino, Martino
2018-03-01
Vitis vinifera cv. Falanghina is an ancient grape variety of Southern Italy. A thorough phytochemical analysis of Falanghina leaves was conducted to investigate their specialised metabolite content. Along with already known molecules, such as caftaric acid, quercetin-3-O-β-d-glucopyranoside, quercetin-3-O-β-d-glucuronide, kaempferol-3-O-β-d-glucopyranoside and kaempferol-3-O-β-d-glucuronide, a previously undescribed biflavonoid was identified. For this last compound, a moderate bioactivity against metastatic melanoma cell proliferation was discovered. This datum can be of interest to researchers studying human melanoma. The high content of antioxidant glycosylated flavonoids supports the exploitation of grapevine leaves as an inexpensive source of natural products for the food industry and for both pharmaceutical and nutraceutical companies. Additionally, this study offers important insights into the plant's physiology, thus prompting possible technological research on genetic selection based on the vine's adaptation to specific pedo-climatic environments. Copyright © 2017 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Torres Pozas, S.; Monja Rey, P. de la; Sanchez Carrasca, M.; Yanez Lopez, D.; Macias Verde, D.; Martin Oliva, R.
2011-01-01
In recent years, progress in cancer treatment with ionizing radiation has made it possible to deliver higher doses to smaller and better-shaped volumes, making it necessary to take new aspects into account in the calculation of structural barriers. Furthermore, given that forecasts suggest that in the near future a large number of accelerators will be installed or existing ones modified, we believe a tool to estimate the thickness of the structural barriers of treatment rooms is useful. The shielding calculation methods are based on the DIN 6847-2 standard and the recommendations given in NCRP Report 151. In our experience we have found only estimates originating from the DIN standard. Therefore, we considered it interesting to develop an application that incorporates the formulation suggested by the NCRP and, together with previous work based on the DIN rules, allows us to establish a comparison between the results of both methods. (Author)
Method of characteristics - Based sensitivity calculations for international PWR benchmark
International Nuclear Information System (INIS)
Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.
2013-01-01
A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of the fission intensity for the international PWR benchmark are performed. (authors)
Energy Technology Data Exchange (ETDEWEB)
Kulicke, B [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany); Schlegel, S [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany)
1993-06-28
An important part of network operation management is the assessment and maintenance of the security of supply. So far the control personnel have only been supported by static network analyses and safety calculations. The authors describe an expert system for dynamic network safety calculations which is coupled to a transputer-based real-time simulation program. They also introduce the system concept and the most important functions of the expert system. (orig.)
Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations
Boyd, Steven J
2006-01-01
Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For a HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...
International Nuclear Information System (INIS)
Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu
2009-01-01
In order to assess the feasibility of calculating mine radiation doses based on γ field theory, this paper calculates the γ radiation dose of a mine by means of a γ field theory based method. The results show that the calculated radiation dose has a small error and can be used to monitor the mine's nuclear radiation environment. (authors)
Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine
International Nuclear Information System (INIS)
Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois
2013-01-01
Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose of up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing
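The gamma analysis used for the dose-matrix comparison can be illustrated with a minimal 1-D global gamma index under the same 3%/3 mm criteria; the dose profiles below are synthetic, and a clinical implementation operates on full 3-D dose matrices:

```python
# Minimal 1-D global gamma index with 3%/3 mm criteria (illustrative sketch).
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Gamma value at each reference point; dd is the dose criterion (fraction
    of the global maximum), dta the distance-to-agreement criterion in mm."""
    d_norm = dd * ref_dose.max()
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        # Squared gamma against every evaluated point; keep the minimum.
        gamma_sq = ((eval_pos - rp) / dta) ** 2 + ((eval_dose - rd) / d_norm) ** 2
        gammas.append(np.sqrt(gamma_sq.min()))
    return np.array(gammas)

pos = np.linspace(0.0, 50.0, 101)                   # positions in mm
ref = 100.0 * np.exp(-((pos - 25.0) / 10.0) ** 2)   # synthetic reference profile
ev = 1.01 * ref                                     # evaluated dose: uniform +1%
g = gamma_1d(pos, ref, pos, ev)
pass_rate = float(np.mean(g <= 1.0))                # fraction of points with gamma <= 1
```

A uniform 1% dose difference stays well inside the 3% dose criterion, so every point passes; a passing rate is exactly the "pixels passing" statistic the abstract reports.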
Electric field calculations in brain stimulation based on finite elements
DEFF Research Database (Denmark)
Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel
2013-01-01
The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed, to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of accurate head models and extending to the integration of the models into the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized head models, and demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...
DEFF Research Database (Denmark)
Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu
2014-01-01
BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...
Application of CFD based wave loads in aeroelastic calculations
DEFF Research Database (Denmark)
Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik
2014-01-01
Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver OceanWave3D and in a coupled domain-decomposed potential-flow CFD solver. The surface elevations of the calculated wave ... domain-decomposed potential-flow CFD solver result in different dynamic forces in the tower and monopile, even though the static forces on a fixed monopile are similar. The changes are due to differences in the force profiles and wave steepness in the two solvers. The results indicate that an accurate...
Inverse boundary element calculations based on structural modes
DEFF Research Database (Denmark)
Juhl, Peter Møller
2007-01-01
The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...
Baker, Stuart G
2018-02-20
A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint, and that yields conclusions about the effect of treatment on the true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
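The statistical core of the first two criteria rests on a zero-intercept linear model relating surrogate and true treatment effects across previous trials. A simplified sketch follows: a fixed-slope least-squares fit only, whereas the article's actual model includes random effects and prediction intervals, and the trial effects below are invented:

```python
# Zero-intercept linear fit of true-endpoint effects on surrogate-endpoint
# effects across previous trials, used to predict the true effect in a new
# trial. Trial effect values are illustrative assumptions.
import numpy as np

def zero_intercept_slope(surrogate_effects, true_effects):
    """Least-squares slope b for the no-intercept model: true = b * surrogate."""
    s = np.asarray(surrogate_effects, dtype=float)
    t = np.asarray(true_effects, dtype=float)
    return float(s @ t / (s @ s))

prev_surr = np.array([0.10, 0.25, 0.40, 0.55])   # surrogate effects, previous trials
prev_true = np.array([0.05, 0.13, 0.19, 0.28])   # true-endpoint effects, same trials
b = zero_intercept_slope(prev_surr, prev_true)
# Predicted true-endpoint effect for a new trial with surrogate effect 0.30:
predicted_new_true = b * 0.30
```

The zero intercept encodes the requirement that a null surrogate effect predicts a null true effect; the uncertainty of this prediction is what the sample size multiplier and prediction separation score quantify.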
Calculation of generalized Lorenz-Mie theory based on the localized beam models
International Nuclear Information System (INIS)
Jia, Xiaowei; Shen, Jianqi; Yu, Haitao
2017-01-01
It has been proved that localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of relevant physical quantities is a challenge for its practical applications due to the limit of computer resources. The study presents an improved algorithm of the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them with pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust. - Highlights: • In this work, we introduce the proper pre-factors to the Bessel functions, BSCs and the angular functions. With this improvement, all the quantities involved in the numerical calculation are scaled into a reasonable range of values so that the algorithm can be used for computing the physical quantities of the GLMT. • The algorithm is not only an improvement in numerical technique, it also implies that the set of basic functions involved in the electromagnetic scattering (and sonic scattering) can be reasonably chosen. • The algorithms of the GLMT computations introduced in previous references suggested that the order of the n and m sums is interchanged. In this work, the sum of azimuth modes is performed for each partial wave. This offers the possibility to speed up the computation, since the sum of partial waves can be optimized according to the illumination conditions and the sum of azimuth modes can be truncated by selecting a criterion discussed in . • Numerical results show that the algorithm is efficient, reliable and robust, even in very exotic cases. The algorithm presented in this paper is based on the original localized approximation and it can also be used for the
Calculation laboratory: game based learning in exact discipline
Directory of Open Access Journals (Sweden)
André Felipe de Almeida Xavier
2017-12-01
Full Text Available The Calculation Laboratory arose from the need to give meaning to the learning of students entering Engineering courses, in the discipline of Differential Calculus, in semester 1/2016. After good results were obtained, the activity was also extended to the Analytical Geometry and Linear Algebra (GAAL) and Integral Calculus classes, so that these incoming students could continue the process. Historically, students have some difficulty with these contents, and it is necessary to give meaning to their learning. Given this scenario, the Calculation Laboratory aims to give meaning to the contents covered, granting students autonomy, with the teacher as tutor, an intermediary between the student and knowledge, creating various practical, playful and innovative activities to assist in this process. This article reports on the activities created to facilitate the running of the Calculation Laboratory, and presents the results obtained and measured after its application. Through the proposed activities, the student gradually gains autonomy in the search for knowledge.
DEFF Research Database (Denmark)
Andreasen, Charlotte Hartig; Nielsen, Jonas B; Refsgaard, Lena
2013-01-01
Cardiomyopathies are a heterogeneous group of diseases with various etiologies. We focused on three genetically determined cardiomyopathies: hypertrophic (HCM), dilated (DCM), and arrhythmogenic right ventricular cardiomyopathy (ARVC). Eighty-four genes have so far been associated with these cardiomyopathies, but the disease-causing effect of reported variants is often dubious. In order to identify possible false-positive variants, we investigated the prevalence of previously reported cardiomyopathy-associated variants in recently published exome data. We searched for reported missense and nonsense variants in the NHLBI GO Exome Sequencing Project (ESP) containing exome data from 6500 individuals. In ESP, we identified 94 out of 687 (14%) variants previously associated with HCM, 58 out of 337 (17%) variants associated with DCM, and 38 out of 209 (18%) associated with ARVC...
Calculation of crack stress density of cement base materials
Directory of Open Access Journals (Sweden)
Chun-e Sui
2018-01-01
Full Text Available In this paper, the fracture load of cement paste with different water-cement ratios and different mineral admixtures, including fly ash, silica fume and slag, is obtained through experiments. The three-dimensional fracture surface is reconstructed and the three-dimensional effective area of the fracture surface is calculated, yielding the effective fracture stress density of the different cement pastes. The results show that a polynomial function can accurately describe the relationship between the three-dimensional total area and the tensile strength.
DEFF Research Database (Denmark)
Nascimento, Marcelle M; Gordan, Valeria V; Qvist, Vibeke
2010-01-01
The authors conducted a study to identify and quantify the reasons used by dentists in The Dental Practice-Based Research Network (DPBRN) for placing restorations on unrestored permanent tooth surfaces and the dental materials they used in doing so....
Freeway travel speed calculation model based on ETC transaction data.
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
Real-time traffic flow operating conditions on freeways have gradually become critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new method to estimate freeway travel speed. First, the paper analyzes the structure of ETC transaction data and presents the data preprocessing procedure. Then, a dual-level travel speed calculation model is established for different sample sizes. In order to ensure a sufficient sample size, ETC data of different enter-leave toll plaza pairs which contain more than one road segment are used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speed are introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrate that the average relative error was about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model helps to raise the level of freeway operation monitoring and freeway management, as well as to provide useful information for freeway travelers.
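The basic estimator underlying such a model can be sketched as follows; the segment length, transaction times, and the value of the reduction coefficient α are illustrative assumptions, and the reliability weighting θ of individual samples is omitted for brevity:

```python
# Segment travel speed from enter/leave ETC transactions (sketch). The
# reduction coefficient alpha discounts toll-plaza delays; its value here
# is an assumption, not a calibrated one.

def segment_speed_kmh(distance_km, t_enter_s, t_leave_s, alpha=0.9):
    """Travel speed (km/h) of one vehicle over a segment, scaled by alpha."""
    travel_h = (t_leave_s - t_enter_s) / 3600.0
    return alpha * distance_km / travel_h

def mean_speed(records, alpha=0.9):
    """Average speed over sample vehicles; records are (km, t_enter_s, t_leave_s)."""
    speeds = [segment_speed_kmh(d, t0, t1, alpha) for d, t0, t1 in records]
    return sum(speeds) / len(speeds)

# Two vehicles traversing the same 30 km segment in 20 and 22.5 minutes.
records = [(30.0, 0.0, 1200.0), (30.0, 0.0, 1350.0)]
v = mean_speed(records)
```

In the full model this per-vehicle estimate would additionally be weighted by θ and aggregated per road segment across enter-leave plaza pairs.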
International Nuclear Information System (INIS)
Morrison, Hali; Menon, Geetha; Sloboda, Ron
2016-01-01
Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6-simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE-calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6-simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.
Many-body calculations with deuteron based single-particle bases and their associated natural orbits
Puddu, G.
2018-06-01
We use the recently introduced single-particle states obtained from localized deuteron wave functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for ¹⁰B, ¹⁶O and ²⁴Mg, employing the bare NNLOopt nucleon–nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
FragIt: a tool to prepare input files for fragment based quantum chemical calculations.
Directory of Open Access Journals (Sweden)
Casper Steinmann
Near-linear-scaling fragment-based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy, and they form an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools are needed that support them and help to set up reasonable input files, lowering the barrier of entry for non-experts. Previous tools rely on specific annotations in structure files, such as residues in PDB files, for automatic and successful fragmentation. We present a general fragmentation methodology and accompanying tools called FragIt to help set up these calculations. FragIt uses the SMARTS language to locate chemically appropriate fragments in large structures and is applicable to the fragmentation of any molecular system given suitable SMARTS patterns. We present SMARTS fragmentation patterns for proteins, DNA and polysaccharides, specifically for D-galactopyranose for use in cyclodextrins. FragIt is used to prepare input files for the Fragment Molecular Orbital method in the GAMESS program package, but can easily be extended to other computational methods.
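FragIt's core idea, matching a pattern that marks cuttable bonds and splitting the structure there, can be illustrated with a deliberately simplified stand-in: a regex on a SMILES-like string plays the role of a SMARTS match on a molecular graph. The pattern, molecule, and cut rule below are illustrative only, not FragIt's actual patterns:

```python
import re

# Toy stand-in for SMARTS-driven fragmentation: a regex locates the
# amide (peptide) linkage, and the chain is cut between the carbonyl
# carbon and the amide nitrogen.
PEPTIDE_BOND = re.compile(r"C\(=O\)N")

def fragment(smiles: str):
    """Split a chain-like string at each matched cut bond."""
    fragments, last = [], 0
    for m in PEPTIDE_BOND.finditer(smiles):
        cut = m.start() + len("C(=O)")  # cut after the carbonyl group
        fragments.append(smiles[last:cut])
        last = cut
    fragments.append(smiles[last:])
    return fragments

chain = "NCC(=O)NCC(=O)NCC(=O)O"  # glycine tripeptide, simplified
print(fragment(chain))  # three residue-sized fragments
```

A real implementation operates on the molecular graph, so a match is a set of atoms rather than a substring, but the locate-then-cut structure is the same.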
Yang, Shan; Tong, Xiangqian
2016-01-01
Power flow calculation and short-circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. This paper analyzes the similarity of the equivalent models for inverter-based distributed generation under normal and fault conditions of the distribution network, and the differences between power flow and short-circuit calculations. Then an integrated power flow and short circuit calculation method for distribution network with inverte...
QED Based Calculation of the Fine Structure Constant
Energy Technology Data Exchange (ETDEWEB)
Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-13
Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ². This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.
Grid-based electronic structure calculations: The tensor decomposition approach
Energy Technology Data Exchange (ETDEWEB)
Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)
2016-05-01
We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations, based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows the use of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
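The linear scaling in the 1D grid size can be seen with a rank-1 example: a Gaussian orbital factorizes exactly along the coordinate axes, so three length-n vectors replace an n³ grid (the grid size and function here are illustrative):

```python
import numpy as np

# A Gaussian orbital is exactly separable (rank-1), illustrating why
# low-rank tensor formats let the grid size n enter the storage
# linearly (3n numbers) instead of cubically (n^3).
n = 64
x = np.linspace(-5.0, 5.0, n)
gx = np.exp(-x**2)                      # one 1D factor, reused per axis

full = gx[:, None, None] * gx[None, :, None] * gx[None, None, :]
low_rank_storage = 3 * n                # three 1D factors
full_storage = n**3

# reconstruct from the factors and check it matches the full tensor
recon = np.einsum('i,j,k->ijk', gx, gx, gx)
print(np.allclose(full, recon), full_storage // low_rank_storage)
```

General orbitals are not exactly separable, which is where the controlled low-rank approximation (rank r instead of 1) comes in.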
Chen, Tingting; Hedman, Lea; Mattila, Petri S.; Jartti, Laura; Jartti, Tuomas; Ruuskanen, Olli; Söderlund-Venermo, Maria; Hedman, Klaus
2012-01-01
Biotin is an essential vitamin that binds streptavidin or avidin with high affinity and specificity. As biotin is a small molecule that can be linked to proteins without affecting their biological activity, biotinylation is applied widely in biochemical assays. In our laboratory, IgM enzyme immunoassays (EIAs) of µ-capture format have been set up against many viruses, using as antigen biotinylated virus-like particles (VLPs) detected by horseradish peroxidase-conjugated streptavidin. We recently encountered one serum sample reacting with the biotinylated VLP but not with the unbiotinylated one, suggesting the occurrence of biotin-reactive antibodies in human sera. In the present study, we searched the general population (612 serum samples from adults and 678 from children) for IgM antibodies reactive with biotin and developed an indirect EIA for quantification of their levels and assessment of their seroprevalence. These IgM antibodies were present in 3% of adults regardless of age, but were rarely found in children. The adverse effects of biotin IgM on biotinylation-based immunoassays were assessed with four in-house and one commercial virus IgM EIAs, showing that biotin IgM do cause false positives. Biotin cannot bind IgM and streptavidin or avidin simultaneously, suggesting that these biotin-interactive compounds compete for a common binding site. In competitive inhibition assays, the affinities of the biotin IgM antibodies ranged from 2.1×10⁻³ to 1.7×10⁻⁴ mol/L. This is the first report of biotin antibodies found in humans, providing new information on biotinylation-based immunoassays as well as new insights into the biomedical effects of vitamins. PMID:22879954
UAV-based NDVI calculation over grassland: An alternative approach
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers such as MODIS, with moderate ground resolution down to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly well suited to be mounted on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. We therefore propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum owing to removal of its internal infrared-blocking filter, with a mounted optical filter additionally blocking all wavelengths below 700 nm; and (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
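Once the two cameras' images have been converted to co-registered reflectance maps, the NDVI computation itself is the standard band ratio; a minimal sketch (with synthetic reflectance values) is:

```python
import numpy as np

# NDVI from co-registered NIR and red reflectance maps, as produced by
# the two-camera constellation described above (values are synthetic).
def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)  # guard 0/0
    return out

nir = np.array([[0.50, 0.40], [0.30, 0.20]])
red = np.array([[0.10, 0.10], [0.10, 0.20]])
print(ndvi(nir, red))  # values near +1 indicate dense vegetation
```

The practical difficulty lies upstream of this formula: radiometric calibration and band separation of the consumer cameras, which is what the comparisons against the hyperspectral camera assess.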
Keipert, Peter E
2017-01-01
Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.
Simulation and analysis of main steam control system based on heat transfer calculation
Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai
2018-05-01
In this paper, a 300 MW boiler of a thermal power plant is studied. MATLAB was used to write a program that calculates the heat transfer between the main steam and the boiler flue gas, and the amount of spray water required to keep the main steam temperature at its target value. The heat transfer calculation program was then introduced into the Simulink simulation platform, yielding a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis of the main steam temperature, but also adapts to boiler load changes.
The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.
Youn, Ahrim; Wang, Shuang
2018-01-01
Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.
Validation of KENO-based criticality calculations at Rocky Flats
International Nuclear Information System (INIS)
Felsher, P.D.; McKamy, J.N.; Monahan, S.P.
1992-01-01
In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS-8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k_eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.
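The global part of such a validation reduces to computing the code's bias against critical benchmarks and folding it, conservatively, into an upper k_eff limit. A minimal sketch follows; the margin and k_eff values are illustrative, not the Rocky Flats numbers:

```python
# Bias is the mean deviation of calculated k_eff from critical
# benchmark experiments (k_eff = 1.0); an upper subcritical limit
# subtracts an administrative margin and credits only a conservative
# (negative) bias, never a positive one.
def bias(calculated_keff):
    return sum(k - 1.0 for k in calculated_keff) / len(calculated_keff)

def upper_limit(calculated_keff, admin_margin=0.05):
    b = bias(calculated_keff)
    return 1.0 - admin_margin + min(b, 0.0)

keffs = [0.998, 1.002, 0.995, 0.999]   # illustrative benchmark results
print(round(bias(keffs), 4), round(upper_limit(keffs), 4))
```

Real validations also account for the statistical uncertainty of the bias and the applicability range of the benchmarks, which this sketch omits.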
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E-type glasses, low-expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often overestimated at higher temperatures, owing to alkali and boron oxide evaporation during the measurement and glass preparation; this includes data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data for borosilicate glasses are reported too high owing to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R² = 0.985-0.989. The prediction 95% confidence interval for glass in mass production depends largely on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
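A complete viscosity curve of the kind the model predicts is commonly parameterized by the Vogel-Fulcher-Tammann (VFT) equation, log10 η = A + B/(T - T0). The sketch below fits VFT to synthetic data by a grid scan over T0 with linear least squares for A and B; the constants and data are illustrative, and this is generic VFT fitting, not the paper's composition-based global model:

```python
import numpy as np

# VFT equation: log10(viscosity) = A + B / (T - T0)
def vft(T, A, B, T0):
    return A + B / (T - T0)

A_true, B_true, T0_true = -2.5, 4500.0, 500.0   # illustrative constants
T = np.array([800.0, 1000.0, 1200.0, 1400.0])    # temperatures in K
logeta = vft(T, A_true, B_true, T0_true)         # synthetic "data"

# At a fixed trial T0 the model is linear in A and B, so scan T0 on a
# grid and solve a linear least-squares problem at each step.
best = None
for T0 in np.arange(400.0, 600.0, 1.0):
    X = np.column_stack([np.ones_like(T), 1.0 / (T - T0)])
    coef, *_ = np.linalg.lstsq(X, logeta, rcond=None)
    err = float(np.sum((X @ coef - logeta) ** 2))
    if best is None or err < best[0]:
        best = (err, T0, coef)
_, T0_fit, (A_fit, B_fit) = best
print(round(T0_fit), round(A_fit, 2), round(B_fit))
```

The statistical model in the paper goes one level up: it predicts such curve parameters (or isokom temperatures) from the glass composition.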
Environment-based pin-power reconstruction method for homogeneous core calculations
International Nuclear Information System (INIS)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-01-01
Core calculation schemes are usually based on a classical two-step approach comprising assembly and core calculations. In the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. The methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3×3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction on every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction, insofar as it is consistent with the core loading pattern. (authors)
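The basic pin-power reconstruction step, redistributing a homogeneous nodal power over pins using heterogeneous form factors, can be sketched as follows (the form-factor values are illustrative; the paper's contribution is making such factors environment-dependent rather than infinite-lattice):

```python
import numpy as np

# A homogeneous core solve yields one power per assembly node; pin
# form factors from lattice calculations redistribute it over pins.
def reconstruct_pin_power(assembly_power, form_factors):
    ff = form_factors / form_factors.mean()   # renormalise to mean 1
    return (assembly_power / ff.size) * ff    # per-pin power map

# 3x3 toy "assembly" with a hotter centre pin (values illustrative)
ff = np.array([[1.0, 1.0, 1.0],
               [1.0, 1.8, 1.0],
               [1.0, 1.0, 1.0]])
pin_powers = reconstruct_pin_power(900.0, ff)
print(pin_powers.sum())  # total assembly power is conserved
```

In the environment-based scheme, `ff` would be recomputed for the actual neighbouring assemblies (e.g. a MOX assembly surrounded by UOX) instead of taken from an infinite-lattice calculation.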
Directory of Open Access Journals (Sweden)
А. С. Трякина
2017-10-01
The article describes the selection of a sustainable technological flow chart for water treatment, developed from scientifically based calculated values of the quality indicators of the water supplied to the treatment facilities. In accordance with the previously calculated values of the source-water quality indicators, the main purification facilities are selected. A more sustainable flow chart for the current water quality of the Seversky Donets-Donbass channel is two-stage filtering with contact prefilters and high-rate filters. The article proposes a set of measures to reduce permanganate oxidizability as a water quality indicator; the most suitable for this purpose is sorption purification, filtering the water through granular activated carbon. Increased water hardness is also quite topical; the method of ion exchange on sodium-cation filters was chosen to reduce it. Reagents for the disinfection of water were also evaluated. As a result, sodium hypochlorite was selected for water treatment; it has several advantages over chlorine and, unlike ozone, retains the necessary aftereffect. A technological flow chart is proposed with two-stage purification on contact prefilters and two-layer high-rate filters (granular activated carbon over quartz sand), disinfection with sodium hypochlorite, and softening of part of the water on sodium-cation exchange filters. With any fluctuations in the quality of the source water, this flow chart is able to provide purified water that meets the requirements of the current sanitary-hygienic standards. In accordance with the developed flow chart, guidelines and activities for the reconstruction of the existing Makeevka Filtering Station were identified. The recommended flow chart uses more compact and less costly facilities, as well as additional measures to reduce those water quality indicators, the values of which previously were in
Fischer, Alexander H; Wang, Timothy S; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L
2016-08-01
Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit ultraviolet exposure. We sought to determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (CI), taking into account the complex survey design. Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% vs 27.0%; aPOR 1.41; 95% CI 1.16-1.71), long sleeves (20.5% vs 7.7%; aPOR 1.55; 95% CI 1.21-1.98), a wide-brimmed hat (26.1% vs 10.5%; aPOR 1.52; 95% CI 1.24-1.87), and sunscreen (53.7% vs 33.1%; aPOR 2.11; 95% CI 1.73-2.59), but did not have significantly lower odds of recent sunburn (29.7% vs 40.7%; aPOR 0.95; 95% CI 0.77-1.17). Among those with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Self-reported cross-sectional data and unavailable information quantifying regular sun exposure are limitations. Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
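The reported aPORs come from models adjusted for covariates and the complex survey design; the crude prevalence odds ratio computable directly from the reported percentages differs from the adjusted value, as this sketch shows (the counts are reconstructed from the published prevalences for illustration only):

```python
# Crude (unadjusted) prevalence odds ratio from a 2x2 table.
def prevalence_odds_ratio(exposed_cases, exposed_noncases,
                          unexposed_cases, unexposed_noncases):
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Frequent shade use: 44.3% of 758 adults with previous NMSC vs
# 27.0% of 34,161 without (counts reconstructed from the percentages).
with_nmsc = 0.443 * 758
without = 0.270 * 34161
por = prevalence_odds_ratio(with_nmsc, 758 - with_nmsc,
                            without, 34161 - without)
print(round(por, 2))  # crude POR; the adjusted aPOR reported is 1.41
```

The gap between the crude value and the adjusted 1.41 illustrates why the covariate- and design-adjusted estimates, not raw prevalences, are reported.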
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)
2016-05-15
In this paper, an analytical method based on thick-walled theory is studied to calculate the stress and strain of ATF cladding. To prescribe the boundary conditions of the analytical method, two algorithms were employed, the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established, and the stress components of the method were compared with those of the equivalent FE model. One of the promising ATF concepts is the coated cladding, which offers advantages such as a high melting point, a high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of the coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for the simulation of stress and strain of the coated cladding should be developed, because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory. According to thin-walled theory, the radial stress is defined as zero, but this assumption is not suitable for ATF cladding because the radial stress is not negligible in this case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
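The thick-walled (Lamé) solution underlying this kind of analysis gives closed-form radial and hoop stresses for a pressurized cylinder; a sketch of the textbook formulas follows. This is the generic Lamé result, not the FRACAS-coupled implementation, and the radii and pressures are illustrative:

```python
# Lame thick-walled-cylinder stresses.
# r_i, r_o: inner/outer radii (m); p_i, p_o: inner/outer pressures (Pa)
def lame_stresses(r, r_i, r_o, p_i, p_o):
    a2, b2 = r_i**2, r_o**2
    c = (p_i * a2 - p_o * b2) / (b2 - a2)
    d = (p_i - p_o) * a2 * b2 / (b2 - a2)
    sigma_r = c - d / r**2       # radial stress (nonzero, unlike thin-wall)
    sigma_t = c + d / r**2       # hoop (tangential) stress
    return sigma_r, sigma_t

# boundary check: radial stress equals -p at each pressurised surface
# (cladding-like dimensions and pressures, purely illustrative)
sr_i, _ = lame_stresses(4.18e-3, 4.18e-3, 4.75e-3, 15.5e6, 0.1e6)
sr_o, _ = lame_stresses(4.75e-3, 4.18e-3, 4.75e-3, 15.5e6, 0.1e6)
print(round(sr_i / 1e6, 3), round(sr_o / 1e6, 3))  # -15.5 -0.1 MPa
```

The nonzero radial stress across the wall is exactly the term the thin-walled assumption in FRACAS discards, which is why the thick-walled form matters for coated cladding.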
International Nuclear Information System (INIS)
Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng
2011-01-01
Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, the 3D-density correction improves on the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.
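The superposition structure that makes FSPB algorithms map well onto GPUs, each voxel's dose being an independent sum of per-beamlet kernel contributions, can be sketched in a toy 2D form (the Gaussian kernel, beamlet positions, and weights are illustrative; a real engine uses measured or Monte Carlo kernels plus density corrections):

```python
import numpy as np

# Toy finite-size pencil-beam superposition on a 2D grid: the total
# dose at each point is the sum of per-beamlet lateral kernels.
def dose_map(shape, beamlets, sigma=2.0):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dose = np.zeros(shape)
    for (bx, by, w) in beamlets:
        r2 = (xx - bx) ** 2 + (yy - by) ** 2
        dose += w * np.exp(-r2 / (2 * sigma**2))  # Gaussian lateral kernel
    return dose

# three beamlets of an (illustrative) intensity-modulated field
beamlets = [(10, 16, 1.0), (16, 16, 0.8), (22, 16, 1.0)]
d = dose_map((32, 32), beamlets)
print(d.shape, float(d.max()) > 0.8)
```

Because every voxel's sum is independent, the outer loop parallelizes trivially across GPU threads, which is the property the paper's engine exploits.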
Directory of Open Access Journals (Sweden)
Luis F López-Cortés
Significant controversy still exists about ritonavir-boosted protease inhibitor monotherapy (mtPI/rtv) as a simplification strategy, which until now has been used to treat patients who have not experienced previous virological failure (VF) while on protease inhibitor (PI)-based regimens. We evaluated the effectiveness of two mtPI/rtv regimens in an actual clinical practice setting, including patients who had experienced previous VF with PI-based regimens. This retrospective study analyzed 1060 HIV-infected patients with undetectable viremia who were switched to lopinavir/ritonavir or darunavir/ritonavir monotherapy. In cases in which the patient had previously experienced VF while on a PI-based regimen, the absence of major HIV protease resistance mutations to lopinavir or darunavir, respectively, was mandatory. The primary endpoint was the percentage of participants with virological suppression after 96 weeks according to intention-to-treat analysis (non-complete/missing = failure). A total of 1060 patients were analyzed, including 205 with previous VF while on PI-based regimens, 90 of whom were on complex therapies due to extensive resistance. The rates of treatment effectiveness (intention-to-treat analysis) and virological efficacy (on-treatment analysis) at week 96 were 79.3% (CI95: 76.8-81.8) and 91.5% (CI95: 89.6-93.4), respectively. No relationships were found between VF and earlier VF while on PI-based regimens, the presence of major or minor protease resistance mutations, the previous time on viral suppression, CD4+ T-cell nadir, or HCV coinfection. Genotypic resistance tests were available for 49 of the 74 patients with VF, and only four patients presented new major protease resistance mutations. Switching to mtPI/rtv achieves sustained virological control in most patients, even in those with previous VF on PI-based regimens, as long as no major resistance mutations are present for the administered drug.
Energy Technology Data Exchange (ETDEWEB)
Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
KAERI is performing research to calculate coefficients of decommissioning work-unit productivity, in order to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the Decommissioning Work-unit Productivity Calculation System (DEWOCS). In particular, KAERI bases its decommissioning cost data on a work breakdown structure (WBS) code built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through a mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and describes the mapping of decommissioning target facilities within the calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, determining the cost of decommissioning an NPP facility is difficult because of the number of variables involved, such as the material, size, and radiological conditions of the target facility.
Torrallardona, D; Andrés-Elias, N; López-Soria, S; Badiola, I; Cerdà-Cuéllar, M
2012-12-01
A trial was conducted to evaluate the effect of different cereals on the performance, gut mucosa, and microbiota of weanling pigs with or without previous access to creep feed during lactation. A total of 108 newly weaned pigs (7.4 kg BW; 26 d of age; half with and half without creep feed) were used. Piglets were distributed by BW into 36 pens according to a 2 × 6 factorial arrangement of treatments, with previous access to creep feed (with or without) and cereal source in the experimental diet [barley (Hordeum vulgare), rice (Oryza sativa)-wheat (Triticum aestivum) bran, corn (Zea mays), naked oats (Avena sativa), oats, or rice] as the main factors. Pigs were offered the experimental diets for 21 d and performance was monitored. At day 21, 4 piglets from each treatment were killed and sampled for the histological evaluation of jejunal mucosa and the study of ileal and cecal microbiota by RFLP. The Manhattan distances between RFLP profiles were calculated and intragroup similarities (IGS) were estimated for each treatment. An interaction between cereal source and previous creep feeding was observed for ADFI (P creep feeding increased ADFI for the rice-wheat bran diet, it reduced it for naked oats. No differences in mucosal morphology were observed, except for deeper crypts in pigs that did not have previous access to creep feed (P creep feeding and cereal was also observed for the IGS of the cecal microbiota at day 21 (P creep feed reduced IGS in the piglets fed oats or barley, but no differences were observed for the other cereal sources. It is concluded that the effect of creep feeding during lactation on the performance and the microbiota of piglets after weaning is dependent on the nature of the cereal in the postweaning diet.
International Nuclear Information System (INIS)
Burger, B.
1991-07-01
This system (THEXSYST) is used for the control, analysis and presentation of thermal-hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with a user interface, a data base, and a simulation program, and it uses techniques available in RSYST. A knowledge base, created to control the simulation of pressurized water reactors, covers both the steady-state calculation and the transient calculation in the depressurization domain following a small-break loss-of-coolant accident. The methods developed are tested in a simulation using RELAP5/Mod2. The application of knowledge-base techniques proves to be a helpful tool for supporting existing solutions, especially in graphical analysis. (orig./HP) [de
Directory of Open Access Journals (Sweden)
Shan Yang
2016-01-01
Power flow calculation and short-circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. This paper analyzes the similarity of the equivalent models of inverter-based distributed generation under normal and fault conditions of the distribution network, and the differences between power flow and short-circuit calculation. An integrated power flow and short-circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method models the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation; the low-voltage ride-through capability of the inverter-based distributed generation can be taken into account as well. Finally, power flow and short-circuit current calculations are performed on a 33-bus distribution network. The results of the proposed method are compared with those of the traditional method and of simulation, verifying the effectiveness of the integrated method.
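As a sketch of the Iθ-bus idea, the snippet below runs a backward/forward-sweep power flow on a hypothetical 4-bus radial feeder in which the DG at the end bus is modelled as a constant-current injection whose angle tracks the local voltage. All impedances, loads, and the DG current magnitude are made-up per-unit values, and the paper's integrated short-circuit calculation is not reproduced here:

```python
import cmath

Z = [0.02 + 0.04j] * 3                               # line impedances (pu), bus k -> k+1
S_LOAD = [0j, 0.1 + 0.05j, 0.1 + 0.05j, 0.2 + 0.1j]  # bus loads (pu)
I_DG = 0.15                                          # DG current magnitude at bus 3 (pu)

V = [1.0 + 0j] * 4                                   # slack bus 0 held at 1.0 pu
for _ in range(100):
    # bus currents drawn by the loads: I = (S/V)* = conj(S)/conj(V)
    I = [(S_LOAD[k] / V[k]).conjugate() for k in range(4)]
    # Itheta model: DG injects a constant-magnitude current aligned with V[3]
    I[3] -= I_DG * cmath.exp(1j * cmath.phase(V[3]))
    # backward sweep: accumulate branch currents toward the slack bus
    J = [0j] * 3
    J[2] = I[3]
    J[1] = I[2] + J[2]
    J[0] = I[1] + J[1]
    # forward sweep: update voltages from the slack bus outward
    V_new = [1.0 + 0j] * 4
    for k in range(3):
        V_new[k + 1] = V_new[k] - Z[k] * J[k]
    if max(abs(V_new[k] - V[k]) for k in range(4)) < 1e-12:
        V = V_new
        break
    V = V_new
```

With the DG partly supplying the end-bus load, the voltage drop along the feeder is reduced compared to the no-DG case, but the end-bus voltage still sits below the slack voltage for these load values.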
Wan'e, Wu; Zuoming, Zhu
2012-01-01
A practical scheme for selecting the characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant, based on a backpropagation neural network, was established, validated, and then used for prediction. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation rang...
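The backpropagation model in the abstract can be illustrated with a minimal one-hidden-layer network trained by stochastic gradient descent. Everything below is a toy: the "formulation parameter" and "combustion response" data are synthetic (y = x²), and the network size and learning rate are arbitrary choices, not the paper's configuration:

```python
import math
import random

random.seed(1)
# synthetic data: one normalized formulation parameter -> one combustion response
DATA = [(i / 10.0, (i / 10.0) ** 2) for i in range(11)]

H, LR = 4, 0.1                                # hidden width, learning rate
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

loss_before = mse()
for _ in range(2000):                         # epochs of per-sample SGD
    for x, t in DATA:
        h, y = forward(x)
        err = y - t                           # d(loss)/dy up to a factor of 2
        g_w2 = [err * h[j] for j in range(H)]
        g_w1 = [err * w2[j] * (1 - h[j] ** 2) * x for j in range(H)]  # tanh'
        g_b1 = [err * w2[j] * (1 - h[j] ** 2) for j in range(H)]
        for j in range(H):
            w2[j] -= LR * g_w2[j]
            w1[j] -= LR * g_w1[j]
            b1[j] -= LR * g_b1[j]
        b2 -= LR * err
loss_after = mse()
```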
19 CFR 351.405 - Calculation of normal value based on constructed value.
2010-04-01
19 CFR 351.405 (Title 19, Customs Duties, Vol. 3, revised 2010-04-01): Calculation of normal value based on constructed value. INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE, ANTIDUMPING AND COUNTERVAILING DUTIES; Calculation of Export Price, Constructed Export Price, Fair Value, and...
International Nuclear Information System (INIS)
Gasco, C.; Anton, M. P.; Ampudia, J.
2003-01-01
The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs
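For context, the most common unsupported-210Pb dating models reduce to the radioactive decay law: in the CRS ("constant rate of supply") model, the age of the layer at depth x is t(x) = (1/λ)·ln[A(0)/A(x)], where A(x) is the unsupported 210Pb inventory below depth x and λ = ln2/22.3 yr⁻¹. A minimal sketch with a purely hypothetical inventory profile (the report itself implements the models as spreadsheet macros):

```python
import math

HALF_LIFE_PB210 = 22.3               # years
LAM = math.log(2) / HALF_LIFE_PB210  # decay constant, yr^-1

# hypothetical cumulative unsupported 210Pb inventory (Bq m^-2)
# remaining below each sampled depth; the first entry is A(0), the total
inventory_below = [100.0, 60.0, 35.0, 20.0, 8.0]

a0 = inventory_below[0]
ages = [math.log(a0 / a) / LAM for a in inventory_below]  # CRS age at each depth
```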
Pencil kernel correction and residual error estimation for quality-index-based dose calculations
International Nuclear Information System (INIS)
Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael
2006-01-01
Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).
Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M
2007-02-21
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
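The S-factor tabulated in such studies is, by definition, the absorbed dose rate in a target region per unit activity in a source region. For a single emission it reduces to S = E·y·φ/m, with E the emission energy, y the yield per decay, φ the absorbed fraction, and m the target mass. A minimal sketch with hypothetical numbers (not values from the paper):

```python
MEV_TO_J = 1.602e-13  # joules per MeV

def s_factor(energy_mev, yield_per_decay, absorbed_fraction, target_mass_kg):
    """S (Gy Bq^-1 s^-1): energy absorbed in the target per decay, per unit mass."""
    return energy_mev * MEV_TO_J * yield_per_decay * absorbed_fraction / target_mass_kg

# hypothetical example: 0.5 MeV electron emitted once per decay,
# self-absorbed fraction 0.9, organ mass 1.2 g
s = s_factor(0.5, 1.0, 0.9, 1.2e-3)  # ~6.0e-11 Gy Bq^-1 s^-1
```

For a real radionuclide, the S-factor is the sum of such terms over the full emission spectrum, which is what the Monte Carlo tables above provide.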
Energy Technology Data Exchange (ETDEWEB)
Kang, M. Y.; Kim, J. H.; Choi, H. D.; Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
To calculate the full-energy (FE) absorption peak efficiency for arbitrary volume samples, we developed and verified the Effective Solid Angle (ESA) code. The procedure for the semi-empirical determination of the FE efficiency for arbitrary volume sources, together with the calculation principles and processes of the ESA code, is described in previous studies, in which the code was validated with an HPGe detector (relative efficiency 32%, n-type). In this study, we use HPGe detectors of different types and efficiencies in order to verify the performance of the ESA code for various detectors. We calculated the efficiency curve of a voluminous source and compared it with experimental data. We will carry out additional validation by measuring CRM volume sources of various media, volumes and shapes with detectors of different efficiencies and types, and in the near future we will account for the effect of the dead layer of the p-type HPGe detector and the coincidence summing correction technique.
Applications of thermodynamic calculations to Mg alloy design: Mg-Sn based alloy development
International Nuclear Information System (INIS)
Jung, In-Ho; Park, Woo-Jin; Ahn, Sang Ho; Kang, Dae Hoon; Kim, Nack J.
2007-01-01
Recently an Mg-Sn based alloy system has been investigated actively in order to develop new magnesium alloys which have a stable structure and good mechanical properties at high temperatures. Thermodynamic modeling of the Mg-Al-Mn-Sb-Si-Sn-Zn system was performed based on available thermodynamic, phase equilibria and phase diagram data. Using the optimized database, the phase relationships of the Mg-Sn-Al-Zn alloys with additions of Si and Sb were calculated and compared with their experimental microstructures. It is shown that the calculated results are in good agreement with experimental microstructures, which proves the applicability of thermodynamic calculations for new Mg alloy design. All calculations were performed using FactSage thermochemical software. (orig.)
Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation
Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team
2017-11-01
Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, the noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between the physical properties of the flow and the numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin
2016-02-01
Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as effluent flow rate in ml·kg⁻¹·h⁻¹. Unfortunately, most machines require other information when therapy is initiated, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment and deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, while prolonging filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml·kg⁻¹·h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose was reduced from 41.0 ml·kg⁻¹·h⁻¹ to 26.8 ml·kg⁻¹·h⁻¹, with reduced variability that was significantly closer to the aim of 25 ml·kg⁻¹·h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
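The calculator's core arithmetic can be sketched as follows. The 50/50 split of effluent between dialysate and replacement fluid, the haematocrit value, and the approximation FF ≈ replacement flow / plasma flow are simplifying assumptions for illustration; the published calculator's exact formulas are not reproduced here:

```python
def cvvhdf_settings(weight_kg, hct=0.30, dose_ml_kg_h=25.0, ff_max=0.15):
    """Return (effluent, dialysate, replacement) in ml/h and the minimum
    blood flow in ml/min that keeps the filtration fraction below ff_max."""
    effluent = dose_ml_kg_h * weight_kg           # prescribed dose, ml/h
    dialysate = replacement = effluent / 2.0      # assumed 50/50 split
    plasma_flow_min = replacement / ff_max        # ml/h, from FF = repl / plasma
    blood_flow_min = plasma_flow_min / (1.0 - hct) / 60.0  # ml/min of whole blood
    return effluent, dialysate, replacement, blood_flow_min

eff, dial, repl, qb_min = cvvhdf_settings(70.0)
# 70 kg patient: 1750 ml/h effluent, 875 ml/h each, blood flow >= ~139 ml/min
```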
A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses
Directory of Open Access Journals (Sweden)
Pei-Yuan Li
2015-05-01
This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low-pressure-stage centrifugal compressor in an MW-level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned: the outlet diameter of the vaneless diffuser has been reduced, and the original single-stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.
Model-based calculations of off-axis ratio of conic beams for a dedicated 6 MV radiosurgery unit
Energy Technology Data Exchange (ETDEWEB)
Yang, J. N.; Ding, X.; Du, W.; Pino, R. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, Methodist Hospital, Houston, Texas 77030 (United States)
2010-10-15
Purpose: Because the small-radius photon beams shaped by cones in stereotactic radiosurgery (SRS) lack lateral electronic equilibrium and a detector has a finite cross section, direct experimental measurement of dosimetric data for these beams can be subject to large uncertainties. As the dose calculation accuracy of a treatment planning system largely depends on how well the dosimetric data are measured during the machine's commissioning, there is a critical need for an independent method to validate measured results. Therefore, the authors studied model-based calculation as an approach to validate measured off-axis ratios (OARs). Methods: The authors previously used a two-component analytical model to calculate central axis dose and associated dosimetric data (e.g., scatter factors and tissue-maximum ratio) in a water phantom and found excellent agreement between the calculated and the measured central axis doses for small 6 MV SRS conic beams. The model was based on that of Nizin and Mooij ["An approximation of central-axis absorbed dose in narrow photon beams," Med. Phys. 24, 1775-1780 (1997)] but was extended to account for apparent attenuation, spectral differences between broad and narrow beams, and the need for stricter scatter dose calculations for clinical beams. In this study, the authors applied Clarkson integration to this model to calculate OARs for conic beams. OARs were calculated for selected cones with radii from 0.2 to 1.0 cm. To allow comparisons, the authors also directly measured OARs using stereotactic diode (SFD), microchamber, and film dosimetry techniques. The calculated results were machine-specific and independent of direct measurement data for these beams. Results: For these conic beams, the calculated OARs were in excellent agreement with the data measured using an SFD; the discrepancies in radii and in 80%-20% penumbra were each within 0.01 cm. Using SFD-measured OARs as the reference data, the
Calculation of parameters of radial-piston reducer based on the use of functional semantic networks
Directory of Open Access Journals (Sweden)
Pashkevich V.M.
2016-12-01
This article considers the calculation of the parameters of a radial-piston reducer using an approach based on functional semantic network technologies. The possibility of applying functional semantic networks to the calculation of radial-piston reducer parameters is examined, and semantic networks for calculating the mass of the radial-piston reducer are given.
International Nuclear Information System (INIS)
Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio
2001-01-01
An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment
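The matching step can be illustrated with a brute-force search over small shifts that maximizes the Pearson correlation between a previous and a current image (rotation omitted for brevity). The tiny integer "images" below are synthetic; in the reported system the best correlation value would then be compared against a threshold near 0.80:

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb) if va and vb else 0.0

def best_match(prev, curr, max_shift=2):
    """Highest correlation between the two images over small x/y shifts."""
    h, w = len(prev), len(prev[0])
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a, b = [], []
            for y in range(h):
                for x in range(w):
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        a.append(prev[y][x])
                        b.append(curr[y + dy][x + dx])
            best = max(best, pearson(a, b))
    return best

prev = [[(2 * x + 3 * y) % 7 for x in range(8)] for y in range(8)]
curr = [[prev[(y - 1) % 8][(x - 1) % 8] for x in range(8)] for y in range(8)]
score = best_match(prev, curr)   # same "patient", shifted by one pixel
```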
Chan, Emily Y Y; Kim, Jean H; Lin, Cherry; Cheung, Eliza Y L; Lee, Polly P Y
2014-06-01
Disaster preparedness is an important preventive strategy for protecting health and mitigating the adverse health effects of unforeseen disasters. A multi-site ethnic minority project (2009-2015) was set up to examine health and disaster preparedness related issues in remote, rural, disaster-prone communities in China. The primary objective of the study reported here is to examine whether previous disaster experience significantly increases household disaster preparedness levels in remote villages in China. A cross-sectional household survey was conducted in January 2011 in Gansu Province, in a predominantly Hui minority-based village. Factors related to disaster preparedness were explored using quantitative methods. Two focus groups were also conducted to provide additional contextual explanations for the quantitative findings of this study. The village household response rate was 62.4% (n = 133). Although previous disaster exposure was significantly associated with the perception of living in a high disaster risk area (OR = 6.16), only 10.7% of households possessed a disaster emergency kit. Of note, among households with members who had non-communicable diseases, 9.6% had prepared extra medications to sustain clinical management of their chronic conditions. This is the first study to examine disaster preparedness in an ethnic minority population in remote communities in rural China. Our results indicate the need for disaster mitigation education to promote preparedness in remote, resource-poor communities.
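Odds ratios like the OR = 6.16 quoted above come from a 2×2 exposure-outcome table: OR = (a·d)/(b·c), usually reported with a Wald 95% confidence interval. A sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed with/without outcome; c,d = unexposed with/without outcome."""
    or_est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_est) - z * se_log)
    hi = math.exp(math.log(or_est) + z * se_log)
    return or_est, lo, hi

# hypothetical counts: exposure = previous disaster experience,
# outcome = perceiving one's area as high disaster risk
or_est, lo, hi = odds_ratio_ci(30, 10, 20, 40)   # OR = (30*40)/(10*20) = 6.0
```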
Laparoscopy After Previous Laparotomy
Directory of Open Access Journals (Sweden)
Zulfo Godinjak
2006-11-01
Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who previously underwent one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients the Veres needle and the trocar were inserted in the umbilical region, i.e. a closed laparoscopy technique was used. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.
Evaluation of RSG-GAS Core Management Based on Burnup Calculation
International Nuclear Information System (INIS)
Lily Suparlina; Jati Susilo
2009-01-01
Evaluation of RSG-GAS Core Management Based on Burnup Calculation. Presently, U3Si2-Al dispersion fuel is used in the RSG-GAS core, which has passed its 60th cycle. At the beginning of each cycle the 5/1 fuel reshuffling pattern is used. Since the 52nd core, operators have not used the core fuel management computer code provided by the vendor for this activity; instead, they perform the calculation manually using Excel. To assess the accuracy of this calculation, core calculations were carried out using two 2-dimensional diffusion codes, Batan-2DIFF and SRAC. The beginning-of-cycle burnup fraction data were calculated from the 51st to the 60th core using Batan-EQUIL and SRAC COREBN. The analysis results show a disparity in the reactivity values of the two calculation methods. For the 60th core critical position, the Batan-2DIFF calculation gives a reduction of positive reactivity of 1.84% Δk/k, while the manual calculation gives an increase of positive reactivity of 2.19% Δk/k. The minimum shutdown margins for the stuck rod condition from the manual and Batan-3DIFF calculations are -3.35% Δk/k and -1.13% Δk/k, respectively, which means that both values meet the safety criterion, i.e. < -0.5% Δk/k. The Excel program can be used for burnup calculation, but a core management code is needed to reach higher accuracy. (author)
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
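The 2%/2 mm gamma criterion used for the comparisons above combines a dose-difference test and a distance-to-agreement test. A brute-force 1D sketch on a synthetic profile (the clinical analysis is 2D/3D, but the metric is the same; the profile and grid spacing below are made up):

```python
import math

def gamma_1d(ref, ev, dx_mm, dose_tol=0.02, dta_mm=2.0):
    """Global 1D gamma: for each reference point, minimize the combined
    dose-difference / distance metric over all evaluated points."""
    d_norm = dose_tol * max(ref)            # global dose criterion (2% of max)
    gammas = []
    for i, dr in enumerate(ref):
        g = min(math.hypot((de - dr) / d_norm, (j - i) * dx_mm / dta_mm)
                for j, de in enumerate(ev))
        gammas.append(g)
    return gammas

ref = [math.exp(-((i - 20) / 8.0) ** 2) for i in range(41)]  # synthetic profile
ev = [1.01 * d for d in ref]                                 # 1% global scaling
g = gamma_1d(ref, ev, dx_mm=1.0)
pass_rate = sum(v <= 1.0 for v in g) / len(g)                # fraction passing
```

A uniform 1% dose offset stays well inside the 2% global criterion, so every point passes here; the clinical passing rates quoted above come from the same per-point test applied voxel by voxel.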
Monte Carlo based electron treatment planning and cutout output factor calculations
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions, due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients, as well as to calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.
DEFF Research Database (Denmark)
Mattsson, T.R.; Wahnström, G.; Bengtsson, L.
1997-01-01
First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...
Directory of Open Access Journals (Sweden)
Xiaoqing Wei
2017-02-01
As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg-Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
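The parameter-fitting step can be sketched as follows. The two-parameter model form (an effectiveness curve ε = 1 − exp(−a·mᵇ)), the synthetic "measurements", and the coarse grid search standing in for Levenberg-Marquardt are all illustrative assumptions, not the paper's actual model or data:

```python
import math

def model(m_air, a, b):
    # hypothetical two-parameter cooling effectiveness model
    return 1.0 - math.exp(-a * m_air ** b)

# synthetic "measurements" generated from a = 0.8, b = 0.5 (noise-free)
DATA = [(m / 10.0, model(m / 10.0, 0.8, 0.5)) for m in range(1, 11)]

def sse(a, b):
    """Sum of squared errors of the model against the measurements."""
    return sum((model(m, a, b) - e) ** 2 for m, e in DATA)

# coarse grid search in place of Levenberg-Marquardt, for brevity
best = min(((sse(a / 100.0, b / 100.0), a / 100.0, b / 100.0)
            for a in range(50, 111, 2) for b in range(30, 71, 2)),
           key=lambda t: t[0])
fit_sse, fit_a, fit_b = best
```

With noise-free data the search recovers the generating parameters exactly; with real measurements, a damped least-squares routine such as Levenberg-Marquardt converges to the minimum-SSE parameters far more efficiently than a grid.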
Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques
International Nuclear Information System (INIS)
Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar
2012-01-01
Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In ROI mapping approach all structures were overridden with average HUs from planning CT. All techniques were benchmarked to the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in PTV D median below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases PTV coverage was underestimated by Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head–neck IMRT plans resulted in difference in PTV coverage up to 5%. Dose calculation with WAB and ROI techniques showed better agreement with pCT than conversion curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion curve-based methods for dose calculation on CBCT images.
Directory of Open Access Journals (Sweden)
Sysіuk Svitlana V.
2017-05-01
The article is aimed at highlighting the features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods to implement this. The objective of the study is to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision on the procedure for fee-based services by library institutions has been substantiated. Such a Provision would protect library institutions from errors in setting the fee for a paid service and would serve as an information source explaining it. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, the combination of calculation leverages with the development of a system of internal accounting, together with the use of its methodology, provides another effective way of improving the efficiency of library institutions' activity.
Calculation of marine propeller static strength based on coupled BEM/FEM
Directory of Open Access Journals (Sweden)
YE Liyu
2017-10-01
Full Text Available [Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low-order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transferred to the finite element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results with different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distributions are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into codes for theoretical and optimized propeller design, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.
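The Prandtl-Schlichting flat-plate formula mentioned for the viscous load has a simple closed form. A sketch of the viscous correction, with the patch area, speed, density, and Reynolds number as hypothetical inputs:

```python
import math

def cf_prandtl_schlichting(re):
    """Prandtl-Schlichting turbulent flat-plate friction coefficient:
    Cf = 0.455 / (log10 Re)^2.58"""
    return 0.455 / (math.log10(re) ** 2.58)

def viscous_load(rho, v, area, re):
    """Approximate viscous drag (N) on a blade surface patch (sketch):
    0.5 * rho * v^2 * area * Cf."""
    return 0.5 * rho * v**2 * area * cf_prandtl_schlichting(re)
```

For Re = 1e6 the coefficient is roughly 0.0045, the order of magnitude expected for model-scale propeller blades.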
International Nuclear Information System (INIS)
Santamarina, A.
1991-01-01
A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.
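Basic-restriction compliance ultimately rests on the point-SAR relation, which is straightforward once the FEM/MoM solver has produced the internal E-field. A sketch (the numbers below are illustrative, not from the study):

```python
def sar(sigma, e_rms, rho):
    """Point SAR (W/kg) from tissue conductivity sigma (S/m),
    RMS internal electric field e_rms (V/m), and density rho (kg/m^3):
    SAR = sigma * |E|^2 / rho."""
    return sigma * e_rms**2 / rho

# Example: muscle-like tissue, 10 V/m RMS internal field.
print(sar(0.5, 10.0, 1000.0))   # W/kg
```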
Energy Technology Data Exchange (ETDEWEB)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole-body counts, lung counts, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, it is often necessary to calculate them, because the published functions are provided only for a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, totally replaces the existing internal dosimetry models and relevant data, including intake retention and excretion functions, so a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.
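Intake retention functions of this kind arise from first-order compartment models, whose solution is a matrix exponential of the transfer-rate matrix. A minimal sketch with a hypothetical two-compartment chain (the rates are made up; real OIR biokinetic models have many more compartments), using eigendecomposition in place of MKL routines:

```python
import numpy as np

# Hypothetical two-compartment model (illustrative rates, 1/day):
# compartment 0 -> compartment 1 -> excretion.
k01, k1x = 0.5, 0.1
A = np.array([[-k01,  0.0],
              [ k01, -k1x]])

def retention(t):
    """Fraction retained in the body at time t (days) after a unit
    intake into compartment 0: R(t) = sum(expm(A*t) @ q0)."""
    w, V = np.linalg.eig(A)                      # A is diagonalizable here
    q0 = np.array([1.0, 0.0])
    qt = (V * np.exp(w * t)) @ np.linalg.solve(V, q0)
    return qt.real.sum()
```

Evaluating `retention` on a grid of bioassay times gives exactly the tabulated values a module like the one described would produce.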
International Nuclear Information System (INIS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki; Lee, Jong-Il; Kim, Jang-Lyul
2014-01-01
In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole-body counts, lung counts, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, it is often necessary to calculate them, because the published functions are provided only for a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, totally replaces the existing internal dosimetry models and relevant data, including intake retention and excretion functions, so a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.
Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test
International Nuclear Information System (INIS)
Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.
1995-01-01
Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) n. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of applying this methodology to nineteen selected variables calculated by ten participants are summarized here, and the comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)
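The core of the FFT-based methodology is an average-amplitude figure that normalises the spectrum of the calculation error by the spectrum of the experimental signal. A sketch of that metric (simplified; the Pisa methodology additionally weights frequencies and combines multiple variables):

```python
import numpy as np

def fft_accuracy(calc, exp):
    """Average amplitude AA of an FFT-based accuracy method:
    AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
    AA = 0 means perfect agreement; larger AA means worse accuracy."""
    calc, exp = np.asarray(calc, float), np.asarray(exp, float)
    err = np.abs(np.fft.rfft(calc - exp))
    ref = np.abs(np.fft.rfft(exp))
    return err.sum() / ref.sum()
```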
Rajabi, A; Dabiri, A
2012-01-01
Activity Based Costing (ABC) is a costing methodology that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
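The allocation step described, administrative costs flowing to operational activity centres in proportion to a cost driver and then divided by service volume, can be sketched with hypothetical figures:

```python
# Hypothetical numbers (not the hospital's data): allocate administrative
# cost to two operational activity centres by a cost driver (staff hours).
admin_cost = 90_000.0
driver = {"diagnostic": 600.0, "hospitalized": 900.0}     # staff hours
direct = {"diagnostic": 40_000.0, "hospitalized": 110_000.0}

total_driver = sum(driver.values())
full_cost = {c: direct[c] + admin_cost * driver[c] / total_driver
             for c in direct}

services = {"diagnostic": 2_000, "hospitalized": 500}      # service counts
unit_cost = {c: full_cost[c] / services[c] for c in services}
print(unit_cost)
```

The ABC unit cost thus reflects actual resource usage, whereas a tariff is a fixed administrative price.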
Directory of Open Access Journals (Sweden)
Wu Wan'e
2012-01-01
Full Text Available A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant, based on a backpropagation neural network, was established, validated, and then used to predict primary combustion characteristics. The results show that the calculation error of the burning rate is less than ±7.3%; in the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculated data is consistent with the experimental results.
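A one-hidden-layer backpropagation network of the kind used as the calculation model can be sketched with toy data standing in for formulation/burning-rate pairs (the synthetic target below is an assumption, not the paper's propellant data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 formulation-like inputs, one smooth scalar response.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.3 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2]).reshape(-1, 1)

# One hidden layer (tanh), trained by plain backpropagation.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)           # loss before training
for _ in range(2000):
    H, out = forward(X)
    d_out = 2 * (out - y) / len(X)         # dL/d(out), mean-squared error
    d_H = (d_out @ W2.T) * (1 - H ** 2)    # backprop through tanh
    W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_H;   b1 -= lr * d_H.sum(0)

_, out = forward(X)
loss = np.mean((out - y) ** 2)             # loss after training
```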
Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2017-05-15
The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
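The importance-sampling estimator itself is independent of the Kriging surrogate: a failure indicator times the likelihood ratio f/h, averaged over draws from the sampling density. A sketch on a toy limit state with a known answer (the true g stands in for the metamodel here):

```python
import numpy as np
from statistics import NormalDist

# Toy limit state g(x) = beta - x with x ~ N(0,1), so the exact failure
# probability is Phi(-beta). Only the IS estimator itself is sketched.
beta = 3.0
rng = np.random.default_rng(1)
n = 200_000

x = rng.normal(beta, 1.0, n)                   # sampling density h ~ N(beta, 1)
log_w = -0.5 * x**2 - (-0.5 * (x - beta)**2)   # log f(x) - log h(x)
w = np.exp(log_w)                              # likelihood ratio
pf = np.mean((x > beta) * w)                   # failure when g(x) < 0

exact = NormalDist().cdf(-beta)
```

Centering h near the limit state is what makes the estimator efficient; in the paper that centre comes from Markov-chain samples and the kernel density.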
a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely used in the study of petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has always been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between fractal dimensions and pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores, and the results are compared with fractal dimensions obtained by the box-counting algorithm. The test results also indicate a self-similar fractal range in sandstone when smaller pores are excluded.
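The scaling such methods exploit is the fractal form of the cumulative pore-size distribution, N(>r) ∝ r^(−Df), so Df falls out of a log-log slope. A synthetic check with a known dimension (illustrative data, not the sandstone samples):

```python
import numpy as np

# Synthetic cumulative pore-size distribution obeying N(>r) = (r_max/r)^Df.
true_df = 1.6
r = np.logspace(-2, 0, 20)           # pore radii (arbitrary units)
n_cum = (r.max() / r) ** true_df     # number of pores larger than r

# Slope of log N(>r) versus log r is -Df.
slope, _ = np.polyfit(np.log(r), np.log(n_cum), 1)
df_est = -slope
```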
A simple method for calculating power based on a prior trial.
Borm, G.F.; Bloem, B.R.; Munneke, M.; Teerenstra, S.
2010-01-01
OBJECTIVE: When an investigator wants to base the power of a planned clinical trial on the outcome of another trial, the latter study may not have been reported in sufficient detail to allow this. For example, when the outcome is a change from baseline, the power calculation requires the standard
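For the common case described, a planned two-arm trial powered from a prior trial's standard deviation, the normal-approximation power formula can be sketched as follows (a generic textbook formula, not the paper's refinement):

```python
from statistics import NormalDist

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison of means:
    power = Phi( delta / (sd * sqrt(2/n)) - z_{1-alpha/2} ),
    where delta is the true mean difference and sd the common SD
    (e.g. the SD of change from baseline taken from the prior trial)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = delta / (sd * (2.0 / n_per_group) ** 0.5)
    return NormalDist().cdf(ncp - z)
```

The point of the abstract is that `sd` is exactly the quantity a tersely reported prior trial may fail to provide.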
Directory of Open Access Journals (Sweden)
Chao Hu
2015-04-01
Full Text Available Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause error. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional quality assessment and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumption in excavation quality assessment with the laser scanning technology can be reduced by 70%–90%, as compared with the traditional method. The excavated volume calculated with the scanned data only slightly differs from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.
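The cross-section based volume calculation can be sketched with the classical average-end-area rule, the same idea the scanned cross-sections feed into (an assumed form; the paper's algorithm works directly on the laser point cloud):

```python
def excavated_volume(areas, spacing):
    """Average-end-area estimate of excavated volume from cross-sectional
    areas sampled at a constant spacing along the slope axis:
    V = sum over intervals of spacing * (A_i + A_{i+1}) / 2."""
    return sum(spacing * (a1 + a2) / 2.0
               for a1, a2 in zip(areas, areas[1:]))
```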
Fragment-based quantum mechanical calculation of protein-protein binding affinities.
Wang, Yaqian; Liu, Jinfeng; Li, Jinjin; He, Xiao
2018-04-29
The electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method has been successfully utilized for efficient linear-scaling quantum mechanical (QM) calculation of protein energies. In this work, we applied the EE-GMFCC method for calculation of binding affinity of Endonuclease colicin-immunity protein complex. The binding free energy changes between the wild-type and mutants of the complex calculated by EE-GMFCC are in good agreement with experimental results. The correlation coefficient (R) between the predicted binding energy changes and experimental values is 0.906 at the B3LYP/6-31G*-D level, based on the snapshot whose binding affinity is closest to the average result from the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) calculation. The inclusion of the QM effects is important for accurate prediction of protein-protein binding affinities. Moreover, the self-consistent calculation of PB solvation energy is required for accurate calculations of protein-protein binding free energies. This study demonstrates that the EE-GMFCC method is capable of providing reliable prediction of relative binding affinities for protein-protein complexes. © 2018 Wiley Periodicals, Inc.
Radial electromagnetic force calculation of induction motor based on multi-loop theory
Directory of Open Access Journals (Sweden)
HE Haibo
2017-12-01
Full Text Available [Objectives] In order to study the vibration and noise of induction motors, a method of radial electromagnetic force calculation is established on the basis of the multi-loop model. [Methods] Based on the method of calculating the air-gap magnetomotive force from the stator and rotor fundamental wave currents, analytic formulas are deduced for the air-gap magnetomotive force and radial electromagnetic force generated by any stator winding and rotor bar current. The multi-loop theory and the calculation method for the electromagnetic parameters of a motor are introduced, and a dynamic simulation model of an induction motor is built to obtain the currents of the stator windings and rotor bars and the calculation formula of the radial electromagnetic force. The radial electromagnetic force and vibration are then estimated. [Results] The calculated vibration acceleration frequency and amplitude of the motor are consistent with the experimental results. [Conclusions] The results and calculation method can support the low-noise design of converters.
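Once the air-gap flux density is known from the loop currents, the radial force density follows from the Maxwell stress tensor. A minimal sketch of that final step:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

def radial_force_density(b_gap):
    """Radial magnetic pressure (N/m^2) on the stator bore from the
    air-gap flux density, via the Maxwell stress tensor:
    p = B^2 / (2 * mu0)."""
    return b_gap**2 / (2.0 * MU0)
```

Evaluated over the air-gap field harmonics b(θ, t), this pressure wave is the excitation the vibration estimate is built on.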
Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene
Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.
2012-02-01
We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.
Ferreira, Natália Noronha; Perez, Taciane Alvarenga; Pedreiro, Liliane Neves; Prezotti, Fabíola Garavello; Boni, Fernanda Isadora; Cardoso, Valéria Maria de Oliveira; Venâncio, Tiago; Gremião, Maria Palmira Daflon
2017-10-01
This work aimed to develop a calcium alginate hydrogel as a pH-responsive delivery system for polymyxin B (PMX) sustained release through the vaginal route. Two samples of sodium alginate from different suppliers were characterized. The molecular weight and M/G ratio determined were, approximately, 107 kDa and 1.93 for alginate_S and 32 kDa and 1.36 for alginate_V. Polymer rheological investigations were further performed through the preparation of hydrogels. Alginate_V was selected for subsequent incorporation of PMX due to the acquisition of a pseudoplastic viscous system able to acquire a differential structure in a simulated vaginal microenvironment (pH 4.5). The PMX-loaded hydrogel (hydrogel_PMX) was engineered based on polyelectrolyte complexes (PECs) formation between alginate and PMX followed by crosslinking with calcium chloride. This system exhibited a morphology with variable pore sizes, ranging from 100 to 200 μm, and adequate syringeability. The hydrogel liquid uptake ability in an acid environment was minimized by the previous PECs formation. In vitro tests evidenced the hydrogels' mucoadhesiveness. PMX release was pH-dependent and the system was able to sustain the release up to 6 days. A burst release was observed at pH 7.4 and drug release was driven by an anomalous transport, as determined by the Korsmeyer-Peppas model. At pH 4.5, drug release correlated with the Weibull model and drug transport was driven by Fickian diffusion. The calcium alginate hydrogels engineered by the previous formation of PECs proved to be a promising platform for sustained release of cationic drugs through vaginal administration.
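The Korsmeyer-Peppas analysis mentioned is a straight-line fit on log-transformed release data, Mt/M∞ = k·t^n, where the exponent n diagnoses the transport mechanism. A synthetic sketch (the k and n values are assumed, not the hydrogel's):

```python
import numpy as np

# Synthetic release data following Mt/Minf = k * t^n; the fit recovers
# k and n from the linearised form log(Mt/Minf) = log k + n log t.
k_true, n_true = 0.12, 0.45
t = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)   # hours
frac = k_true * t ** n_true                            # fraction released

n_fit, log_k = np.polyfit(np.log(t), np.log(frac), 1)
k_fit = np.exp(log_k)
```

By convention the fit uses only the early portion of the curve (Mt/M∞ ≲ 0.6), where the power law is valid.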
Belihu, Fetene B; Small, Rhonda; Davey, Mary-Ann
2017-03-01
Variations in caesarean section (CS) between some immigrant groups and receiving country populations have been widely reported. Often, African immigrant women are at higher risk of CS than the receiving population in developed countries. However, evidence about subsequent mode of birth following CS for African women post-migration is lacking. The objective of this study was to examine differences in attempted and successful vaginal birth after previous caesarean (VBAC) for Eastern African immigrants (Eritrea, Ethiopia, Somalia and Sudan) compared with Australian-born women. A population-based observational study was conducted using the Victorian Perinatal Data Collection. Pearson's chi-square test and logistic regression analysis were performed to generate adjusted odds ratios for attempted and successful VBAC. Victoria, Australia. 554 Eastern African immigrants and 24,587 Australian-born eligible women with previous CS having singleton births in public care. 41.5% of Eastern African immigrant women and 26.1% Australian-born women attempted a VBAC with 50.9% of Eastern African immigrants and 60.5% of Australian-born women being successful. After adjusting for maternal demographic characteristics and available clinical confounding factors, Eastern African immigrants were more likely to attempt (OR adj 1.94, 95% CI 1.57-2.47) but less likely to succeed (OR adj 0.54 95% CI 0.41-0.71) in having a VBAC. There are disparities in attempted and successful VBAC between Eastern African origin and Australian-born women. Unsuccessful VBAC attempt is more common among Eastern African immigrants, suggesting the need for improved strategies to select and support potential candidates for vaginal birth among these immigrants to enhance success and reduce potential complications associated with failed VBAC attempt. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
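The crude (unadjusted) odds ratio behind such comparisons comes straight from the 2×2 table. The counts below are reconstructed approximately from the percentages in the abstract; the study's adjusted OR (1.94) differs because of confounder adjustment:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table with a Woolf 95% CI.
    a, b: outcome yes/no in group 1; c, d: outcome yes/no in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
    return or_, lo, hi

# ~41.5% of 554 immigrants vs ~26.1% of 24,587 Australian-born women
# attempted VBAC (approximate counts, for illustration only).
or_, lo, hi = odds_ratio(230, 324, 6417, 18170)
```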
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
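The 'override ratio' accuracy test reduces to simple arithmetic once the four dose metrics are computed. A sketch of the comparison, with hypothetical dose values:

```python
def override_ratio_error(d_cbct, d_cbct_bulk, d_ct, d_ct_bulk):
    """Dose-metric error (%) under the override-ratio assumption: the
    ratio of a heterogeneity-corrected metric to its bulk-density-assigned
    counterpart is taken as patient-constant, so the planning-CT ratio
    serves as gold standard for the CBCT ratio."""
    return 100.0 * ((d_cbct / d_cbct_bulk) / (d_ct / d_ct_bulk) - 1.0)

# Hypothetical mean target doses (Gy) for one patient:
err = override_ratio_error(60.6, 60.0, 60.3, 60.0)
```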
Two-dimensional core calculation research for fuel management optimization based on CPACT code
International Nuclear Information System (INIS)
Chen Xiaosong; Peng Lianghui; Gang Zhi
2013-01-01
The fuel management optimization process requires rapid assessment of candidate core layouts; commonly used methods include the two-dimensional diffusion nodal method, the perturbation method, and neural network methods. A two-dimensional loading pattern evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. An axial buckling term, introduced to simulate axial leakage, was searched in burnup sub-sections to correct the two-dimensional core diffusion calculation results. Meanwhile, in order to obtain better accuracy, the weight equivalent volume method for the control rod assembly cross-section was improved. (authors)
Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink
Directory of Open Access Journals (Sweden)
Shi Xiu Feng
2016-01-01
Full Text Available If the stability of a power system is destroyed, a large number of users will lose power, and the whole system may even collapse, with extremely serious consequences. Taking a single-machine infinite-bus system as an example, when a two-phase ground fault occurs at point f and the circuit breakers on both sides of the faulted line trip simultaneously, the transient stability of the system is analyzed by two methods, calculation and simulation, and their conclusions are consistent. The simulation analysis proves superior to the calculation analysis.
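For the single-machine infinite-bus case analysed, the hand-calculation route is the equal-area criterion. A sketch for the textbook special case where electrical output drops to zero during the fault (an assumption made for simplicity; a two-phase ground fault actually leaves some transfer capability):

```python
import math

def critical_clearing_angle(pm, pmax):
    """Equal-area criterion for a single-machine infinite-bus system whose
    electrical power is zero during the fault:
        cos(d_cr) = (pm/pmax) * (d_max - d_0) + cos(d_max),
    with d_0 = asin(pm/pmax) and d_max = pi - d_0 (angles in radians)."""
    d0 = math.asin(pm / pmax)
    dmax = math.pi - d0
    return math.acos((pm / pmax) * (dmax - d0) + math.cos(dmax))

# Example: mechanical power 0.8 pu, pre-fault power limit 2.0 pu.
dcr = critical_clearing_angle(0.8, 2.0)
```

Clearing the fault before the rotor angle reaches `dcr` keeps the machine in synchronism; the simulation route integrates the swing equation instead.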
Core physics design calculation of mini-type fast reactor based on Monte Carlo method
International Nuclear Information System (INIS)
He Keyu; Han Weishi
2007-01-01
An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR can satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)
Directory of Open Access Journals (Sweden)
Xu XX
2013-10-01
Full Text Available Xiao-xiao Xu,1 Bei Yan,2 Zhen-xing Wang,3 Yong Yu,1 Xiao-xiong Wu,2 Yi-zhuo Zhang1 1Department of Hematology, Tianjin Medical University Cancer Institute and Hospital, Tianjin Key Laboratory of Cancer Prevention and Therapy, Tianjin, 2Department of Hematology, First Affiliated Hospital of Chinese People's Liberation Army General Hospital, Beijing, 3Department of Stomach Oncology, Tianjin Medical University Cancer Institute and Hospital, Key Laboratory of Cancer Prevention and Therapy, Tianjin, People's Republic of China Abstract: Fludarabine-based regimens and CHOP (doxorubicin, cyclophosphamide, vincristine, prednisone)-like regimens with or without rituximab are the most common treatment modalities for indolent lymphoma. However, there is no clear evidence to date about which chemotherapy regimen should be the proper initial treatment of indolent lymphoma. More recently, the use of fludarabine has raised concerns due to its high number of toxicities, especially hematological toxicity and infectious complications. The present study aimed to retrospectively evaluate both the efficacy and the potential toxicities of the two main regimens (fludarabine-based and CHOP-like) in patients with previously untreated indolent lymphoma. Among a total of 107 patients assessed, 54 patients received fludarabine-based regimens (FLU arm) and 53 received CHOP or CHOPE (CHOP plus etoposide) regimens (CHOP arm). The results demonstrated that fludarabine-based regimens could induce significantly improved progression-free survival (PFS) compared with CHOP-like regimens. However, the FLU arm showed overall survival, complete response, and overall response rates similar to those of the CHOP arm. Grade 3–4 neutropenia occurred in 42.6% of the FLU arm and 7.5% of the CHOP arm. Age over 60 years and presentation of grade 3–4 myelosuppression were independent risk factors for infection, and the FLU arm had significantly
Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms
Directory of Open Access Journals (Sweden)
Gregorio de Miguel Casado
2007-01-01
Full Text Available This paper presents a method to improve the calculation of functions which demand a great amount of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.
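A concrete instance of computing transform values "under the scope of a recursive operation" is the Goertzel recurrence, which evaluates a single DFT bin with one real multiply per sample. It is shown here as an illustrative analogue, not as the paper's weighted-primitive scheme:

```python
import math
import cmath

def goertzel(x, k):
    """DFT bin X[k] of a length-N real sequence via the Goertzel
    recurrence s[n] = x[n] + 2*cos(w)*s[n-1] - s[n-2], w = 2*pi*k/N,
    finished with X[k] = exp(j*w)*s[N-1] - s[N-2]."""
    n = len(x)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return cmath.exp(1j * w) * s1 - s2

x = [0.0, 1.0, 2.0, 3.0]
X1 = goertzel(x, 1)
```

Only the running pair (s1, s2) must be stored per bin, which is what makes such recursive schemes hardware-friendly.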
Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2
DEFF Research Database (Denmark)
Hertz, Kristian Dahl
1997-01-01
The part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data. This chapter is not a part of the code.
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
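The full algorithm walks the ENSDF decay scheme, but the essence of a summing-out correction can be shown for an idealized two-step cascade: a peak loses counts whenever its gamma is detected together with a coincident partner, so the measured area must be scaled up by a factor depending on the partner's total efficiency. This is a simplified sketch, not the published multi-nuclide algorithm:

```python
def csf_summing_out(eps_total_partner, p_coincident=1.0):
    """Summing-out cascade summing factor for a full-energy peak whose
    gamma is in prompt coincidence (with probability p_coincident) with
    one partner transition of total detection efficiency eps_total_partner.

    Multiply the measured peak area by this factor to recover the true
    area. Idealized two-step cascade; angular correlations neglected.
    """
    return 1.0 / (1.0 - p_coincident * eps_total_partner)
```

For a close counting geometry with a partner total efficiency of 20%, the peak area must be corrected upward by a factor of 1.25 — which is why cascade summing matters most for high-efficiency, close geometries.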
Energy Technology Data Exchange (ETDEWEB)
Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)
1997-05-01
The method for generation of the input data for the calculations with OSPM is presented in this report. The described method, which is based on information provided by a questionnaire, will be used for model calculations of long term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed at a few locations where detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long term averages, whereas the uncertainties are high when short term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short and long term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimation of street width based on traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NOₓ and 30% for NO₂). (3) The street orientation and geometry are important for prediction of short term concentrations, but this importance diminishes for longer term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short term averages, but are less important for long term averages. The correlation between modelled and measured concentrations is good when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the comparison between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is quite unique in its use of a probabilistic model for a geometry with a great number of randomly distributed spherical fuels. With speed-up by vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
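The actual NND used by MCNP-CFP accounts for packing effects, but the idea of sampling the next sphere encounter along a flight path can be sketched for the simplest limiting case — fully random (Poisson) sphere centres — where the free path to the next sphere entry is exponential with an effective cross section nπR². This is an assumed simplification for illustration, not the code's NND model:

```python
import math
import random

def sample_distance_to_sphere(number_density, radius, rng=random.random):
    """Distance along a flight path to the next sphere entry, assuming
    fully random (Poisson-distributed) sphere centres.

    A ray of length s misses all spheres with probability
    exp(-n * pi * R^2 * s), so the entry distance is exponential with
    parameter sigma = n * pi * R^2 (a simplified stand-in for the
    nearest-neighbour distribution sampled in MCNP-CFP).
    """
    sigma = number_density * math.pi * radius ** 2
    return -math.log(rng()) / sigma
```

In the real code the sampled location also has to respect non-overlap with previously placed spheres, which is what the statistical NND model provides.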
An independent dose calculation algorithm for MLC-based stereotactic radiotherapy
International Nuclear Information System (INIS)
Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr
2007-01-01
We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC), for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of the commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static MLC field based inverse planning. The dose calculation utilizes a pencil-beam kernel. However, an explicit analytical integration results in a closed form for rectangular-shaped beamlets, defined by single leaf pairs. This approach reduces spatial integration to summation and leads to a simple method of determination of model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which allows fast calculation of two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS were used as input data for the algorithm. The calculated results were compared with the TPS and measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm x 0.6 cm (<6%) and some ion chamber measurements in a high gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments.
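The closed form for a rectangular beamlet can be made concrete for the simplest case of a single-Gaussian pencil kernel, where the 2D integral over a rectangle factorizes into products of error functions. This is an illustrative sketch under that assumed kernel — the published algorithm uses kernels fitted to commissioning beam data:

```python
import math

def beamlet_dose(x, y, rect, sigma, fluence=1.0):
    """Closed-form dose at (x, y) from a uniform rectangular beamlet
    rect = (x1, x2, y1, y2) with a single-Gaussian pencil kernel of
    width sigma: the convolution integral reduces to erf differences.
    """
    x1, x2, y1, y2 = rect
    s = sigma * math.sqrt(2.0)
    gx = 0.5 * (math.erf((x - x1) / s) - math.erf((x - x2) / s))
    gy = 0.5 * (math.erf((y - y1) / s) - math.erf((y - y2) / s))
    return fluence * gx * gy

def mlc_field_dose(x, y, rectangles, sigma):
    """Total dose: sum of the closed-form beamlets over all leaf-pair
    rectangles of all (static or dynamic) MLC segments."""
    return sum(beamlet_dose(x, y, r, sigma) for r in rectangles)
```

Because each beamlet is evaluated analytically, a 2D dose plane is just a double loop of cheap erf evaluations — this is the "integration reduced to summation" speed advantage described above.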
Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.
2014-01-01
The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS_Total = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω_U/Ω_N). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS_helix−sheet = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044
Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin
2018-05-01
Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points, followed by a data fitting computation. In addition, to ensure the calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account.
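The paper solves the field with a FEM so that inhomogeneous media can be handled; for the special case of a planar source in a homogeneous medium, the same "measured velocities in, field out" step can be sketched with the much simpler Rayleigh integral (an assumed alternative formulation, not the authors' method — names and parameters below are illustrative):

```python
import cmath
import math

def rayleigh_pressure(field_pt, src_pts, v_n, dS, freq, c=1500.0, rho=1000.0):
    """Acoustic pressure at field_pt radiated by a baffled planar source,
    from discrete laser-measured normal velocities v_n at points src_pts.

    Discretized Rayleigh integral:
        p = (j*omega*rho / 2*pi) * sum_i v_i * exp(-j*k*R_i) / R_i * dS
    Valid only for a planar source in a homogeneous medium.
    """
    k = 2.0 * math.pi * freq / c
    omega = 2.0 * math.pi * freq
    acc = 0j
    for (xs, ys, zs), v in zip(src_pts, v_n):
        R = math.dist(field_pt, (xs, ys, zs))
        acc += v * cmath.exp(-1j * k * R) / R
    return 1j * omega * rho / (2.0 * math.pi) * acc * dS
```

Each scanned point contributes a spherical wavelet weighted by its measured velocity; the FEM approach in the paper generalizes this to curved surfaces and spatially varying media.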
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
International Nuclear Information System (INIS)
Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-01-01
Highlights: • Propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods can avoid the generation of large-scale eigenvalue problems and thus reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation
Directory of Open Access Journals (Sweden)
Shengzhi Huang
2014-01-01
Full Text Available It is of great significance for the watershed management department to reasonably allocate water resources and ensure the sustainable development of river ecosystems. A key issue is the accurate calculation of instream ecological flow. In order to compute instream ecological flow precisely, flow variation is taken into account in this study. Moreover, the heuristic segmentation algorithm, which is suitable for detecting the mutation points of flow series, is employed to identify the change points. In addition, based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow based on the GEV distribution, which is well suited to the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, in which the average flow is replaced with the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
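The modified Tennant step can be sketched as follows. To stay dependency-free, the highest-frequency flow is estimated here as the mode of a histogram rather than the mode of a fitted GEV distribution, and the 10% Tennant fraction is an assumed setting — both are stand-ins for the paper's actual choices:

```python
def highest_frequency_flow(flows, bins=20):
    """Mode of a monthly flow series, estimated as the midpoint of the
    fullest histogram bin (a simple stand-in for the mode of a GEV
    distribution fitted to the series)."""
    lo, hi = min(flows), max(flows)
    w = (hi - lo) / bins
    counts = [0] * bins
    for q in flows:
        i = min(int((q - lo) / w), bins - 1)
        counts[i] += 1
    i_max = counts.index(max(counts))
    return lo + (i_max + 0.5) * w

def modified_tennant_minimum(flows, fraction=0.1):
    """Minimum instream ecological flow: Tennant's percentage applied to
    the highest-frequency flow instead of the mean flow (fraction=0.1
    mirrors Tennant's 10% 'poor' threshold -- an assumed setting)."""
    return fraction * highest_frequency_flow(flows)
```

Replacing the mean with the mode matters for skewed flow series, where a few floods inflate the mean far above the typically occurring flow.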
Directory of Open Access Journals (Sweden)
Marco Gonzalez
Full Text Available The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
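A common displacement-based SIF scheme (one plausible instance of the family discussed above, not necessarily the paper's exact formulation) is displacement extrapolation: the crack-opening displacement near the tip follows Δu ∝ K_I·√r, so an apparent K computed at several points behind the tip can be extrapolated to r = 0:

```python
import math

def k1_from_cod(E, nu, pairs):
    """Estimate K_I by displacement extrapolation (plane strain).

    pairs: (r, delta_u) crack-opening displacements at distances r
    behind the tip, e.g. taken from BEM nodal results. Uses the
    leading-order relation delta_u = (8*K_I/E') * sqrt(r/(2*pi)) with
    E' = E/(1-nu^2), then least-squares extrapolates the apparent
    K(r) linearly to r = 0.
    """
    Ep = E / (1.0 - nu ** 2)
    ks = [(r, Ep * du / 8.0 * math.sqrt(2.0 * math.pi / r)) for r, du in pairs]
    n = len(ks)
    sr = sum(r for r, _ in ks)
    sk = sum(k for _, k in ks)
    srr = sum(r * r for r, _ in ks)
    srk = sum(r * k for r, k in ks)
    b = (n * srk - sr * sk) / (n * srr - sr * sr)  # slope of K(r)
    return (sk - b * sr) / n                       # intercept = K_I
```

The extrapolation removes the higher-order terms that contaminate the apparent K at finite distances from the tip, which is why this scheme is competitive with contour-integral methods despite its simplicity.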
An automated Monte-Carlo based method for the calculation of cascade summing factors
Energy Technology Data Exchange (ETDEWEB)
Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.
2016-10-21
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.
Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa
Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F
2014-01-01
The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results allowed us to define an empirical rule to identify the transient currents that are responsible for an SEU. The SEU cross section can then be calculated for any kind of particle and any energy, with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by error in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
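The two VC models named above can be sketched directly. On a Gauss synthetic surface the true volume is known analytically, so the truncation error of each rule is observable — the same experimental design the paper uses (here with an assumed single-Gaussian surface for illustration):

```python
import math

def volume_tdr(f, a, b, c, d, nx, ny):
    """Terrain volume by the trapezoidal double rule on a regular
    nx-by-ny grid over [a,b] x [c,d]: corner/edge nodes get half/quarter
    weight, interior nodes full weight."""
    hx, hy = (b - a) / nx, (d - c) / ny
    total = 0.0
    for i in range(nx + 1):
        wi = 1.0 if 0 < i < nx else 0.5
        x = a + i * hx
        for j in range(ny + 1):
            wj = 1.0 if 0 < j < ny else 0.5
            total += wi * wj * f(x, c + j * hy)
    return total * hx * hy

def volume_sdr(f, a, b, c, d, nx, ny):
    """Simpson's double rule (nx, ny must be even): 1-4-2-...-4-1
    weights in each direction, combined as a product."""
    hx, hy = (b - a) / nx, (d - c) / ny

    def w(i, n):
        return 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)

    total = 0.0
    for i in range(nx + 1):
        x = a + i * hx
        for j in range(ny + 1):
            total += w(i, nx) * w(j, ny) * f(x, c + j * hy)
    return total * hx * hy / 9.0
```

On a smooth surface the SDR truncation error decays as h⁴ versus h² for the TDR, which is why the paper treats TE as the dominant model error separating the two rules.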
Modelling lateral beam quality variations in pencil kernel based photon dose calculations
International Nuclear Information System (INIS)
Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M
2006-01-01
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
CT-based dose calculations and in vivo dosimetry for lung cancer treatment
International Nuclear Information System (INIS)
Essers, M.; Lanson, J.H.; Leunens, G.; Schnabel, T.; Mijnheer, B.J.
1995-01-01
Reliable CT-based dose calculations and dosimetric quality control are essential for the introduction of new conformal techniques for the treatment of lung cancer. The first aim of this study was therefore to check the accuracy of dose calculations based on CT-densities, using a simple inhomogeneity correction model, for lung cancer patients irradiated with an AP-PA treatment technique. Second, the use of diodes for absolute exit dose measurements and an Electronic Portal Imaging Device (EPID) for relative transmission dose verification was investigated for 22 and 12 patients, respectively. The measured dose values were compared with calculations performed using our 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD). Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. An uncertainty in exit dose calculation of 2.5% (1 SD) could be explained by organ motion, resulting from the ventilatory or cardiac cycle. The most important reason for the large overall spread was, however, the uncertainty involved in performing point measurements: about 4% (1 SD). This difference resulted from the systematic and random deviation in patient set-up and therefore in diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good agreement with calculated exit dose profiles. Our study shows that, for this treatment technique, the dose in the thorax region is quite accurately predicted using CT-based dose calculations, even if a simple inhomogeneity correction model is used. Point detectors such as diodes are not suitable for exit
Calculation of passive earth pressure of cohesive soil based on Culmann's method
Directory of Open Access Journals (Sweden)
Hai-feng Lu
2011-03-01
Full Text Available Based on the sliding plane hypothesis of Coulomb earth pressure theory, a new method for calculation of the passive earth pressure of cohesive soil was constructed with Culmann's graphical construction. The influences of the cohesive force, adhesive force, and the fill surface form were considered in this method. In order to obtain the passive earth pressure and sliding plane angle, a program based on the sliding surface assumption was developed with the VB.NET programming language. The calculated results from this method were basically the same as those from the Rankine theory and Coulomb theory formulas. This method is conceptually clear, and the corresponding formulas given in this paper are simple and convenient for application when the fill surface form is complex.
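Culmann's construction amounts to sweeping trial planar slip surfaces and taking the extreme wedge force. A minimal numerical sketch for the simplest configuration — smooth vertical wall, horizontal backfill, cohesion on the slip plane only, no wall adhesion or surcharge (all assumed simplifications relative to the paper's more general method) — looks like this:

```python
import math

def passive_pressure_trial_wedge(gamma, H, phi_deg, c, n_trials=2000):
    """Passive earth force per unit wall length by Culmann-style trial
    wedges. Smooth vertical wall of height H, horizontal backfill of
    unit weight gamma, friction angle phi_deg, cohesion c on the slip
    plane. Returns (P_p, critical_plane_angle_deg); passive resistance
    is the MINIMUM of the wedge force over trial slip-plane angles."""
    phi = math.radians(phi_deg)
    best_p, best_theta = float("inf"), None
    for i in range(1, n_trials):
        theta = math.radians(0.5 + 89.0 * i / n_trials)  # trial plane angle
        W = 0.5 * gamma * H * H / math.tan(theta)        # wedge weight
        L = H / math.sin(theta)                          # slip-plane length
        denom = math.cos(theta) - math.tan(phi) * math.sin(theta)
        if denom <= 1e-9:                                # equilibrium impossible
            continue
        # Force equilibrium of the wedge (friction + cohesion resist
        # the upward sliding of the passive wedge):
        N = (W + c * L * math.sin(theta)) / denom        # normal reaction
        P = (N * (math.sin(theta) + math.tan(phi) * math.cos(theta))
             + c * L * math.cos(theta))                  # horizontal wall force
        if P < best_p:
            best_p, best_theta = P, math.degrees(theta)
    return best_p, best_theta
```

For this simple geometry the minimum should land on the Rankine result P_p = ½γH²K_p + 2cH√K_p with the critical plane at 45° − φ/2, which provides a convenient check of the trial-wedge sweep.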
Ertl, P
1998-02-01
Easy to use, interactive, and platform-independent WWW-based tools are ideal for the development of chemical applications. By using newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of the synthetic organic chemist. At Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.
The calculation of surface free energy based on embedded atom method for solid nickel
International Nuclear Information System (INIS)
Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng
2013-01-01
Highlights: ► A new solution for accurate prediction of surface free energy based on the embedded atom method is proposed. ► The temperature dependent anisotropic surface energy of solid nickel is obtained. ► In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded atom method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature dependent anisotropic surface energy of bulk nickel and the surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data.
Correction of the calculation of beam loading based in the RF power diffusion equation
International Nuclear Information System (INIS)
Silva, R. da.
1980-01-01
An empirical correction, based upon experimental data of other authors from the ORELA, GELINA and SLAC accelerators, is described for the calculation of the energy loss due to the beam loading effect as given by the RF power diffusion equation theory in an accelerating structure. A dependence of this correction on the electron pulse full width at half maximum is obtained, independent of the electron energy. (author) [pt
A Cultural Study of a Science Classroom and Graphing Calculator-based Technology
Casey, Dennis Alan
2001-01-01
Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology...
Research on trust calculation of wireless sensor networks based on time segmentation
Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin
2017-05-01
Because wireless sensor networks differ from traditional networks in their characteristics, they are vulnerable to intrusion through compromised nodes. The trust mechanism is the most effective way to defend against internal attacks. Aiming at the shortcomings of existing trust mechanisms, a method of calculating the trust of wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the life of the network.
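The abstract does not give the paper's exact weighting, but a time-segmented trust value is typically an average of per-segment behaviour with more weight on recent segments. A minimal sketch, assuming exponential forgetting (the factor and the success-ratio inputs are illustrative choices, not the paper's definitions):

```python
def segmented_trust(segment_ratios, forgetting=0.8):
    """Overall trust of a node from per-segment interaction success
    ratios (oldest first), weighted so that recent behaviour counts
    more via an exponential forgetting factor in (0, 1].

    An assumed weighting scheme for illustration -- the cited paper's
    segmentation and aggregation rules may differ.
    """
    n = len(segment_ratios)
    weights = [forgetting ** (n - 1 - k) for k in range(n)]
    return sum(w * t for w, t in zip(weights, segment_ratios)) / sum(weights)
```

Segmenting time this way lets a node that suddenly turns malicious lose trust quickly, while a long-good history is not erased by a single noisy window.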
DP-THOT - a calculational tool for bundle-specific decay power based on actual irradiation history
International Nuclear Information System (INIS)
Johnston, S.; Morrison, C.A.; Albasha, H.; Arguner, D.
2005-01-01
A tool has been created for calculating the decay power of an individual fuel bundle to take account of its actual irradiation history, as tracked by the fuel management code SORO. The DP-THOT tool was developed in two phases: first as a standalone executable code for decay power calculation, which could accept as input an entirely arbitrary irradiation history; then as a module integrated with SORO auxiliary codes, which directly accesses SORO history files to retrieve the operating power history of the bundle since it first entered the core. The methodology implemented in the standalone code is based on the ANSI/ANS-5.1-1994 formulation, which has been specifically adapted for calculating decay power in irradiated CANDU reactor fuel, by making use of fuel type specific parameters derived from WIMS lattice cell simulations for both 37 element and 28 element CANDU fuel bundle types. The approach also yields estimates of uncertainty in the calculated decay power quantities, based on the evaluated error in the decay heat correlations built-in for each fissile isotope, in combination with the estimated uncertainty in user-supplied inputs. The method was first implemented in the form of a spreadsheet, and following successful testing against decay powers estimated using the code ORIGEN-S, the algorithm was coded in FORTRAN to create an executable program. The resulting standalone code, DP-THOT, accepts an arbitrary irradiation history and provides the calculated decay power and estimated uncertainty over any user-specified range of cooling times, for either 37 element or 28 element fuel bundles. The overall objective was to produce an integrated tool which could be used to find the decay power associated with any identified fuel bundle or channel in the core, taking into account the actual operating history of the bundles involved. The benefit is that the tool would allow a more realistic calculation of bundle and channel decay powers for outage heat sink planning
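The superposition at the heart of an ANS-5.1-style calculation can be sketched compactly: the decay power after an arbitrary history is the sum of the responses to each constant-power segment, each response being an integral of the fission-pulse decay function. The two-term exponential fit below uses placeholder coefficients for illustration only — they are NOT the evaluated ANSI/ANS-5.1-1994 group data, nor the WIMS-derived CANDU parameters used by DP-THOT:

```python
import math

# Placeholder two-term fit f(t) = sum(alpha_i * exp(-lam_i * t)) of decay
# power per unit fission power following a pulse. Hypothetical values;
# the standard uses ~23 exponential groups per fissile isotope.
GROUPS = [(0.05, 1.0e-1), (0.01, 1.0e-4)]

def decay_power(history, t_cool):
    """Decay power at t_cool after the end of irradiation, by superposing
    the response of each constant-power segment.

    history: list of (power, duration) tuples, oldest first, in
    consistent time/power units.
    """
    total = sum(d for _, d in history)
    p, elapsed = 0.0, 0.0
    for power, dur in history:
        b = total - elapsed + t_cool      # time since segment start
        a = b - dur                       # time since segment end
        for alpha, lam in GROUPS:
            # integral of alpha*exp(-lam*t) over the segment's age window
            p += power * alpha / lam * (math.exp(-lam * a) - math.exp(-lam * b))
        elapsed += dur
    return p
```

Because the response is linear in the fission rate, splitting a segment in two leaves the result unchanged — a useful self-consistency check for any implementation that reads arbitrary SORO-style histories.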
Calculating the knowledge-based similarity of functional groups using crystallographic data
Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.
2001-09-01
A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the `central group') and functional group B (the `contact group' or `probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated showing the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.
Application of CFD dispersion calculation in risk based inspection for release of H2S
International Nuclear Information System (INIS)
Sharma, Pavan K.; Vinod, Gopika; Singh, R.K.; Rao, V.V.S.S.; Vaze, K.K.
2011-01-01
In atmospheric dispersion, both deterministic and probabilistic approaches have been used for addressing design and regulatory concerns. In the context of deterministic calculations, the dispersion of pollutants in the atmosphere is an important area in which different approaches are followed in the development of good analytical models. Analyses based on Computational Fluid Dynamics (CFD) codes offer an opportunity for model development based on first principles of physics, and hence such models have an edge over the existing models. In the context of probabilistic methods, risk based inspection approaches (wherein the consequence of failure of each component needs to be assessed) are becoming popular. Consequence evaluation in a process plant is a crucial task: often the number of components considered for life management is very large, and consequence evaluation of all the components proves to be a laborious task. The present paper presents the results of joint collaborative work of deterministic and probabilistic modelling groups working in the field of atmospheric dispersion. Even though API 581 offers a simplified qualitative approach, regulators find some of the factors, in particular the quantity factor, unsuitable for process plants. Often dispersion calculations for heavy gases are done with very simple models which cannot account for density-driven atmospheric dispersion. A new approach with a CFD based technical basis is therefore proposed, so that the range of quantities considered, along with the factors used, can be justified. The present paper is aimed at bringing out some of the distinct merits and demerits of CFD based models. A brief account of the applications of such CFD codes reported in the literature is also presented in the paper. This paper describes the approach devised and demonstrated for the said issue, with emphasis on the CFD calculations. (author)
Calculation of effect of burnup history on spent fuel reactivity based on CASMO5
International Nuclear Information System (INIS)
Li Xiaobo; Xia Zhaodong; Zhu Qingfu
2015-01-01
Based on the actinides + fission products burnup credit (APU-2) usually considered for spent fuel packages, the effect of power density and operating history on k_∞ was studied. All burnup calculations are based on the two-dimensional fuel assembly burnup program CASMO5. The results show that taking the core-average power density of the specified power plus a bounding margin of 0.0023 on k_∞, and taking the operating history of the specified power without shutdowns during or between cycles plus a bounding margin of 0.0045 on k_∞, can meet the bounding principle of burnup credit. (authors)
Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method
International Nuclear Information System (INIS)
Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V
2016-01-01
In this paper we present the results of computer simulations of the band structure of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed with specially developed software that supports cluster computing. This method significantly reduces the demands on computing resources compared with traditional approaches based on ab initio techniques, while yielding adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)
International Nuclear Information System (INIS)
Takayama, T.; Sekine, T.; Kudo, H.
2003-01-01
Theoretical calculations based on density functional theory (DFT) were performed to understand the effect of substituents on the molecular and electronic structures of technetium nitrido complexes with salen-type Schiff base ligands. The optimized structures of these complexes are square pyramidal. The electron density on the Tc atom of a complex with electron-withdrawing substituents is lower than that of a complex with electron-donating substituents, and the HOMO energy is likewise lower in the complex with electron-withdrawing substituents. The charge on the Tc atom is a good measure of the redox potential of the [TcN(L)] complex. (author)
Directory of Open Access Journals (Sweden)
Renchun Huang
2015-03-01
Full Text Available Various methods are available for calculating the TOC of shale reservoirs from logging data, and each has its own applicability and accuracy, so it is especially important to establish a regional experimental calculation model based on a thorough analysis of their applicability. Taking the Upper Ordovician Wufeng Fm-Lower Silurian Longmaxi Fm shale reservoirs as an example, TOC calculation models were built using the improved ΔlgR, bulk density, natural gamma spectroscopy, multi-fitting and volume model methods, in light of previous research results and the geologic features of the area. These models were compared against core data, and the bulk density method was finally selected as the regional experimental calculation model. Field practice demonstrated that the improved ΔlgR and natural gamma spectroscopy methods have poor accuracy; although the multi-fitting and bulk density methods both have relatively high accuracy, the bulk density method is simpler and more widely applicable. To further verify its applicability, the bulk density method was applied to calculate the TOC of shale reservoirs in several key wells in the Jiaoshiba shale gas field, Sichuan Basin, and its accuracy was checked against measured core data, showing that the coincidence rate of the logging-based TOC calculation is up to 90.5%–91.0%.
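At its core, the bulk density method selected above is a regression of core-measured TOC against the density log, calibrated regionally and then applied to continuous log data. The sketch below uses a simple linear form and entirely hypothetical calibration numbers; the paper's regional model would be fitted to actual Wufeng-Longmaxi core data and need not be linear:

```python
import numpy as np

def fit_density_toc_model(rho_core, toc_core):
    """Calibrate TOC = a * rho_b + b by least squares against core data."""
    a, b = np.polyfit(rho_core, toc_core, 1)
    return a, b

def toc_from_density(rho_log, a, b):
    """Apply the calibrated model to a continuous bulk-density log."""
    return a * np.asarray(rho_log) + b

# Hypothetical core calibration points: TOC falls as bulk density rises.
rho_core = np.array([2.45, 2.50, 2.55, 2.60, 2.65])   # g/cm3
toc_core = np.array([4.8, 3.9, 3.1, 2.2, 1.4])        # wt%
a, b = fit_density_toc_model(rho_core, toc_core)
```

The negative slope reflects the physical basis of the method: organic matter is lighter than the mineral matrix, so higher TOC lowers bulk density.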
Calculator: A Hardware Design, Math and Software Programming Project-Based Learning
Directory of Open Access Journals (Sweden)
F. Criado
2015-03-01
Full Text Available This paper presents the implementation by students of a complex calculator in hardware. The project meets hardware design goals and also strongly motivates students to apply competences learned in other subjects. The learning process associated with system design is hard enough, because the students have to deal with parallel execution, signal delay, synchronization and more. To strengthen their knowledge of hardware design, a project-based learning (PBL) methodology is therefore proposed; it is also used to reinforce cross-cutting subjects such as math and software programming. This methodology creates a course dynamic closer to a professional environment, where students work with software and mathematics to solve hardware design problems. The students design the functionality of the calculator from scratch: they decide which math operations it is able to solve, the operand formats, and how a complex equation is entered into the calculator, which increases their intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to implement in software their decisions about the selected mathematical algorithms. Although math and hardware design are two tough subjects for students, the perception they report at the end of the course is quite positive.
Wave resistance calculation method combining Green functions based on Rankine and Kelvin source
Directory of Open Access Journals (Sweden)
LI Jingyu
2017-12-01
Full Text Available [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first and the pressure integral is then calculated using the Bernoulli equation. However, this model of wave-making resistance is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance: the Rankine source Green function is used to solve the source density on the hull surface, and the Lagally theorem for the force on source points, based on the Kelvin source Green function, is then applied to obtain the wave resistance. A case study for the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to a method that uses the Kelvin source Green function throughout, it has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.
DEFF Research Database (Denmark)
Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F
2012-01-01
Abstract Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates this method for monitoring acid-base status and oxygenation during admission … for exacerbation of chronic obstructive pulmonary disease (COPD). Methods: Simultaneous arterial and peripheral venous blood samples were analysed. Venous values were used to calculate arterial pH, PCO2 and PO2, and these were compared to measured values using Bland-Altman analysis and scatter plots. Calculated values of PO2 … pH, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, the differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules …
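The Bland-Altman comparison reported above reduces to computing the bias (mean of the paired differences) and the SD of those differences. A minimal sketch of that generic computation, using made-up pH pairs rather than the study's data:

```python
import numpy as np

def bland_altman(calculated, measured):
    """Return bias and SD of paired differences (calculated - measured)."""
    d = np.asarray(calculated, float) - np.asarray(measured, float)
    return d.mean(), d.std(ddof=1)

# Illustrative pH pairs only; the study reports bias 0.000 and SD 0.022 for pH.
calc_ph = [7.41, 7.38, 7.45, 7.36]
meas_ph = [7.40, 7.39, 7.44, 7.37]
bias, sd = bland_altman(calc_ph, meas_ph)
```

The bias ± SD pair is exactly what the abstract quotes as "0.000±0.022 pH"; the Bland-Altman plot itself is these differences plotted against the pairwise means.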
International Nuclear Information System (INIS)
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-01-01
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
Korhonen, Juha; Kapanen, Mika; Keyrilainen, Jani; Seppala, Tiina; Tuomikoski, Laura; Tenhunen, Mikko
2013-01-01
Magnetic resonance (MR) images are used increasingly for target delineation in external radiotherapy because of their superior soft-tissue contrast compared to computed tomography (CT) images. Nevertheless, radiotherapy treatment planning has traditionally been based on CT images, due to restrictive features of MR images such as the lack of electron density information. This research aimed to measure absorbed radiation doses in material behind different bone parts, and to evaluate dose calculation errors in two pseudo-CT images: first, by assuming a single electron density value for the bones, and second, by converting the electron density values inside bones from T(1)∕T(2)∗-weighted MR image intensity values. A dedicated phantom was constructed using fresh deer bones and gelatine. The effect of different bone parts on the absorbed dose behind them was investigated with a single open field at 6 and 15 MV, measuring clinically detectable dose deviations with an ionization chamber matrix. Dose calculation deviations in a conversion-based pseudo-CT image and in a bulk-density pseudo-CT image, where the relative electron density to water for the bones was set to 1.3, were quantified by comparing the calculation results with those obtained in a standard CT image by superposition and Monte Carlo algorithms. The calculations revealed that the applied bulk-density pseudo-CT image causes deviations of up to 2.7% (6 MV) and 2.0% (15 MV) in the dose behind the examined bones; the corresponding values in the conversion-based pseudo-CT image were 1.3% (6 MV) and 1.0% (15 MV). The examinations illustrated that representing the heterogeneous femoral bone (cortex denser than core) by a bulk density for the whole bone causes dose deviations of up to 2% both behind the bone edge and behind the middle part of the bone (diameter bones). This study indicates that the decrease in absorbed dose is not dependent on the bone diameter with all types of bones. Thus
DEFF Research Database (Denmark)
Guo, Jingjing; Jensen, Christian D.; Ma, Jianfeng
2016-01-01
Mobile devices have become more powerful and are increasingly integrated in the everyday life of people, from playing games, taking pictures and interacting with social media to replacing credit cards in payment solutions. Some actions may only be appropriate in some situations, so the security … of a mobile device is therefore increasingly linked to its context, such as its location, surroundings (e.g. objects in the immediate environment) and so on. However, situational awareness and context are not captured by traditional security models. In this paper, we examine the notion of Device Comfort …, which captures a device's ability to secure and reason about its environment. Specifically, we study the feasibility of two device comfort calculation methods we proposed in previous work. We run trace-driven simulations based on a large body of sensed data from mobile devices in the real world …
DEFF Research Database (Denmark)
Yan, Wei; Belkadi, Abdelkrim; Michelsen, Michael Locht
2013-01-01
Flash calculation can be a time-consuming part of compositional reservoir simulations, and several approaches have been proposed to speed it up. One recent approach is the shadow-region method, which reduces the computation time mainly by skipping stability analysis for a large portion … of the compositions in the single-phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be used with initial estimates from the previous step. Another approach is the compositional-space adaptive-tabulation (CSAT) approach, which is based on tie-line table look-up (TTL). It saves … be made. A comparison between the shadow-region approach and the approximation approach, including TTL and TDBA, has been made with a slimtube simulator in which the simulation temperature and pressure are held constant. It is shown that TDBA can significantly improve the speed in the two …
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rate (flux value) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. Many studies are available in which formulae for the coefficient of permeability from saturated aqueous solutions (K(p)) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate the permeability coefficients. The dermal penetration rate was estimated from the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals, with critical fluxes defined from the literature recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and of LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of the calculated K(p) values with data obtained earlier from other models showed that the LFER predictions were comparable to those of some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that the skin should not be characterised as a simple lipophilic barrier alone: both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published
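The solvation equation underlying the LFER model is linear in the five Abraham descriptors, and with log K(p) in hand the steady-state flux from a saturated aqueous solution follows from Fick's law as J = K(p) × C_sat. The coefficients and descriptor values below are placeholders for illustration, not the fitted values of any published skin-permeation LFER:

```python
def log_kp_lfer(E, S, A, B, V, coeffs):
    """Abraham solvation equation: log Kp = c + e*E + s*S + a*A + b*B + v*V.
    E: excess molar refractivity, S: dipolarity/polarisability,
    A/B: summation H-bond acidity/basicity, V: McGowan volume."""
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

def flux(log_kp, c_sat):
    """Dermal penetration rate J = Kp * C_sat (units follow Kp and C_sat)."""
    return (10 ** log_kp) * c_sat

# Placeholder coefficients and descriptors, for illustration only.
coeffs = (-2.5, 0.1, -0.5, -1.5, -3.0, 2.0)
lkp = log_kp_lfer(E=0.8, S=0.9, A=0.3, B=0.6, V=1.0, coeffs=coeffs)
```

Skin notation would then follow by comparing the computed flux against the critical reference fluxes mentioned in the abstract.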
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
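In a peptide with three glycine subunits, the isotopomer pattern of newly synthesized protein follows a binomial distribution in the precursor enrichment p, and the observed spectrum is a linear mixture of old (unlabelled) and new protein. The sketch below pairs a coarse grid over p with linear regression for the mixture; the paper solves both unknowns directly by linear algebra, so this is an illustrative simplification, not the authors' exact procedure:

```python
import numpy as np
from math import comb

def binom_dist(n, p):
    """Mass isotopomer fractions for n subunits at precursor enrichment p."""
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

def mida_fit(observed, n=3, p_grid=None):
    """For each trial p, regress the observed distribution on the
    [old, new(p)] basis; keep the p with the smallest residual."""
    if p_grid is None:
        p_grid = np.linspace(0.01, 0.6, 60)
    old = binom_dist(n, 0.0)          # pre-existing, unlabelled protein
    best = None
    for p in p_grid:
        X = np.column_stack([old, binom_dist(n, p)])
        coef = np.linalg.lstsq(X, observed, rcond=None)[0]
        r = np.sum((observed - X @ coef) ** 2)
        if best is None or r < best[0]:
            best = (r, p, coef)
    _, p, (f_old, f_new) = best
    return p, f_new / (f_old + f_new)  # enrichment, fractional synthesis

# Simulated spectrum: 50% newly synthesized protein at 20% enrichment.
obs = 0.5 * binom_dist(3, 0.0) + 0.5 * binom_dist(3, 0.2)
p_hat, fs_hat = mida_fit(obs)
```

The binomial basis also illustrates why ¹³C₂-glycine avoids spectral overlap: each incorporated labelled subunit shifts the isotopomer by two mass units, well clear of the natural-abundance envelope.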
Continuous energy Monte Carlo method based homogenization multi-group constants calculation
International Nuclear Information System (INIS)
Li Mancang; Wang Kan; Yao Dong
2012-01-01
The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, making Monte Carlo codes versatile tools for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track-length scheme is used as the foundation of cross-section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques, and the Scattering Event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P₁ cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B_N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores of complicated configuration to gain higher accuracy. (authors)
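A track-length tally collapses a continuous-energy cross section to group constants as the flux-weighted average σ_g = Σᵢ lᵢ σ(Eᵢ) / Σᵢ lᵢ over the tracks whose energies fall in group g. A minimal sketch of that generic estimator with a hypothetical two-group toy problem (not the MCMC code's implementation):

```python
import numpy as np

def collapse_xs(track_lengths, energies, sigma, group_edges):
    """Flux-weighted group cross sections from track-length tallies.
    group_edges are ascending energy bounds; sigma(E) is the pointwise xs."""
    l = np.asarray(track_lengths, float)
    e = np.asarray(energies, float)
    groups = np.digitize(e, group_edges) - 1     # group index per track
    n_groups = len(group_edges) - 1
    xs = np.zeros(n_groups)
    for g in range(n_groups):
        sel = groups == g
        if sel.any():
            xs[g] = np.sum(l[sel] * sigma(e[sel])) / np.sum(l[sel])
    return xs

# Two-group toy: sigma is constant within each group, so the collapse is exact.
sigma = lambda e: np.where(e < 1.0, 5.0, 2.0)    # hypothetical xs, barns
xs = collapse_xs([1.0, 2.0, 0.5], [0.3, 0.7, 5.0], sigma, [0.0, 1.0, 10.0])
```

In a real lattice calculation the track lengths come from the transport simulation itself, so the weighting flux is the actual assembly spectrum rather than an assumed one.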
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Prediction of fission mass-yield distributions based on cross section calculations
International Nuclear Information System (INIS)
Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.
2005-01-01
For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes are taken into account: the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode. De-convoluted fission cross sections for the S1, S2 and SL modes of ²³⁵,²³⁸U(n,f) and ²³⁷Np(n,f), based on experimental branching ratios, were calculated for the first time in the incident neutron energy range from 0.01 to 5.5 MeV, providing good agreement with the experimental fission cross section data. The branching ratios obtained from the modal fission cross section calculations have been used to deduce the corresponding fission yield distributions, including mean values, also for incident neutron energies hitherto not accessible to experiment.
Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter
2017-06-01
We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
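Kernel ridge regression with a Gaussian kernel, plus a second layer trained on the first layer's residuals, can be sketched in a few lines. The example below fits a toy 1D curve rather than the CH3Cl PES, and the hyperparameters are arbitrary; it illustrates the layered self-correction idea, not the paper's structure-based sampling:

```python
import numpy as np

def krr_fit(X, y, sigma=1.0, lam=1e-10):
    """Solve (K + lam*I) alpha = y for a Gaussian kernel."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(Xtrain, alpha, Xnew, sigma=1.0):
    d2 = np.sum((Xnew[:, None, :] - Xtrain[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha

# Layer 1 learns the "energies"; layer 2 learns layer 1's residuals.
X = np.linspace(-2, 2, 40).reshape(-1, 1)
y = np.sin(2 * X[:, 0])                  # toy potential energy curve
a1 = krr_fit(X, y)
resid = y - krr_predict(X, a1, X)
a2 = krr_fit(X, resid)
Xq = np.array([[0.5]])
e = krr_predict(X, a1, Xq) + krr_predict(X, a2, Xq)
```

Prediction is a single kernel-matrix product, which is why tens of thousands of grid points can be evaluated in seconds once the model is trained.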
SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT
International Nuclear Information System (INIS)
Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K
2014-01-01
Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT images and the CBCT images acquired immediately before treatment were obtained for 10 prostate cancer patients. Because of insufficient calibration of the pixel values in the CBCT, they are difficult to use directly for dose calculation, so the pixel values in the CBCT images were converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: After conversion of the pixel values in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively; as a whole, the difference in prescription dose was 1.54±0.4%. Conclusion: Dose calculation on the CBCT images achieves an accuracy of <2% with this pixel value conversion program. This may enable implementation of efficient adaptive radiotherapy.
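One common way to convert pixel values using the two histograms, as described above, is cumulative-histogram (CDF) matching: each CBCT grey level is mapped to the sim-CT level occupying the same quantile. The abstract does not specify the in-house program's algorithm, so the sketch below is a generic stand-in:

```python
import numpy as np

def histogram_match(cbct, simct):
    """Map CBCT pixel values onto the sim-CT intensity distribution by
    matching the two cumulative histograms (CDFs)."""
    s_vals, s_counts = np.unique(cbct.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / cbct.size
    t_vals, t_counts = np.unique(simct.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / simct.size
    # For each CBCT grey level, find the sim-CT level at the same quantile.
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return np.interp(cbct.ravel(), s_vals, mapped).reshape(cbct.shape)

# Synthetic check: a CBCT with a systematic offset is pulled back to sim-CT.
rng = np.random.default_rng(0)
simct = rng.integers(0, 200, size=(32, 32)).astype(float)
cbct = simct + 100.0          # miscalibrated pixel values
out = histogram_match(cbct, simct)
```

Because the mapping is derived from whole-image histograms, it corrects global calibration offsets but not spatially varying artifacts such as scatter cupping.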
Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M
2014-08-01
The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10⁴ m² to ∼10⁷ m². Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10² m². We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m⁻² K⁻¹ s⁻¹ᐟ² (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. Knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
Massively-parallel electronic-structure calculations based on real-space density functional theory
International Nuclear Information System (INIS)
Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro
2010-01-01
Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to an efficient parallel implementation, we also implemented several computational improvements, substantially reducing the cost of O(N³) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and inter-node communication to clarify its efficiency, its applicability, and the possibilities for further improvement.
A theoretical study of blue phosphorene nanoribbons based on first-principles calculations
Energy Technology Data Exchange (ETDEWEB)
Xie, Jiafeng; Si, M. S., E-mail: sims@lzu.edu.cn; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China)
2014-08-21
Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters describing the quantum confinement are obtained by fitting the calculated band gaps with respect to the ribbon widths. The results show that quantum confinement is stronger in armchair nanoribbons than in zigzag ones. This study provides an efficient approach to tuning the band gap in BPNRs.
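Fitting calculated gaps to a confinement law of the form E_g(w) = E_bulk + C / wᵅ becomes a linear regression on a log-log scale once E_bulk is fixed. The functional form and the numbers below are illustrative assumptions for that generic fitting step, not the paper's fitted parameters:

```python
import numpy as np

def fit_confinement(widths, gaps, e_bulk):
    """Fit dE = C / w**alpha, i.e. a straight line in log(w) vs log(dE)."""
    x = np.log(np.asarray(widths, float))
    y = np.log(np.asarray(gaps, float) - e_bulk)
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), -slope      # C, alpha

# Synthetic gaps generated with C = 10 eV*A^2, alpha = 2, above a 2 eV bulk gap.
w = np.array([10.0, 15.0, 20.0, 30.0])    # ribbon widths, Angstrom
gaps = 2.0 + 10.0 / w**2                  # band gaps, eV
C, alpha = fit_confinement(w, gaps, e_bulk=2.0)
```

The fitted exponent alpha quantifies the confinement strength, which is how a statement like "confinement is stronger in armchair than in zigzag ribbons" can be made quantitative.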
Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm
International Nuclear Information System (INIS)
Xia Xinyi; Xia Jun
2016-01-01
A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm of conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from a complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstruction than the traditional method. (special topic)
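The GS iteration alternates between the hologram plane, where the field is forced to be phase-only, and the image plane, where the amplitude is forced to the target view. A minimal single-view sketch using FFT propagation (a stand-in for the stereogram's per-view geometry):

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Retrieve a phase-only hologram whose far-field amplitude
    approximates target_amp, by alternating projections."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iterations):
        field = np.exp(1j * phase)                      # phase-only constraint
        img = np.fft.fft2(field)                        # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))   # impose target amplitude
        phase = np.angle(np.fft.ifft2(img))             # back-propagate, keep phase
    return phase

# Toy target: a bright square; compare reconstruction error before and after.
target = np.zeros((32, 32)); target[12:20, 12:20] = 1.0
target /= np.linalg.norm(target)

def err(p):
    amp = np.abs(np.fft.fft2(np.exp(1j * p)))
    return np.linalg.norm(amp / np.linalg.norm(amp) - target)

rng = np.random.default_rng(0)
p0 = rng.uniform(0, 2 * np.pi, target.shape)    # same start as inside GS
pf = gerchberg_saxton(target)
```

Each GS sweep is a projection onto one constraint set, so the amplitude error is non-increasing across iterations, which is the convergence behaviour the proposed method relies on.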
Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.
Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R
2000-07-01
A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.
C.C. Hunault; J.D.F. Habbema (Dik); M.J.C. Eijkemans (René); J.A. Collins (John); J.L.H. Evers (Johannes); E.R. te Velde (Egbert)
2004-01-01
BACKGROUND: Several models have been published for the prediction of spontaneous pregnancy among subfertile patients. The aim of this study was to broaden the empirical basis for these predictions by making a synthesis of three previously published models. METHODS:
Short-Term Wind Power Forecasting Based on Clustering Pre-Calculated CFD Method
Directory of Open Access Journals (Sweden)
Yimei Wang
2018-04-01
Full Text Available To meet the increasing wind power forecasting (WPF) demands of newly built wind farms without historical data, physical WPF methods are widely used. The computational fluid dynamics (CFD) pre-calculated flow fields (CPFF) based WPF is a promising physical approach, which can balance well the competing demands of computational efficiency and accuracy. To enhance its adaptability for wind farms in complex terrain, a WPF method combining wind turbine clustering with CPFF is first proposed, in which the wind turbines in the wind farm are clustered and a forecast is made for each cluster. K-means, hierarchical agglomerative and spectral analysis methods are used to establish the wind turbine clustering models. The Silhouette Coefficient, the Calinski-Harabasz index and the within-between index are proposed as criteria to evaluate the effectiveness of the established clustering models. Based on the different clustering methods and schemes, various clustering databases are built for clustering pre-calculated CFD (CPCC) based short-term WPF. For the wind farm case studied, the clustering evaluation criteria show that hierarchical agglomerative clustering gives reasonable results, spectral clustering is better, and K-means performs best. The WPF results produced by the different clustering databases in turn prove the effectiveness of the three evaluation criteria. The newly developed CPCC model has a much higher WPF accuracy than the CPFF model without clustering techniques, on both temporal and spatial scales. The research supports both the development and improvement of short-term physical WPF systems.
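Clustering the turbines and scoring the partition with the Silhouette Coefficient can be sketched with a bare-bones K-means. The toy 2D coordinates below stand in for turbine positions or flow features; the paper's actual feature set and clustering schemes are richer:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: assign to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def silhouette(X, labels):
    """Mean silhouette s = (b - a) / max(a, b) over all points."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = d[i][same].sum() / max(same.sum() - 1, 1)   # mean intra-cluster dist
        b = min(d[i][labels == c].mean()                # nearest other cluster
                for c in set(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Two well-separated turbine groups should score close to 1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
score = silhouette(X, kmeans(X, 2))
```

The same score computed across candidate cluster counts or methods is how the three criteria in the abstract rank K-means ahead of spectral and hierarchical clustering for this wind farm.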
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Its clinical application, however, still is limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software-package PhiMC, which is capable of computing precise dose distributions in a sub-minute time-frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement to DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs work on several hundreds of GB RAM the typical GPU memory limitation does not apply for our implementation and high resolution clinical plans can be calculated.
Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation
International Nuclear Information System (INIS)
Sanuki, T.; Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2007-01-01
We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing with simulations. We find the atmospheric muon and neutrino fluxes respond to errors in the π-production of the hadronic interaction similarly, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).] code with experimental measurements. The μ + +μ - data show good agreement in the 1∼30 GeV/c range, but a large disagreement above 30 GeV/c. The μ + /μ - ratio shows sizable differences at lower and higher momenta for opposite directions. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007).
Directory of Open Access Journals (Sweden)
G.A. McAuliffe
2018-04-01
Full Text Available With increasing concern about environmental burdens originating from livestock production, the importance of farming system evaluation has never been greater. In order to form a basis for trade-off analysis of pasture-based cattle production systems, liveweight data from 90 Charolais × Hereford-Friesian calves were collected at a high temporal resolution at the North Wyke Farm Platform (NWFP in Devon, UK. These data were then applied to the Intergovernmental Panel on Climate Change (IPCC modelling framework to estimate on-farm methane emissions under three different pasture management strategies, completing a foreground dataset required to calculate emissions intensity of individual beef cattle.
User interface tool based on the MCCM for the calculation of dpa distributions
International Nuclear Information System (INIS)
Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.
2009-01-01
The Monte Carlo assisted Classical Method (MCCM) was introduced by the authors to calculate the displacements per atom (dpa) distributions in solid materials, making use of the standard outputs of simulation code system MCNP and the classical theories of electron elastic scattering. Based on this method a new DLL with several user interface functions was implemented. Then, an application running on Windows systems was development in order to allow the easy handle of different useful functionalities included on it. In the present work this application is presented and some examples of it successful use in different interesting materials are exposed. (Author)
International Nuclear Information System (INIS)
Cenerino, G.; Marbeuf, A.; Vahlas, C.
1992-01-01
Since 1974, Thermodata has been working on developing an Integrated Information System in Inorganic Chemistry. A major effort was carried on the thermochemical data assessment of both pure substances and multicomponent solution phases. The available data bases are connected to powerful calculation codes (GEMINI = Gibbs Energy Minimizer), which allow to determine the thermodynamical equilibrium state in multicomponent systems. The high interest of such an approach is illustrated by recent applications in as various fields as semi-conductors, chemical vapor deposition, hard alloys and nuclear safety. (author). 26 refs., 6 figs
DEFF Research Database (Denmark)
Weitzmann, Peter; Svendsen, Svend
2005-01-01
, radiation and conduction of the heat transfer between pipe and surrounding materials. The European Standard for floor heating, EN1264, does not cover lightweight systems, while the supplemental Nordtest Method VVS127 is aimed at lightweight systems. The thermal properties can be found using tabulated values...... simulation model. It has been shown that the method is accurate with an error on the heat fluxes of less than 5% for different supply temperatures. An error of around 5% is also recorded when comparing measurements to calculated heat flows using the Nordtest VVS 127 method based on the experimental setup...
Comparison of CT number calibration techniques for CBCT-based dose calculation
International Nuclear Information System (INIS)
Dunlop, Alex; McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe; Murray, Julia; Bhide, Shreerang; Harrington, Kevin; Poludniowski, Gavin; Nutting, Christopher; Newbold, Kate
2015-01-01
The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCT r ); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RS auto ), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5 %. RS auto provided larger than average errors for pelvic treatments for patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0 % with CBCT r (0.5 %) and RS auto (0.6 %) performing best. For lung cases, WL and RS auto methods generated dose distributions most similar to the ground truth. The RS auto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RS auto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.) [de
International Nuclear Information System (INIS)
Wang, Lin-Wang
2006-01-01
Quantum mechanical ab initio calculation constitutes the biggest portion of the computer time in material science and chemical science simulations. As a computer center like NERSC, to better serve these communities, it will be very useful to have a prediction for the future trends of ab initio calculations in these areas. Such prediction can help us to decide what future computer architecture can be most useful for these communities, and what should be emphasized on in future supercomputer procurement. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation for its limited communication requirement, compared with spectrum method where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N 3 ) scaling methods. These O(N) methods are usually based on localized orbital in real space, which can be described more naturally by the real space basis. In this report, the author compares the real space methods versus the traditional plane wave (PW) spectrum methods, for their technical pros and cons, and the possible of future trends. For the real space method, the author focuses on the regular grid finite different (FD) method and the finite element (FE) method. These are the methods used mostly in material science simulation. As for chemical science, the predominant methods are still Gaussian basis method, and sometime the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the
International Nuclear Information System (INIS)
Lin, Lin; Yang, Chao; Chen, Mohan; He, Lixin
2013-01-01
We describe how to apply the recently developed pole expansion and selected inversion (PEXSI) technique to Kohn–Sham density function theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces (including both the Hellmann–Feynman force and the Pulay force) without using the eigenvalues and eigenvectors of the Kohn–Sham Hamiltonian. We also show how to update the chemical potential without using Kohn–Sham eigenvalues. The advantage of using PEXSI is that it has a computational complexity much lower than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEXSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEXSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundreds. Both the wall clock time and the memory requirement of PEXSI are modest. This even makes it possible to perform Kohn–Sham DFT calculations for 10 000-atom nanotubes with a sequential implementation of the selected inversion algorithm. We also perform an accurate geometry optimization calculation on a truncated (8, 0) boron nitride nanotube system containing 1024 atoms. Numerical results indicate that the use of PEXSI does not lead to loss of the accuracy required in a practical DFT calculation. (paper)
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator ( www.anastomoticleak.com ) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program ® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
Setuain, Igor; González-Izal, Miriam; Alfaro, Jesús; Gorostiaga, Esteban; Izquierdo, Mikel
2015-12-01
Handball is one of the most challenging sports for the knee joint. Persistent biomechanical and jumping capacity alterations can be observed in athletes with an anterior cruciate ligament (ACL) injury. Commonly identified jumping biomechanical alterations have been described by the use of laboratory technologies. However, portable and easy-to-handle technologies that enable an evaluation of jumping biomechanics at the training field are lacking. To analyze unilateral/bilateral acceleration and orientation jumping performance differences among elite male handball athletes with or without previous ACL reconstruction via a single inertial sensor unit device. Case control descriptive study. At the athletes' usual training court. Twenty-two elite male (6 ACL-reconstructed and 16 uninjured control players) handball players were evaluated. The participants performed a vertical jump test battery that included a 50-cm vertical bilateral drop jump, a 20-cm vertical unilateral drop jump, and vertical unilateral countermovement jump maneuvers. Peak 3-dimensional (X, Y, Z) acceleration (m·s(-2)), jump phase duration and 3-dimensional orientation values (°) were obtained from the inertial sensor unit device. Two-tailed t-tests and a one-way analysis of variance were performed to compare means. The P value cut-off for significance was set at P handball athletes with previous ACL reconstruction demonstrated a jumping biomechanical profile similar to control players, including similar jumping performance values in both bilateral and unilateral jumping maneuvers, several years after ACL reconstruction. These findings are in agreement with previous research showing full functional restoration of abilities in top-level male athletes after ACL reconstruction, rehabilitation and subsequent return to sports at the previous level. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Prior, Phillip; Tai, An; Erickson, Beth; Li, X Allen
2014-01-01
To consolidate duodenum and small bowel toxicity data from clinical studies with different dose fractionation schedules using the modified linear quadratic (MLQ) model. A methodology of adjusting the dose-volume (D,v) parameters to different levels of normal tissue complication probability (NTCP) was presented. A set of NTCP model parameters for duodenum toxicity were estimated by the χ(2) fitting method using literature-based tolerance dose and generalized equivalent uniform dose (gEUD) data. These model parameters were then used to convert (D,v) data into the isoeffective dose in 2 Gy per fraction, (D(MLQED2),v) and convert these parameters to an isoeffective dose at another NTCP (D(MLQED2'),v). The literature search yielded 5 reports useful in making estimates of duodenum and small bowel toxicity. The NTCP model parameters were found to be TD50(1)(model) = 60.9 ± 7.9 Gy, m = 0.21 ± 0.05, and δ = 0.09 ± 0.03 Gy(-1). Isoeffective dose calculations and toxicity rates associated with hypofractionated radiation therapy reports were found to be consistent with clinical data having different fractionation schedules. Values of (D(MLQED2'),v) between different NTCP levels remain consistent over a range of 5%-20%. MLQ-based isoeffective calculations of dose-response data corresponding to grade ≥2 duodenum toxicity were found to be consistent with one another within the calculation uncertainty. The (D(MLQED2),v) data could be used to determine duodenum and small bowel dose-volume constraints for new dose escalation strategies. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Calculation of the Fission Product Release for the HTR-10 based on its Operation History
International Nuclear Information System (INIS)
Xhonneux, A.; Druska, C.; Struth, S.; Allelein, H.-J.
2014-01-01
Since the first criticality of the HTR-10 test reactor in 2000, a rather complex operation history was performed. As the HTR-10 is the only pebble bed reactor in operation today delivering experimental data for HTR simulation codes, an attempt was made to simulate the whole reactor operation up to the presence. Special emphasis was put on the fission product release behaviour as it is an important safety aspect of such a reactor. The operation history has to be simulated with respect to the neutronics, fluid mechanics and depletion to get a detailed knowledge about the time-dependent nuclide inventory. In this paper we report about such a simulation with VSOP 99/11 and our new fission product release code STACY. While STACY (Source Term Analysis Code System) so far was able to calculate the fission product release rates in case of an equilibrium core and during transients, it now can also be applied to running-in-phases. This coupling demonstrates a first step towards an HCP Prototype. Based on the published power histogram of the HTR-10 and additional information about the fuel loading and shuffling, a coupled neutronics, fluid dynamics and depletion calculation was performed. Special emphasis was put on the complex fuel-shuffling scheme within both VSOP and STACY. The simulations have shown that the HTR-10 up to now generated about 2580 MWd while reshuffling the core about 2.3 times. Within this paper, STACY results for the equilibrium core will be compared with FRESCO-II results being published by INET. Compared to these release rates, which are based on a few user defined life histories, in this new approach the fission product release rates of Ag-110m, Cs-137, Sr-90 and I-131 have been simulated for about 4000 tracer pebbles with STACY. For the calculation of the HTR-10 operation history time-dependent release rates are being presented as well. (author)
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Directory of Open Access Journals (Sweden)
Brejnev Muhizi Muhire
Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV. There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT, a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms.
Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D
2018-05-18
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
BaTiO3-based nanolayers and nanotubes: first-principles calculations.
Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D
2013-01-30
The first-principles calculations using hybrid exchange-correlation functional and localized atomic basis set are performed for BaTiO(3) (BTO) nanolayers and nanotubes (NTs) with the structure optimization. Both the cubic and the ferroelectric BTO phases are used for the nanolayers and NTs modeling. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have the practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of (0 1 0) and (0 1 1[overline]) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depends on the original bulk crystal phase and a wall thickness. The majority of the considered NTs with the low formation and strain energies has the mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1[overline]) layers may show antiferroelectric arrangement of Ti-O bonds. Comparison of stability of the BTO-based and SrTiO(3)-based NTs shows that the former are more stable than the latter. Copyright © 2012 Wiley Periodicals, Inc.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
Calculations of helium separation via uniform pores of stanene-based membranes
Directory of Open Access Journals (Sweden)
Guoping Gao
2015-12-01
Full Text Available The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized, two-dimensional stanene (2D Sn and decorated 2D Sn (SnH and SnF honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K, two practical strategies (i.e., the application of strain and functionalization are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as a superior membrane over traditionally used, porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by application of strain to optimize the He purification properties by taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting new interesting materials for helium separation for future experimental validation.
Directory of Open Access Journals (Sweden)
Feifei Fu
2014-01-01
Full Text Available Life cycle thinking has become widely applied in the assessment for building environmental performance. Various tool are developed to support the application of life cycle assessment (LCA method. This paper focuses on the carbon emission during the building construction stage. A partial LCA framework is established to assess the carbon emission in this phase. Furthermore, five typical LCA tools programs have been compared and analyzed for demonstrating the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and sustainability demands in building, a new computer calculation system has been developed to calculate the carbon emission for optimizing the sustainability during the construction stage. The system structure and detail functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and system functions. This case is based on a typical building in UK with different plans of masonry wall and timber frame to make a comparison. The final results disclose that a timber frame wall has less embodied carbon emission than a similar masonry structure. 16% reduction was found in this study.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation in a small spatial region, such as phase interfaces, or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic e ects in a macroscopic domain. This method takes advantage that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the
Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes
International Nuclear Information System (INIS)
Hebert, Alain; Coste, Mireille
2002-01-01
As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented where the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to the Rowland's benchmark and to three assembly production cases, are also presented
Assessment of Calculation Procedures for Piles in Clay Based on Static Loading Tests
DEFF Research Database (Denmark)
Augustesen, Anders; Andersen, Lars
2008-01-01
Numerous methods are available for the prediction of the axial capacity of piles in clay. In this paper, two well-known models are considered, namely the current API-RP2A (1987 to present) and the recently developed ICP method. The latter was developed by Jardine and his co-workers at Imperial College in London. The calculation procedures are assessed based on an established database of static loading tests. To make a consistent evaluation of the design methods, corrections related to undrained shear strength and time between pile driving and testing have been employed. The study indicates that the interpretation of the field tests is of paramount importance, both with regard to the soil profile and the loading conditions. Based on analyses of 253 static pile loading tests distributed on 111 sites, API-RP2A provides the better description of the data. However, it should be emphasised that some input...
Quasiparticle properties of DNA bases from GW calculations in a Wannier basis
Qian, Xiaofeng; Marzari, Nicola; Umari, Paolo
2009-03-01
The quasiparticle GW-Wannier (GWW) approach [1] has been recently developed to overcome the size limitations of conventional planewave GW calculations. By taking advantage of the localization properties of the maximally-localized Wannier functions and choosing a small set of polarization basis functions, we reduce the number of Bloch wavefunction products required for the evaluation of dynamical polarizabilities, which in turn greatly reduces memory requirements and computational cost. We apply GWW to study quasiparticle properties of different DNA bases and base pairs, and solvation effects on the energy gap, demonstrating in the process the key advantages of this approach. [1] P. Umari, G. Stenuit, and S. Baroni, cond-mat/0811.1453
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
International Nuclear Information System (INIS)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.
2014-08-01
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of homogeneous water, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a frontier is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
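The Woodcock method mentioned above avoids stopping at voxel frontiers by sampling free paths against a majorant cross section and treating most candidate collisions as virtual. A 1-D sketch of the idea (illustrative coefficients; not CUBMC code):

```python
import math
import random

def woodcock_distance(mu, voxel_size, rng):
    """Sample the distance to a real interaction in a 1-D voxel phantom
    using Woodcock (delta) tracking.

    mu: attenuation coefficient per voxel (1/cm, toy values).
    Returns None if the photon escapes the phantom."""
    mu_max = max(mu)                       # majorant over all materials
    x = 0.0
    while True:
        # Free flight sampled against the majorant, ignoring voxel boundaries
        x += -math.log(1.0 - rng.random()) / mu_max
        i = int(x / voxel_size)
        if i >= len(mu):
            return None                    # left the phantom
        # Real collision with probability mu_local/mu_max; otherwise it is a
        # virtual (delta) collision and the flight simply continues
        if rng.random() < mu[i] / mu_max:
            return x
```

The trade-off is that in geometries with one very dense voxel the majorant forces many virtual collisions elsewhere, which is why the choice between frontier-stopping and Woodcock tracking is worth comparing, as the abstract does.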
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of homogeneous water, the second of bone, the third of lung, and the fourth of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a frontier is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
Previously unknown species of Aspergillus.
Gautier, M; Normand, A-C; Ranque, S
2016-08-01
The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species, frequently isolated from air samples, are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhancing the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and
Directory of Open Access Journals (Sweden)
Yefimenko A. A.
2016-05-01
connectors. We obtained an analytic dependence that can be used to find the Young's modulus from a known value of hardness on the Shore A scale. We gave examples of calculating the amount of compression of the elastomeric liner needed to provide reliable contact for specified values of the transition resistance, for removable and permanent connectors based on flexible printed cable.
Sample size calculation to externally validate scoring systems based on logistic regression models.
Directory of Open Access Journals (Sweden)
Antonio Palazón-Bru
Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
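The bootstrap step of such an algorithm can be sketched as follows; this shows only the ROC-AUC part on synthetic data, omitting the smoothed calibration curves and the estimated calibration index, and the function name is invented:

```python
import numpy as np

def bootstrap_auc(scores, events, n_boot=200, seed=0):
    """Mean and spread of the ROC AUC over bootstrap resamples.

    scores: predicted scores from a scoring system; events: 0/1 outcomes.
    A rough sketch of the resampling machinery described in the abstract."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # one bootstrap sample
        s, y = scores[idx], events[idx]
        pos, neg = s[y == 1], s[y == 0]
        if len(pos) == 0 or len(neg) == 0:
            continue                           # degenerate resample, skip
        # AUC = P(random event outranks random non-event), ties count half
        gt = (pos[:, None] > neg[None, :]).mean()
        eq = (pos[:, None] == neg[None, :]).mean()
        aucs.append(gt + 0.5 * eq)
    return float(np.mean(aucs)), float(np.std(aucs))
```

In the full algorithm, candidate sample sizes would be screened until both discrimination and the calibration index stabilize.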
International Nuclear Information System (INIS)
Kirisits, Christian; Wexberg, Paul; Gottsauner-Wolf, Michael; Pokrajac, Boris; Ortmann, Elisabeth; Aiginger, Hannes; Glogar, Dietmar; Poetter, Richard
2001-01-01
Background and purpose: Radioactive stents are under investigation for reduction of coronary restenosis. However, the actual dose delivered to specific parts of the coronary artery wall based on the individual vessel anatomy has not been determined so far. Dose-volume histograms (DVHs) permit an estimation of the actual dose absorbed by the target volume. We present a method to calculate DVHs based on intravascular ultrasound (IVUS) measurements to determine the dose distribution within the vessel wall. Materials and methods: Ten patients were studied by intravascular ultrasound after radioactive stenting (BX Stent, P-32, 15-mm length) to obtain tomographic cross-sections of the treated segments. We developed a computer algorithm using the actual dose distribution of the stent to calculate differential and cumulative DVHs. The minimal target dose, the mean target dose, the minimal doses delivered to 10 and 90% of the adventitia (DV10, DV90), and the percentage of volume receiving a reference dose at 0.5 mm from the stent surface cumulated over 28 days were derived from the DVH plots. Results were expressed as mean±SD. Results: The mean activity of the stents was 438±140 kBq at implantation. The mean reference dose was 111±35 Gy, whereas the calculated mean target dose within the adventitia along the stent was 68±20 Gy. On average, DV90 and DV10 were 33±9 Gy and 117±41 Gy, respectively. Expanding the target volume to include 2.5-mm-long segments at the proximal and distal ends of the stent, the calculated mean target dose decreased to 55±17 Gy, and DV90 and DV10 were 6.4±2.4 Gy and 107±36 Gy, respectively. Conclusions: The assessment of DVHs seems in principle to be a valuable tool for both prospective and retrospective analysis of dose distribution of radioactive stents. It may provide the basis to adapt treatment planning in coronary brachytherapy to the common standards of radiotherapy
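In general terms, a cumulative DVH and quantities such as DV90 are computed from per-voxel doses and volumes as below (toy numbers, not the stent dosimetry above):

```python
import numpy as np

def cumulative_dvh(dose, volumes, dose_bins):
    """Cumulative DVH: fraction of the target volume receiving at least
    each dose level. dose and volumes are per-voxel arrays (toy data)."""
    total = volumes.sum()
    return np.array([volumes[dose >= d].sum() / total for d in dose_bins])

def dv_quantile(dose, volumes, fraction):
    """Minimal dose delivered to the hottest `fraction` of the volume
    (fraction=0.9 gives DV90, fraction=0.1 gives DV10)."""
    order = np.argsort(dose)[::-1]                 # voxels, hottest first
    cum = np.cumsum(volumes[order]) / volumes.sum()
    return float(dose[order][np.searchsorted(cum, fraction)])
```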
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
International Nuclear Information System (INIS)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was
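PENELOPE's random hinge method, adopted in goMC, condenses many small elastic deflections of a condensed-history electron step into a single deflection applied at a point sampled uniformly along the step. A 2-D toy sketch of one such step (not goMC code; the deflection angle theta is assumed to be supplied by the multiple-scattering model):

```python
import math
import random

def random_hinge_step(pos, direction, step, theta, rng):
    """One condensed-history step with a random hinge (2-D toy).

    Travel straight to a hinge point sampled uniformly along the step,
    rotate the direction by the accumulated deflection angle theta there,
    then complete the remainder of the step in the new direction."""
    tau = rng.random() * step                 # hinge position along the step
    x, y = pos
    c, s = direction
    x, y = x + tau * c, y + tau * s           # straight flight to the hinge
    ang = math.atan2(s, c) + theta            # apply the deflection
    c, s = math.cos(ang), math.sin(ang)
    x, y = x + (step - tau) * c, y + (step - tau) * s
    return (x, y), (c, s)
```

Note that the total path length is preserved exactly; only where along the step the deflection happens is randomized, which is what makes the scheme unbiased on average.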
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.
Martinez-Rovira, I; Sempau, J; Prezado, Y
2012-05-01
Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeters in volume) from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeters in volume) from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Neutron spectra calculation and doses in a subcritical nuclear reactor based on thorium
International Nuclear Information System (INIS)
Medina C, D.; Hernandez A, P. L.; Hernandez D, V. M.; Vega C, H. R.; Sajo B, L.
2015-10-01
This paper describes a heterogeneous subcritical nuclear reactor with molten salts based on thorium, with graphite moderator and a 252 Cf source, whose dose levels at the periphery allow its use in teaching and research activities. The design was done by the Monte Carlo method with the code MCNP5, where the geometry, dimensions and fuel were varied in order to obtain the best design. The result is a cubic reactor, 110 cm on a side, with graphite moderator and reflector. The central part has 9 ducts placed along the Y axis. The central duct contains the 252 Cf source; of the 8 other ducts, two are irradiation ducts and the other six contain a molten salt (7LiF-BeF2-ThF4-UF4) as fuel. For the design, the k eff, neutron spectra and ambient dose equivalent were calculated. In the first instance, the calculation for virgin fuel was called case 1; then a percentage of 233 U was used and the percentage of Th was decreased, which was called case 2, with the purpose of comparing two different fuels working inside the reactor. A k eff value of 0.13 was obtained for case 1 and of 0.28 for case 2, maintaining subcriticality in both cases. For the dose levels, the highest value occurs in case 2 along the Y axis, with a value of 3.31e-3 ±1.6% pSv/Q. With this we can calculate the exposure time of personnel working in the reactor. (Author)
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2016-01-01
Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. Traditional cost system (TCS) causes cost distortions in hospital. Activity-based costing (ABC) method is a new and more effective cost system. Objective: This study aimed to compare ABC with TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided among cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with the mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by the ABC method. The ABC method represented more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2015-05-17
Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. Traditional cost system (TCS) causes cost distortions in hospital. Activity-based costing (ABC) method is a new and more effective cost system. This study aimed to compare ABC with TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided among cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with the mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by the ABC method. The ABC method represented more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
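The two-stage assignment described in these abstracts, cost centers to activities via cost factors and activities to cost objects via cost drivers, can be sketched generically (all center, activity, and object names and figures below are invented for illustration):

```python
def abc_unit_costs(center_costs, cost_factors, cost_drivers):
    """Two-stage activity-based costing sketch.

    Stage 1: assign each cost center's total to activities via cost factors.
    Stage 2: assign each activity's cost to cost objects via cost drivers.
    cost_factors[center][activity] and cost_drivers[activity][obj] are
    fractional shares summing to 1 for each center / activity."""
    activity_cost = {}
    for center, total in center_costs.items():
        for activity, share in cost_factors[center].items():
            activity_cost[activity] = activity_cost.get(activity, 0.0) + total * share
    object_cost = {}
    for activity, cost in activity_cost.items():
        for obj, share in cost_drivers[activity].items():
            object_cost[obj] = object_cost.get(obj, 0.0) + cost * share
    return object_cost
```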
International Nuclear Information System (INIS)
Montalvao, Rinaldo W.; De Simone, Alfonso; Vendruscolo, Michele
2012-01-01
Residual dipolar couplings (RDCs) have the potential of providing detailed information about the conformational fluctuations of proteins. It is very challenging, however, to extract such information because of the complex relationship between RDCs and protein structures. A promising approach to decode this relationship involves structure-based calculations of the alignment tensors of protein conformations. By implementing this strategy to generate structural restraints in molecular dynamics simulations we show that it is possible to extract effectively the information provided by RDCs about the conformational fluctuations in the native states of proteins. The approach that we present can be used in a wide range of alignment media, including Pf1, charged bicelles and gels. The accuracy of the method is demonstrated by the analysis of the Q factors for RDCs not used as restraints in the calculations, which are significantly lower than those corresponding to existing high-resolution structures and structural ensembles, hence showing that we capture effectively the contributions to RDCs from conformational fluctuations.
Dai, Mengyan; Liu, Jianghai; Cui, Jianlin; Chen, Chunsheng; Jia, Peng
2017-10-01
In order to solve the problem of the quantitative measurement of the spectrum and color of aerosols, a measurement method for aerosol spectra based on the human visual system was proposed. The spectrum characteristics and color parameters of three different aerosols were tested, and the color differences were calculated according to the CIE1976-L*a*b* color difference formula. Three tested powders (No. 1#, No. 2# and No. 3#) were dispersed in a plexiglass box and turned into aerosol. The powder sample was released by an injector with a different dosage in each experiment. The spectrum and color of the aerosol were measured by the PRO 6500 Fiber Optic Spectrometer. The experimental results showed that the extinction performance of the aerosol became stronger with increasing aerosol concentration. Because the chromaticity value differences of the aerosols in the experiment were small, luminance was verified to be the main factor influencing human visual perception, contributing the most among the three factors of the color difference calculation. The extinction effect of the No. 3# aerosol was the strongest of all and caused the biggest change of luminance and color difference, which would arouse the strongest human visual perception. According to the sensation level of chromatic color for Chinese observers, a recognizable color difference would be produced when the dosage of No. 1# powder was more than 0.10 gram, the dosage of No. 2# powder was more than 0.15 gram, or the dosage of No. 3# powder was more than 0.05 gram.
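The CIE1976 color difference used above is simply the Euclidean distance between two points in L*a*b* space, which is why a large luminance (L*) change dominates the result when the a* and b* differences are small:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE1976 color difference between two (L*, a*, b*) triples:
    Delta-E*ab = sqrt(dL*^2 + da*^2 + db*^2)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

A ΔE*ab of a few units is commonly taken as clearly noticeable for side-by-side patches, consistent with luminance driving the perceived differences reported above.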
Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang
2017-08-01
Nowadays, people are paying more and more attention to the noise reduction of household refrigerator compressors. This paper established a sound field bounded by the compressor shell and ISO 3744 standard field points. The Acoustic Transfer Vectors (ATV) in the sound field radiated by a refrigerator compressor shell were calculated, which agree well with the test results. The compressor shell surface was then divided into several parts. Based on the Acoustic Transfer Vector approach, the sound pressure contribution to the field points and the sound power contribution to the sound field of each part were calculated. To obtain the noise radiation in the sound field, the sound pressure cloud charts were analyzed, and the contribution curves of each part at different frequencies were acquired. Meanwhile, the sound power contribution of each part at different frequencies was analyzed, to identify the parts that contribute the most sound power. Through the analysis of acoustic contributions, the parts of the compressor shell that radiate the most noise were determined. This paper provides a credible and effective approach to the optimal structural design of refrigerator compressor shells, which is meaningful for noise and vibration reduction.
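At a single frequency, the ATV approach makes the contribution analysis a dot product: the pressure at a field point is the sum of per-panel terms ATV_i times v_i, so each term is one panel's contribution. A sketch with toy complex amplitudes (not the compressor model):

```python
import numpy as np

def panel_contributions(atv, v):
    """Per-panel pressure contributions at one field point and frequency.

    atv: acoustic transfer vector (complex, one entry per shell panel);
    v: panel normal-velocity amplitudes (complex). Returns the individual
    contributions p_i = ATV_i * v_i and the total pressure p = sum_i p_i."""
    contrib = atv * v
    return contrib, contrib.sum()
```

Ranking the magnitudes of the contributions identifies the panels worth stiffening or damping, which is the design use described in the abstract.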
Extension of the COSYMA-ECONOMICS module - cost calculations based on different economic sectors
International Nuclear Information System (INIS)
Faude, D.
1994-12-01
The COSYMA program system for evaluating the off-site consequences of accidental releases of radioactive material to the atmosphere includes an ECONOMICS module for assessing economic consequences. The aim of this module is to convert the various consequences of an accident (radiation-induced health effects and impacts resulting from countermeasures) into the common framework of economic costs; this allows different effects to be expressed in the same terms and thus makes them comparable. With respect to the countermeasure 'movement of people', the dominant cost categories are 'loss-of-income costs' and 'costs of lost capital services'. In the original version of the ECONOMICS module these costs are calculated on the basis of the total number of people moved. In order to also take into account regional or local economic peculiarities of a nuclear site, the ECONOMICS module has been extended: calculation of the above-mentioned cost categories is now based on the number of employees in the different economic sectors in the affected area. This extension of the COSYMA ECONOMICS module is described in more detail. (orig.)
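The extension replaces a flat per-person cost estimate with a sum over economic sectors. A toy sketch of that idea; the sector names, employee counts, and cost rates are invented for illustration and are not COSYMA data:

```python
def movement_costs(sectors, duration_years):
    # loss-of-income plus lost-capital-services costs, summed over the
    # economic sectors in the affected area instead of using a flat
    # per-person figure (illustrative bookkeeping only)
    total = 0.0
    for employees, income_per_year, capital_services_per_year in sectors.values():
        total += employees * (income_per_year + capital_services_per_year) * duration_years
    return total

# invented example data: (employees, annual income, annual capital services)
sectors = {
    "agriculture": (1200, 25_000.0, 10_000.0),
    "manufacturing": (4500, 35_000.0, 20_000.0),
    "services": (8000, 30_000.0, 8_000.0),
}
print(movement_costs(sectors, duration_years=0.5))  # → 296750000.0
```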
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
International Nuclear Information System (INIS)
Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan
2014-01-01
This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (MW), the rupture duration (To) and the focal mechanism. These determine whether an event is a tsunamigenic earthquake or a tsunami earthquake. We process teleseismic wave signals starting from the initial P-wave phase with a bandpass filter of 0.001 Hz to 5 Hz, using 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake (MW=7.8) and the 17 July 2006 Pangandaran earthquake (MW=7.7) meet the criteria for tsunami earthquakes, with ratios of about Θ=−6.1, long rupture durations To>100 s and high tsunamis H>7 m. The 2 September 2009 Tasikmalaya earthquake (MW=7.2, Θ=−5.1, To=27 s) is characterized as a small tsunamigenic earthquake.
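The quoted Θ values are negative logarithms, so the "ratio between E and Mo" is assumed here to be the logarithmic energy-to-moment ratio of Newman and Okal; under that assumption the numbers can be reproduced as follows (the energy value is back-computed purely for illustration):

```python
import math

def theta(E, M0):
    # slowness parameter Theta = log10(E / M0) (Newman-Okal definition;
    # assumed to be what the abstract means by the E-to-Mo "ratio")
    return math.log10(E / M0)

def moment_magnitude(M0):
    # standard Hanks-Kanamori relation, M0 in N*m
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

# illustrative values: M0 for a Mw 7.8 event, E chosen so Theta is about -6.1
M0 = 10 ** (1.5 * 7.8 + 9.1)
E = M0 * 10 ** (-6.1)
print(round(theta(E, M0), 1), round(moment_magnitude(M0), 1))  # → -6.1 7.8
```

Tsunami earthquakes are flagged by anomalously low Θ (slow rupture) relative to ordinary tsunamigenic events.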
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View, a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
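The client-side computation reduces to summing the deposited Gaussian hills. A one-dimensional sketch with hypothetical hill parameters (well-tempered rescaling omitted):

```python
import math

def bias_potential(x, hills):
    # sum of deposited Gaussians; each hill is (center, sigma, height),
    # as would be read from a single-CV Plumed HILLS file
    return sum(w * math.exp(-((x - c) ** 2) / (2.0 * s ** 2)) for c, s, w in hills)

# two hypothetical hills on a one-dimensional collective variable
hills = [(0.0, 0.1, 1.2), (0.3, 0.1, 1.2)]
print(round(bias_potential(0.0, hills), 3))  # → 1.213
```

The free energy surface is then estimated as F(x) ≈ −V(x) up to an additive constant, which is what the viewer plots in the browser.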
Implementation and validation of an implant-based coordinate system for RSA migration calculation.
Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J
2009-10-16
An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of implementing two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component as an implant marker and (ii) defining an implant-based coordinate system, constructed from implant landmarks, for the calculation of migration results. The motivations for these two modifications were (i) to improve the representation of the implant by its markers, by including the stem tip marker, which increases the marker distribution; (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene; and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods, intended to improve the interpretability, relevance, and standardization of the results, do not compromise precision and accuracy, and are suitable for application to clinical data.
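Constructing an implant-based coordinate system from landmarks amounts to building an orthonormal basis and re-expressing migration vectors in it. A sketch with made-up landmark positions, not the paper's actual tibial-component geometry:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def implant_basis(origin, along_x, in_plane):
    # orthonormal basis from three implant landmarks: x along the first edge,
    # z normal to the landmark plane, y completing the right-handed set
    x = unit(sub(along_x, origin))
    z = unit(cross(x, sub(in_plane, origin)))
    y = cross(z, x)
    return [x, y, z]

# express a migration vector measured in the global RSA frame in implant axes,
# removing the dependence on how the patient sat in the calibration cage
R = implant_basis([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
migration_global = [0.05, 0.02, 0.00]   # mm
print([round(dot(axis, migration_global), 3) for axis in R])  # → [0.05, 0.02, 0.0]
```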
Microcontroller-based network for meteorological sensing and weather forecast calculations
Directory of Open Access Journals (Sweden)
A. Vas
2012-06-01
Full Text Available Weather forecasting needs a lot of computing power. It is generally accomplished using supercomputers, which are expensive to rent and to maintain. In addition, weather services also have to maintain radars and balloons, and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters; to do that, the computation at each point uses the data of its close and distant neighbor points. Accordingly, if small-sized weather stations, which are capable of making measurements, calculations and communication, are connected through the Internet, then they can be used to run weather forecasting calculations like a supercomputer does. No central server is needed to achieve this, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.
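The neighbor-point computation described above is, in its simplest form, a finite-difference stencil. A toy one-dimensional diffusion step (illustrative only; real forecast equations are far richer):

```python
def diffusion_step(values, alpha=0.1):
    # one explicit finite-difference step of a 1-D diffusion equation:
    # each station updates its value from its immediate neighbors -- the
    # kind of stencil computation that distributes naturally over
    # networked stations
    new = values[:]
    for i in range(1, len(values) - 1):
        new[i] = values[i] + alpha * (values[i-1] - 2*values[i] + values[i+1])
    return new

print([round(v, 3) for v in diffusion_step([10.0, 12.0, 20.0, 12.0, 10.0])])
# → [10.0, 12.6, 18.4, 12.6, 10.0]
```

Each node needs only its neighbors' values per step, which is why the workload can be spread across many small μC-class stations.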
National Research Council Canada - National Science Library
D'Mello, Tiffany A; Yamane, Grover K
2007-01-01
.... Until recently, gender-specific weight standards based on height were in place. However, in June 2006 the USAF implemented a new set of height-weight limits utilizing body mass index (BMI) criteria...
SU-C-204-03: DFT Calculations of the Stability of DOTA-Based-Radiopharmaceuticals
Energy Technology Data Exchange (ETDEWEB)
Khabibullin, A.R.; Woods, L.M. [University of South Florida, Tampa, Florida (United States); Karolak, A.; Budzevich, M.M.; Martinez, M.V. [H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States); McLaughlin, M.L.; Morse, D.L. [University of South Florida, Tampa, Florida (United States); H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States)
2016-06-15
Purpose: Application of density functional theory (DFT) to investigate the structural stability of complexes applied in cancer therapy, consisting of 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelated to Ac225, Fr221, At217, Bi213, and Gd68 radio-nuclei. Methods: The possibility of delivering a toxic payload directly to tumor cells is a highly desirable aim in targeted alpha-particle therapy. The estimation of bond stability between radioactive atoms and the DOTA chelating agent is the key element in understanding the foundations of this delivery process. Thus, we adapted the Vienna Ab-initio Simulation Package (VASP) with the projector-augmented wave method and a plane-wave basis set in order to study the stability and electronic properties of the DOTA ligand chelated to radioactive isotopes. In order to account for relativistic effects of the radioactive isotopes we included spin-orbit coupling (SOC) in the DFT calculations. Five DOTA complex structures were represented as unit cells, each containing 58 atoms. Energy optimization was performed for all structures prior to the calculation of electronic properties. Binding energies, electron localization functions and bond lengths between atoms were estimated. Results: The calculated binding energies for the DOTA-radioactive atom systems were −17.792, −5.784, −8.872, −13.305, and −18.467 eV for the Ac, Fr, At, Bi and Gd complexes, respectively. The displacements of the isotopes in the DOTA cages were estimated from the variations in bond lengths, which were within 2.32–3.75 angstroms. A detailed representation of the chemical bonding in all complexes was obtained with the electron localization function (ELF). Conclusion: DOTA-Gd, DOTA-Ac and DOTA-Bi were the most stable structures in the group. Inclusion of SOC had a significant role in improving the accuracy of the DFT calculations for heavy radioactive atoms. Our approach is found to be proper for the investigation of structures with DOTA-based
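The binding energies quoted above are differences of DFT total energies; the bookkeeping is just subtraction. The total energies below are invented placeholders, not VASP outputs:

```python
def binding_energy(e_complex, e_ligand, e_ion):
    # E_bind = E(DOTA-X) - E(DOTA) - E(X); more negative means more
    # strongly bound (placeholder arithmetic, not actual DFT results)
    return e_complex - e_ligand - e_ion

# placeholder total energies in eV, chosen only to illustrate the sign convention
print(round(binding_energy(-512.3, -480.1, -13.9), 3))  # → -18.3
```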
Directory of Open Access Journals (Sweden)
Chang Wook Jeong
Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, AUC was significantly higher for the SNUPC-RC (0.811) than for ERSPC-RC (0.768, p<0.001) and PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
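The final prediction model is a logistic regression over the listed predictors. A sketch of the functional form only; the coefficients below are invented placeholders, not the published SNUPC-RC model:

```python
import math

def pc_risk(age, psa, prostate_volume, abnormal_dre_or_trus):
    # logistic model: probability = 1 / (1 + exp(-z)); predictors follow
    # the abstract, but these coefficients are made up for illustration
    z = (-6.0 + 0.03 * age + 0.10 * psa
         - 0.02 * prostate_volume + 0.9 * abnormal_dre_or_trus)
    return 1.0 / (1.0 + math.exp(-z))

print(round(pc_risk(65, 6.5, 40, 1), 3))  # → 0.036
```

The reported AUCs measure how well such a model ranks biopsy-positive men; decision curve analysis then weighs the net clinical benefit of acting on its threshold.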
Development of 3-D FBR heterogeneous core calculation method based on characteristics method
International Nuclear Information System (INIS)
Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro
2002-01-01
A new 3-D transport calculation method that takes into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method with the nodal transport method. The nodal transport method is applied in the axial direction, and the characteristics method is applied to capture the radial heterogeneity of the fuel assemblies. Numerical calculations have been performed for verification: 2-D radial calculations of FBR assemblies and partial core calculations. The results agree well with reference Monte Carlo calculations. It is shown that the present method has an advantage in calculating reaction rates in small regions.
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a
Relativistic many-body perturbation-theory calculations based on Dirac-Fock-Breit wave functions
International Nuclear Information System (INIS)
Ishikawa, Y.; Quiney, H.M.
1993-01-01
A relativistic many-body perturbation theory based on the Dirac-Fock-Breit wave functions has been developed and implemented by employing analytic basis sets of Gaussian-type functions. The instantaneous Coulomb and low-frequency Breit interactions are treated using a unified formalism in both the construction of the Dirac-Fock-Breit self-consistent-field atomic potential and in the evaluation of many-body perturbation-theory diagrams. The relativistic many-body perturbation-theory calculations have been performed on the helium atom and ions of the helium isoelectronic sequence up to Z=50. The contribution of the low-frequency Breit interaction to the relativistic correlation energy is examined for the helium isoelectronic sequence
Status of CINDER and ENDF/B-V based libraries for transmutation calculations
International Nuclear Information System (INIS)
Wilson, W.B.; England, T.R.; LaBauve, R.J.; Battat, M.E.; Wessol, D.E.; Perry, R.T.
1980-01-01
The CINDER codes and their data libraries are described, and their range of calculational capabilities is illustrated using documented applications. The importance of ENDF/B data and the features of the ENDF/B-IV and ENDF/B-V fission-product and actinide data files are emphasized. The actinide decay data of ENDF/B-V, augmented by additional data from available sources, are used to produce average decay energy values and neutron source values from spontaneous fission, (α,n) reactions and delayed neutron emission for 144 actinide nuclides that are formed in reactor fuel. The status and characteristics of the CINDER-2 code are described, along with a brief description of the better-known code versions; a review of the status of the new ENDF/B-V based libraries for all versions is presented.
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at a low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
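Because a chaotic spreading sequence has non-constant bit energy, the BER is an average of the Gaussian Q-function over the bit-energy distribution. A simplified single-user AWGN sketch of that averaging; the paper's full expression additionally involves the multipath and multiuser interference terms:

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_chaos_bpsk(bit_energies, noise_psd):
    # BER averaged over the chaotic bit-energy distribution:
    # E[ Q( sqrt(2*Eb/N0) ) ] -- single-user AWGN form only
    return sum(q_func(math.sqrt(2.0 * e / noise_psd))
               for e in bit_energies) / len(bit_energies)

# hypothetical per-bit energies of a chaotic spreading sequence
energies = [0.8, 1.0, 1.2]
print(ber_chaos_bpsk(energies, noise_psd=0.5))
```

Averaging over the energy distribution, instead of assuming a constant Eb as in classical DS-CDMA, is what the accuracy of the chaos-based analysis hinges on.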
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. The absolute directional information provided by the Earth's magnetic field is therefore of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly discuss the progress achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API
A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks
Directory of Open Access Journals (Sweden)
Rui Fan
2017-12-01
Full Text Available Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load-imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
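For reference, the serial CPU baseline that such GPU implementations parallelize is Brandes' algorithm, with Dijkstra handling the weighted shortest-path stage. A compact sketch:

```python
import heapq
from collections import defaultdict

def betweenness(graph):
    # Brandes' algorithm for weighted graphs: one Dijkstra sweep per source,
    # then reverse-order dependency accumulation.
    # graph: {u: {v: weight}}; directed counting (halve for undirected).
    bc = defaultdict(float)
    for s in graph:
        dist = {v: float("inf") for v in graph}
        sigma = defaultdict(float)        # shortest-path counts
        preds = defaultdict(list)         # shortest-path predecessors
        dist[s], sigma[s] = 0.0, 1.0
        order, pq = [], [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:               # stale queue entry
                continue
            order.append(u)
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist[v]:
                    dist[v], sigma[v], preds[v] = nd, sigma[u], [u]
                    heapq.heappush(pq, (nd, v))
                elif nd == dist[v]:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        delta = defaultdict(float)
        for u in reversed(order):         # accumulate dependencies
            for p in preds[u]:
                delta[p] += sigma[p] / sigma[u] * (1.0 + delta[u])
            if u != s:
                bc[u] += delta[u]
    return dict(bc)

# simple weighted path a-b-c (undirected, so edges are entered both ways)
g = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0}, "c": {"b": 1.0}}
print(betweenness(g)["b"])  # → 2.0 (b lies on a->c and c->a)
```

The GPU approach in the paper parallelizes both loops of this structure: many frontier vertices are relaxed simultaneously, and warps of threads share the work of a single high-degree frontier vertex.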
A cultural study of a science classroom and graphing calculator-based technology
Casey, Dennis Alan
Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithms optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, for its global optimization and extrapolation ability.
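The classical baseline the MLAs replace is the two-parameter Paris power law. A sketch with illustrative constants (C and m below are not fitted to any of the paper's datasets):

```python
import math

def paris_rate(delta_K, C=1e-11, m=3.0):
    # two-parameter Paris law: da/dN = C * (dK)^m
    # C, m are illustrative values only
    return C * delta_K ** m

# crude cycle-by-cycle growth of a crack from 1 mm to 2 mm under a
# constant stress range, with dK = d_sigma * sqrt(pi * a)
a, d_sigma, cycles = 0.001, 200.0, 0   # m, MPa, cycle count
while a < 0.002:
    dK = d_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
    a += paris_rate(dK)
    cycles += 1
print(cycles)
```

The MLAs in the paper effectively replace this fixed power law with a learned mapping from (ΔK, stress ratio) to da/dN, which is how they capture the non-linearities and stress-ratio effects a single (C, m) pair cannot.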
Base data for looking-up tables of calculation errors in JACS code system
International Nuclear Information System (INIS)
Murazaki, Minoru; Okuno, Hiroshi
1999-03-01
This report clarifies the base data for the looking-up tables of calculation errors cited in the 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made with the JACS code system, and are of two kinds: one for fuel systems in general geometry with a reflector, and another for fuel systems in simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration (homogeneous or heterogeneous) and fuel kind (uranium, plutonium and their mixtures, etc.). The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987; however, the data in the group named homogeneous low-enriched uranium were further selected out later by the working group preparing the Nuclear Criticality Safety Handbook, and this report includes that selection. Results from the OECD/NEA project organized for the evaluation of criticality safety benchmark experiments are also described. (author)
GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms
International Nuclear Information System (INIS)
Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick
2014-01-01
The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when advanced dose calculation algorithms that take into account electron transport (type B algorithms) are used. As type A algorithms do not model secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has been reached regarding dose prescription, and the positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment was performed, comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned on three different treatment platforms. For each individual case, 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When a type A algorithm was used to prescribe the same dose to the PTV, the differences in median GTV dose among platforms and cases were always less than 10% of the prescription dose. Prescribing to the PTV with a type B algorithm led to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the observed variability was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the
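Operationally, the proposed prescription amounts to rescaling a plan so that the median GTV dose hits the target. A sketch of that normalization; the voxel doses below are invented:

```python
def renormalize_to_median_gtv(all_doses, gtv_doses, target_median):
    # scale every voxel dose so the median GTV dose equals the target
    # (e.g. 54 Gy when prescribing with a type B algorithm)
    s = sorted(gtv_doses)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    scale = target_median / median
    return [d * scale for d in all_doses]

# invented voxel doses (Gy) from a type B recalculation
gtv = [50.0, 51.0, 52.0, 53.0, 54.0]
plan = gtv + [45.0, 30.0]          # GTV voxels plus some surrounding tissue
rescaled = renormalize_to_median_gtv(plan, gtv, 54.0)
print(round(rescaled[2], 2))  # → 54.0 (the median GTV voxel, scaled by 54/52)
```

Because the scaling anchor is inside the GTV rather than the PTV, the resulting median GTV dose is by construction identical across cases and platforms.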
Rutigliano, Grazia; Stahl, Daniel; Davies, Cathy; Bonoldi, Ilaria; Reilly, Thomas; McGuire, Philip
2017-01-01
Importance The overall effect of At Risk Mental State (ARMS) services for the detection of individuals who will develop psychosis in secondary mental health care is undetermined. Objective To measure the proportion of individuals with a first episode of psychosis detected by ARMS services in secondary mental health services, and to develop and externally validate a practical web-based individualized risk calculator tool for the transdiagnostic prediction of psychosis in secondary mental health care. Design, Setting, and Participants Clinical register-based cohort study. Patients were drawn from electronic, real-world, real-time clinical records relating to 2008 to 2015 routine secondary mental health care in the South London and the Maudsley National Health Service Foundation Trust. The study included all patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within the South London and the Maudsley National Health Service Foundation Trust in the period between January 1, 2008, and December 31, 2015. Data analysis began on September 1, 2016. Main Outcomes and Measures Risk of development of nonorganic International Statistical Classification of Diseases and Related Health Problems, Tenth Revision psychotic disorders. Results A total of 91 199 patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within South London and the Maudsley National Health Service Foundation Trust were included in the derivation (n = 33 820) or external validation (n = 54 716) data sets. The mean age was 32.97 years, 50.88% were men, and 61.05% were white race/ethnicity. The mean follow-up was 1588 days. The overall 6-year risk of psychosis in secondary mental health care was 3.02 (95% CI, 2.88-3.15), which is higher than the 6-year risk in the local general population (0.62). Compared with the ARMS designation, all of the International Statistical Classification of Diseases and Related Health Problems
Lim, Lucy; Thompson, Alexander; Patterson, Scott; George, Jacob; Strasser, Simone; Lee, Alice; Sievert, William; Nicoll, Amanda; Desmond, Paul; Roberts, Stuart; Marion, Kaye; Bowden, Scott; Locarnini, Stephen; Angus, Peter
2017-06-01
Multidrug-resistant HBV continues to be an important clinical problem. The TDF-109 study demonstrated that TDF±LAM is an effective salvage therapy through 96 weeks for LAM-resistant patients who previously failed ADV add-on or switch therapy. We evaluated the 5-year efficacy and safety outcomes in patients receiving long-term TDF±LAM in the TDF-109 study. A total of 59 patients completed the first phase of the TDF-109 study, and 54/59 were rolled over into a long-term prospective open-label study of TDF±LAM 300 mg daily. Results are reported at the end of year 5 of treatment. At year 5, 75% (45/59) had achieved viral suppression by intent-to-treat analysis. Per-protocol assessment revealed 83% (45/54) were HBV DNA undetectable. Nine patients remained HBV DNA detectable; however, 8/9 had very low HBV DNA levels (<264 IU/mL) and did not meet criteria for virological breakthrough (VBT). One patient experienced VBT, but this was in the setting of documented non-compliance. The response was independent of baseline LAM therapy or mutations conferring ADV resistance. Four patients discontinued TDF, one patient was lost to follow-up, and one died from hepatocellular carcinoma. Long-term TDF treatment appears to be safe and effective in patients with prior failure of LAM and a suboptimal response to ADV therapy. These findings confirm that TDF has a high genetic barrier to resistance, is active against multidrug-resistant HBV, and should be the preferred oral anti-HBV agent in CHB patients who fail treatment with LAM and ADV. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Directory of Open Access Journals (Sweden)
Ingo Gräff
Full Text Available To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Patients classified to the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, for orange MTS category patients (n = 118), nursing staff were required for 85.07 min; for patients in the yellow MTS category (n = 181), 40.95 min; while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. When extrapolating this to 21,899 (2010) emergency patients, 67-123 emergency patients (50th-95th percentile) per month can be seen by one nurse. The calculated full-time staffing requirement, depending on the percentiles, was 14.8 to 27.1. Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments.
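As a rough illustration of the staffing arithmetic reported above, the following sketch recomputes a full-time-equivalent (FTE) estimate from the published engagement times, case mix, and shortfall rate. The net annual working time per nurse (1600 h) is an assumed figure, not from the study, so the result only approximates the lower end of the study's percentile-based range of 14.8 to 27.1.

```python
# Average engagement minutes per patient and case counts, per MTS category
mts_minutes = {"red": 97.93, "orange": 85.07, "yellow": 40.95,
               "green": 23.18, "blue": 14.99}
mts_counts = {"red": 35, "orange": 118, "yellow": 181,
              "green": 129, "blue": 40}

total_patients = sum(mts_counts.values())  # 503 in the study sample
# Weighted average nursing engagement time per patient (minutes)
avg_minutes = sum(mts_minutes[c] * mts_counts[c] for c in mts_counts) / total_patients

annual_patients = 21899          # 2010 patient volume from the study
shortfall = 0.2087               # sick leave + vacation fraction from the study
net_minutes_per_fte = 1600 * 60  # assumed net working time per nurse (hypothetical)

# Gross workload: net care minutes inflated by the shortfall rate
workload_minutes = annual_patients * avg_minutes / (1 - shortfall)
fte_required = workload_minutes / net_minutes_per_fte
print(round(avg_minutes, 2), round(fte_required, 1))
```

With these assumptions the estimate lands near the study's 50th-percentile figure; the published upper bound (27.1) reflects sizing for the 95th percentile of workload fluctuation, which this simple average does not capture.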
International Nuclear Information System (INIS)
Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang
2014-01-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was precomputed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights so as to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the maximum dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
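The commissioning idea (dose as a weighted sum of precomputed PSL doses, with weights fitted to the measured dose under a smoothness regularization) can be sketched as a plain Tikhonov-regularized least-squares problem. The paper additionally uses symmetry terms and an augmented Lagrangian solver; all numbers below are synthetic, not clinical data.

```python
import numpy as np

np.random.seed(0)
n_psl, n_pts = 8, 50
D = np.random.rand(n_pts, n_psl)        # precomputed dose in water per PSL, per point
w_true = np.linspace(0.5, 1.5, n_psl)   # "true" beam weights (hypothetical)
measured = D @ w_true                   # stand-in for the measured dose

# Smoothness regularization: penalize differences between neighboring PSL weights
L = np.diff(np.eye(n_psl), axis=0)      # (n_psl-1) x n_psl first-difference operator
lam = 1e-3
A = np.vstack([D, np.sqrt(lam) * L])
b = np.concatenate([measured, np.zeros(n_psl - 1)])
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)

dose_fit = D @ w_fit                    # commissioned beam dose = weighted PSL sum
print(np.max(np.abs(dose_fit - measured)))
```

With many more measurement points than weights and a small regularization parameter, the fitted weights recover the underlying beam closely; the regularization matters when measurements are noisy or sparse.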
Actinide-lanthanide separation by bipyridyl-based ligands. DFT calculations and experimental results
International Nuclear Information System (INIS)
Borisova, Nataliya E.; Eroshkina, Elizaveta A.; Korotkov, Leonid A.; Ustynyuk, Yuri A.; Alyapyshev, Mikhail Yu.; Eliseev, Ivan I.; Babain, Vasily A.
2011-01-01
In order to gain insights into the effect of substituents on the selectivity of Am/Eu separation, synthesis and extraction tests were undertaken on a series of bipyridyl-based ligands (amides of 2,2'-bipyridyl-6,6'-dicarboxylic acid: L^Ph, the N,N'-diethyl-N,N'-diphenyl amide; L^Bu2, the tetrabutyl amide; L^Oct2, the tetraoctyl amide; L^3FPh, the N,N'-diethyl-N,N'-bis-(3-fluorophenyl) amide; as well as the N,N'-diethyl-N,N'-diphenyl amides of 4,4'-dibromo- and 4,4'-dinitro-2,2'-bipyridyl-6,6'-dicarboxylic acid), and the structure and stability of their complexes with lanthanides and actinides were studied. The extraction tests were performed for Am, the lanthanide series, and transition metals in polar diluents in the presence of chlorinated cobalt dicarbollide, and showed high distribution coefficients for Am. It was also found that the type of substituent on the amidic nitrogen exerts a great influence on the extraction of the light lanthanides. To understand the nature of this effect, we performed quantum chemical calculations at the DFT level, binding constant determinations, and X-ray structure determinations of the complexes. UV/VIS titrations showed that the composition of all complexes of the amides with lanthanides in solution is 1:1. Although the binding constants are high (lg β about 6-7 in acetonitrile solution), lanthanide ions have binding constants of the same order of magnitude for the dialkyl-substituted extractants. The X-ray structures of the complexes of the bipyridyl-based amides show a 1:1 composition, with a coordination number of 10 for the metal ions. The DFT-optimized structures of the compounds are in good agreement with those obtained by X-ray diffraction. The gas-phase affinity of the amides for lanthanides shows a strong correlation with the distribution ratios. We can infer that the bipyridyl-based amides form complexes with metal nitrates which have similar structures in the solid and gas phases and in solution, and the DFT
Using 3d Bim Model for the Value-Based Land Share Calculations
Çelik Şimşek, N.; Uzun, B.
2017-11-01
According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, the 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3). Thus, the land share of each condominium unit has to be determined in a way that includes the value differences among the condominium units. However, the main problem is that land shares have often been determined on an area basis from the project documents before construction of the building. The objective of this study is to propose a new approach to value-based land share calculation for condominium units subject to condominium ownership. The current approaches to determining land shares and their shortcomings are examined, and the factors that affect the values of the condominium units are determined according to legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of land shares.
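The value-based allocation itself is simple arithmetic: each unit's land share is its appraised value divided by the building's total value. A minimal sketch with hypothetical unit names and values:

```python
from fractions import Fraction

# Hypothetical appraised values per condominium unit (currency units)
unit_values = {"flat_1": 250_000, "flat_2": 310_000, "shop_1": 440_000}
total = sum(unit_values.values())

# Land share of each unit as an exact fraction of the parcel
shares = {u: Fraction(v, total) for u, v in unit_values.items()}

assert sum(shares.values()) == 1   # the shares must exhaust the parcel exactly
print(shares["shop_1"])
```

Exact fractions are a natural fit here because registered land shares must sum to the whole parcel without rounding residue; the hard part, which the study addresses via 3D BIM, is producing defensible unit values in the first place.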
Weight Calculation for Cases Generated by Tacit Knowledge Explicit Based on RS-FAHP
Directory of Open Access Journals (Sweden)
Cao Yue
2017-01-01
Full Text Available In the knowledge economy, effectively organizing and managing tacit knowledge has become a core competence of individuals, groups, and organizations, affecting their sustainable development. Making tacit knowledge explicit as cases is an effective way to improve its clarity and its management efficiency. The validity of a case view depends on calculating legitimate weights for the case aspects or attributes, which in turn affects the application benefit of the explicit knowledge. Case views obtained via traditional direct weighting methods are seriously affected by subjectivity, and the objectivity of the result is weak. On the other hand, a purely objective weight configuration not only ignores expert knowledge but can also create acceptance barriers for the knowledge holders. Therefore, in this paper, relying on rough set (RS) theory, two objective weight configuration algorithms, based on conditional entropy and on attribute dependence, are systematically analyzed and integrated. Simultaneously, the Fuzzy Analytic Hierarchy Process (FAHP) is studied to take into account the operational experience and knowledge of experts in the field. On this basis, a comprehensive RS-FAHP weight configuration algorithm for case attributes is designed that integrates subjective and objective considerations. This work improves on the traditional configuration of weights and supports the effective application and management of cases generated by making tacit knowledge explicit.
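A much-simplified sketch of the subjective/objective integration idea: a normalized convex combination of an FAHP-style expert weight vector with an entropy-style objective one. The paper's RS-FAHP algorithm is considerably more elaborate, and all numbers here are hypothetical.

```python
# Hypothetical attribute weights for a four-attribute case
w_subjective = [0.40, 0.35, 0.15, 0.10]  # e.g. from fuzzy AHP pairwise judgments
w_objective = [0.25, 0.20, 0.30, 0.25]   # e.g. from rough-set conditional entropy
alpha = 0.5                              # trust placed in the expert judgments

combined = [alpha * s + (1 - alpha) * o
            for s, o in zip(w_subjective, w_objective)]
combined = [w / sum(combined) for w in combined]  # renormalize to sum to 1

print([round(w, 3) for w in combined])
```

The choice of alpha is itself a judgment call; the appeal of an integrated scheme like RS-FAHP is that the data-driven component tempers expert bias while the expert component keeps the result acceptable to the domain community.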
Design of Pd-Based Bimetallic Catalysts for ORR: A DFT Calculation Study
Directory of Open Access Journals (Sweden)
Lihui Ou
2015-01-01
Full Text Available Developing Pd-lean catalysts for the oxygen reduction reaction (ORR) is key to the large-scale application of proton exchange membrane fuel cells (PEMFCs). In the present paper, we propose a multiple-descriptor strategy for designing efficient and durable Pd-based alloy ORR catalysts. We demonstrate that an ideal Pd-based bimetallic alloy catalyst for the ORR should simultaneously possess a negative alloy formation energy, a negative surface segregation energy of Pd, and a lower oxygen binding ability than pure Pt. By performing detailed DFT calculations on the thermodynamics, surface chemistry, and electronic properties of Pd-M alloys, we identify Pd-V, Pd-Fe, Pd-Zn, Pd-Nb, and Pd-Ta as theoretically having a stable Pd-segregated surface and improved ORR activity. Factors affecting these properties are analyzed. The alloy formation energy of Pd with a transition metal M is determined mainly by their electronic interaction, which may be the origin of the negative alloy formation energy of the Pd-M alloys. The surface segregation energy of Pd is primarily determined by the surface energy and the atomic radius of M: metals M with a smaller atomic radius and a higher surface energy tend to favor the surface segregation of Pd in the corresponding Pd-M alloys.
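The three-descriptor screening criterion can be expressed as a simple filter. The energies below are illustrative placeholders, not the paper's DFT values, and the candidate list is hypothetical.

```python
# Candidate alloying metals M with placeholder descriptor values (eV):
#   E_form: alloy formation energy (want negative)
#   E_segr: Pd surface segregation energy (want negative)
#   dE_O:   O binding weakening relative to pure Pt (want positive)
candidates = {
    "V":  (-0.45, -0.20, +0.15),
    "Fe": (-0.30, -0.15, +0.10),
    "Cu": (+0.05, -0.10, +0.20),  # fails: positive formation energy
    "Nb": (-0.50, -0.25, +0.05),
    "Ni": (-0.20, +0.08, +0.12),  # fails: Pd does not segregate to the surface
}

def passes(e_form, e_segr, d_e_o):
    # all three descriptor criteria must hold simultaneously
    return e_form < 0 and e_segr < 0 and d_e_o > 0

selected = sorted(m for m, e in candidates.items() if passes(*e))
print(selected)
```

The value of a multiple-descriptor screen is exactly this conjunction: a candidate that excels on activity but fails on stability (or vice versa) is rejected before any expensive follow-up calculation.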
Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting
Directory of Open Access Journals (Sweden)
Lin Li
2013-01-01
Full Text Available The tilt and decentration of an intraocular lens (IOL) result in defocussing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxation lens after operation, we fitted spherical equations to the data obtained from the images of the anterior and posterior surfaces of the IOL. By the established relationship between IOL tilt (decentration) and the scanned angle, at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given; moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration existing in the method employed by the AS-OCT instrument were identified. Based on 6-12 pieces of AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of surface fitting to the IOL surface can accurately analyze the IOL's location, and six pieces of AS-OCT images at three pairs of symmetrical directions are enough to get the tilt angle and decentration value of the IOL more precisely.
Research on calculation of the IOL tilt and decentration based on surface fitting.
Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng
2013-01-01
The tilt and decentration of intraocular lens (IOL) result in defocussing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxation lens after operation, we fitted spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. By the established relationship between IOL tilt (decentration) and the scanned angle, at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. IOL tilt angle and decentration of each subject were given. Moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration existed in the method employed by AS-OCT instrument. Based on 6-12 pieces of AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of the surface fitting to the IOL surface can accurately analyze the IOL's location, and six pieces of AS-OCT images at three pairs symmetrical directions are enough to get tilt angle and decentration value of IOL more precisely.
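The core surface-fitting step, a least-squares sphere fit to surface points, can be sketched algebraically; the points below are synthetic, whereas in the study they come from segmented AS-OCT images of the IOL surfaces.

```python
import numpy as np

rng = np.random.default_rng(1)
center_true, r_true = np.array([0.3, -0.2, 11.0]), 6.0
# Synthetic sample points lying exactly on the sphere
v = rng.normal(size=(200, 3))
pts = center_true + r_true * v / np.linalg.norm(v, axis=1, keepdims=True)

# Algebraic sphere fit: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d
# is linear in the unknowns (a, b, c, d), so it is a plain least-squares solve.
A = np.hstack([2 * pts, np.ones((len(pts), 1))])
b = (pts ** 2).sum(axis=1)
(a, bb, c, d), *_ = np.linalg.lstsq(A, b, rcond=None)
center = np.array([a, bb, c])
radius = np.sqrt(d + center @ center)
print(np.round(center, 3), round(float(radius), 3))
```

Fitting both the anterior and posterior surfaces this way yields two sphere centers; the tilt then follows from the direction of the line joining them and the decentration from their offset relative to the reference axis.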
GIS supported calculations of 137Cs deposition in Sweden based on precipitation data
International Nuclear Information System (INIS)
Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats
2006-01-01
It is of interest to know the spatial variation and the amount of 137Cs, e.g. in case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly 137Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of 137Cs deposition density per unit precipitation at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF 137Cs deposition density were noted in the north-eastern and eastern parts of Sweden and the highest levels in the western parts. Furthermore, the deposition density of 137Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl 137Cs in western Sweden were found in the western parts of the area along the coast and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values and with other studies.
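The scaling idea behind the deposition maps can be sketched as follows. The deposition-per-precipitation factors and precipitation amounts are invented for illustration; only the structure (scaling by precipitation, then summing with and without decay correction) mirrors the study.

```python
import math

HALF_LIFE_Y = 30.17                    # 137Cs half-life in years
LAMBDA = math.log(2) / HALF_LIFE_Y     # decay constant, 1/years

# Hypothetical reference-site deposition per unit precipitation (Bq m^-2 per mm)
# and site precipitation (mm), for four quarters of one year
ref_dep_per_mm = [12.0, 9.5, 4.0, 2.5]
site_precip_mm = [180.0, 150.0, 200.0, 220.0]

quarterly = [k * p for k, p in zip(ref_dep_per_mm, site_precip_mm)]
integrated = sum(quarterly)            # total (undecayed) deposition density

# Decay-correct each quarter to a common reference time, here 40 years later
years_since = [40.0 - q * 0.25 for q in range(4)]
cumulative = sum(d * math.exp(-LAMBDA * t) for d, t in zip(quarterly, years_since))
print(round(integrated, 1), round(cumulative, 1))
```

The gap between the integrated and cumulative figures is exactly the decay loss, which is substantial for 137Cs over multi-decade time spans.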
He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao
2016-01-01
The rapid development of the coastal economy in Hebei Province has caused a rapid transition of the coastal land use structure, which threatens land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path, and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, construction land. The contribution rates of the ecological function values, from high to low, were in this order: nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation, and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line, and the ecological system, in which humans are the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., an ecological core protection zone, an ecological buffer zone, an ecological restoration zone, and a human activity core zone.
Directory of Open Access Journals (Sweden)
Malte Kroenig
2016-01-01
Full Text Available Objective. In this study, we compared prostate cancer detection rates between MRI-TRUS fusion targeted and systematic biopsies using a robot-guided, software-based transperineal approach. Methods and Patients. 52 patients received an MRI/TRUS fusion targeted biopsy followed by a systematic volume-adapted biopsy using the same robot-guided transperineal approach. The primary outcome was the detection rate of clinically significant disease (Gleason grade ≥ 4). Secondary outcomes were the detection rate of all cancers, sampling efficiency and utility, and the serious adverse event rate. Patients received no antibiotic prophylaxis. Results. From 52 patients, 519 targeted biopsies from 135 lesions and 1561 random biopsies were generated (total n=2080). The overall detection rate of clinically significant PCa was 44.2% (23/52) and 50.0% (26/52) for target and random biopsy, respectively. Sampling efficiency, as the median number of cores needed to detect clinically significant prostate cancer, was 9 for target (IQR: 6-14.0) and 32 (IQR: 24-32) for random biopsy. The utility, as the number of additionally detected clinically significant PCa cases by either strategy, was 0% (0/52) for target and 3.9% (2/52) for random biopsy. Conclusions. MRI/TRUS fusion based target biopsy did not show an advantage in the overall detection rate of clinically significant prostate cancer.
Li, Haoyuan; Qiu, Yong; Duan, Lian
2016-01-01
A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program, delivered as a smartphone application and designed using cognitive load theory. Calculation ability, dosage calculation related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed greater 'self-efficacy for drug dosage calculation' than the control group (t=3.82). Learning with the smartphone application was effective in improving dosage calculation related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
International Nuclear Information System (INIS)
Goto, Minoru; Takamatsu, Kuniyoshi
2007-03-01
The HTTR temperature coefficients required for core dynamics calculations had previously been obtained from core calculations with a diffusion code, corrected using core calculation results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved. The method was therefore improved so that the temperature coefficients could be obtained without corrections by the Monte Carlo code. Specifically, the lattice model used for the calculation of the temperature coefficients was revised from the point of view of the neutron spectrum obtained in the lattice calculations. The HTTR core calculations were then performed with the diffusion code, using group constants generated by lattice calculations with the improved lattice model. Both the core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficients obtained from these core calculation results. The core dynamics calculation result showed good agreement with the experimental data, and a valid temperature coefficient could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)
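For reference, an isothermal temperature coefficient of reactivity is obtained from two multiplication factors k_eff computed at different core temperatures. The numbers below are hypothetical, not HTTR results.

```python
def reactivity(k):
    # reactivity in dk/k units
    return (k - 1.0) / k

k_cold, t_cold = 1.0150, 300.0  # k_eff at 300 K (hypothetical)
k_hot, t_hot = 1.0105, 400.0    # k_eff at 400 K (hypothetical)

# Temperature coefficient: change in reactivity per kelvin
alpha = (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold)
print(f"{alpha * 1e5:.2f} pcm/K")
```

A negative coefficient, as here, is the desired feedback: reactivity falls as the core heats up. The sensitivity of this difference quotient to small errors in k_eff is why the quality of the lattice model behind the group constants matters so much.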
Kim, Dong Ki; Lee, Jung Chan; Lee, Hajeong; Joo, Kwon Wook; Oh, Kook-Hwan; Kim, Yon Su; Yoon, Hyung-Jin; Kim, Hee Chan
2016-04-01
Wearable artificial kidney (WAK) systems have been considered an alternative to standard hemodialysis (HD) for many years. Although various novel WAK systems have recently been developed for clinical applications, the target performance or standard dose of dialysis has not yet been determined. To calculate the appropriate clearance for an HD-based WAK system for the treatment of patients with end-stage renal disease under various dialysis conditions, a classic variable-volume two-compartment kinetic model was used to simulate an anuric patient with variable target time-averaged creatinine concentration (TAC), daily water intake volume, daily dialysis pause time, and patient body weight. A 70-kg anuric patient with an HD-based WAK system operating for 24 h required dialysis clearances of creatinine of at least 100, 50, and 25 mL/min to achieve TACs of 1.0, 2.0, and 4.0 mg/dL, respectively. The daily water intake volume did not affect the required dialysis clearance under the various conditions. As the pause time per day for dialysis increased, higher dialysis clearances were required to maintain the target TAC. The present study provides theoretical dialysis doses for an HD-based WAK system to achieve various target TACs through relevant mathematical kinetic modeling. The theoretical results may contribute to the determination of the technical specifications required for the development of a WAK system. © 2015 The Authors. Hemodialysis International published by Wiley Periodicals, Inc. on behalf of International Society for Hemodialysis.
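A much-reduced, single-compartment sketch of the kinetic reasoning (the study itself uses a variable-volume two-compartment model): for a continuously dialyzed anuric patient, the TAC settles near generation rate divided by clearance, which reproduces the reported clearance/TAC pairs. The generation rate and distribution volume below are assumed values, not the study's parameters.

```python
G = 1.0        # assumed creatinine generation, mg/min (about 1.44 g/day)
V = 40_000.0   # assumed distribution volume, mL (roughly 0.57 L/kg at 70 kg)

def simulate_tac(clearance_ml_min, days=30, dt_min=1.0):
    """Euler integration of V dC/dt = G - K*C; returns average C in mg/dL."""
    c = 1.0 / 100.0                      # start at 1 mg/dL, expressed in mg/mL
    total, steps = 0.0, int(days * 24 * 60 / dt_min)
    for _ in range(steps):
        c += (G - clearance_ml_min * c) / V * dt_min
        total += c
    return total / steps * 100.0         # convert back to mg/dL

for k in (100, 50, 25):                  # clearances from the study
    print(k, round(simulate_tac(k), 2))
```

The steady state C = G/K gives 1, 2, and 4 mg/dL for 100, 50, and 25 mL/min with this assumed generation rate, matching the pattern of the study's results; dialysis pause time raises the required clearance because concentration rebounds whenever K drops to zero.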
A RTS-based method for direct and consistent calculating intermittent peak cooling loads
International Nuclear Information System (INIS)
Chen Tingyao; Cui, Mingxian
2010-01-01
The RTS method currently recommended by the ASHRAE Handbook is based on continuous operation. However, most, if not all, air-conditioning systems in commercial buildings are operated intermittently in practice. Applying the current RTS method to intermittent air-conditioning in nonresidential buildings can therefore result in largely underestimated design cooling loads and inconsistently sized air-conditioning systems, and improperly sized systems can seriously degrade the performance of system operation and management. A new method based on both the current RTS method and the principles of heat transfer has therefore been developed. The first part of the new method is the same as the current RTS method in principle, but its calculation procedure is simplified by derived equations in closed form. The technical data available for the current RTS method can be used to compute zone responses to a change in space air temperature, so no effort is needed to regenerate technical data. Both the overall RTS coefficients and the hourly cooling loads computed in the first part are used to estimate the additional peak cooling load due to a change from continuous to intermittent operation. Only one step beyond the current RTS method is needed to determine the intermittent peak cooling load. The new RTS-based method has been validated against EnergyPlus simulations: the root mean square deviation (RMSD) between the relative additional peak cooling loads (RAPCLs) computed by the two methods is 1.8%, the deviation of the RAPCL varies from -3.0% to 5.0%, and the mean deviation is 1.35%.
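For context, the radiant time series idea underlying both the current and the new method computes the hourly cooling load as a steady-periodic convolution of the past 24 hours of radiant heat gain with the 24 RTS coefficients. The coefficients and gain profile below are invented for illustration, not ASHRAE data.

```python
# Hypothetical 24-term radiant time series: decaying, normalized to sum to 1
rts = [0.7 ** k for k in range(24)]
rts = [r / sum(rts) for r in rts]
gains = [0.0] * 8 + [5.0] * 10 + [0.0] * 6  # kW radiant gain, occupied 08-18 h

def cooling_load(hour):
    # convolution of current and past 23 hours of gain with the RTS coefficients
    return sum(rts[k] * gains[(hour - k) % 24] for k in range(24))

loads = [cooling_load(h) for h in range(24)]
peak_hour = max(range(24), key=lambda h: loads[h])
print(peak_hour, round(loads[peak_hour], 2))
```

Because part of each hour's gain is released later, the peak load lags the gain profile and is smaller than the instantaneous gain; this stored heat is also why switching to intermittent operation adds a pull-down load that the continuous-operation calculation misses.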
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on pre-computed surfaces, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under a grid fault. However, this complexity makes the short circuit current of a DFIG difficult to calculate with existing methods in engineering practice. A calculation method based on pre-computed surfaces was therefore proposed, in which the surface of the short circuit current is developed as a function of the calculating impedance and the open circuit voltage, and the short circuit currents are derived taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of the short circuit current at different times were established, and the procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
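The pre-computed-surface lookup amounts to tabulating short circuit current over (calculating impedance, open circuit voltage) and interpolating at query time. A bilinear-interpolation sketch with placeholder axis and table values (not from the paper):

```python
from bisect import bisect_right

z_axis = [0.1, 0.2, 0.4, 0.8]  # calculating impedance, p.u. (hypothetical grid)
v_axis = [0.2, 0.5, 0.9]       # open circuit voltage, p.u.
# Pre-computed short circuit current I[i][j] at (z_axis[i], v_axis[j]), p.u.
I = [[6.0, 5.0, 4.0],
     [4.5, 3.8, 3.1],
     [3.0, 2.6, 2.2],
     [1.8, 1.6, 1.4]]

def short_circuit_current(z, v):
    # locate the enclosing grid cell, clamped to the table edges
    i = min(max(bisect_right(z_axis, z) - 1, 0), len(z_axis) - 2)
    j = min(max(bisect_right(v_axis, v) - 1, 0), len(v_axis) - 2)
    tz = (z - z_axis[i]) / (z_axis[i + 1] - z_axis[i])
    tv = (v - v_axis[j]) / (v_axis[j + 1] - v_axis[j])
    # bilinear interpolation within the cell
    top = I[i][j] * (1 - tv) + I[i][j + 1] * tv
    bot = I[i + 1][j] * (1 - tv) + I[i + 1][j + 1] * tv
    return top * (1 - tz) + bot * tz

print(short_circuit_current(0.3, 0.5))
```

In the paper's scheme one such surface exists per time point of interest (reflecting rotor excitation decay and crowbar state), so a fault study reduces to cheap table lookups instead of repeated detailed DFIG simulations.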
International Nuclear Information System (INIS)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man
2014-01-01
In the sampling-based method, uncertainty is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance data. The transport equation is not modified; therefore, all uncertainties of the responses, such as k_eff, reaction rates, flux, and power distribution, can be obtained directly at one time without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method, based on the central limit theorem, reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation: each set of sampled group cross sections is assigned to one active cycle group. The proposed method was verified on the GODIVA benchmark problem, and the criticality uncertainty evaluated by it was compared with that of the conventional sampling-based method. The results show that the proposed method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
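The basic sampling-based propagation (without the paper's multiple-sets-per-cycle acceleration) can be illustrated with a toy stand-in for the transport solver: sample the cross sections, evaluate the response for each sample, and take the sample standard deviation as the propagated uncertainty. All distributions and the response function are invented for illustration.

```python
import random
import statistics

random.seed(42)

def sample_cross_sections():
    # two independent group cross sections with assumed 1% relative uncertainty
    return [random.gauss(1.0, 0.01), random.gauss(2.0, 0.02)]

def transport_keff(xs):
    # trivial linear "transport model" standing in for the Monte Carlo solver
    return 0.6 * xs[0] + 0.2 * xs[1]

# Repeat the "transport calculation" once per sampled cross section set
samples = [transport_keff(sample_cross_sections()) for _ in range(500)]
k_mean = statistics.fmean(samples)
k_std = statistics.stdev(samples)
print(round(k_mean, 4), round(k_std, 4))
```

The cost problem the paper targets is visible here: the standard-deviation estimate converges only as one over the square root of the number of repeated solver runs, so assigning the sampled sets to active cycle groups within a single run is a direct attack on that repetition count.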
Reisner, Jon; D'Angelo, Gennaro; Koo, Eunmo; Even, Wesley; Hecht, Matthew; Hunke, Elizabeth; Comeau, Darin; Bos, Randall; Cooley, James
2018-03-01
We present a multiscale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock, Oman, Stenchikov, et al. (2007, https://doi.org/10.5194/acp-7-2003-2007), based on the analysis of Toon et al. (2007, https://doi.org/10.5194/acp-7-1973-2007), which assumes a regional exchange between India and Pakistan of fifty 15 kt weapons detonated by each side. We expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10⁹ kg of black carbon in the upper troposphere (approximately from 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10⁹ kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures
Comparison of CT number calibration techniques for CBCT-based dose calculation
Energy Technology Data Exchange (ETDEWEB)
Dunlop, Alex [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); Murray, Julia; Bhide, Shreerang; Harrington, Kevin [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); The Institute of Cancer Research, London (United Kingdom); Poludniowski, Gavin [Karolinska University Hospital, Department of Medical Physics, Stockholm (Sweden); Nutting, Christopher [The Institute of Cancer Research, London (United Kingdom); Newbold, Kate [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom)
2015-12-15
The aim of this work was to compare and validate various computed tomography (CT) number (CTN) calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCT{sub r}); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RS{sub auto}), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5 %. RS{sub auto} produced larger-than-average errors for pelvic treatments in patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0 %, with CBCT{sub r} (0.5 %) and RS{sub auto} (0.6 %) performing best. For lung cases, the WL and RS{sub auto} methods generated dose distributions most similar to the ground truth. The RS{sub auto} density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RS{sub auto} methods were validated, resulting in dose calculations consistent with those calculated on diagnostic-quality CT images, for CBCT images of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.) [German] The aim of this work is to compare and validate several CT calibration methods for dose calculation based on cone beam computed tomography
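The six-level density binning that the abstract attributes to RS{sub auto} can be sketched as simple thresholding of CT numbers. The HU boundaries and density values below are illustrative assumptions, not the clinical values used in the study:

```python
import numpy as np

# Hypothetical HU thresholds and mass densities (g/cm^3) for six-level
# binning; the clinical boundaries used by the RS_auto approach are not
# given in the abstract, so these numbers are placeholders.
THRESHOLDS = [-950, -700, -150, 100, 300, 1200]   # upper HU edge of each bin
DENSITIES = [0.0012, 0.26, 0.93, 1.00, 1.05, 1.60, 2.10]

def override_densities(hu_volume):
    """Replace each voxel's CT number with a discrete density level."""
    bins = np.digitize(hu_volume, THRESHOLDS)      # bin index per voxel
    return np.asarray(DENSITIES)[bins]

cbct = np.array([[-1000.0, -800.0, 0.0], [200.0, 900.0, 1500.0]])
density_map = override_densities(cbct)
```

The override volume then feeds the dose engine in place of a full CTN-to-density calibration curve.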
International Nuclear Information System (INIS)
Brandt, Adam R.; Dale, Michael; Barnhart, Charles J.
2013-01-01
In this paper we expand the work of Brandt and Dale (2011) on ERRs (energy return ratios) such as EROI (energy return on investment). This paper describes a “bottom-up” mathematical formulation which uses matrix-based computations adapted from the LCA (life cycle assessment) literature. The framework allows multiple energy pathways and flexible inclusion of non-energy sectors. This framework is then used to define a variety of ERRs that measure the amount of energy supplied by an energy extraction and processing pathway compared to the amount of energy consumed in producing that energy. ERRs that were previously defined in the literature are cast in our framework for calculation and comparison. For illustration, our framework is applied to oil production and processing and to the generation of electricity from PV (photovoltaic) systems. Results show that ERR values decline as system boundaries expand to include more processes. NERs (net energy return ratios) tend to be lower than GERs (gross energy return ratios). External energy return ratios (such as the net external energy ratio, NEER) tend to be higher than their equivalent total energy return ratios. - Highlights: • An improved bottom-up mathematical method for computing net energy return metrics is developed. • Our methodology allows arbitrary numbers of interacting processes acting as an energy system. • Our methodology allows much more specific and rigorous definition of energy return ratios such as EROI or NER
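The matrix-based "bottom-up" formulation can be illustrated with a toy two-process pathway. The Leontief-style scaling relation x = (I − A)⁻¹f familiar from the LCA literature gives total process activity, from which GER- and NER-like ratios follow; every coefficient below is invented for illustration:

```python
import numpy as np

# A[i, j] = units of product i consumed per unit output of process j.
# Process 0 extracts crude energy; process 1 refines it. Both consume some
# refined fuel internally. All coefficients are invented for illustration.
A = np.array([[0.0, 1.0],      # crude needed per unit of refined output
              [0.1, 0.05]])    # refined fuel self-consumption
f = np.array([0.0, 100.0])     # final demand: 100 units of refined fuel

# Leontief-style scaling: total activity x satisfies (I - A) x = f.
x = np.linalg.solve(np.eye(2) - A, f)

invested = A[1] @ x            # refined fuel consumed inside the pathway
gross = x[1]                   # total refined fuel produced
ger = gross / invested                  # gross energy return ratio
ner = (gross - invested) / invested     # net energy return ratio
```

With these numbers ger = 20/3 and ner = ger − 1, reproducing the abstract's observation that net ratios sit below gross ratios; adding more consuming processes to A lowers both.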
International Nuclear Information System (INIS)
Chvosta, Petr; Holubec, Viktor; Ryabov, Artem; Einax, Mario; Maass, Philipp
2010-01-01
We investigate a microscopic motor based on an externally controlled two-level system. One cycle of the motor operation consists of two strokes. Within each stroke, the two-level system is in contact with a given thermal bath and its energy levels are driven at a constant rate. The time evolutions of the occupation probabilities of the two states are controlled by one rate equation and represent the system's response with respect to the external driving. We give the exact solution of the rate equation for the limit cycle and discuss the emerging thermodynamics: the work done on the environment, the heat exchanged with the baths, the entropy production, the motor's efficiency, and the power output. Furthermore we introduce an augmented stochastic process which reflects, at a given time, both the occupation probabilities for the two states and the time spent in the individual states during the previous evolution. The exact calculation of the evolution operator for the augmented process allows us to discuss in detail the probability density for the work performed during the limit cycle. In the strongly irreversible regime, the density exhibits important qualitative differences with respect to the more common Gaussian shape in the regime of weak irreversibility
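A minimal numerical sketch of one stroke: the upper-level energy is ramped at constant rate while a single rate equation relaxes the occupation toward the instantaneous equilibrium. The protocol and parameter values are assumptions for illustration, not the paper's exact setup; the excess of the work over ΔF shrinks as the relaxation rate grows:

```python
import math

# One stroke: the upper-level energy is ramped from E0 to E1 at a constant
# rate over unit time while the occupation p of the upper level relaxes
# toward the instantaneous equilibrium at bath temperature T (k_B = 1).
# Protocol and parameters are illustrative, not the paper's exact setup.
def simulate_stroke(E0=0.0, E1=2.0, T=1.0, rate=5.0, steps=20000):
    dt = 1.0 / steps
    p = 1.0 / (1.0 + math.exp(E0 / T))   # start in equilibrium
    work = 0.0
    for i in range(steps):
        E = E0 + (E1 - E0) * i * dt
        p_eq = 1.0 / (1.0 + math.exp(E / T))
        p += rate * (p_eq - p) * dt      # single rate equation
        work += p * (E1 - E0) * dt       # dW = p dE (work done on the system)
    return work

w_irrev = simulate_stroke(rate=5.0)      # strongly driven stroke
w_slow = simulate_stroke(rate=200.0)     # nearly quasi-static stroke
```

Both values exceed the free-energy difference ΔF = ln 2 − ln(1 + e⁻²) ≈ 0.566, and the faster-relaxing case lies closer to it, as the second law requires.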
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
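The dual-wavelength absorbance ratio idea can be sketched with a synthetic calibration set: the ratio of the base-form to acid-form absorbance bands cancels common-mode drifts such as film thickness and lamp intensity, which is a plausible reason it improves long-term accuracy. The wavelengths, absorbance values, and log-ratio linearization below are all illustrative assumptions:

```python
import numpy as np

# Synthetic calibration set (illustrative): the acid-form band of phenol red
# near 433 nm falls with pH while the base-form band near 558 nm rises, so
# the ratio A558/A433 cancels film-thickness and lamp-intensity drift.
pH_cal = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
A433 = np.array([0.80, 0.74, 0.65, 0.52, 0.40, 0.30, 0.23])
A558 = np.array([0.05, 0.09, 0.16, 0.28, 0.42, 0.55, 0.64])

# Linearize pH against the log of the absorbance ratio and fit a line.
log_ratio = np.log10(A558 / A433)
slope, intercept = np.polyfit(log_ratio, pH_cal, 1)

def predict_pH(a433, a558):
    """Dual-wavelength ratio estimate of pH from two absorbance readings."""
    return slope * np.log10(np.asarray(a558) / np.asarray(a433)) + intercept
```

Because a thickness change scales both absorbances by the same factor, predict_pH(k·a433, k·a558) is unchanged for any k > 0 — the robustness property that single-wavelength regressions lack.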
Calculating the Efficiency of Steam Boilers Based on Its Most Effecting Factors: A Case Study
Nabil M. Muhaisen; Rajab Abdullah Hokoma
2012-01-01
This paper is concerned with calculating boiler efficiency, one of the most important performance measurements in any steam power plant, which has a key role in determining the overall effectiveness of the whole system within the power station. For this calculation, a Visual Basic program was developed, and a steam power plant, the El-Khmus power plant in Libya, was selected as a case study. The calculation of the boiler efficiency was applied by using heating ...
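A direct ("input-output") boiler-efficiency calculation of the kind such a program might implement can be sketched as follows; the flow, enthalpy, and calorific values are invented for illustration and are not the El-Khmus plant data:

```python
# Direct ("input-output") method: efficiency is the heat absorbed by the
# steam divided by the heat released by the fuel. All numbers below are
# illustrative, not measurements from the El-Khmus plant.
def boiler_efficiency_direct(steam_flow_kg_h, h_steam_kj_kg, h_feed_kj_kg,
                             fuel_flow_kg_h, gcv_kj_kg):
    """Return boiler efficiency in percent."""
    heat_output = steam_flow_kg_h * (h_steam_kj_kg - h_feed_kj_kg)
    heat_input = fuel_flow_kg_h * gcv_kj_kg   # gross calorific value basis
    return 100.0 * heat_output / heat_input

# 50 t/h of steam at 2780 kJ/kg from 420 kJ/kg feedwater, burning
# 3.2 t/h of fuel with GCV 42500 kJ/kg:
eta = boiler_efficiency_direct(50000.0, 2780.0, 420.0, 3200.0, 42500.0)
```

The indirect (heat-loss) method instead subtracts individual losses (dry flue gas, moisture, unburnt fuel, radiation) from 100%; both should agree within measurement error.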
Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on Steenbeck Model
International Nuclear Information System (INIS)
Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.
2006-01-01
The work is devoted to the problem of determining plasma torch parameters and power source parameters (the working voltage and current of the plasma torch) at the predesign stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiment.
Directory of Open Access Journals (Sweden)
Andrew C. Elton
2017-01-01
Full Text Available Salmonella meningitis is a rare manifestation of meningitis, typically presenting in neonates and the elderly. The infection typically associates with foodborne outbreaks in developing nations and AIDS-endemic regions. We report a case of a 19-year-old male presenting with altered mental status after a 3-day absence from work at a Wisconsin tourist area. He was febrile, tachycardic, and tachypneic with a GCS of 8. The patient was intubated and a presumptive diagnosis of meningitis was made. Treatment was initiated with ceftriaxone, vancomycin, acyclovir, dexamethasone, and fluid resuscitation. A lumbar puncture showed cloudy CSF with Gram-negative rods. He was admitted to the ICU. CSF culture confirmed Salmonella enterica subsp. enterica (subspecies I) serovar Enteritidis. Based on this finding, a 4th-generation HIV antibody/p24 antigen test was sent. When this returned positive, a CD4 count was obtained and showed 3 cells/mm3, confirming AIDS. The patient ultimately received 38 days of ceftriaxone, was placed on elvitegravir, cobicistat, emtricitabine, and tenofovir alafenamide (Genvoya) for HIV/AIDS, and was discharged neurologically intact after a 44-day admission.
Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong
2007-04-01
Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from more accurate cost calculation compared with traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
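The two-stage cost tracing behind ABC can be sketched in a few lines: overhead is first pooled by activity, then assigned to a patient according to the quantity of each activity consumed. The activities, volumes, and figures below are invented, not the hospital's data:

```python
# Stage 1: overhead pooled by activity; Stage 2: activity-driver rates
# assigned to a patient by consumption. All figures are invented.
activity_costs = {"pre_op_education": 12000.0,   # total pooled cost per period
                  "surgery": 90000.0,
                  "post_op_care": 48000.0}
activity_volumes = {"pre_op_education": 200,     # education sessions
                    "surgery": 150,              # operating-room hours
                    "post_op_care": 1200}        # nursing-care hours

# Cost-driver rate: pooled cost divided by total activity volume.
rates = {a: activity_costs[a] / activity_volumes[a] for a in activity_costs}

def patient_cost(usage):
    """usage maps each activity to the quantity one patient consumed."""
    return sum(rates[a] * q for a, q in usage.items())

cost = patient_cost({"pre_op_education": 1, "surgery": 2.5, "post_op_care": 30})
```

A patient record with no entry for an expected activity (e.g. pre_op_education) stands out immediately against the care-path template, which is the kind of "missing activity" check the project describes.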
SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.
Yuan, Y; Duan, J; Popple, R; Brezovich, I
2012-06-01
To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with PB and AAA algorithms using dose grid sizes of 5 mm, 2.5 mm, and 1.25 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between the calculation and the measurement were evaluated by percentage error for ion chamber dose and γ>1 failure rate in gamma analysis (3%/3mm) for film dosimetry. For 9 patients, ion chamber dose calculated with the AAA algorithm is closer to ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses are within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate is significantly reduced (within 5%) with AAA-based treatment planning compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates are typically within 5% for both AAA and PB-based treatment planning (grid size = 2.5 mm). For both PB and AAA-based treatment planning, improvements of dose calculation accuracy with finer dose grids were observed in film dosimetry of 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved for AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
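The γ>1 failure-rate metric comes from the gamma index, which combines dose difference and distance-to-agreement into one pass/fail number per point. A simplified 1-D global gamma (3%/3 mm) is sketched below; clinical tools interpolate finely around each reference point, so this coarse-grid version is only illustrative:

```python
import numpy as np

# Simplified 1-D global gamma: for each reference point, minimize the
# combined dose-difference / distance metric over all evaluated points.
def gamma_1d(ref, evald, positions, dose_tol=0.03, dist_tol=3.0):
    ref_max = ref.max()                    # global dose normalization
    out = []
    for x_r, d_r in zip(positions, ref):
        terms = ((evald - d_r) / (dose_tol * ref_max)) ** 2 \
              + ((positions - x_r) / dist_tol) ** 2
        out.append(np.sqrt(terms.min()))
    return np.array(out)

x = np.arange(0.0, 10.0, 1.0)              # positions in mm
ref = np.exp(-((x - 5.0) ** 2) / 8.0)      # reference dose profile
meas = 1.02 * ref                          # uniform +2% dose error
g = gamma_1d(ref, meas, x)
fail_rate = np.mean(g > 1.0)               # fraction of points with gamma > 1
```

A uniform 2% error stays inside the 3% dose tolerance, so every point passes; scaling the error well past 3% starts producing γ>1 failures near the dose peak.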
Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal
2016-06-01
To evaluate the cost-effectiveness of treatment with anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or only chlorambucil were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and that from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
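A three-state Markov cohort model of the kind described can be sketched directly; the transition probabilities, utilities, and per-cycle costs below are invented placeholders, not the CLL11-derived inputs:

```python
import numpy as np

# States: 0 = progression-free, 1 = progressed, 2 = dead. Transition
# probabilities per cycle, utilities, and costs are invented placeholders.
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])         # death is absorbing
utility = np.array([0.80, 0.60, 0.00])     # QALY weight per state per cycle
cost = np.array([5000.0, 3000.0, 0.0])     # cost per state per cycle

state = np.array([1.0, 0.0, 0.0])          # whole cohort starts in PFS
total_qalys = total_cost = 0.0
for _ in range(40):                        # 40-cycle time horizon
    total_qalys += state @ utility
    total_cost += state @ cost
    state = state @ P                      # redistribute the cohort
```

Running the same loop with a comparator's transition matrix and costs yields the incremental cost per QALY gained, i.e. the ICER that is judged against the £30,000 threshold.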
Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M
2014-02-01
To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with (125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model
International Nuclear Information System (INIS)
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-01-01
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with (125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N., artificial neural networks; C.B.R., case-based reasoning; or other computer science techniques) that have already been used successfully for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)
International Nuclear Information System (INIS)
Fawcett, B.C.; Mason, H.E.
1989-02-01
This report presents details of a new method to enable the computation of collision strengths for complex ions, adapted from long-established optimisation techniques previously applied to the calculation of atomic structures and oscillator strengths. The procedure involves the adjustment of Slater parameters so that they determine improved energy levels and eigenvectors, which provide a basis for collision strength calculations in ions where ab initio computations break down or result in reducible errors. This application is demonstrated through modifications of the DISTORTED WAVE collision code and the SUPERSTRUCTURE atomic-structure code, which interface via the transformation code JAJOM that processes their output. (author)
Evaluation of a rapid LMP-based approach for calculating marginal unit emissions
International Nuclear Information System (INIS)
Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.
2013-01-01
Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using IEEE RTS-96 system used to evaluate LMP approach. • Incorporating membership function enhanced reliability of pollutant estimate. • Error in pollutant estimates quantified for CO2, NOx, and SO2. - Abstract: To evaluate the sustainability of systems that draw power from electrical grids there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs the type of the marginal generating unit can be identified and pollutant emissions are estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time based on typical system conditions (e.g. loads and fuel costs) without the use of a MF. With the addition of a MF, the LEEM was found to provide emission estimates with
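The LEEM idea of mapping an LMP to a marginal fuel type, with a membership function to handle overlapping cost ranges, can be sketched as follows. The fuel cost ranges and emission factors are illustrative assumptions, not eGRID values:

```python
# Typical marginal-cost ranges ($/MWh) and CO2 emission factors (kg/MWh)
# per candidate marginal fuel -- invented numbers for illustration.
FUELS = {
    "coal":   ((20.0, 30.0, 45.0), 950.0),
    "gas_cc": ((35.0, 50.0, 70.0), 400.0),
    "gas_ct": ((60.0, 90.0, 130.0), 550.0),
}

def tri(x, low, peak, high):
    """Triangular membership: 0 outside [low, high], 1 at the peak."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x < peak else (high - x) / (high - peak)

def marginal_emission_rate(lmp):
    """Blend emission factors of all fuels whose cost range covers the LMP."""
    weights = {f: tri(lmp, *rng) for f, (rng, _) in FUELS.items()}
    total = sum(weights.values())
    if total == 0.0:
        return None                    # LMP outside every fuel's range
    return sum(weights[f] * ef for f, (_, ef) in FUELS.items()) / total

rate = marginal_emission_rate(40.0)    # coal and gas_cc are both plausible
```

At an LMP of $40/MWh the coal and combined-cycle memberships are equal, so the estimate is the average of their emission factors; without the membership function one would have to commit to a single marginal fuel.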
Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.
Bandura, A V; Evarestov, R A; Lukyanov, S I
2014-07-28
A new method for the theoretical modelling of polyhedral single-walled nanotubes, based on the consolidation of walls in rolled-up multi-walled nanotubes, is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from different phases of titania. The combination of the two methods allows us to simulate structures that are difficult to find by ab initio calculations alone. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than that of the initial coaxial cylindrical double- or triple-walled nanotubes. The merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that merged nanotubes can integrate two different crystalline phases within a single wall structure.
Shapley Value-Based Payment Calculation for Energy Exchange between Micro- and Utility Grids
Directory of Open Access Journals (Sweden)
Robin Pilling
2017-10-01
Full Text Available In recent years, microgrids have developed as important parts of power systems and have provided affordable, reliable, and sustainable supplies of electricity. Each microgrid is managed as a single controllable entity with respect to the existing power system, but there is demand for joint operation and for sharing the resulting benefits between a microgrid and its hosting utility. This paper focuses on the joint operation of a microgrid and its hosting utility, which cooperatively minimize daily generation costs through energy exchange, and presents a payment calculation scheme for power transactions based on a fair allocation of the reduced generation costs. To fairly compensate for energy exchange between the micro- and utility grids, we adopt the cooperative game theoretic solution concept of the Shapley value. We design a case study for a fictitious interconnection model between the Mueller microgrid in Austin, Texas and the utility grid in Taiwan. Our case study shows that, compared to standalone generation, both the micro- and utility grids are better off when they collaborate in power exchange, regardless of their individual contributions to the power exchange coalition.
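For a two-player coalition of microgrid and utility, the Shapley value reduces to an average of marginal contributions over the two join orders. The standalone and joint costs below are invented, not the Mueller/Taiwan case-study data:

```python
from itertools import permutations

# Illustrative daily costs: each party alone, and the cooperative dispatch.
standalone = {"M": 1200.0, "U": 9000.0}    # microgrid, utility
joint_cost = 9700.0                        # cheaper than 10200 standalone

def savings(coalition):
    """Characteristic function: cost saved by a coalition acting jointly."""
    if set(coalition) == {"M", "U"}:
        return sum(standalone.values()) - joint_cost
    return 0.0                             # no savings without cooperation

def shapley(players, v):
    """Average marginal contribution of each player over all join orders."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            value[p] += (v(coalition) - before) / len(orders)
    return value

phi = shapley(["M", "U"], savings)         # fair split of the 500 saved
```

Each party's payment is then its standalone cost minus its Shapley share, so both end up strictly better off than operating alone, which is the fairness property the scheme relies on.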
Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method
Directory of Open Access Journals (Sweden)
Lei Wang
2014-01-01
Full Text Available This paper presents an improvement in power flow calculation based on the current injection method, achieved by introducing an optimization factor. In the proposed method, the PQ buses are represented by current mismatches while the PV buses are represented by power mismatches, in contrast to the representations in conventional current injection power flow equations. By using the combined power and current injection mismatch method, the number of equations required can be reduced to only one for each PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify the effectiveness of the method, the IEEE test systems are solved with both the conventional current injection method and the proposed improved method, and the results are compared. The comparisons show that the optimization factor improves the convergence characteristics effectively; in particular, at high loading levels and R/X ratios the iteration count is one or two lower than that of the conventional current injection method, and under serious overloading conditions it is four lower.
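The role of the optimization factor can be sketched on a toy mismatch system: after each Newton step, a scalar multiplier is chosen to minimize the mismatch norm along the step direction, which is what stabilizes ill-conditioned cases. The two-equation system below is a stand-in for illustration, not a full current-injection power-flow model:

```python
import numpy as np

# Toy two-equation mismatch system standing in for the power-flow equations.
def mismatch(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,
                     x[0] - x[1] ** 3])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -3.0 * x[1] ** 2]])

def damped_newton(x0, tol=1e-10, max_iter=50):
    """Newton iteration with an optimization factor chosen by line search."""
    x = np.array(x0, dtype=float)
    for it in range(max_iter):
        f = mismatch(x)
        if np.linalg.norm(f) < tol:
            return x, it
        step = np.linalg.solve(jacobian(x), -f)
        # Optimization factor: pick the multiplier that most reduces the
        # mismatch norm along the Newton direction (coarse grid search).
        mus = np.linspace(0.1, 1.0, 10)
        mu = min(mus, key=lambda m: np.linalg.norm(mismatch(x + m * step)))
        x = x + mu * step
    return x, max_iter

solution, iterations = damped_newton([0.5, 0.5])
```

Near the solution the line search naturally selects a factor close to 1, recovering plain Newton's quadratic convergence, while far from it the smaller factor prevents the divergence seen in heavily loaded, high-R/X cases.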
Nagura, Takuya; Kawachi, Shingo; Chokawa, Kenta; Shirakawa, Hiroki; Araidai, Masaaki; Kageshima, Hiroyuki; Endoh, Tetsuo; Shiraishi, Kenji
2018-04-01
It is expected that the off-state leakage current of MOSFETs can be reduced by employing vertical body channel MOSFETs (V-MOSFETs). However, in fabricating these devices, the structure of the Si pillars cannot always be maintained during oxidation, since Si atoms sometimes disappear from the Si/oxide interface (Si missing). Thus, in this study, we used first-principles calculations based on density functional theory to investigate the Si emission behavior at various interfaces, on the basis of a Si emission model that includes the atomistic structure and its dependence on Si crystal orientation. The results show that the order in which Si atoms are more likely to be emitted during thermal oxidation is (111) > (110) > (310) > (100). Moreover, the emission of Si atoms is enhanced as the compressive strain increases. Therefore, the emission of Si atoms occurs more easily in V-MOSFETs than in planar MOSFETs. To reduce Si missing in V-MOSFETs, oxidation processes that induce less strain, such as wet or pyrogenic oxidation, are necessary.
International Nuclear Information System (INIS)
Ji, K F; Zhao, Z; Xing, X W; Zou, H X; Zhou, S L
2014-01-01
Ship detection and classification with space-borne SAR have many potential applications in maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons may be the difficulty of acquiring the required quantities of real data of vessels under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computation costs is essential for the development of ship classification algorithms. However, the simulation of SAR imagery of a ship over the sea surface is challenging. Though great efforts have been devoted to this difficult problem, it is far from solved. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented with the high-frequency electromagnetic calculation methods PO, MEC, PTD, and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. The simulated SAR imagery of the ship is then produced by inserting simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene
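K-distributed sea clutter of the kind the scheme relies on is commonly generated with the compound-Gaussian construction: a gamma-distributed texture (local sea power) multiplying unit-mean exponential speckle. The shape parameter below is an illustrative choice, not a value from the paper:

```python
import numpy as np

# Compound-Gaussian construction of K-distributed intensity: a slowly
# varying gamma texture (local sea power) modulates fast exponential
# speckle. nu is an illustrative shape parameter; small nu = spiky clutter.
rng = np.random.default_rng(7)

def k_clutter_intensity(shape, nu=1.5, mean_power=1.0):
    texture = rng.gamma(nu, mean_power / nu, size=shape)   # local power
    speckle = rng.exponential(1.0, size=shape)             # unit-mean speckle
    return texture * speckle

clutter = k_clutter_intensity((256, 256))
```

Simulated ship chips are then pasted into such a background at the desired signal-to-clutter ratio, mirroring the insertion step described in the abstract.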
Fission yield calculation using toy model based on Monte Carlo simulation
International Nuclear Information System (INIS)
Jubaidah; Kurniadi, Rizal
2015-01-01
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R c ), the means of the left and right curves (μ L and μ R ), and the deviations of the left and right curves (σ L and σ R ). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
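The two-Gaussian sampling at the heart of the toy model can be sketched directly; the parameter values below are illustrative choices for an A = 236 nucleus, not fitted values from the paper:

```python
import numpy as np

# Illustrative parameters for an A = 236 toy nucleus: two Gaussians with
# means mu_L, mu_R meeting at the scission point R_c. Not fitted values.
rng = np.random.default_rng(42)
A = 236
mu_L, mu_R = 96.0, 140.0        # light and heavy peak positions
sigma_L, sigma_R = 6.0, 6.0     # widths of the two curves
R_c = (mu_L + mu_R) / 2.0       # scission point where the curves cross

def sample_fragments(n):
    """Draw n fission events; each yields a complementary (light, heavy) pair."""
    side = rng.random(n) < 0.5   # choose the left or right Gaussian
    m = np.where(side,
                 rng.normal(mu_L, sigma_L, n),
                 rng.normal(mu_R, sigma_R, n))
    return np.minimum(m, A - m), np.maximum(m, A - m)

light, heavy = sample_fragments(100000)
```

Histogramming both fragment masses together reproduces the double-humped asymmetric yield curve; widening the sigmas or shifting the means moves the peaks and their overlap near R_c, which is the sensitivity the abstract reports.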
International Nuclear Information System (INIS)
Lauridsen, B.; Hedemann Jensen, P.
1987-01-01
The basic dosimetric quantity in ICRP Publication 30 is the absorbed fraction AF(T←S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T←S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T←S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells. The same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dose meters in a matrix so that the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented. (orig.)
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles; the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
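As a rough illustration of the sampling scheme described above (not code from the paper; the parent mass and the Gaussian parameters below are hypothetical stand-ins for μ_L, σ_L, etc.):

```python
import random

def sample_fission_yields(a_parent=236, mu_l=95.0, sigma_l=6.0,
                          n_events=10_000, seed=1):
    """Draw light-fragment masses from the left Gaussian curve; the heavy
    fragment follows from mass conservation, which ties the two curves."""
    rng = random.Random(seed)
    yields = []
    for _ in range(n_events):
        light = rng.gauss(mu_l, sigma_l)
        heavy = a_parent - light
        yields.append((light, heavy))
    return yields

yields = sample_fission_yields()
avg_light = sum(light for light, _ in yields) / len(yields)
```

Varying sigma_l or mu_l here shifts the average light-fragment mass and the spread of the sampled distribution, which is the qualitative behaviour the abstract reports.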
Energy Technology Data Exchange (ETDEWEB)
Anusionwu, Princess [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Alpuche Aviles, Jorge E. [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Pistorius, Stephen [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Department of Radiology, University of Manitoba, Winnipeg (Canada)
2016-08-15
Objective: Commissioning of a Monte Carlo based electron dose calculation algorithm requires percentage depth doses (PDDs) and beam profiles, which can be measured with multiple detectors. Electron dosimetry is commonly performed with cylindrical chambers, but parallel plate chambers and diodes can also be used. The purpose of this study was to determine the most appropriate detector for the commissioning measurements. Methods: PDDs and beam profiles were measured for beams with energies ranging from 6 MeV to 15 MeV and field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Detectors used included diodes, cylindrical and parallel plate ionization chambers. Beam profiles were measured in water (100 cm source to surface distance) and in air (95 cm source to detector distance). Results: PDDs for the cylindrical chambers were shallower (by 1.3 mm, averaged over all energies and field sizes) than those measured with the parallel plate chambers and diodes. Surface doses measured with the diode and cylindrical chamber were on average larger by 1.6% and 3%, respectively, than those of the parallel plate chamber. Profiles measured with a diode resulted in penumbra values smaller than those measured with the cylindrical chamber by 2 mm. Conclusion: The diode was selected as the most appropriate detector since its PDDs agreed with those measured with parallel plate chambers (typically recommended for low energies) and it resulted in sharper profiles. Unlike ion chambers, the diode needs no corrections to measure PDDs, making it more convenient to use.
International Nuclear Information System (INIS)
Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without error due to exhaustive calculations. (author)
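The core of such a routine is the indirect Fick principle, with one cardiac-output estimate per predicted oxygen consumption value; a minimal sketch (the VO2 predictions and O2 contents below are hypothetical numbers, not the paper's five regression equations):

```python
def fick_cardiac_output(vo2, ca_o2, cv_o2):
    """Indirect Fick: flow (L/min) = VO2 (mL/min) divided by the
    arteriovenous O2 content difference (mL O2 per L of blood)."""
    return vo2 / (ca_o2 - cv_o2)

# Hypothetical stand-ins for the five VO2 prediction equations (mL/min)
predicted_vo2 = [110.0, 120.0, 125.0, 132.0, 140.0]
ca_o2, cv_o2 = 180.0, 140.0   # arterial and mixed-venous O2 content, mL/L

# One flow estimate per predicted VO2 gives a likely range, not a point
estimates = [fick_cardiac_output(v, ca_o2, cv_o2) for v in predicted_vo2]
low, high = min(estimates), max(estimates)
```

The spread between `low` and `high` is exactly the "likely range" idea the abstract describes: the uncertainty in predicted VO2 propagates linearly into the flow estimate.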
Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.
Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark
2013-05-21
Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including in hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double-layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double-insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally.
Neutronic calculations of AFPR-100 reactor based on Spherical Cermet Fuel particles
International Nuclear Information System (INIS)
Benchrif, A.; Chetaine, A.; Amsil, H.
2013-01-01
Highlights: • AFPR-100 is considered a small nuclear reactor without on-site refueling, originally based on the TRISO micro-fuel element. • The AFPR-100 reactor was re-designed using the new Spherical Cermet fuel element. • The adoption of the Cermet fuel instead of TRISO fuel reduces the core lifetime by 3.1 equivalent full power years. • We discuss the new micro-fuel element candidate for small and medium sized reactors. - Abstract: The Atoms For Peace Reactor (AFPR-100), a 100 MW(e) design that operates without on-site refueling, was originally based on UO2 TRISO coated fuel particles embedded in a carbon matrix directly cooled by light water. AFPR-100 is considered a small nuclear reactor without open-vessel refueling, proposed by Pacific Northwest National Laboratory (PNNL). On account of significant irradiation swelling in the silicon carbide fission product barrier coating layer of the TRISO fuel element, a Spherical Cermet fuel element has been proposed. The new fuel concept, which was developed by PNNL, replaces the pyro-carbon and ceramic coatings, which are incompatible with low temperature, with zirconium. The latter was chosen to avoid any potential Wigner energy effect issues present in the TRISO fuel element. The purpose of this study is to assess the AFPR-100 concept using the Cermet fuel; in particular, whether the core fuel lifetime can be extended for a reasonably long period without on-site refueling. We investigated some neutronic parameters of the reactor core with the calculation code SRAC95. The results suggest that a core fuel lifetime beyond 12 equivalent full power years (EFPYs) is possible. Hence, the adoption of the Cermet fuel concept shows a core lifetime decrease of about 3.1 EFPY.
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information, a secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
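A minimal sketch of the segmentation step, assuming two clustering variables as in the study (perceived risk and an objective risk estimate); the data points and the deterministic initialization are illustrative, not the study's:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means; spread-out deterministic initialization for toy data."""
    centers = points[:: max(1, len(points) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center, then move the centers
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical consumers: (perceived risk, objective risk), scaled to [0, 1]
pts = np.array([[0.90, 0.90], [0.85, 0.95],   # high risk, aware of it
                [0.20, 0.80], [0.25, 0.85],   # high-risk underestimators
                [0.30, 0.30], [0.35, 0.25]])  # genuinely low risk
labels, centers = kmeans(pts, k=3)
```

Clustering on (perceived, objective) pairs is what lets the analysis separate underestimators from accurate perceivers at the same objective risk level.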
Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education
Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.
2014-01-01
Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…
Review of theoretical calculations of hydrogen storage in carbon-based materials
Energy Technology Data Exchange (ETDEWEB)
Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)
2001-02-01
In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)
Volumetric Arterial Wall Shear Stress Calculation Based on Cine Phase Contrast MRI
Potters, Wouter V.; van Ooij, Pim; Marquering, Henk; VanBavel, Ed; Nederveen, Aart J.
2015-01-01
Purpose: To assess the accuracy and precision of a volumetric wall shear stress (WSS) calculation method applied to cine phase contrast magnetic resonance imaging (PC-MRI) data. Materials and Methods: Volumetric WSS vectors were calculated in software phantoms. WSS algorithm parameters were optimized
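The quantity being computed is the product of the dynamic viscosity and the wall-normal velocity gradient; a one-dimensional sketch under a toy Poiseuille profile (the viscosity is a typical literature value for blood, and none of this reproduces the paper's volumetric 3D method):

```python
import numpy as np

MU = 3.2e-3   # dynamic viscosity of blood, Pa*s (typical literature value)

def wss_from_profile(v, dy):
    """Wall shear stress tau = mu * dv/dy from the near-wall velocity
    gradient; one-sided difference with v[0] sampled at the wall."""
    return MU * (v[1] - v[0]) / dy

y = np.linspace(0.0, 2.0e-3, 21)       # distance from wall to centerline (m)
R = y[-1]
v = 0.5 * (1.0 - ((R - y) / R) ** 2)   # toy parabolic profile, zero at wall
tau = wss_from_profile(v, dy=y[1] - y[0])
```

For this profile the analytic wall gradient is 2 * 0.5 / R = 500 1/s, so tau is about 1.6 Pa; the one-sided difference slightly underestimates it, which hints at why the paper optimizes algorithm parameters against phantoms with known WSS.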
Bending Moment Calculations for Piles Based on the Finite Element Method
Directory of Open Access Journals (Sweden)
Yu-xin Jie
2013-01-01
Full Text Available Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, pile, and sheet pile wall were made to investigate bending moment computational methods. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated from stress and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grids across the pile section, it sometimes produces fluctuating results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet pile wall, although the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis.
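Computing the bending moment from displacement, as compared in the study, amounts to a second derivative times the flexural rigidity; a finite-difference sketch on a toy displacement field (EI and the profile are hypothetical, not from the paper):

```python
import numpy as np

def bending_moment_from_displacement(w, dx, EI):
    """M(x) = EI * d2w/dx2, via central finite differences on the lateral
    displacement profile w sampled at uniform spacing dx."""
    d2w = (w[:-2] - 2.0 * w[1:-1] + w[2:]) / dx**2
    return EI * d2w

x = np.linspace(0.0, 1.0, 101)
w = 0.01 * x**2            # toy displacement field with constant curvature
M = bending_moment_from_displacement(w, dx=x[1] - x[0], EI=1000.0)
```

The central difference is exact for a quadratic profile, so M is constant here; on noisy FEM displacement output the same second difference amplifies noise, which is one plausible source of the fluctuating results the abstract mentions.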
Nonlinear optimization method of ship floating condition calculation in wave based on vector
Ding, Ning; Yu, Jian-xing
2014-08-01
Ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of vector operations. The resulting form is a nonlinear optimization problem, which can be solved using the penalty function method with constant coefficients, and the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and buoyant centre are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the buoyancy center and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient: it can calculate the ship floating condition in regular waves while simplifying the calculation and improving the computational efficiency and the precision of the results.
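The dichotomy step can be illustrated by finding the equilibrium draught of an idealized box hull, where the buoyancy integral reduces to a product (the hull dimensions and weight are hypothetical; the real method integrates the actual hull surface under the wave profile):

```python
def buoyant_force(draught, length=100.0, beam=20.0, rho_g=10050.0):
    """Buoyancy (N) of an idealized box hull; rho_g = water density * g."""
    return length * beam * draught * rho_g

def equilibrium_draught(weight, lo=0.0, hi=30.0, tol=1e-8):
    """Dichotomy (bisection) on the residual buoyancy(draught) - weight."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if buoyant_force(mid) < weight:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

draught = equilibrium_draught(weight=2.01e8)   # hypothetical ship weight, N
```

For a real hull the residual is not monotone in a single variable (draught, trim and heel interact), which is why the paper casts the problem as a penalty-function optimization rather than a plain root-find.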
A fast dose calculation method based on table lookup for IMRT optimization
International Nuclear Information System (INIS)
Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe
2003-01-01
This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
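The kernel idea amounts to precomputing a dose-per-unit-beamlet-weight matrix once with the accurate engine, after which each optimization iteration is a cheap matrix-vector product; a sketch with a random stand-in kernel (the sizes and values are illustrative, not from the note):

```python
import numpy as np

def precompute_kernel(n_voxels, n_beamlets, seed=0):
    """Stand-in for the accurate engine: dose per unit beamlet weight.
    In practice this expensive step is done once and then reused."""
    rng = np.random.default_rng(seed)
    return rng.random((n_voxels, n_beamlets))

def dose_from_kernel(kernel, weights):
    """Fast evaluation: dose is a lookup/multiply, no transport re-run."""
    return kernel @ weights

K = precompute_kernel(1000, 50)
w1 = np.ones(50)
d1 = dose_from_kernel(K, w1)
```

Linearity in the beamlet weights is what makes the lookup valid between the scheduled kernel updates the note describes; when the approximation drifts, the kernel is recomputed with the accurate engine.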
Bidwell, Colin S.
2015-05-01
A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine . This method allows the prediction of temperature and phase change of water based particles along their path and the impingement efficiency and particle impact property data on various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the . The was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m assuming a standard warm day for ice particle sizes of 5, 20 and 100 microns and a free stream particle concentration of . The impingement efficiency results showed that as particle size increased average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The particle phase change analysis results showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest 5 micron particles were warmed enough to produce melting with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6 %).
Method for stability analysis based on the Floquet theory and Vidyn calculations
Energy Technology Data Exchange (ETDEWEB)
Ganander, Hans
2005-03-01
This report presents activity 3.7 of the STEM project Aerobig and deals with aeroelastic stability of the complete wind turbine structure in operation. As wind turbine sizes increase, dynamic couplings become more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy by using optimisation methods lowers design margins, which in turn makes questions about the stability of the turbines more important. The main objective of the project is to develop a general stability analysis tool, based on the VIDYN methodology for the turbine dynamic equations and the Floquet theory for the stability analysis. The reason for selecting the Floquet theory is that it is independent of the number of blades, and thus can be used for 2-bladed as well as 3-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of reducing fatigue also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed: the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. It is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version. Later versions would include state space formulations of these forces, as well as of the controllers of turbine rotation speed, yaw direction and pitch angle.
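The Floquet machinery referred to above can be sketched on a scalar periodic system: integrate the fundamental (monodromy) matrix over one period and inspect its eigenvalues, the Floquet multipliers (the Mathieu-type coefficients below are illustrative, not a turbine model):

```python
import numpy as np

def monodromy(a=2.0, b=0.2, period=2 * np.pi, steps=2000):
    """Integrate the fundamental matrix of x'' + (a + b*cos t)*x = 0
    over one period with RK4; its eigenvalues are Floquet multipliers."""
    def rhs(t, Y):
        A = np.array([[0.0, 1.0], [-(a + b * np.cos(t)), 0.0]])
        return A @ Y

    Y = np.eye(2)
    h = period / steps
    t = 0.0
    for _ in range(steps):
        k1 = rhs(t, Y)
        k2 = rhs(t + h / 2, Y + h / 2 * k1)
        k3 = rhs(t + h / 2, Y + h / 2 * k2)
        k4 = rhs(t + h, Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return np.linalg.eigvals(Y)

mults = monodromy()
stable = bool(np.all(np.abs(mults) <= 1.0 + 1e-6))
```

The system is stable when all multipliers lie within the unit circle; for this undamped example they lie on it. The same recipe is blade-count independent, which is the property the report cites for choosing Floquet theory.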
DEFF Research Database (Denmark)
Olesen, Bjarne W.; de Carli, Michele
2011-01-01
According to the Energy Performance of Buildings Directive (EPBD), all new European buildings (residential, commercial, industrial, etc.) must since 2006 have an energy declaration based on the calculated energy performance of the building, including heating, ventilating, cooling and lighting systems. This energy declaration must refer to the primary energy or CO2 emissions. The European Organization for Standardization (CEN) has prepared a series of standards for energy performance calculations for buildings and systems. This paper presents the related standards for heating systems. The additional system loss can amount to up to 20% of the building energy demand and depends on the type of heat emitter, type of control, pump and boiler. Keywords: Heating systems; CEN standards; Energy performance; Calculation methods
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)
2011-07-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
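A one-dimensional caricature of the zero-variance idea (not the MCNP5 implementation): when histories are sampled from a density proportional to their contribution, the analogue of biasing the transport kernels with the adjoint function, every history carries exactly the same score:

```python
import random

def analog_estimate(f, n, rng):
    """Analog MC: sample x uniformly on [0, 1), score f(x)."""
    return [f(rng.random()) for _ in range(n)]

def zero_variance_estimate(f, n, rng):
    """Importance sampling with density p(x) = 3x^2, proportional to f:
    the score f(x)/p(x) is identical for every history."""
    scores = []
    for _ in range(n):
        x = rng.random() ** (1.0 / 3.0)   # inverse-CDF sample of p
        scores.append(f(x) / (3.0 * x * x))
    return scores

f = lambda x: 3.0 * x * x                 # integral over [0, 1] equals 1
rng = random.Random(42)
analog = analog_estimate(f, 2000, rng)
zv = zero_variance_estimate(f, 2000, rng)
```

In transport problems the exact contribution function is the adjoint flux, which is only known approximately (here, from a deterministic calculation), so the variance is reduced rather than literally zero.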
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, and the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was calculated. The arc image at the 794.8 nm spectral line was captured by a high speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution in pulsed TIG welding.
Rohendi, Keukeu; Putra, Ilham Eka
2016-01-01
Sinarmas currently offers several featured insurance services. To perform its function as a good insurance company, reform of several services is needed: the calculation of insurance premiums, carried out by marketing staff using a calculator, interferes with marketing activities; printing of insurance policies is slow; the automobile claims process requires the customer to come to the ASM office; printing of Work Orders (SPK) is slow; and it is difficult to recap custome...
Study on the acceleration of the neutronics calculation based on GPGPU
International Nuclear Information System (INIS)
Ohoka, Y.; Tatsumi, M.
2007-01-01
The cost of reactor physics calculations tends to become higher with more detailed treatment in the physics models and computational algorithms. For example, SCOPE2 requires considerably high computational costs for multi-group transport calculation in 3-D pin-by-pin geometry. In this paper, the applicability of GPGPU to acceleration of neutronics calculations is discussed. First, the performance and accuracy of basic matrix calculations with fundamental arithmetic operators and the exponential function are studied. The calculation was performed on a machine with a 3.2 GHz Pentium 4 and an nVIDIA GeForce7800GTX GPU, using a test program written in C++, OpenGL and GLSL on Linux. When the matrix size becomes large, the calculation on the GPU is 10-50 times faster than that on the CPU for fundamental arithmetic operators. For the exponential function, calculation on the GPU is 270-370 times faster than that on the CPU. The precision in all cases is equivalent to that on the CPU, within the IEEE754 single-precision criterion (10^-6). Next, GPGPU is applied to a functional module in SCOPE2. In the present study, as the first step of GPGPU application, calculations in a small geometry are tested. The performance gain by GPGPU in this application was relatively modest, approximately 15%, compared to the feasibility study. This is because the part to which GPGPU was applied had an appropriate structure for GPGPU implementation but carried only a small fraction of the computational load. For more substantial acceleration, it is important to consider various factors such as ease of implementation, fraction of computational load, and the bottleneck in data transfer between GPU and CPU. (authors)
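The single-precision check reported above can be mimicked on the CPU by comparing float32 and float64 evaluations of the exponential (this does not reproduce the GPU shader path, only the precision criterion; the input range is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=(256, 256))

exp64 = np.exp(x)                        # double-precision reference (CPU path)
exp32 = np.exp(x.astype(np.float32))     # single precision, as on the GPU

# Worst-case relative deviation of the single-precision result
rel_err = float(np.max(np.abs(exp32.astype(np.float64) - exp64) / exp64))
```

For moderate arguments, float32 rounding of both the input and the result keeps the relative error a few parts in 10^7, comfortably within the 10^-6 criterion the paper quotes.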
Energy Technology Data Exchange (ETDEWEB)
Dai, Wen-Wu [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Zhao, Zong-Yan, E-mail: zzy@kmust.edu.cn [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Jiangsu Provincial Key Laboratory for Nanotechnology, Nanjing University, Nanjing 210093 (China)
2017-06-01
Highlights: • Heterostructure construction is an effective way to enhance photocatalytic performance. • Graphene-like materials and BiOI were placed in contact and formed van der Waals heterostructures. • The band edge positions of GO/g-C3N4 and BiOI changed to form a standard type-II heterojunction. • 2D materials can promote the separation of photo-generated electron-hole pairs in BiOI. - Abstract: Heterostructure construction is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and to couple the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of the conduction band. To address this problem, it is meaningful to find materials that possess a suitable band gap, a proper band edge position, and high carrier mobility to combine with BiOI to form a heterostructure. In this study, graphene-based materials (graphene, graphene oxide, and g-C3N4) were chosen as candidates for this purpose. The charge transfer, interface interaction, and band offsets are analyzed in detail by DFT calculations. Results indicated that graphene-based materials and BiOI come into contact and form van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C3N4 and BiOI changed with the Fermi level and formed a standard type-II heterojunction. In addition, the overall analysis of charge density difference, Mulliken population, and band offsets indicated that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, combining BiOI with 2D materials to construct heterostructures not only makes use of their uniquely high electron mobility, but also can adjust the position of the energy bands and
Calculation and analysis of the source term of the reactor core based on different data libraries
International Nuclear Information System (INIS)
Chen Haiying; Zhang Chunming; Wang Shaowei; Lan Bing; Liu Qiaofeng; Han Jingru
2014-01-01
The nuclear fuel in a reactor core produces a large amount of radioactive nuclides in the fission process. ORIGEN-S can calculate the accumulation and decay of radioactive nuclides in the core using various forms of data libraries, including the card-image library, the binary library, and the ORIGEN-S cross section library generated by ARP through an interpolation method. In this paper, the information in each data library is described, and the reactor core inventory was calculated using the card-image library and the ARP library. The change of the radioactivity concentration of typical nuclides with fuel burnup was analyzed. The results showed that the influence of the data library on the calculated nuclide radioactivity varied from nuclide to nuclide. Compared to the card-image library, the activities of a small number of nuclides calculated with the ARP library were larger, while the activities of 134Cs and 136Cs were about 15% smaller. For some typical nuclides, the difference between the activities calculated with the two libraries increased with deepening fuel burnup; however, the ratios of the nuclide activities changed differently. (authors)
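Codes like ORIGEN-S solve large systems of coupled production-decay equations; the smallest analytic case is a two-member chain, shown here with hypothetical half-lives (with half-lives of 8 and 2 units, the prefactor λ1/(λ2 - λ1) is exactly 1/3):

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Bateman solution for the daughter population in the chain
    parent -> daughter -> stable, starting from pure parent."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                          - math.exp(-lam2 * t))

# Hypothetical half-lives of 8 and 2 (arbitrary time units)
lam1 = math.log(2) / 8.0
lam2 = math.log(2) / 2.0
n2 = bateman_daughter(1.0e6, lam1, lam2, t=4.0)
activity2 = lam2 * n2     # daughter activity: decay constant times population
```

A full inventory code chains thousands of such couplings (decay, fission yield, and capture production terms) and integrates them numerically rather than analytically, with nuclear data drawn from libraries such as those compared above.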
Effectiveness of a computer based medication calculation education and testing programme for nurses.
Sherriff, Karen; Burston, Sarah; Wallis, Marianne
2012-01-01
The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Jansen, J. T. M.; Shrimpton, P. C.; Zankl, M.
2009-01-01
This paper discusses the simulation of contemporary computed tomography (CT) scanners using Monte Carlo calculation methods to derive normalized organ doses, which enable hospital physicists to estimate typical organ and effective doses for CT examinations. The hardware used in a small PC cluster at the Health Protection Agency (HPA) for these calculations is described. Investigations concerning the optimization of software, including the radiation transport codes MCNP5 and MCNPX and the Intel and PGI FORTRAN compilers, are presented in relation to results and calculation speed. Differences in approach for modelling the X-ray source are described and their influences analysed. Comparisons with previously published HPA calculations from the early 1990s proved satisfactory for the purposes of quality assurance and are presented in terms of organ dose ratios for whole-body exposure and differences in organ location. Influences on normalized effective dose are discussed in relation to the choice of cross-section library, CT scanner technology (contemporary multi-slice versus single-slice), the definition of effective dose (1990 and 2007 versions) and the anthropomorphic phantom (mathematical and voxel). The results illustrate the practical need for the updated scanner-specific dose coefficients presently being calculated at HPA, in order to facilitate improved dosimetry for contemporary CT practice. (authors)
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-01-01
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for routine reactor analysis calculations could be realized in the near future.
Python-based framework for coupled MC-TH reactor calculations
International Nuclear Information System (INIS)
Travleev, A.A.; Molitor, R.; Sanchez, V.
2013-01-01
We have developed a set of Python packages that provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be grouped by functionality into three layers: low-level interfaces, general model classes and high-level interfaces. A low-level interface mediates between Python and a particular code. General model classes describe the calculation geometry and the meshes used to represent system variables. High-level interface classes convert geometry described with the general model classes into instances of the low-level interface classes, and put the results of code calculations (read by the low-level interface classes) back into the general model. The implemented Python interfaces to the Monte Carlo neutronics code MCNP and the thermal-hydraulics code SCF allow efficient description of calculation models and provide a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces.
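The three-layer organization described in the abstract can be sketched in Python. All class, method and card names below are invented for illustration; they are not the authors' actual package API.

```python
# Hypothetical sketch of the three-layer design: low-level interface,
# general model class, and a high-level converter between them.

class McnpInterface:
    """Low-level interface: collects input cards for one particular code."""
    def __init__(self):
        self.cards = []

    def add_card(self, text):
        self.cards.append(text)

    def deck(self):
        return "\n".join(self.cards)


class PinModel:
    """General model class: code-agnostic description of a pin cell."""
    def __init__(self, fuel_radius_cm, pitch_cm):
        self.fuel_radius_cm = fuel_radius_cm
        self.pitch_cm = pitch_cm


def to_mcnp(model):
    """High-level interface: converts a general model into low-level cards."""
    mc = McnpInterface()
    mc.add_card(f"c pin cell, pitch {model.pitch_cm} cm")
    mc.add_card(f"1 rcc 0 0 0  0 0 100  {model.fuel_radius_cm}")
    return mc


pin = PinModel(fuel_radius_cm=0.41, pitch_cm=1.26)
print(to_mcnp(pin).deck())
```

The point of the pattern is that a thermal-hydraulics converter (`to_scf`, say) could consume the same `PinModel`, which is what makes coupled calculations convenient to organize.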
International Nuclear Information System (INIS)
Puigdomenech, I.; Bruno, J.
1995-04-01
Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar, and Δ_rC°_p,m values are given for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the database have also been estimated, and therefore the temperature extrapolations are largely based on estimates. The uncertainties arising from these calculations are described. Using the database developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs
International Nuclear Information System (INIS)
Yousefkhani, M. Baghban; Ghadamian, H.; Massoudi, A.; Aminy, M.
2017-01-01
Highlights: • Investigation of fuel utilization in a PEMFC within a transport-phenomena approach. • The main defect of the theoretical calculation of U_F stems from the Nernst equation. • U_F has a differential nature, so differential equations are employed in the theoretical calculation. - Abstract: In this study, the fuel utilization (U_F) of a PEMFC has been investigated within a transport-phenomena approach. A proper description of U_F and the measurement of fuel consumption are the main factors in obtaining U_F. The differences between experimental results and theoretical calculations reported in previous research articles reveal that the available theoretical equations should be studied further, starting from the fundamentals of U_F. Hence, a substantial issue is that the description of U_F must satisfy these principles before it can be validated against experimental results. The results of this study indicate that U_F and power grow by 1.1% and 1%, respectively, per degree of temperature increase. In addition, for every 1 kPa of pressure increment, U_F improves considerably, by 0.25% at 40 °C and 0.173% at 80 °C. Furthermore, at constant temperature, the power improves by 22% per one atmosphere of pressure increase. The results of this research show that U_F has a differential nature; therefore, differential equations must be employed for an accurate theoretical calculation. Accordingly, it seems that the main defect of the theoretical calculation stems from the Nernst equation, which can be modified by a differential-nature coefficient.
International Nuclear Information System (INIS)
Jin, L; Eldib, A; Li, J; Price, R; Ma, C
2015-01-01
Purpose: Uneven nose surfaces, air cavities underneath, and the use of bolus introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatments of nose skin with a pencil-beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for the MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares the 2D dose distributions of a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage compared to the conventional plan, which showed a target dose deficit at the field edge. The conventional plan delivered a higher dose to normal tissue underneath the nose skin, while the MERT plan achieved improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the need for bolus, which often introduces dose delivery uncertainty due to air gaps that may exist between the bolus and the skin.
Cell verification of parallel burnup calculation program MCBMPI based on MPI
International Nuclear Information System (INIS)
Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding
2014-01-01
The parallel burnup calculation program MCBMPI was developed. The program is modularized: the parallel MCNP5 program MCNP5MPI is employed as the neutron transport calculation module, and a combination of three solution methods, the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration, is used to solve the burnup equations. An MPI parallel zone-decomposition strategy is built into the program. The program system consists only of MCNP5MPI and a burnup subroutine; the latter performs three main functions: zone decomposition, nuclide transmutation and decay, and data exchange with MCNP5MPI. The program was verified against the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program is applicable to burnup calculations of multiple zones, and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)
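The matrix exponential technique mentioned in this abstract can be illustrated on a toy two-nuclide decay chain 1 → 2; the decay constants below are invented for the example (not data from the paper), and the analytic Bateman solution serves as a check.

```python
import numpy as np

# Burnup/decay equations dN/dt = A N for a chain 1 -> 2,
# solved with exp(A t) computed by a truncated Taylor series.
lam1, lam2 = 0.3, 0.05                    # illustrative decay constants (1/s)
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
N0 = np.array([1.0, 0.0])                 # start with pure nuclide 1
t = 10.0

M = np.eye(2)                             # accumulates exp(A t)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k             # (A t)^k / k!
    M = M + term
N = M @ N0

# Analytic Bateman solution for comparison
N1 = np.exp(-lam1 * t)
N2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
print(N, N1, N2)
```

Production codes handle hundreds of nuclides and stiff matrices, which is why MCBMPI combines the matrix exponential with TTA and Gauss-Seidel methods rather than relying on one scheme alone.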
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We therefore focus on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as the GPGPU board. The performance of the GPU is evaluated in comparison with that of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations on a GPU can significantly reduce the run time compared with a conventional CPU, even for a native GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and the thread block size.
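The stencil that dominates FDTD run time, and that maps naturally onto one GPU thread per cell, looks as follows in a one-dimensional Python sketch (the paper itself treats the three-dimensional case in CUDA; grid size, step count and source are illustrative).

```python
import numpy as np

# One-dimensional FDTD leapfrog update (free space, Courant number 0.5).
nz, nsteps = 200, 180
ez = np.zeros(nz)        # E-field at integer grid points
hy = np.zeros(nz - 1)    # H-field at half-integer grid points
for n in range(nsteps):
    hy += 0.5 * (ez[1:] - ez[:-1])        # H update
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])  # E update (ends held at 0: PEC walls)
    ez[nz // 2] += np.exp(-((n - 30.0) ** 2) / 100.0)  # soft Gaussian source
print(float(np.abs(ez).max()))
```

Each cell's update depends only on its immediate neighbours from the previous half step, which is why the scheme parallelizes so well and why the speed ratio depends on how the domain is partitioned into thread blocks.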
Lara, A; Riquelme, M; Vöhringer-Martinez, E
2018-05-11
Partition coefficients serve in various areas, such as pharmacology and environmental sciences, to predict the hydrophobicity of different substances. Recently, they have also been used to address the accuracy of force fields for various organic compounds and specifically the methylated DNA bases. In this study, atomic charges were derived by different partitioning methods (Hirshfeld-I and Minimal Basis Iterative Stockholder) directly from the electron density obtained by electronic structure calculations in a vacuum, with an implicit solvation model, or with explicit solvation taking the dynamics of the solute and the solvent into account. To test the ability of these charges to describe electrostatic interactions in force fields for condensed phases, the original atomic charges of the AMBER99 force field were replaced with the new atomic charges and combined with different solvent models to obtain the hydration and chloroform solvation free energies by molecular dynamics simulations. Chloroform-water partition coefficients derived from the obtained free energies were compared to experimental values and to values previously reported with the GAFF or the AMBER99 force field. The results show that good agreement with experimental data is obtained when the polarization of the electron density by the solvent has been taken into account, and when the energy needed to polarize the electron density of the solute has been considered in the transfer free energy. These results were further confirmed by hydration free energies of polar and aromatic amino acid side chain analogs. Comparison of the two partitioning methods, Hirshfeld-I and Minimal Basis Iterative Stockholder (MBIS), revealed some deficiencies in the Hirshfeld-I method related to the unstable isolated anionic nitrogen pro-atom used in the method. Hydration free energies and partitioning coefficients obtained with atomic charges from the MBIS partitioning method accounting for polarization by the implicit solvation model
Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi
2017-11-01
While point source pollution has gradually been brought under control in recent years, the non-point source pollution problem has become increasingly prominent. Receiving waters are frequently polluted by the initial stormwater from the separate stormwater system and by wastewater entering stormwater pipes from sewage pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved immediately for initial stormwater pollution management. An accurate calculation of the intercepted runoff depth provides a solid foundation for selecting the appropriate size of intercepting facilities in drainage and interception projects. This study establishes a model of the separate stormwater system for the Yishan Building watershed of Fuzhou City using InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the discharge at each outlet after rainfall. The intercepted runoff depth is calculated from the stormwater quality and from the environmental capacity of the receiving waters. The average intercepted runoff depth over six rainfall events is 4.1 mm when calculated from stormwater quality and 4.4 mm when calculated from the environmental capacity of the receiving waters. The intercepted runoff depth thus differs depending on the basis of calculation. The selection of the intercepted runoff depth depends on the goal of water quality control, the self-purification capacity of the water bodies, and other factors of the region.
Directory of Open Access Journals (Sweden)
Fuda Guo
2016-01-01
Full Text Available The phase stability and the mechanical, electronic, and thermodynamic properties of In-Zr compounds have been explored using first-principles calculations based on density functional theory (DFT). The calculated formation enthalpies show that these compounds are all thermodynamically stable. Information on the electronic structure indicates that they possess metallic characteristics and that there is a common hybridization between In-p and Zr-d states near the Fermi level. Elastic properties have also been taken into consideration. The calculated ratio of the bulk to shear modulus (B/G) indicates that InZr3 has the strongest deformation resistance. Increasing the indium content results in a near-linear decrease of the bulk modulus and Young's modulus. The calculated theoretical hardness of α-In3Zr is higher than that of the other In-Zr compounds.
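The B/G criterion referred to in this abstract can be sketched briefly. The Voigt-average formulas for a cubic crystal are standard; the elastic constants below are illustrative placeholders, not values from the paper.

```python
# Voigt-average bulk and shear moduli for a cubic crystal and the
# Pugh ratio B/G.
def voigt_moduli_cubic(c11, c12, c44):
    B = (c11 + 2.0 * c12) / 3.0           # Voigt bulk modulus
    G = (c11 - c12 + 3.0 * c44) / 5.0     # Voigt shear modulus
    return B, G

B, G = voigt_moduli_cubic(c11=140.0, c12=90.0, c44=50.0)  # GPa, illustrative
print(B, G, B / G)  # B/G > 1.75 suggests ductile behavior (Pugh criterion)
```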
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
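A generic low-rank pivoted Cholesky decomposition, the basic operation behind the Cholesky vectors discussed above, can be sketched as follows. This is a textbook version applied to a random positive semi-definite matrix, not the authors' grid-free implementation.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Low-rank pivoted Cholesky factorization M ~ L @ L.T of a PSD matrix.

    Columns are extracted in order of the largest remaining diagonal
    (the pivot) until the residual diagonal falls below `tol`.
    """
    M = M.astype(float)
    d = np.diag(M).copy()      # residual diagonal
    vectors = []
    while d.max() > tol:
        p = int(np.argmax(d))  # pivot index
        col = M[:, p].copy()
        for v in vectors:      # subtract contribution of previous vectors
            col -= v[p] * v
        v = col / np.sqrt(col[p])
        vectors.append(v)
        d -= v * v             # update residual diagonal
    return np.array(vectors).T  # shape (n, rank)

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 3))
M = B @ B.T                    # rank-3 PSD test matrix
L = pivoted_cholesky(M)
print(L.shape)
```

Because the iteration stops once the residual diagonal is negligible, the number of Cholesky vectors tracks the numerical rank of the matrix, which is what makes the downscaling step in the abstract pay off.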
Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling
International Nuclear Information System (INIS)
Xu Wei-Xin; Li Yang; Zhang, John Z. H.
2012-01-01
Potential of mean force (PMF) with respect to localized reaction coordinates (RCs) such as distance is often applied to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculation of PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method and the umbrella sampling method for the calculation of PMF as a function of a global RC from the coarse-grained Langevin dynamics simulations for a model protein. The method yields the folding free energy profile projected onto a global RC, which is in accord with benchmark results. With this method rare global events would be sufficiently sampled because the biased potential can be used for restricting the global conformation to specific regions during free energy calculations. The strategy presented can also be utilized in calculating the global intra- and intermolecular PMF at more detailed levels. (cross-disciplinary physics and related areas of science and technology)
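A minimal WHAM iteration over umbrella windows, of the kind combined with umbrella sampling in this abstract, can be sketched on synthetic one-dimensional data (harmonic biases, kT = 1). This is a generic textbook scheme, not the paper's implementation; the toy samples correspond to a flat underlying PMF, so the recovered profile should be roughly constant.

```python
import numpy as np

# Synthetic umbrella-sampling data: harmonic bias windows on a flat PMF.
rng = np.random.default_rng(1)
centers = np.linspace(-1.0, 1.0, 5)       # umbrella window centers
k_spring = 10.0                           # bias spring constant (kT units)
samples = [rng.normal(c, 1.0 / np.sqrt(k_spring), 2000) for c in centers]

edges = np.linspace(-1.5, 1.5, 61)
mids = 0.5 * (edges[:-1] + edges[1:])
H = np.array([np.histogram(s, edges)[0] for s in samples])  # counts per window
N = H.sum(axis=1)                                           # binned samples per window
bias = 0.5 * k_spring * (mids[None, :] - centers[:, None]) ** 2
cb = np.exp(-bias)                        # bias Boltzmann factors c_i(x)

# Self-consistent WHAM equations for the unbiased density p(x).
f = np.ones(len(centers))                 # window free-energy factors
for _ in range(1000):
    p = H.sum(axis=0) / (N[:, None] * f[:, None] * cb).sum(axis=0)
    f_new = 1.0 / (cb * p[None, :]).sum(axis=1)
    if np.allclose(f, f_new, rtol=1e-12):
        break
    f = f_new
pmf = -np.log(p / p.max())                # PMF in kT, minimum shifted to 0
print(float(pmf.max()))
```

For a global reaction coordinate, as in the paper, only the definition of the coordinate and the biasing potential change; the histogram reweighting step stays the same.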
DEFF Research Database (Denmark)
Sauer, Stephan P. A.; Pitzner-Frydendahl, Henrik Frank; Buse, Mogens
2015-01-01
The performance of three polarization propagator methods, the original SOPPA method as well as SOPPA(CCSD) and RPA(D), in the calculation of vertical electronic excitation energies and oscillator strengths is investigated for a large benchmark set of 28 medium-size molecules with 139 singlet and 71 triplet excited states. The results are compared...
Czech Academy of Sciences Publication Activity Database
Zelinka, Jiří; Oral, Martin; Radlička, Tomáš
2015-01-01
Roč. 21, S4 (2015), s. 246-251 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords : electron optical system * calculations of current density Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.730, year: 2015
Development of a risk-based mine closure cost calculation model
CSIR Research Space (South Africa)
Du Plessis, A
2006-06-01
Full Text Available This research is important because there are currently a number of mines that do not have sufficient financial provision to close and rehabilitate them. The magnitude of the shortfall could be reduced or eliminated if the closure cost calculation...
CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals
Czech Academy of Sciences Publication Activity Database
Červinka, C.; Fulem, Michal; Růžička, K.
2016-01-01
Roč. 144, č. 6 (2016), 1-15, č. článku 064505. ISSN 0021-9606 Institutional support: RVO:68378271 Keywords : density-functional theory * organic oxygen compounds * quantum -mechanical calculations Subject RIV: BJ - Thermodynamics Impact factor: 2.965, year: 2016
Energy Technology Data Exchange (ETDEWEB)
Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary's Hospital, Seoul (Korea, Republic of)
2002-07-01
Accurate dose calculation in radiation treatment planning is of utmost importance for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the accurate effective dose in patients. Many methods have been proposed to solve the inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil-beam-kernel-based convolution/superposition methods have also been proposed to correct for these effects, and many commercial treatment planning systems now adopt this algorithm as their dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil-beam-kernel-based treatment planning system against Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. In general clinical situations, however, the pencil-beam-kernel-based convolution algorithm is considered a valuable tool for dose calculation.
SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations
International Nuclear Information System (INIS)
Xu, H; Guerrero, M; Prado, K; Yi, B
2016-01-01
Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
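The chain of dosimetric factors behind a TG-71-style electron MU calculation, and the polynomial fitting of "missing" data that the minimum-data-set approach relies on, can be sketched as follows. All factor values are illustrative placeholders, not clinical data, and the function is a simplified schematic of the protocol's factor product.

```python
import numpy as np

def electron_mu(prescribed_dose_cGy, ref_output_cGy_per_MU,
                pdd_percent, applicator_factor, cutout_factor, air_gap_factor):
    """MU = prescribed dose / (dose delivered per MU at the calc point)."""
    dose_per_mu = (ref_output_cGy_per_MU * pdd_percent / 100.0
                   * applicator_factor * cutout_factor * air_gap_factor)
    return prescribed_dose_cGy / dose_per_mu

mu = electron_mu(200.0, 1.0, 95.0, 0.985, 0.962, 0.970)

# Approximating an unmeasured cutout factor by a polynomial fit to the
# measured subset, as the minimum data set does (illustrative numbers):
sizes = np.array([3.0, 4.0, 6.0, 10.0])           # cutout side lengths (cm)
factors = np.array([0.930, 0.962, 0.985, 1.000])  # measured cutout factors
coef = np.polyfit(sizes, factors, 2)
f5 = float(np.polyval(coef, 5.0))                 # interpolated 5 cm cutout
print(round(mu, 1), round(f5, 3))
```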
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
Assawaroongruengchot, Monchai
computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive using the adjoint functions, in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have selected this parameter for optimization studies. We consider optimization and adjoint sensitivity techniques for the adjustment of the CVR at beginning of burnup cycle (BOC) and of k_eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for the CVR-BOC and k_eff-EOC adjustments are studied: (1) dysprosium density in the central pin with uranium enrichment in the outer fuel rings, (2) dysprosium density and uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficients at EOC, we perform constant-power burnup/depletion calculations for 600 full-power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of the CVR and the eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and k_eff-EOC adjustment of ACR lattices with gadolinium in the central pin. Finally we apply these techniques to the CVR
International Nuclear Information System (INIS)
Valle G, E. del; Mugica R, C.A.
2005-01-01
In our country, in previous congresses, Gómez et al. carried out reactivity calculations based on the solution of the diffusion equation for one energy group using nodal methods in one dimension and the TPL (Linear Perturbation Theory) approach. Later, Mugica extended the application to the multigroup case in both one and two dimensions (X-Y geometry) with excellent results. In the present work, similar calculations are carried out, this time based on the solution of the neutron transport equation in X-Y geometry using nodal methods and again the TPL approximation. The idea is to provide a calculation method that allows the reactivity to be obtained quickly by solving the direct problem as well as the adjoint problem of the unperturbed system. A test problem is described for which results for the effective multiplication factor are provided, and some conclusions are offered. (Author)
Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy
Energy Technology Data Exchange (ETDEWEB)
Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)
2014-06-15
Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via a single linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in the 12 tissues that showed the largest differences between single energy CT (SECT) and DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations from ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
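The link between ϱ_e, the I-value, and the proton stopping-power ratio analyzed above follows from the Bethe formula. The following is a minimal sketch, not the authors' calibration; the 75 eV water I-value, the 200 MeV proton energy, and the "bone-like" tissue values are illustrative assumptions, and shell and density corrections are neglected:

```python
import math

def proton_spr(rho_e, i_tissue_ev, i_water_ev=75.0, e_kin_mev=200.0):
    """Stopping-power ratio of a tissue relative to water (bare Bethe
    formula; shell and density corrections neglected)."""
    m_p = 938.272          # proton rest energy, MeV
    m_e_c2 = 0.511e6       # electron rest energy, eV
    gamma = 1.0 + e_kin_mev / m_p
    beta2 = 1.0 - 1.0 / gamma**2

    def bethe_log(i_ev):   # stopping number per electron
        return math.log(2.0 * m_e_c2 * beta2 / (i_ev * (1.0 - beta2))) - beta2

    return rho_e * bethe_log(i_tissue_ev) / bethe_log(i_water_ev)

# Water maps to SPR = 1 by construction; a denser, higher-I "bone-like"
# medium (rho_e = 1.7, I = 100 eV) gets an SPR below its electron density,
# since the larger I-value reduces the stopping number.
print(proton_spr(1.0, 75.0))
print(proton_spr(1.7, 100.0))
```

This makes explicit why both quantities from DECT are needed: ϱ_e sets the scale, and the I-value (via Z_eff) supplies the logarithmic correction.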
International Nuclear Information System (INIS)
Kljenak, I.; Mavko, B.; Babic, M.
2005-01-01
Full text of publication follows: The modelling and simulation of atmosphere mixing and stratification in nuclear power plant containments is a topic which is currently being intensely investigated. With the increase of computer power, it has now become possible to model these phenomena with a local instantaneous description, using so-called Computational Fluid Dynamics (CFD) codes. However, calculations with these codes still take relatively long times. An alternative, faster approach, which is also being applied, is to model a nonhomogeneous atmosphere with lumped-parameter codes by dividing larger control volumes into smaller volumes, in which conditions are modelled as homogeneous. The flow between the smaller volumes is modelled using one-dimensional approaches, which include the prescription of flow loss coefficients. However, some authors have questioned this approach, as it appears that atmosphere stratification may sometimes be well simulated only by adjusting the flow loss coefficients to adequate 'artificial' values that are case-dependent. To start resolving this issue, a modelling of the nonhomogeneous atmosphere with a lumped-parameter code is proposed in which the subdivision of a large volume into smaller volumes is based on the results of CFD simulations. The basic idea is to use the results of a CFD simulation to define regions in which the flow velocities have roughly the same direction. These regions are then modelled as control volumes in a lumped-parameter model. In the present work, this procedure was applied to the simulation of an experiment on atmosphere mixing and stratification performed in the TOSQAN facility. The facility is located at the Institut de Radioprotection et de Surete Nucleaire (IRSN) in Saclay (France) and consists of a cylindrical vessel (volume: 7 m³), into which gases are injected. In the experiment, which was also proposed for the OECD/NEA International Standard Problem No. 47, air was initially present in the vessel, and
International Nuclear Information System (INIS)
Hongo, Shozo; Yamaguchi, Hiroshi; Takeshita, Hiroshi; Iwai, Satoshi.
1994-01-01
A computer program named IDES was developed in BASIC for a personal computer and translated into C for an engineering workstation. IDES carries out the internal dose calculations described in ICRP Publication 30 and implements the transformation method, an empirical method to estimate absorbed fractions for physiques different from the ICRP Reference Man. The program consists of three tasks: production of SAFs for Japanese subjects including children, production of SEEs (Specific Effective Energies), and calculation of effective dose equivalents. Each task and its corresponding data file appear as a module, so as to meet future requirements for revisions of the related data. The usefulness of IDES is illustrated with the case in which five Japanese age groups orally ingest Co-60 or Mn-54. (author)
Navier-Stokes calculations on multi-element airfoils using a chimera-based solver
Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.
1993-01-01
A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids which allow great flexibility of grid arrangement and simplify grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two-element case. Solutions are obtained using the thin-layer form of the Reynolds-averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one-equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good agreement with experimental data, which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.
Calculation of T_C in a normal-superconductor bilayer using the microscopic-based Usadel theory
International Nuclear Information System (INIS)
Martinis, John M.; Hilton, G.C.; Irwin, K.D.; Wollman, D.A.
2000-01-01
The Usadel equations give a theory of superconductivity, valid in the diffusive limit, that is a generalization of the microscopic equations of the BCS theory. Because the theory is expressed in a tractable and physical form, even experimentalists can analytically and numerically calculate detailed properties of superconductors in physically relevant geometries. Here, we describe the Usadel equations and review their solution in the case of predicting the transition temperature T_C of a thin normal-superconductor bilayer. We also extend this calculation for thicker bilayers to show the dependence on the resistivity of the films. These results, which show a dependence on both the interface resistance and the heat capacity of the films, provide important guidance on fabricating bilayers with reproducible transition temperatures.
Calculations of the hurricane eye motion based on singularity propagation theory
Directory of Open Access Journals (Sweden)
Vladimir Danilov
2002-02-01
Full Text Available We discuss the possibility of using calculated singularities to forecast the dynamics of hurricanes. Our basic model is the shallow-water system. By treating the hurricane eye as a vortex-type singularity and truncating the corresponding sequence of Hugoniot-type conditions, we carry out many numerical experiments. The comparison of our results with the tracks of three actual hurricanes shows that our approach is rather fruitful.
International Nuclear Information System (INIS)
Hiller, Mauritius Michael
2015-01-01
The external radiation exposure at the former village of Metlino, Russia, was reconstructed. The Techa river in Metlino was contaminated by water from the Majak plant. The village was evacuated in 1956 and a reservoir lake was created. Absorbed doses in bricks were measured, and a model of the present-day and the historic Metlino was created for Monte Carlo calculations. By combining both, the air kerma at the shoreline could be reconstructed to evaluate the Techa River Dosimetry System.
Stur, J.; Bos, M.; van der Linden, W.E.
1984-01-01
Fast and accurate calculation procedures for pH and redox potentials are required for optimum control of automatic titrations. The procedure suggested is based on a three-dimensional titration curve V = f(pH, redox potential). All possible interactions between species in the solution, e.g., changes
Gangarapu, S.; Marcelis, A.T.M.; Zuilhof, H.
2013-01-01
The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum
GIS supported calculations of 137Cs deposition in Sweden based on precipitation data
International Nuclear Information System (INIS)
Almgren, S.; Nilsson, E.; Isaksson, M.; Erlandsson, B.
2005-01-01
¹³⁷Cs deposition maps were made using kriging interpolation in a Geographical Information System (GIS). Quarterly values of ¹³⁷Cs deposition density per unit precipitation (Bq/m²/mm) at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations of nuclear weapons fallout (NWF). The deposition density of ¹³⁷Cs resulting from the Chernobyl accident was calculated for western Sweden using precipitation data from 46 stations. The lowest levels of NWF ¹³⁷Cs deposition density were noted in northeastern and eastern Sweden and the highest levels in the western parts of Sweden. The Chernobyl ¹³⁷Cs deposition density is highest along the coast of the selected area and lowest in the southeastern part and along the middle. The sum of the calculated deposition densities from NWF and Chernobyl in western Sweden was compared to accumulated activities in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values.
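The deposition calculation itself is a product-and-sum: quarterly deposition density per unit precipitation times quarterly precipitation, accumulated over the period, followed by spatial interpolation. A sketch with made-up station values, using inverse-distance weighting as a simpler stand-in for the kriging used in the study:

```python
def station_deposition(rate_bq_per_m2_mm, precip_mm):
    """Accumulated deposition density (Bq/m2) at one station: sum over
    quarters of (deposition per mm of rain) x (rain in mm)."""
    return sum(r * p for r, p in zip(rate_bq_per_m2_mm, precip_mm))

def idw(x, y, stations, values, power=2.0):
    """Inverse-distance-weighted interpolation (stand-in for kriging)."""
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v            # exactly at a station: return its value
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
values = [station_deposition([1.0, 2.0], [10.0, 5.0]),   # 20 Bq/m2
          30.0, 40.0]
print(idw(5.0, 5.0, stations, values))   # lies between the station values
```

Kriging replaces the fixed 1/d² weights with weights derived from a fitted variogram, but the accumulate-then-interpolate structure is the same.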
Essa, Mohammed Sh.; Chiad, Bahaa T.; Hussein, Khalil A.
2018-05-01
Chemical thermal deposition techniques depend strongly on the deposition platform temperature as well as the substrate surface temperature. In this research, the thermal distribution and heat transfer were therefore calculated to optimize the temperature distribution of the deposition platform, to determine the power required for the heating element, and to improve thermal homogeneity; the thermal power dissipated from the deposition platform was also calculated. Furthermore, a thermal imager (thermal camera) was used to estimate the thermal distribution and the temperature allocation over the 400 cm² heated plate area. To reach a plate temperature of 500 °C, the plate was fitted with a 2000 W electrical heater. A stainless steel plate of 12 mm thickness was used as the heated plate and deposition platform; lab tests with an X-ray fluorescence (XRF) element analyzer checked its elemental composition and identified the stainless steel grade as 316L. The total heat loss calculated at this temperature was 612 W. A homemade heating element was used to heat the plate and could reach 450 °C in less than 15 min, as recorded by the system; the temperatures were recorded and monitored using an Arduino UNO microcontroller with a cold-junction-compensated K-thermocouple-to-digital converter (MAX6675).
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating these multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand, and network B is used to represent the original (antiderivative) function. Using the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons with the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
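The quantity all of the compared methods target is the failure probability P_f = P[g(X) < 0]. For a linear limit state with normal variables it has a closed form, which makes the Monte Carlo baseline mentioned above easy to check. A sketch with a toy limit state g = R − S (not from the paper):

```python
import math
import random

def mc_failure_prob(n=200_000, seed=1):
    """Crude Monte Carlo estimate of P[g < 0] for g = R - S,
    with resistance R ~ N(5, 1) and load S ~ N(3, 1)."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(5.0, 1.0) - rng.gauss(3.0, 1.0) < 0.0)
    return fails / n

# Exact: g ~ N(2, sqrt(2)), so beta = 2/sqrt(2) and P_f = Phi(-beta).
beta = 2.0 / math.sqrt(2.0)
exact = 0.5 * math.erfc(beta / math.sqrt(2.0))   # standard normal CDF at -beta
print(mc_failure_prob(), exact)                  # both near 0.079
```

The dual-network approach replaces this sampling loop with a learned antiderivative, trading simulation cost for a one-time training cost.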
Energy Technology Data Exchange (ETDEWEB)
Song, Chan-Ho; Park, Hee-Seong; Ha, Jea-Hyun; Jin, Hyung-Gon; Park, Seung-Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
KAERI calculates decommissioning costs and manages data from its decommissioning activity experience through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). Some countries, such as Japan and the United States, have information from NPP decommissioning experience and publish reports on decommissioning cost analysis. These reports are valuable data for comparison with decommissioning unit costs. In particular, Korea needs a method to estimate NPP decommissioning costs because it has no NPP decommissioning experience; such a method would make a more precise prediction of the decommissioning unit cost possible. Still, there are many differences between domestic and foreign calculations of the decommissioning unit cost. Typically, data are difficult to compare because detailed reports are not published. Therefore, the field of decommissioning cost estimation needs a unified framework so that an exact decommissioning cost can be provided.
Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate
Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef
2016-04-01
The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET_ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET_lys). The measured data were compared with ET_ref calculations. Daily values differed slightly over the year: ET_ref was generally overestimated at small values and rather underestimated when ET was large, which is supported also by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET_ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing deviations.
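The daily ASCE-EWRI standardized equation for a short (grass) reference surface has the well-known form ET_sz = [0.408 Δ (Rn − G) + γ (Cn/(T+273)) u₂ (es − ea)] / [Δ + γ (1 + Cd u₂)] with Cn = 900 and Cd = 0.34. A sketch with illustrative (assumed, not the study's) mid-season inputs:

```python
import math

def asce_et_ref_daily(t_mean_c, rn, g, u2, ea, pressure_kpa=101.3):
    """Daily ASCE-EWRI standardized reference ET (mm/day), short crop.
    rn, g: net radiation and soil heat flux, MJ m-2 day-1;
    u2: 2-m wind speed, m/s; ea: actual vapour pressure, kPa."""
    cn, cd = 900.0, 0.34                                  # daily, short crop
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2         # slope of es curve
    gamma = 0.000665 * pressure_kpa                       # psychrometric constant
    num = (0.408 * delta * (rn - g)
           + gamma * (cn / (t_mean_c + 273.0)) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + cd * u2))

# Illustrative summer day: 20 degC, Rn = 15 MJ m-2 day-1, u2 = 2 m/s, ea = 1.4 kPa
print(asce_et_ref_daily(20.0, 15.0, 0.0, 2.0, 1.4))   # a few mm/day
```

Deviations of lysimeter ET from this equation, as discussed above, then point to terms the standardization fixes (surface resistance, albedo) or omits (advection).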
Critical and subcritical mass calculations of fissionable nuclides based on JENDL-3.2+
International Nuclear Information System (INIS)
Okuno, H.
2002-01-01
We calculated critical and subcritical masses of 10 fissionable actinides (²³³U, ²³⁵U, ²³⁸Pu, ²³⁹Pu, ²⁴¹Pu, ²⁴²ᵐAm, ²⁴³Cm, ²⁴⁴Cm, ²⁴⁹Cf and ²⁵¹Cf) in metal and in metal-water mixtures (except ²³⁸Pu and ²⁴⁴Cm). The calculation was made with a combination of a continuous energy Monte Carlo neutron transport code, MCNP-4B2, and the latest released version of the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI, JEF-2.2, and JENDL-3.3 in its preliminary version, were also applied to find differences in results originating from different nuclear data files. For the so-called big three fissiles (²³³U, ²³⁵U and ²³⁹Pu), analyzing the criticality experiments cited in the ICSBEP Handbook validated the code-library combination, and calculation errors were consequently evaluated. Estimated critical and lower-limit critical masses of the big three in a sphere with/without a water or SS-304 reflector were supplied, and they were compared with the subcritical mass limits of ANS-8.1. (author)
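Monte Carlo transport with evaluated libraries is far beyond an abstract, but the structure of a bare-sphere criticality estimate can be shown with one-group diffusion theory: the material buckling B² = (νΣf − Σa)/D fixes the critical radius R_c ≈ π/B. A sketch with clearly illustrative constants (not JENDL data), shown only for the shape of the calculation:

```python
import math

def bare_sphere_critical_radius(nu_sigma_f, sigma_a, diff_coeff):
    """One-group diffusion-theory critical radius (cm) of a bare sphere:
    B^2 = (nu*Sigma_f - Sigma_a) / D,  R_c = pi / B
    (extrapolation distance neglected)."""
    b2 = (nu_sigma_f - sigma_a) / diff_coeff
    if b2 <= 0.0:
        raise ValueError("material is subcritical at any size")
    return math.pi / math.sqrt(b2)

def sphere_mass_kg(radius_cm, density_g_cm3):
    return density_g_cm3 * (4.0 / 3.0) * math.pi * radius_cm ** 3 / 1000.0

# Illustrative one-group constants (cm^-1 and cm); more fission per
# absorption shrinks the critical sphere.
r1 = bare_sphere_critical_radius(nu_sigma_f=0.30, sigma_a=0.15, diff_coeff=1.2)
r2 = bare_sphere_critical_radius(nu_sigma_f=0.35, sigma_a=0.15, diff_coeff=1.2)
print(r1, r2, sphere_mass_kg(r1, 18.7))
```

MCNP replaces this one-group model with continuous-energy transport, and the library comparisons in the abstract probe exactly the cross-section inputs this sketch takes as given.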
International Nuclear Information System (INIS)
Asta, M.; Foiles, S.M.; Quong, A.A.
1998-01-01
The configurational thermodynamic properties of fcc-based Al-Sc alloys and coherent Al/Al₃Sc interphase-boundary interfaces have been calculated from first principles. The computational approach used in this study combines the results of pseudopotential total-energy calculations with a cluster-expansion description of the alloy energetics. Bulk and interface configurational-thermodynamic properties are computed using a low-temperature-expansion technique. Calculated values of the {100} and {111} Al/Al₃Sc interfacial energies at zero temperature are, respectively, 192 and 226 mJ/m². The temperature dependence of the calculated interfacial free energies is found to be very weak for {100} and more appreciable for {111} orientations; the primary effect of configurational disordering at finite temperature is to reduce the degree of crystallographic anisotropy associated with calculated interfacial free energies. The first-principles-computed solid-solubility limits for Sc in bulk fcc Al are found to be underestimated significantly in comparison with experimental measurements. It is argued that this discrepancy can be largely attributed to nonconfigurational contributions to the entropy which have been neglected in the present thermodynamic calculations. copyright 1998 The American Physical Society
Ananthakrishna, G.; K, Srikanth
2018-03-01
It is well known that plastic deformation is a highly nonlinear dissipative irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by computing the residual indentation depth and taking the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach has the ability to adopt experimental parameters such as the indentation rates and the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths and therefore avoids the divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that for a range of parameter values that physically represent different materials, the model predicts the three characteristic
International Nuclear Information System (INIS)
Beres, D.A.; Hull, A.P.
1991-12-01
DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate the committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section, both available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to give the user easy access to review data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer.
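The core of such a deposition-to-dose calculation is a decay-weighted time integral per nuclide: D(T) = C₀ · DCF · (1 − e^(−λT))/λ for a ground deposition C₀. A sketch of that step only, with a hypothetical dose-rate conversion factor, not DEPDOSE's actual library values:

```python
import math

def committed_dose(deposition_bq_m2, dcf, half_life_days, integration_days):
    """Dose from ground deposition decaying exponentially:
    integral of C0 * DCF * exp(-lambda * t) dt from 0 to T.
    dcf is a hypothetical dose-rate conversion factor,
    (dose/day) per (Bq/m2)."""
    lam = math.log(2.0) / half_life_days
    return (deposition_bq_m2 * dcf
            * (1.0 - math.exp(-lam * integration_days)) / lam)

# As the integration time grows, the dose saturates at C0 * DCF / lambda;
# at short times it is roughly C0 * DCF * T.
print(committed_dose(1000.0, 1e-6, 30.0, 50 * 365.0))
```

The same closed form inverted for T gives the "decay time needed to reach a particular dose" feature described above.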
Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok
2018-02-01
A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low-energy) configurations from a large configurational space before a DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g⁽²⁾(r) and g⁽³⁾(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m⁻² is in good agreement with the value inferred from the precipitation model fit to experimental data. The proposed ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
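The fingerprint idea (histogram the pair distances of a configuration, then learn a map from histogram to energy) can be sketched in a few lines. A toy version with random 1D-box configurations, a plain linear model in place of the paper's ML model, and a hidden linear energy standing in for DFT data:

```python
import numpy as np

def pair_histogram(positions, bins=10, r_max=1.0):
    """Feature vector: histogram of all pairwise distances (a crude g2)."""
    pos = np.asarray(positions)
    d = np.abs(pos[:, None] - pos[None, :])
    dists = d[np.triu_indices(len(pos), k=1)]
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, r_max))
    return hist.astype(float)

rng = np.random.default_rng(0)
configs = [rng.random(8) for _ in range(60)]      # 60 toy "structures"
X = np.array([pair_histogram(c) for c in configs])

w_true = rng.normal(size=X.shape[1])              # hidden "DFT" energetics
y = X @ w_true                                    # training energies

w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)     # the learned surrogate
print(np.max(np.abs(X @ w_fit - y)))              # near zero on training set
```

In the paper the histograms are 3D pair and triplet correlations and the regressor is a trained ML model; the surrogate is then cheap enough to span the configurational space and build P(E).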
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
International Nuclear Information System (INIS)
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael
2007-01-01
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted 'MUV', monitor unit verification) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber, and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.), respectively. The dose deviations between MUV and the TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach.
Directory of Open Access Journals (Sweden)
Lin Yang
2018-01-01
Full Text Available In this paper, the calculation of the conductor temperature is related to the temperature sensor position in high-voltage power cables, and four thermal circuits, based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, are established to calculate the conductor temperature. To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath are built, and thermocouples are placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures of the four positions. In the measurements, six cases of current heating tests under three laying environments (duct, water, and backfilled soil) were carried out. The errors of both the conductor temperature calculation and the simulation based on the temperature of the insulation shield were significantly smaller than the others under all laying environments. It is the uncertainty of the thermal resistivity, together with the difference in the initial temperature of each radial position caused by solar radiation, that led to the above results. The thermal capacitance of the air has little impact on the errors; the thermal resistance of the air gap is the largest error source. Balancing the temperature-estimation accuracy against the insulation-damage risk, the waterproof compound is the recommended sensor position for improving the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a recommended sensor position besides the waterproof compound.
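The idea behind each thermal circuit, inferring the conductor temperature from one measured temperature plus the heat flow across the intervening thermal resistance, can be sketched for a single loop; all parameter values below are hypothetical, and dielectric and sheath losses are neglected:

```python
def conductor_temperature(t_sensor_c, current_a, r_ac_ohm_per_m,
                          thermal_resistance_km_per_w):
    """Single-loop thermal circuit: conductor temperature = sensor-position
    temperature + conductor Joule loss (W/m) * thermal resistance (K.m/W)
    between the conductor and the sensor position."""
    joule_loss_w_per_m = current_a ** 2 * r_ac_ohm_per_m
    return t_sensor_c + joule_loss_w_per_m * thermal_resistance_km_per_w
```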
A Review of Solid-Solution Models of High-Entropy Alloys Based on Ab Initio Calculations
Directory of Open Access Journals (Sweden)
Fuyang Tian
2017-11-01
Full Text Available Similar to the importance of XRD in experiments, ab initio calculations, as a powerful tool, have been applied to predict new potential materials and to investigate the intrinsic properties of materials in theory. As typical solid-solution materials, high-entropy alloys (HEAs) involve a large degree of chemical disorder, which makes the application of ab initio calculations to HEAs difficult. The present review focuses on the available ab initio based solid-solution models (virtual lattice approximation, coherent potential approximation, special quasirandom structure, similar local atomic environment, maximum-entropy method, and hybrid Monte Carlo/molecular dynamics) and their applications and limits in single-phase HEAs.
Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings
Ucun, Fatih; Tokatlı, Ahmet
2015-02-01
In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane, at a distance of 1.2 Å. The results were compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI and CTED, and were generally found to be in agreement with them. It is therefore proposed that the calculation of the average g-factor, Δg, can be applied as a new magnetic-based aromaticity index to study the aromaticity of polycyclic benzene rings without any restriction on the number of benzene rings.
International Nuclear Information System (INIS)
Tian Yong; Zhang Longqiang; Yang Zhen; Yu Bin
2014-01-01
In order to ensure long-term reliable operation of the DCS cabinet's 220 V AC power cables, it was necessary to confirm whether the conductor temperature rise of the power cable meets the requirement of the cable specification. Based on actual site data and the theory of numerical heat transfer, a conservative model was established and the conductor temperature was calculated. The calculation results show that the cable arrangement on the cable tray will not cause the conductor temperature rise of the power cable to exceed the temperature required in the technical specification. (authors)
Kim, Hak Gu; Man Ro, Yong
2017-11-27
In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the hologram fringe pattern in a 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe patterns at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves 10-20 ms for 1024x1024 pixels while providing visually plausible results.
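The accumulation step can be sketched as pasting a small precomputed template at each object-point position on the depth-layer plane; the array shapes and point format here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def layer_fringe(points, template, layer_shape):
    """Sum a small precomputed fringe template (complex field) at each
    object-point position; the cost scales with the template size
    rather than with the full CGH plane."""
    layer = np.zeros(layer_shape, dtype=np.complex128)
    th, tw = template.shape
    for y, x, amplitude in points:
        layer[y:y + th, x:x + tw] += amplitude * template
    return layer
```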
Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems
da Jornada, Felipe H.
2015-03-01
Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is dealt with very effectively using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strongly environment-dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation on GW-BSE and the calculation of non-radiative exciton lifetimes, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by the Department of Energy under Contract No. DE-AC02-05CH11231 and by the National Science Foundation under Grant No. DMR10-1006184.
Ouk, Chanda-Malis; Zvereva-Loëte, Natalia; Scribano, Yohann; Bussery-Honvault, Béatrice
2012-10-30
Multireference single and double configuration interaction (MRCI) calculations including Davidson (+Q) or Pople (+P) corrections have been conducted in this work for the reactants, products, and extrema of the doublet ground state potential energy surface involved in the N((2)D) + CH(4) reaction. These highly correlated ab initio calculations are then compared with previous PMP4, CCSD(T), W1, and DFT/B3LYP studies. Large relative differences are observed, in particular for the transition state in the entrance channel, resolving the disagreement between previous ab initio calculations. We confirm the existence of a small but positive potential barrier (3.86 ± 0.84 kJ mol(-1) (MR-AQCC) and 3.89 kJ mol(-1) (MRCI+P)) in the entrance channel of the title reaction. The correlation is seen to change significantly the energetic positions of the two minima and five saddle points of this system, together with the dissociation channels, but not their relative order. The influence of electronic correlation on the energetics of the system is clearly demonstrated by evaluating the thermal rate constant and its temperature dependence by means of transition state theory. Indeed, only MRCI values are able to reproduce the experimental rate constant of the title reaction and its behavior with temperature. Similarly, product branching ratios, evaluated by means of unimolecular RRKM theory, confirm the NH production of Umemoto et al., whereas previous works based on less accurate ab initio calculations failed to do so. We confirm the previous findings that the N((2)D) + CH(4) reaction proceeds via an insertion-dissociation mechanism and that the dominant product channels are CH(2)NH + H and CH(3) + NH. Copyright © 2012 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Jin, L; Fan, J; Eldib, A; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2016-06-15
Purpose: Treating nose skin with an electron beam poses a substantial challenge due to uneven nose surfaces and tissue heterogeneity, and consequently the dose delivered to the target can be highly uncertain. This work explored a method using Monte Carlo (MC)-based energy- and intensity-modulated electron radiotherapy (MERT), delivered with a photon MLC on a standard medical linac (Artiste). Methods: The traditional treatment of the nose skin involves the use of a bolus, often with a single-energy electron beam. This work avoided the bolus and utilized mixed electron beam energies. An in-house developed MC-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. A clinical nose skin case, previously treated with a single 9 MeV electron beam, was replanned with the MERT method. The resultant dose distributions were compared with the plan previously used clinically. The dose volume histogram of the MERT plan was calculated to examine the coverage of the planning target volume (PTV) and critical structure doses. Results: The target coverage and conformality of the MERT plan are improved compared to the conventional plan. MERT can provide more sufficient target coverage and a lower normal tissue dose underneath the nose skin. Conclusion: Compared to the conventional treatment technique, using MERT for nose skin treatment has shown dosimetric advantages in PTV coverage and conformality. In addition, this technique eliminates the need for a cutout and bolus, which makes the treatment more efficient and accurate.
Czech Academy of Sciences Publication Activity Database
Cimrman, R.; Novák, Matyáš; Kolman, Radek; Tůma, Miroslav; Plešek, Jiří; Vackář, Jiří
2018-01-01
Roč. 319, Feb (2018), s. 138-152 ISSN 0096-3003 R&D Projects: GA ČR GA17-12925S; GA ČR(CZ) GAP108/11/0853; GA MŠk(CZ) EF15_003/0000493 Institutional support: RVO:68378271 ; RVO:61388998 ; RVO:67985807 Keywords : electronic structure calculation * density functional theory * finite element method * isogeometric analysis OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.); Materials engineering (UT-L); Applied mathematics (UIVT-O) Impact factor: 1.738, year: 2016
A camera based calculation of 99m Tc-MAG-3 clearance using conjugate views method
International Nuclear Information System (INIS)
Hojabr, M.; Rajabi, H.; Eftekhari, M.
2004-01-01
Background: Measurement of absolute or differential renal function using radiotracers plays an important role in the clinical management of various renal diseases. Gamma camera quantitative methods, which approximate renal clearance, may potentially be as accurate as plasma clearance methods. However, some critical factors such as kidney depth and background counts are still troublesome in the use of this technique. In this study the conjugate-view method, along with a background correction technique, was used for the measurement of renal activity in 99m Tc-MAG 3 renography. Transmission data were used for attenuation correction, and the source volume was considered for accurate background subtraction. Materials and methods: The study was performed in 35 adult patients referred to our department for conventional renography and ERPF calculation. Depending on patient weight, approximately 10-15 mCi of 99m Tc-MAG 3 was injected as a sharp bolus, and 60 frames of 1 second followed by 174 frames of 10 seconds were acquired for each patient. Imaging was performed on a dual-head gamma camera (SOLUS; SunSpark10, ADAC Laboratories, Milpitas, CA); anterior and posterior views were acquired simultaneously. A LEHR collimator was used to correct for scatter in the emission and transmission images. The Buijs factor was applied to background counts before background correction (Rutland-Patlak equation). Gamma camera clearance was calculated using renal uptake at 1-2, 1.5-2.5 and 2-3 min. The same procedure was repeated for the renograms obtained from the posterior projection and from conjugate views. The plasma clearance was also directly calculated from three blood samples obtained at 40, 80 and 120 min after injection. Results: 99m Tc-MAG 3 clearances from the direct sampling method were used as reference values and compared to the results obtained from the renograms. The maximum correlation was found for the conjugate-view clearance at 2-3 min (R=0.99, R 2 =0.98, SE=15). Conventional
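The core of the conjugate-view method is the geometric mean of the anterior and posterior counts with a transmission-based attenuation correction; a minimal sketch (illustrative only, not the authors' processing chain):

```python
import math

def conjugate_view_counts(anterior, posterior, transmission):
    """Attenuation-corrected organ counts from opposed views: the
    geometric mean of anterior and posterior counts divided by the
    square root of the measured transmission factor through the
    patient (background assumed already subtracted)."""
    return math.sqrt(anterior * posterior) / math.sqrt(transmission)
```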
Programs and subroutines for calculating cadmium body burdens based on a one-compartment model
International Nuclear Information System (INIS)
Robinson, C.V.; Novak, K.M.
1980-08-01
A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying, one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
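A one-compartment burden with yearly intake and first-order elimination reduces to a simple recursion; the half-life value below is a placeholder for illustration, not the programs' parameter set:

```python
import math

def body_burden(yearly_absorbed_intake_ug, half_life_years=16.0):
    """One-compartment model: the burden decays exponentially each year
    and is topped up by that year's absorbed intake (from food, air,
    smoking, occupation combined); returns the burden at the end of
    each year of life."""
    k = math.log(2.0) / half_life_years          # elimination rate, 1/yr
    burden, history = 0.0, []
    for intake in yearly_absorbed_intake_ug:
        burden = burden * math.exp(-k) + intake
        history.append(burden)
    return history
```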
REITP3-Hazard evaluation program for heat release based on thermochemical calculation
Energy Technology Data Exchange (ETDEWEB)
Akutsu, Yoshiaki.; Tamura, Masamitsu. [The University of Tokyo, Tokyo (Japan). School of Engineering; Kawakatsu, Yuichi. [Oji Paper Corp., Tokyo (Japan); Wada, Yuji. [National Institute for Resources and Environment, Tsukuba (Japan); Yoshida, Tadao. [Hosei University, Tokyo (Japan). College of Engineering
1999-06-30
REITP3, a hazard evaluation program for heat release based on thermochemical calculation, has been developed by modifying REITP2 (Revised Estimation of Incompatibility from Thermochemical Properties{sup 2)}. The main modifications are as follows. (1) Reactants are retrieved from the database by chemical formula. (2) As products are listed in an external file, the addition of products and changes in the order of production can be made easily. (3) Part of the program has been changed to allow its use on a personal computer or workstation. These modifications will promote the usefulness of the program for energy hazard evaluation. (author)
Czech Academy of Sciences Publication Activity Database
Šponer, Jiří; Zgarbová, M.; Jurečka, Petr; Riley, K.E.; Šponer, Judit E.; Hobza, Pavel
2009-01-01
Roč. 5, č. 4 (2009), s. 1166-1179 ISSN 1549-9618 R&D Projects: GA AV ČR(CZ) IAA400040802; GA AV ČR(CZ) IAA400550701; GA MŠk(CZ) LC06030; GA MŠk(CZ) LC512 Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702; CEZ:AV0Z40550506 Keywords : RNA * ribose * quantum calculations Subject RIV: BO - Biophysics Impact factor: 4.804, year: 2009
Energy Technology Data Exchange (ETDEWEB)
Liu, Jing-yong, E-mail: www053991@126.com [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Li, Xiao-ming [Guangdong Testing Institute of Product Quality Supervision, Guangzhou 510330 (China); Chen, Tao [State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Luo, Guang-qian [State Key Laboratory of Coal Combustion, Huazhong University of Science and Technology, Wuhan 430074 (China); Xie, Wu-ming; Wang, Yu-jie; Zhuo, Zhong-xu; Fu, Jie-wen [School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou 510006 (China)
2015-04-15
Highlights: • A thermodynamic equilibrium calculation was carried out. • The effects of three types of sulfur on Pb distribution were investigated. • Mechanisms by which the three types of sulfur act on Pb partitioning were proposed. • Lead partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb between the solid phase (bottom ash) and the gas phase (fly ash and flue gas) was quantified. The results indicate that the three types of sulfur compounds (S, Na{sub 2}S and Na{sub 2}SO{sub 4}) added to the sludge could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na{sub 2}SO{sub 4} and Na{sub 2}S was superior to that of adding S. In the bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO{sub 4}(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO{sub 2}, CaO, TiO{sub 2}, and Al{sub 2}O{sub 3} containing materials function as condensed phase solids in the temperature range of 800–1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the
Energy Technology Data Exchange (ETDEWEB)
Dohet-Eraly, Jeremy [F.R.S.-FNRS (Belgium); Sparenberg, Jean-Marc; Baye, Daniel, E-mail: jdoheter@ulb.ac.be, E-mail: jmspar@ulb.ac.be, E-mail: dbaye@ulb.ac.be [Physique Nucleaire et Physique Quantique, CP229, Universite Libre de Bruxelles (ULB), B-1050 Brussels (Belgium)
2011-09-16
The elastic phase shifts for the {alpha} + {alpha} and {alpha} + {sup 3}He collisions are calculated in a cluster approach by the Generator Coordinate Method coupled with the Microscopic R-matrix Method. Two interactions are derived from the realistic Argonne potentials AV8' and AV18 with the Unitary Correlation Operator Method. With a specific adjustment of correlations on the {alpha} + {alpha} collision, the phase shifts for the {alpha} + {alpha} and {alpha} + {sup 3}He collisions agree rather well with experimental data.
Photon and electron data bases and their use in radiation transport calculations
International Nuclear Information System (INIS)
Cullen, D.E.; Perkins, S.T.; Seltzer, S.M.
1992-02-01
The ENDF/B-VI photon interaction library includes data describing the interaction of photons with the elements Z=1 to 100 over the energy range 10 eV to 100 MeV. This library has been designed to meet the traditional needs of users to model the interaction and transport of primary photons. However, this library contains additional information which, used in combination with our other data libraries, allows much more detailed calculations to be performed, e.g., the emission of secondary fluorescence photons. This paper describes both traditional and more detailed uses of this library.
A Microsoft Excel® 2010 Based Tool for Calculating Interobserver Agreement
Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work. PMID:22649578
A microsoft excel(®) 2010 based tool for calculating interobserver agreement.
Reed, Derek D; Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel(®)) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
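Two of the listed algorithms, total count and interval-by-interval agreement, are straightforward to express outside a spreadsheet as well; a sketch using the standard formulas (the function names are ours, not the report's):

```python
def total_count_ioa(obs1, obs2):
    """Total count IOA: smaller session total divided by the larger,
    expressed as a percentage."""
    s1, s2 = sum(obs1), sum(obs2)
    return 100.0 * min(s1, s2) / max(s1, s2)

def interval_by_interval_ioa(obs1, obs2):
    """Interval-by-interval IOA: percentage of intervals in which both
    observers recorded the same value."""
    agreed = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreed / len(obs1)
```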
DEFF Research Database (Denmark)
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from...... follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
Lattice dynamics calculations based on density-functional perturbation theory in real space
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied to the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated, as an example, in the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecular) and extended (periodic) systems. Finally, scaling and scalability tests on massively parallel computer systems demonstrate the computational efficiency.
Directory of Open Access Journals (Sweden)
Niancheng Zhou
2018-03-01
Full Text Available The short-circuit current level of a power grid increases with high penetration of VSC-based renewable energy, and the strong coupling between the transient fault process and the control strategy changes the fault features. The full current expression of VSC-based renewable energy was obtained according to the transient characteristics of the short-circuit current. Further, by analyzing the closed-loop transfer function model of the controller and the current-source characteristics presented in the steady state during a fault, equivalent circuits of VSC-based renewable energy in the transient and steady fault states were proposed. The correctness of the theory was then verified by experimental tests. In addition, for a power grid with VSC-based renewable energy, the superposition theorem was used to calculate the AC and DC components of the short-circuit current, and the peak value of the short-circuit current was then evaluated effectively. The calculated results can be used for grid planning and design, short-circuit current management, and the adjustment of relay protection. By comparing calculation and simulation results for the 6-node 500 kV Huainan power grid and the 35-node 220 kV Huaisu power grid, the effectiveness of the proposed method was verified.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
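On a discrete time grid, unilateral CVA reduces to a loss-given-default-weighted sum of discounted expected exposures times marginal default probabilities (the expected exposures being what least square Monte Carlo estimates); a schematic of that final aggregation, not the authors' pricing code:

```python
def unilateral_cva(expected_exposures, marginal_default_probs,
                   discount_factors, recovery=0.4):
    """CVA ~= (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i],
    where EE comes from an exposure simulation and PD from the
    default-intensity model; the 40% recovery is a common convention."""
    lgd = 1.0 - recovery
    return lgd * sum(df * ee * pd
                     for df, ee, pd in zip(discount_factors,
                                           expected_exposures,
                                           marginal_default_probs))
```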
International Nuclear Information System (INIS)
Hanke, M.; Hennig, D.; Kaschte, A.; Koeppen, M.
1988-01-01
The energy band structure of cadmium telluride and mercury telluride materials is investigated by means of the tight-binding (TB) method, considering relativistic effects and the spin-orbit interaction. Taking relativistic effects into account in the method is rather simple, though the size of the Hamilton matrix doubles. Such considerations are necessary for the narrow-gap semiconductors of interest here, and the experimental results are reflected correctly in the band structures. The transformation behaviour of the eigenvectors within the Brillouin zone becomes more complicated but remains theoretically controllable. If, however, the matrix elements of the Green operator are to be calculated, one has to use formula manipulation programmes, in particular for the non-diagonal elements. For defect calculations by the Koster-Slater theory of scattering it is necessary to know these matrix elements. Knowledge of the transformation behaviour of the eigenfunctions saves frequent diagonalization of the Hamilton matrix and thus permits a numerical solution of the problem. Corresponding results for the sp³ basis are available.
Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen
2015-04-01
Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb between the solid phase (bottom ash) and the gas phase (fly ash and flue gas) was quantified. The results indicate that the three types of sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of adding S. In the bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO2, CaO, TiO2, and Al2O3 containing materials function as condensed phase solids in the temperature range of 800–1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si, Ca and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
V. Giannoglou
2016-06-01
Full Text Available Scoliosis is a 3D deformity of the human spinal column caused by the bending of the latter, causing pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
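The Cobb angle itself is simply the angle between the endplates of the two end vertebrae; given their slopes in a frontal radiograph it can be computed directly (a toy sketch of the final geometric step, far from the full image-processing pipelines the review surveys):

```python
import math

def cobb_angle_deg(upper_endplate_slope, lower_endplate_slope):
    """Cobb angle in degrees: the angle between the superior endplate of
    the upper end vertebra and the inferior endplate of the lower end
    vertebra, each given as a slope (dy/dx) in the frontal plane."""
    return abs(math.degrees(math.atan(upper_endplate_slope)
                            - math.atan(lower_endplate_slope)))
```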
International Nuclear Information System (INIS)
Golovko, Yury; Rozhikhin, Yevgeniy; Tsibulya, Anatoly; Koscheev, Vladimir
2008-01-01
Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those which seem most relevant to the evaluation of the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water were selected. All selected experiments were examined, covariance matrices of the criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The accuracy of the prediction of criticality calculations was evaluated using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 remaining after the statistical analysis. The application objects for the evaluation of the calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 of different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k eff caused by the uncertainties in neutron cross-sections. It is also shown that the results are practically independent of the initial covariance matrices of the nuclear data uncertainties. (authors)
Risk Analysis of Reservoir Flood Routing Calculation Based on Inflow Forecast Uncertainty
Directory of Open Access Journals (Sweden)
Binquan Li
2016-10-01
Full Text Available Possible risks in reservoir flood control and regulation cannot be objectively assessed by deterministic flood forecasts, resulting in a probability of reservoir failure. We demonstrate a risk analysis of reservoir flood routing calculation accounting for inflow forecast uncertainty in a sub-basin of the Huaihe River, China. The Xinanjiang model was used to provide deterministic flood forecasts and was combined with the Hydrologic Uncertainty Processor (HUP) to quantify reservoir inflow uncertainty in probability density function (PDF) form. Furthermore, the PDFs of reservoir water level (RWL) and the risk rate of RWL exceeding a defined safety control level could be obtained. Results suggest that the median forecast (50th percentile) of HUP showed better agreement with observed inflows than the Xinanjiang model did in terms of the performance measures of flood process, peak, and volume. In addition, most observations (77.2%) were bracketed by the uncertainty band of the 90% confidence interval, with some small exceptions for high flows. The results show that this risk analysis framework can provide not only deterministic forecasts of inflow and RWL, but also the fundamental uncertainty information (e.g., the 90% confidence band) for the reservoir flood routing calculation.
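The exceedance-risk step described above can be sketched numerically: given an ensemble of reservoir water levels (e.g., obtained by routing inflow samples drawn from the HUP posterior PDF), the risk rate is the empirical probability of exceeding the safety control level. All numbers and names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble of simulated reservoir water levels (m) for one
# flood event, e.g. obtained by routing inflow samples drawn from the
# HUP posterior PDF through the reservoir storage equation.
rwl_ensemble = rng.normal(loc=27.3, scale=0.8, size=10_000)

safety_level = 28.5  # defined safety control level (m), assumed value

# risk rate = empirical probability that RWL exceeds the control level
risk_rate = np.mean(rwl_ensemble > safety_level)

# 90% confidence band of RWL, analogous to the paper's uncertainty band
lo, hi = np.percentile(rwl_ensemble, [5, 95])
```

In practice the ensemble would come from hydrological routing rather than a normal draw; the exceedance and percentile computations are unchanged.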
Directory of Open Access Journals (Sweden)
D. M. Kurhan
2014-11-01
Full Text Available Purpose. The modulus of elasticity of the subrail base is one of the main characteristics for assessing the stress-strain state of a track. The need to account for unequal elasticity of the subrail base has been considered repeatedly, but the published results involve rather complex mathematical approaches, and the solutions obtained do not fit within the framework of the standard engineering calculation of railway track strength. The purpose of this work is therefore to obtain a solution within that framework. Methodology. The rail is modelled as a beam carrying a distributed load whose outline corresponds to the value of the modulus of elasticity, giving a deflection equivalent to free seating on supports. Findings. A method was obtained for taking gradual change of the modulus of elasticity of the subrail base into account through a correction coefficient in the engineering calculation of track strength. The existing strength calculation was also extended to account for abrupt change of the modulus of elasticity of the subrail base (for example, at the transition from a ballasted track onto a bridge). The characteristic of the change of the forces transmitted from the rail to the base, as a function of the distance to the bridge on the approach section with ballasted track, was obtained. The redistribution of forces after a sudden change in the elastic modulus of the base under the rail explains the formation of vertical irregularities before the bridge. Originality. The engineering strength calculation of railway track was improved to take unequal elasticity of the subrail base into account. Practical value. The results obtained allow engineering calculations to assess the strength of a railway track in places of unequal elasticity caused by the track condition or design features. The solution of the inverse task on
Thermal neutron dose calculations in a brain phantom from 7Li(p,n) reaction based BNCT setup
International Nuclear Information System (INIS)
Elshahat, B.A.; Naqvi, A.A.; Maalej, N.; Abdallah, Khalid
2006-01-01
Monte Carlo simulations were carried out to calculate the neutron dose in a brain phantom from a 7Li(p,n) reaction based setup utilizing a high-density polyethylene moderator with a graphite reflector. The dimensions of the moderator and the reflector were optimized through optimization of the epithermal/(fast + thermal) neutron intensity ratio as a function of the geometric parameters of the setup. The results of our calculations showed the capability of the setup to treat tumors within 4 cm of the head surface. The calculated peak therapeutic ratio for the setup was found to be 2.15. With further improvement in the moderator design and the brain phantom irradiation arrangement, the setup's capabilities can be extended to reach deeper-seated tumors. (author)
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Li, Y; Tian, Z; Song, T; Jia, X; Gu, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)
2016-06-15
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate beamlet dose distributions for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to dose calculation accuracy and hence to plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm2, 10×10 cm2, 15×15 cm2 and 20×20 cm2) were calculated and compared with the reference doses exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
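Because the broad-beam dose is a linear superposition of scaled beamlet kernels, the scaling-factor step reduces to a linear least-squares problem. The sketch below uses synthetic placeholder data (the matrix `kernel_dose` stands in for the beamlet doses computed with already-fitted profile parameters); it is an illustration of the optimization structure, not the authors' MATLAB tool:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n_vox dose points, n_b beamlets. Column j holds the dose
# deposited at every voxel by beamlet j, computed with already-fitted
# profile parameters (values here are synthetic placeholders).
n_vox, n_b = 200, 16
kernel_dose = rng.random((n_vox, n_b))

true_scaling = 0.8 + 0.4 * rng.random(n_b)   # "ground truth" factors
reference_dose = kernel_dose @ true_scaling  # measured broad-beam dose

# Broad-beam dose is linear in the scaling factors, so commissioning
# reduces to minimising ||K s - d||^2 over the factors s.
scaling, *_ = np.linalg.lstsq(kernel_dose, reference_dose, rcond=None)
```

With noisy measured data the same solve yields the least-squares optimum rather than an exact recovery.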
Hammitzsch, M.; Spazier, J.; Reißland, S.
2014-12-01
Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server infrastructure. The vast majority of systems notably include desktop clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) have to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype that opens up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and to react to the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the
International Nuclear Information System (INIS)
Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.
2010-01-01
This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer's specifications, the beam data commissioning needed for this model includes: several in-air and in-water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm 2 ). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations by comparison with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm 2 : gamma analysis shows that, on average, 95% to 100% of the data pass the gamma index criteria for these fields. For smaller fields (12×12 and 6×6 mm 2 ) only 92% of the data meet the criteria. Total scatter factors show good agreement, except for the smallest field (6×6 mm 2 ), which shows an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm 2 . Special care must be taken for smaller fields.
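The gamma-index evaluation used above can be sketched in one dimension. The function below implements the standard global gamma metric (dose tolerance as a fraction of the maximum dose, distance-to-agreement in mm); the profiles and tolerances are illustrative, not the paper's measured data:

```python
import numpy as np

def gamma_index_1d(x, ref, ev, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over the evaluated
    curve. x in mm; dose_tol is a fraction of the maximum reference dose."""
    dmax = ref.max()
    gam = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / (dose_tol * dmax)  # dose-difference term
        dx = (x - xi) / dist_tol            # distance term
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

x = np.linspace(0, 50, 101)           # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)   # reference profile (toy Gaussian)
ev = np.exp(-((x - 25.4) / 10) ** 2)  # slightly shifted evaluated profile
pass_rate = np.mean(gamma_index_1d(x, ref, ev) <= 1.0)
```

A point passes when its gamma value is at most 1; the pass rate is the fraction of passing points, as quoted in the abstract.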
A Scientific Calculator for Exact Real Number Computation Based on LRT, GMP and FC++
Directory of Open Access Journals (Sweden)
J. A. Hernández
2012-03-01
Full Text Available Language for Redundant Test (LRT) is a programming language for exact real number computation. Its lazy evaluation mechanism (also called call-by-need) and its infinite list requirement make the language appropriate for implementation in a functional programming language such as Haskell. However, a direct translation of the operational semantics of LRT into Haskell, together with the algorithms implementing basic operations (addition, subtraction, multiplication, division) and trigonometric functions (sine, cosine, tangent, etc.), makes the resulting scientific calculator time-consuming and thus inefficient. In this paper, we present an alternative implementation of the scientific calculator using FC++ and GMP. FC++ is a functional C++ library, while GMP is a GNU multiple-precision library. We show that a direct translation of LRT into FC++ results in a faster scientific calculator than the one implemented in Haskell.
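The core idea of exact real computation, evaluation driven by a requested tolerance rather than a fixed precision, can be illustrated with Python's exact rationals. This is only a conceptual sketch in the spirit of call-by-need, not the LRT semantics or the FC++/GMP implementation:

```python
from fractions import Fraction

# A computable real is modelled as a function that, given a rational
# tolerance eps > 0, returns a Fraction within eps of the true value.
# Nothing is computed until a tolerance is demanded, loosely mirroring
# lazy (call-by-need) evaluation.

def const(q):
    return lambda eps: Fraction(q)

def add(x, y):
    # approximating each operand to eps/2 keeps the sum within eps
    return lambda eps: x(eps / 2) + y(eps / 2)

def sqrt_real(n):
    def approx(eps):
        x = Fraction(n)
        while True:
            x = (x + Fraction(n) / x) / 2  # Newton step; x stays >= sqrt(n)
            if x - Fraction(n) / x < eps:  # bracket [n/x, x] contains sqrt(n)
                return x
    return approx

three_ish = add(const(1), sqrt_real(4))  # represents 1 + sqrt(4) = 3
approx = three_ish(Fraction(1, 10**9))   # demand a 1e-9 approximation
```

Multiplication and transcendental functions need additional bounds on operand magnitudes, which is where a multiple-precision library such as GMP earns its keep.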
Czech Academy of Sciences Publication Activity Database
Šponer, Jiří
2002-01-01
Roč. 223, - (2002), s. 212 ISSN 0065-7727. [Annual Meeting of the American Chemical Society /223./, 07.04.2002-11.04.2002, Orlando] Institutional research plan: CEZ:AV0Z5004920 Keywords: quantum chemistry * base pairing * base stacking Subject RIV: BO - Biophysics
SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation
Energy Technology Data Exchange (ETDEWEB)
Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)
2016-06-15
Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
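The classification/assignment step can be sketched as follows, assuming the class centroids have already been trained. The membership formula is the standard fuzzy c-means expression; the centroid HU values below are synthetic placeholders, not the paper's trained values:

```python
import numpy as np

# Trained class centroids (synthetic placeholders): column 0 = CBCT HU,
# column 1 = CT HU, one row per "tissue" class (air ... bone).
centroids = np.array([[-980.0, -1000.0],
                      [-750.0,  -800.0],
                      [ -80.0,  -100.0],
                      [  60.0,    40.0],
                      [ 900.0,   700.0]])

def synthetic_ct_hu(cbct_hu, m=2.0):
    """Map one CBCT voxel value to a synthetic-CT HU: compute fuzzy
    memberships to each class, then take the membership-weighted sum
    of the classes' CT centroids."""
    d = np.abs(cbct_hu - centroids[:, 0]) + 1e-9  # distances to centroids
    u = d ** (-2.0 / (m - 1.0))                   # fuzzy c-means memberships
    u /= u.sum()                                  # memberships sum to 1
    return float(u @ centroids[:, 1])             # probability-weighted CT HU

hu = synthetic_ct_hu(-980.0)   # a near-air CBCT voxel maps close to -1000 HU
```

Applied voxel-wise over a daily CBCT, this yields the synthetic CT used for dose calculation.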
Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses
International Nuclear Information System (INIS)
Sauer, G.
1998-01-01
Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)
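The abstract does not reproduce the proposed formulae. Purely as an illustration of how a Ke factor enhances elastically calculated strain ranges, here is the well-known ASME III-style piecewise expression; the material constants m and n are typical textbook values for a low-alloy steel and should be treated as assumptions, not the paper's result:

```python
def ke_asme(sn, sm, m=2.0, n=0.2):
    """ASME III-style simplified elastic-plastic Ke factor (illustrative,
    not the paper's proposed formulae). sn: linearised primary-plus-
    secondary stress range; sm: design stress intensity; m, n: material
    constants (assumed values for a low-alloy steel)."""
    ratio = sn / (3.0 * sm)
    if ratio <= 1.0:
        return 1.0                 # fully elastic regime
    if ratio < m:
        # linear interpolation between the elastic and fully plastic limits
        return 1.0 + (1.0 - n) / (n * (m - 1.0)) * (ratio - 1.0)
    return 1.0 / n                 # fully plastic limit

# total elastic-plastic strain range = Ke * elastically calculated range
strain_range = ke_asme(sn=900.0, sm=150.0) * 0.004
```

The paper's explicit beam-model expressions play the same role as `ke_asme` here: they map elastic results to an elastic-plastic strain range.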
Analysis of calculating methods for failure distribution function based on maximal entropy principle
International Nuclear Information System (INIS)
Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli
2009-01-01
The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through statistical hypothesis tests using the test data. The results show that the devices' failure behaviour can be consistent with several distributions when the test data are scarce. To decide on the optimum failure distribution model, the maximal entropy principle is applied and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate interval estimates of the mean and the standard deviation. On this basis, the maximal entropy principle is applied again, and the simulated annealing method is used to find the optimum values of the mean and the standard deviation. The electronic devices' optimum failure distributions are thereby determined and the survival probabilities are calculated. (authors)
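The Bootstrap interval-estimation step can be sketched directly: resample the observed failure data with replacement and read off percentile intervals for the mean and standard deviation. The sample data are hypothetical, the percentile-bootstrap procedure is standard:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small failure-dose sample (arbitrary units): with few
# data, several distribution models may fit, so interval estimates of
# the moments matter more than point estimates.
data = rng.normal(loc=100.0, scale=15.0, size=12)

def bootstrap_ci(sample, stat, n_boot=5000, level=0.95):
    """Percentile-bootstrap confidence interval for a statistic."""
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    reps = np.array([stat(sample[i]) for i in idx])
    a = (1 - level) / 2
    return np.percentile(reps, [100 * a, 100 * (1 - a)])

mean_lo, mean_hi = bootstrap_ci(data, np.mean)
std_lo, std_hi = bootstrap_ci(data, np.std)
```

In the paper these intervals then feed the maximal-entropy/simulated-annealing search for the optimum mean and standard deviation.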
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure, were used to obtain theoretical efficiency curves. Since discrepancies were found between the experimental data and the data calculated using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from the experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.
Wu, Yu; Zhang, Hongpeng
2017-12-01
A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to calculating the coil inductance change is introduced for metal particle detection in lubrication oil. Electromagnetic theory is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression of the coil inductance change is obtained via the magnetic vector potential. Experimental verification was carried out. The results show that copper particles 50-52 µm in diameter were detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% for particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for inductive sensors in metal particle detection in oil and other areas of application.
Calculation of the Strip Foundation on Solid Elastic Base, Taking into Account the Karst Collapse
Sharapov, R.; Lodigina, N.
2017-07-01
Karst processes greatly complicate the construction and operation of buildings and structures. Karst deformations have caused several major accidents at different times; their analysis showed that in all cases fundamental errors were committed at different stages of building development: site selection, engineering survey, design, construction or operation of the facilities. The theory of beams on an elastic foundation is essential in building practice, and engineers designing such facilities often resort to repeated design iterations in search of efficient structural forms. In this work, the stresses in cross-sections of a strip foundation under an evenly distributed load are calculated in the event of karst collapse. Extreme stresses with and without allowance for karst are compared, with the strip foundation treated as a beam on an elastic foundation.
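The beam-on-elastic-foundation model referred to above has a classical closed form (Hetenyi's solution for an infinite beam on a Winkler foundation under a point load), which can be evaluated directly. All numerical values below are assumed for scale and are not taken from the paper:

```python
import numpy as np

# Infinite beam on a Winkler elastic foundation under a point load P
# (Hetenyi's classical solution); karst collapse can be thought of as
# a local loss of the subgrade reaction k. All numbers are assumptions.
E = 30e9    # concrete Young's modulus, Pa
I = 5.4e-3  # second moment of area of the strip section, m^4
k = 50e6    # subgrade modulus, N/m^2 (reaction per metre of beam)
P = 500e3   # concentrated load, N

beta = (k / (4 * E * I)) ** 0.25  # characteristic parameter, 1/m
x = np.linspace(0, 5, 501)        # distance from the load, m (symmetric)

# deflection and bending moment along the beam for x >= 0
w = P * beta / (2 * k) * np.exp(-beta * x) * (np.cos(beta * x) + np.sin(beta * x))
M = P / (4 * beta) * np.exp(-beta * x) * (np.cos(beta * x) - np.sin(beta * x))
```

Comparing stresses computed from `M` with and without a reduced `k` under part of the beam is the kind of comparison the abstract describes.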
Directory of Open Access Journals (Sweden)
A.V. Erisov
2016-05-01
Full Text Available Purpose. To simplify the calculation relations used to determine the magnetic field strength of electric power lines, and to assess their environmental safety. Methodology. The magnetic field of transmission lines is described using spatial harmonic analysis in a cylindrical coordinate system. Results. For engineering calculations of the magnetic field of electric power lines, the first spatial harmonic describes the field with sufficient accuracy. Originality. Substantial simplification of determining the effect of the transmission line tower design on the value of its magnetic field and on the size of the land alienation strip. Practical value. Environmentally sound design of electric power lines with respect to magnetic field levels.
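An engineering estimate of the kind the abstract simplifies can be sketched by phasor superposition of the field of each phase conductor, B = mu0*I/(2*pi*r) resolved into components. The tower geometry and current below are assumed, not taken from the paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

# Assumed single-circuit geometry: phase conductor (x, y) positions in
# metres and 120-degree-shifted phasor currents of 1000 A amplitude.
conductors = np.array([[-4.0, 12.0], [0.0, 13.5], [4.0, 12.0]])
currents = 1000.0 * np.exp(1j * np.deg2rad([0.0, -120.0, 120.0]))

def b_rms(px, py):
    """RMS magnetic flux density (T) at point (px, py) near ground level."""
    bx = by = 0j
    for (cx, cy), i in zip(conductors, currents):
        dx, dy = px - cx, py - cy
        r2 = dx * dx + dy * dy
        # field of a long straight conductor, resolved into components
        bx += MU0 * i / (2 * np.pi) * (-dy) / r2
        by += MU0 * i / (2 * np.pi) * dx / r2
    return np.sqrt(abs(bx) ** 2 + abs(by) ** 2) / np.sqrt(2)

field_under_line = b_rms(0.0, 1.8)  # 1.8 m above ground at the centreline
```

Evaluating `b_rms` across a transverse profile gives the field decay used to size the land alienation strip.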
Shielding property of bismuth glass based on MCNP 5 and WINXCOM simulated calculation
International Nuclear Information System (INIS)
Zhang Zhicheng; Zhang Jinzhao; Liu Ze; Lu Chunhai; Chen Min
2013-01-01
Background: Currently, lead glass is widely used for observation windows, but lead is a toxic heavy metal. Purpose: Non-toxic materials and their shielding effects are investigated in order to find a new material to replace lead-containing material. Methods: The mass attenuation coefficients of bismuth silicate glass were investigated at gamma-ray energies of 0.662 MeV, 1.17 MeV and 1.33 MeV by the MCNP 5 (Monte Carlo) and WINXCOM programs, and compared with those of lead glass. Results: With the attenuation factor K and the shielding and mechanical properties taken into consideration, a bismuth glass containing 50% bismuth oxide was selected as a suitable material. Dose rate distributions in a water phantom were calculated behind 2-cm and 10-cm thick glass, respectively, irradiated by 137 Cs and 60 Co in turn. Conclusion: The results show that bismuth glass may replace lead glass for radiation shielding at appropriate energies. (authors)
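The role of the mass attenuation coefficient can be shown with the narrow-beam transmission law, I/I0 = exp(-(mu/rho)*rho*x). The coefficients and density below are rough assumed values for illustration, not the paper's MCNP/WINXCOM results:

```python
import numpy as np

# Narrow-beam transmission through a shield: I/I0 = exp(-(mu/rho)*rho*x).
# The mass attenuation coefficients below are rough assumed values for a
# bismuth-oxide glass at the three abstract energies, NOT tabulated data.
mass_att = {0.662: 0.080, 1.17: 0.055, 1.33: 0.051}  # cm^2/g, assumption
rho = 5.0                                            # g/cm^3, assumption

def transmission(energy_mev, thickness_cm):
    mu = mass_att[energy_mev] * rho  # linear attenuation coefficient, 1/cm
    return np.exp(-mu * thickness_cm)

t_cs137 = transmission(0.662, 2.0)   # 2 cm glass, 137Cs line
```

Comparing such transmissions for candidate glasses against lead glass is the essence of the shielding comparison (scatter build-up, which MCNP captures, is neglected here).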
TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors
Energy Technology Data Exchange (ETDEWEB)
Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)
2015-06-15
Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph, as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user's positioning of the camera. Blob detection is used to identify the portions of the cutout that comprise the aperture and the portions that are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm, as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will compare the Monte Carlo results with physical measurements. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to assure accurate dose delivery.
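The perspective-correction step can be sketched without any imaging library: given the four detected corner pixels, a 3x3 homography mapping them to a canonical square follows from the standard direct-linear-transform system. Function names and the sample coordinates are illustrative, not taken from the paper:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 perspective map (h33 = 1) from four
    point correspondences, e.g. photographed cutout corners -> unit square."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, p):
    """Apply a homography to a 2D point (homogeneous normalisation)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Corners detected in the photograph (pixel coordinates, illustrative)
# and their target positions in a canonical unit square.
src = [(12.0, 9.0), (410.0, 22.0), (395.0, 430.0), (5.0, 402.0)]
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography(src, dst)
corner = warp(H, src[2])   # maps the third detected corner to (1, 1)
```

Warping every photograph pixel through `H` removes the perspective, scaling, and translation introduced by the camera position, after which blob detection on the rectified image can separate aperture from cutout material.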
Directory of Open Access Journals (Sweden)
Aidan G. O’Keeffe
2017-12-01
Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained the nominal power value in simulation studies and compared favourably with the Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
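The conversion from untransformed median and variance to a log-scale sample size can be sketched as follows. For a log-normal variable, median = exp(mu) and variance = (exp(s2)-1)*exp(2*mu+s2), which can be inverted for the log-scale variance s2; a standard normal-approximation formula then gives the per-group n. This is a simplified sketch assuming a common untransformed variance in both groups; the paper's exact formulation may differ in detail:

```python
from math import log, sqrt, ceil
from statistics import NormalDist

def n_per_group(median1, median2, var, alpha=0.05, power=0.9):
    """Per-group n for a two-sample t-test on log-transformed data, with
    the effect specified as a difference in medians and a common variance
    given on the untransformed scale (illustrative sketch)."""
    def log_var(m):
        # log-normal: median = exp(mu), var = (exp(s2)-1)*exp(2*mu+s2);
        # with exp(mu) = m, u := exp(s2) solves u**2 - u - var/m**2 = 0
        u = (1.0 + sqrt(1.0 + 4.0 * var / m**2)) / 2.0
        return log(u)
    s2 = (log_var(median1) + log_var(median2)) / 2.0  # pooled log-scale var
    delta = log(median2) - log(median1)               # median diff, log scale
    z = NormalDist()
    za, zb = z.inv_cdf(1.0 - alpha / 2.0), z.inv_cdf(power)
    return ceil(2.0 * s2 * (za + zb) ** 2 / delta ** 2)

n = n_per_group(10.0, 15.0, 25.0)  # medians 10 vs 15, untransformed var 25
```

Note how the clinically interpretable inputs (medians, raw-scale variance) are converted internally to the log scale, which is the point of the method.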
Energy Technology Data Exchange (ETDEWEB)
Puigdomenech, I [Studsvik AB, Nykoeping (Sweden); Bruno, J [Intera Information Technologies SL, Cerdanyola (Spain)
1995-04-01
Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar for T<100 deg C and at the steam saturation pressure at higher temperatures. For aqueous species, the revised Helgeson-Kirkham-Flowers model is used for temperature extrapolations. The database contains a large amount of estimated data, and the methods used for these estimations are described in detail. A new equation is presented that allows the estimation of Δ_rC°_p,m values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the database have also been estimated, and therefore the temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the database developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were taken as the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps, and the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
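The Dice similarity coefficient used for validation is straightforward to compute from two binary masks; the toy masks below are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "bone" masks from a CT-based and an MRI-based segmentation
ct_mask = np.zeros((8, 8), int); ct_mask[2:6, 2:6] = 1
mr_mask = np.zeros((8, 8), int); mr_mask[3:7, 2:6] = 1
score = dice(ct_mask, mr_mask)   # overlap of 12 voxels out of 16 + 16
```

In the paper this score is computed per tissue class against the segmented CT "silver standard".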
Highlights from the previous volumes
Vergini, Eduardo G.; Pan, Y.; Vardi, R. et al.; Akkermans, Eric et al.; et al.
2014-01-01
Semiclassical propagation up to the Heisenberg time; Superconductivity and magnetic order in the half-Heusler compound ErPdBi; An experimental evidence-based computational paradigm for new logic-gates in neuronal activity; Universality in the symmetric exclusion process and diffusive systems
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
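The error-propagation mechanism the authors analyze can be illustrated with a minimal 1-D sketch (all values below are illustrative, not taken from the paper): perturbing the source term of a discrete pressure Poisson problem induces a pressure error whose bound grows with the domain size, consistent with the abstract's conclusion that the error depends on the boundary conditions and flow-domain dimensions.

```python
import numpy as np

def solve_poisson_1d(f, L=1.0):
    """Solve -p'' = f on (0, L) with p(0) = p(L) = 0 by 3-point finite differences."""
    n = f.size
    h = L / (n + 1)
    # Tridiagonal second-difference matrix
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

rng = np.random.default_rng(0)
n, L = 199, 1.0
x = np.linspace(0, L, n + 2)[1:-1]
f_true = np.sin(np.pi * x)                # stand-in for the PIV-derived source term
noise = 0.05 * rng.standard_normal(n)     # stand-in for PIV measurement error

p_clean = solve_poisson_1d(f_true, L)
p_noisy = solve_poisson_1d(f_true + noise, L)
err = np.abs(p_noisy - p_clean).max()

# By the discrete maximum principle, the induced pressure error is bounded by
# (L^2 / 8) * max|noise| -- a bound that grows with the domain size L.
bound = L**2 / 8 * np.abs(noise).max()
```

Running the sketch confirms `err <= bound`; doubling `L` quadruples the bound, which is the qualitative behavior the paper quantifies rigorously.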
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet while wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated from the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its parameters, including vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models also predict aerodynamic force values at mid-downstroke that differ from independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta-studies of animal flight to derive bioinspired design principles for quasi-steady lift.
One-velocity neutron diffusion calculations based on a two-group reactor model
Energy Technology Data Exchange (ETDEWEB)
Bingulac, S; Radanovic, L; Lazarevic, B; Matausek, M; Pop-Jordanov, J [Boris Kidric Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)
1965-07-01
Many processes in reactor physics are described by the energy-dependent neutron diffusion equations, which for many practical purposes can often be reduced to one-dimensional two-group equations. Though such two-group models are satisfactory from the standpoint of accuracy, they require rather extensive computations, which are usually iterative and involve the use of digital computers. In many applications, however, and particularly in dynamic analyses, where the studies are performed on analogue computers, it is preferable to avoid iterative calculations. The usual practice in such situations is to resort to one-group models, which allow the solution to be expressed analytically. However, the loss in accuracy is rather great, particularly when several media of different properties are involved. This paper describes a procedure by which the solution of the two-group neutron diffusion equations can be expressed analytically in a form which, from the computational standpoint, is as simple as the one-group model, but retains the accuracy of the two-group treatment. In describing the procedure, the case of a multi-region nuclear reactor of cylindrical geometry is treated, but the method applied and the results obtained are of more general application. Another approach to the approximate solution of diffusion equations, suggested by Galanin, is applicable only in special ideal cases.
Calculation of benefit reserves based on true m-thly benefit premiums
Riaman; Susanti, Dwi; Supriatna, Agus; Nurani Ruchjana, Budi
2017-10-01
Life insurance is a form of insurance that provides risk mitigation in the life or death of a human. One of its types is measured life insurance. Insurance companies ought to hold a sum of money as reserves for the customers. The benefit reserve is an alternative calculation which involves net and cost premiums. An insured may pay a series of benefit premiums to an insurer equivalent, at the date of policy issue, to the sum to be paid on the death of the insured, or on survival of the insured to the maturity date. A balancing item is required, and this item is a liability for one of the parties and an asset for the other. The balancing item in a loan is the outstanding principal, an asset for the lender and a liability for the borrower. In this paper we examined the benefit reserve formulas corresponding to the formulas for true m-thly benefit premiums by the prospective method. This method specifies that the reserves at the end of the first year are zero. Several principles can be used for the determination of benefit premiums; an equivalence relation is established in our discussion.
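The prospective reserve idea described above, the expected present value (EPV) of future benefits minus the EPV of future premiums, with the premium fixed by the equivalence principle, can be illustrated with a toy annual-premium example. The mortality rates and interest rate here are hypothetical, chosen only for illustration, and the m-thly refinement of the paper is not reproduced.

```python
# Toy prospective reserve for a 3-year term insurance of 1 with annual premiums.
q = [0.01, 0.02, 0.03]      # q_{x+t}: hypothetical one-year death probabilities
v = 1 / 1.05                # discount factor at an assumed 5% interest rate

def epv_benefit(t, q, v):
    """EPV at time t of a death benefit of 1 paid at the end of the year of death."""
    epv, surv = 0.0, 1.0
    for k in range(t, len(q)):
        epv += surv * q[k] * v ** (k - t + 1)
        surv *= 1 - q[k]
    return epv

def epv_annuity(t, q, v):
    """EPV at time t of premiums of 1 paid at the start of each remaining year."""
    epv, surv = 0.0, 1.0
    for k in range(t, len(q)):
        epv += surv * v ** (k - t)
        surv *= 1 - q[k]
    return epv

# The equivalence principle at issue fixes the net annual premium P.
P = epv_benefit(0, q, v) / epv_annuity(0, q, v)

def reserve(t):
    """Prospective reserve: EPV of future benefits minus EPV of future premiums."""
    return epv_benefit(t, q, v) - P * epv_annuity(t, q, v)
```

By construction `reserve(0)` is zero at issue, and with mortality increasing over the term the later reserves are positive, since early premiums exceed the early cost of insurance.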
The future of new calculation concepts in dosimetry based on the Monte Carlo Methods
International Nuclear Information System (INIS)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.
2009-01-01
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)
Feng, Lei; Zhang, Yugui
2017-08-01
Dispersion analysis is an important part of in-seam seismic data processing, and the calculation accuracy of the dispersion curve directly influences the picking error of channel wave travel times. To extract an accurate channel wave dispersion curve from in-seam seismic two-component signals, we proposed a time-frequency analysis method based on single-trace signal processing; in addition, we formulated a dispersion calculation equation, based on the S-transform, with a freely adjustable filter window width. To unify the azimuth of seismic wave propagation received by a two-component geophone, the original in-seam seismic data undergo coordinate rotation. The rotation angle can be calculated from P-wave characteristics, with high energy in the wave propagation direction and weak energy in the vertical direction. With this angle, a two-component signal can be converted to the horizontal and vertical directions. Because Love channel waves have a particle vibration track perpendicular to the wave propagation direction, the signal in the horizontal and vertical directions is mainly Love channel waves. More accurate dispersion characteristics of Love channel waves can be extracted after the coordinate rotation of two-component signals.
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
International Nuclear Information System (INIS)
Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C
2015-01-01
Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water, modified by an inverse-square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method to dose calculation in heterogeneous phantoms.
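The dose model described above, a central-axis depth term with an inverse-square correction multiplied by a Gaussian off-axis term, can be sketched as follows. The depth-dose curve below is a crude Gaussian placeholder for a Bragg peak, not a measured proton curve, and all dimensions and spot positions are illustrative assumptions; the superposition over spots is the loop the paper parallelizes on the GPU.

```python
import numpy as np

def pencil_beam_dose(z, x, y, sigma=5.0, ssd=1000.0):
    """Dose of one pencil beam: central-axis depth term times a 2-D Gaussian
    off-axis term. The depth-dose shape is a placeholder, not measured data."""
    depth_dose = np.exp(-((z - 150.0) / 20.0) ** 2)   # crude Bragg-like peak at 150 mm
    inv_square = (ssd / (ssd + z)) ** 2               # inverse-square correction
    r2 = x ** 2 + y ** 2
    off_axis = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return depth_dose * inv_square * off_axis

# Superpose a grid of pencil beams over a square field -- the embarrassingly
# parallel operation that maps naturally onto a GPU.
z = np.linspace(0, 300, 301)[:, None, None]           # depth, 1 mm resolution
x = np.linspace(-20, 20, 41)[None, :, None]
y = np.linspace(-20, 20, 41)[None, None, :]
spots = [(sx, sy) for sx in range(-10, 11, 5) for sy in range(-10, 11, 5)]
dose = sum(pencil_beam_dose(z, x - sx, y - sy) for sx, sy in spots)
```

The central-axis profile `dose[:, 20, 20]` peaks slightly upstream of 150 mm because the inverse-square factor decays with depth.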
Ship motion-based wave estimation using a spectral residual-calculation
DEFF Research Database (Denmark)
Nielsen, Ulrik D.; H. Brodtkorb, Astrid
2018-01-01
This paper presents a study focused on a newly developed procedure for wave spectrum estimation using wave-induced motion recordings from a ship. The particular procedure stands out from other existing, similar ship motion-based procedures by its computational efficiency and - at the same time - ...
Calculating the Entropy of Solid and Liquid Metals, Based on Acoustic Data
Tekuchev, V. V.; Kalinkin, D. P.; Ivanova, I. V.
2018-05-01
The entropies of iron, cobalt, rhodium, and platinum are studied for the first time, based on acoustic data and using the Debye theory and rigid-sphere model, from 298 K up to the boiling point. A formula for the melting entropy of metals is validated. Good agreement between the research results and the literature data is obtained.
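The Debye theory invoked above admits a compact sketch of the vibrational entropy. The Debye temperature used below is only of the order commonly quoted for iron (an assumption), and the acoustic-data step of the paper, deriving the Debye temperature from sound velocities, is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

R = 8.314462618  # J/(mol K), molar gas constant

def debye_function(x):
    """D(x) = (3/x^3) * integral_0^x t^3 / (e^t - 1) dt."""
    val, _ = quad(lambda t: t**3 / np.expm1(t), 0.0, x)
    return 3.0 * val / x**3

def debye_entropy(T, theta_D):
    """Molar vibrational entropy of the Debye model:
    S = 3R [ (4/3) D(x) - ln(1 - e^{-x}) ],  with x = theta_D / T."""
    x = theta_D / T
    return 3.0 * R * (4.0 / 3.0 * debye_function(x) - np.log(-np.expm1(-x)))

theta = 470.0   # K, of the order quoted for iron (assumption)
T = 5.0
# Low-temperature check against the T^3 law: S -> (4 pi^4 / 5) R (T/theta)^3
s_t3 = 4 * np.pi**4 / 5 * R * (T / theta) ** 3
```

At 5 K the computed entropy matches the T³ limit to better than 0.1%, a quick consistency check on the implementation.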
International Nuclear Information System (INIS)
Ruehle, R.; Wohland, H.; Reyer, G.
1976-01-01
The basic principles of the application system RSYST are presented. This program system was developed and is used at the Institut fuer Kernenergetik (IKE). The data base, the structure and linking of data, and the application language are described. The process of problem formulation typical for RSYST is discussed. The system is currently being extended to dialogue and remote data processing. (orig.) [de
International Nuclear Information System (INIS)
Lim, L; Gibbs, P; Yip, D; Shapiro, JD; Dowling, R; Smith, D; Little, A; Bailey, W; Liechtenstein, M
2005-01-01
To prospectively evaluate the efficacy and safety of selective internal radiation (SIR) spheres in patients with inoperable liver metastases from colorectal cancer who have failed 5-FU based chemotherapy. Patients were prospectively enrolled at three Australian centres. All patients had previously received 5-FU based chemotherapy for metastatic colorectal cancer. Patients were ECOG 0–2 and had liver dominant or liver only disease. Concurrent 5-FU was given at investigator discretion. Thirty patients were treated between January 2002 and March 2004. As of July 2004 the median follow-up is 18.3 months. Median patient age was 61.7 years (range 36 – 77). Twenty-nine patients are evaluable for toxicity and response. There were 10 partial responses (33%), with the median duration of response being 8.3 months (range 2–18) and median time to progression of 5.3 months. Response rates were lower (21%) and progression free survival shorter (3.9 months) in patients that had received all standard chemotherapy options (n = 14). No responses were seen in patients with a poor performance status (n = 3) or extrahepatic disease (n = 6). Overall treatment related toxicity was acceptable; however, significant late toxicity included 4 cases of gastric ulceration. In patients with metastatic colorectal cancer that have previously received treatment with 5-FU based chemotherapy, treatment with SIR-spheres has demonstrated encouraging activity. Further studies are required to better define the subsets of patients most likely to respond.
International Nuclear Information System (INIS)
Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z_m), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z_R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z_m and z_R), and the traditional "full-scatter" methodology. All methodologies except the "full-scatter" methodology separated head-scatter from phantom-scatter effects, and none of the methodologies, except for the ESTRO formalism, utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at z_m (4% at 6 MV and 1.6% at 10 MV) than at z_R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were
International Nuclear Information System (INIS)
Lee, Gyeong Geun; Lee, Yong Bok; Kim, Min Chul; Kwon, Junh Yun
2012-01-01
Neutron irradiation of reactor pressure vessel (RPV) steels causes a decrease in fracture toughness and an increase in yield strength while in service. It is generally accepted that the growth of point defect clusters (PDCs) and copper-rich precipitates (CRPs) drives the radiation hardening of RPV steels. A number of models have been proposed to account for the embrittlement of RPV steels. Rate-theory-based modeling mathematically describes the evolution of radiation-induced microstructures of ferritic steels under neutron irradiation. In this work, we compared rate-theory-based modeling calculations with the surveillance test results of Korean Light Water Reactors (LWRs)
International Nuclear Information System (INIS)
Gesheva-Atanasova, N.
2008-01-01
The aim of this study is: 1) to propose a procedure and a program for monitor unit calculation for radiation therapy with high energy photon beams, based on data measured by the author; 2) to compare this data with published data; and 3) to evaluate the precision of the monitor unit calculation program. From this study it could be concluded that we reproduced the published data with good agreement, except for the TPR values for depths up to 5 cm. The measured relative weight of the upper and lower jaws (parameter A) was dramatically different from the published data, but perfectly described the collimator exchange effect for our treatment machine. No difference was found between the head scatter ratios measured in a mini phantom and those measured with a proper brass buildup cap. Our monitor unit calculation program was found to be reliable, and it can be applied for checking patients' plans for irradiation with high energy photon beams and for some fast calculations. Because of the identity in construction, design and characteristics of Siemens accelerators, and the agreement with the published data for the same beam qualities, we hope that most of our experimental data and this program can be used, after verification, in other hospitals.
International Nuclear Information System (INIS)
Mai, V. T.; Fujii, T.; Wada, K.; Kitada, T.; Takaki, N.; Yamaguchi, A.; Watanabe, H.; Unesaki, H.
2012-01-01
Considering the importance of thorium data and concerns about the accuracy of the Th-232 cross section library, a series of thorium critical core experiments carried out at the KUCA facility of Kyoto Univ. Research Reactor Inst. have been analyzed. The core was composed of pure thorium plates and 93% enriched uranium plates, with a solid polyethylene moderator, a hydrogen to U-235 ratio of 140, and a Th-232 to U-235 ratio of 15.2. Calculations of the effective multiplication factor, control rod worth, and reactivity worth of the Th plates have been conducted with the MVP code using the JENDL-4.0 library [1]. At the experiment site, after achieving the critical state with 51 fuel rods inserted in the reactor, measurements of the reactivity worth of the control rods and of a thorium sample were carried out. Compared with the experimental data, the calculation overestimates the effective multiplication factor by about 0.90%. The evaluation of the control rod reactivity worth using MVP is acceptable, with a maximum discrepancy on the order of the statistical error of the measured data. The calculated results agree with the measured ones within 3.1% for the reactivity worth of one Th plate. From this investigation, further experiments and research on the Th-232 cross section library need to be conducted to provide more reliable data for thorium-based fuel core design and safety calculations. (authors)
The calculation of the chemical exergies of coal-based fuels by using the higher heating values
International Nuclear Information System (INIS)
Bilgen, Selcuk; Kaygusuz, Kamil
2008-01-01
This paper demonstrates the application of exergy to gain a better understanding of coal properties, especially chemical exergy and specific chemical exergy. In this study, a BASIC computer program was used to calculate the chemical exergies of the coal-based fuels. Calculations showed that the chemical composition of the coal strongly influences the values of the chemical exergy. The exergy value of a coal is closely related to the H:C and O:C ratios. High proportions of hydrogen and/or oxygen, compared to carbon, generally reduce the exergy value of the coal. High moisture and/or ash content leads to low values of the chemical exergy. The aim of this paper is to calculate the chemical exergy of coals by using equations given in the literature and to detect and evaluate quantitatively the effect of irreversible phenomena that increase the thermodynamic imperfection of the processes. The calculated exergy values of the fuels will be useful for energy experts working in coal mining and in coal-fired power plants.
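To illustrate how the H:C and O:C ratios enter such estimates, here is a sketch using the Szargut-Styrylska-type correlation for dry solid fuels. Note the assumptions: this correlation is expressed relative to the lower heating value rather than the HHV the paper's title refers to, the coefficients are reproduced from the general literature and should be verified against the paper's own equations, and the coal composition is hypothetical.

```python
def exergy_ratio_dry(z_h, z_c, z_o, z_n):
    """Ratio phi = e_ch / LHV for dry solid fuels (mass fractions).
    Coefficients are literature values of the Szargut-Styrylska form,
    commonly quoted as valid for z_o/z_c <= 0.667 -- verify before use."""
    if z_o / z_c > 0.667:
        raise ValueError("correlation valid only for O/C <= 0.667")
    return (1.0437 + 0.1882 * z_h / z_c
            + 0.0610 * z_o / z_c + 0.0404 * z_n / z_c)

# Hypothetical dry bituminous-coal composition (mass fractions).
phi = exergy_ratio_dry(z_h=0.05, z_c=0.75, z_o=0.10, z_n=0.015)
lhv = 29.0e6            # J/kg, assumed lower heating value
e_ch = phi * lhv        # specific chemical exergy estimate, J/kg
```

Raising the hydrogen or oxygen fraction at fixed carbon raises phi but lowers the heating value itself, which is how the composition dependence noted in the abstract arises.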
Kamata, S.
2017-12-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
International Nuclear Information System (INIS)
Chow, James C L; Lam, Phil; Jaffray, David A
2012-01-01
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
Choi, Yun Seok
2017-05-26
Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long-wavelength model information even without low frequencies in the data. However, without the back-propagation method (adjoint-state method), its implementation is impractical for models of the size of a general seismic survey. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long-wavelength structure model even without low-frequency information, which can be used as a good starting model for the subsequent conventional FWI.
Calculated thermal performance of solar collectors based on measured weather data from 2001-2010
DEFF Research Database (Denmark)
Dragsted, Janne; Furbo, Simon; Andersen, Elsa
2015-01-01
This paper presents an investigation of the differences in the modeled thermal performance of solar collectors when meteorological reference years are used as input and when multi-year weather data is used as input. The investigation has shown that using the Danish reference year based on the period ... with an increase in global radiation. This means that besides the thermal performance increasing with increasing solar radiation, the utilization of the solar radiation also becomes better.
Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo
2017-02-20
We proposed an optical simulation model for quantum dot (QD) nanophosphors based on the mean free path concept to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the desired optical characteristics, such as the mean free path and absorption spectra, for QD nanophosphors to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinate, CCT, and CRI. The proposed simulation model and measurement methodology can be readily applied to the design of many optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
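A mean-free-path treatment of absorption, the idea underlying such ray-tracing models, can be sketched as a toy Monte Carlo check against the Beer-Lambert law. The mean free path and layer thickness below are arbitrary assumptions, not measured QD values, and the sketch ignores re-emission and scattering direction, which a full device simulation would include.

```python
import numpy as np

rng = np.random.default_rng(1)
mfp = 2.0        # mean free path in the QD layer (arbitrary units, assumption)
thickness = 1.0  # layer thickness in the same units

# Sample exponential free paths; a photon interacts in the layer if its
# first interaction point falls inside the layer thickness.
n = 200_000
paths = rng.exponential(mfp, n)
p_interact_mc = np.mean(paths < thickness)

# Beer-Lambert prediction from the same mean free path.
p_interact = 1.0 - np.exp(-thickness / mfp)
```

With this seed the Monte Carlo estimate agrees with the closed-form probability to within a few parts in a thousand, the kind of consistency a mean-free-path-based ray tracer relies on.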
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and to be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
GIS-based Watershed Management Modeling for Surface Runoff Calculation in Tatara River Basin, Japan
Ix, Hour; Mori, Makito; Hiramatsu, Kazuaki; Harada, Masayoshi
2007-01-01
In the past few decades, when Geographical Information System (GIS) technology was not yet fully developed for practical use, watershed delineation used to be conducted manually by hydrologists based on topographic maps. The work was a tedious operation, since it had to be done repeatedly in a similar manner for each basin or sub-basin of interest, and the process always left some unpredicted errors. Nowadays, GIS software is being upgraded regularly with powerful tools responding to the needs o...
Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Synchronization
Directory of Open Access Journals (Sweden)
Manuel Prian Rodríguez
2017-12-01
Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may later be employed as a setpoint signal for different control techniques. The proposed procedure is based on adaptive synchronization between an oscillator and a reference model driven by the oscillator state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting
Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng
2013-01-01
The tilt and decentration of an intraocular lens (IOL) result in defocussing, astigmatism, and wavefront aberration after the operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. By the established relationship between IOL tilt (decentrati...
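The surface-fitting step can be illustrated with the algebraic sphere equation x² + y² + z² + Dx + Ey + Fz + G = 0, which is linear in (D, E, F, G): four non-coplanar points determine the center and radius exactly, and more points would be handled by least squares. The sample points below are synthetic, not AS-OCT data; tilt and decentration would then follow from the fitted centers of the two IOL surfaces:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_sphere(points):
    """Sphere through 4 non-coplanar points via
    x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 (linear in D, E, F, G)."""
    A = [[x, y, z, 1.0] for x, y, z in points]
    b = [-(x * x + y * y + z * z) for x, y, z in points]
    D, E, F, G = solve_linear(A, b)
    center = (-D / 2, -E / 2, -F / 2)
    radius_sq = (D * D + E * E + F * F) / 4 - G
    return center, radius_sq ** 0.5

# synthetic points lying on a sphere of center (1, 2, 3) and radius 5
c, r = fit_sphere([(6, 2, 3), (1, 7, 3), (1, 2, 8), (1, 2, -2)])
```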
A calculation method for RF coupler design based on numerical simulation by Microwave Studio
International Nuclear Information System (INIS)
Wang Rong; Pei Yuanji; Jin Kai
2006-01-01
A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)
Preliminary Calculation for Plasma Chamber Design of Pulsed Electron Source Based on Plasma
International Nuclear Information System (INIS)
Widdi Usada
2009-01-01
This paper described the characteristics of pulsed electron sources with an anode-cathode distance of 5 cm and an electrode diameter of 10 cm, driven by a capacitor energy of 25 J. The preliminary results showed that if the system is operated with a diode resistance of 1.6 Ω, a plasma resistance of 0.14 Ω, and β of 0.94, the achieved plasma voltage is 640 V and its current is 4.395 kA, with a pulse width of 0.8 μs. According to the breakdown voltage given by the empirical Paschen formula, at this voltage the system could be operated at an operating pressure of 1 torr. (author)
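The breakdown-voltage check can be sketched with the textbook form of Paschen's law, V_b = B·p·d / ln[A·p·d / ln(1 + 1/γ)]. The constants below (A ≈ 15 cm⁻¹·Torr⁻¹, B ≈ 365 V·cm⁻¹·Torr⁻¹ for air, γ = 0.01) are generic textbook estimates, not values taken from this paper:

```python
import math

def paschen_breakdown(p_torr, d_cm, A=15.0, B=365.0, gamma=0.01):
    """Paschen's law: V_b = B*p*d / ln( A*p*d / ln(1 + 1/gamma) ).
    A and B are gas-dependent constants and gamma the secondary-emission
    coefficient; the defaults are generic textbook values for air."""
    pd = p_torr * d_cm
    return B * pd / math.log(A * pd / math.log(1.0 + 1.0 / gamma))

# 1 torr and the 5 cm anode-cathode gap quoted above
vb = paschen_breakdown(1.0, 5.0)
```

With these generic constants the curve yields a breakdown voltage of the same order as the 640 V plasma voltage reported, consistent with operation around 1 torr.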
International Nuclear Information System (INIS)
Chen, C.L.; Wu, T.H.; Cheng, M.C.; Huang, Y.H.; Sheu, C.Y.; Hsieh, J.C.; Lee, J.S.
2006-01-01
Abacus-based mental calculation is a unique Chinese culture. The abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the brain activation differences between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. From the results, different ways of performing calculations between the two groups were seen. The experts tended to adopt efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on the virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, more involvement of the visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation and low level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming sequentially organized process was noted in the abacus expert group than in the non-expert group. We suggest that these findings may explain why abacus experts can reveal the exceptional computational skills compared to non-experts after intensive training
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2017-04-01
A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)^3 (M: number of basis functions).
International Nuclear Information System (INIS)
Xu, Liang; Yuan, Jingqi
2015-01-01
Thermodynamic properties of the working fluid and the flue gas play an important role in the thermodynamic calculations for boiler design and operational optimization in power plants. In this study, a generic approach to the online calculation of the thermodynamic properties of the flue gas is proposed based on its composition estimation. It covers the full operation scope of the flue gas, including the two-phase state when the temperature falls below the dew point. The composition of the flue gas is estimated online based on the routine offline assays of the coal samples and the online measured oxygen mole fraction in the flue gas. The relative error of the proposed approach is found to be less than 1% when the standard data set of the dry and humid air and the typical flue gas is used for validation. Also, the sensitivity analysis of the individual components and the influence of the measurement error of the oxygen mole fraction on the thermodynamic properties of the flue gas are presented. - Highlights: • Flue gas thermodynamic properties in coal-fired power plants are calculated online. • Flue gas composition is estimated online using the measured oxygen mole fraction. • The proposed approach covers the full operation scope, including two-phase flue gas. • Component sensitivity to the thermodynamic properties of flue gas is presented.
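The composition-estimation step — inferring the excess-air ratio from the coal ultimate analysis plus the measured dry-basis O2 fraction — reduces to linear stoichiometry under simplifying assumptions (complete combustion, dry air as 21% O2 / 79% N2). A sketch with illustrative coal data, not the paper's estimator:

```python
def flue_gas_dry_composition(coal, y_o2_dry):
    """Estimate the excess-air ratio and dry flue-gas composition from a
    coal ultimate analysis (mass fractions) and the measured dry-basis O2
    mole fraction.  Assumptions: complete combustion, dry 21/79 air,
    moisture condensed out of the measured gas."""
    nC   = coal['C'] / 12.011          # kmol of each species per kg coal
    nH2  = coal['H'] / 2.016
    nS   = coal['S'] / 32.06
    nN2f = coal['N'] / 28.014
    nO2f = coal['O'] / 31.998
    s = nC + 0.5 * nH2 + nS - nO2f     # stoichiometric O2 demand
    r = 79.0 / 21.0                    # N2 carried per mole of O2 in air
    base = nC + nS + nN2f              # dry products independent of excess air
    y = y_o2_dry
    # measured y = (lam-1)*s / (base + (lam-1)*s + lam*s*r), linear in lam:
    lam = (s * (1.0 - y) + y * base) / (s * (1.0 - y * (1.0 + r)))
    gas = {'CO2': nC, 'SO2': nS,
           'O2': (lam - 1.0) * s,
           'N2': lam * s * r + nN2f}
    total = sum(gas.values())
    return lam, {k: v / total for k, v in gas.items()}

# ultimate analysis of a typical bituminous coal (illustrative values)
coal = {'C': 0.70, 'H': 0.05, 'O': 0.10, 'N': 0.015, 'S': 0.01}
lam, frac = flue_gas_dry_composition(coal, y_o2_dry=0.05)
```

Solving the single linear equation for the excess-air ratio is what makes the estimation cheap enough to run online.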
Chan, GuoXuan; Wang, Xin
2018-04-01
We consider two typical approximations that are used in the microscopic calculations of double-quantum-dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls the quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which is accurate only in the case with a single potential well. Our results suggest that the accuracies of the current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited since underestimation could only deteriorate in dimensions that are higher than one.
Watanabe, Takashi
2013-01-01
The wearable sensor system developed by our group, which measured lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, the variability of its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy: it significantly improved foot inclination angle measurement and slightly improved shank and thigh inclination angle measurement. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results seen in other studies that used markers of a camera-based motion measurement system fixed on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
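The idea of an error-driven variable gain can be sketched with a complementary filter whose correction gain shrinks as the accelerometer-derived angle disagrees with the current estimate. This is a simplified stand-in for the paper's variable-Kalman-gain method, run on synthetic data:

```python
import random

def track_angle(gyro_rates, accel_angles, dt=0.01, k0=0.05, err_scale=5.0):
    """Complementary filter with a variable correction gain: the gain is
    reduced when the accelerometer-derived angle disagrees strongly with
    the current estimate (a stand-in for an error-based Kalman gain)."""
    theta = accel_angles[0]
    estimates = []
    for w, a in zip(gyro_rates, accel_angles):
        theta += w * dt                        # predict from gyro integration
        err = a - theta                        # accelerometer innovation
        k = k0 / (1.0 + err_scale * abs(err))  # distrust large innovations
        theta += k * err                       # correct
        estimates.append(theta)
    return estimates

# synthetic 10 deg/s rotation with noisy sensors (illustrative data)
random.seed(0)
true_angles = [10.0 * 0.01 * i for i in range(500)]
gyro = [10.0 + random.gauss(0.0, 0.5) for _ in range(500)]
accel = [t + random.gauss(0.0, 2.0) for t in true_angles]
estimates = track_angle(gyro, accel)
```

Shrinking the gain on large innovations keeps transient accelerometer disturbances (e.g. foot impact) from corrupting the smooth gyro-integrated estimate.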
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
Energy Technology Data Exchange (ETDEWEB)
Chow, James C L; Lam, Phil; Jaffray, David A, E-mail: james.chow@rmp.uhn.on.ca [Department of Radiation Oncology, University of Toronto and Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada)
2012-02-09
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof, specifying the number of executions and the total time spent in the functions. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations
Orellana, Walter
The capture and conversion of solar energy into electricity is one of the most important challenges to the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins attract great attention due to their well-known absorption properties in the visible range. However, extended materials like polymers with similar absorption properties are highly desirable. In this work, we investigate the stability, electronic and optical properties of polymeric nanostructures based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of time-dependent density functional perturbation theory. The aim of this work is the stability, electronic, and optical characterization of polymeric sheets and nanotubes obtained from H2P and H2Pc monomers. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different from those of the isolated molecules. However, the H2P and H2Pc nanotubes exhibit wide absorption in the visible and near-UV range, with larger peaks at 600 and 700 nm, respectively, suggesting good characteristics for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules are also discussed.
Directory of Open Access Journals (Sweden)
Hyejoo Kang
2017-07-01
Full Text Available Purpose: The goal is to develop a stand-alone application, which automatically and consistently computes the coordinates of the dose calculation point recommended by the American Brachytherapy Society (i.e., point A) based solely on the implanted applicator geometry for cervical cancer brachytherapy. Material and methods: The application calculates point A coordinates from the source dwell geometries in the computed tomography (CT) scans, and outputs the 3D coordinates in the left and right directions. The algorithm was tested on 34 CT scans of 7 patients treated with high-dose-rate (HDR) brachytherapy using tandem and ovoid applicators. A single experienced user retrospectively and manually inserted point A into each CT scan, whose coordinates were used as the “gold standard” for all comparisons. The gold standard was subtracted from the automatically calculated points, a second manual placement by the same experienced user, and the clinically used point coordinates inserted by multiple planners. Coordinate differences and corresponding variances were compared using nonparametric tests. Results: Automatically calculated, manually placed, and clinically used points agree with the gold standard to < 1 mm, 1 mm, 2 mm, respectively. When compared to the gold standard, the average and standard deviation of the 3D coordinate differences were 0.35 ± 0.14 mm for the automatically calculated points, 0.38 ± 0.21 mm for the second manual placement, and 0.71 ± 0.44 mm for the clinically used point coordinates. Both the mean and standard deviations of the 3D coordinate differences were statistically significantly different from the gold standard when point A was placed by multiple users (p < 0.05) but not when placed repeatedly by a single user or when calculated automatically. There were no statistical differences in doses, which agree to within 1-2% on average for all three groups. Conclusions: The study demonstrates that the automated algorithm
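Under the classical Manchester definition, point A lies 2 cm superior to the cervical os along the tandem axis and 2 cm perpendicular to it, left and right. The sketch below assumes the os is the most inferior dwell and that the anterior-posterior axis is known; it illustrates the geometry only and is not the published algorithm:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_a(dwells, ap_axis=(0.0, 1.0, 0.0), offset=20.0):
    """Left/right point A (mm) from tandem dwell positions ordered tip to
    os.  The os is taken as the last dwell and the lateral direction as
    (tandem axis) x (anterior-posterior axis) -- both assumptions."""
    tip, os_ = dwells[0], dwells[-1]
    d = tuple(t - o for t, o in zip(tip, os_))
    n = sum(c * c for c in d) ** 0.5
    u = tuple(c / n for c in d)                  # unit vector up the tandem
    lat = cross(u, ap_axis)                      # left-right direction
    ln = sum(c * c for c in lat) ** 0.5
    lat = tuple(c / ln for c in lat)
    up = tuple(o + offset * c for o, c in zip(os_, u))
    left = tuple(q + offset * c for q, c in zip(up, lat))
    right = tuple(q - offset * c for q, c in zip(up, lat))
    return left, right

# vertical tandem along z with the os at the origin (synthetic geometry)
pa_left, pa_right = point_a([(0.0, 0.0, 60.0), (0.0, 0.0, 40.0), (0.0, 0.0, 0.0)])
```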
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size of hemodialysis units that maximizes production given capital constraints, a national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup dialyzer) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production due to not fully utilizing equipment and personnel, particularly their water purifier potential, which happens to be the most expensive asset for these units.
International Nuclear Information System (INIS)
Ivo, Kljenak; Miroslav, Babic; Borut, Mavko
2007-01-01
The possibility of simulating adequately the flow circulation within a nuclear power plant containment using a lumped-parameter code is considered. An experiment on atmosphere mixing and stratification, which was performed in the containment experimental facility TOSQAN at IRSN (Institute of Radioprotection and Nuclear Safety) in Saclay (France), was simulated with the CFD (Computational Fluid Dynamics) code CFX4 and the lumped-parameter code CONTAIN. During some phases of the experiment, steady states were achieved by keeping the boundary conditions constant. Two steady states during which natural convection was the dominant gas flow mechanism were simulated independently. The nodalization of the lumped-parameter model was based on the flow pattern, simulated with the CFD code. The simulation with the lumped-parameter code predicted basically the same flow circulation patterns within the experimental vessel as the simulation with the CFD code did. (authors)
International Nuclear Information System (INIS)
Schweingruber, M.
1983-12-01
A chemical equilibrium model is used to estimate maximum upper concentration limits for some actinides (Th, U, Np, Pu, Am) in groundwaters. Eh/pH diagrams for solubility isopleths, dominant dissolved species and limiting solids are constructed for fixed parameter sets including temperature, thermodynamic database, ionic strength and total concentrations of most important inorganic ligands (carbonate, fluoride, phosphate, sulphate, chloride). In order to assess conservative conditions, a reference water is defined with high ligand content and ionic strength, but without competing cations. In addition, actinide oxides and hydroxides are the only solid phases considered. Recommendations for 'safe' upper actinide solubility limits for deep groundwaters are derived from such diagrams, based on the predicted Eh/pH domain. The model results are validated as far as the scarce experimental data permit. (Auth.)
Yield estimation based on calculated comparisons to particle velocity data recorded at low stress
International Nuclear Information System (INIS)
Rambo, J.
1993-01-01
This paper deals with the problem of optimizing the yield estimation process if some of the material properties are known from geophysical measurements and others are inferred from in-situ dynamic measurements. The material models and 2-D simulations of the event are combined to determine the yield. Other methods of yield determination from peak particle velocity data have mostly been based on comparisons of nearby events in similar media at NTS. These methods are largely empirical and are subject to additional error when a new event has different properties than the population used as a basis of comparison. The effect of material variations can be examined using LLNL's KDYNA computer code. The data from an NTS event provide an instructive example for simulation.
Challenges in calculating the bandgap of triazine-based carbon nitride structures
Steinmann, Stephan N.
2017-02-08
Graphitic carbon nitrides form a popular family of materials, particularly as photoharvesters in photocatalytic water splitting cells. Recently, relatively ordered g-C3N4 and g-C6N9H3 were characterized by X-ray diffraction and their ability to photogenerate excitons was subsequently estimated using density functional theory. In this study, the ability of triazine-based g-C3N4 and g-C6N9H3 to photogenerate excitons was studied using self-consistent GW computations followed by solving the Bethe–Salpeter Equation (BSE). In particular, monolayers, bilayers and 3D-periodic systems were characterized. The predicted optical band gaps are in the order of 1 eV higher than the experimentally measured ones, which is explained by a combination of shortcomings in the adopted model, small defects in the experimentally obtained structures and the particular nature of the experimental determination of the band gap.
International Nuclear Information System (INIS)
Yun-Jiang, Wang; Chong-Yu, Wang
2009-01-01
A model system consisting of Ni[001](100)/Ni3Al[001](100) multilayers is studied using density functional theory in order to explore the elastic properties of single-crystal Ni-based superalloys. Simulation results are consistent with the experimental observation that rafted Ni-base superalloys virtually possess a cubic symmetry. The convergence of the elastic properties with respect to the thickness of the multilayers is tested by a series of multilayers from 2γ'+2γ to 10γ'+10γ atomic layers. The elastic properties are found to vary little with the increase of the multilayer thickness. A Ni/Ni3Al multilayer with 10γ'+10γ atomic layers (3.54 nm) can be used to simulate the mechanical properties of Ni-base model superalloys. Our calculated elastic constants, bulk modulus, orientation-dependent shear modulus and Young's modulus, as well as the Zener anisotropy factor, are all compatible with the measured results of the Ni-base model superalloy R1 and the advanced commercial superalloys TMS-26 and CMSX-4 at low temperature. The mechanical properties as a function of the γ' phase volume fraction are calculated by varying the proportion of the γ and γ' phases in the multilayers. Besides, the mechanical properties of the two-phase Ni/Ni3Al multilayer can be well predicted by the Voigt–Reuss–Hill rule of mixtures. (classical areas of phenomenology)
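For a cubic crystal the Voigt–Reuss–Hill averaging invoked above has closed forms in the three independent elastic constants. A sketch; the input constants are illustrative values of the order reported for Ni, not the paper's computed ones:

```python
def cubic_vrh(c11, c12, c44):
    """Voigt-Reuss-Hill averages and Zener anisotropy for a cubic crystal
    (all moduli in the units of the input elastic constants)."""
    K = (c11 + 2.0 * c12) / 3.0             # bulk modulus (Voigt = Reuss)
    Gv = (c11 - c12 + 3.0 * c44) / 5.0      # Voigt (upper) shear bound
    Gr = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))  # Reuss
    G = 0.5 * (Gv + Gr)                     # Hill average
    E = 9.0 * K * G / (3.0 * K + G)         # isotropic Young's modulus
    A = 2.0 * c44 / (c11 - c12)             # Zener anisotropy factor
    return K, G, E, A

# illustrative elastic constants (GPa), of the order reported for Ni (assumed)
K, G, E, A = cubic_vrh(247.0, 147.0, 125.0)
```

An anisotropy factor well above 1 (here 2.5) is typical for Ni-base alloys and explains the strong orientation dependence of the shear and Young's moduli.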
Energy Technology Data Exchange (ETDEWEB)
Cecconello, M
1999-05-01
Extrap T2 will be equipped with a neutral-particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. These estimates are based on a 1D model of the plasma. The input parameters of the model are the density and temperature radial profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral-particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with a 2 MHz sampling frequency and 100k samples of memory capacity is the minimum requirement for the acquisition system. 20 refs, 19 figs.
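The time-of-flight relation behind the quoted energy range is E = m(d/t)²/2; for a hydrogen neutral, a metre-scale flight path and microsecond-scale timing put the detectable energies at a few keV. The numbers below are illustrative, not the diagnostic's actual geometry:

```python
def tof_energy_kev(distance_m, time_s, mass_kg=1.6735e-27):
    """Kinetic energy (keV) of a neutral from its time of flight:
    E = m * (d/t)^2 / 2.  Default mass: a hydrogen atom."""
    v = distance_m / time_s            # speed from flight path and time
    e_joule = 0.5 * mass_kg * v * v
    return e_joule / 1.602e-16         # J -> keV

# e.g. a 1 m flight path and a 1.6 microsecond flight time (illustrative)
e = tof_energy_kev(1.0, 1.6e-6)
```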
Site- and phase-selective x-ray absorption spectroscopy based on phase-retrieval calculation
International Nuclear Information System (INIS)
Kawaguchi, Tomoya; Fukuda, Katsutoshi; Matsubara, Eiichiro
2017-01-01
Understanding the chemical state of a particular element with multiple crystallographic sites and/or phases is essential to unlocking the origin of material properties. To this end, resonant x-ray diffraction spectroscopy (RXDS) achieved through a combination of x-ray diffraction (XRD) and x-ray absorption spectroscopy (XAS) techniques can allow for the measurement of diffraction anomalous fine structure (DAFS). This is expected to provide a peerless tool for electronic/local structural analyses of materials with complicated structures thanks to its capability to extract spectroscopic information about a given element at each crystallographic site and/or phase. At present, one of the major challenges for the practical application of RXDS is the rigorous determination of resonant terms from observed DAFS, as this requires somehow determining the phase change in the elastic scattering around the absorption edge from the scattering intensity. This is widely known in the field of XRD as the phase problem. The present review describes the basics of this problem, including the relevant background and theory for DAFS and a guide to a newly-developed phase-retrieval method based on the logarithmic dispersion relation that makes it possible to analyze DAFS without suffering from the intrinsic ambiguities of conventional iterative-fitting. Several matters relating to data collection and correction of RXDS are also covered, with a final emphasis on the great potential of powder-sample-based RXDS (P-RXDS) to be used in various applications relevant to practical materials, including antisite-defect-type electrode materials for lithium-ion batteries. (topical review)
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
International Nuclear Information System (INIS)
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-01-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three of the factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistical modeling and prediction of a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
International Nuclear Information System (INIS)
Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z
2015-01-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG-based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
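The effect of patient-to-patient random-error variability can be sketched crudely: draw each patient's random-error variance from an inverse gamma distribution, convert it to a per-patient margin with the classical van Herk-style recipe M = 2.5Σ + 0.7σ, and take the value adequate for 90% of patients. The shape and scale values are invented for illustration; this is not the paper's Bayesian model:

```python
import random

def ig_margin(Sigma, alpha, beta, n_patients=10000, coverage=0.9, seed=1):
    """Crude sketch: per-patient random-error variance sigma_i^2 drawn from
    inverse-gamma(alpha, beta); per-patient margin 2.5*Sigma + 0.7*sigma_i
    (classical recipe); return the margin adequate for `coverage` of the
    simulated population."""
    random.seed(seed)
    margins = []
    for _ in range(n_patients):
        var = beta / random.gammavariate(alpha, 1.0)  # inverse-gamma sample
        margins.append(2.5 * Sigma + 0.7 * var ** 0.5)
    margins.sort()
    return margins[int(coverage * n_patients)]

m_pop = ig_margin(Sigma=2.0, alpha=3.0, beta=8.0)
# constant-sigma recipe evaluated at the mean variance beta/(alpha-1) = 4
m_const = 2.5 * 2.0 + 0.7 * (8.0 / (3.0 - 1.0)) ** 0.5
```

Because the population margin must also cover the high-variability patients, it comes out larger than the constant-random-error recipe, mirroring the median margin increase the paper reports.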
Rahayuningsih, M.; Kartijono, N. E.; Arifin, M. S.
2018-03-01
The increasing number of staff and academics resulting from UNNES's popularity as a favourite university in Indonesia has demanded more facilities to support the learning process, student activities, and campus operations. This condition has reduced the forest-covered area on the campus, even though an optimum extent must be maintained to support the ecological function of the campus areas. This research was conducted to determine the optimum campus forest area needed, based on CO2 emissions, in the UNNES area in Sekaran sub-district. The results showed that the forest needed for the UNNES campus in 2017 is 14.25 ha, but the existing area is only 13.103 ha. The campus forest in the western campus area is sufficient to absorb CO2 emissions, with about 8.147 ha available against a requirement of about 4.47 ha. The campus forest in the eastern campus area is not sufficient to absorb CO2 emissions: the requirement there is much larger, 9.78 ha, while only about 4.956 ha is available. The results of this study can be used as a reference in the development of green space both on campus and in the city of Semarang.
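The sizing arithmetic implied above is a ratio: required forest area equals annual CO2 emissions divided by the per-hectare CO2 uptake of the campus vegetation. Both numbers below are invented placeholders; the paper's emission inventory and uptake rates are not reproduced here:

```python
def forest_area_needed(co2_tons_per_year, uptake_tons_per_ha_year):
    """Required forest area (ha) = annual CO2 emissions / per-hectare uptake."""
    return co2_tons_per_year / uptake_tons_per_ha_year

# e.g. 500 t CO2/yr against an assumed uptake of 40 t CO2/ha/yr
area = forest_area_needed(500.0, 40.0)
```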
Saravanan, A. V. Sai; Abishek, B.; Anantharaj, R.
2018-04-01
The fundamental nature of the molecular-level interaction and charge transfer between specific radioactive elements and the ionic liquids 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM]+[NTf2]-), 1-butyl-3-methylimidazolium ethylsulfate ([BMIM]+[ES]-) and 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM]+[BF4]-) was investigated utilising HF theory and B3LYP hybrid DFT. The ambiguity in the reaction mechanism of the interacting species dictates the use of Effective Core Potential (ECP) basis sets such as UGBS, SDD, and SDDAll to account for the relativistic effects of deep core electrons in systems involving potential, heavy and hazardous radioactive elements present in nuclear waste. The SCF energy convergence of each system validates the characterisation of the molecular orbitals as a linear combination of atomic orbitals utilising fixed MO coefficients, and the optimized geometry of each system is visualised; based on this, Mulliken partial charge analysis is carried out to account for the polarising behaviour of the radioactive element and the charge transfer with the IL phase, by comparison with the bare IL species.
Lian, Wenjing; Liang, Jiying; Shen, Li; Jin, Yue; Liu, Hongyun
2018-02-15
Molecularly imprinted polymer (MIP) films were electropolymerized on the surface of Au electrodes with luminol and pyrrole (PY) as the two monomers and ampicillin (AM) as the template molecule. The electrochemiluminescence (ECL) intensity peak of polyluminol (PL) in the AM-free MIP films at 0.7 V vs. Ag/AgCl could be greatly enhanced by AM rebinding. In addition, the ECL signals of the MIP films could also be enhanced by the addition of glucose oxidase (GOD)/glucose and/or ferrocenedicarboxylic acid (Fc(COOH)2) to the testing solution. Moreover, Fc(COOH)2 exhibited a cyclic voltammetric (CV) response at the AM-free MIP film electrodes. Based on these results, a binary 3-input/6-output biomolecular logic gate system was established with AM, GOD, and Fc(COOH)2 as inputs and the ECL responses at different levels and the CV signal as outputs. Functional non-Boolean logic devices such as an encoder, a decoder, and a demultiplexer were also constructed on the same platform. In particular, on the basis of the same system, a ternary AND logic gate was established. The present work combined MIP film electrodes, solid-state ECL, and enzymatic reactions, and various types of biomolecular logic circuits and devices were developed, opening a novel avenue for constructing more complicated bio-logic gate systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph
2016-10-01
The purification of therapeutic proteins is a challenging task with an immediate need for optimization. Among other techniques, aqueous two-phase extraction (ATPE) of proteins has been shown to be a promising alternative to cost-intensive state-of-the-art chromatographic protein purification. To enable a selective extraction, protein partitioning generally has to be influenced using a displacement agent to isolate the target protein from the impurities. In this work, a new displacement agent (lithium bromide, LiBr) allowing for the selective separation of the target protein IgG from human serum albumin (representing the impurity) within a citrate-polyethylene glycol (PEG) aqueous two-phase system (ATPS) is presented. In order to characterize the displacement suitability of LiBr for IgG, the mutual influence of LiBr and the phase formers on the ATPS and on partitioning is investigated. Using osmotic virial coefficients (B22 and B23) accessible by composition-gradient multiangle light-scattering measurements, the precipitating effect of LiBr on both proteins is characterized and both protein partition coefficients are estimated. The stabilizing effect of LiBr on both proteins was estimated based on B22 and experimentally validated within the citrate-PEG ATPS. Our approach contributes to an efficient implementation of ATPE within the downstream processing development of therapeutic proteins. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.
2018-03-01
A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect-drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in the plasma from geometrical-optics rays. Comparisons with reference solutions show that this approach is well suited to reproducing realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe-beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and the risks of laser-plasma instabilities such as hot-electron generation and backscatter in multi-beam configurations.
International Nuclear Information System (INIS)
Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H.
2016-01-01
A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for the design of a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10−2 to 1.0 × 10−5 M with a detection limit of 8.5 × 10−6 M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulations. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigating the degradation pathway of phenazopyridine with sufficient confirmation scans. • Computational studies have been applied to avoid time-consuming experimental trials. • The proposed sensor shows high selectivity, a reasonable detection limit, and fast response.
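For context, the reported slope of 59.5 mV per decade is close to the ideal Nernstian response of a monovalent-ion electrode, which can be computed directly from the Nernst equation. The standard-potential term `e0` below is a hypothetical value for illustration, not a parameter from the paper.

```python
import math

# Ideal Nernstian response of an ion-selective electrode for a monovalent
# cation at 25 degrees C: the theoretical slope is ~59.16 mV/decade, close to
# the 59.5 mV/decade reported above. e0 is a hypothetical standard term.

R, T, F, z = 8.314, 298.15, 96485.0, 1
SLOPE = 2.303 * R * T / (z * F)  # volts per concentration decade

def electrode_potential(conc_molar, e0=0.400):
    """Electrode potential (V) vs. reference for a given ion concentration."""
    return e0 + SLOPE * math.log10(conc_molar)

# One decade of concentration changes the potential by one slope unit
delta = electrode_potential(1e-3) - electrode_potential(1e-4)
print(f"theoretical slope: {delta * 1000:.2f} mV/decade")
```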
Using Ab-Initio Calculations to Appraise STM-Based Step- and Kink-Formation Energies
Feibelman, Peter J.
2001-03-01
Ab-initio total energies can and should be used to test the typically model-dependent results of interpreting STM morphologies. The benefits of such tests are illustrated here by ab-initio energies of step- and kink-formation on Pb and Pt(111), which show that the STM-based values of the kink energies must be revised. On Pt(111), the computed kink energies for (100)- and (111)-microfacet steps are about 0.25 and 0.18 eV. These results imply a specific ratio of formation energies for the two step types, namely 1.14, in excellent agreement with experiment. If kink formation actually cost the same energy on the two step types, an inference drawn from scanning probe observations of step wandering (M. Giesen et al., Surf. Sci. 366, 229 (1996)), this ratio ought to be 1. In the case of Pb(111), though computed energies to form (100)- and (111)-microfacet steps agree with measurement, the ab-initio kink-formation energies for the two step types, 41 and 60 meV, are 40-50% below experimental values drawn from STM images (K. Arenhold et al., Surf. Sci. 424, 271 (1999)). The discrepancy results from interpreting the images with a step-stiffness vs. kink-energy relation appropriate to (100) but not (111) surfaces. Good agreement is found when proper account of the trigonal symmetry of Pb(111) is taken in reinterpreting the step-stiffness data.
Energy Technology Data Exchange (ETDEWEB)
Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H., E-mail: Ahmed.hussienabdelazim@hotmil.com
2016-04-01
A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for the design of a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10−2 to 1.0 × 10−5 M with a detection limit of 8.5 × 10−6 M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulations. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigating the degradation pathway of phenazopyridine with sufficient confirmation scans. • Computational studies have been applied to avoid time-consuming experimental trials. • The proposed sensor shows high selectivity, a reasonable detection limit, and fast response.
Directory of Open Access Journals (Sweden)
Grzegorz Zwierzchowski
2016-08-01
Full Text Available Purpose: A well-known defect of the TG-43-based algorithms used in brachytherapy is the lack of information about interaction cross-sections, which are determined not only by electron density but also by atomic number. The TG-186 recommendations, with the use of model-based dose calculation algorithms (MBDCAs), accurate tissue segmentation, and the structures' elemental composition, continue to pose difficulties in brachytherapy dosimetry. For the clinical use of new algorithms, it is necessary to introduce reliable and repeatable methods of treatment planning system (TPS) verification. The aim of this study is the verification of the calculation algorithm used in the TPS for shielded vaginal applicators, as well as the development of verification procedures for current and further use, based on the film dosimetry method. Material and methods: Calibration data were collected by separately irradiating 14 sheets of Gafchromic® EBT film with doses from 0.25 Gy to 8.0 Gy using an HDR 192Ir source. Standard vaginal cylinders of three diameters were used in a water phantom. Measurements were performed without any shields and with three shield combinations. Gamma analyses were performed using the VeriSoft® package. Results: The calibration curve was determined to be a third-degree polynomial. For all cylinder diameters, both unshielded and for all shield combinations, gamma analysis showed that over 90% of the analyzed points met the gamma criteria (3%, 3 mm). Conclusions: Gamma analysis showed good agreement between dose distributions calculated using the TPS and measured with Gafchromic films, demonstrating the viability of film dosimetry in brachytherapy.
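The gamma pass-rate test used above can be illustrated with a minimal one-dimensional sketch of the gamma index under global 3%/3 mm criteria. Clinical tools such as VeriSoft operate on measured 2-D dose planes, so this toy on synthetic profiles is an illustration of the concept, not their algorithm.

```python
import numpy as np

# Minimal 1-D gamma-index sketch (global 3% dose / 3 mm distance criteria).
# Synthetic dose profiles are used; this is not the VeriSoft algorithm.

def gamma_1d(ref_dose, eval_dose, positions, dose_crit=0.03, dist_crit=3.0):
    """Return the gamma value at each reference point."""
    d_max = ref_dose.max()  # global normalization
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        # Search all evaluated points; keep the minimum combined deviation.
        dist2 = ((positions - x_r) / dist_crit) ** 2
        dose2 = ((eval_dose - d_r) / (dose_crit * d_max)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.linspace(0.0, 50.0, 101)          # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)  # synthetic reference profile
ev = 1.01 * ref                          # "measured" profile, 1% high
g = gamma_1d(ref, ev, x)
print(f"pass rate (gamma <= 1): {100.0 * (g <= 1.0).mean():.1f}%")
```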
International Nuclear Information System (INIS)
Pelloni, S.; Grimm, P.; Mathews, D.; Paratte, J.M.
1989-06-01
In this report the capability of various code systems widely used at PSI (such as WIMS-D, BOXER, and the AARE modules TRAMIX and MICROX-2 in connection with the one-dimensional transport code ONEDANT) and JEF-1 based nuclear data libraries to compute LWR lattices is analysed by comparing results from thermal reactor benchmarks TRX and BAPL with experiment and with previously published values. It is shown that with the JEF-1 evaluation eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and that all methods give reasonable results for the measured reaction rate within or not too far from the experimental uncertainty. This is consistent with previous similar studies. (author) 7 tabs., 36 refs
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a `soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also preserve the current difference for each SU to achieve better real-time performance. Abundant simulation results validate that the proposed scheme outperforms existing ones under different attack patterns and different numbers of malicious SUs.
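The fusion step that such schemes build on is Dempster's rule of combination. A minimal sketch for two basic probability assignments (BPAs) over the frame {H0, H1} (channel idle / channel occupied) follows; the mass values are illustrative, not taken from the article.

```python
# Dempster's rule of combination for two BPAs over the frame {H0, H1}.
# Mass values below are illustrative placeholders.

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts mapping frozenset hypotheses to mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    # Normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

H0, H1 = frozenset({"H0"}), frozenset({"H1"})
THETA = H0 | H1  # total ignorance: mass on the whole frame
su1 = {H1: 0.6, H0: 0.1, THETA: 0.3}  # report of secondary user 1
su2 = {H1: 0.5, H0: 0.2, THETA: 0.3}  # report of secondary user 2
fused = dempster_combine(su1, su2)
print({"/".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```

After combination the evidence for H1 is reinforced (about 0.76) while the residual ignorance shrinks, which is the behavior the fusion step exploits.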
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
High-performance whole core Pin-by-Pin calculation based on EFEN-SP_3 method
International Nuclear Information System (INIS)
Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao
2014-01-01
High-performance PWR whole-core pin-by-pin calculation with the EFEN code, based on the EFEN-SP_3 method, can be achieved by employing MPI-based spatial parallelization. To take advantage of advanced computing and storage power, the entire spatial domain of the problem is appropriately decomposed into sub-domains, which are then assigned to parallel CPUs to balance the computing load and minimize communication cost. Meanwhile, a Red-Black Gauss-Seidel nodal sweeping scheme is employed to avoid deterioration of the within-group iteration due to spatial parallelization. Numerical results based on whole-core pin-by-pin problems designed according to commercial PWRs demonstrate the following conclusions: the EFEN code provides results with acceptable accuracy; the communication period impacts neither the accuracy nor the parallel efficiency; domain decomposition with a smaller surface-to-volume ratio leads to greater parallel efficiency; and a PWR whole-core pin-by-pin calculation with a spatial mesh of 289 × 289 × 218 and 4 energy groups can be completed in about 900 s using 125 CPUs, with a parallel efficiency maintained at about 90%. (authors)
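The red-black ordering mentioned above updates two independent half-grids in turn, which is what makes a Gauss-Seidel sweep parallelizable across spatial sub-domains. A toy sketch on a 2-D Laplace problem illustrates the idea; this is a generic five-point stencil, not the EFEN SP_3 nodal scheme itself.

```python
import numpy as np

# Red-black Gauss-Seidel sweeps for a 2-D Laplace problem with Dirichlet
# boundaries. Points of one "color" depend only on the other color, so each
# half-sweep is trivially parallel. Illustrative toy, not the EFEN scheme.

def red_black_sweep(u):
    """One red-black Gauss-Seidel sweep over the interior points."""
    for parity in (0, 1):  # red points first, then black points
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                if (i + j) % 2 == parity:
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                      + u[i, j - 1] + u[i, j + 1])
    return u

u = np.zeros((17, 17))
u[0, :] = 1.0  # one edge held at potential 1, the rest at 0
for _ in range(500):
    red_black_sweep(u)
# By symmetry the exact centre value for this problem is 0.25
print(f"centre value: {u[8, 8]:.4f}")
```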
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
Energy Technology Data Exchange (ETDEWEB)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library has been written to provide an end-user interface to the model.
Energy Technology Data Exchange (ETDEWEB)
Dai, J.C. [College of Mechanical and Electrical Engineering, Central South University, Changsha (China); School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Hu, Y.P.; Liu, D.S. [School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Long, X. [Hara XEMC Windpower Co., Ltd., Xiangtan (China)
2011-03-15
The aerodynamic loads for MW-scale horizontal-axis wind turbines are calculated and analyzed in the established coordinate systems used to describe the wind turbine. In this paper, the blade element momentum (BEM) theory is employed and some corrections, such as the Prandtl and Buhl models, are applied. Based on the B-L semi-empirical dynamic stall (DS) model, a new modified DS model for the NACA63-4xx airfoil is adopted. Then, by combining the modified BEM theory with the DS model, a calculation method for the aerodynamic loads of large-scale wind turbines is proposed, in which influence factors such as wind shear, the tower, and tower and blade vibration are considered. The research results show that the presented dynamic stall model is good enough for engineering purposes; the aerodynamic loads are influenced to different degrees by factors such as tower shadow, wind shear, dynamic stall, and tower and blade vibration; and a single blade endures periodically changing loads, while the variations of the rotor shaft power caused by the total aerodynamic torque in the edgewise direction are very small. The presented approach to aerodynamic load calculation and analysis is general, and helpful for thorough research on load reduction for large-scale wind turbines. (author)
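At the heart of BEM theory is a fixed-point iteration for the axial and tangential induction factors at each blade element. The sketch below uses idealized constant airfoil coefficients and omits the Prandtl tip-loss and Buhl corrections discussed above, so it is a bare-bones illustration of the iteration, not the paper's corrected model; all numerical inputs are assumptions.

```python
import math

# Classic BEM fixed-point iteration for one blade element. Constant Cl/Cd
# are assumed; real codes interpolate airfoil polars and apply the Prandtl
# and Buhl corrections. All parameter values here are illustrative.

def bem_element(tsr_local, solidity, cl=1.0, cd=0.01, tol=1e-8):
    """Return axial (a) and tangential (a') induction for one annulus."""
    a, ap = 0.0, 0.0
    for _ in range(500):
        # Inflow angle from the velocity triangle
        phi = math.atan2(1.0 - a, tsr_local * (1.0 + ap))
        cn = cl * math.cos(phi) + cd * math.sin(phi)  # normal coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)  # tangential coeff.
        s2 = math.sin(phi) ** 2
        a_new = 1.0 / (4.0 * s2 / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi)
                        / (solidity * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a, ap = a_new, ap_new
    return a, ap

a, ap = bem_element(tsr_local=5.0, solidity=0.02)
print(f"a = {a:.4f}, a' = {ap:.4f}")
```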
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1, and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. The advantage of the RBE over the current empirical estimator is that the RBE is unbiased and its variance is usually smaller. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small-molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
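The variance advantage claimed for the RBE comes from Rao-Blackwellization: replacing a sampled quantity with its analytically known conditional expectation preserves the mean while reducing the variance. The toy below demonstrates that principle on a simple Gaussian mixture; it is a generic illustration of the idea, not the GSLD free-energy estimator itself, and all distribution parameters are made up.

```python
import math
import random

# Rao-Blackwellization toy: estimate P(Z=1) in a two-component Gaussian
# mixture either by averaging the sampled indicator Z (naive) or by
# averaging E[Z | X] computed with Bayes' rule (Rao-Blackwellized).
# Generic demo of variance reduction, not the paper's RBE.

random.seed(0)
p = 0.3                  # mixture weight, i.e. P(Z = 1)
mu = {0: 0.0, 1: 1.0}    # component means, unit variances

def mean(v):
    return sum(v) / len(v)

def variance(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

naive, rb = [], []
for _ in range(20000):
    z = 1 if random.random() < p else 0
    x = random.gauss(mu[z], 1.0)
    naive.append(float(z))  # crude Monte Carlo sample of the indicator
    # Conditional expectation E[Z | X = x] via Bayes' rule
    l1 = p * math.exp(-0.5 * (x - mu[1]) ** 2)
    l0 = (1 - p) * math.exp(-0.5 * (x - mu[0]) ** 2)
    rb.append(l1 / (l0 + l1))

print(f"naive: mean {mean(naive):.3f}, per-sample var {variance(naive):.3f}")
print(f"RB:    mean {mean(rb):.3f}, per-sample var {variance(rb):.3f}")
```

Both estimators target the same mean (0.3), but the Rao-Blackwellized samples have markedly lower variance, exactly the property the RBE exploits.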
International Nuclear Information System (INIS)
Toshio, S.; Kazuo, A.
1983-01-01
A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: 1. Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. 2. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. 3. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
International Nuclear Information System (INIS)
Sanda, T.; Azekura, K.
1983-01-01
A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
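The bilinear (flux-adjoint) weighting highlighted in the abstract differs from plain flux weighting only in the weight applied during the group collapse. A minimal sketch with illustrative multigroup numbers (not reactor data) makes the distinction concrete.

```python
# One-group cross-section collapse: plain flux weighting versus bilinear
# (flux-adjoint) weighting, the choice described above. The three-group
# numbers are illustrative placeholders, not reactor data.

flux = [1.0, 3.0, 2.0]         # group fluxes
adjoint = [1.4, 1.1, 0.9]      # group adjoint fluxes
sigma = [0.010, 0.006, 0.004]  # group cross sections

def collapse_flux(sig, phi):
    """Flux-weighted one-group cross section."""
    return sum(s * f for s, f in zip(sig, phi)) / sum(phi)

def collapse_bilinear(sig, phi, adj):
    """Bilinear (flux-adjoint) weighted one-group cross section."""
    num = sum(s * f * a for s, f, a in zip(sig, phi, adj))
    den = sum(f * a for f, a in zip(phi, adj))
    return num / den

print(f"flux-weighted:     {collapse_flux(sigma, flux):.5f}")
print(f"bilinear-weighted: {collapse_bilinear(sigma, flux, adjoint):.5f}")
```

The two collapsed values differ whenever the adjoint spectrum is not flat, which is why bilinear weighting reduces the collapse error for reactivity-type quantities.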
Li, Haoyuan
2016-03-24
A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using the snapshots from the dynamic trajectory of the simulated molecular system. Kinetic Monte Carlo simulations are carried out to calculate the current characteristics. A widely used hole-transporting material, N,N′-diphenyl-N,N′-bis(1-naphthyl)-1,1′-biphenyl-4,4′-diamine (NPB) is studied as an application of this method, and the properties of its hole-only device are investigated. The calculated current densities and dependence on the applied voltage without an injection barrier are close to those obtained by the Mott-Gurney equation. The results with injection barriers are also in good agreement with experiment. This method can be used to aid the design of molecules and guide the optimization of devices. © 2016 Elsevier B.V. All rights reserved.
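The kinetic Monte Carlo step described above can be sketched for a single carrier hopping along a 1-D chain of sites. The field-biased rates below are a simple detailed-balance ansatz chosen for illustration; the paper instead derives hopping rates from non-adiabatic molecular dynamics.

```python
import math
import random

# Kinetic Monte Carlo sketch: one charge carrier hopping on a 1-D chain
# under a field bias. Rates are an illustrative detailed-balance ansatz,
# not the NAMD-derived rates of the paper.

random.seed(1)
KT = 0.025          # thermal energy at room temperature, eV
FIELD_BIAS = 0.05   # assumed energy drop per downhill hop, eV
NU0 = 1.0           # attempt frequency, arbitrary units

R_RIGHT = NU0                                   # downhill hop
R_LEFT = NU0 * math.exp(-FIELD_BIAS / KT)       # uphill hop, suppressed

def kmc_run(steps=20000):
    """Hop a single carrier; return net displacement and elapsed time."""
    site, t = 0, 0.0
    total = R_RIGHT + R_LEFT
    for _ in range(steps):
        # Select the event with probability proportional to its rate...
        site += 1 if random.random() < R_RIGHT / total else -1
        # ...then advance the clock by an exponential waiting time
        t += -math.log(random.random()) / total
    return site, t

site, t = kmc_run()
print(f"net displacement {site} sites in time {t:.1f} "
      f"(drift velocity {site / t:.3f} sites per unit time)")
```

Averaging many such trajectories at different bias values yields the current-voltage characteristics, which is the role the KMC stage plays in the paper's workflow.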
Huang, Shicheng; Tan, Likun; Hu, Nan; Grover, Hannah; Chu, Kevin; Chen, Zi
This research introduces a new numerical approach for calculating the post-buckling configuration of a thin rod embedded in elastic media. The theoretical basis is the governing ODEs describing the balance of forces and moments, length conservation, and the physics of bending and twisting by Landau and Lifshitz. The numerical methods applied in the calculation are the continuation method and Newton iteration in combination with a spectral method. To the authors' knowledge, this is the first attempt to directly apply the Landau-Lifshitz theory to the numerical study of rod buckling in an elastic medium. The method accounts for geometric nonlinearity and is thus capable of calculating large deformations. The stability of the method is another advantage, achieved by expressing the governing equations in first-order form. The wavelength, amplitude, and decay effect all agree with experiment without any further assumptions. The program can be applied to different situations with varying stiffness of the elastic media and rigidity of the rod.
Lin
2000-05-01
To reduce the calculation time for the summations over linearly independent and minimal conjugated circuits of benzenoid hydrocarbons (BHs), an approximate method is proposed that counts only the numbers of the first four classes of conjugated circuits R1, R2, R3, and R4. By representing BHs as custom-made "ring-block chains" and using database and visual-computing techniques, an application program has been realized that is much faster and more powerful than the old one based on an enumeration technique.
Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios
2015-12-01
We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
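The quantity such a program computes, the radial distribution function g(r) with periodic boundaries and minimum-image distances, can be sketched compactly. The example below uses random (ideal-gas) coordinates, for which g(r) should be close to 1 everywhere; it illustrates the standard normalization, not the DL_POLY-reading code of the paper.

```python
import numpy as np

# Radial distribution function for point particles in a periodic cubic box.
# Random coordinates are used (ideal gas), so g(r) ~ 1 at all r. Generic
# sketch of the standard algorithm, not the program described above.

rng = np.random.default_rng(42)
L, n, nbins, r_max = 10.0, 400, 40, 5.0   # box edge, particles, bins, cutoff
pos = rng.uniform(0.0, L, size=(n, 3))

hist = np.zeros(nbins)
edges = np.linspace(0.0, r_max, nbins + 1)
for i in range(n - 1):
    d = pos[i + 1:] - pos[i]
    d -= L * np.round(d / L)              # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    hist += np.histogram(r[r < r_max], bins=edges)[0]

# Normalize each shell by its ideal-gas expectation; the factor 2 accounts
# for each pair having been counted once.
rho = n / L**3
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
g = 2.0 * hist / (n * rho * shell_vol)
print(f"mean g(r) over bins beyond r = 0.6: {g[5:].mean():.2f}")
```

For molecular centres of mass, as in the paper, `pos` would hold mass-weighted averages of the atomic coordinates of each molecule instead of raw particle positions.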
International Nuclear Information System (INIS)
Bobkov, V.P.
2000-01-01
A method for calculating the critical heat flux in mixed rod assemblies (for example, RBMK) containing triangular, quadrangular, and peripheral macrocells is presented. The method is based on a generalization of experimental data in the form of tables for rod bundles. It is recommended both for the parameter regions covered by experimental data and for others where data are absent. The advantages of the table method are as follows: it is applicable within a wide range of parameters and provides a smooth description of the dependence of the critical heat flux on these parameters; it is characterized by clarity, high reliability, and accuracy; and it is easy to apply [ru]
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
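The error metrics compared in this study (arithmetic prediction error, mean absolute error, and the adjusted mean absolute error with the average prediction error removed) can be computed as follows. The refraction values below are made-up illustrative numbers, not the study's data.

```python
# Error metrics for IOL power prediction: arithmetic error, MAE, and
# adjusted MAE (average prediction error removed). Values are made up.

predicted = [-0.25, 0.00, 0.50, -0.75, 0.25]  # predicted refraction (D)
actual = [0.00, 0.50, 0.25, -0.50, 0.75]      # postoperative refraction (D)

errors = [a - p for p, a in zip(predicted, actual)]
mean_error = sum(errors) / len(errors)               # arithmetic error
mae = sum(abs(e) for e in errors) / len(errors)      # mean absolute error
# Adjusted MAE: remove the systematic offset before taking absolute values
adjusted_mae = sum(abs(e - mean_error) for e in errors) / len(errors)

print(f"mean error {mean_error:+.2f} D, MAE {mae:.2f} D, "
      f"adjusted MAE {adjusted_mae:.2f} D")
```

The adjustment matters when comparing formulas: a formula with a large but constant offset can be re-zeroed in practice, so the adjusted MAE isolates the scatter that cannot be calibrated away.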
Directory of Open Access Journals (Sweden)
V. I. Milykh
2016-12-01
Full Text Available Purpose. The work presents the principle of construction and implementation of an automated synthesis system for the turbo-generator (TG) electromagnetic system in the case of its modernization. This is done on the example of changing the number of stator core slots. Methodology. The basis of the synthesis is a TG basic construction. Its structure includes mathematical and physical-geometrical models, as well as a calculation model for the FEMM software environment, providing numerical calculations of the magnetic fields and electromagnetic parameters of the TG. The mathematical model links the changing and basic dimensions and parameters of the electromagnetic system, provided that the TG power parameters are ensured. The physical-geometrical model is a geometric mapping of the electromagnetic system with the specified physical properties of its elements. This model converts the TG electromagnetic system into a calculation model for the FEMM program. Results. Testing of the created synthesis system is carried out on the example of a 340 MW TG. The geometric, electromagnetic, and power parameters of its basic construction and of its new variants with different numbers of stator slots are compared. A harmonic analysis of the temporal function of the stator winding EMF is also made for the variants being compared. Originality. A mathematical model relating the new and base parameters of the TG when changing the number of stator slots has been created. A Lua script providing the numerical-field calculations of the TG electromagnetic parameters in the FEMM software environment has been worked out. Construction of the constructive and calculation models, the numerical-field calculations, and delivery of results are performed by the computer automatically, which ensures high efficiency of the TG design process. Practical value. The considered version of TG modernization is illustrated on the example of changing the number of stator slots.
Inoue, N.; Kitada, N.; Irikura, K.
2013-12-01
The probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, for a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a collected earthquake catalog, in which events are classified into two categories: with surface rupture or without surface rupture. Logistic regression is performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011), and Petersen et al. (2011) present logistic curves of the probability of surface rupture for normal, reverse, and strike-slip faults, respectively. Takao et al. (2013) shows the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range in comparison with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from a surface displacement calculation. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the area of the asperity and the amount of slip. Strike-slip and reverse faults were considered as source faults. We applied Wang et al. (2003) for the calculations. The surface displacements for the defined source faults were calculated by varying the depth of the fault. A threshold value of 5 cm of surface displacement was used to evaluate whether a surface rupture reaches the surface. We carried out the
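The logistic-analysis step described above can be sketched as follows. The classified catalog here is synthetic (magnitudes and 0/1 rupture flags invented for illustration), and a simple least-squares fit stands in for the maximum-likelihood logistic regression typically used in PFDHA studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic model for the probability of surface rupture as a function of
# magnitude M: P(M) = 1 / (1 + exp(-(a + b*M))).
def rupture_probability(m, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * m)))

# Hypothetical classified catalog: magnitudes and a 0/1 flag marking
# whether each event produced a surface rupture (illustrative only).
mags = np.array([5.8, 6.0, 6.2, 6.4, 6.5, 6.6, 6.8, 7.0, 7.2, 7.4])
ruptured = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

# Least-squares fit of the logistic curve to the binary outcomes.
(a, b), _ = curve_fit(rupture_probability, mags, ruptured, p0=(-40.0, 6.0))

# The fitted curve rises steeply over a narrow magnitude range.
print(rupture_probability(6.0, a, b), rupture_probability(7.2, a, b))
```

The fitted slope controls how sharply the probability rises, which is the feature in which the Japanese curve differs from the worldwide ones.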
International Nuclear Information System (INIS)
González-Lavado, Eloisa; Corchado, Jose C.; Espinosa-Garcia, Joaquin
2014-01-01
Based exclusively on high-level ab initio calculations, a new full-dimensional analytical potential energy surface (PES-2014) for the gas-phase reaction of hydrogen abstraction from methane by an oxygen atom is developed. The ab initio information employed in the fit includes properties (equilibrium geometries, relative energies, and vibrational frequencies) of the reactants, products, saddle point, points on the reaction path, and points on the reaction swath, taking special care with the location and characterization of the intermediate complexes in the entrance and exit channels. By comparing with the reference results, we show that the resulting PES-2014 reproduces reasonably well the whole set of ab initio data used in the fitting, obtained at the CCSD(T)=FULL/aug-cc-pVQZ//CCSD(T)=FC/cc-pVTZ single-point level, which represents a severe test of the new surface. As a first application, we perform on this analytical surface an extensive dynamics study using quasi-classical trajectory calculations, comparing the results with recent experimental and theoretical data. The excitation function increases with energy (concave-up), reproducing experimental and theoretical information, although our values are somewhat larger. The OH rovibrational distribution is cold, in agreement with experiment. Finally, our results reproduce the experimental backward scattering distribution, associated with a rebound mechanism. These results lend confidence to the accuracy of the new surface, which substantially improves the results obtained with our previous surface (PES-2000) for the same system.
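The elementary step underlying such quasi-classical trajectory calculations can be illustrated on a one-dimensional model: a classical particle integrated with velocity Verlet over an Eckart-type barrier either surmounts the barrier or is reflected, depending on its collision energy. The barrier height, width, mass, and units below are all illustrative and unrelated to the actual CH4 + O surface.

```python
import numpy as np

# Model barrier V(x) = v0 / cosh^2(x/a); the force is F = -dV/dx.
def eckart_force(x, v0=0.5, a=1.0):
    s = np.cosh(x / a)
    return 2.0 * v0 * np.tanh(x / a) / (a * s * s)

def run_trajectory(e_coll, mass=1.0, dt=0.01, steps=20000):
    """Velocity-Verlet integration of one trajectory launched at the barrier."""
    x = -10.0
    v = np.sqrt(2.0 * e_coll / mass)       # initial velocity toward the barrier
    for _ in range(steps):
        f = eckart_force(x)
        x += v * dt + 0.5 * (f / mass) * dt * dt
        f_new = eckart_force(x)
        v += 0.5 * (f + f_new) / mass * dt
    return x > 0.0                          # True if the barrier was crossed

# Classically, collision energies below the barrier height (0.5 here)
# are reflected; energies above it react.
print(run_trajectory(0.3), run_trajectory(0.8))
```

Real QCT studies run ensembles of such trajectories in full dimensionality with quantized initial vibrational and rotational sampling; the reactive fraction as a function of collision energy gives the excitation function discussed above.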
Directory of Open Access Journals (Sweden)
Alicia Gauffin
2018-05-01
Full Text Available On the basis of the Volume Correlation Model (VCM) as well as data on steel consumption and scrap collection per industry sector (construction, automotive, industrial goods, and consumer goods), it was possible to estimate service lifetimes of steel in the United States between 1900 and 2016. Input data on scrap collection per industry sector were based on a scrap survey conducted by the World Steel Association for a static year, 2014, in the United States. The lifetimes of steel calculated with the VCM method were within the range of previously reported measured lifetimes of products and applications for all industry sectors. Scrapped (and apparent) lifetimes of steel compared with measured lifetimes were calculated to be as follows: a scrapped lifetime of 29 years for the construction sector (apparent lifetime: 52 years) compared with 44 years measured in 2014; industrial goods: 16 (27) years compared with 19 years measured in 2010; consumer goods: 12 (14) years compared with 13 years measured in 2014; automotive sector: 14 (19) years compared with 17 years measured in 2011. Results show that the VCM can estimate reasonable values of scrap collection and availability per industry sector over time.
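The core idea behind such lifetime estimates can be sketched simply: steel scrapped in year t mirrors steel consumed roughly one service lifetime earlier, so a lifetime can be estimated as the lag that best aligns a consumption series with a scrap-collection series. The time series and the 15-year lifetime below are synthetic stand-ins, not the actual VCM formulation or US data.

```python
import numpy as np

# Synthetic consumption series (trend plus cycle) and a scrap series that
# echoes consumption with a 15-year delay (illustrative data only).
years = np.arange(1900, 2017)
consumption = 1.0 + 0.02 * (years - 1900) + 0.3 * np.sin(0.2 * (years - 1900))
true_lifetime = 15
scrap = np.roll(consumption, true_lifetime)
scrap[:true_lifetime] = consumption[0]     # pad the early years

def best_lag(cons, scr, max_lag=60):
    """Return the lag (years) maximizing correlation between lagged
    consumption and scrap collection."""
    corrs = []
    for lag in range(1, max_lag):
        corrs.append(np.corrcoef(cons[:-lag], scr[lag:])[0, 1])
    return int(np.argmax(corrs)) + 1

print(best_lag(consumption, scrap))
```

The recovered lag plays the role of the scrapped lifetime; the published VCM additionally disaggregates by industry sector and accounts for collection rates.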
International Nuclear Information System (INIS)
Minamoto, Satoshi; Kato, Masato; Konashi, Kenji
2011-01-01
The combination of an oxygen vacancy formation energy calculated using a first-principles approach and the configurational entropy change treated within the framework of statistical mechanics gives an expression for the Gibbs free energy at large deviations from stoichiometry of plutonium oxide PuO₂. An oxygen vacancy formation energy of 4.20 eV derived from our previous first-principles calculation was used to evaluate the Gibbs free energy change due to oxygen vacancies in the crystal. The oxygen partial pressures can then be evaluated from the change of the free energy with two fitting parameters (a vacancy-vacancy interaction energy and the vibrational entropy change due to induced vacancies). The derived thermodynamic expression for the free energy, based on the SGTE thermodynamic data for the stoichiometric PuO₂ and Pu₂O₃ compounds, was further incorporated into CALPHAD modeling, and the phase equilibrium between stoichiometric Pu₂O₃ and non-stoichiometric PuO₂₋ₓ was reproduced.
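A minimal sketch of this free-energy construction is shown below: the Gibbs free-energy change of PuO2-x is built from the 4.20 eV vacancy formation energy quoted above plus an ideal configurational entropy term on the oxygen sublattice. The vacancy-vacancy interaction and vibrational-entropy fitting terms of the paper are omitted, so the numbers are illustrative only.

```python
import numpy as np

KB = 8.617e-5            # Boltzmann constant, eV/K
E_VAC = 4.20             # oxygen vacancy formation energy, eV (from the text)

def gibbs_change(x, T):
    """dG (eV per formula unit) for PuO2-x: vacancy formation cost minus
    T times the ideal configurational entropy of vacancies distributed
    over the two oxygen sites per formula unit."""
    f = x / 2.0          # fraction of vacant sites on the oxygen sublattice
    s_conf = -2.0 * KB * (f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    return x * E_VAC - T * s_conf

def mu_o(x, T, h=1e-6):
    """Oxygen chemical potential from the slope dG/dx (central difference);
    this slope sets the equilibrium oxygen partial pressure."""
    return (gibbs_change(x + h, T) - gibbs_change(x - h, T)) / (2.0 * h)

for x in (0.001, 0.01, 0.05):
    print(x, gibbs_change(x, 1500.0), mu_o(x, 1500.0))
```

The oxygen potential rises with the deviation x, which is the qualitative behavior the CALPHAD model fits quantitatively with the two additional parameters.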
Fischer, Michael; Bell, Robert G
2014-10-21
The influence of the nature of the cation on the interaction of the silicoaluminophosphate SAPO-34 with small hydrocarbons (ethane, ethylene, acetylene, propane, propylene) is investigated using periodic density-functional theory calculations including a semi-empirical dispersion correction (DFT-D). Initial calculations are used to evaluate which of the guest-accessible cation sites in the chabazite-type structure is energetically preferred for a set of ten cations, which comprises four alkali metals (Li⁺, Na⁺, K⁺, Rb⁺), three alkaline earth metals (Mg²⁺, Ca²⁺, Sr²⁺), and three transition metals (Cu⁺, Ag⁺, Fe²⁺). All eight cations that are likely to be found at the SII site (centre of a six-ring) are then included in the following investigation, which studies the interaction with the hydrocarbon guest molecules. In addition to the interaction energies, some trends and peculiarities regarding the adsorption geometries are analysed, and electron density difference plots obtained from the calculations are used to gain insights into the dominant interaction types. In addition to dispersion interactions, electrostatic and polarisation effects dominate for the main group cations, whereas significant orbital interactions are observed for unsaturated hydrocarbons interacting with transition metal (TM) cations. The differences between the interaction energies obtained for pairs of hydrocarbons of interest (such as ethylene-ethane and propylene-propane) deliver some qualitative insights: if this energy difference is large, it can be expected that the material will exhibit a high selectivity in the adsorption-based separation of alkene-alkane mixtures, which constitutes a problem of considerable industrial relevance. While the calculations show that TM-exchanged SAPO-34 materials are likely to exhibit a very high preference for alkenes over alkanes, the strong interaction may render an application in industrial processes impractical due to the large amount
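The link between an interaction-energy difference and separation selectivity can be made semi-quantitative with a back-of-the-envelope estimate: at low coverage, the ratio of Henry constants for two adsorbates scales roughly as exp(dE / kT). The binding energies below are placeholders, not the DFT-D values from the study.

```python
import numpy as np

KB = 8.617e-5                     # Boltzmann constant, eV/K

def selectivity(e_alkene, e_alkane, T=300.0):
    """Ideal low-coverage alkene/alkane selectivity estimated from the
    difference in binding energies (eV, more positive = more strongly bound)."""
    return np.exp((e_alkene - e_alkane) / (KB * T))

# Hypothetical binding strengths for a strongly discriminating cation
# (large gap) and a weakly discriminating one (small gap).
print(selectivity(0.60, 0.35))    # large gap -> high alkene selectivity
print(selectivity(0.40, 0.35))    # small gap -> modest selectivity
```

An exponential dependence on the energy gap is also why a very strong alkene binding, as found for the TM-exchanged materials, can be a drawback: desorption in the regeneration step becomes correspondingly costly.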
International Nuclear Information System (INIS)
Fu, Q.; Thorsen, T.J.; Su, J.; Ge, J.M.; Huang, J.P.
2009-01-01
We simulate the single-scattering properties (SSPs) of dust aerosols with both spheroidal and spherical shapes at a wavelength of 0.55 μm for two refractive indices and four effective radii. Herein spheres are defined by preserving both the projected area and the volume of a non-spherical particle. It is shown that the relative errors of using the spheres to approximate the spheroids are less than 1% in the extinction efficiency and single-scattering albedo, and less than 2% in the asymmetry factor. It is found that the scattering phase function of spheres agrees with that of spheroids better than the Henyey-Greenstein (HG) function over the scattering angle range of 0°-90°. In the range of ∼90°-180°, the HG function is systematically smaller than the spheroidal scattering phase function, while the spherical scattering phase function is smaller from ∼90° to 145° but larger from ∼145° to 180°. We examine the errors in reflectivity and absorptivity due to the use of the SSPs of equivalent spheres and of HG functions for dust aerosols. The reference calculation is based on the delta-DISORT-256-stream scheme using the SSPs of the spheroids. It is found that the errors are mainly caused by the use of the HG function instead of the SSPs for spheres. By examining the errors associated with the delta-four- and delta-two-stream schemes using various approximate SSPs of dust aerosols, we find that the errors related to the HG function dominate in the delta-four-stream results, while the errors related to the radiative transfer scheme dominate in the delta-two-stream calculations. We show that the relative errors in the global reflectivity due to the use of sphere SSPs are always less than 5%. We conclude that Mie-based SSPs of non-spherical dust aerosols are well suited for radiative flux calculations.
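The Henyey-Greenstein function used as a comparison above has the closed form P_HG(mu) = (1 - g^2) / (1 + g^2 - 2 g mu)^(3/2), where mu = cos(theta) and g is the asymmetry factor; it is normalized so that (1/2) * integral of P over mu equals 1, and its mean cosine recovers g. A quick numerical check (the value g = 0.7 is illustrative, not taken from the study):

```python
import numpy as np

def hg_phase(mu, g):
    """Henyey-Greenstein phase function as a function of mu = cos(theta)."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

g = 0.7                                   # dust-like asymmetry factor (assumed)
mu = np.linspace(-1.0, 1.0, 200001)       # fine grid in cos(theta)
p = hg_phase(mu, g)

norm = 0.5 * trapz(p, mu)                 # should be ~1 (normalization)
g_back = 0.5 * trapz(mu * p, mu)          # mean cosine should recover g
print(norm, g_back)
```

Because the HG function is fully determined by the single parameter g, it cannot reproduce the detailed side- and back-scattering structure of spheroids, which is the source of the errors discussed above.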
International Nuclear Information System (INIS)
Petersen, K.E.
1986-03-01
Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools that are flexible and efficient are needed. Such tools have been developed over the last 20 years, and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed, based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used e
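The two computational approaches mentioned above can both be illustrated on the simplest structural limit state g = R - S (resistance minus load): the failure probability P(R < S) can be evaluated semi-analytically (the role played by numerical integration in the program described) or estimated by Monte Carlo simulation. The normal distributions and their parameters below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

MU_R, SIG_R = 10.0, 1.0      # resistance: mean and standard deviation (assumed)
MU_S, SIG_S = 7.0, 1.5       # load: mean and standard deviation (assumed)

# (a) For independent normal R and S, g = R - S is normal, so the failure
# probability follows from the reliability index beta in closed form.
beta = (MU_R - MU_S) / sqrt(SIG_R**2 + SIG_S**2)
pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))   # standard normal CDF at -beta

# (b) Crude Monte Carlo estimate of the same probability.
n = 1_000_000
r = rng.normal(MU_R, SIG_R, n)
s = rng.normal(MU_S, SIG_S, n)
pf_mc = float(np.mean(r < s))

print(pf_exact, pf_mc)
```

For realistic plants the limit state is rarely normal or low-dimensional, which is why both dedicated numerical-integration programs and Monte Carlo simulation tools are developed for these analyses.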