WorldWideScience

Sample records for previous calculations based

  1. Laboratory Grouping Based on Previous Courses.

    Science.gov (United States)

    Doemling, Donald B.; Bowman, Douglas C.

    1981-01-01

    In a five-year study, second-year human physiology students were grouped for laboratory according to previous physiology and laboratory experience. No significant differences in course or board examination performance were found, though correlations were found between predental grade-point averages and grouping. (MSE)

  2. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, the proposed algorithm is compared with state-of-the-art algorithms on 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
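
    As a rough illustration of the dynamic multispecies idea (not the authors' MCPSO-PSH implementation), the sketch below re-clusters a swarm with K-means each iteration and lets each subspecies follow its own best particle; the BSP-tree search-history memory is approximated here by a coarse set of visited grid cells.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def sphere(x):                      # toy benchmark function
          return np.sum(x**2, axis=-1)

      rng = np.random.default_rng(0)
      n, dim, k, iters = 40, 10, 4, 200
      lo, hi = -5.0, 5.0
      pos = rng.uniform(lo, hi, (n, dim))
      vel = np.zeros((n, dim))
      pbest = pos.copy()
      pbest_f = sphere(pbest)
      visited = set()                     # coarse stand-in for the BSP search history

      for t in range(iters):
          # Re-partition the population into subspecies each iteration.
          _, labels = kmeans2(pos, k, minit='points')
          for s in range(k):
              idx = np.where(labels == s)[0]
              if idx.size == 0:
                  continue
              sbest = pbest[idx[np.argmin(pbest_f[idx])]]   # species attractor
              r1, r2 = rng.random((2, idx.size, dim))
              vel[idx] = (0.72 * vel[idx]
                          + 1.49 * r1 * (pbest[idx] - pos[idx])
                          + 1.49 * r2 * (sbest - pos[idx]))
              pos[idx] = np.clip(pos[idx] + vel[idx], lo, hi)
          # Nonrevisit: penalize particles landing in an already-visited cell.
          cells = [tuple(np.floor((p - lo) / 0.5).astype(int)) for p in pos]
          f = sphere(pos) + np.array([0.1 if c in visited else 0.0 for c in cells])
          visited.update(cells)
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]

      print("best value:", pbest_f.min())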

  3. Motivational activities based on previous knowledge of students

    Science.gov (United States)

    García, J. A.; Gómez-Robledo, L.; Huertas, R.; Perales, F. J.

    2014-07-01

    Academic results depend strongly on the individual circumstances of students: background, motivation and aptitude. We think that academic activities conducted to increase motivation must be tuned to the special situation of the students. The main goal of this work is to analyze the students in the first year of the Degree in Optics and Optometry at the University of Granada and the suitability of an activity designed for those students. Initial data were obtained from a survey inquiring about the reasons for choosing this degree, their knowledge of it, and previous academic backgrounds. Results show that: 1) the group is quite heterogeneous, since students have very different backgrounds; 2) reasons for choosing the Degree in Optics and Optometry are also very different, and in many cases it was selected as a second option; 3) knowledge of and motivation about the Degree are in general quite low. To increase the motivation of the students, we designed an academic activity in which we show different topics studied in the Degree. Results show that students who have been involved in this activity are the most motivated and most satisfied with their choice of the degree.

  4. Reference values for spirometry, including vital capacity, in Japanese adults calculated with the LMS method and compared with previous values.

    Science.gov (United States)

    Kubota, Masaru; Kobayashi, Hirosuke; Quanjer, Philip H; Omori, Hisamitsu; Tatsumi, Koichiro; Kanazawa, Minoru

    2014-07-01

    Reference values for lung function tests should be periodically updated because of birth cohort effects and improved technology. This study updates the spirometric reference values, including vital capacity (VC), for Japanese adults and compares the new reference values with previous Japanese reference values. Spirometric data from healthy non-smokers (20,341 individuals aged 17-95 years, 67% females) were collected from 12 centers across Japan, and reference equations were derived using the LMS method. This method incorporates modeling of skewness (lambda: L), mean (mu: M), and coefficient of variation (sigma: S), which are functions of sex, age, and height. In addition, the age-specific lower limits of normal (LLN) were calculated. Spirometric reference values for the 17-95-year age range and the age-dependent LLN for Japanese adults were derived. The new reference values for FEV1 in males are smaller, while those for VC and FVC in middle-aged and elderly males and those for FEV1, VC, and FVC in females are larger than the previous values. The LLN of the FEV1/FVC for females is larger than previous values. The FVC is significantly smaller than the VC in the elderly. The new reference values faithfully reflect spirometric indices and provide an age-specific LLN for the 17-95-year age range, enabling improved diagnostic accuracy. Compared with previous prediction equations, they more accurately reflect the transition in pulmonary function during young adulthood. In elderly subjects, the FVC reference values are not interchangeable with the VC values. Copyright © 2014 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
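
    For reference, the percentile machinery behind the LMS method can be stated compactly; the relations below are the standard Cole-method formulas (z-score and lower limit of normal at the 5th percentile), not equations quoted from the paper.

      % LMS percentile relations (Cole's method) as used for growth/spirometry references.
      % y = measured index; L, M, S are smooth functions of sex, age, and height.
      \[
        z = \frac{(y/M)^{L} - 1}{L\,S} \qquad (L \neq 0),
      \]
      \[
        \mathrm{LLN} = M\,\bigl(1 + L\,S\,z_{0.05}\bigr)^{1/L},
        \qquad z_{0.05} = -1.645 .
      \]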

  5. Data base to compare calculations and observations

    Energy Technology Data Exchange (ETDEWEB)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)

  6. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.

  7. Single-case effect size calculation: comparing regression and non-parametric approaches across previously published reading intervention data sets.

    Science.gov (United States)

    Ross, Sarah G; Begeny, John C

    2014-08-01

    Growing from demands for accountability and research-based practice in the field of education, there is recent focus on developing standards for the implementation and analysis of single-case designs. Effect size methods for single-case designs provide a useful way to discuss treatment magnitude in the context of individual intervention. Although a standard effect size methodology does not yet exist within single-case research, panel experts recently recommended pairing regression and non-parametric approaches when analyzing effect size data. This study compared two single-case effect size methods: the regression-based, Allison-MT method and the newer, non-parametric, Tau-U method. Using previously published research that measured the Words read Correct per Minute (WCPM) variable, these two methods were examined by comparing differences in overall effect size scores and rankings of intervention effect. Results indicated that the regression method produced significantly larger effect sizes than the non-parametric method, but the rankings of the effect size scores had a strong, positive relation. Implications of these findings for research and practice are discussed. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
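
    The non-parametric side of this comparison is easy to sketch: the pairwise Tau underlying Tau-U counts how often treatment-phase points exceed baseline-phase points (the full Tau-U additionally corrects for baseline trend, which this sketch omits). The WCPM values below are invented.

      import numpy as np

      def tau_ab(baseline, treatment):
          """Pairwise Tau: compare every baseline point with every treatment point.
          Tau = (#improving pairs - #deteriorating pairs) / (nA * nB).
          This is the A-vs-B core of Tau-U, without baseline-trend correction."""
          a = np.asarray(baseline, float)
          b = np.asarray(treatment, float)
          diffs = b[None, :] - a[:, None]          # all nA*nB pairwise differences
          pos = np.sum(diffs > 0)                  # treatment point exceeds baseline
          neg = np.sum(diffs < 0)
          return (pos - neg) / (a.size * b.size)

      # Example with hypothetical WCPM data (words read correct per minute):
      baseline  = [20, 22, 21, 23]
      treatment = [25, 28, 27, 30, 32]
      print(f"Tau = {tau_ab(baseline, treatment):.2f}")   # 1.00 = complete nonoverlap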

  8. Classifying Cervical Spondylosis Based on Fuzzy Calculation

    Directory of Open Access Journals (Sweden)

    Xinghu Yu

    2014-01-01

    Full Text Available Conventional evaluation of X-ray radiographs for diagnosing cervical spondylosis (CS) often depends on the clinician's experience, visual reading of the radiographs, and analysis of certain regions of interest (ROIs). These steps are not only time consuming and subjective, but also prone to error for inexperienced clinicians due to the low resolution of X-rays. This paper proposes an approach based on fuzzy calculation to classify CS. From the X-ray manifestations of CS, we extracted 10 effective ROIs to establish an X-ray symptom-disease table of CS. A fuzzy calculation model based on this table can then be used to classify CS and improve diagnostic accuracy. The proposed model yields approximately 80.33% accuracy in classifying CS.

  9. Attribute and topology based change detection in a constellation of previously detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
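
    A toy version of the comparison step might look like the following, assuming objects are matched to the constellation database by nearest stored location; the attribute names and thresholds are hypothetical, not taken from the patent.

      import numpy as np

      # Toy sketch of attribute-based change detection against a "constellation"
      # of previously detected objects (attribute names here are hypothetical).
      previous = [  # constellation database: location + attributes of past detections
          {"xy": np.array([10.0, 4.0]), "size": 2.1, "orientation": 30.0},
          {"xy": np.array([55.0, 9.5]), "size": 1.4, "orientation": 85.0},
      ]

      def detect_changes(new_objects, db, match_radius=1.0, size_tol=0.3):
          changes = []
          for obj in new_objects:
              d = [np.linalg.norm(obj["xy"] - rec["xy"]) for rec in db]
              i = int(np.argmin(d))
              if d[i] > match_radius:                        # no counterpart: new object
                  changes.append(("appeared", obj))
              elif abs(obj["size"] - db[i]["size"]) > size_tol:
                  changes.append(("attribute_change", obj))  # same place, changed size
          return changes

      new_scan = [{"xy": np.array([10.1, 4.0]), "size": 3.0, "orientation": 31.0},
                  {"xy": np.array([80.0, 2.0]), "size": 1.0, "orientation": 10.0}]
      for kind, obj in detect_changes(new_scan, previous):
          print(kind, obj["xy"])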

  10. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU) accelerated simulations of first principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated about 10 times. As a result, the total computation time of a first principles calculation is reduced to 15 percent of that of the CPU.

  11. Activity-Based Introductory Physics Using Graphing Calculators and Calculator-Based Laboratories (CBLs)*

    Science.gov (United States)

    Trecia Markes, C.

    1998-04-01

    This paper will report on the development of an activity-based approach to teaching introductory physics that makes extensive use of the Calculator-Based Laboratory (CBL) system from Texas Instruments to collect and analyze data. Studies have suggested that an activity-based instruction format and computer-implemented data acquisition and analysis are effective in enhancing teaching effectiveness. However, implementation of this kind of class is accompanied by significant financial, spatial, and portability constraints. The TI-85/92 graphing calculators provide sufficient computational capacity and speed to handle most of the data acquisition and analysis requirements found in activity-based introductory physics instruction. This paper will show how some physical laws can be derived from data obtained using the calculator/CBL system. Other data that can be easily obtained will also be shown and discussed. Results of pre-tests and post-tests given to both activity-based sections and lecture/laboratory sections of algebra-level introductory physics classes will also be discussed.

  12. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  13. Calculating Traffic based on Road Sensor Data

    NARCIS (Netherlands)

    Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Rata, Debanshu; Sikora, Monika

    2014-01-01

    Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for

  14. Cultivation-based multiplex phenotyping of human gut microbiota allows targeted recovery of previously uncultured bacteria

    DEFF Research Database (Denmark)

    Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten

    2014-01-01

    We show that carefully designed conditions enable cultivation of a representative proportion of human gut bacteria, enabling rapid multiplex phenotypic profiling. We use this approach to determine the phylogenetic distribution of antibiotic tolerance phenotypes for 16 antibiotics in the human gut microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter...

  15. In vivo dentate nucleus MRI relaxometry correlates with previous administration of Gadolinium-based contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Tedeschi, Enrico; Canna, Antonietta; Cocozza, Sirio; Russo, Carmela; Angelini, Valentina; Brunetti, Arturo [University "Federico II", Neuroradiology, Department of Advanced Biomedical Sciences, Naples (Italy); Palma, Giuseppe; Quarantelli, Mario [National Research Council, Institute of Biostructure and Bioimaging, Naples (Italy); Borrelli, Pasquale; Salvatore, Marco [IRCCS SDN, Naples (Italy); Lanzillo, Roberta; Postiglione, Emanuela; Morra, Vincenzo Brescia [University "Federico II", Department of Neurosciences, Reproductive and Odontostomatological Sciences, Naples (Italy)]

    2016-12-15

    To evaluate changes in T1 and T2* relaxometry of dentate nuclei (DN) with respect to the number of previous administrations of Gadolinium-based contrast agents (GBCA). In 74 relapsing-remitting multiple sclerosis (RR-MS) patients with variable disease duration (9.8±6.8 years) and severity (Expanded Disability Status Scale scores: 3.1±0.9), the DN R1 (1/T1) and R2* (1/T2*) relaxation rates were measured using two unenhanced 3D Dual-Echo spoiled Gradient-Echo sequences with different flip angles. Correlations of the number of previous GBCA administrations with DN R1 and R2* relaxation rates were tested, including gender and age effect, in a multivariate regression analysis. The DN R1 (normalized by brainstem) significantly correlated with the number of GBCA administrations (p<0.001), maintaining the same significance even when including MS-related factors. Instead, the DN R2* values correlated only with age (p=0.003), and not with GBCA administrations (p=0.67). In a subgroup of 35 patients for whom the administered GBCA subtype was known, the effect of GBCA on DN R1 appeared mainly related to linear GBCA. In RR-MS patients, the number of previous GBCA administrations correlates with R1 relaxation rates of DN, while R2* values remain unaffected, suggesting that T1-shortening in these patients is related to the amount of Gadolinium given. (orig.)

  16. Significantly Increased Odds of Reporting Previous Shoulder Injuries in Female Marines Based on Larger Magnitude Shoulder Rotator Bilateral Strength Differences.

    Science.gov (United States)

    Eagle, Shawn R; Connaboy, Chris; Nindl, Bradley C; Allison, Katelyn F

    2018-02-01

    Musculoskeletal injuries to the extremities are a primary concern for the United States (US) military. One possible injury risk factor in this population is side-to-side strength imbalance. To examine the odds of reporting a previous shoulder injury in US Marine Corps Ground Combat Element Integrated Task Force volunteers based on side-to-side strength differences in isokinetic shoulder strength. Cohort study; Level of evidence, 3. Male (n = 219) and female (n = 91) Marines were included in this analysis. Peak torque values from 5 shoulder internal/external rotation repetitions were averaged and normalized to body weight. The difference in side-to-side strength measurements was calculated as the absolute value of the limb difference divided by the mean peak torque of the dominant limb. Participants were placed into groups based on the magnitude of these differences: <10%, 10%-20%, or >20%. Odds ratios (ORs) and 95% CIs were calculated. When separated by sex, 13.2% of men reported an injury, while 5.5% of women reported an injury. Female Marines with >20% internal rotation side-to-side strength differences demonstrated increased odds of reporting a previous shoulder injury compared with those with lesser magnitude differences. Additionally, female sex appears to drastically affect the increased odds of reporting shoulder injuries (OR, 13.9-15.4) with larger magnitude differences (ie, >20%) compared with those with lesser magnitude differences (ie, <10% and 10%-20%). The retrospective cohort design of this study cannot delineate cause and effect, but it establishes a relationship, in female Marines, between returning from an injury and greater odds of larger magnitude strength differences.
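
    The odds-ratio arithmetic used in this kind of analysis is standard; the sketch below computes an OR and its 95% CI from a 2x2 table with the usual log-OR standard error. The counts are hypothetical, not the study's data.

      import math

      def odds_ratio(a, b, c, d):
          """OR and 95% CI from a 2x2 table (standard log-OR method):
                     injured  uninjured
          >20% diff     a         b
          <=20% diff    c         d
          """
          or_ = (a * d) / (b * c)
          se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of ln(OR)
          lo = math.exp(math.log(or_) - 1.96 * se)
          hi = math.exp(math.log(or_) + 1.96 * se)
          return or_, (lo, hi)

      print(odds_ratio(4, 6, 1, 80))   # hypothetical counts: OR ≈ 53.3, CI ≈ (5.1, 556)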

  17. Comparison between Conventional Blind Embryo Transfer and Embryo Transfer Based on Previously Measured Uterine Length

    Directory of Open Access Journals (Sweden)

    Nasrin Saharkhiz

    2014-11-01

    Full Text Available Background: Embryo transfer (ET) is one of the most important steps in assisted reproductive technology (ART) cycles and is affected by many factors, namely the depth of embryo deposition in the uterus. In this study, the outcomes of intracytoplasmic sperm injection (ICSI) cycles after blind embryo transfer and embryo transfer based on previously measured uterine length using vaginal ultrasound were compared. Materials and Methods: This prospective randomised clinical trial included one hundred and forty non-donor fresh embryo transfers during January 2010 to June 2011. In group I, ET was performed using the conventional (blind) method at 5-6 cm from the external os, and in group II, ET was done at a depth of 1-1.5 cm from the uterine fundus based on previously measured uterine length using vaginal sonography. Appropriate statistical analysis was performed using Student's t test and Chi-square or Fisher's exact test. The software that we used was PASW statistics version 18. A p value <0.05 was considered statistically significant. Results: The chemical pregnancy rate was 28.7% in group I and 42.1% in group II, while the difference was not statistically significant (p=0.105). Clinical pregnancy, ongoing pregnancy and implantation rates for group I were 21.2%, 17.7%, and 12.8%, while for group II they were 33.9%, 33.9%, and 22.1%, respectively. In group I and group II, abortion rates were 34.7% and 0%, respectively, indicating a statistically significant difference (p<0.005). No ectopic pregnancy occurred in the two groups. Conclusion: The use of uterine length measurement during the treatment cycle in order to place embryos at a depth of 1-1.5 cm from the fundus significantly increases clinical and ongoing pregnancy and implantation rates, while leading to a decrease in the abortion rate (Registration Number: IRCT2014032512494N1).

  18. The impact of previous knee injury on force plate and field-based measures of balance.

    Science.gov (United States)

    Baltich, Jennifer; Whittaker, Jackie; Von Tscharner, Vinzenz; Nettel-Aguirre, Alberto; Nigg, Benno M; Emery, Carolyn

    2015-10-01

    Individuals with post-traumatic osteoarthritis demonstrate increased sway during quiet stance. The prospective association between balance and disease onset is unknown. Improved understanding of balance in the period between joint injury and disease onset could inform secondary prevention strategies to prevent or delay the disease. This study examines the association between youth sport-related knee injury and balance, 3-10 years post-injury. Participants included 50 individuals (ages 15-26 years) with a sport-related intra-articular knee injury sustained 3-10 years previously and 50 uninjured age-, sex- and sport-matched controls. Force-plate measures during single-limb stance (center-of-pressure 95% ellipse-area, path length, excursion, entropic half-life) and field-based balance scores (triple single-leg hop, star-excursion, unipedal dynamic balance) were collected. Descriptive statistics (mean within-pair difference; 95% confidence intervals) were used to compare groups. Linear regression (adjusted for injury history) was used to assess the relationship between ellipse-area and field-based scores. Injured participants on average demonstrated greater medio-lateral excursion [mean within-pair difference (95% confidence interval); 2.8 mm (1.0, 4.5)], more regular medio-lateral position [10 ms (2, 18)], and shorter triple single-leg hop distances [-30.9% (-8.1, -53.7)] than controls, while no between-group differences existed for the remaining outcomes. After taking injury history into consideration, triple single-leg hop scores demonstrated a linear association with ellipse area (β=0.52, 95% confidence interval 0.01, 1.01). On average the injured participants adjusted their position less frequently and demonstrated a larger magnitude of movement during single-limb stance compared to controls. These findings support the evaluation of balance outcomes in the period between knee injury and post-traumatic osteoarthritis onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Evaluation of questionnaire-based information on previous physical work loads. Stockholm MUSIC 1 Study Group. Musculoskeletal Intervention Center.

    Science.gov (United States)

    Torgén, M; Winkel, J; Alfredsson, L; Kilbom, A

    1999-06-01

    The principal aim of the present study was to evaluate questionnaire-based information on past physical work loads (6-year recall). Effects of memory difficulties on reproducibility were evaluated for 82 subjects by comparing previously reported results on current work loads (test-retest procedure) with the same items recalled 6 years later. Validity was assessed by comparing self-reports in 1995, regarding work loads in 1989, with worksite measurements performed in 1989. Six-year reproducibility, calculated as weighted kappa coefficients (κw), varied between 0.36 and 0.86, with the highest values for the proportion of the workday spent sitting and for perceived general exertion, and the lowest values for trunk and neck flexion. The six-year reproducibility results were similar to previously reported test-retest results for these items; this finding indicates that memory difficulties were a minor problem. The validity of the questionnaire responses, expressed as rank correlations (rs) between the questionnaire responses and workplace measurements, varied between -0.16 and 0.78. The highest values were obtained for the items sitting and repetitive work, and the lowest and "unacceptable" values were for head rotation and neck flexion. Misclassification of exposure did not appear to be differential with regard to musculoskeletal symptom status, as judged by the calculated risk estimates. The validity of some of these self-administered questionnaire items appears sufficient for a crude assessment of past physical work loads in epidemiologic studies of the general population with predominantly low levels of exposure.

  20. Late preterm birth and previous cesarean section: a population-based cohort study.

    Science.gov (United States)

    Yasseen III, Abdool S; Bassil, Kate; Sprague, Ann; Urquia, Marcelo; Maguire, Jonathon L

    2018-02-21

    Late preterm birth (LPB) is increasingly common and associated with higher morbidity and mortality than term birth. Yet, little is known about the influence of previous cesarean section (PCS) on the occurrence of LPB in subsequent pregnancies. We aim to evaluate this association along with the potential mediation by cesarean sections in the current pregnancy. We use population-based birth registry data (2005-2012) to establish a cohort of live born singleton infants born between 34 and 41 gestational weeks to multiparous mothers. PCS was the primary exposure, LPB (34-36 weeks) was the primary outcome, and an unplanned or emergency cesarean section in the current pregnancy was the potential mediator. Associations were quantified using propensity weighted multivariable Poisson regression, and mediating associations were explored using the Baron-Kenny approach. The cohort included 481,531 births, 21,893 (4.5%) were LPB, and 119,983 (24.9%) were predated by at least one PCS. Among mothers with at least one PCS, 6307 (5.26%) were LPB. There was an increased risk of LPB among women with at least one PCS (adjusted Relative Risk (aRR): 1.20; 95%CI [1.16, 1.23]). Unplanned or emergency cesarean section in the current pregnancy was identified as a strong mediator of this relationship (mediation ratio = 97%). PCS was associated with higher risk of LPB in subsequent pregnancies. This may be due to an increased risk of subsequent unplanned or emergency preterm cesarean sections. Efforts to minimize index cesarean sections may reduce the risk of LPB in subsequent pregnancies.

  1. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of mixtures of spherical and flaky carbonyl iron in a paraffin base, this paper studied two different interpolation methods for electromagnetic parameters: Lagrange interpolation and Hermite interpolation. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
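
    The comparison described can be reproduced in outline with SciPy's stock interpolators on synthetic data (the measured carbonyl-iron parameters are not given in the abstract); Hermite interpolation needs slope estimates, supplied here by finite differences.

      import numpy as np
      from scipy.interpolate import lagrange, CubicHermiteSpline

      # Synthetic stand-in for a smooth EM-parameter curve vs frequency (made up).
      f = np.linspace(1.0, 18.0, 7)             # sample frequencies, GHz
      eps = 3.0 + 2.0 / (1.0 + 0.2 * f**1.5)    # synthetic parameter values

      lag = lagrange(f, eps)                    # global Lagrange polynomial
      dydx = np.gradient(eps, f)                # slope estimates for Hermite
      herm = CubicHermiteSpline(f, eps, dydx)   # piecewise cubic Hermite

      f_dense = np.linspace(1.0, 18.0, 200)
      truth = 3.0 + 2.0 / (1.0 + 0.2 * f_dense**1.5)
      print("max |error| Lagrange:", np.abs(lag(f_dense) - truth).max())
      print("max |error| Hermite :", np.abs(herm(f_dense) - truth).max())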

  2. Value-Based Calculators in Cancer: Current State and Challenges.

    Science.gov (United States)

    Nabhan, Chadi; Feinberg, Bruce A

    2017-08-01

    The ASCO Value Framework, National Comprehensive Cancer Network Evidence Blocks, Memorial Sloan Kettering's DrugAbacus, and Institute for Clinical and Economic Review incremental cost-effectiveness ratio calculator are value-based methodologies that attempt to address the disproportionate increase in cancer care spending. These calculators can be used as an initial step for discussing cost versus value, but they fall short in recognizing the importance of the cancer journey because they do not fully factor the patient's perspective or the global cost of care. This timely review highlights both the limitations and the advantages of each value calculator and suggests opportunities for refinement. Practicing oncologists, payers, and manufacturers should be familiar with value-based calculators because the role these tools play in cost containment is likely to be hotly debated.

  3. Efficacy of peg-interferon based treatment in patients with hepatitis C refractory to previous conventional interferon-based treatment

    International Nuclear Information System (INIS)

    Shaikh, S.; Devrajani, B.R.; Kalhoro, M.

    2012-01-01

    Objective: To determine the efficacy of peg-interferon-based therapy in patients refractory to previous conventional interferon-based treatment and factors predicting sustained viral response (SVR). Study Design: Analytical study. Place and Duration of Study: Medical Unit IV, Liaquat University Hospital, Jamshoro, from July 2009 to June 2011. Methodology: This study included consecutive patients with hepatitis C who were previously treated with conventional interferon-based treatment for 6 months but were either non-responders, relapsed or had virologic breakthrough, and who had stage ≥2 fibrosis on liver biopsy. All eligible patients received peg-interferon at a dosage of 180 µg weekly with ribavirin thrice a day for 6 months. Sustained Viral Response (SVR) was defined as absence of HCV RNA at twenty-four weeks after treatment. All data was processed on SPSS version 16. Results: Out of 450 patients enrolled in the study, 192 were excluded on the basis of minimal fibrosis (stage 0 and 1). Two hundred and fifty-eight patients fulfilled the inclusion criteria and 247 completed the course of peg-interferon treatment. One hundred and sixty-one (62.4%) were males and 97 (37.6%) were females. The mean age was 39.9 ± 6.1 years, haemoglobin was 11.49 ± 2.45 g/dl, platelet count was 127.2 ± 50.6 ×10³/mm³, and ALT was 99 ± 65 IU/L. SVR was achieved in 84 (32.6%). A strong association was found between SVR and the pattern of response (p = 0.001), degree of fibrosis, and early viral response (p = 0.001). Conclusion: Peg-interferon based treatment is an effective and safe treatment option for patients refractory to conventional interferon-based treatment. (author)

  4. Sudden Cardiac Death in Young Adults With Previous Hospital-Based Psychiatric Inpatient and Outpatient Treatment

    DEFF Research Database (Denmark)

    Risgaard, Bjarke; Waagstein, Kristine; Winkel, Bo Gregers

    2015-01-01

    Introduction: Psychiatric patients have premature mortality compared to the general population. The incidence of sudden cardiac death (SCD) in psychiatric patients is unknown in a nationwide setting. The aim of this study was to compare nationwide SCD incidence rates in young individuals with and without previous psychiatric disease. Method: Nationwide, retrospective cohort study including all deaths in people aged 18-35 years in 2000-2006 in Denmark. The unique Danish death certificates and autopsy reports were used to identify SCD cases. Psychiatric disease was defined as a previous psychiatric...

  5. Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments

    OpenAIRE

    Neil Aldrin

    2017-01-01

    The aims of the present research are: 1) to know that product buying decision possibly occurs, 2) to know how product buying decision occurs among Lazada e-commerce's customers, and 3) to know how previous buyers' comments can increase product buying decision on Lazada e-commerce. This research utilizes a qualitative research method. Qualitative research investigates other researches and draws assumptions or discussion results so that other analysis results can be made in order to widen idea ...

  6. Improved web-based calculators for predicting breast carcinoma outcomes.

    Science.gov (United States)

    Michaelson, James S; Chen, L Leon; Bush, Devon; Fong, Allan; Smith, Barbara; Younger, Jerry

    2011-08-01

    We describe a set of web-based calculators, available at http://www.CancerMath.net , which estimate the risk of breast carcinoma death, the reduction in life expectancy, and the impact of various adjuvant treatment choices. The published SNAP method of the binary biological model of cancer metastasis uses information on tumor size, nodal status, and other prognostic factors to produce accurate estimates of breast cancer lethality at 15 years after diagnosis. By combining these 15-year lethality estimates with data on the breast cancer hazard function, breast cancer lethality can be estimated at each of the 15 years after diagnosis. A web-based calculator was then created to visualize the estimated lethality with and without a range of adjuvant therapy options at any of the 15 years after diagnosis, and to enable conditional survival calculations. NIH population data were used to estimate the non-breast-cancer chance of death. The accuracy of the calculators was tested against two large breast carcinoma datasets: 7,907 patients seen at two academic hospitals and 362,491 patients from the SEER national dataset. The calculators were found to be highly accurate and specific, as seen by their capacity for stratifying patients into groups differing by as little as a 2% risk of death, and accurately accounting for nodal status, histology, grade, age, and hormone receptor status. Our breast carcinoma calculators provide accurate and useful estimates of the risk of death, which can aid in analysis of the various adjuvant therapy options available to each patient.
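
    The year-by-year and conditional-survival bookkeeping such calculators perform can be illustrated in a few lines; the sketch below assumes made-up annual hazards and is not the SNAP model itself.

      import numpy as np

      # Annual cancer-death hazard h[i] -> survival curve -> conditional survival
      # for a patient who has already survived t years. Hazards are hypothetical.
      h = np.array([0.010, 0.022, 0.030, 0.028, 0.024, 0.020, 0.016, 0.013,
                    0.011, 0.009, 0.008, 0.007, 0.006, 0.005, 0.005])  # years 1..15

      S = np.cumprod(1.0 - h)                  # P(alive at end of year i)
      print("15-year survival:", round(S[-1], 3))

      def conditional_survival(S, t, k):
          """P(survive to year t+k | alive at year t)."""
          return S[t + k - 1] / S[t - 1]

      print("5 more years given 5 survived:", round(conditional_survival(S, 5, 5), 3))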

  7. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Full Text Available The accounting of post-employment benefits based on actuarial calculations at present remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting in fact reflects its evolving character. Because national accounting standards have been adapted to international ones, which, in turn, require the valuation of assets and debts at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in financial statements and provided to the entity's internal and external users. Hence arises the need for highly reliable information, which can be provided by applying actuarial calculations.

  8. Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments

    Directory of Open Access Journals (Sweden)

    Neil Aldrin

    2017-06-01

    Full Text Available The aims of the present research are: 1) to know that product buying decision possibly occurs, 2) to know how product buying decision occurs among Lazada e-commerce's customers, and 3) to know how previous buyers' comments can increase product buying decision on Lazada e-commerce. This research utilizes a qualitative research method. Qualitative research investigates other researches and draws assumptions or discussion results so that other analysis results can be made in order to widen ideas and opinions. Research results show that a product which has many ratings and reviews will trigger other buyers to purchase or get that product. The conclusion is that a product buying decision may occur because there are some processes before making the decision, which are: looking for recognition and searching for problems, knowing the needs, collecting information, evaluating alternatives, and evaluating after buying. In those stages, buying decision on Lazada e-commerce is supported by price, promotion, service, and brand.

  9. Data base for terrestrial food pathways dose commitment calculations

    International Nuclear Information System (INIS)

    Bailey, C.E.

    1979-01-01

    A computer program is under development to allow calculation of the dose to man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain, and growth and consumption factors for a variety of radionuclides.

  10. Calculating gait kinematics using MR-based kinematic models.

    Science.gov (United States)

    Scheys, Lennart; Desloovere, Kaat; Spaepen, Arthur; Suetens, Paul; Jonkers, Ilse

    2011-02-01

    Rescaling generic models is the most frequently applied approach to generating biomechanical models for inverse kinematics. Nevertheless, it is well known that this procedure introduces errors in calculated gait kinematics due to: (1) errors associated with palpation of anatomical landmarks, and (2) inaccuracies in the definition of joint coordinate systems. Based on magnetic resonance (MR) images, more accurate, subject-specific kinematic models can be built that are significantly less sensitive to both error types. We studied the difference between the two modelling techniques by quantifying differences in calculated hip and knee joint kinematics during gait. In a clinically relevant patient group of 7 pediatric cerebral palsy (CP) subjects with increased femoral anteversion, gait kinematics were calculated using (1) rescaled generic kinematic models and (2) subject-specific MR-based models. In addition, both sets of kinematics were compared to those obtained using the standard clinical data processing workflow. Inverse kinematics calculated using rescaled generic models or the standard clinical workflow differed substantially from kinematics calculated using subject-specific MR-based kinematic models. The kinematic differences were most pronounced in the sagittal and transverse planes (hip and knee flexion, hip rotation). This study shows that MR-based kinematic models improve the reliability of gait kinematics compared to generic models based on normal subjects. This is especially the case in CP subjects, where bony deformations may alter the relative configuration of joint coordinate systems. Whilst high cost impedes the implementation of this modeling technique, our results demonstrate that efforts should be made to improve the level of subject-specific detail in the determination of joint axes. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Solid Propellant Burning Equilibrium Ingredients Calculation Based on Temperature Iteration

    Science.gov (United States)

    Hu, Kuan; Zhang, Lin; Chang, Xinlong; Wang, Chao

    2017-07-01

    To address the shortcomings of relying on experience and the poor computational accuracy of using linear interpolation to calculate the constant-pressure combustion temperature, a new method using iteration was put forward. Moreover, an optimization model to calculate equilibrium ingredients was formulated based on the new method, and a sequential quadratic programming method, rather than Lagrange multipliers, was used to solve the model. Finally, a numerical example was given to validate the method in this paper. Results show that better outcomes can be achieved through the method in this paper than through the classical method.
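
    As a generic illustration of the temperature iteration (the paper's model and thermochemistry are not reproduced here), the constant-pressure combustion temperature can be found as the root of an enthalpy balance rather than by table interpolation; all property values below are hypothetical.

      from scipy.optimize import brentq

      # Combustion temperature as the root of H_products(T) - H_reactants = 0,
      # solved by bracketing iteration instead of linear table interpolation.
      H_REACTANTS = 1.80e6   # J/kg, hypothetical total reactant enthalpy

      def h_products(T):
          """Hypothetical product enthalpy with T-dependent heat capacity."""
          cp = 1000.0 + 0.18 * T          # J/(kg*K), made-up fit
          return cp * T

      T_flame = brentq(lambda T: h_products(T) - H_REACTANTS, 500.0, 4000.0)
      print(f"iterated combustion temperature: {T_flame:.0f} K")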

  12. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of realistic head models. Here, a pipeline is presented for the automatic generation of high-quality head models from magnetic resonance images and their usage in subsequent field calculations based on the FEM. The pipeline starts by extracting the borders between skin, skull, cerebrospinal fluid, gray and white matter. The quality of the resulting surfaces is subsequently improved. We demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  13. Contradictions in the Dynamics of Remote Sensing based Evapotranspiration Calculation

    Science.gov (United States)

    Dhungel, R.

    2016-12-01

    The significance of accurate evapotranspiration (ET) estimation need not be overstated given the current prolonged drought, water scarcity, increasing population, and climate change in many parts of the world. Remote sensing based ET calculation methods have been taken as reliable tools for estimating ET at larger temporal and spatial resolutions. The linearity between the surface-air temperature difference (dT) and surface temperature (Ts) from the thermal band of the satellite is utilized in many operational evapotranspiration models (SEBAL/METRIC) invoking the anchor pixel concept. In these models, the surface-air temperature difference at the anchor pixels (dThot/cold) is calculated from the known sensible heat flux (H) via the surface energy balance method. We explored the inherent differences between inverting the aerodynamic equation of H with the actual surface-air temperature difference (dTact) and with dThot/cold. The results showed that this formulation possibly underestimates H through a smaller dT slope, which overall overestimates ET. The major finding and innovative aspect of this study is to present two inconsistent behaviors of the identical process of energy transformation, which have been utilized by remote sensing based evapotranspiration models. This study will help to understand the uncertainty in H calculations in these models, explore the limitations of this methodology (dThot/cold), and warrant further discussion of this application in the remote sensing and micrometeorology communities.
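
    The anchor-pixel linearization under discussion can be sketched as follows, using the standard SEBAL/METRIC form with invented numbers: dT = a + b*Ts is calibrated from the known H at the cold and hot pixels and then applied per pixel.

      import numpy as np

      rho, cp = 1.15, 1004.0          # air density (kg/m3), specific heat (J/kg/K)
      rah = 25.0                      # aerodynamic resistance (s/m), simplified constant

      Ts_cold, H_cold = 295.0, 0.0    # cold pixel: all available energy to evaporation
      Ts_hot,  H_hot  = 318.0, 410.0  # hot pixel: H = Rn - G, no evaporation

      # Calibrate dT = a + b*Ts from the two anchor pixels.
      dT_cold = H_cold * rah / (rho * cp)
      dT_hot  = H_hot  * rah / (rho * cp)
      b = (dT_hot - dT_cold) / (Ts_hot - Ts_cold)
      a = dT_cold - b * Ts_cold

      Ts = np.array([300.0, 305.0, 312.0])          # per-pixel surface temperatures (K)
      H = rho * cp * (a + b * Ts) / rah             # sensible heat flux (W/m2)
      print(np.round(H, 1))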

  14. Plasma density calculation based on the HCN waveform data

    International Nuclear Information System (INIS)

    Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao

    2004-01-01

    A method to improve the plasma density calculation is introduced, using the base voltage and the phase zero points obtained from the HCN interference waveform data. The method improves signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply, excludes the effect of noise according to the possible rate of change of the signal's phase, and makes the base voltage more accurate by dynamic data processing. (authors)

  15. New Products and Technologies, Based on Calculations Developed Areas

    Directory of Open Access Journals (Sweden)

    Gheorghe Vertan

    2013-09-01

    Full Text Available Statistics show that, at present, the only prosperous countries with a high GDP per capita are those that possess and intensively exploit large natural resources and/or massively produce and export products based on corresponding patented inventions. Without great natural wealth, and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from top experience in the country, some of it patented, new and competitive technologies and patentable, exportable products can be developed, based on exact calculations of developed surfaces, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.

  16. Towards automated calculation of evidence-based clinical scores.

    Science.gov (United States)

    Aakre, Christopher A; Dziadzko, Mikhail A; Herasevich, Vitaly

    2017-03-26

    To determine clinical scores important for automated calculation in the inpatient setting. A modified Delphi methodology was used to create consensus on important clinical scores for inpatient practice. A list of 176 externally validated clinical scores was identified from freely available internet-based services frequently used by clinicians. Scores were categorized based on pertinent specialty, and a customized survey was created for each clinician specialty group. Clinicians were asked to rank each score based on the importance of automated calculation to their clinical practice in three categories - "not important", "nice to have", or "very important". Surveys were solicited via specialty-group listserv over a 3-mo interval. Respondents must have been practicing physicians with more than 20% clinical time spent in the inpatient setting. Within each specialty, consensus was established for any clinical score with greater than 70% of responses in a single category and a minimum of 10 responses. Logistic regression was performed to determine predictors of automation importance. In total, 79/144 (54.9%) surveys were completed and 72/144 (50%) surveys were completed by eligible respondents. Only the critical care and internal medicine specialties surpassed the 10-respondent threshold (14 respondents each). For internists, 2/110 (1.8%) of scores were "very important" and 73/110 (66.4%) were "nice to have". For intensivists, no scores were "very important" and 26/76 (34.2%) were "nice to have". Only the number of medical history items in a score (OR = 2.34; 95%CI: 1.26-4.67; P < 0.05) predicted the importance of automated calculation. Future efforts towards score calculator automation should focus on technically feasible "nice to have" scores.

  17. Time Correlation Calculation Method Based on Delayed Coordinates

    Science.gov (United States)

    Morino, K.; Kobayashi, M. U.; Miyazaki, S.

    2009-06-01

    An approximate calculation method for time correlations by use of delayed coordinates is proposed. For a solvable piecewise linear hyperbolic chaotic map, this approximation is compared with the exact calculation, and an exponential convergence for the maximum time delay M is found. By use of this exponential convergence, the exact result for M → ∞ is extrapolated from this approximation for the first few values of M. This extrapolation is shown to be much better than direct numerical simulations based on the definition of the time correlation function. As an application, the irregular dependence of diffusion coefficients similar to Takagi or Weierstrass functions is obtained from this approximation, which is indistinguishable from the exact result already at M = 2. The method is also applied to the dissipative Lozi and Hénon maps and the conservative standard map in order to show wide applicability.

  18. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes made the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified against absolute point dose measurements with an ion chamber and against film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
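
    For readers unfamiliar with the Γ criterion, a minimal 1D version of the gamma index (after the Low et al. formulation) looks like this; the study's engines operate on 3D dose grids, so this is only an illustration of the Γ(1%, 1 mm) pass test.

      import numpy as np

      def gamma_1d(x, ref, x_eval, eval_dose, dd=0.01, dta=1.0):
          """Γ for each evaluated point: min over reference points of
          sqrt((Δx/dta)^2 + (Δdose/(dd*max_ref))^2); Γ <= 1 counts as a pass."""
          dmax = ref.max()
          gam = np.empty_like(eval_dose)
          for i, (xe, de) in enumerate(zip(x_eval, eval_dose)):
              dist = (x - xe) / dta
              ddiff = (ref - de) / (dd * dmax)
              gam[i] = np.sqrt(dist**2 + ddiff**2).min()
          return gam

      x = np.linspace(0, 50, 501)                      # mm, 0.1 mm grid
      ref = np.exp(-((x - 25) / 8.0) ** 2)             # reference dose profile
      test = np.exp(-((x - 25.2) / 8.0) ** 2) * 1.003  # slightly shifted and scaled
      g = gamma_1d(x, ref, x, test)
      print("pass rate:", np.mean(g <= 1.0))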

  19. Sensor Based Engine Life Calculation: A Probabilistic Perspective

    Science.gov (United States)

    Guo, Ten-Huei; Chen, Philip

    2003-01-01

    It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.
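
    The probabilistic approach can be sketched generically: sample the uncertain operating parameter, map it through a damage law, and read off life percentiles. The damage law and all numbers below are hypothetical, not the paper's TMF model.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000

      # Sensed peak metal temperature per mission cycle, with sensor/operating
      # uncertainty (values are made up).
      T_peak = rng.normal(1150.0, 15.0, N)        # K

      # Made-up damage law: per-cycle TMF damage grows exponentially with T.
      damage = 1e-4 * np.exp((T_peak - 1150.0) / 40.0)

      cycles_to_failure = 1.0 / damage            # Miner's rule: fail at damage = 1
      print("median life (cycles):", int(np.median(cycles_to_failure)))
      print("5th percentile life :", int(np.percentile(cycles_to_failure, 5)))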

  20. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Andersoon, K.

    1987-01-01

    Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested for the PHREEQE code in a version run on an IBM XT computer. The work until now has shown that the PHREEQE code mostly gives satisfying results for the speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulties converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties in some data. A quality assurance check on the consistency of the data base is also needed. Further work with data bases should include: an extension to fission products, an extension to engineering materials, an extension to ligands other than hydroxide and carbonate, inclusion of more mineral phases, inclusion of enthalpy data, a control of primary references in order to decide whether values from different compilations are taken from the same primary reference, and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)

  1. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during the production process. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, which produces both gas and water, the gas and water production ratio is introduced to eliminate the effect of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. The relationship between permeability and surface cumulative liquid production can be described by derivation as a single-variable cubic equation. A trend in which permeability initially declines and then increases is shown for ten wells in the southern Qinshui coalbed methane field. The results show an exponential relationship between permeability and cumulative water production. The relationship between permeability and cumulative gas production is represented by a linear curve, and that between permeability and surface cumulative liquid production by a cubic polynomial curve. The regression result for permeability and surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)

  2. [Calculating method for crop water requirement based on air temperature].

    Science.gov (United States)

    Tao, Guo-Tong; Wang, Jing-Lei; Nan, Ji-Qin; Gao, Yang; Chen, Zhi-Fang; Song, Ni

    2014-07-01

    The importance of accurately estimating crop water requirements for irrigation forecasting and agricultural water management has been widely recognized. Although determining crop evapotranspiration (ETc) from meteorological data and a crop coefficient has been broadly adopted, most of the data in weather forecasts are qualitative rather than quantitative, except air temperature. Therefore, this study explored how to estimate ETc precisely using only the air temperature data in forecasts, and investigated the accuracy of estimation at different time scales, which is believed to be beneficial to local irrigation forecasting as well as the optimal management of water and soil resources. Three parameters of the Hargreaves equation and two parameters of the McClound equation were corrected using meteorological data of Xinxiang from 1970 to 2010, and the Hargreaves equation was selected to calculate reference evapotranspiration (ET0) during the growth period of winter wheat. A model for calculating the crop water requirement was developed to predict ETc at time scales of 1, 3, and 7 d intervals by combining the Hargreaves equation and a crop coefficient model based on air temperature. Results showed that the correlation coefficients between measured and predicted values of ETc reached 0.883 (1 d), 0.933 (3 d), and 0.959 (7 d), respectively. The consistency indexes were 0.94, 0.95 and 0.97, respectively, which showed that the forecast error decreased with increasing time scale. Forecast accuracy with an error less than 1 mm·d⁻¹ was more than 80%, and that with an error less than 2 mm·d⁻¹ was greater than 90%. This study provides a sound basis for irrigation forecasting and agricultural management in irrigated areas, since the forecast accuracy at each time scale was relatively high.
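
    A temperature-only ET chain of the kind described can be sketched with the standard FAO-56 Hargreaves coefficients; the study recalibrated three of these parameters for Xinxiang, and those local values are not reproduced here.

      import math

      def hargreaves_et0(tmin, tmax, ra):
          """ET0 (mm/day) from daily min/max air temperature (deg C) and
          extraterrestrial radiation Ra (MJ m-2 day-1, from latitude and date).
          Standard FAO-56 coefficients, not the locally corrected ones."""
          tmean = (tmin + tmax) / 2.0
          return 0.0023 * 0.408 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

      def etc(et0, kc):
          """Crop water requirement: reference ET scaled by a crop coefficient."""
          return kc * et0

      et0 = hargreaves_et0(tmin=4.0, tmax=16.0, ra=25.0)   # hypothetical spring day
      print(f"ET0 = {et0:.2f} mm/day, ETc = {etc(et0, kc=1.05):.2f} mm/day")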

  3. Wavelet-Based DFT calculations on Massively Parallel Hybrid Architectures

    Science.gov (United States)

    Genovese, Luigi

    2011-03-01

    In this contribution, we present an implementation of a full DFT code that can run on massively parallel hybrid CPU-GPU clusters. Our implementation is based on modern GPU architectures which support double-precision floating-point numbers. This DFT code, named BigDFT, is delivered under the GNU-GPL license either in a stand-alone version or integrated in the ABINIT software package. Hybrid BigDFT routines were initially ported with NVidia's CUDA language, and recently more functionalities have been added with new routines written within the Khronos OpenCL standard. The formalism of this code is based on Daubechies wavelets, a systematic real-space basis set. As we will see in the presentation, the properties of this basis set are well suited for an extension to a GPU-accelerated environment. In addition to focusing on the implementation of the operators of the BigDFT code, this presentation also addresses the usage of GPU resources in a complex code with different kinds of operations. A discussion of the interest of present and expected performance of hybrid architecture computation in the framework of electronic structure calculations is also addressed.

  4. Goal based mesh adaptivity for fixed source radiation transport calculations

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.

    2013-01-01

    Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved which solves the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach. The two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed

  5. Introducing E-tec: Ensemble-based Topological Entropy Calculation

    Science.gov (United States)

    Roberts, Eric; Smith, Spencer; Sindi, Suzanne; Smith, Kevin

    2017-11-01

    Topological entropy is a measurement of orbit complexity in a dynamical system that can be estimated in 2D by embedding an initial material curve L0 in the fluid and estimating its growth under the evolution of the flow. This growth is given by L(t) = |L0| e^(ht) (Eq. 1), where L(t) is the length of the curve as a function of t and h is the topological entropy. In order to develop a method for computing Eq. (1) that will efficiently scale up in both system size and modeling time, one must be clever about extracting the maximum information from the limited trajectories available. The relative motion of trajectories through phase space encodes global information that is not contained in any individual trajectory. That is, extra information is "hiding" in an ensemble of classical trajectories, which is not exploited in a trajectory-by-trajectory approach. Using tools from computational geometry, we introduce a new algorithm designed to take advantage of such additional information that requires only potentially sparse sets of particle trajectories as input and no reliance on any detailed knowledge of the velocity field: the Ensemble-Based Topological Entropy Calculation, or E-tec.
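
    The definition above suggests a minimal estimator: fit log L(t) against t and read h off the slope. The sketch below shows only this length-growth fit, not the E-tec algorithm itself, which instead infers curve growth from sparse trajectory ensembles using computational geometry; all numbers are synthetic.

      import numpy as np

      def topological_entropy_from_lengths(times, lengths):
          """Estimate h in L(t) = |L0| * exp(h t) via a least-squares
          fit of log L(t) against t; the slope is the estimate."""
          slope, _intercept = np.polyfit(np.asarray(times), np.log(lengths), 1)
          return slope

      # Synthetic check: a material curve stretching with h = 0.7
      t = np.linspace(0.0, 10.0, 50)
      L = 2.0 * np.exp(0.7 * t) * (1.0 + 0.01 * np.random.randn(t.size))
      print(topological_entropy_from_lengths(t, L))  # ~0.7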

  6. Magnitude processing and complex calculation is negatively impacted by mathematics anxiety while retrieval-based simple calculation is not.

    Science.gov (United States)

    Lee, Kyungmin; Cho, Soohyun

    2017-01-26

    Mathematics anxiety (MA) refers to the experience of negative affect when engaging in mathematical activity. According to Ashcraft and Kirk (2001), MA selectively affects calculation with high working memory (WM) demand. On the other hand, Maloney, Ansari, and Fugelsang (2011) claim that MA affects all mathematical activities, including even the most basic ones such as magnitude comparison. The two theories make opposing predictions on the negative effect of MA on magnitude processing and simple calculation that make minimal demands on WM. We propose that MA has a selective impact on mathematical problem solving that likely involves processing of magnitude representations. Based on our hypothesis, MA will impinge upon magnitude processing even though it makes minimal demand on WM, but will spare retrieval-based, simple calculation, because it does not require magnitude processing. Our hypothesis can reconcile opposing predictions on the negative effect of MA on magnitude processing and simple calculation. In the present study, we observed a negative relationship between MA and performance on magnitude comparison and calculation with high but not low WM demand. These results demonstrate that MA has an impact on a wide range of mathematical performance, which depends on one's sense of magnitude, but spares over-practiced, retrieval-based calculation. © 2017 International Union of Psychological Science.

  7. Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations

    OpenAIRE

    Boyd, Steven J

    2006-01-01

    Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For a HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...

  8. Life cycle assessment of nuclear-based hydrogen production using thermochemical water decomposition: extension of previous work and future needs

    International Nuclear Information System (INIS)

    Lubis, L.I.; Dincer, I.; Rosen, M.A.

    2008-01-01

    An extension of a previous Life Cycle Assessment (LCA) of nuclear-based hydrogen production using thermochemical water decomposition is reported. The copper-chlorine thermochemical cycle is considered, and the environmental impacts of the nuclear and thermochemical plants are assessed, while future needs are identified. Environmental impacts are investigated using CML 2001 impact categories. The nuclear fuel cycle and the construction of the hydrogen plant contribute significantly to the total environmental impacts, whereas the operation of the thermochemical hydrogen production plant contributes much less. Changes in the inventory of chemicals needed in the thermochemical plant do not significantly affect the total impacts. Improvement analysis suggests the development of more sustainable processes, particularly in the nuclear plant. Other important and necessary future extensions of the research reported are also provided. (author)

  9. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine

    International Nuclear Information System (INIS)

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, − 2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing

  10. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine.

    Science.gov (United States)

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, - 2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3mm criteria. The mean and standard deviation of pixels passing gamma

  11. Inverse boundary element calculations based on structural modes

    DEFF Research Database (Denmark)

    Juhl, Peter Møller

    2007-01-01

    The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...

  12. Calculation of the Transit Dose in HDR Brachytherapy Based on ...

    African Journals Online (AJOL)

    The Monte Carlo method, which is the gold standard for accurate dose calculations in radiotherapy, was used to obtain the transit doses around a high dose rate (HDR) brachytherapy implant with thirteen dwell points. The midpoints of each of the inter-dwell separations, of step size 0.25 cm, were representative of the ...

  13. NMR-based phytochemical analysis of Vitis vinifera cv Falanghina leaves. Characterization of a previously undescribed biflavonoid with antiproliferative activity.

    Science.gov (United States)

    Tartaglione, Luciana; Gambuti, Angelita; De Cicco, Paola; Ercolano, Giuseppe; Ianaro, Angela; Taglialatela-Scafati, Orazio; Moio, Luigi; Forino, Martino

    2018-03-01

    Vitis vinifera cv Falanghina is an ancient grape variety of Southern Italy. A thorough phytochemical analysis of Falanghina leaves was conducted to investigate their specialised metabolite content. Along with already known molecules, such as caftaric acid, quercetin-3-O-β-d-glucopyranoside, quercetin-3-O-β-d-glucuronide, kaempferol-3-O-β-d-glucopyranoside and kaempferol-3-O-β-d-glucuronide, a previously undescribed biflavonoid was identified. For this last compound, a moderate bioactivity against metastatic melanoma cell proliferation was discovered; this finding may be of interest to researchers studying human melanoma. The high content of antioxidant glycosylated flavonoids supports the exploitation of grape vine leaves as an inexpensive source of natural products for the food industry and for both pharmaceutical and nutraceutical companies. Additionally, this study offers important insights into the plant's physiology, prompting possible technological research on genetic selection based on vine adaptation to specific pedo-climatic environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Calculation of shear stiffness in noise dominated magnetic resonance elastography data based on principal frequency estimation

    Energy Technology Data Exchange (ETDEWEB)

    McGee, K P; Lake, D; Mariappan, Y; Manduca, A; Ehman, R L [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Hubmayr, R D [Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Ansell, K, E-mail: mcgee.kiaran@mayo.edu [Schaeffer Academy, 2700 Schaeffer Lane NE, Rochester, MN 55906 (United States)

    2011-07-21

    Magnetic resonance elastography (MRE) is a non-invasive phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear wave source and motion encoding gradient waveforms within the MRE pulse sequence enable visualization of the propagating shear wave throughout the medium under investigation. Encoded shear wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimates is that the algorithms employed typically calculate shear stiffness using relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at an extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods but at the expense of loss of spatial information within the region of interest from which the principal frequency estimate is derived.

  15. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    International Nuclear Information System (INIS)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron

    2016-01-01

    Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  16. Inverse boundary element calculations based on structural modes

    DEFF Research Database (Denmark)

    Juhl, Peter Møller

    2007-01-01

    The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods solve for the unknown normal velocities of the structure at the relatively large number of nodes in the numerical model. In effect, the regularization technique smooths the solution spatially, since a fast spatial variation is associated with high-index singular values, which are filtered out or damped in the regularization. Hence, the effective number of degrees of freedom in the model is often much lower than the number of nodes in the model. The present paper deals with an alternative formulation possible for the subset of radiation problems in which a (structural) modal expansion is known for the structure.

  17. Realistic calculations of carbon-based disordered systems

    International Nuclear Information System (INIS)

    Rocha, A R; Fazzio, A; Rossi, Mariana; Silva, Antonio J R da

    2010-01-01

    Carbon nanotubes rank amongst potential candidates for a new family of nanoscopic devices, in particular for sensing applications. At the same time that defects in carbon nanotubes act as binding sites for foreign species, our current level of control over the fabrication process does not allow one to specifically choose where these binding sites will actually be positioned. In this work we present a theoretical framework for accurately calculating the electronic and transport properties of long disordered carbon nanotubes containing a large number of binding sites randomly distributed along a sample. This method combines the accuracy and functionality of ab initio density functional theory to determine the electronic structure with a recursive Green's functions method. We apply this methodology on the problem of nitrogen-rich carbon nanotubes, first considering different types of defects and then demonstrating how our simulations can help in the field of sensor design by allowing one to compute the transport properties of realistic nanotube devices containing a large number of randomly distributed binding sites.

  18. QED Based Calculation of the Fine Structure Constant

    Energy Technology Data Exchange (ETDEWEB)

    Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-13

    Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics, and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ². This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.

  19. New population-based exome data question the pathogenicity of some genetic variants previously associated with Marfan syndrome

    DEFF Research Database (Denmark)

    Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu

    2014-01-01

    BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...

  20. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not ca...

  1. New population-based exome data are questioning the pathogenicity of previously cardiomyopathy-associated genetic variants

    DEFF Research Database (Denmark)

    Andreasen, Charlotte Hartig; Nielsen, Jonas B; Refsgaard, Lena

    2013-01-01

    A large number of genetic variants have been associated with these cardiomyopathies, but the disease-causing effect of reported variants is often dubious. In order to identify possible false-positive variants, we investigated the prevalence of previously reported cardiomyopathy-associated variants in recently published exome data. We searched for reported missense and nonsense variants in the NHLBI-GO Exome Sequencing Project (ESP), containing exome data from 6500 individuals. In ESP, we identified 94 out of 687 (14%) variants previously associated with HCM, 58 out of 337 (17%) variants associated with DCM, and 38 out of 209 (18%) associated with ARVC. These prevalences are many times higher than expected from the phenotype prevalences in the general population (HCM 1:500, DCM 1:2500, and ARVC 1:5000), and our data suggest that a high number of these variants are not monogenic causes of cardiomyopathy.

  2. Five criteria for using a surrogate endpoint to predict treatment effect based on data from multiple previous trials.

    Science.gov (United States)

    Baker, Stuart G

    2018-02-20

    A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on the true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  3. A method for calculating the acid-base equilibria in aqueous and nonaqueous electrolyte solutions

    Science.gov (United States)

    Tanganov, B. B.; Alekseeva, I. A.

    2017-06-01

    Concentrations of particles in acid-base equilibria in aqueous and nonaqueous solutions of electrolytes are calculated on the basis of logarithmic charts, activity coefficients, and equilibrium constants.
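
    As an illustration of such a calculation, the sketch below solves the simplest case, a monoprotic weak acid in water, by finding the root of the charge balance [H+] = [A-] + [OH-]. It assumes unit activity coefficients (ideal dilute solution); the concentration and Ka values are generic examples, not taken from the paper.

      import math
      from scipy.optimize import brentq

      def h_concentration(c_acid, ka, kw=1.0e-14):
          """[H+] for a monoprotic weak acid HA of analytical concentration
          c_acid, from the charge balance [H+] = [A-] + [OH-]."""
          def charge_balance(h):
              a_minus = c_acid * ka / (ka + h)  # dissociated fraction times c_acid
              oh = kw / h
              return h - a_minus - oh
          return brentq(charge_balance, 1e-14, 1.0)

      h = h_concentration(c_acid=0.01, ka=1.8e-5)  # 0.01 M acetic acid
      print(-math.log10(h))  # pH ~ 3.4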

  4. Inhibitor Ranking Through QM based Chelation Calculations for Virtual Screening of HIV-1 RNase H inhibition

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Svendsen, Casper Steinmann; Kongsted, Jacob

    2014-01-01

    Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT associated RNase H (RNH). The QM based chelation calculations show improved binding affinity prediction for the inhibitors compared to using an empirical scoring function. After calibration of the methods based on the use of a training set of molecules, QM based chelation calculations were used as a filter in virtual screening of compounds in the ZINC database. By this, we find, compared to regular docking, QM based chelation calculations to significantly reduce the large number of false positives.

  5. Calculation of Sentence Semantic Similarity Based on Syntactic Structure

    Directory of Open Access Journals (Sweden)

    Xiao Li

    2015-01-01

    Full Text Available To address the single-direction limitation of existing sentence similarity algorithms, an algorithm for sentence semantic similarity based on syntactic structure is proposed. First, the sentence constituents are analyzed; on the basis of the syntactic structure, sentence similarity is then converted into word similarity, word similarity is converted into concept similarity through word-sense disambiguation, and finally the semantic similarity comparison is realized. Detailed comparison rules are also given for the modifier words in the sentence, which make their own contribution to sentence meaning. Under the same test conditions, experiments show that the proposed algorithm matches human intuition more closely and achieves higher accuracy.

  6. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-02-06

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/∼sw2206/softwares.htm .

  7. Birth outcome in women with previously treated breast cancer--a population-based cohort study from Sweden.

    Directory of Open Access Journals (Sweden)

    Kristina Dalberg

    2006-09-01

    Full Text Available Data on birth outcome and offspring health after the appearance of breast cancer are limited. The aim of this study was to assess the risk of adverse birth outcomes in women previously treated for invasive breast cancer compared with the general population of mothers. Of all 2,870,932 singleton births registered in the Swedish Medical Birth Registry during 1973-2002, 331 first births following breast cancer surgery--with a mean time to pregnancy of 37 mo (range 7-163)--were identified using linkage with the Swedish Cancer Registry. Logistic regression analysis was used. The estimates were adjusted for maternal age, parity, and year of delivery. Odds ratios (ORs) and 95% confidence intervals (CIs) were used to estimate infant health and mortality, delivery complications, the risk of preterm birth, and the rates of instrumental delivery and cesarean section. The large majority of births from women previously treated for breast cancer had no adverse events. However, births by women exposed to breast cancer were associated with an increased risk of delivery complications (OR 1.5, 95% CI 1.2-1.9), cesarean section (OR 1.3, 95% CI 1.0-1.7), very preterm birth (<32 wk) (OR 3.2, 95% CI 1.7-6.0), and low birth weight (<1500 g) (OR 2.9, 95% CI 1.4-5.8). A tendency towards an increased risk of malformations among the infants was seen especially in the later time period (1988-2002) (OR 2.1, 95% CI 1.2-3.7). It is reassuring that births overall were without adverse events, but our findings indicate that pregnancies in previously treated breast cancer patients should possibly be regarded as higher risk pregnancies, with consequences for their surveillance and management.

  8. Aeroelastic Calculations Based on Three-Dimensional Euler Analysis

    Science.gov (United States)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.

    1998-01-01

    This paper presents representative results from an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic code (TURBO). Unsteady pressure, lift, and moment distributions are presented for a helical fan test configuration which is used to verify the code by comparison to two-dimensional linear potential (flat plate) theory. The results are for pitching and plunging motions over a range of phase angles. Good agreement with linear theory is seen for all phase angles except those near acoustic resonances. The agreement is better for pitching motions than for plunging motions. The reason for this difference is not understood at present. Numerical checks have been performed to ensure that solutions are independent of time step, converged to periodicity, and linearly dependent on amplitude of blade motion. The paper concludes with an evaluation of the current state of development of the TURBO-AE code and presents some plans for further development and validation of the TURBO-AE code.

  9. Reasons for placement of restorations on previously unrestored tooth surfaces by dentists in The Dental Practice-Based Research Network

    DEFF Research Database (Denmark)

    Nascimento, Marcelle M; Gordan, Valeria V; Qvist, Vibeke

    2010-01-01

    The authors conducted a study to identify and quantify the reasons used by dentists in The Dental Practice-Based Research Network (DPBRN) for placing restorations on unrestored permanent tooth surfaces and the dental materials they used in doing so....

  10. Reasons for placement of restorations on previously unrestored tooth surfaces by dentists in The Dental Practice-Based Research Network

    DEFF Research Database (Denmark)

    Nascimento, Marcelle M; Gordan, Valeria V; Qvist, Vibeke

    2010-01-01

    The authors conducted a study to identify and quantify the reasons used by dentists in The Dental Practice-Based Research Network (DPBRN) for placing restorations on unrestored permanent tooth surfaces and the dental materials they used in doing so.

  11. A DFT based method for calculating the surface energies of asymmetric MoP facets

    Science.gov (United States)

    Tian, Xinxin; Wang, Tao; Fan, Lifang; Wang, Yuekui; Lu, Haigang; Mu, Yuewen

    2018-01-01

    MoP is a promising catalyst in heterogeneous catalysis. Understanding its surface stability and morphology is the first and essential step in exploring its catalytic properties. However, the traditional surface energy calculation method does not work for the asymmetric terminations of MoP. In this work, we report a useful DFT based method to obtain the surface energies of asymmetric MoP facets. Under ideal conditions, the (101) surface with mixed Mo/P termination is the most stable, followed by the (100) surface, while the (001) surface is the least stable. Wulff construction reveals the exposure of six surfaces on the MoP nanoparticle, of which the (101) has the largest contribution. Atomistic thermodynamics results reveal changes in the surface stability order with experimental conditions, and the (001)-P termination becomes increasingly stable with increasing P chemical potential, which indicates that its exposure is possible under defined conditions. Our results agree well with the previous experimental XRD and TEM data. We believe the reported method for surface energy calculation could be extended to other similar systems with asymmetric surface terminations.

  12. Explicit consideration of spatial hydrogen bonding direction for activity coefficient prediction based on implicit solvation calculations.

    Science.gov (United States)

    Chen, Wei-Lin; Lin, Shiang-Tai

    2017-08-09

    The activity coefficient of a chemical in a mixture is important in understanding the thermodynamic properties and non-ideality of the mixture. The COSMO-SAC model based on the result of quantum mechanical implicit solvation calculations has been shown to provide reliable predictions of activity coefficients for mixed fluids. However, it is found that the prediction accuracy is in general inferior for associating fluids. Existing methods for describing the hydrogen-bonding interaction consider the strength of the interaction based only on the polarity of the screening charges, neglecting the fact that the formation of hydrogen bonds requires a specific orientation between the donor and acceptor pairs. In this work, we propose a new approach that takes into account the spatial orientational constraints in hydrogen bonds. Based on the Valence Shell Electron Pair Repulsion (VSEPR) theory, the molecular surfaces associated with the formation of hydrogen bonds are limited to those in the projection of the lone pair electrons of hydrogen bond acceptors, in addition to the polarity of the surface screening charges. Our results show that this new directional hydrogen bond approach, denoted as the COSMO-SAC(DHB) model, requires fewer universal parameters and is significantly more accurate and reliable compared to previous models for a variety of properties, including vapor-liquid equilibria (VLE), infinite dilution activity coefficient (IDAC) and water-octanol partition coefficient (Kow).

  13. Biotin IgM Antibodies in Human Blood: A Previously Unknown Factor Eliciting False Results in Biotinylation-Based Immunoassays

    Science.gov (United States)

    Chen, Tingting; Hedman, Lea; Mattila, Petri S.; Jartti, Laura; Jartti, Tuomas; Ruuskanen, Olli; Söderlund-Venermo, Maria; Hedman, Klaus

    2012-01-01

    Biotin is an essential vitamin that binds streptavidin or avidin with high affinity and specificity. As biotin is a small molecule that can be linked to proteins without affecting their biological activity, biotinylation is applied widely in biochemical assays. In our laboratory, IgM enzyme immunoassays (EIAs) of µ-capture format have been set up against many viruses, using as antigen biotinylated virus-like particles (VLPs) detected by horseradish peroxidase-conjugated streptavidin. We recently encountered one serum sample reacting with the biotinylated VLP but not with the unbiotinylated one, suggesting the occurrence of biotin-reactive antibodies in human sera. In the present study, we search the general population (612 serum samples from adults and 678 from children) for IgM antibodies reactive with biotin and develop an indirect EIA for quantification of their levels and assessment of their seroprevalence. These IgM antibodies were present in 3% of adults regardless of age, but were rarely found in children. The adverse effects of the biotin IgM on biotinylation-based immunoassays were assessed, including four in-house and one commercial virus IgM EIAs, showing that biotin IgM do cause false positivities. Biotin cannot bind IgM and streptavidin or avidin simultaneously, suggesting that these biotin-interactive compounds compete for the common binding site. In competitive inhibition assays, the affinities of biotin IgM antibodies ranged from 2.1×10−3 to 1.7×10−4 mol/L. This is the first report on biotin antibodies found in humans, providing new information on biotinylation-based immunoassays as well as new insights into the biomedical effects of vitamins. PMID:22879954

  14. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  15. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  16. Hemoglobin-Based Oxygen Carrier (HBOC) Development in Trauma: Previous Regulatory Challenges, Lessons Learned, and a Path Forward.

    Science.gov (United States)

    Keipert, Peter E

    2017-01-01

    Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.

  17. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research to calculate a coefficient of decommissioning work-unit productivity in order to estimate the time and cost of decommissioning work, based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on data in the form of a work breakdown structure (WBS) code built from the decommissioning activity experience data for KRR-2. The defined WBS codes are used by each system to calculate the decommissioning cost. In this paper, we describe a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and describes the mapping method for the decommissioning target facility within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning an NPP facility because of the number of variables involved, such as the material, size, and radiological conditions of the target facility.

  18. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate a coefficient of decommissioning work-unit productivity in order to estimate the time and cost of decommissioning work, based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on data in the form of a work breakdown structure (WBS) code built from the decommissioning activity experience data for KRR-2. The defined WBS codes are used by each system to calculate the decommissioning cost. In this paper, we describe a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and describes the mapping method for the decommissioning target facility within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning an NPP facility because of the number of variables involved, such as the material, size, and radiological conditions of the target facility.

  19. THEXSYST - a knowledge based system for the control and analysis of technical simulation calculations

    International Nuclear Information System (INIS)

    Burger, B.

    1991-07-01

    This system (THEXSYST) will be used for the control, analysis, and presentation of thermal hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with a user interface, a data base, and a simulation program, and uses techniques available in RSYST. A knowledge base, which was created to control the simulation calculations of pressurized water reactors, covers both the steady state calculation and the transient calculation in the domain of depressurization resulting from a small break loss of coolant accident. The methods developed are tested using a simulation calculation with RELAP5/Mod2. It will be seen that the application of knowledge base techniques may be a helpful tool to support existing solutions, especially in graphical analysis. (orig./HP) [de

  20. Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments

    Science.gov (United States)

    Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
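
    The core computation behind a BMD is a fit-and-invert step: fit a dose-response model, then solve for the dose at which extra risk equals the benchmark response (here 10%). The sketch below does this for a quantal log-logistic model; the model choice, data and starting values are illustrative, and a BMDL would additionally require a profile-likelihood or bootstrap lower confidence limit, which is omitted.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      def loglogistic(dose, bg, slope, ed50):
          """P(d) = bg + (1 - bg) / (1 + (ed50/d)^slope), with P(0) = bg."""
          d = np.asarray(dose, dtype=float)
          safe = np.where(d > 0, d, 1.0)  # avoid division by zero at d = 0
          p = np.where(d > 0, 1.0 / (1.0 + (ed50 / safe) ** slope), 0.0)
          return bg + (1.0 - bg) * p

      # Hypothetical quantal dose-response data (dose, response fraction)
      doses = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
      resp = np.array([0.02, 0.05, 0.15, 0.40, 0.80])

      params, _ = curve_fit(loglogistic, doses, resp, p0=[0.02, 1.0, 100.0])
      bg = params[0]

      def extra_risk(d):
          """Extra risk (P(d) - P(0)) / (1 - P(0)) over background."""
          return (loglogistic(d, *params) - bg) / (1.0 - bg)

      bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-6, doses.max())
      print(bmd10)  # dose giving 10% extra risk over background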

  1. [External beam radiotherapy cone beam-computed tomography-based dose calculation].

    Science.gov (United States)

    Barateau, A; Céleste, M; Lafond, C; Henry, O; Couespel, S; Simon, A; Acosta, O; de Crevoisier, R; Périchon, N

    2018-02-01

    In external beam radiotherapy, dose planning is currently based on computed tomography (CT) images. A relation between Hounsfield numbers and electron densities (or mass densities) is necessary for dose calculation taking heterogeneities into account. In the image-guided radiotherapy process, cone beam CT is classically used for tissue visualization and registration. Cone beam CT for dose calculation is also attractive from a dose reporting/monitoring perspective, particularly in a context of dose-guided adaptive radiotherapy. The accuracy of cone beam CT-based dose calculation is limited by image characteristics such as quality, Hounsfield number consistency and the restricted size of the acquired volume. The analysis of the literature identifies three kinds of strategies for cone beam CT-based dose calculation: establishment of Hounsfield number versus density curves, density override of regions of interest, and deformable registration between CT and cone beam CT images. Literature results show that discrepancies between the reference CT-based dose calculation and the cone beam CT-based dose calculation are often lower than 3%, regardless of the method; however, they can also reach 10% with an unsuitable method. Even though the accuracy of cone beam CT-based dose calculation is largely independent of the method, some strategies are promising but need improvements in process automation before routine implementation. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
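
    The first strategy in that list, a Hounsfield-number-to-density calibration curve, reduces in practice to a piecewise-linear lookup applied voxel-wise before dose calculation. The sketch below shows this step; the calibration points are hypothetical and would normally be measured on a density phantom for the specific cone beam CT device and protocol.

      import numpy as np

      # Hypothetical CBCT calibration curve: HU -> mass density (g/cm^3)
      hu_points = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0, 3000.0])
      density_points = np.array([0.001, 0.30, 1.00, 1.10, 1.70, 2.80])

      def hu_to_density(hu):
          """Piecewise-linear interpolation of the calibration curve;
          values outside the table are clamped to its end points."""
          return np.interp(hu, hu_points, density_points)

      cbct = np.random.randint(-1000, 2000, size=(4, 4, 4)).astype(float)
      densities = hu_to_density(cbct)  # voxel-wise densities for the dose engine
      print(densities.min(), densities.max())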

  2. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value based on constructed value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and...

  3. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    OpenAIRE

    Wan'e, Wu; Zuoming, Zhu

    2012-01-01

    A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward; a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on a backpropagation neural network was established, validated, and then used to predict primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation rang...
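
    A backpropagation network of the kind described is, in modern terms, a small multilayer perceptron regressor mapping formulation parameters to burning rate. The sketch below uses scikit-learn as a stand-in; the choice of input features, the data and the network size are all hypothetical, not the scheme from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      # Hypothetical characterization parameters per formulation:
      # [boron %, binder %, oxidizer %, chamber pressure MPa]
      X = np.array([
          [30.0, 10.0, 55.0, 5.0],
          [25.0, 12.0, 58.0, 7.0],
          [35.0, 8.0, 52.0, 6.0],
          [28.0, 11.0, 56.0, 4.0],
          [32.0, 9.0, 54.0, 8.0],
      ])
      y = np.array([8.2, 9.1, 7.5, 7.9, 9.6])  # burning rate, mm/s (illustrative)

      scaler = StandardScaler().fit(X)
      net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                         solver="lbfgs", max_iter=5000, random_state=0)
      net.fit(scaler.transform(X), y)

      candidate = np.array([[31.0, 10.0, 55.0, 6.0]])
      print(net.predict(scaler.transform(candidate)))  # predicted burning rate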

  4. Variation among internet based calculators in predicting spontaneous resolution of vesicoureteral reflux.

    Science.gov (United States)

    Routh, Jonathan C; Gong, Edward M; Cannon, Glenn M; Yu, Richard N; Gargollo, Patricio C; Nelson, Caleb P

    2010-04-01

    An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds of greater than 5%, 10% and 25% probability of spontaneous resolution, the calculators differed significantly as to whether cases would be deemed likely to resolve. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  5. Inverse calculation of biochemical oxygen demand models based on time domain for the tidal Foshan River.

    Science.gov (United States)

    Er, Li; Xiangying, Zeng

    2014-01-01

    To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on the time domain are applied to obtain the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivations of the inverse calculation have been established separately for the different flow directions in the tidal river. The results of this paper indicate that the calculated BOD values based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models.
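
    One way to realize such an inverse calculation is to embed a forward advection-dispersion-decay solver in a least-squares loop and fit E(x) and K(x) to the measured BOD profile. The sketch below does this for constant coefficients on a single reach, with the flow direction entering through the sign of the velocity u; the grid, the synthetic "measurements" and all parameter values are hypothetical, not the paper's formulation.

      import numpy as np
      from scipy.optimize import least_squares

      def bod_forward(ex, kx, c0, u, dx=100.0, dt=50.0, nsteps=200):
          """Explicit finite-difference solution of
              dC/dt = Ex d2C/dx2 - u dC/dx - Kx C
          with u > 0 for downstream (ebb) flow and u < 0 for flood flow.
          dt is chosen to satisfy the explicit stability limit for the
          Ex range explored below."""
          c = c0.copy()
          for _ in range(nsteps):
              diff = ex * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2
              adv = -u * (np.roll(c, -1) - np.roll(c, 1)) / (2.0 * dx)
              c = c + dt * (diff + adv - kx * c)
              c[0], c[-1] = c0[0], c0[-1]  # fixed boundary concentrations
          return c

      c0 = np.linspace(8.0, 3.0, 30)  # initial BOD profile (mg/L)
      measured = bod_forward(ex=25.0, kx=2.0e-4, c0=c0, u=0.15)  # synthetic data

      def residual(p):
          return bod_forward(ex=p[0], kx=p[1], c0=c0, u=0.15) - measured

      fit = least_squares(residual, x0=[10.0, 1.0e-4],
                          bounds=([0.1, 1e-6], [80.0, 1e-2]))
      print(fit.x)  # recovered (Ex, Kx), here ~ (25.0, 2.0e-4)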

  6. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs
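
    As an example of the kind of model such a macro applies, the constant rate of supply (CRS) model dates each layer from the cumulative unsupported 210Pb inventory below it, t(z) = (1/λ)·ln(A(0)/A(z)). The sketch below is a generic CRS implementation with made-up core data, not the CIEMAT calculation book itself.

      import numpy as np

      LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant (1/yr)

      def crs_ages(activities, mass_depth):
          """CRS ages: t = (1/lambda) * ln(A(0)/A(z)), with A(z) the
          cumulative unsupported 210Pb inventory below each layer.

          activities : unsupported 210Pb activity per layer (Bq/kg)
          mass_depth : dry mass per unit area of each layer (g/cm^2)
          """
          inventory = activities * mass_depth       # per-layer inventory
          below = np.cumsum(inventory[::-1])[::-1]  # inventory below each depth
          return np.log(below[0] / below) / LAMBDA_PB210

      depths = np.array([1.0, 3.0, 5.0, 7.0, 9.0])       # cm, illustrative
      act = np.array([250.0, 180.0, 120.0, 70.0, 30.0])  # Bq/kg, illustrative
      ages = crs_ages(act, np.full(5, 0.5))
      for z, t in zip(depths, ages):
          print(f"{z:4.1f} cm : {t:6.1f} yr")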

  7. Calculation of the Stabilization Energies of Oxidatively Damaged Guanine Base Pairs with Guanine

    Directory of Open Access Journals (Sweden)

    Hiroshi Miyazawa

    2012-06-01

    Full Text Available DNA is constantly exposed to endogenous and exogenous oxidative stresses. Damaged DNA can cause mutations, which may increase the risk of developing cancer and other diseases. G:C-C:G transversions are caused by various oxidative stresses. 2,2,4-Triamino-5(2H)-oxazolone (Oz), guanidinohydantoin (Gh)/iminoallantoin (Ia) and spiro-imino-dihydantoin (Sp) are known products of oxidative guanine damage. These damaged bases can base pair with guanine and cause G:C-C:G transversions. In this study, the stabilization energies of these bases paired with guanine were calculated in vacuo and in water. The calculated stabilization energies of the Ia:G base pairs were similar to that of the native C:G base pair, and both base pairs have three hydrogen bonds. By contrast, the calculated stabilization energies of Gh:G, which forms two hydrogen bonds, were lower than those of the Ia:G base pairs, suggesting that the stabilization energy depends on the number of hydrogen bonds. In addition, the Sp:G base pairs were less stable than the Ia:G base pairs. Furthermore, calculations showed that the Oz:G base pairs were less stable than the Ia:G, Gh:G and Sp:G base pairs, even though experimental results showed that incorporation of guanine opposite Oz is more efficient than that opposite Gh/Ia and Sp.

  8. Development of a GPU-based multithreaded software application to calculate digitally reconstructed radiographs for radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro; Kobayashi, Masanao; Kumagai, Motoki; Minohara, Shinichi

    2009-01-01

    To provide faster calculation of digitally reconstructed radiographs (DRRs) in patient-positioning verification, we developed and evaluated a graphic processing unit (GPU)-based DRR software application and compared it with a central processing unit (CPU)-based application. The evaluation metrics were calculation speed and image quality for various slice thicknesses. The results showed that the GPU-based DRR computation was an average of 50 times faster than the CPU-based methodology, whereas the image quality was very similar. This excellent performance may increase the accuracy of patient positioning and improve the patient treatment throughput time.

  9. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through an uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel, and the remaining random errors were found to be adequately predicted by the proposed method.

  10. Tunneling-time calculations for general finite wave packets based on the presence-time formalism

    International Nuclear Information System (INIS)

    Barco, O. del; Ortuno, M.; Gasparian, V.

    2006-01-01

    We analyze the tunneling-time problem via the presence-time formalism. With this method we reproduce previous results for very long wave packets and are able to calculate the tunneling time for general wave packets of arbitrary shape and length. The tunneling time for a general wave packet is equal to the average over the energy components of the standard phase time. With this approach we can also calculate the time uncertainty. We have checked that the results obtained with this approach agree extremely well with numerical simulations of the wave packet evolution.
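
    The quoted result, that the tunneling time equals the spectral average of the standard phase time, is straightforward to evaluate numerically. A minimal sketch for a 1D rectangular barrier in natural units (hbar = m = 1); the barrier parameters, the phase-time convention and the Gaussian spectral weight are assumptions, not taken from the paper:

    ```python
    import numpy as np

    V0, a = 2.0, 3.0                                   # barrier height and width (assumed)

    def trans_phase(E):
        """Transmission phase of a rectangular barrier (E < V0), measured relative
        to free flight across the barrier width (one common phase-time convention)."""
        k = np.sqrt(2.0 * E)
        q = np.sqrt(2.0 * (V0 - E))
        return -np.arctan((q**2 - k**2) / (2.0 * k * q) * np.tanh(q * a))

    def phase_time(E, dE=1e-6):
        """Standard phase time tau = hbar * d(phase)/dE, with hbar = 1."""
        return (trans_phase(E + dE) - trans_phase(E - dE)) / (2.0 * dE)

    k = np.linspace(0.4, 1.6, 400)                     # momentum grid, energies below V0
    E = 0.5 * k**2
    phi2 = np.exp(-((k - 1.0) / 0.15) ** 2)            # spectral weight |phi(k)|^2
    tau = (phi2 * phase_time(E)).sum() / phi2.sum()    # average over energy components
    print(f"spectrally averaged phase time: {tau:.3f} (natural units)")
    ```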

  11. New algorithm for post-radial keratotomy intraocular lens power calculations based on rotating Scheimpflug camera data.

    Science.gov (United States)

    Potvin, Richard; Hill, Warren

    2013-03-01

    To provide an algorithm to calculate intraocular lens (IOL) power for post-radial keratotomy (RK) eyes based on data extracted from the Pentacam Scheimpflug camera and to compare calculations with those from an existing standard. Private practice, Mesa, Arizona. Case series. Relevant IOL calculation and postoperative refractive data were obtained for eyes that had previous RK, but no additional keratorefractive procedures, and subsequent cataract surgery. Various Scheimpflug measurements from examinations before cataract surgery over a range of zone diameters were used to calculate IOL power using an Aramberri double-K-modified Holladay 1 formula. Results were compared with actual postsurgical data and with IOL calculations based on the mean of the 1.0 mm to 4.0 mm rings from the Atlas topography system. Data were obtained for 83 eyes of 57 patients, including more than 120 different measures per eye from the Scheimpflug system. The mean pupil-centered sagittal front power over the central 4.0 mm zone provided the best results after adjustment for central corneal thickness (CCT). Results were similar to those obtained when the IOL power was calculated using the topography system; 42% of eyes were within ±0.50 diopter (D) of the target, and 76% of eyes were within ±1.00 D. In this large series of eyes, the calculation of IOL power after RK using sagittal front-surface power and CCT from the Scheimpflug system produced results equivalent to the multizone approach with the topography system. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Evaluation of Mycobacterium leprae antigens in the monitoring of a dapsone-based chemotherapy of previously untreated lepromatous patients in Cebu, Philippines

    NARCIS (Netherlands)

    Klatser, P. R.; de Wit, M. Y.; Fajardo, T. T.; Cellona, R. V.; Abalos, R. M.; de la Cruz, E. C.; Madarang, M. G.; Hirsch, D. S.; Douglas, J. T.

    1989-01-01

    Thirty-five previously untreated lepromatous patients receiving dapsone-based therapy were monitored throughout their 5-year period of treatment by serology and by pathology. Sequentially collected sera were used to evaluate the usefulness of four Mycobacterium leprae antigens as used in ELISA to

  13. Effects of Web-Based Instruction on Nursing Students' Arithmetical and Drug Dosage Calculation Skills.

    Science.gov (United States)

    Karabağ Aydin, Arzu; Dinç, Leyla

    2017-05-01

    Drug dosage calculation skill is critical for all nursing students to ensure patient safety, particularly during clinical practice. The purpose of the study was to evaluate the effectiveness of Web-based instruction in improving nursing students' arithmetical and drug dosage calculation skills, using a pretest-posttest design. A total of 63 nursing students participated. Data were collected through a Demographic Information Form, and the Arithmetic Skill Test and Drug Dosage Calculation Skill Test were used as pre- and posttests. The pretest was conducted in the classroom. A Web site was then constructed, which included audio presentations of lectures, quizzes, and online posttests. Students had Web-based training for 8 weeks and then completed the posttest. Pretest and posttest scores were compared using the Wilcoxon test, and correlation coefficients were used to identify the relationship between arithmetic and calculation skill scores. The results demonstrated that Web-based teaching improves students' arithmetic and drug dosage calculation skills. There was a positive correlation between the arithmetic skill and drug dosage calculation skill scores of the students. Web-based teaching programs can be used to improve knowledge and skills at a cognitive level in nursing students.

  14. Applications of thermodynamic calculations to Mg alloy design: Mg-Sn based alloy development

    International Nuclear Information System (INIS)

    Jung, In-Ho; Park, Woo-Jin; Ahn, Sang Ho; Kang, Dae Hoon; Kim, Nack J.

    2007-01-01

    Recently, the Mg-Sn based alloy system has been actively investigated in order to develop new magnesium alloys that have a stable structure and good mechanical properties at high temperatures. Thermodynamic modeling of the Mg-Al-Mn-Sb-Si-Sn-Zn system was performed based on available thermodynamic, phase equilibria and phase diagram data. Using the optimized database, the phase relationships of the Mg-Sn-Al-Zn alloys with additions of Si and Sb were calculated and compared with their experimental microstructures. It is shown that the calculated results are in good agreement with the experimental microstructures, which proves the applicability of thermodynamic calculations to new Mg alloy design. All calculations were performed using the FactSage thermochemical software. (orig.)

  15. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, the noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between the physical properties of the flow and the numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.

  16. Calculating evidence-based renal replacement therapy - Introducing an excel-based calculator to improve prescribing and delivery in renal replacement therapy - A before and after study.

    Science.gov (United States)

    Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin

    2016-02-01

    Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as effluent flow rate in ml kg⁻¹ h⁻¹. Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment and deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, while prolonging filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg⁻¹ h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose reduced from 41.0 ml kg⁻¹ h⁻¹ to 26.8 ml kg⁻¹ h⁻¹ with reduced variability that was significantly closer to the aim of 25 ml kg⁻¹ h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
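
    A minimal sketch of the arithmetic such a calculator performs, prescribing a 25 ml kg⁻¹ h⁻¹ CVVHDF effluent dose while capping the filtration fraction at 15%; the function, default haematocrit and blood flow rate are assumptions for illustration, not the published tool:

    ```python
    def cvvhdf_prescription(weight_kg, hct=0.30, blood_flow_ml_min=200.0,
                            dose_ml_kg_h=25.0, max_ff=0.15):
        effluent_ml_h = dose_ml_kg_h * weight_kg           # total effluent target
        plasma_flow_ml_h = blood_flow_ml_min * 60.0 * (1.0 - hct)
        # filtration fraction = ultrafiltration / plasma flow; cap at max_ff
        max_uf_ml_h = max_ff * plasma_flow_ml_h
        uf_ml_h = min(effluent_ml_h, max_uf_ml_h)          # convective portion
        dialysate_ml_h = effluent_ml_h - uf_ml_h           # remainder via diffusion
        return {"effluent_ml_h": effluent_ml_h,
                "ultrafiltrate_ml_h": uf_ml_h,
                "dialysate_ml_h": dialysate_ml_h,
                "filtration_fraction": uf_ml_h / plasma_flow_ml_h}

    print(cvvhdf_prescription(weight_kg=80.0))
    ```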

  17. Research on Transformer Direct Magnetic Bias Current Calculation Method Based on Field Circuit Iterative Algorithm

    OpenAIRE

    Ning Yao

    2014-01-01

    In order to analyze the DC magnetic bias effect on neutral-grounded AC transformers around a convertor station grounding electrode, a new calculation method, the field circuit iterative algorithm, is proposed in this article. The method includes a partial iterative algorithm and a concentrated iterative algorithm, and builds on direct injection current calculation methods, the field circuit coupling method and the resistor network method. Not only the effect of direct convertor station grounding elect...

  18. CALCULATION OF PROPELLER UAV BASED REYNOLDS NUMBER AND DEGREE OF REDUCTION

    Directory of Open Access Journals (Sweden)

    O. V. Gerasimov

    2014-01-01

    Full Text Available A methodology is presented for the design and verification calculations of an isolated propeller for a mini-UAV, based on Zhukovsky's vortex theory. The results of the calculations for mini-UAV propellers are compared with the results of matching a propeller on a standard chart. The effects of Re and of the degree of reduction on the aerodynamic and geometric characteristics of the propeller are shown.

  19. Artificial neural network based torque calculation of switched reluctance motor without locking the rotor

    Science.gov (United States)

    Kucuk, Fuat; Goto, Hiroki; Guo, Hai-Jiao; Ichinokura, Osamu

    2009-04-01

    Feedback of motor torque is required in most switched reluctance (SR) motor applications in order to control the torque and its ripple. An SR motor exhibits highly nonlinear characteristics, which do not allow torque to be calculated analytically. Torque can be measured directly with a torque sensor, but this inevitably increases the cost, and the sensor has to be properly mounted on the motor shaft. Instead of a torque sensor, finite element analysis (FEA) may be employed for torque calculation; however, motor modeling and calculation take a relatively long time, and the results of FEA may also differ from the actual results. The most convenient way seems to be calculating torque from measured values of rotor position, current, and flux linkage while locking the rotor at definite positions. However, this method needs an extra assembly to lock the rotor. In this study, a novel torque calculation based on artificial neural networks (ANNs) is presented. Magnetizing data are collected while a 6/4 SR motor is running, and they need to be interpolated for torque calculation; an ANN is a very strong tool for data interpolation. The ANN-based torque estimation is verified on the 6/4 SR motor and compared with FEA-based torque estimation to show its validity.
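
    A minimal sketch of the interpolation idea, training a small ANN to map rotor position and phase current to torque; the data are a synthetic toy surface standing in for the measured magnetizing data, not the authors' model:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 90.0, 2000)          # rotor position, deg (6/4 pole pitch)
    i_ph = rng.uniform(0.0, 10.0, 2000)           # phase current, A
    # toy nonlinear torque surface with saturation, Nm (placeholder for real data)
    torque = np.sin(np.radians(4.0 * theta)) * np.tanh(0.3 * i_ph) * 2.0

    X = np.column_stack([theta, i_ph])
    net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=3000, random_state=0)
    net.fit(X, torque)

    print("predicted torque at theta=30 deg, i=5 A:",
          net.predict([[30.0, 5.0]])[0])
    ```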

  20. Effectiveness of Ritonavir-Boosted Protease Inhibitor Monotherapy in Clinical Practice Even with Previous Virological Failures to Protease Inhibitor-Based Regimens.

    Directory of Open Access Journals (Sweden)

    Luis F López-Cortés

    Full Text Available Significant controversy still exists about ritonavir-boosted protease inhibitor monotherapy (mtPI/rtv) as a simplification strategy that has until now been used to treat patients who have not experienced previous virological failure (VF) while on protease inhibitor (PI)-based regimens. We evaluated the effectiveness of two mtPI/rtv regimens in an actual clinical practice setting, including patients who had experienced previous VF with PI-based regimens. This retrospective study analyzed 1060 HIV-infected patients with undetectable viremia who were switched to lopinavir/ritonavir or darunavir/ritonavir monotherapy. In cases in which the patient had previously experienced VF while on a PI-based regimen, the lack of major HIV protease resistance mutations to lopinavir or darunavir, respectively, was mandatory. The primary endpoint of this study was the percentage of participants with virological suppression after 96 weeks according to intention-to-treat analysis (non-complete/missing = failure). A total of 1060 patients were analyzed, including 205 with previous VF while on PI-based regimens, 90 of whom were on complex therapies due to extensive resistance. The rates of treatment effectiveness (intention-to-treat analysis) and virological efficacy (on-treatment analysis) at week 96 were 79.3% (CI95, 76.8-81.8) and 91.5% (CI95, 89.6-93.4), respectively. No relationships were found between VF and earlier VF while on PI-based regimens, the presence of major or minor protease resistance mutations, the previous time on viral suppression, CD4+ T-cell nadir, and HCV coinfection. Genotypic resistance tests were available for 49 of the 74 patients with VF, and only four patients presented new major protease resistance mutations. Switching to mtPI/rtv achieves sustained virological control in most patients, even in those with previous VF on PI-based regimens, as long as no major resistance mutations are present for the administered drug.

  1. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...

  2. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank, in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global max dose or 1 mm distance to agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass the gamma analysis; comparing the dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. The IDC-calculated dose is found to be up to 5.6% lower than the dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
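
    For reference, a minimal 1D sketch of the global gamma analysis used above (2%/2 mm); the profiles are hypothetical, and real QA uses 2D/3D data with dedicated tools:

    ```python
    import numpy as np

    def gamma_pass_rate(x, d_ref, d_eval, dd=0.02, dta=2.0):
        """Global gamma: dd = dose criterion (fraction of max dose), dta in mm."""
        d_norm = dd * d_ref.max()
        passed = []
        for xi, di in zip(x, d_ref):
            g2 = ((x - xi) / dta) ** 2 + ((d_eval - di) / d_norm) ** 2
            passed.append(np.sqrt(g2.min()) <= 1.0)
        return np.mean(passed)

    x = np.linspace(-50.0, 50.0, 201)                  # position, mm
    d_ref = np.exp(-(x / 20.0) ** 2)                   # reference profile
    d_eval = 1.01 * np.exp(-((x - 0.5) / 20.0) ** 2)   # slightly shifted/scaled
    print(f"gamma 2%/2mm pass rate: {gamma_pass_rate(x, d_ref, d_eval):.1%}")
    ```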

  3. Calculation of marine propeller static strength based on coupled BEM/FEM

    Directory of Open Access Journals (Sweden)

    YE Liyu

    2017-10-01

    Full Text Available [Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low-order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transferred to the finite element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results for different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distributions are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into theoretical and optimized propeller design codes, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.

  4. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysіuk Svitlana V.

    2017-05-01

    Full Text Available The article is aimed at highlighting the features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods used to implement it. The objective of the study is to develop recommendations for improving the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision on the procedure for fee-based services by library institutions has been substantiated. Such a Provision would protect library institutions from errors in fixing the fee for a paid service and would serve as an information source explaining how fees are set. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, the combination of calculation levers with the development of a system of internal accounting, together with the use of its methodology, provides another equally efficient way of improving library institutions' activity.

  5. PREVIOUS SECOND TRIMESTER ABORTION

    African Journals Online (AJOL)

    PNLC

    PREVIOUS SECOND TRIMESTER ABORTION: A risk factor for third trimester uterine rupture in three ... for accurate diagnosis of uterine rupture. KEY WORDS: Induced second trimester abortion - Previous uterine surgery - Uterine rupture. ... scarred uterus during second trimester misoprostol-induced labour for a missed ...

  6. General method for calculation of hydrogen-ion concentration in multicomponent acid-base mixtures.

    Science.gov (United States)

    Ventura, D A; Ando, H Y

    1980-08-01

    A generalized method for the rapid evaluation of complicated ionic equilibria in terms of the hydrogen-ion concentration was developed. The method is based on the derivation of a single general equation that can be used to evaluate any mixture. A tableau method was also developed which allows calculation of the numerical solution to the general equation without computer analysis or graphical or intuitive approximations. Examples illustrating the utility of the method are presented, including a mixture of barbital, citric acid, boric acid, monobasic sodium phosphate, and sodium hydroxide. Calculated hydrogen-ion concentrations showed good agreement with experimental values for simple and complex solutions. The major advantages of the method are its simplicity and the fact that numerical solutions are obtained without initial approximations in the calculations. However, activity corrections are not included in the calculations.
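
    In methods of this kind, the single general equation is a charge balance whose only unknown is [H+]. A minimal sketch for a monoprotic acid partially neutralised with NaOH, solved with a root finder; the species and constants are illustrative, not the paper's tableau:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    KW = 1e-14

    def alpha1(h, ka):
        """Fraction of a monoprotic acid present as its conjugate base A-."""
        return ka / (h + ka)

    def charge_balance(h, c_acid, ka, c_naoh):
        # Na+ + H+ = A- + OH-
        return c_naoh + h - c_acid * alpha1(h, ka) - KW / h

    # example: 0.05 M acetic acid (Ka = 1.8e-5) half-neutralised with 0.025 M NaOH
    h = brentq(charge_balance, 1e-14, 1.0, args=(0.05, 1.8e-5, 0.025))
    print(f"pH = {-np.log10(h):.2f}")   # expect pH near pKa = 4.74
    ```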

  7. Research on Transformer Direct Magnetic Bias Current Calculation Method Based on Field Circuit Iterative Algorithm

    Directory of Open Access Journals (Sweden)

    Ning Yao

    2014-08-01

    Full Text Available In order to analyze the DC magnetic bias effect on neutral-grounded AC transformers around a convertor station grounding electrode, a new calculation method, the field circuit iterative algorithm, is proposed in this article. The method includes a partial iterative algorithm and a concentrated iterative algorithm, and builds on direct injection current calculation methods, the field circuit coupling method and the resistor network method. The field circuit iterative algorithm considers not only the effect of the direct convertor station grounding electrode current on substation grounding grid potentials, but also the effect of the current of each substation grounding grid on the grounding grid potentials of the other substations. Comparative analysis of the calculation models proves that the field circuit iterative algorithm is more accurate and adaptable than the field circuit coupling method and the resistor network method for calculating the DC current component of transformers in an AC power system modelled by the equivalent resistance circuit of the DC path.

  8. The calculation and analysis of MTR-type assemble based on HELIOS

    International Nuclear Information System (INIS)

    Zhang Wenliang; Zhao Qiang

    2014-01-01

    Cell and assembly calculations are the foundation of 3D core calculations. The spatial discretization used for the cell and burnup calculations significantly influences the results of the integral transport solutions. The differences arising in the neutron flux distribution from different spatial discretization strategies are demonstrated, and these differences in the flux distribution cause significant changes in the isotopic densities and the k-inf value. The calculated results differ when users choose different spatial discretization and current coupling order conditions. This problem is studied for the MTR benchmark based on HELIOS-1.11. The influence of the spatial discretization and current coupling order conditions is investigated in order to discuss how to choose reasonable spatial discretization conditions and the current coupling order in HELIOS. (authors)

  9. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.

    1995-01-01

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) No. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of the application of this methodology to nineteen selected variables calculated by ten participants are summarized here, and the comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)
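
    The FFT-based methodology is commonly summarised by an "average amplitude" figure of merit. A minimal sketch with synthetic signals, assuming the usual definition AA = sum|FFT(calc - exp)| / sum|FFT(exp)| and omitting windowing and frequency cut-off details:

    ```python
    import numpy as np

    t = np.linspace(0.0, 100.0, 1024)                 # time, s
    exp_sig = 1.0 + 0.5 * np.exp(-t / 30.0)           # "experimental" trend
    calc_sig = 1.02 + 0.46 * np.exp(-t / 27.0)        # "calculated" trend

    err_spectrum = np.abs(np.fft.rfft(calc_sig - exp_sig))
    exp_spectrum = np.abs(np.fft.rfft(exp_sig))
    aa = err_spectrum.sum() / exp_spectrum.sum()      # smaller AA = better accuracy
    print(f"average amplitude AA = {aa:.3f}")
    ```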

  10. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of the activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high amount of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
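
    A minimal sketch of the allocation step at the heart of ABC, distributing administrative activity-center costs to operating departments via cost drivers; the centers, drivers and figures are made up:

    ```python
    admin_costs = {"admissions": 120_000.0, "housekeeping": 80_000.0}  # yearly cost
    # cost-driver shares of each operating department
    driver_share = {
        "admissions":  {"diagnostic": 0.4, "hospitalized": 0.6},   # by patient count
        "housekeeping": {"diagnostic": 0.3, "hospitalized": 0.7},  # by floor area
    }

    allocated = {"diagnostic": 0.0, "hospitalized": 0.0}
    for center, cost in admin_costs.items():
        for dept, share in driver_share[center].items():
            allocated[dept] += cost * share
    print(allocated)   # indirect cost absorbed by each operating department
    ```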

  11. Predicting the outcome of prostate biopsy: comparison of a novel logistic regression-based model, the prostate cancer risk calculator, and prostate-specific antigen level alone.

    Science.gov (United States)

    Hernandez, David J; Han, Misop; Humphreys, Elizabeth B; Mangold, Leslie A; Taneja, Samir S; Childs, Stacy J; Bartsch, Georg; Partin, Alan W

    2009-03-01

    To develop a logistic regression-based model to predict prostate cancer on biopsy, and to compare its performance to the risk calculator developed by the Prostate Cancer Prevention Trial (PCPT), which is based on age, race, prostate-specific antigen (PSA) level, digital rectal examination (DRE), family history, and history of a previous negative biopsy, and to PSA level alone. We retrospectively analysed the data of 1280 men who had a biopsy while enrolled in a prospective, multicentre clinical trial. Of these, 1108 had all relevant clinical and pathological data available and no previous diagnosis of prostate cancer. Using the PCPT risk calculator, we calculated the risks of prostate cancer and of high-grade disease (Gleason score ≥7) for each man. Receiver operating characteristic (ROC) curves for the risk calculator, PSA level and the novel regression-based model were compared. Prostate cancer was detected in 394 (35.6%) men, and 155 (14.0%) had Gleason ≥7 disease. For cancer prediction, the area under the ROC curve (AUC) for the risk calculator was 66.7%, statistically greater than the AUC for PSA level of 61.9% (P < 0.05); the AUCs for high-grade disease likewise favoured the risk calculator over PSA level (P = 0.024). The AUC increased to 71.2% (P < 0.05) with the novel regression-based model. The PCPT risk calculator modestly improves the performance of PSA level alone in predicting an individual's risk of prostate cancer or high-grade disease on biopsy. This predictive tool might be enhanced by including percentage free PSA and the number of biopsy cores.
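
    A minimal sketch of the study's comparison design, fitting a logistic regression on a synthetic cohort and comparing its ROC AUC with PSA level alone; all data and coefficients are synthetic, for illustration only:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 1000
    psa = rng.lognormal(1.2, 0.6, n)            # PSA, ng/ml
    age = rng.uniform(50, 75, n)
    dre = rng.integers(0, 2, n)                 # abnormal DRE (0/1)
    # synthetic outcome loosely driven by all three predictors
    logit = -5.0 + 0.25 * psa + 0.03 * age + 0.8 * dre
    cancer = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([psa, age, dre])
    model = LogisticRegression(max_iter=1000).fit(X, cancer)
    auc_model = roc_auc_score(cancer, model.predict_proba(X)[:, 1])
    auc_psa = roc_auc_score(cancer, psa)        # PSA used directly as a score
    print(f"AUC model = {auc_model:.3f}, AUC PSA alone = {auc_psa:.3f}")
    ```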

  12. Monte Carlo-based dose calculation engine for minibeam radiation therapy.

    Science.gov (United States)

    Martínez-Rovira, I; Sempau, J; Prezado, Y

    2014-02-01

    Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue-sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked against experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods made it possible to achieve computation times acceptable for clinical settings (approximately 2 h). The calculation engine can be easily adapted, with little or no programming effort, to other synchrotron sources or to dose calculations in the presence of contrast agents. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wu Wan'e

    2012-01-01

    Full Text Available A practical scheme for selecting the characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant, based on a backpropagation neural network, was established, validated, and then used to predict the primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of the burning rate is less than ±7.3%; within the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculated data is consistent with the experimental results.

  14. Development of simplified methods and data bases for radiation shielding calculations for concrete

    Energy Technology Data Exchange (ETDEWEB)

    Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Marable, J.H.; Bartine, D.A.

    1986-06-01

    Two simplified methods have been developed which allow rapid and accurate calculations of the attenuation of neutrons and gamma rays through concrete shields. One method, called the BEST method, uses sensitivity coefficients to predict changes in the transmitted dose from a fission source that are due to changes in the composition of the shield. The other method uses transmission factors based on adjoint calculations to predict the transmitted dose from an arbitrary source incident on a given shield. The BEST method, utilizing an exponential model that is shown to be a significant improvement over the traditional linear model, has been successfully applied to slab shields of standard concrete and rebar concrete. It has also been tested for a special concrete that has been used in many shielding experiments at the ORNL Tower Shielding Facility, as well as for a deep-penetration sodium problem. A comprehensive data base of concrete sensitivity coefficients generated as part of this study is available for use in the BEST model. For problems in which the changes are energy independent, application of the model and data base can be accomplished with a desk calculator. Larger-scale calculations required for problems that are energy dependent are facilitated by employing a simple computer code, which is included, together with the data base and other calculational aids, in a data package that can be obtained from the ORNL Radiation Shielding Information Center (request DLC-102/CONSENT). The transmission factors used by the second method are a byproduct of the sensitivity calculations and are mathematically equivalent to the surface adjoint function phi*, which gives the dose equivalent transmitted through a slab of thickness T due to one particle incident on the surface in the g-th energy group and j-th direction. 18 refs., 1 fig., 50 tabs.
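
    A minimal sketch of a BEST-style prediction, assuming the exponential sensitivity model D/D0 = exp(sum_i S_i * delta_rho_i/rho_i); the sensitivity coefficients and composition changes below are made up, not taken from the CONSENT data base:

    ```python
    import math

    # relative dose sensitivity per relative density change, per element (made up)
    sens = {"H": -1.8, "O": -0.6, "Si": -0.3, "Ca": -0.2, "Fe": -0.9}
    delta = {"H": -0.10, "Fe": +0.25}   # -10% hydrogen, +25% iron (rebar-like change)

    log_ratio = sum(sens[el] * frac for el, frac in delta.items())
    print(f"predicted transmitted-dose ratio D/D0 = {math.exp(log_ratio):.3f}")
    ```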

  15. Radial electromagnetic force calculation of induction motor based on multi-loop theory

    Directory of Open Access Journals (Sweden)

    HE Haibo

    2017-12-01

    Full Text Available [Objectives] In order to study the vibration and noise of induction motors, a method of radial electromagnetic force calculation is established on the basis of the multi-loop model. [Methods] Based on the method of calculating the air-gap magnetomotive force from the stator and rotor fundamental wave currents, analytic formulas are deduced for calculating the air-gap magnetomotive force and the radial electromagnetic force generated by any stator winding and rotor conducting bar current. The multi-loop theory and the calculation method for the electromagnetic parameters of a motor are introduced, and a dynamic simulation model of an induction motor is built to obtain the currents of the stator windings and rotor conducting bars, from which the radial electromagnetic force is calculated. The radial electromagnetic force and the resulting vibration are then estimated. [Results] The calculated vibration acceleration frequencies and amplitudes of the motor are consistent with the experimental results. [Conclusions] The results and the calculation method can support the low-noise design of converters.

  16. Slope excavation quality assessment and excavated volume calculation in hydraulic projects based on laser scanning technology

    Directory of Open Access Journals (Sweden)

    Chao Hu

    2015-04-01

    Full Text Available Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause error. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional quality assessment and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumption in excavation quality assessment with the laser scanning technology can be reduced by 70%–90%, as compared with the traditional method. The excavated volume calculated with the scanned data only slightly differs from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.
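
    A minimal sketch of the excavated-volume step, differencing pre- and post-excavation surfaces on a common grid and summing cell volumes; the gridding of the raw scanned point cloud is omitted and the surfaces are synthetic:

    ```python
    import numpy as np

    cell = 0.5                                   # grid spacing, m
    x = np.arange(0.0, 20.0, cell)
    y = np.arange(0.0, 10.0, cell)
    X, Y = np.meshgrid(x, y)

    z_before = 100.0 - 0.05 * X                  # original ground surface, m
    z_after = z_before - np.where((X > 5) & (X < 15), 2.0, 0.0)  # 2 m deep cut

    depth = np.clip(z_before - z_after, 0.0, None)   # excavation depth per cell
    volume = depth.sum() * cell * cell
    print(f"excavated volume: {volume:.1f} m^3")     # roughly 10 x 10 x 2 = 200 m^3
    ```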

  17. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    Science.gov (United States)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  18. Medication calculation: the potential role of digital game-based learning in nurse education.

    Science.gov (United States)

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that nurses conduct daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution for the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful in nursing education. Finally, mathematical drill games appear to improve students' attitudes toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education and to highlight the potential role of digital game-based learning in this area.

  19. The 'lottery' of cardiovascular risk estimation with Internet-based risk calculators.

    Science.gov (United States)

    Lippi, Giuseppe; Sanchis-Gomar, Fabian

    2018-03-02

    Cardiovascular disease (CVD) is the leading cause of disability and premature death around the world. The ongoing publication of systematic and critical literature reviews has contributed to generating a kaleidoscope of guidelines from different scientific organizations. We investigated the agreement among the most popular web-based CVD risk calculators on the Internet. We carried out a simple study, estimating the risk of CVD using the most popular Internet-based calculators. A Google search was performed, using the keyword "cardiovascular risk calculator", to identify the first 10 websites providing free online CVD risk calculators. We arbitrarily selected the cardiovascular profiles of two subjects of a typical Western family: a 55-year-old man at a likely intermediate cardiovascular risk and a 45-year-old woman at a probable low risk. The scores calculated for the two arbitrary CVD risk profiles, one supposed to be at intermediate risk and the other at lower risk, were extremely variable. More specifically, the 10-year CVD risk of the 55-year-old man varied from 3% to over 25% (median value, 12.9%; interquartile range [IQR], 10.7-19.0%), whereas that of the 45-year-old woman varied between 0% and 4% (median value, 1.2%; IQR, 0.4-2.2%), thus displaying a nearly 10-fold variation in both cases. We conclude from our analysis of 11 different Internet-based CVD risk calculators that the final 10-year risk score can be extremely different, especially for the 55-year-old man at predictably intermediate risk.

  20. Docking and Molecular Dynamics Calculations of Some Previously Studied and newly Designed Ligands to Catalytic Core Domain of HIV-1 Integrase and an Investigation to Effects of Conformational Changes of Protein on Docking Results

    Directory of Open Access Journals (Sweden)

    Selami Ercan

    2016-10-01

    Full Text Available Nowadays, AIDS still remains a worldwide pandemic and continues to cause many deaths arising from the HIV-1 virus. For nearly 35 years, drugs that target various steps of the virus life cycle have been developed. HIV-1 integrase mediates one of these steps, which is essential for the virus life cycle. Computer-aided drug design has been used in many drug design studies, as it was in the development of the first HIV-1 integrase inhibitor, Raltegravir. In this study, 3 ligands that are used as HIV-1 integrase inhibitors and 4 newly designed ligands were docked to the catalytic core domain of HIV-1 integrase. Each ligand was docked to three different conformations of the protein. The prepared complexes (21 in total) were subjected to 50 ns MD simulations and the results were analyzed. Finally, the binding free energies of the ligands were calculated. It was determined that the designed ligands L01 and L03 gave favorable results. The questions of whether ligands with low docking scores in one conformation of the protein could give better scores in another conformation, and whether the MD simulations carry differently oriented and differently localized ligands to the same position by the end of the simulation, were also answered.

  1. Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink

    Directory of Open Access Journals (Sweden)

    Shi Xiu Feng

    2016-01-01

    Full Text Available If the stability of the power system is destroyed, a large number of users will suffer power outages, and the whole system may even collapse, with extremely serious consequences. Taking a single machine infinite bus system as an example, the transient stability of the system when a two-phase ground fault occurs at point f and the circuit breakers on both sides of the faulted line trip simultaneously to clear the fault is analyzed by two methods, calculation and simulation, and the conclusions of the two are consistent; moreover, the simulation analysis proves superior to the calculation analysis.
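
    A minimal sketch of the numerical side of such an analysis, integrating the classical swing equation of a single-machine infinite-bus system through a fault and applying a crude first-swing criterion; all constants are assumed:

    ```python
    # Swing equation:  M * d2(delta)/dt2 = Pm - Pmax * sin(delta)
    import numpy as np
    from scipy.integrate import solve_ivp

    M = 0.05          # inertia constant (pu·s^2/rad), assumed
    Pm = 0.8          # mechanical power, pu
    Pmax_pre, Pmax_fault, Pmax_post = 2.0, 0.4, 1.5   # transfer limits, pu
    t_clear = 0.15    # fault-clearing time, s

    def swing(t, y):
        delta, omega = y
        pmax = Pmax_fault if t < t_clear else Pmax_post
        return [omega, (Pm - pmax * np.sin(delta)) / M]

    delta0 = np.arcsin(Pm / Pmax_pre)           # pre-fault operating angle
    sol = solve_ivp(swing, [0.0, 3.0], [delta0, 0.0], max_step=0.001)
    stable = sol.y[0].max() < np.pi             # crude first-swing criterion
    print(f"max rotor angle {np.degrees(sol.y[0].max()):.1f} deg, "
          f"first-swing stable: {stable}")
    ```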

  2. A NASTRAN DMAP procedure for calculation of base excitation modal participation factors

    Science.gov (United States)

    Case, W. R.

    1983-01-01

    This paper presents a technique for calculating the modal participation factors for base excitation problems using a DMAP alter to the NASTRAN real eigenvalue analysis Rigid Format. The DMAP program automates the generation of the seismic mass to add to the degrees of freedom representing the shaker input directions and calculates the modal participation factors. These are shown in the paper to be a good measure of the maximum acceleration expected at any point on the structure when the subsequent frequency response analysis is run.
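
    A minimal sketch of the quantity the DMAP alter computes, the base-excitation modal participation factors Gamma_i = phi_i^T M r / (phi_i^T M phi_i), for a toy 3-DOF system:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.5, 1.0])                     # mass matrix, kg
    K = np.array([[ 600.0, -300.0,    0.0],
                  [-300.0,  500.0, -200.0],
                  [   0.0, -200.0,  200.0]])         # stiffness, N/m
    r = np.ones(3)                                   # influence vector: uniform base motion

    w2, phi = eigh(K, M)                             # generalized eigenproblem K x = w2 M x
    gamma = (phi.T @ M @ r) / np.einsum('ij,jk,ki->i', phi.T, M, phi)
    for i, (w, g) in enumerate(zip(np.sqrt(w2), gamma), 1):
        print(f"mode {i}: f = {w / (2*np.pi):6.2f} Hz, participation = {g:+.3f}")
    ```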

  3. Core physics design calculation of mini-type fast reactor based on Monte Carlo method

    International Nuclear Information System (INIS)

    He Keyu; Han Weishi

    2007-01-01

    An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)

  4. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  5. Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    Part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data. This chapter is not a part of the code...

  6. Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms

    Directory of Open Access Journals (Sweden)

    Signes Pont María Teresa

    2007-01-01

    Full Text Available This paper presents a method to improve the calculation of functions which especially demand a great amount of computing resources. The method is based on the choice of a weighted primitive which enables function values to be calculated under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete fast Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.

  7. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits, including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.

  8. Validation of 3D volumetric-based renal function prediction calculator for nephron sparing surgery.

    Science.gov (United States)

    Corradi, Renato; Kabra, Aashish; Suarez, Melissa; Oppenheimer, Jacob; Okhunov, Zhamshid; White, Hugh; Nougaret, Stephanie; Vargas, Hebert A; Landman, Jaime; Coleman, Jonathan; Liss, Michael A

    2017-04-01

    To evaluate a recently published volume-based renal function prediction calculator intended to be used in surgical counseling for small renal masses. Retrospective data collection included three-dimensional calculation of the renal mass and parenchyma volumes of patients who had undergone extirpative therapy. The predicted glomerular filtration rate (GFR) was calculated using the online calculator and compared with the actual 6-month GFR. The Pearson correlation coefficient, paired t test and root-mean-square error (RMSE) were used for statistical analysis. After institutional review board approval, three institutions provided data for analysis. After patients with renal mass size >300 cc, renal size >400 cc or preoperative CKD ≥ stage 3 had been excluded, we retrospectively analyzed data from 136 patients. The median mass volume was 22.2 cc (IQR 7-49). In multiple linear regression analysis, the most significant variables predicting postoperative GFR were partial versus radical nephrectomy and preoperative GFR, with an overall R² of 0.68 (F = 26.13, P < 0.05). The calculator effectively predicts GFR and could potentially be used to help urologists and patients discuss renal function prior to extirpative renal surgery.

  9. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment
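
    A minimal sketch of the matching core, the best correlation between two radiographs over small shifts compared against the 0.80 threshold mentioned above; the rotation search and clinical preprocessing are omitted and the images are synthetic:

    ```python
    import numpy as np

    def best_shift_correlation(prev_img, curr_img, max_shift=5):
        """Maximum normalized cross-correlation over integer pixel shifts."""
        best = -1.0
        p = prev_img - prev_img.mean()
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                c = np.roll(curr_img, (dy, dx), axis=(0, 1))
                c = c - c.mean()
                r = (p * c).sum() / np.sqrt((p * p).sum() * (c * c).sum())
                best = max(best, r)
        return best

    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    same_patient = np.roll(base, (2, -1), axis=(0, 1)) + 0.05 * rng.random((64, 64))
    other_patient = rng.random((64, 64))

    THRESHOLD = 0.80   # acceptance threshold, as in the study
    print("same patient :", best_shift_correlation(base, same_patient) > THRESHOLD)
    print("other patient:", best_shift_correlation(base, other_patient) > THRESHOLD)
    ```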

  10. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    Science.gov (United States)

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamic simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol−1 per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix−sheet = 0.5 kcal⋅mol−1), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  11. Longitudinal Evaluation of Hospitalized Burn Patients in Sivas City Center for Six Months and Comparison with a Previously Held Community-based Survey

    Directory of Open Access Journals (Sweden)

    Ömer Faruk Erin

    2016-03-01

Objective: This study was designed to longitudinally demonstrate the rate and epidemiology of hospitalized burn patients in Sivas city center within 6 months. The second aim was to compare the results of the current study with those of a previously held community-based survey in the same region. Material and Methods: Patients who were hospitalized due to burn injuries in Sivas city for six months were longitudinally evaluated, and their epidemiological data were analyzed. Results: During the course of the study, 87 patients (49 males and 38 females) were hospitalized. The ratio of burn patients to the total number of hospitalized patients was 0.38%. The most common etiologic factor was scalds (70.1%). Burns generally took place in the kitchen (41.4%) and living room (31.4%), and the majority of the patients received cold water as first aid at the time of injury. The vast majority of patients were discharged from the hospital without the need for surgical intervention (83.9%), and the duration of treatment was between 1 and 14 days for 73.6% of the patients. Sixty patients (68.9%) had a total burn surface area under 10%. The total cost of the hospitalization period of these patients was 137,225 Turkish Lira (83,308–92,908 $), and the average cost per patient was 1,577 Turkish Lira (957–1,067 $). Conclusion: Our study revealed a considerable inconsistency when compared with the results of the community-based survey previously conducted in the same region. We concluded that hospital-based studies are far from reflecting the actual burn trauma potential of a given district in the absence of a reliable, standard, nationwide record system. Population-based surveys should be encouraged to make an accurate assessment of burn rates in countries lacking reliable record systems.

  12. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    Science.gov (United States)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field subject to specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models: the acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are calculated by the proposed method and compared with the results from other existing methods. The method is then experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab; the amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field must be taken into account.

  13. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    Science.gov (United States)

    Fang, G. J.; Bao, H.

    2017-12-01

The most widely used method for calculating electrical distances is the sensitivity method. The sensitivity matrix is the result of linearization and rests on the assumption that active and reactive power are decoupled, so it is inaccurate. In addition, it takes the ratio of two partial derivatives as the relationship between two dependent variables, which has no physical meaning. This paper presents a new method for calculating electrical distance, the transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is S instead of Q: Q itself has no direction, and the grid delivers complex power, so S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be seen that the numerators of the feedback parts of the two block diagrams are the transmission impedances. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electrical distances and comparing with the results of the sensitivity method shows that the transmission impedance method adapts better to dynamic changes in the system and reaches a reasonable subarea division scheme.

  14. A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Marco Gonzalez

The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages, but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.

  15. Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation

    Directory of Open Access Journals (Sweden)

    Shengzhi Huang

    2014-01-01

Reasonable allocation of water resources by watershed management departments is of great significance for ensuring the sustainable development of river ecosystems, and it hinges on accurately calculating the instream ecological flow. To compute the instream ecological flow precisely, flow variation is taken into account in this study. The heuristic segmentation algorithm, which is suitable for detecting mutation points in flow series, is employed to identify the change points. Based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow under a fitted GEV distribution, which is very suitable for the healthy development of river ecosystems. Furthermore, to guarantee the sustainable development of river ecosystems under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, improved by replacing the average flow with the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
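A minimal sketch of the two quantities described above, assuming monthly flows are available as an array; the numerical mode search and the Tennant fraction are illustrative choices, not the authors' code:

```python
import numpy as np
from scipy.stats import genextreme

def max_ecological_flow(monthly_flows):
    """Fit a GEV distribution to monthly flows and return its mode,
    i.e. the highest-frequency flow used as the maximum instream
    ecological flow."""
    c, loc, scale = genextreme.fit(monthly_flows)
    grid = np.linspace(min(monthly_flows), max(monthly_flows), 2000)
    return grid[np.argmax(genextreme.pdf(grid, c, loc=loc, scale=scale))]

def modified_tennant(monthly_flows, fraction=0.1):
    """Modified Tennant: apply a Tennant percentage to the GEV mode
    instead of the mean flow (the 10% fraction is illustrative)."""
    return fraction * max_ecological_flow(monthly_flows)
```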

  16. GPU-based ultra-fast dose calculation using a finite size pencil beam model

    Science.gov (United States)

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.

    2009-10-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.

  17. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), caused by the algorithm adopted for the VC model; 2) discrete error (DE), usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
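The two quadrature rules named above are easy to state concretely; a minimal numpy sketch for a regular grid DEM (function names are ours), where the difference between the two results gives a rough handle on the truncation-error component:

```python
import numpy as np
from scipy.integrate import simpson  # SciPy >= 1.6

def volume_trapezoidal(dem, dx, dy):
    """Trapezoidal double rule: each grid cell contributes the average
    of its four corner heights times the cell area."""
    corners = dem[:-1, :-1] + dem[:-1, 1:] + dem[1:, :-1] + dem[1:, 1:]
    return float(corners.sum()) * dx * dy / 4.0

def volume_simpson(dem, dx, dy):
    """Simpson's double rule, applied along each axis in turn
    (most accurate with an odd number of samples per axis)."""
    return float(simpson(simpson(dem, dx=dx, axis=1), dx=dy))
```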

  18. Photon SAF calculation based on the Chinese mathematical phantom and comparison with the ORNL phantoms.

    Science.gov (United States)

    Qiu, Rui; Li, Junli; Zhang, Zhan; Wu, Zhen; Zeng, Zhi; Fan, Jiajin

    2008-12-01

The Chinese mathematical phantom (CMP) is a stylized human body model developed with the methods of the Oak Ridge National Laboratory (ORNL) mathematical phantom series (OMPS), using data from the Reference Asian Man and the Chinese Reference Man. It is constructed for radiation dose estimation for Mongolians, whose anatomical parameters differ to some extent from those of Caucasians. Specific absorbed fractions (SAFs) are useful quantities for primary estimation of internal radiation dose. In this paper, a general Monte Carlo code, the Monte Carlo N-Particle code (MCNP), is used to transport particles and calculate SAFs. A new variance reduction technique, called the 'pointing probability with forced collision' method, is implemented into MCNP to reduce the calculation uncertainty, especially for small-volume target organs. Finally, SAF data for all 31 organs of both sexes of the CMP are calculated. A comparison between SAFs based on the male phantoms of CMP and OMPS demonstrates that clear differences exist: more than 80% of the SAF data based on CMP are larger than those of OMPS. However, the differences are acceptable considering the differences in physique (they exceed one order of magnitude in less than 3% of situations). Furthermore, the trends in SAF with increasing photon energy based on the two phantoms agree well. This model complements existing phantoms of different age, sex and ethnicity.

  19. SU-E-T-182: Clinical Implementation of TG71-Based Electron MU Calculation and Comparison with a Commercial Secondary Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H; Guerrero, M; Chen, S; Langen, K; Prado, K [University of Maryland School of Medicine, Baltimore, MD (United States); Yang, X [Medstar RadAmerica, Baltimore, MD (United States); Schinkel, C [Tom Baker Cancer Centre, Calgary, AB (Canada)

    2015-06-15

Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report, using the report's formalism for extended SSD with air-gap corrections. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator at SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for 9 and 16 MeV electron beams. Results: Differences between TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD and 0.5–2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, both the TG-71-based computation method and the Mobius3D secondary calculation agree with measurement within clinically acceptable tolerances.

  20. Study on the Application of the Tie-Line-Table-Look-Up-Based Methods to Flash Calculations in Compositional Simulations

    DEFF Research Database (Denmark)

    Yan, Wei; Belkadi, Abdelkrim; Michelsen, Michael Locht

    2013-01-01

Flash calculation can be a time-consuming part of compositional reservoir simulations, and several approaches have been proposed to speed it up. One recent approach is the shadow-region method, which reduces the computation time mainly by skipping stability analysis for a large portion of the compositions in the single-phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be used with initial estimates from the previous step. Another approach is the compositional-space adaptive-tabulation (CSAT) approach, which is based on tie-line table look-up (TTL). It saves computation time by replacing rigorous phase-equilibrium calculations with stored results in a tie-line table whenever the new feed composition lies on one of the stored tie-lines within a certain tolerance. In this study, a modified version of CSAT, named the TTL method, is proposed to investigate the application of tie-line-table look-up to flash calculations in compositional simulations.

  1. The use of approximation formulae in calculations of acid-base equilibria-IV Mixtures of acid and base and titration of acid with base.

    Science.gov (United States)

    Narasaki, H

    1980-05-01

The pH of mixtures of mono- or diprotic acids and a strong base is calculated by use of approximation formulae and by the theoretically exact equations. The regions of useful application of the approximation formulae are identified, and the formulae are used to calculate the curves for titration of mono- or diprotic acids with a strong base.
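For the monoprotic case, the "theoretically exact" route amounts to solving the charge balance numerically; a minimal sketch (not the paper's formulae) assuming a single acid HA of concentration ca mixed with a strong base of concentration cb:

```python
import math
from scipy.optimize import brentq

def ph_mixture(ca, cb, ka, kw=1e-14):
    """pH of a monoprotic acid (concentration ca, dissociation constant
    ka) mixed with a strong base (concentration cb), from the exact
    charge balance [Na+] + [H+] = [OH-] + [A-]."""
    def charge_balance(h):
        oh = kw / h
        a = ca * ka / (ka + h)   # [A-] from the acid dissociation equilibrium
        return cb + h - oh - a
    h = brentq(charge_balance, 1e-14, 1.0)   # bracket the physical root
    return -math.log10(h)

# Half-neutralized 0.1 M acetic acid (Ka ~ 1.8e-5): pH is close to pKa.
print(round(ph_mixture(0.1, 0.05, 1.8e-5), 2))   # ~4.74
```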

  2. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

Following abdominal surgery, extensive adhesions often occur, and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who previously underwent one or two laparotomies. Pathology of the digestive system, pathology of the genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients, insertion of the Veres needle and trocar in the umbilical region was performed, i.e. a closed laparoscopy technique. Adhesions in the region of the umbilicus were not found in a single patient, and no abdominal organs were injured.

  3. The calculation of surface free energy based on embedded atom method for solid nickel

    International Nuclear Information System (INIS)

    Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng

    2013-01-01

Highlights: • A new solution for accurate prediction of surface free energy based on the embedded atom method is proposed. • The temperature-dependent anisotropic surface energy of solid nickel is obtained. • In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded atom method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature-dependent anisotropic surface energy of bulk nickel and the surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data.
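For context, the embedding-energy argument corrected in this work enters the standard EAM total energy (a textbook relation, not reproduced from the paper):

```latex
% EAM total energy: pair interactions plus the energy of embedding
% atom i in the host electron density \bar\rho_i.
E_{\mathrm{tot}} = \sum_i F\!\left(\bar\rho_i\right)
  + \frac{1}{2}\sum_{i\neq j}\phi\!\left(r_{ij}\right),
\qquad
\bar\rho_i = \sum_{j\neq i}\rho\!\left(r_{ij}\right).
```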

  4. Research on trust calculation of wireless sensor networks based on time segmentation

    Science.gov (United States)

    Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin

    2017-05-01

Because wireless sensor networks differ in character from traditional networks, they readily suffer intrusion from compromised nodes. A trust mechanism is the most effective way to defend against such internal attacks. Aiming at the shortcomings of existing trust mechanisms, a method for calculating trust in wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the network's lifetime.
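The abstract gives no formulas, but the basic idea of time-segmented trust can be sketched as follows; the window length, exponential weighting and neutral prior are our assumptions, not the paper's scheme:

```python
import math

def segmented_trust(interactions, now, window=60.0, decay=0.5):
    """Aggregate a node's trust over fixed time segments, weighting
    recent segments more heavily. `interactions` is a list of
    (timestamp, success_flag) pairs."""
    segments = {}
    for t, ok in interactions:
        idx = int((now - t) // window)        # 0 = most recent segment
        good, total = segments.get(idx, (0, 0))
        segments[idx] = (good + (1 if ok else 0), total + 1)
    num = den = 0.0
    for idx, (good, total) in segments.items():
        w = math.exp(-decay * idx)            # older segments count less
        num += w * (good / total)
        den += w
    return num / den if den else 0.5          # neutral prior when no data
```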

  5. DP-THOT - a calculational tool for bundle-specific decay power based on actual irradiation history

    International Nuclear Information System (INIS)

    Johnston, S.; Morrison, C.A.; Albasha, H.; Arguner, D.

    2005-01-01

A tool has been created for calculating the decay power of an individual fuel bundle taking account of its actual irradiation history, as tracked by the fuel management code SORO. The DP-THOT tool was developed in two phases: first as a standalone executable code for decay power calculation, which could accept as input an entirely arbitrary irradiation history; then as a module integrated with the SORO auxiliary codes, which directly accesses SORO history files to retrieve the operating power history of a bundle since it first entered the core. The methodology implemented in the standalone code is based on the ANSI/ANS-5.1-1994 formulation, specifically adapted for calculating decay power in irradiated CANDU reactor fuel by making use of fuel-type-specific parameters derived from WIMS lattice cell simulations for both 37-element and 28-element CANDU fuel bundle types. The approach also yields estimates of uncertainty in the calculated decay power quantities, based on the evaluated error in the built-in decay heat correlations for each fissile isotope, in combination with the estimated uncertainty in user-supplied inputs. The method was first implemented in the form of a spreadsheet, and following successful testing against decay powers estimated using the code ORIGEN-S, the algorithm was coded in FORTRAN to create an executable program. The resulting standalone code, DP-THOT, accepts an arbitrary irradiation history and provides the calculated decay power and estimated uncertainty over any user-specified range of cooling times, for either 37-element or 28-element fuel bundles. The overall objective was to produce an integrated tool that could be used to find the decay power associated with any identified fuel bundle or channel in the core, taking into account the actual operating history of the bundles involved. The benefit is that the tool allows a more realistic calculation of bundle and channel decay powers for outage heat sink planning.
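The superposition over an arbitrary operating history works by differencing infinite-irradiation responses; a heavily simplified sketch (the two-term exponential fit below is hypothetical; the real ANSI/ANS-5.1 correlations use on the order of 23 terms per fissile isotope, and no uncertainty treatment is shown):

```python
import math

ALPHA = (0.04, 0.02)   # hypothetical amplitudes of the decay-heat fit
LAM = (1e-2, 1e-5)     # hypothetical decay constants, 1/s

def f_infinite(t_cool):
    """Decay power per unit operating power after an infinitely long
    irradiation, cooled for t_cool seconds."""
    return sum(a * math.exp(-l * t_cool) for a, l in zip(ALPHA, LAM))

def decay_power(history, t_eval):
    """Superpose an arbitrary irradiation history, given as a list of
    (power, t_start, t_end) operating segments. A segment of duration T
    contributes f(cooling) - f(cooling + T), the difference of two
    infinite-irradiation responses."""
    p = 0.0
    for power, t0, t1 in history:
        cooling = t_eval - t1
        p += power * (f_infinite(cooling) - f_infinite(cooling + (t1 - t0)))
    return p
```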

  6. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    Science.gov (United States)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

Partial differential equation (PDE) models are widely used in engineering and the natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration-based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration-based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model of a cellular patterning process. We observe good accuracy of the method as well as a significant speed-up compared with established methods. Integration-based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.

  7. Calculating the knowledge-based similarity of functional groups using crystallographic data

    Science.gov (United States)

    Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.

    2001-09-01

A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the 'central group') and functional group B (the 'contact group' or 'probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated, showing the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.

  8. Validation of a GPU-Based 3D dose calculator for modulated beams.

    Science.gov (United States)

    Ahmed, Saeed; Hunt, Dylan; Kapatoes, Jeff; Hayward, Robert; Zhang, Geoffrey; Moros, Eduardo G; Feygelman, Vladimir

    2017-05-01

A superposition/convolution GPU-accelerated dose computation algorithm (the Calculator) has recently been incorporated into commercial software and requires validation prior to clinical use. Three photon energies were examined: conventional 6 MV and 15 MV, and 10 MV flattening filter free (10 MVFFF). For a set of IMRT and VMAT plans based on four of the five AAPM Practice Guideline 5a downloadable datasets, ion chamber (IC) measurements were performed on water-equivalent phantoms. The average difference between the Calculator and IC was -0.3 ± 0.8% (1 SD). The same plans were projected on a phantom containing a biplanar diode array. We used the forthcoming criteria for routine gamma analysis: 3% dose-error with global (G) normalization, 2 mm distance-to-agreement, and a 10% low-dose cutoff. The γ (3%G/2 mm) average passing rate was 98.9 ± 2.1%. Measurement-guided three-dimensional dose reconstruction on the patient CT dataset (excluding the Lung case) resulted in a similar average agreement rate with the Calculator: 98.2 ± 2.0%. The mean γ (3%G/2 mm) passing rate comparing the Calculator to the TPS (again excluding the Lung case) was 99.0 ± 1.0%. Because of its significant inhomogeneity, the Lung case was investigated separately. The Calculator has an alternate heterogeneity correction mode that can change the results in the thorax for higher-energy beams (15 MV). As this correction is nonphysical and was optimized for simple slab geometries, its application leads to mixed results when compared to the TPS and independent Monte Carlo calculations, depending on the CT dataset and the plan. The Calculator vs. TPS 15 MV Guideline 5a IMRT and VMAT plans demonstrate 96.3% and 93.4% γ (3%G/2 mm) passing rates, respectively. For the lower energies, which should be predominantly used in the thoracic region, the passing rates for the same plans and criteria range from 98.6% to 100%. Overall, the Calculator accuracy is sufficient for the intended use.
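For reference, the γ (3%G/2 mm) criterion used throughout this record can be stated in a few lines; a brute-force 1D sketch with global normalization (illustrative only, not the commercial implementation):

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing, dd=0.03, dta=2.0, cutoff=0.10):
    """Percentage of reference points (above the low-dose cutoff) whose
    minimum generalized distance to the measured profile is <= 1.
    `ref` and `meas` are 1D numpy dose arrays on the same grid."""
    dmax = ref.max()
    x = np.arange(len(ref)) * spacing
    passed = total = 0
    for i, d_ref in enumerate(ref):
        if d_ref < cutoff * dmax:
            continue
        total += 1
        near = np.abs(x - x[i]) <= 3 * dta   # limit the search window
        gamma2 = ((x[near] - x[i]) / dta) ** 2 \
               + ((meas[near] - d_ref) / (dd * dmax)) ** 2
        passed += gamma2.min() <= 1.0
    return 100.0 * passed / total
```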

  9. A brief look at model-based dose calculation principles, practicalities, and promise.

    Science.gov (United States)

    Sloboda, Ron S; Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V

    2017-02-01

    Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by a MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources.

  10. Calculating acid-base and oxygenation status during COPD exacerbation using mathematically arterialised venous blood

    DEFF Research Database (Denmark)

    Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F

    2012-01-01

Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates the method for monitoring acid-base and oxygenation status during admission for exacerbation of chronic obstructive pulmonary disease (COPD). Methods: Simultaneous arterial and peripheral venous blood samples were analysed. Venous values were used to calculate arterial pH, PCO2 and PO2, and the calculated values were compared to measured values using Bland-Altman analysis and scatter plots. Results: Mean arterial pH, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, the differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules.

  11. Brittleness index calculation and evaluation for CBM reservoirs based on AVO simultaneous inversion

    Science.gov (United States)

    Wu, Haibo; Dong, Shouhua; Huang, Yaping; Wang, Haolong; Chen, Guiwu

    2016-11-01

In this paper, a new approach is proposed for coalbed methane (CBM) reservoir brittleness index (BI) calculations. The BI, a guide for selecting fracturing areas, is calculated from dynamic elastic parameters (the dynamic Young's modulus Ed and dynamic Poisson's ratio νd) obtained from amplitude-versus-offset (AVO) simultaneous inversion. Among the three classes of CBM reservoirs distinguished on the basis of brittleness in the theoretical part of this study, class I reservoirs with high BI values are identified as preferential targets for fracturing. We therefore first derive the AVO approximation equation expressed in terms of Ed and νd. This allows direct inversion of the dynamic elastic parameters through pre-stack AVO simultaneous inversion based on Bayes' theorem. A test model with Gaussian white noise and a through-well seismic profile inversion are then used to demonstrate the high reliability of the inverted parameters. Finally, the BI of a CBM reservoir section from the Qinshui Basin is calculated using the proposed method, and a class I reservoir section is detected through brittleness evaluation. From the outcome of this study, we believe the adoption of this new approach can act as a guide and reference for BI calculations and evaluations of CBM reservoirs.

  12. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

In this paper we present the results of band structure computer simulations of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to be run on computing clusters. This method significantly reduces the demands on computing resources compared with traditional approaches based on ab initio techniques, while providing adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  13. Calculation of effect of burnup history on spent fuel reactivity based on CASMO5

    International Nuclear Information System (INIS)

    Li Xiaobo; Xia Zhaodong; Zhu Qingfu

    2015-01-01

Based on the burnup credit of actinides + fission products (APU-2), which is usually considered for spent fuel packages, the effect of power density and operating history on k∞ was studied. All the burnup calculations are based on the two-dimensional fuel assembly burnup program CASMO5. The results show that taking the core-average power density of the specified power plus a bounding margin of 0.0023 on k∞, and taking the operating history of the specified power without shutdowns during or between cycles plus a bounding margin of 0.0045 on k∞, meets the bounding principle of burnup credit. (authors)

  14. Substituent effect on redox potential of nitrido technetium complexes with Schiff base ligand. Theoretical calculations

    International Nuclear Information System (INIS)

    Takayama, T.; Sekine, T.; Kudo, H.

    2003-01-01

Theoretical calculations based on density functional theory (DFT) were performed to understand the effect of substituents on the molecular and electronic structures of technetium nitrido complexes with salen-type Schiff base ligands. The optimized structures of these complexes are square pyramidal. The electron density on the Tc atom of a complex with electron-withdrawing substituents is lower than that of a complex with electron-donating substituents. The HOMO energy is likewise lower in the complex with electron-withdrawing substituents than in the complex with electron-donating substituents. The charge on the Tc atom is a good measure that reflects the redox potential of the [TcN(L)] complex. (author)

  15. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  17. Absorbed doses behind bones with MR image-based dose calculations for radiotherapy treatment planning.

    Science.gov (United States)

    Korhonen, Juha; Kapanen, Mika; Keyrilainen, Jani; Seppala, Tiina; Tuomikoski, Laura; Tenhunen, Mikko

    2013-01-01

Magnetic resonance (MR) images are used increasingly in external radiotherapy target delineation because of their superior soft tissue contrast compared to computed tomography (CT) images. Nevertheless, radiotherapy treatment planning has traditionally been based on CT images, due to restrictive features of MR images such as the lack of electron density information. This research aimed to measure absorbed radiation doses in material behind different bone parts, and to evaluate dose calculation errors in two pseudo-CT images: first, assuming a single electron density value for the bones, and second, converting the electron density values inside bones from T1/T2*-weighted MR image intensity values. A dedicated phantom was constructed using fresh deer bones and gelatine. The effect of different bone parts on the absorbed dose behind them was investigated with a single open field at 6 and 15 MV, measuring clinically detectable dose deviations with an ionization chamber matrix. Dose calculation deviations in a conversion-based pseudo-CT image and in a bulk density pseudo-CT image, where the relative electron density to water for the bones was set to 1.3, were quantified by comparing the calculation results with those obtained in a standard CT image by superposition and Monte Carlo algorithms. The calculations revealed that the applied bulk density pseudo-CT image causes deviations of up to 2.7% (6 MV) and 2.0% (15 MV) in the dose behind the examined bones. The corresponding values in the conversion-based pseudo-CT image were 1.3% (6 MV) and 1.0% (15 MV). The examinations illustrated that representing the heterogeneous femoral bone (cortex denser than core) by a bulk density for the whole bone causes dose deviations of up to 2% both behind the bone edge and behind the middle part of the bone. This study indicates that the decrease in absorbed dose is not dependent on the bone diameter for all types of bones.

  18. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers.
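The regression idea reduces to a one-unknown least-squares fit once the precursor enrichment is fixed; a simplified sketch (function and variable names are hypothetical, and the paper solves for both unknowns simultaneously):

```python
import numpy as np

def fit_fractional_synthesis(measured, natural, labeled_by_p):
    """Choose the precursor enrichment p whose two-component model
    (1-f)*natural + f*labeled(p) best reproduces the measured isotopomer
    distribution, solving for f by linear least squares. All
    distributions are numpy arrays on the same mass bins."""
    best = (None, None, np.inf)
    for p, labeled in labeled_by_p.items():
        # measured - natural = f * (labeled - natural): one unknown, f.
        x = (labeled - natural).reshape(-1, 1)
        y = measured - natural
        f, res, *_ = np.linalg.lstsq(x, y, rcond=None)
        rss = float(res[0]) if res.size else float(((x @ f - y) ** 2).sum())
        if rss < best[2]:
            best = (p, float(f[0]), rss)
    return best[:2]   # (precursor pool enrichment, fractional synthesis)
```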

  19. Calculation of the similarity rate between images based on the local minima present Therein

    Directory of Open Access Journals (Sweden)

    K. Hourany

    2016-12-01

Hourany, K., Benmeddour, F., Moulin, E., Assaad, J. and Zaatar, Y. Calculation of the similarity rate between images based on the local minima present therein. 2016. Lebanese Science Journal, 17(2): 177-192. Image processing is a very broad field spanning both computer science and applied mathematics. It studies the enhancement and transformation of digital images, permitting the improvement of image quality and the extraction of information. The comparison of digital images is an important problem that has been addressed in several studies because of its many applications, especially in the field of control and surveillance, such as Structural Health Monitoring using acoustic waves. Digital representation of images makes it possible to compare them automatically, notably by detecting and quantifying the differences between them. In this study we present an algorithm for calculating the similarity rate between images based on the local minima present therein. The algorithm is divided into two main parts: the first explains how to extract the local minima from an image, and the second shows how to calculate the similarity rate between two images.

  20. Feasibility study of context-awareness device Comfort calculation methods and their application to comfort-based access control

    DEFF Research Database (Denmark)

    Guo, Jingjing; Jensen, Christian D.; Ma, Jianfeng

    2016-01-01

Mobile devices have become more powerful and are increasingly integrated into the everyday life of people, from playing games, taking pictures and interacting with social media to replacing credit cards in payment solutions. Some actions may only be appropriate in some situations, so the security of a mobile device is increasingly linked to its context, such as its location and surroundings (e.g. objects in the immediate environment). However, situational awareness and context are not captured by traditional security models. In this paper, we examine the notion of Device Comfort, which captures a device's ability to secure and reason about its environment. Specifically, we study the feasibility of two device comfort calculation methods we proposed in previous work, using trace-driven simulations based on a large body of sensed data from mobile devices in the real world.

  1. Hypothesis testing and power calculations for taxonomic-based human microbiome data.

    Science.gov (United States)

    La Rosa, Patricio S; Brooks, J Paul; Deych, Elena; Boone, Edward L; Edwards, David J; Wang, Qin; Sodergren, Erica; Weinstock, George; Shannon, William D

    2012-01-01

    This paper presents new biostatistical methods for the analysis of microbiome data based on a fully parametric approach using all the data. The Dirichlet-multinomial distribution allows the analyst to calculate power and sample sizes for experimental design, perform tests of hypotheses (e.g., compare microbiomes across groups), and to estimate parameters describing microbiome properties. The use of a fully parametric model for these data has the benefit over alternative non-parametric approaches such as bootstrapping and permutation testing, in that this model is able to retain more information contained in the data. This paper details the statistical approaches for several tests of hypothesis and power/sample size calculations, and applies them for illustration to taxonomic abundance distribution and rank abundance distribution data using HMP Jumpstart data on 24 subjects for saliva, subgingival, and supragingival samples. Software for running these analyses is available.
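Power for such designs can also be approximated by straightforward Monte Carlo simulation from the Dirichlet-multinomial model; the sketch below uses a simple chi-square-style statistic with a parametric null, which is our illustrative stand-in for the paper's likelihood-based tests:

```python
import numpy as np

rng = np.random.default_rng(0)

def dm_sample(alpha, depth, n_subjects):
    """Draw taxa count vectors from a Dirichlet-multinomial."""
    p = rng.dirichlet(alpha, size=n_subjects)
    return np.array([rng.multinomial(depth, pi) for pi in p])

def power_by_simulation(alpha1, alpha2, depth=1000, n=24, reps=200):
    """Monte Carlo power estimate for detecting a difference between
    two group mean compositions."""
    def stat(a, b):
        return (((a - b) ** 2) / (a + b + 1e-9)).sum()
    hits = 0
    for _ in range(reps):
        observed = stat(dm_sample(alpha1, depth, n).mean(axis=0),
                        dm_sample(alpha2, depth, n).mean(axis=0))
        # Parametric null: both groups drawn from the alpha1 model.
        null = [stat(dm_sample(alpha1, depth, n).mean(axis=0),
                     dm_sample(alpha1, depth, n).mean(axis=0))
                for _ in range(99)]
        hits += observed > np.quantile(null, 0.95)
    return hits / reps
```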

  3. Vertex based missing mass calculator for 3-prong hadronically decaying tau leptons in the ATLAS detector

    CERN Document Server

    Maddocks, Harvey

In this thesis my personal contributions to the ATLAS experiment are presented; these consist of studies and analyses relating to tau leptons. The first main section contains work on the identification of hadronically decaying tau leptons, my specific contribution being the electron veto. This work involved improving the choice of variables used to discriminate against electrons incorrectly identified as tau leptons. These variables were optimised to be robust against increasing pile-up, which is present in this data period, and the resulting efficiencies are independent of pile-up. The second main section contains an analysis of Z → ττ decays; my specific contribution was the calculation of the detector acceptance factors and systematics. The third and final section contains an analysis of the performance of a new vertex-based missing mass calculator for 3-prong hadronically decaying tau leptons. It was found that in its current state it performs just as well as the existing methods. However it...

  4. A massively-parallel electronic-structure calculations based on real-space density functional theory

    International Nuclear Information System (INIS)

    Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro

    2010-01-01

Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N³) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.

  5. Dose calculation using a numerical method based on Haar wavelets integration

    Energy Technology Data Exchange (ETDEWEB)

    Belkadhi, K., E-mail: khaled.belkadhi@ult-tunisie.com [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); Manai, K. [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); College of Science and Arts, University of Bisha, Bisha (Saudi Arabia)

    2016-03-11

This paper deals with the calculation of the absorbed dose in a gamma-ray irradiation cell. Direct measurement and full simulation are expensive and time-consuming; an alternative is a numerical method that can quickly and efficiently estimate the absorbed dose by approximating the photon flux at a specific point in space. To validate the numerical integration method based on Haar wavelets for absorbed dose estimation, a study with many configurations was performed. The results obtained with the Haar wavelet method showed very good agreement with the simulation, demonstrating good efficacy and acceptable accuracy. - Highlights: • A numerical integration method using Haar wavelets is detailed. • Absorbed dose is estimated with the Haar wavelet method. • The absorbed dose calculated using Haar wavelets is compared with Monte Carlo simulation using Geant4.

  6. A theoretical study of blue phosphorene nanoribbons based on first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Jiafeng; Si, M. S., E-mail: sims@lzu.edu.cn; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China)

    2014-08-21

    Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both the two types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon's width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters to describe quantum confinement are obtained by fitting the calculated band gaps with respect to their widths. The results show that the quantum confinement in armchair nanoribbons is stronger than that in zigzag ones. This study provides an efficient approach to tune the band gap in BPNRs.

  7. A steady-state target calculation method based on "point" model for integrating processes.

    Science.gov (United States)

    Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei

    2015-05-01

To eliminate the influence of model uncertainty on steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target solution exists. The optimization method solves the steady-state optimization problem of integrating processes within a two-stage framework: it builds a simple "point" model for the steady-state prediction and compensates the error between the "point" model and the real process in each sampling interval. Simulation results illustrate that the outputs of integrating variables can be kept within constraints, and that the calculation errors between actual outputs and optimal set-points are small, indicating that the steady-state prediction model can accurately predict the future outputs of integrating variables.

  8. Comparison of Different Calculation Approaches for Defining Microbiological Control Levels Based on Historical Data.

    Science.gov (United States)

    Gordon, Oliver; Goverde, Marcel; Pazdan, James; Staerk, Alexandra; Roesti, David

    2015-01-01

In the present work we compared different calculation approaches for their ability to accurately define microbiological control levels based on historical data. To that end, real microbiological data were used in simulation experiments. The results of our study confirmed that assuming a normal distribution is not appropriate for this purpose. Assuming a Poisson distribution generally underestimated the control level, and its predictive power for future values was highly insufficient. The non-parametric Excel percentile predicted future values well in our simulation experiments (although not as well as some of the parametric models). With the limited amount of data used in the simulations, the calculated control levels for the upper percentiles were on average higher and more variable compared with the parametric models, because the largest observed value was generally defined as the control level. Accordingly, the Excel percentile is less robust towards outliers and requires more data to accurately define control levels than parametric models do. The negative binomial as well as the zero-inflated negative binomial distribution, both parametric models, had good predictive power for future values; nonetheless, on the basis of our simulation experiments, we saw no evidence to generally prefer the zero-inflated model over the non-inflated one. Finally, with our data, the gamma distribution on average had at least as good predictive power as the negative binomial and zero-inflated negative binomial distributions for percentiles ≥98%, indicating that it may represent a viable option for calculating microbiological control levels at high percentiles, presumably because it fitted the upper end of the distribution better than the other models. Since microbiological control levels would in general be based on the upper percentiles, microbiologists may exclusively rely on the gamma distribution for such calculations.
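As a concrete example of the parametric route, a negative binomial control level can be set from historical counts in a few lines; the method-of-moments fit below is a simplification of the maximum-likelihood fits a validated implementation would use:

```python
import numpy as np
from scipy.stats import nbinom

def control_level_nbinom(counts, percentile=99.0):
    """Fit a negative binomial to historical colony counts by the
    method of moments and return the chosen upper percentile as the
    microbiological control level."""
    m, v = np.mean(counts), np.var(counts, ddof=1)
    if v <= m:               # guard: NB requires overdispersion
        v = m * 1.0001
    p = m / v                # SciPy parameterization: mean = r(1-p)/p
    r = m * p / (1 - p)
    return int(nbinom.ppf(percentile / 100.0, r, p))
```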

  9. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm used in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for phase-only holograms encoded from a complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstructions than the traditional method.
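
    A minimal sketch of the GS iteration for a phase-only hologram, alternating between unit amplitude in the hologram plane and the target amplitude in the image plane; the square test pattern, grid size, and iteration count are arbitrary choices, not the paper's stereoscopic pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    target_amp = np.zeros((128, 128))
    target_amp[32:96, 32:96] = 1.0   # made-up target amplitude

    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(50):
        recon = np.fft.fft2(np.exp(1j * phase))            # hologram -> image plane
        recon = target_amp * np.exp(1j * np.angle(recon))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(recon))              # keep only the phase

    recon_amp = np.abs(np.fft.fft2(np.exp(1j * phase)))
    recon_amp /= recon_amp.max()
    print(f"mean amplitude error: {np.abs(recon_amp - target_amp).mean():.3f}")
    ```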

  10. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need to train several ML models. Self-correcting machine learning is implemented such that each additional layer corrects the errors of the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies of tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations by up to 90% through structure-based sampling and self-correcting KRR-based machine learning.
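
    A toy version of the workflow on a one-dimensional "PES", with a greedy farthest-point selection (the hypothetical helper farthest_point_subset) standing in for the paper's structure-based sampling, and a single KRR layer rather than the full self-correcting stack:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    x = np.linspace(0.5, 3.0, 500).reshape(-1, 1)          # grid of geometries
    energy = (1.0 - np.exp(-1.5 * (x.ravel() - 1.0)))**2   # Morse-like toy PES

    def farthest_point_subset(x, n):
        """Greedily pick n points that spread over the grid."""
        idx = [0]
        for _ in range(n - 1):
            d = np.min(np.abs(x - x[idx].T), axis=1)  # distance to nearest pick
            idx.append(int(np.argmax(d)))
        return np.array(idx)

    train = farthest_point_subset(x, 50)                   # 10% of the grid
    model = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-8)
    model.fit(x[train], energy[train])
    print(f"max |error| on the full grid: {np.abs(model.predict(x) - energy).max():.2e}")
    ```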

  11. Activity-based costing: a practical model for cost calculation in radiotherapy.

    Science.gov (United States)

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighted by factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining its constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment costs. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. As a result, products with a prolonged total or daily treatment time are the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
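
    The allocation principle (time consumption weighted by complexity) can be illustrated with a few invented numbers; the activities, rates, and weights below are placeholders, not the Leuven department's figures:

    ```python
    # Cost per minute of each activity group (invented).
    rate = {"simulation": 6.0, "planning": 4.0, "delivery": 9.0}

    # Treatment -> activity -> (minutes consumed, complexity weight).
    treatments = {
        "palliative": {"simulation": (20, 1.0), "planning": (30, 1.0), "delivery": (50, 1.0)},
        "head_neck": {"simulation": (45, 1.2), "planning": (120, 1.5), "delivery": (420, 1.3)},
    }

    for name, usage in treatments.items():
        cost = sum(minutes * weight * rate[act]
                   for act, (minutes, weight) in usage.items())
        print(f"{name}: {cost:.0f} (arbitrary currency units)")
    ```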

  12. Surface energy budget and thermal inertia at Gale Crater: Calculations from ground-based measurements.

    Science.gov (United States)

    Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M

    2014-08-01

    The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10⁴ m² to ∼10⁷ m². Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10² m². We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m⁻² K⁻¹ s⁻¹/² (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
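
    For context, thermal inertia is defined as I = sqrt(k·rho·c); the sketch below evaluates it for assumed regolith properties, whereas the study derives I by fitting modeled to measured diurnal ground temperatures:

    ```python
    import math

    k = 0.04      # thermal conductivity, W m^-1 K^-1 (assumed)
    rho = 1300.0  # bulk density, kg m^-3 (assumed)
    c = 630.0     # specific heat, J kg^-1 K^-1 (assumed)

    I = math.sqrt(k * rho * c)
    print(f"I = {I:.0f} J m^-2 K^-1 s^-1/2")
    ```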

  13. Antigenic cartography of H1N1 influenza viruses using sequence-based antigenic distance calculation.

    Science.gov (United States)

    Anderson, Christopher S; McCall, Patrick R; Stern, Harry A; Yang, Hongmei; Topham, David J

    2018-02-12

    The ease with which influenza virus sequence data can be used to estimate antigenic relationships between strains, and the existence of databases containing sequence data for hundreds of thousands of influenza strains, make sequence-based antigenic distance estimates an attractive approach for researchers. Antigenic mismatch between circulating strains and vaccine strains results in significantly decreased vaccine effectiveness. Furthermore, antigenic relatedness between the vaccine strain and the strains an individual was originally primed with can affect the cross-reactivity of the antibody response. Thus, understanding the antigenic relationships between influenza viruses that have circulated is important to both vaccinologists and immunologists. Here we develop a method of mapping antigenic relationships between influenza virus strains using a sequence-based antigenic distance approach (SBM). We used a modified version of the p-all-epitope sequence-based antigenic distance calculation, which determines the antigenic relatedness between strains using influenza hemagglutinin (HA) coding sequence data, and we provide experimental validation of the p-all-epitope calculation. We calculated the antigenic distance between 4838 H1N1 viruses isolated from infected humans between 1918 and 2016. We demonstrate, for the first time, that sequence-based antigenic distances of H1N1 influenza viruses can be accurately represented in two-dimensional antigenic cartography using classic multidimensional scaling. Additionally, the model correctly predicted decreases in cross-reactive antibody levels with 87% accuracy and was highly reproducible even when small numbers of sequences were used. This work provides a highly accurate and precise bioinformatics tool that can be used to assess immune risk as well as to design optimized vaccination strategies. SBM accurately estimated the antigenic relationship between strains using HA sequence data. Antigenic maps of H1N1 virus strains reveal
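
    The two core steps (a per-epitope mismatch distance followed by multidimensional scaling) can be sketched as below; the sequences and epitope positions are invented, and scikit-learn's metric MDS is used as a stand-in for the classical MDS named in the abstract:

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    epitope_sites = [0, 2, 3, 5, 7]   # hypothetical epitope positions
    strains = {"A": "KDTSNLYRNV", "B": "KETSNLYKNV", "C": "RDTANLFKNV"}

    names = list(strains)
    n = len(names)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            a, b = strains[names[i]], strains[names[j]]
            # fraction of epitope positions that differ between the two strains
            dist[i, j] = sum(a[k] != b[k] for k in epitope_sites) / len(epitope_sites)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    for name, (x, y) in zip(names, coords):
        print(f"{name}: ({x:+.2f}, {y:+.2f})")
    ```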

  14. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include thermal-hydraulic feedback in the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly and, in the best case, over small spatial zones consisting of a pin cell), which are functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometric complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computation technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. Interest in Monte Carlo methods has increased thanks to improvements in the capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  15. A novel pH-responsive hydrogel based on calcium alginate, engineered by the previous formation of polyelectrolyte complexes (PECs), intended for vaginal administration.

    Science.gov (United States)

    Ferreira, Natália Noronha; Perez, Taciane Alvarenga; Pedreiro, Liliane Neves; Prezotti, Fabíola Garavello; Boni, Fernanda Isadora; Cardoso, Valéria Maria de Oliveira; Venâncio, Tiago; Gremião, Maria Palmira Daflon

    2017-10-01

    This work aimed to develop a calcium alginate hydrogel as a pH-responsive delivery system for polymyxin B (PMX) sustained release through the vaginal route. Two samples of sodium alginate from different suppliers were characterized. The molecular weights and M/G ratios determined were approximately 107 kDa and 1.93 for alginate_S, and 32 kDa and 1.36 for alginate_V. Polymer rheological investigations were further performed through the preparation of hydrogels. Alginate_V was selected for the subsequent incorporation of PMX because it yielded a pseudoplastic viscous system able to acquire a differentiated structure in the simulated vaginal microenvironment (pH 4.5). The PMX-loaded hydrogel (hydrogel_PMX) was engineered based on polyelectrolyte complex (PEC) formation between alginate and PMX, followed by crosslinking with calcium chloride. This system exhibited a morphology with variable pore sizes, ranging from 100 to 200 μm, and adequate syringeability. The liquid uptake of the hydrogel in an acid environment was minimized by the previous PEC formation. In vitro tests evidenced the hydrogel's mucoadhesiveness. PMX release was pH-dependent, and the system was able to sustain release for up to 6 days. A burst release was observed at pH 7.4, where drug release was driven by anomalous transport, as determined by the Korsmeyer-Peppas model. At pH 4.5, drug release correlated with the Weibull model and drug transport was driven by Fickian diffusion. Calcium alginate hydrogels engineered by the previous formation of PECs thus showed promise as a platform for the sustained release of cationic drugs via vaginal administration.
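
    A sketch of the release-kinetics fit mentioned above, using the Korsmeyer-Peppas law Mt/Minf = k·t^n on invented release data; the threshold n ≤ 0.45 for Fickian diffusion assumes a cylindrical geometry:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([2, 6, 12, 24, 48, 72])                      # hours (invented)
    release = np.array([0.10, 0.16, 0.21, 0.28, 0.37, 0.43])  # fraction released

    def korsmeyer_peppas(t, k, n):
        return k * t**n

    (k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.05, 0.5))
    regime = "Fickian diffusion" if n <= 0.45 else "anomalous transport"
    print(f"k = {k:.3f}, n = {n:.2f} -> {regime}")
    ```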

  16. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  17. A method for improving the calculation accuracy of acid-base constants by inverse gas chromatography.

    Science.gov (United States)

    Shi, Baoli; Qi, Dawei

    2012-03-30

    In this paper, studies were conducted in order to improve the calculation accuracy of acid-base constants measured by inverse gas chromatography. The conventional a·(γ_l^d)^0.5 parameters of DCM (dichloromethane), TCM (trichloromethane), and EtAcet (ethyl acetate) were corrected to 185, 212, and 235 Å²·(mJ/m²)^0.5 by analyzing the relationship between a·(γ_l^d)^0.5 and the boiling temperature of the probe solvents, where a is the molecular area and γ_l^d is the surface dispersive free energy of the probe solvents. To validate the availability of the new a·(γ_l^d)^0.5 values, the acid-base constants of polystyrene were measured. It was found that when the new a·(γ_l^d)^0.5 parameters were adopted, the final linear fit degree for the plot of -ΔH_a^s/AN* versus DN/AN* was enhanced from 0.993 to 0.999, and the standard deviation was decreased from 0.344 to 0.156. In addition, the general applicability of the new a·(γ_l^d)^0.5 parameters to improving the calculation accuracy of acid-base constants was also proved with a mathematical justification. Copyright © 2012 Elsevier B.V. All rights reserved.
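
    In the Gutmann approach underlying such plots, the acid constant Ka is the slope and the base constant Kd the intercept of -ΔH_a^s/AN* versus DN/AN*; the three probe values below are illustrative placeholders, not the paper's measurements:

    ```python
    import numpy as np

    dn_over_an = np.array([0.0, 1.2, 11.4])  # DN/AN* for three probes (illustrative)
    dh_over_an = np.array([1.1, 2.3, 14.0])  # -dH_a^s/AN* (illustrative)

    ka, kd = np.polyfit(dn_over_an, dh_over_an, 1)  # slope = Ka, intercept = Kd
    print(f"Ka = {ka:.2f}, Kd = {kd:.2f}")
    ```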

  18. Accelerating Atomic Orbital-based Electronic Structure Calculation via Pole Expansion plus Selected Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin

    2012-02-10

    We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy, and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall-clock time and the memory requirement of PEpSI are modest. This makes it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes on a single processor. We also show that the use of PEpSI does not lead to a loss of the accuracy required in a practical DFT calculation.

  19. Excel pour l'ingénieur bases, graphiques, calculs, macros, VBA

    CERN Document Server

    Bellan, Philippe

    2010-01-01

    Excel, used by every owner of a personal computer to perform elementary manipulations of tables and figures, is in reality a far more powerful tool whose potential often goes unrecognized. To all those (science students, engineering students, or practising engineers) who thought numerical computation was possible only with heavyweight and costly software, this book will show that a great number of the engineer's everyday mathematical problems can be solved numerically using the calculation tools and graphing capabilities of Excel. To this end, after introducing the basic notions, the book describes the functions available in Excel, then presents some simple numerical methods for calculating integrals, solving differential equations, obtaining solutions of linear and nonlinear systems, and treating optimization problems... The numerical methods presented, which are very simple, can...

  20. Data to calculate emissions intensity for individual beef cattle reared on pasture-based production systems

    Directory of Open Access Journals (Sweden)

    G.A. McAuliffe

    2018-04-01

    With increasing concern about the environmental burdens originating from livestock production, the importance of farming-system evaluation has never been greater. In order to form a basis for trade-off analysis of pasture-based cattle production systems, liveweight data from 90 Charolais × Hereford-Friesian calves were collected at high temporal resolution at the North Wyke Farm Platform (NWFP) in Devon, UK. These data were then applied to the Intergovernmental Panel on Climate Change (IPCC) modelling framework to estimate on-farm methane emissions under three different pasture management strategies, completing a foreground dataset required to calculate the emissions intensity of individual beef cattle.
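
    A sketch of how such a foreground dataset feeds an emissions-intensity figure, using an IPCC Tier 2-style enteric methane estimate; every number below (intake, Ym, weights, lifetime) is invented, and only the enteric CH4 term is included:

    ```python
    GE_INTAKE = 180.0   # gross energy intake, MJ/day (assumed)
    YM = 0.065          # CH4 conversion factor, fraction of GE (assumed)
    DAYS = 500          # days on farm (assumed)
    CH4_ENERGY = 55.65  # energy content of CH4, MJ/kg (IPCC)
    GWP_CH4 = 28.0      # 100-year GWP of CH4 (AR5)

    ch4_kg = GE_INTAKE * YM * DAYS / CH4_ENERGY  # lifetime enteric CH4, kg
    liveweight_gain = 620.0 - 45.0               # kg, slaughter minus birth weight

    intensity = ch4_kg * GWP_CH4 / liveweight_gain
    print(f"{intensity:.1f} kg CO2e per kg liveweight gain")
    ```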

  1. Interest of thermochemical data bases linked to complex equilibria calculation codes for practical applications

    International Nuclear Information System (INIS)

    Cenerino, G.; Marbeuf, A.; Vahlas, C.

    1992-01-01

    Since 1974, Thermodata has been working on the development of an Integrated Information System in Inorganic Chemistry. A major effort was carried out on the assessment of thermochemical data for both pure substances and multicomponent solution phases. The available data bases are connected to powerful calculation codes (GEMINI = Gibbs Energy Minimizer), which make it possible to determine the thermodynamic equilibrium state in multicomponent systems. The high interest of such an approach is illustrated by recent applications in fields as various as semiconductors, chemical vapor deposition, hard alloys, and nuclear safety. (author). 26 refs., 6 figs

  2. Comparison of CT number calibration techniques for CBCT-based dose calculation

    International Nuclear Information System (INIS)

    Dunlop, Alex; McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe; Murray, Julia; Bhide, Shreerang; Harrington, Kevin; Poludniowski, Gavin; Nutting, Christopher; Newbold, Kate

    2015-01-01

    The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCT_r); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RS_auto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5 %. RS_auto provided larger than average errors for pelvic treatments for patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0 %, with CBCT_r (0.5 %) and RS_auto (0.6 %) performing best. For lung cases, the WL and RS_auto methods generated dose distributions most similar to the ground truth. The RS_auto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RS_auto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.)

  3. Validation of an online risk calculator for the prediction of anastomotic leak after colon cancer surgery and preliminary exploration of artificial intelligence-based analytics.

    Science.gov (United States)

    Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L

    2017-11-01

    Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. The area under the receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-based predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.

  4. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation

    Science.gov (United States)

    Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

    2007-02-01

    On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy in using a kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may be different from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle the problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possesses the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No

  6. Measurement-based aerosol forcing calculations: The influence of model complexity

    Directory of Open Access Journals (Sweden)

    Manfred Wendisch

    2001-03-01

    On the basis of ground-based microphysical and chemical aerosol measurements, a simple 'two-layer-single-wavelength' and a complex 'multiple-layer-multiple-wavelength' radiative transfer model are used to calculate the local solar radiative forcing of black carbon (BC) and (NH4)2SO4 (ammonium sulfate) particles and of mixtures (external and internal) of both materials. The focal points of our approach are (a) that the radiative forcing calculations are based on detailed aerosol measurements with special emphasis on particle absorption, and (b) that the results of the radiative forcing calculations with two different types of models (with regard to model complexity) are compared using identical input data. The sensitivity of the radiative forcing to key input parameters (type of particle mixture, particle growth due to humidity, surface albedo, solar zenith angle, boundary layer height) is investigated. It is shown that the model results for external particle mixtures (wet and dry) differ only slightly from those of the corresponding internal mixture. This conclusion is valid for the results of both model types and for both surface albedo scenarios considered (grass and snow). Furthermore, it is concluded that the results of the two model types approximately agree if it is assumed that the aerosol particles are composed of pure BC. As soon as a mainly scattering substance is included, alone or in (internal or external) mixture with BC, the differences between the radiative forcings of both models become significant. This discrepancy results from neglecting multiple-scattering effects in the simple radiative transfer model.

  7. Calculation of the Fission Product Release for the HTR-10 based on its Operation History

    International Nuclear Information System (INIS)

    Xhonneux, A.; Druska, C.; Struth, S.; Allelein, H.-J.

    2014-01-01

    Since the first criticality of the HTR-10 test reactor in 2000, a rather complex operation history has accumulated. As the HTR-10 is the only pebble bed reactor in operation today delivering experimental data for HTR simulation codes, an attempt was made to simulate the whole reactor operation up to the present. Special emphasis was put on the fission product release behaviour, as it is an important safety aspect of such a reactor. The operation history has to be simulated with respect to neutronics, fluid mechanics, and depletion to obtain detailed knowledge of the time-dependent nuclide inventory. In this paper we report on such a simulation with VSOP 99/11 and our new fission product release code STACY. While STACY (Source Term Analysis Code System) was so far able to calculate fission product release rates for an equilibrium core and during transients, it can now also be applied to running-in phases. This coupling demonstrates a first step towards an HCP Prototype. Based on the published power histogram of the HTR-10 and additional information about the fuel loading and shuffling, a coupled neutronics, fluid dynamics, and depletion calculation was performed. Special emphasis was put on the complex fuel-shuffling scheme within both VSOP and STACY. The simulations have shown that the HTR-10 has so far generated about 2580 MWd while reshuffling the core about 2.3 times. Within this paper, STACY results for the equilibrium core are compared with FRESCO-II results published by INET. Compared to these release rates, which are based on a few user-defined life histories, in this new approach the fission product release rates of Ag-110m, Cs-137, Sr-90 and I-131 have been simulated for about 4000 tracer pebbles with STACY. Time-dependent release rates for the HTR-10 operation history are presented as well. (author)

  8. Wavelet-based calculation of cerebral angiographic data from time-resolved CT perfusion acquisitions.

    Science.gov (United States)

    Havla, Lukas; Thierfelder, Kolja M; Beyer, Sebastian E; Sommer, Wieland H; Dietrich, Olaf

    2015-08-01

    To evaluate a new approach for reconstructing angiographic images by applying wavelet transforms to CT perfusion data. Fifteen consecutive patients with suspected stroke were examined with a multi-detector CT acquiring 32 dynamic phases (∆t = 1.5 s) of 99 slices (total slab thickness 99 mm) at 80 kV/200 mAs. Thirty-five mL of iomeprol-350 was injected (flow rate = 4.5 mL/s). Angiographic datasets were calculated after initial rigid-body motion correction using (a) temporally filtered maximum intensity projections (tMIP) and (b) the wavelet transform (Paul wavelet, order 1) of each voxel time course. The maximum of the wavelet power spectrum was defined as the angiographic signal intensity. The contrast-to-noise ratio (CNR) of 18 different vessel segments was quantified, and two blinded readers rated the images qualitatively using 5-point Likert scales. The CNR for the wavelet angiography (501.8 ± 433.0) was significantly higher than for the tMIP approach (55.7 ± 29.7, Wilcoxon test p < 0.05). Both readers rated the wavelet angiography higher, with median scores of 4/4 (reader 1/reader 2), than the tMIP (scores of 3/3). The proposed calculation approach for angiography data using temporal wavelet transforms of intracranial CT perfusion datasets provides higher vascular contrast and intrinsic removal of non-enhancing structures such as bone. • Angiographic images calculated with the proposed wavelet-based approach show a significantly improved contrast-to-noise ratio. • CT perfusion-based wavelet angiography is an alternative method for vessel visualization. • It provides intrinsic removal of non-enhancing structures such as bone.

  9. Consolidating duodenal and small bowel toxicity data via isoeffective dose calculations based on compiled clinical data.

    Science.gov (United States)

    Prior, Phillip; Tai, An; Erickson, Beth; Li, X Allen

    2014-01-01

    To consolidate duodenum and small bowel toxicity data from clinical studies with different dose fractionation schedules using the modified linear quadratic (MLQ) model. A methodology of adjusting the dose-volume (D,v) parameters to different levels of normal tissue complication probability (NTCP) was presented. A set of NTCP model parameters for duodenum toxicity was estimated by the χ² fitting method using literature-based tolerance dose and generalized equivalent uniform dose (gEUD) data. These model parameters were then used to convert (D,v) data into the isoeffective dose in 2 Gy per fraction, (D_MLQED2, v), and to convert these parameters to an isoeffective dose at another NTCP, (D_MLQED2', v). The literature search yielded 5 reports useful in making estimates of duodenum and small bowel toxicity. The NTCP model parameters were found to be TD50(1)^model = 60.9 ± 7.9 Gy, m = 0.21 ± 0.05, and δ = 0.09 ± 0.03 Gy⁻¹. Isoeffective dose calculations and toxicity rates associated with hypofractionated radiation therapy reports were found to be consistent with clinical data having different fractionation schedules. Values of (D_MLQED2', v) between different NTCP levels remain consistent over a range of 5%-20%. MLQ-based isoeffective calculations of dose-response data corresponding to grade ≥2 duodenum toxicity were found to be consistent with one another within the calculation uncertainty. The (D_MLQED2, v) data could be used to determine duodenum and small bowel dose-volume constraints for new dose escalation strategies. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
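
    For orientation, the standard (unmodified) LQ conversion to an isoeffective dose in 2 Gy fractions reads as below; this is a simplified stand-in for the MLQ conversion used in the study, which adds a high-dose-per-fraction correction, and the α/β value is an assumption:

    ```python
    def eqd2(total_dose, dose_per_fraction, alpha_beta=3.0):
        """EQD2 = D * (d + a/b) / (2 + a/b); doses in Gy, a/b assumed 3 Gy."""
        return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

    # e.g. a hypothetical hypofractionated course of 30 Gy in 5 fractions
    print(f"EQD2 = {eqd2(30.0, 6.0):.1f} Gy")
    ```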

  10. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).

  12. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Science.gov (United States)

    Jeong, Chang Wook; Lee, Sangchul; Jung, Jin-Woo; Lee, Byung Ki; Jeong, Seong Jin; Hong, Sung Kyu; Byun, Seok-Soo; Lee, Sang Eun

    2014-01-01

    We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. As a retrospective study, consecutive men who underwent an initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using a logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant predictors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p < 0.001) and the PCPT-RC. SNUPC-RC has a higher predictive accuracy and clinical benefit than the Western risk calculators. Furthermore, it is easy to use because it is available as a mobile application for smart devices.

  13. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress at each material point is calculated using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the computation.

  14. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    Science.gov (United States)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative-addition mix temperature, pre-curative-addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) center-perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) from adjustments to the process settings. Burn rates were calculated from chamber pressures and then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed a significant reduction in error when compared to results obtained from the Brooks modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion with mixing duration and with the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
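
    The normalization step can be sketched with Vieille's law r = a·P^n, referring each calculated burn rate to a common pressure; the exponent and data pairs below are illustrative, not the study's values:

    ```python
    n = 0.35     # assumed pressure exponent of the propellant
    p_ref = 6.9  # common reference pressure, MPa

    burn_data = [(1.02, 6.5), (0.99, 7.1), (1.05, 7.3)]  # (rate cm/s, pressure MPa)
    for rate, p in burn_data:
        rate_ref = rate * (p_ref / p) ** n  # refer the rate to p_ref
        print(f"{rate:.2f} cm/s at {p:.1f} MPa -> {rate_ref:.2f} cm/s at {p_ref:.1f} MPa")
    ```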

  15. Intramolecular Hydrogen Bonding Involving Organic Fluorine: NMR Investigations Corroborated by DFT-Based Theoretical Calculations

    Directory of Open Access Journals (Sweden)

    Sandeep Kumar Mishra

    2017-03-01

    The combined utility of many one- and two-dimensional NMR methodologies and DFT-based theoretical calculations has been exploited to detect intramolecular hydrogen bonds (HBs) in a number of different organic fluorine-containing derivatives, viz. benzanilides, hydrazides, imides, benzamides, and diphenyloxamides. The existence of two- and three-centered hydrogen bonds has been convincingly established in the investigated molecules. The NMR spectral parameters, viz. couplings mediated through the hydrogen bond, one-bond NH scalar couplings, and the variation of NH proton chemical shifts with physical parameters, have paved the way for understanding the presence of hydrogen bonds involving organic fluorine in all the investigated molecules. The experimental NMR findings are further corroborated by DFT-based theoretical calculations, including NCI, QTAIM, MD simulations, and NBO analysis. The monitoring of H/D exchange by NMR spectroscopy established the effect of intramolecular HBs and the influence of the electronegativity of various substituents on the chemical kinetics in a number of organic building blocks. The utility of the DQ-SQ technique in determining information about HBs in various fluorine-substituted molecules has been convincingly established.

  16. Calculations of helium separation via uniform pores of stanene-based membranes

    Directory of Open Access Journals (Sweden)

    Guoping Gao

    2015-12-01

    The development of low-energy-cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as superior membranes compared with traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by the application of strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting interesting new materials for helium separation for future experimental validation.

  17. Development of a Carbon Emission Calculations System for Optimizing Building Plan Based on the LCA Framework

    Directory of Open Access Journals (Sweden)

    Feifei Fu

    2014-01-01

    Life cycle thinking has become widely applied in the assessment of building environmental performance, and various tools have been developed to support the application of the life cycle assessment (LCA) method. This paper focuses on carbon emissions during the building construction stage. A partial LCA framework is established to assess the carbon emissions in this phase. Furthermore, five typical LCA tool programs are compared and analyzed to demonstrate the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and the sustainability demands in building, a new computer calculation system has been developed to calculate carbon emissions and optimize sustainability during the construction stage. The system structure and detailed functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and the system functions. The case is based on a typical building in the UK, comparing alternative plans with a masonry wall and a timber frame. The final results disclose that a timber-frame wall has lower embodied carbon emissions than a similar masonry structure; a 16% reduction was found in this study.
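
    The core of such a calculation is a bill of quantities multiplied by emission factors; the materials, masses, and factors below are illustrative placeholders, not the system's database values:

    ```python
    EF = {"brick": 0.24, "mortar": 0.16, "timber": 0.10, "plasterboard": 0.39}  # kgCO2e/kg

    plans = {
        "masonry": {"brick": 18000, "mortar": 4000, "plasterboard": 900},
        "timber_frame": {"timber": 3500, "plasterboard": 1800},
    }

    for name, bill in plans.items():  # bill of quantities in kg
        total = sum(qty * EF[mat] for mat, qty in bill.items())
        print(f"{name}: {total / 1000:.1f} t CO2e")
    ```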

  18. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to the Rowlands benchmark and to three assembly production cases, are also presented

  19. Calculated thermal performance of solar collectors based on measured weather data from 2001-2010

    DEFF Research Database (Denmark)

    Dragsted, Janne; Furbo, Simon; Andersen, Elsa

    2015-01-01

    This paper presents an investigation of the differences in the modeled thermal performance of solar collectors when meteorological reference years are used as input and when multi-year weather data are used as input. The investigation has shown that using the Danish reference year based on the period 1975-1990 will result in deviations of up to 39 % compared with the thermal performance calculated with the multi-year measured weather data. For the newer local reference years based on the period 2001-2010, the maximum deviation becomes 25 %. The investigation further showed an increase in utilization with an increase in global radiation. This means that besides increasing the thermal performance with increasing solar radiation, the utilization of the solar radiation also becomes better.

  20. Implementation of a Web-Based Spatial Carbon Calculator for Latin America and the Caribbean

    Science.gov (United States)

    Degagne, R. S.; Bachelet, D. M.; Grossman, D.; Lundin, M.; Ward, B. C.

    2013-12-01

    A multi-disciplinary team from the Conservation Biology Institute is creating a web-based tool for the InterAmerican Development Bank (IDB) to assess the impact of potential development projects on carbon stocks in Latin America and the Caribbean. Funded by the German Society for International Cooperation (GIZ), this interactive carbon calculator is an integrated component of the IDB Decision Support toolkit which is currently utilized by the IDB's Environmental Safeguards Group. It is deployed on the Data Basin (www.databasin.org) platform and provides a risk screening function to indicate the potential carbon impact of various types of projects, based on a user-delineated development footprint. The tool framework employs the best available geospatial carbon data to quantify above-ground carbon stocks and highlights potential below-ground and soil carbon hotspots in the proposed project area. Results are displayed in the web mapping interface, as well as summarized in PDF documents generated by the tool.

  1. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. The first is composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for the transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which the decision to stop at a frontier depends on the material changing along the photon travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
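
    The Woodcock idea mentioned above can be sketched in one dimension: free flights are sampled from the majorant cross-section, and a collision is accepted as real with probability sigma(local)/sigma_majorant, so voxel boundaries never need to be crossed explicitly; the cross-sections below are arbitrary test values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    voxel_sigma = np.array([0.02, 0.30, 0.02, 0.15])  # cm^-1, one value per 1-cm voxel
    sigma_maj = voxel_sigma.max()

    def first_real_collision(x0=0.0):
        x = x0
        while True:
            x += -np.log(rng.random()) / sigma_maj  # tentative free flight
            if x >= len(voxel_sigma):
                return None                         # photon escaped the phantom
            if rng.random() < voxel_sigma[int(x)] / sigma_maj:
                return x                            # real (not virtual) collision

    print([first_real_collision() for _ in range(5)])
    ```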

  3. An elastic elements calculation in the construction of electrical connectors based on flexible printed cables

    Directory of Open Access Journals (Sweden)

    Yefimenko A. A.

    2016-05-01

    connectors. We obtained an analytic dependence that can be used to find the Young's modulus for a known value of hardness on the Shore A scale. We give examples of calculating the amount of compression in the elastomeric liner needed to provide reliable contact at specified values of the transition resistance, for removable and permanent connectors based on flexible printed cable.

  4. Difference in the Minimum Horizontal Stress Magnitudes Between Direct Measurements and Poroelastic Equation-Based Calculation

    Science.gov (United States)

    Vo, U. D.; Chang, C.

    2016-12-01

    Horizontal stress profile with depth is needed for hydraulic fracture design, wellbore stability analysis, and sanding potential assessment. While vertical stress (Sv) can be calculated from the overburden, horizontal stresses normally have to be measured. A conventional alternative is to use a linear poroelasticity equation derived under the assumption of uniaxially strained basins, which gives the minimum horizontal stress magnitude (Shmin) as a function of Sv, pore pressure and Poisson's ratio (ν). In order to check the reliability of the equation, we compare the calculated Shmin with values measured through, e.g., leak-off tests (LOT). We compile Shmin and pore pressure data from 6 major petroleum fields worldwide (Cuu Long basin, offshore Vietnam; Champion field, offshore Brunei; Visund field, North Sea; Gippsland basin, offshore SE Australia; St. Lawrence Lowlands basin, East Canada; Popeye basin, Gulf of Mexico) for this comparison. For calculation of Shmin via the equation, we assume ν of 0.25 and a Biot's constant of unity. The comparison shows that the calculated Shmin values generally underestimate the measured values by between 4% and as much as 29%, depending on the region. We attempt to explain the gaps between the measured and the calculated Shmin (ΔShmin) qualitatively in terms of tectonic stress. In Cuu Long, although tectonically active, ΔShmin is quite low (4%) throughout the 4.3 km depth investigated. In contrast, Visund and St. Lawrence Lowlands, although tectonically stable, show appreciable ΔShmin (on average 9% and 22%, respectively). These results imply that ΔShmin may not depend solely on tectonic stress. In Popeye, where tectonics is active, ΔShmin is as high as 24%. Popeye is the only one among the fields investigated whose high ΔShmin is likely attributable to tectonic stress. The measured Shmin values in Champion and Gippsland are so scattered that any prediction of magnitudes might not be feasible. The wide variation of

  5. Density functional MO calculation for stacked DNA base-pairs with backbones.

    Science.gov (United States)

    Kurita, N; Kobayashi, K

    2000-05-01

    In order to elucidate the effect of the sugar and phosphate backbones on the stable structure and electronic properties of stacked DNA base-pairs, we performed ab initio molecular orbital (MO) calculations based on density functional theory and a Slater-type basis set. As a model cluster for stacked base-pairs, we employed three isomers of the dimer unit of stacked guanine-cytosine pairs, comprising backbones as well as base-pairs. These structures were fully optimized and their electronic properties were self-consistently investigated. By including the backbones, the difference in total energy among the isomers was greatly enhanced, while the trend in relative stability was not changed. The effect of the backbones on the electronic properties is remarkable: MOs with the character of the PO4 parts of the backbones appear just below the highest-occupied MO. This result indicates that the PO4 parts might play a role as a reaction site in chemical processes concerning DNA. Therefore, we conclude that the DNA backbones are indispensable for investigating the stability and electronic properties of stacked DNA base-pairs.

  6. [Influenza pandemic deaths in Germany from 1918 to 2009. Estimates based on literature and own calculations].

    Science.gov (United States)

    Buchholz, Udo; Buda, Silke; Reuß, Annicka; Haas, Walter; Uphoff, Helmut

    2016-04-01

    Estimation of the number of deaths as a consequence of the influenza pandemics in the twentieth and twenty-first centuries (i.e. 1918-1919, 1957-1958, 1968-1970 and 2009) is a challenge worldwide and also in Germany. After conducting a systematic literature search complemented by our own calculations, values and estimates for all four pandemics were collated and evaluated. A systematic literature search including the terms death, mortality, pandemic, epidemic, Germany, 1918, 1957, 1968, 2009 was performed. Hits were reviewed by title and abstract and selected for possible relevance. We derived our own estimates using excess mortality calculations, which estimate the mortality exceeding that to be expected. All identified values were evaluated by methodology and quality of the database. Numbers of pandemic deaths were used to calculate case fatality rates and were compared with global values provided by the World Health Organization. For the pandemic 1918-1919 we identified 5 relevant publications, 3 for the pandemics 1957-1958 and 1968-1970 and 3 for 2009. For all four pandemics the most plausible estimations were based on time series analyses, taken either from the literature or from our own calculations based on monthly or weekly all cause death statistics. For the four pandemics these estimates were in chronological order 426,600 (1918-1919), 29,100 (1957-1958), 46,900 (1968-1970) and 350 (2009) excess pandemic-related deaths. This translates to an excess mortality ranging between 691 per 100,000 (0.69 % in 1918-1919) and 0.43 per 100,000 (0.00043 % in 2009). Case fatality rates showed good agreement with global estimates. We have proposed plausible estimates of pandemic-related excess number of deaths for the last four pandemics as well as excess mortality in Germany. The heterogeneity among pandemics is large with a variation factor of more than 1000. Possible explanations include characteristics of the virus or host (immunity), social conditions, status
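
    The excess-mortality idea used above, observed all-cause deaths minus the deaths expected from non-pandemic baseline periods, can be illustrated with a toy calculation. The sketch below uses invented yearly death counts and a simple mean baseline; the study itself relied on time-series analyses of monthly or weekly all-cause death statistics.

        # Toy excess-mortality estimate: expected deaths taken as the mean of
        # surrounding non-pandemic years. All numbers are invented for
        # illustration only and are not the study's German data.
        deaths_by_year = {1916: 59000, 1917: 60000, 1918: 92000,
                          1920: 61000, 1921: 60000}
        pandemic_year = 1918

        baseline_years = [y for y in deaths_by_year if y != pandemic_year]
        expected = sum(deaths_by_year[y] for y in baseline_years) / len(baseline_years)
        excess = deaths_by_year[pandemic_year] - expected

        population = 62_000_000      # invented reference population
        print(f"excess deaths: {excess:.0f}")
        print(f"excess mortality: {100_000 * excess / population:.1f} per 100,000")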

  7. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  8. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm-wide microbeams spaced by 200-400 µm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  9. A computational approach to calculate personalized pennation angle based on MRI: effect on motion analysis.

    Science.gov (United States)

    Chincisan, Andra; Tecante, Karelia; Becker, Matthias; Magnenat-Thalmann, Nadia; Hurschler, Christof; Choi, Hon Fai

    2016-05-01

    Muscles are the primary component responsible for the locomotion and change of posture of the human body. The physiologic basis of muscle force production and movement is determined by the muscle architecture (maximum muscle force, optimal muscle fiber length, tendon slack length, and pennation angle at optimal muscle fiber length). The pennation angle is related to the maximum force production and to the range of motion. The aim of this study was to investigate a computational approach to calculate subject-specific pennation angle from magnetic resonance images (MRI)-based 3D anatomical models and to determine the impact of this approach on motion analysis with personalized musculoskeletal models. A 3D method that calculates the pennation angle using MRI was developed. The fiber orientations were automatically computed, while the muscle line of action was determined using approaches based on anatomical landmarks and on centroids of image segmentation. Three healthy male volunteers were recruited for MRI scanning and motion capture acquisition. This work evaluates the effect of subject-specific pennation angle as a musculoskeletal parameter in the lower limb, focusing on the quadriceps group. A comparison was made for assessing the contribution of personalized models to motion analysis. Gait and deep squat were analyzed using neuromuscular simulations (OpenSim). The results showed variation of the pennation angle between the generic and subject-specific models, demonstrating important interindividual differences, especially for the vastus intermedius and vastus medialis muscles. The pennation angle variation between personalized and generic musculoskeletal models generated significant variation in muscle moments and forces during dynamic motion analysis. A MRI-based approach to define subject-specific pennation angle was proposed and evaluated in motion analysis models. The

  10. An easy to calculate equation to estimate GFR based on inulin clearance.

    Science.gov (United States)

    Tsinalis, Dimitrios; Thiel, Gilbert T

    2009-10-01

    For the estimation of renal function on the basis of serum creatinine, either the Cockcroft-Gault (CG) equation or the MDRD formula is commonly used. Compared to MDRD (using power functions), CG has the advantage of easy calculability at the bedside. MDRD, however, approaches glomerular filtration rate (GFR) more precisely than CG and gives values corrected for a body surface area (BSA) of 1.73 m². We wondered whether CG could be adapted to estimate GFR rather than creatinine clearance without losing the advantage of easy calculability. In this prospective study, inulin clearance under well-defined conditions was taken as the gold standard for GFR. In 182 living kidney donors, inulin clearance was measured under standardized conditions (protein, salt and water intake, overnight stay) before and after nephrectomy. Together with the serum creatinine level, and demographic and clinical data, 281 measurements of inulin clearance were used to compare the accuracy of different estimation equations. Using stepwise multiple regression, a new set of constants was defined for a CG-like equation in order to estimate GFR. The MDRD equation underestimated GFR by 9%, and the quadratic equation suggested by Rule overestimated GFR by 12.4%. The new CG-like equation, even when calculated with 'mental arithmetic-friendly' rounded parameters, showed significantly less bias (1.2%). The adapted equation is GFR [mL/min] = ((155 − Age [years]) × weight [kg] / serum creatinine [µmol/L]) × 0.85 if female. We propose the CG-like equation called IB-eGFR (Inulin-clearance Based eGFR) to estimate GFR more reliably than MDRD, Rule's equation or the original Cockcroft-Gault equation. As our data represent a Caucasian population, the adapted equation is still to be validated for patients of other ethnicity.
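
    Because the adapted equation is quoted explicitly above, it can be written directly as a small function. A minimal sketch; the function name and example values are our own:

        def ib_egfr(age_years, weight_kg, serum_creatinine_umol_l, female):
            """IB-eGFR as quoted in the abstract above:
            GFR [mL/min] = (155 - age) * weight / serum creatinine, * 0.85 if female."""
            gfr = (155.0 - age_years) * weight_kg / serum_creatinine_umol_l
            return gfr * 0.85 if female else gfr

        # Example: 45-year-old, 70 kg male with serum creatinine of 80 umol/L.
        print(f"{ib_egfr(45, 70, 80.0, female=False):.1f} mL/min")   # ~96.3 mL/min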

  11. Neutron spectra calculation and doses in a subcritical nuclear reactor based on thorium

    International Nuclear Information System (INIS)

    Medina C, D.; Hernandez A, P. L.; Hernandez D, V. M.; Vega C, H. R.; Sajo B, L.

    2015-10-01

    This paper describes a heterogeneous subcritical nuclear reactor with molten salts based on thorium, with graphite moderator and a 252Cf source, whose dose levels at the periphery allow its use in teaching and research activities. The design was done by the Monte Carlo method with the code MCNP5, where the geometry, dimensions and fuel were varied in order to obtain the best design. The result is a cubic reactor of 110 cm side with graphite moderator and reflector. In the central part there are 9 ducts placed in the direction of the Y axis. The central duct contains the 252Cf source; of the 8 other ducts, two are irradiation ducts and the other six contain a molten salt (7LiF-BeF2-ThF4-UF4) as fuel. For the design, the k_eff, neutron spectra and ambient dose equivalent were calculated. The calculation was first performed for virgin fuel (case 1); then a percentage of 233U was added and the percentage of Th decreased (case 2), in order to compare two different fuels working inside the reactor. A k_eff of 0.13 was obtained for case 1 and of 0.28 for case 2, maintaining subcriticality in both cases. Regarding dose levels, the highest value occurs in case 2 along the Y axis, at 3.31e-3 pSv/Q (±1.6%). With this, the exposure time of personnel working in the reactor can be calculated. (Author)

  12. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.

    Science.gov (United States)

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2015-05-17

    Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. This study aimed to compare the ABC with the TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran.‎ This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of the activities were allocated to cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from the TCS.‎ The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by the ABC method. The ABC method represented more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
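
    The two-phase allocation described above, cost centers to activities via cost factors and then activities to cost objects via cost drivers, can be sketched generically. All names and figures below are hypothetical and are not taken from the Kashani Hospital data:

        # Minimal two-phase activity-based costing sketch.
        center_costs = {"wage": 500_000.0, "equipment": 200_000.0, "overhead": 100_000.0}

        # Phase 1: share of each cost center consumed by each activity (cost factors).
        cost_factors = {
            "admission": {"wage": 0.2, "equipment": 0.1, "overhead": 0.3},
            "nursing":   {"wage": 0.8, "equipment": 0.9, "overhead": 0.7},
        }
        activity_costs = {
            act: sum(center_costs[c] * share for c, share in factors.items())
            for act, factors in cost_factors.items()
        }

        # Phase 2: divide each activity cost by its driver volume to price a service.
        drivers = {"admission": 10_000, "nursing": 50_000}   # e.g. admissions, bed-days
        unit_costs = {act: activity_costs[act] / drivers[act] for act in drivers}
        print(unit_costs)    # cost per admission, cost per bed-day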

  13. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    International Nuclear Information System (INIS)

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-01-01

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M0), the moment magnitude (Mw), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities by teleseismic wave signal processing, using the initial phase of the P wave with a bandpass filter of 0.001 Hz to 5 Hz. The data come from 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with Mw = 7.8 and the 17 July 2006 Pangandaran earthquake with Mw = 7.7 meet the criteria of a tsunami earthquake, with a ratio of about Θ = −6.1, a long rupture duration To > 100 s and a high tsunami H > 7 m. The 2 September 2009 Tasikmalaya earthquake with Mw = 7.2, Θ = −5.1 and To = 27 s is characterized as a small tsunamigenic earthquake
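
    The quoted Θ values (−6.1, −5.1) are consistent with the common definition Θ = log10(E/M0), the Newman-Okal energy-to-moment parameter, under which ordinary events cluster near −4.9 and tsunami earthquakes fall below about −5.5. Assuming that definition, the calculation is a one-liner:

        import math

        def theta(radiated_energy_joules, seismic_moment_newton_meters):
            """Energy-to-moment parameter Theta = log10(E / M0); assumed here to
            follow the common Newman-Okal convention, consistent with the
            values quoted in the abstract above."""
            return math.log10(radiated_energy_joules / seismic_moment_newton_meters)

        # Hypothetical numbers: E = 4e14 J radiated by an M0 = 5e20 N*m event.
        print(f"Theta = {theta(4e14, 5e20):.2f}")   # about -6.1, tsunami-earthquake range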

  14. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2010-01-01

    Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  15. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2011-01-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  16. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    Science.gov (United States)

    Kramer, Richard

    2011-08-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  17. An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220

    Science.gov (United States)

    Rezaeian, P.; Ataenia, V.; Shafiei, S.

    2017-12-01

    In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation of the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method gives reasonable results for all source distributions, even those without any symmetry, which makes it a powerful tool for source load planning.
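
    As a schematic of the idea, moments obtained by direct integration over the source distribution and a field then evaluated from monopole and dipole terms, the following sketch expands a scalar 1/r field from point sources. It illustrates the expansion machinery only and is not the authors' photon-flux formulation for the Gammacell-220:

        import numpy as np

        rng = np.random.default_rng(0)
        positions = rng.uniform(-1.0, 1.0, size=(8, 3))    # source element positions (cm)
        strengths = np.ones(8)                             # source element strengths

        q = strengths.sum()                                # monopole moment
        p = (strengths[:, None] * positions).sum(axis=0)   # dipole moment vector

        def field_multipole(r):
            """Monopole + dipole terms of phi(r) = q/|r| + p.r/|r|^3 + ..."""
            rn = np.linalg.norm(r)
            return q / rn + np.dot(p, r) / rn**3

        def field_exact(r):
            """Direct summation over the point sources, for comparison."""
            return sum(s / np.linalg.norm(r - x) for s, x in zip(strengths, positions))

        r = np.array([0.0, 0.0, 10.0])
        print(field_multipole(r), field_exact(r))   # close agreement far from the sources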

  18. Determination of structural fluctuations of proteins from structure-based calculations of residual dipolar couplings

    International Nuclear Information System (INIS)

    Montalvao, Rinaldo W.; De Simone, Alfonso; Vendruscolo, Michele

    2012-01-01

    Residual dipolar couplings (RDCs) have the potential of providing detailed information about the conformational fluctuations of proteins. It is very challenging, however, to extract such information because of the complex relationship between RDCs and protein structures. A promising approach to decode this relationship involves structure-based calculations of the alignment tensors of protein conformations. By implementing this strategy to generate structural restraints in molecular dynamics simulations we show that it is possible to extract effectively the information provided by RDCs about the conformational fluctuations in the native states of proteins. The approach that we present can be used in a wide range of alignment media, including Pf1, charged bicelles and gels. The accuracy of the method is demonstrated by the analysis of the Q factors for RDCs not used as restraints in the calculations, which are significantly lower than those corresponding to existing high-resolution structures and structural ensembles, hence showing that we capture effectively the contributions to RDCs from conformational fluctuations.
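
    The Q factor used for cross-validation above is commonly computed as the rms deviation between back-calculated and measured couplings, normalized by the rms of the measured values; assuming that convention (the paper may use a variant normalization):

        import numpy as np

        def rdc_q_factor(d_calc, d_obs):
            """Free-RDC quality factor Q = rms(D_calc - D_obs) / rms(D_obs),
            the usual cross-validation metric for couplings left out of the
            restraints."""
            d_calc, d_obs = np.asarray(d_calc), np.asarray(d_obs)
            return np.sqrt(np.mean((d_calc - d_obs) ** 2) / np.mean(d_obs ** 2))

        # Hypothetical back-calculated vs. measured couplings (Hz).
        print(rdc_q_factor([9.8, -3.9, 15.2], [10.0, -4.0, 15.0]))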

  19. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, Richard [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2010-07-01

    Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  20. Calculation of color difference and measurement of the spectrum of aerosol based on human visual system

    Science.gov (United States)

    Dai, Mengyan; Liu, Jianghai; Cui, Jianlin; Chen, Chunsheng; Jia, Peng

    2017-10-01

    In order to solve the problem of the quantitative measurement of the spectrum and color of aerosols, a measurement method based on the human visual system was proposed. The spectrum characteristics and color parameters of three different aerosols were tested, and the color differences were calculated according to the CIE1976-L*a*b* color difference formula. Three tested powders (No. 1#, No. 2# and No. 3#) were dispersed in a plexiglass box and turned into aerosols. The powder sample was released by an injector with a different dosage in each experiment. The spectrum and color of the aerosol were measured by the PRO 6500 Fiber Optic Spectrometer. The experimental results showed that the extinction performance of the aerosol became stronger with increasing aerosol concentration. Since the chromaticity differences of the aerosols in the experiment were small, luminance was verified to be the main factor influencing human visual perception, contributing the most of the three factors in the color difference calculation. The extinction effect of the No. 3# aerosol was the strongest of all and caused the biggest change in luminance and color difference, which would arouse the strongest human visual perception. According to the chromatic color sensation levels of Chinese observers, a recognizable color difference would be produced when the dosage of No. 1# powder was more than 0.10 gram, the dosage of No. 2# powder more than 0.15 gram, and the dosage of No. 3# powder more than 0.05 gram.
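
    The CIE1976-L*a*b* color difference used above is the Euclidean distance in L*a*b* space. A minimal sketch with invented readings, illustrating how a luminance (L*) change dominates ΔE*ab when chromaticity barely moves:

        import math

        def delta_e_cie1976(lab1, lab2):
            """CIE1976 color difference: Euclidean distance in L*a*b* space,
            Delta E*ab = sqrt((dL*)^2 + (da*)^2 + (db*)^2)."""
            return math.dist(lab1, lab2)

        # Hypothetical aerosol readings: a large L* drop with nearly unchanged
        # chromaticity (a*, b*) dominates the difference, as reported above.
        before = (78.0, 1.2, 3.5)
        after = (61.0, 1.4, 3.1)
        print(f"Delta E*ab = {delta_e_cie1976(before, after):.2f}")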

  1. Critical comparison of electrode models in density functional theory based quantum transport calculations.

    Science.gov (United States)

    Jacob, D; Palacios, J J

    2011-01-28

    We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of the implementation in both cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments in which the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes of large enough cross-section, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.

  2. Microcontroller-based network for meteorological sensing and weather forecast calculations

    Directory of Open Access Journals (Sweden)

    A. Vas

    2012-06-01

    Weather forecasting needs a lot of computing power. It is generally accomplished by using supercomputers, which are expensive to rent and to maintain. In addition, weather services also have to maintain radars and balloons, and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters. To do that, the computer uses the data of close and distant neighbor points. Accordingly, if small-sized weather stations, which are capable of making measurements, calculations and communication, are connected through the Internet, then they can be used to run weather forecasting calculations like a supercomputer does. No central server is needed to achieve this, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.
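
    The neighbor-point structure that makes such a distributed scheme possible can be seen in the smallest possible example: an explicit finite-difference step in which each node needs only its immediate neighbors, so networked stations only have to exchange boundary values. A toy sketch, not the authors' model equations:

        # One explicit time step of 1-D diffusion dT/dt = k * d2T/dx2.
        # Each node reads only its two neighbours (k*dt/dx^2 = 0.1 is stable).
        k, dx, dt = 0.1, 1.0, 1.0
        temps = [15.0, 15.5, 17.0, 16.0, 14.5]      # station temperatures (deg C)

        def step(t):
            inner = [
                t[i] + k * dt / dx**2 * (t[i - 1] - 2 * t[i] + t[i + 1])
                for i in range(1, len(t) - 1)
            ]
            return [t[0]] + inner + [t[-1]]         # fixed-boundary toy condition

        print(step(temps))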

  3. Extension of the COSYMA-ECONOMICS module - cost calculations based on different economic sectors

    International Nuclear Information System (INIS)

    Faude, D.

    1994-12-01

    The COSYMA program system for evaluating the off-site consequences of accidental releases of radioactive material to the atmosphere includes an ECONOMICS module for assessing economic consequences. The aim of this module is to convert various consequences (radiation-induced health effects and impacts resulting from countermeasures) caused by an accident into the common framework of economic costs; this allows different effects to be expressed in the same terms and thus to make these effects comparable. With respect to the countermeasure 'movement of people', the dominant cost categories are 'loss-of-income costs' and 'costs of lost capital services'. In the original version of the ECONOMICS module these costs are calculated on the basis of the total number of people moved. In order to take into account also regional or local economic peculiarities of a nuclear site, the ECONOMICS module has been extended: Calculation of the above mentioned cost categories is now based on the number of employees in different economic sectors in the affected area. This extension of the COSYMA ECONOMICS module is described in more detail. (orig.)

  4. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy

  5. Development of 3-D FBR heterogeneous core calculation method based on characteristics method

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro

    2002-01-01

    A new 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method and the nodal transport method. In the axial direction the nodal transport method is applied, and the characteristics method is applied to take into account the radial heterogeneity of fuel assemblies. The numerical calculations have been performed to verify 2-D radial calculations of FBR assemblies and partial core calculations. Results are compared with the reference Monte-Carlo calculations. A good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region
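
    The along-ray building block of the characteristics method can be stated compactly: for a region with a flat source, the angular flux leaving the region follows from the incoming flux, the total cross section and the chord length. A minimal single-ray sketch with hypothetical data, not the paper's coupled radial/axial scheme:

        import math

        def step_characteristic(psi_in, sigma_t, q, s):
            """Angular flux leaving a flat-source region along one ray:
            psi_out = psi_in * exp(-sigma_t*s) + (q/sigma_t) * (1 - exp(-sigma_t*s))."""
            att = math.exp(-sigma_t * s)
            return psi_in * att + (q / sigma_t) * (1.0 - att)

        # Ray crossing two regions (hypothetical cross sections, sources and
        # chord lengths in cm): sweep the update region by region.
        psi = 0.0
        for sigma_t, q, s in [(0.5, 1.0, 2.0), (0.2, 0.1, 3.0)]:
            psi = step_characteristic(psi, sigma_t, q, s)
        print(psi)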

  6. SU-C-204-03: DFT Calculations of the Stability of DOTA-Based-Radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Khabibullin, A.R.; Woods, L.M. [University of South Florida, Tampa, Florida (United States); Karolak, A.; Budzevich, M.M.; Martinez, M.V. [H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States); McLaughlin, M.L.; Morse, D.L. [University of South Florida, Tampa, Florida (United States); H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States)

    2016-06-15

    Purpose: Application of density functional theory (DFT) to investigate the structural stability of complexes applied in cancer therapy, consisting of 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelated to Ac225, Fr221, At217, Bi213, and Gd68 radio-nuclei. Methods: The possibility to deliver a toxic payload directly to tumor cells is a highly desirable aim in targeted alpha particle therapy. The estimation of the bond stability between radioactive atoms and the DOTA chelating agent is the key element in understanding the foundations of this delivery process. Thus, we adapted the Vienna Ab-initio Simulation Package (VASP) with the projector-augmented wave method and a plane-wave basis set in order to study the stability and electronic properties of the DOTA ligand chelated to radioactive isotopes. In order to account for the relativistic effects of the radioactive isotopes we included spin-orbit coupling (SOC) in the DFT calculations. Five DOTA complex structures were represented as unit cells, each containing 58 atoms. The energy optimization was performed for all structures prior to the calculation of electronic properties. Binding energies, electron localization functions and bond lengths between atoms were estimated. Results: The calculated binding energies for the DOTA-radioactive atom systems were −17.792, −5.784, −8.872, −13.305 and −18.467 eV for the Ac, Fr, At, Bi and Gd complexes, respectively. The displacements of the isotopes in the DOTA cages were estimated from the variations in bond lengths, which were within 2.32–3.75 angstroms. A detailed representation of the chemical bonding in all complexes was obtained with the electron localization function (ELF). Conclusion: DOTA-Gd, DOTA-Ac and DOTA-Bi were the most stable structures in the group. Inclusion of SOC had a significant role in improving the DFT calculation accuracy for heavy radioactive atoms. Our approach is found to be appropriate for the investigation of structures with DOTA-based

  7. Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies

    Science.gov (United States)

    Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.

    2017-04-01

    Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4%  ±  3.1% for CBAC and 3.5%  ±  3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a

  8. Calculation of RBE for normal tissue complications based on charged particle track structure

    International Nuclear Information System (INIS)

    Scholz, M.

    1996-05-01

    A new approach for the calculation of RBE for normal tissue complications after charged particle and neutron irradiation is discussed. It is based on the extension of a model originally developed for application to cell survival. It can be shown that, according to the model, RBE values are determined largely by the α/β-ratio of the photon dose response curve, but are expected to be nearly independent of the absolute values of α and β. Thus, the model can be applied to normal tissue complications as well, where α/β-ratios can be determined by means of fractionation experiments. Agreement between model predictions and experimental results obtained in animal experiments confirms the applicability of the model even in the case of complex biological endpoints. (orig.)

  9. Sample size and power calculations based on generalized linear mixed models with correlated binary outcomes.

    Science.gov (United States)

    Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R

    2008-08-01

    The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
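
    The paper derives closed-form GLIMMIX-based formulas; a simulation can serve as a conceptual cross-check of what such a power calculation does. The sketch below generates correlated binary outcomes from a random-intercept logistic model and estimates a rejection rate; all parameter values are invented, and the test at the end deliberately ignores clustering (so it is anti-conservative) and is shown only to make the loop concrete:

        import math
        import random

        def simulate_power(n_per_arm=100, n_visits=4, beta=0.7, sd_intercept=1.0,
                           n_sims=400, seed=7):
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_sims):
                events, totals = [0, 0], [0, 0]
                for arm in (0, 1):
                    for _ in range(n_per_arm):
                        b = rng.gauss(0.0, sd_intercept)      # subject random effect
                        for _ in range(n_visits):             # correlated repeats
                            p = 1.0 / (1.0 + math.exp(-(-1.0 + beta * arm + b)))
                            events[arm] += rng.random() < p
                            totals[arm] += 1
                # crude two-proportion z-test on pooled observations
                p1, p0 = events[1] / totals[1], events[0] / totals[0]
                pbar = (events[0] + events[1]) / (totals[0] + totals[1])
                se = math.sqrt(pbar * (1 - pbar) * (1 / totals[0] + 1 / totals[1]))
                hits += abs(p1 - p0) / se > 1.96
            return hits / n_sims

        print(simulate_power())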

  10. An iterative approach for symmetrical and asymmetrical Short-circuit calculations with converter-based connected renewable energy sources

    DEFF Research Database (Denmark)

    Göksu, Ömer; Teodorescu, Remus; Bak-Jensen, Birgitte

    2012-01-01

    An iterative approach for short-circuit calculation of networks with power converter-based wind turbines is developed for both symmetrical and asymmetrical short-circuit grid faults. As a contribution to existing solutions, negative sequence current injection from the wind turbines is also taken into account in the calculations in case of asymmetrical faults. The developed iterative short-circuit calculation method is verified with time domain simulations.
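
    Why an iterative approach is needed can be seen from a toy fixed-point problem: the converter's injected current depends on its terminal voltage during the fault, while that voltage depends on the injected current through the network. A single-bus sketch with hypothetical per-unit values and an invented grid-code-like current rule, not the paper's method:

        Z_TH = 0.05 + 0.30j    # Thevenin impedance at the turbine terminal (pu)
        V_FAULT = 0.2 + 0.0j   # terminal voltage during the fault, no injection (pu)
        I_MAX = 1.1            # converter current limit (pu)

        def converter_current(v_mag):
            """Reactive current support proportional to the voltage dip, capped."""
            return min(I_MAX, 2.0 * max(0.0, 1.0 - v_mag))

        v = V_FAULT
        for _ in range(50):
            # -1j: reactive injection oriented so that it raises the voltage
            # through the predominantly inductive Thevenin impedance
            i = -1j * converter_current(abs(v))
            v_new = V_FAULT + Z_TH * i
            if abs(v_new - v) < 1e-9:
                break
            v = v_new
        print(f"terminal voltage {abs(v):.3f} pu, injected current {abs(i):.3f} pu")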

  11. A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks

    Directory of Open Access Journals (Sweden)

    Rui Fan

    2017-12-01

    Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC) for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load-imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
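
    For reference, the serial counterpart of the parallelized computation, Brandes' betweenness algorithm with a Dijkstra shortest-path stage for positive edge weights, fits in a short function. A CPU sketch, not the authors' GPU code; the adjacency structure is assumed to list every node as a key:

        import heapq
        from collections import defaultdict

        def betweenness_weighted(adj):
            """Brandes' algorithm with Dijkstra SSSP for positive edge weights;
            adj maps every node to a list of (neighbour, weight) pairs."""
            bc = {v: 0.0 for v in adj}
            for s in adj:
                dist = {s: 0.0}
                sigma = defaultdict(float)       # number of shortest s-v paths
                sigma[s] = 1.0
                preds = defaultdict(list)        # predecessors on shortest paths
                order = []                       # settled order for the back-sweep
                pq = [(0.0, s)]
                while pq:
                    d, v = heapq.heappop(pq)
                    if d > dist.get(v, float("inf")):
                        continue                 # stale queue entry
                    order.append(v)
                    for w, weight in adj[v]:
                        nd = d + weight
                        if nd < dist.get(w, float("inf")):
                            dist[w], sigma[w], preds[w] = nd, sigma[v], [v]
                            heapq.heappush(pq, (nd, w))
                        elif nd == dist[w]:
                            sigma[w] += sigma[v]     # another shortest path via v
                            preds[w].append(v)
                delta = defaultdict(float)           # dependency accumulation
                for v in reversed(order):
                    for p in preds[v]:
                        delta[p] += sigma[p] / sigma[v] * (1.0 + delta[v])
                    if v != s:
                        bc[v] += delta[v]
            return bc

        graph = {
            "a": [("b", 1.0), ("c", 4.0)],
            "b": [("a", 1.0), ("c", 1.0), ("d", 2.0)],
            "c": [("a", 4.0), ("b", 1.0), ("d", 1.0)],
            "d": [("b", 2.0), ("c", 1.0)],
        }
        print(betweenness_weighted(graph))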

  12. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircrafts, satellites and lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used in variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from Earth's core can be described in relatively fewer parameters and is suitable for offline computation, the magnetic sources from Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for a fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real-time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  13. A cultural study of a science classroom and graphing calculator-based technology

    Science.gov (United States)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  14. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms.

    Science.gov (United States)

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-10-16

    The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when advanced dose calculation algorithms that take into account electron transport (type B algorithms) are used. As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work, a dose-calculation experiment is performed comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case, 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV dose among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  15. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): the extreme learning machine (ELM), the radial basis function network (RBFN) and the genetic-algorithm-optimized back propagation network (GABP). The MLA-based method is validated using test data for different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM-based algorithm shows overall the best agreement with the experimental data of the three MLAs, owing to its global optimization and extrapolation ability.
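
    The classical baseline the MLAs are compared against is the Paris law, da/dN = C(ΔK)^m, a straight line in log-log coordinates. A minimal fit on invented data (the study's point is precisely that real data deviate from this linearity and depend on the stress ratio):

        import numpy as np

        # Invented crack-growth data roughly following Paris-law behaviour.
        delta_k = np.array([10.0, 15.0, 22.0, 30.0, 40.0])          # MPa*sqrt(m)
        dadn = np.array([1.1e-8, 4.0e-8, 1.4e-7, 3.9e-7, 1.0e-6])   # m/cycle

        # log10(da/dN) = log10(C) + m * log10(dK): a linear fit in log-log space.
        m, log_c = np.polyfit(np.log10(delta_k), np.log10(dadn), 1)
        c = 10.0 ** log_c
        print(f"m = {m:.2f}, C = {c:.3e}")
        print("predicted da/dN at dK = 25:", c * 25.0 ** m)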

  16. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report intends to clarify the base data for the looking-up tables of calculation errors cited in the 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made by the JACS code system, and there are two kinds: one for fuel systems in general geometry with a reflector, and another for fuel systems in simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to fuel configuration (homogeneous or heterogeneous) and fuel kind (uranium, plutonium and their mixtures, etc.). The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987. However, the data in the group of homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes that selection. As a project has been organized by OECD/NEA for the evaluation of criticality safety benchmark experiments, its results are also described. (author)

  17. Synthesis, spectroscopic characterization and DFT calculations of novel Schiff base containing thiophene ring

    Science.gov (United States)

    Ermiş, Emel

    2018-03-01

    In this study, a new Schiff base derivative, 2-[(2-hydroxy-5-thiophen-2-yl-benzylidene)-amino]-6-methyl-benzoic acid (5), which has a thiophene ring and N, O donor groups, was successfully prepared by the condensation reaction of 2-hydroxy-5-(thiophen-2-yl)benzaldehyde (3) and 2-amino-6-methylbenzoic acid (4). The characterization of the Schiff base derivative (5) was performed experimentally by UV-Vis., FTIR, 1H and 13C NMR spectroscopic methods and elemental analysis. Density Functional Theory (DFT/B3LYP/6-311+G(d, p)) calculations were used to examine the optimized molecular geometry, vibrational frequencies, 1H and 13C NMR chemical shifts, UV-Vis. spectroscopic parameters, HOMO-LUMO energies and molecular electrostatic potential (MEP) map of the compound (5), and the theoretical results were compared to the experimental data. In addition, the energetic behaviors such as the sum of electronic and thermal free energy (SETFE), atomic charges, and dipole moment of the compound (5) in solvent media were investigated using the B3LYP method with the 6-311+G(d, p) basis set. The obtained experimental and theoretical results were found to be compatible with each other and they supported the proposed molecular structure for the synthesized Schiff base derivative (5).
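
    For readers who want to reproduce this kind of workflow, the sketch below runs a B3LYP/6-311+G(d,p) calculation with the open-source PySCF package. The geometry is a water placeholder, not compound (5), and the choice of PySCF is an assumption for illustration; the study does not state which program was used.

        from pyscf import gto, dft

        # Minimal DFT sketch at the B3LYP/6-311+G(d,p) level; placeholder geometry.
        mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24",
                    basis="6-311+g(d,p)")
        mf = dft.RKS(mol)
        mf.xc = "b3lyp"
        e_total = mf.kernel()                        # SCF total energy, Hartree
        homo = mf.mo_energy[mol.nelectron // 2 - 1]  # highest occupied MO
        lumo = mf.mo_energy[mol.nelectron // 2]      # lowest unoccupied MO
        print(e_total, homo, lumo)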

  18. Calendar calculating in savants with autism and healthy calendar calculators.

    Science.gov (United States)

    Dubischar-Krivec, A M; Neumann, N; Poustka, F; Braun, C; Birbaumer, N; Bölte, S

    2009-08-01

    Calendar calculation is the ability to quickly name the day of the week that a given date falls on. Previous research has suggested that savant calendar calculation is based on rote memory and the use of rule-based arithmetic skills. The objective of this study was to identify the cognitive processes that distinguish calendar calculation in savant individuals from healthy calendar calculators. Savant calendar calculators with autism (ACC, n=3), healthy calendar calculators (HCC, n=3), non-savant subjects with autism (n=6) and healthy calendar calculator laymen (n=18) were included in the study. All participants calculated dates of the present (current month). In addition, ACC and HCC also calculated dates of the past and future 50 years. ACC showed shorter reaction times and fewer errors than HCC and non-savant subjects with autism, and significantly fewer errors than healthy calendar calculator laymen, when calculating dates of the present. Moreover, ACC performed faster and more accurately than HCC on past dates. However, no differences between ACC and HCC were detected for future date calculation. The findings may imply distinct calendar calculation strategies in ACC and HCC, with HCC relying on calendar regularities for all types of dates and ACC involving (rote) memory when processing dates of the past and the present.
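
    One example of the rule-based arithmetic such studies refer to is Zeller's congruence, which maps any Gregorian date to its weekday with integer arithmetic. The snippet below is a generic textbook method, not a claim about the strategies the participants actually used.

        def day_of_week(year, month, day):
            # Zeller's congruence for the Gregorian calendar.
            if month < 3:            # January and February are treated as
                month += 12          # months 13 and 14 of the previous year
                year -= 1
            K, J = year % 100, year // 100
            h = (day + 13*(month + 1)//5 + K + K//4 + J//4 + 5*J) % 7
            return ["Saturday", "Sunday", "Monday", "Tuesday",
                    "Wednesday", "Thursday", "Friday"][h]

        print(day_of_week(2000, 1, 1))   # -> Saturday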

  19. Development and Validation of a Clinically Based Risk Calculator for the Transdiagnostic Prediction of Psychosis.

    Science.gov (United States)

    Fusar-Poli, Paolo; Rutigliano, Grazia; Stahl, Daniel; Davies, Cathy; Bonoldi, Ilaria; Reilly, Thomas; McGuire, Philip

    2017-05-01

    The overall effect of At Risk Mental State (ARMS) services for the detection of individuals who will develop psychosis in secondary mental health care is undetermined. To measure the proportion of individuals with a first episode of psychosis detected by ARMS services in secondary mental health services, and to develop and externally validate a practical web-based individualized risk calculator tool for the transdiagnostic prediction of psychosis in secondary mental health care. Clinical register-based cohort study. Patients were drawn from electronic, real-world, real-time clinical records relating to 2008 to 2015 routine secondary mental health care in the South London and the Maudsley National Health Service Foundation Trust. The study included all patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within the South London and the Maudsley National Health Service Foundation Trust in the period between January 1, 2008, and December 31, 2015. Data analysis began on September 1, 2016. Risk of development of nonorganic International Statistical Classification of Diseases and Related Health Problems, Tenth Revision psychotic disorders. A total of 91 199 patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within South London and the Maudsley National Health Service Foundation Trust were included in the derivation (n = 33 820) or external validation (n = 54 716) data sets. The mean age was 32.97 years, 50.88% were men, and 61.05% were of white race/ethnicity. The mean follow-up was 1588 days. The overall 6-year risk of psychosis in secondary mental health care was 3.02% (95% CI, 2.88%-3.15%), which is higher than the 6-year risk in the local general population (0.62%). Compared with the ARMS designation, all of the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision diagnoses showed a lower risk of psychosis, with the exception of bipolar mood

  20. Development and Validation of a Clinically Based Risk Calculator for the Transdiagnostic Prediction of Psychosis

    Science.gov (United States)

    Rutigliano, Grazia; Stahl, Daniel; Davies, Cathy; Bonoldi, Ilaria; Reilly, Thomas; McGuire, Philip

    2017-01-01

    Importance The overall effect of At Risk Mental State (ARMS) services for the detection of individuals who will develop psychosis in secondary mental health care is undetermined. Objective To measure the proportion of individuals with a first episode of psychosis detected by ARMS services in secondary mental health services, and to develop and externally validate a practical web-based individualized risk calculator tool for the transdiagnostic prediction of psychosis in secondary mental health care. Design, Setting, and Participants Clinical register-based cohort study. Patients were drawn from electronic, real-world, real-time clinical records relating to 2008 to 2015 routine secondary mental health care in the South London and the Maudsley National Health Service Foundation Trust. The study included all patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within the South London and the Maudsley National Health Service Foundation Trust in the period between January 1, 2008, and December 31, 2015. Data analysis began on September 1, 2016. Main Outcomes and Measures Risk of development of nonorganic International Statistical Classification of Diseases and Related Health Problems, Tenth Revision psychotic disorders. Results A total of 91 199 patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within South London and the Maudsley National Health Service Foundation Trust were included in the derivation (n = 33 820) or external validation (n = 54 716) data sets. The mean age was 32.97 years, 50.88% were men, and 61.05% were of white race/ethnicity. The mean follow-up was 1588 days. The overall 6-year risk of psychosis in secondary mental health care was 3.02% (95% CI, 2.88%-3.15%), which is higher than the 6-year risk in the local general population (0.62%). Compared with the ARMS designation, all of the International Statistical Classification of Diseases and Related Health Problems

  1. The Calculation Of Titanium Buildup Factor Based On Monte Carlo Method

    International Nuclear Information System (INIS)

    Has, Hengky Istianto; Achmad, Balza; Harto, Andang Widi

    2001-01-01

    The objective of a radioactive-waste container is to reduce radiation emission to the environment. For that purpose, we need a material with the ability to shield that radiation and last for 10,000 years. Titanium is one of the materials that can be used to make containers. Unfortunately, its buildup factor, which is an important factor in setting up radiation shielding, has not been calculated. Therefore, the calculation of the titanium buildup factor as a function of other parameters is needed. The buildup factor can be determined either experimentally or by simulation. The purpose of this study is to determine the titanium buildup factor using a simulation program based on the Monte Carlo method. Monte Carlo is a stochastic method and is therefore well suited to calculating nuclear radiation, which is inherently random. A simulation program is also able to give results when experiments cannot be performed because of their limitations. The simulation shows that the buildup factor and the dose buildup factor increase with titanium thickness and decrease with photon energy. The photon energy used in the simulation ranged from 0.2 MeV to 2.0 MeV with a 0.2 MeV step size, while the thickness ranged from 0.2 cm to 3.0 cm with a step size of 0.2 cm. The highest buildup factor is β = 1.4540 ± 0.047229 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm. The lowest is β = 1.0123 ± 0.000650 at 2.0 MeV photon energy with 0.2 cm of titanium. For the dose buildup factor, the highest value is β_D = 1.3991 ± 0.013999 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm and the lowest is β_D = 1.0042 ± 0.000597 at 2.0 MeV with a titanium thickness of 0.2 cm. For the photon energy and titanium thickness used in the simulation, the buildup factor as a function of photon energy E (MeV) and titanium thickness T (cm) can be fitted as β = 1.1264 exp(-0.0855E) exp(0.0584T)
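
    The fitted expression can be evaluated directly over the simulated grid; the sketch below reproduces the reported trend (β falling with photon energy, rising with thickness). Note that a smooth fit will not exactly match the extreme simulated values quoted above.

        import numpy as np

        # Empirical buildup-factor fit reported above.
        def buildup(E_MeV, T_cm):
            return 1.1264 * np.exp(-0.0855 * E_MeV) * np.exp(0.0584 * T_cm)

        E = np.arange(0.2, 2.01, 0.2)   # photon energy grid, MeV
        T = np.arange(0.2, 3.01, 0.2)   # titanium thickness grid, cm
        grid = buildup(E[:, None], T[None, :])
        # Smallest at (2.0 MeV, 0.2 cm), largest at (0.2 MeV, 3.0 cm).
        print(grid.min(), grid.max())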

  2. Design of Pd-Based Bimetallic Catalysts for ORR: A DFT Calculation Study

    Directory of Open Access Journals (Sweden)

    Lihui Ou

    2015-01-01

    Full Text Available Developing Pd-lean catalysts for the oxygen reduction reaction (ORR) is the key for large-scale application of proton exchange membrane fuel cells (PEMFCs). In the present paper, we have proposed a multiple-descriptor strategy for designing efficient and durable ORR Pd-based alloy catalysts. We demonstrated that an ideal Pd-based bimetallic alloy catalyst for the ORR should simultaneously possess negative alloy formation energy, negative surface segregation energy of Pd, and a lower oxygen binding ability than pure Pt. By performing detailed DFT calculations on the thermodynamics, surface chemistry and electronic properties of Pd-M alloys, Pd-V, Pd-Fe, Pd-Zn, Pd-Nb, and Pd-Ta are identified theoretically to have a stable Pd-segregated surface and improved ORR activity. Factors affecting these properties are analyzed. The alloy formation energy of Pd with transition metals M is mainly determined by their electron interaction, which may be the origin of the negative alloy formation energy of the Pd-M alloys. The surface segregation energy of Pd is primarily determined by the surface energy and atomic radius of M. Metals M with a smaller atomic radius and higher surface energy tend to favor the surface segregation of Pd in the corresponding Pd-M alloys.

  3. Research on calculation of the IOL tilt and decentration based on surface fitting.

    Science.gov (United States)

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

    The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. Through the established relationship between IOL tilt (decentration) and the scanned angle at which each AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject are given. Moreover, the horizontal and vertical tilt was also obtained. Accordingly, possible errors of IOL tilt and decentration exist in the method employed by the AS-OCT instrument. Based on 6-12 AS-OCT images at different directions, the tilt angle and decentration values are shown, respectively. The method of fitting the IOL surfaces can accurately analyze the IOL's location, and six AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.
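
    A standard way to implement the spherical fitting step is linear least squares: a sphere x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d is linear in (a, b, c, d). The sketch below is a generic sphere fit on synthetic points, not the authors' exact procedure for AS-OCT data.

        import numpy as np

        def fit_sphere(points):
            # Solve x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d in least squares;
            # (a, b, c) is the center and r^2 = d + a^2 + b^2 + c^2.
            A = np.column_stack([2 * points, np.ones(len(points))])
            f = (points**2).sum(axis=1)
            (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
            center = np.array([a, b, c])
            return center, np.sqrt(d + center @ center)

        # Synthetic test: noisy points on a sphere of radius 8 about (1, -2, 3).
        rng = np.random.default_rng(1)
        u = rng.normal(size=(500, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        pts = np.array([1.0, -2.0, 3.0]) + 8.0 * u + rng.normal(0, 0.01, (500, 3))
        print(fit_sphere(pts))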

  4. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lin Li

    2013-01-01

    Full Text Available The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. Through the established relationship between IOL tilt (decentration) and the scanned angle at which each AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject are given. Moreover, the horizontal and vertical tilt was also obtained. Accordingly, possible errors of IOL tilt and decentration exist in the method employed by the AS-OCT instrument. Based on 6–12 AS-OCT images at different directions, the tilt angle and decentration values are shown, respectively. The method of fitting the IOL surfaces can accurately analyze the IOL’s location, and six AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.

  5. Using 3d Bim Model for the Value-Based Land Share Calculations

    Science.gov (United States)

    Çelik Şimşek, N.; Uzun, B.

    2017-11-01

    According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, the 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3). However, the main problem is that the land share has often been determined on an area basis from the project before construction of the building. The objective of this study is to propose a new approach to value-based land share calculations for the condominium units that are subject to condominium ownership. The current approaches and their failures in determining land shares are examined, and the factors that affect the values of the condominium units are determined according to the legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of land shares.

  6. Variations on calculating left-ventricular volume with the radionuclide count-based method

    International Nuclear Information System (INIS)

    Koral, K.F.; Rabinovitch, M.A.; Kalff, V.

    1985-01-01

    Various methods for the calculation of left-ventricular volume by the count-based method, utilizing red-blood-cell labeling with 99mTc and a parallel-hole collimator, are evaluated. Attenuation correction, linked to an additional left posterior oblique view, is utilized for all 26 patients. The authors examine (1) two methods of calculating depth, (2) the use of a pair of attenuation coefficients, (3) the optimization of attenuation coefficients, and (4) the employment of an automated program for expansion of the region of interest. The standard error of the estimate (SEE) from the correlation of the radionuclide volumes with the contrast-angiography volumes, and the root-mean-square difference between the two volume sets at the minimum SEE, are computed. It is found that optimizing a single linear attenuation coefficient assumed for attenuation correction best reduces the value of the SEE. The average of the optimum value from the end-diastolic data and that from the end-systolic data is 0.11 cm-1. This value agrees with the mean minus one standard deviation value determined independently from computed tomography scans (0.13 - 0.02 cm-1). It is also found that expansion of the region of interest beyond the second-derivative edge with an automated program, in order to correctly include more counts, does not lower the SEE as hoped. This result is in contrast to the results of others with different data and a manual method. Possible causes for the difference are given.

  7. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    Science.gov (United States)

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province has caused a rapid transition of the coastal land use structure, which threatens land ecological security. Calculating the ecosystem service value of land use and exploring the ecological security baseline can therefore provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The contribution rates of each ecological function value, from high to low, were in this order: nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line and the ecological system, in which humans are the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., an ecological core protection zone, an ecological buffer zone, an ecological restoration zone and a human activity core zone.

  8. Calculating the Fickian diffusivity for a lattice-based random walk with agents and obstacles of different shapes and sizes.

    Science.gov (United States)

    Ellery, Adam J; Baker, Ruth E; Simpson, Matthew J

    2015-11-24

    Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as examining the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles has a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
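
    The simulation approach that this exact analysis replaces can be sketched in a few lines: a single agent hops on a periodic lattice, moves into obstacle sites are aborted, and D is recovered from the mean squared displacement (MSD = 4Dt in two dimensions). Lattice size, obstacle density and step counts below are illustrative choices, and a single fixed obstacle field is used for simplicity.

        import numpy as np

        rng = np.random.default_rng(2)
        L, density, steps, walks = 200, 0.2, 400, 500
        obstacles = rng.random((L, L)) < density   # one fixed obstacle field
        obstacles[0, 0] = False                    # keep the start site free
        moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

        disp2 = []
        for _ in range(walks):
            x = y = 0                              # unwrapped coordinates
            for _ in range(steps):
                dx, dy = moves[rng.integers(4)]
                if not obstacles[(x + dx) % L, (y + dy) % L]:
                    x, y = x + dx, y + dy          # move aborted if site occupied
            disp2.append(x * x + y * y)

        print("estimated D:", np.mean(disp2) / (4 * steps))  # lattice units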

  9. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    Science.gov (United States)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short-circuit calculation method, based on pre-computed surfaces, to determine the short-circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short-circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under a grid fault. However, existing methods have difficulty calculating the short-circuit current of a DFIG in engineering practice because of its complexity. A short-circuit calculation method based on pre-computed surfaces was proposed, in which the surface of the short-circuit current was constructed as a function of the calculating impedance and the open-circuit voltage. The short-circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of the short-circuit current at different times were established, and the procedure for DFIG short-circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
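
    Once the surfaces are tabulated, the on-line step reduces to interpolation. The sketch below looks up a current value on a pre-computed grid; the surface values are a synthetic placeholder, not DFIG results, and the grid ranges are assumptions.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        Z = np.linspace(0.1, 2.0, 40)      # calculating impedance, p.u. (assumed)
        U = np.linspace(0.2, 1.2, 30)      # open-circuit voltage, p.u. (assumed)
        ZZ, UU = np.meshgrid(Z, U, indexing="ij")
        I_surface = UU / ZZ                # placeholder surface shape

        lookup = RegularGridInterpolator((Z, U), I_surface)
        print(lookup([[0.45, 0.9]]))       # interpolated short-circuit current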

  10. Calculation of the clearance requirements for the development of a hemodialysis-based wearable artificial kidney.

    Science.gov (United States)

    Kim, Dong Ki; Lee, Jung Chan; Lee, Hajeong; Joo, Kwon Wook; Oh, Kook-Hwan; Kim, Yon Su; Yoon, Hyung-Jin; Kim, Hee Chan

    2016-04-01

    The wearable artificial kidney (WAK) has been considered an alternative to standard hemodialysis (HD) for many years. Although various novel WAK systems have recently been developed for use in clinical applications, the target performance or standard dose of dialysis has not yet been determined. To calculate the appropriate clearance for an HD-based WAK system for the treatment of patients with end-stage renal disease under various dialysis conditions, a classic variable-volume two-compartment kinetic model was used to simulate an anuric patient with variable target time-averaged creatinine concentration (TAC), daily water intake volume, daily dialysis pause time, and patient body weight. A 70-kg anuric patient with an HD-based WAK system operating for 24 h required dialysis clearances of creatinine of at least 100, 50, and 25 mL/min to achieve TACs of 1.0, 2.0, and 4.0 mg/dL, respectively. The daily water intake volume did not affect the required dialysis clearance under the various conditions. As the pause time per day for the dialysis increased, higher dialysis clearances were required to maintain the target TAC. The present study provides theoretical dialysis doses for an HD-based WAK system to achieve various target TACs through relevant mathematical kinetic modeling. The theoretical results may contribute to the determination of the technical specifications required for the development of a WAK system. © 2015 The Authors. Hemodialysis International published by Wiley Periodicals, Inc. on behalf of International Society for Hemodialysis.
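
    The arithmetic behind the quoted numbers can be checked with a single-compartment simplification of the kinetics, V dC/dt = G - KC, whose steady state is TAC = G/K. The generation rate G below is an assumed value chosen to match the 70-kg, 100 mL/min, 1.0 mg/dL case; the paper's actual model is a variable-volume two-compartment one.

        # Single-compartment sketch: V*dC/dt = G - K*C, steady state C = G/K.
        V = 0.58 * 70.0      # distribution volume, L (about 58% of body weight)
        G = 1440.0           # creatinine generation, mg/day (assumed)
        K = 144.0            # clearance: 100 mL/min expressed as L/day

        dt, days = 0.001, 10.0
        C = 0.0              # mg/L
        for _ in range(int(days / dt)):
            C += dt * (G - K * C) / V
        print("steady-state TAC:", C / 10.0, "mg/dL")   # -> about 1.0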

  11. Calculation of the yearly energy performance of heating systems based on the European Building Energy Directive and related CEN Standards

    DEFF Research Database (Denmark)

    Olesen, Bjarne W.; de Carli, Michele

    2011-01-01

    According to the Energy Performance of Buildings Directive (EPBD), all new European buildings (residential, commercial, industrial, etc.) must since 2006 have an energy declaration based on the calculated energy performance of the building, including heating, ventilating, cooling and lighting systems. The additional losses of the heating system can amount to up to 20% of the building energy demand, depending on the type of heat emitter, type of control, pump and boiler. Keywords: Heating systems; CEN standards; Energy performance; Calculation methods

  12. Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on the Steenbeck Model

    International Nuclear Information System (INIS)

    Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.

    2006-01-01

    The work is devoted to the problem of determining plasma torch parameters and power source parameters (the working voltage and current of the plasma torch) at the predesign stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by the experiments.

  13. Accuracy of continuum electrostatic calculations based on three common dielectric boundary definitions.

    Science.gov (United States)

    Onufriev, Alexey V; Aguilar, Boris

    2014-05-01

    MS DB definition by doubling of the solute dielectric constant. However, the use of the higher interior dielectric does not eliminate the large individual deviations between pairwise interactions computed within the two DB definitions. It is argued that while the MS based definition of the dielectric boundary is more physically correct in some types of practical calculations, the choice is not so clear in some other common scenarios.

  14. Climate Impact of a Regional Nuclear Weapons Exchange: An Improved Assessment Based On Detailed Source Calculations

    Science.gov (United States)

    Reisner, Jon; D'Angelo, Gennaro; Koo, Eunmo; Even, Wesley; Hecht, Matthew; Hunke, Elizabeth; Comeau, Darin; Bos, Randall; Cooley, James

    2018-03-01

    We present a multiscale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock, Oman, Stenchikov, et al. (2007, https://doi.org/10.5194/acp-7-2003-2007), based on the analysis of Toon et al. (2007, https://doi.org/10.5194/acp-7-1973-2007), which assumes a regional exchange between India and Pakistan of fifty 15 kt weapons detonated by each side. We expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10⁹ kg of black carbon in the upper troposphere (approximately from 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10⁹ kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures

  15. Estimation of Real-Time Flood Risk on Roads Based on Rainfall Calculated by the Revised Method of Missing Rainfall

    Directory of Open Access Journals (Sweden)

    Eunmi Kim

    2014-09-01

    Full Text Available Recently, flood damage caused by frequent localized downpours in cities is increasing on account of abnormal climate phenomena and the growth of impermeable areas due to urbanization. This study suggests a method to estimate real-time flood risk on roads for drivers based on accumulated rainfall. The rainfall over a road link is calculated using the revised method for missing rainfall in meteorology, because rainfall is not measured directly on roads. To process the data in real time, we use the inverse distance weighting (IDW) method, which is suitable for the computing system and is commonly used for precipitation due to its simplicity. With the real-time accumulated rainfall, the flooding history, the rainfall range causing flooding derived from previous rainfall information, and the frequency probability of precipitation are used to determine the flood risk on roads. The result of a simulation using the suggested algorithms shows a high concordance rate between actual flooded areas in the past and the flooded areas derived from the simulation for the research region in Busan, Korea.
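
    The IDW step itself is compact. The sketch below estimates rainfall at an ungauged road-link coordinate from nearby gauges; the coordinates and intensities are made-up illustrative values.

        import numpy as np

        def idw(gauge_xy, gauge_rain, target_xy, power=2.0):
            # Inverse distance weighting with exponent `power`.
            d = np.linalg.norm(gauge_xy - target_xy, axis=1)
            if np.any(d < 1e-9):               # target coincides with a gauge
                return float(gauge_rain[np.argmin(d)])
            w = 1.0 / d**power
            return float(np.sum(w * gauge_rain) / np.sum(w))

        gauges = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])  # km
        rain = np.array([12.0, 20.0, 8.0, 16.0])                             # mm/h
        print(idw(gauges, rain, np.array([2.0, 1.5])))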

  16. Calculating systems-scale energy efficiency and net energy returns: A bottom-up matrix-based approach

    International Nuclear Information System (INIS)

    Brandt, Adam R.; Dale, Michael; Barnhart, Charles J.

    2013-01-01

    In this paper we expand the work of Brandt and Dale (2011) on ERRs (energy return ratios) such as EROI (energy return on investment). This paper describes a “bottom-up” mathematical formulation which uses matrix-based computations adapted from the LCA (life cycle assessment) literature. The framework allows multiple energy pathways and flexible inclusion of non-energy sectors. This framework is then used to define a variety of ERRs that measure the amount of energy supplied by an energy extraction and processing pathway compared to the amount of energy consumed in producing the energy. ERRs that were previously defined in the literature are cast in our framework for calculation and comparison. For illustration, our framework is applied to include oil production and processing and generation of electricity from PV (photovoltaic) systems. Results show that ERR values will decline as system boundaries expand to include more processes. NERs (net energy return ratios) tend to be lower than GERs (gross energy return ratios). External energy return ratios (such as net external energy return, or NEER (net external energy ratio)) tend to be higher than their equivalent total energy return ratios. - Highlights: • An improved bottom-up mathematical method for computing net energy return metrics is developed. • Our methodology allows arbitrary numbers of interacting processes acting as an energy system. • Our methodology allows much more specific and rigorous definition of energy return ratios such as EROI or NER
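
    The core of such matrix-based accounting is the Leontief-style solve x = (I - A)^-1 f, familiar from the LCA literature. The toy sketch below, with assumed coefficients rather than the paper's data, computes a gross energy return ratio for one delivered unit of energy.

        import numpy as np

        # Assumed inter-process requirements: A[i, j] is the input from process i
        # needed per unit output of process j (illustrative values only).
        A = np.array([[0.00, 0.05],
                      [0.10, 0.00]])
        c = np.array([0.08, 0.30])   # direct external energy inputs per unit output
        f = np.array([1.0, 0.0])     # final demand: one unit of energy product 1

        x = np.linalg.solve(np.eye(2) - A, f)   # process throughputs
        energy_in = c @ x                       # total energy consumed
        print("gross energy return ratio:", f.sum() / energy_in)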

  17. Whole-molecule calculation of log p based on molar volume, hydrogen bonds, and simulated 13C NMR spectra.

    Science.gov (United States)

    Schnackenberg, Laura K; Beger, Richard D

    2005-01-01

    The prediction of Log P is usually accomplished using either substructure or whole-molecule approaches. However, these methods are complicated, and previous whole-molecule approaches have not been successful for the prediction of Log P in very complex molecules. The observed chemical shifts in nuclear magnetic resonance (NMR) spectroscopy are related to the electrostatics at the nucleus, which are influenced by solute-solvent interactions. The different solvation effects on a molecule by either water or methanol have a strong effect on the NMR chemical shift value. Therefore, the chemical shift values observed in an aqueous and an organic solvent should correlate with Log P. This paper develops a rapid, objective model of Log P based on molar volume, hydrogen bonds, and differences in calculated 13C NMR chemical shifts for a diverse set of compounds. A partial least squares (PLS) model of Log P built on the sum of carbon chemical shift differences in water and methanol, molar volume, and the number of hydrogen bond donors and acceptors in 162 diverse compounds gave an r2 value of 0.88. The average r2 for 10 training models of Log P made from 90% of the data was 0.87 ± 0.01. The average q2 for 10 leave-10%-out cross-validation test sets was 0.87 ± 0.05.
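
    A PLS model of this kind is short to set up with scikit-learn. The sketch below uses synthetic descriptors in place of the 162-compound set, so the scores it prints are illustrative, not the paper's.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        # Columns stand in for: sum of 13C shift differences, molar volume,
        # H-bond donor count, H-bond acceptor count (all synthetic).
        X = rng.normal(size=(162, 4))
        logp = X @ np.array([0.9, 0.6, -0.4, -0.5]) + rng.normal(0, 0.3, 162)

        pls = PLSRegression(n_components=3)
        q2 = cross_val_score(pls, X, logp, cv=10, scoring="r2")
        print("mean cross-validated q2:", q2.mean())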

  18. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment... model type. (a) Fuel economy values for a base level are calculated from vehicle configuration fuel... update sales projections at the time any model type value is calculated for a label value. (iii) The...

  19. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom.

    Science.gov (United States)

    Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M

    2014-02-01

    To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model

  20. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    International Nuclear Information System (INIS)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.

    2014-01-01

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%.

  1. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)

    2014-02-15

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%.

  2. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    Science.gov (United States)

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from its more accurate cost calculation compared to traditional step-down costing, and from its potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.

  3. Shapley Value-Based Payment Calculation for Energy Exchange between Micro- and Utility Grids

    Directory of Open Access Journals (Sweden)

    Robin Pilling

    2017-10-01

    Full Text Available In recent years, microgrids have developed into important parts of power systems and have provided affordable, reliable, and sustainable supplies of electricity. Each microgrid is managed as a single controllable entity with respect to the existing power system, but there is demand for joint operation and for sharing the benefits between a microgrid and its hosting utility. This paper focuses on the joint operation of a microgrid and its hosting utility, which cooperatively minimize daily generation costs through energy exchange, and presents a payment calculation scheme for power transactions based on a fair allocation of the reduced generation costs. To fairly compensate for energy exchange between the micro- and utility grids, we adopt the cooperative game theoretic solution concept of the Shapley value. We design a case study for a fictitious interconnection model between the Mueller microgrid in Austin, Texas and the utility grid in Taiwan. Our case study shows that, when compared to standalone generation, both the micro- and utility grids are better off when they collaborate in power exchange, regardless of their individual contributions to the power exchange coalition.
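
    For a two-player coalition the Shapley value simply splits the joint savings equally, but the same permutation-based formula extends to more participants. The sketch below uses an assumed characteristic function (savings per coalition), not the paper's case-study numbers.

        from itertools import permutations
        from math import factorial

        def shapley(players, v):
            # Average each player's marginal contribution over all join orders.
            phi = {p: 0.0 for p in players}
            for order in permutations(players):
                coalition = frozenset()
                for p in order:
                    with_p = coalition | {p}
                    phi[p] += v[with_p] - v[coalition]
                    coalition = with_p
            n = factorial(len(players))
            return {p: phi[p] / n for p in players}

        players = ("microgrid", "utility")
        v = {frozenset(): 0.0,
             frozenset({"microgrid"}): 0.0,   # no savings when operating alone
             frozenset({"utility"}): 0.0,
             frozenset(players): 120.0}       # joint savings, $/day (assumed)
        print(shapley(players, v))            # -> 60.0 each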

  4. Experimental verification of internal dosimetry calculations: Construction of a heterogeneous phantom based on human organs

    International Nuclear Information System (INIS)

    Lauridsen, B.; Hedemann Jensen, P.

    1987-01-01

    The basic dosimetric quantity in ICRP Publication No. 30 is the absorbed fraction AF(T<-S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T<-S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T<-S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing shells with shapes identical to the original organs. Each organ has been made of two shells. The same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dose meters in a matrix so that the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented. (orig.)

  5. Accelerated materials design of fast oxygen ionic conductors based on first principles calculations

    Science.gov (United States)

    He, Xingfeng; Mo, Yifei

    Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first-principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material with enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic factors limiting the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first-principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order-of-magnitude increase in oxygen ionic conductivity, up to 0.1 S cm-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of the NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first-principles techniques. This computational methodology and workflow can be applied to the materials design of any (e.g., Li+, Na+) fast ion-conducting materials.

  6. A method for the calculation of collision strengths for complex atomic structures based on Slater parameter optimisation

    International Nuclear Information System (INIS)

    Fawcett, B.C.; Mason, H.E.

    1989-02-01

    This report presents details of a new method to enable the computation of collision strengths for complex ions, adapted from long-established optimisation techniques previously applied to the calculation of atomic structures and oscillator strengths. The procedure involves the adjustment of Slater parameters so that they determine improved energy levels and eigenvectors. These provide a basis for collision strength calculations in ions where ab initio computations break down or result in reducible errors. The application is demonstrated through modifications of the DISTORTED WAVE collision code and the SUPERSTRUCTURE atomic-structure code, which interface via the transformation code JAJOM, which processes their output. (author)

  7. Understanding the interfacial properties of graphene-based materials/BiOI heterostructures by DFT calculations

    Science.gov (United States)

    Dai, Wen-Wu; Zhao, Zong-Yan

    2017-06-01

    Constructing heterostructures is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and to couple the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of its conduction band. To address this problem, it is worthwhile to find materials that possess a suitable band gap, proper band edge positions, and high carrier mobility to combine with BiOI to form heterostructures. In this study, graphene-based materials (including graphene, graphene oxide, and g-C3N4) were chosen as candidates for this purpose. The charge transfer, interface interaction, and band offsets are analyzed in detail by DFT calculations. Results indicated that the graphene-based materials and BiOI come into contact and form van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C3N4 and BiOI shift with the Fermi level and form standard type-II heterojunctions. In addition, the overall analysis of the charge density difference, Mulliken populations, and band offsets indicated that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, combining BiOI with 2D materials to construct heterostructures not only makes use of their uniquely high electron mobility, but also adjusts the position of the energy bands and promotes the separation of photo-generated carriers, which provides useful hints for applications in photocatalysis.

  8. Neutronic calculations of AFPR-100 reactor based on Spherical Cermet Fuel particles

    International Nuclear Information System (INIS)

    Benchrif, A.; Chetaine, A.; Amsil, H.

    2013-01-01

    Highlights: • The AFPR-100 reactor is a small nuclear reactor without on-site refueling, originally based on a TRISO micro-fuel element. • The AFPR-100 reactor was re-designed using the new Spherical Cermet fuel element. • The adoption of the Cermet fuel instead of the TRISO fuel reduces the core lifetime by 3.1 equivalent full power years. • We discuss the new micro-fuel element candidate for small and medium sized reactors. - Abstract: The Atoms For Peace Reactor (AFPR-100), a 100 MW(e) design without the need for on-site refueling, was originally based on UO2 TRISO fuel coated particles embedded in a carbon matrix directly cooled by light water. AFPR-100 is considered a small nuclear reactor without open-vessel refueling and is proposed by Pacific Northwest National Laboratory (PNNL). On account of significant irradiation swelling in the silicon carbide fission-product barrier coating layer of the TRISO fuel element, a Spherical Cermet fuel element has been proposed. Indeed, the new fuel concept, which was developed by PNNL, consists of replacing the pyro-carbon and ceramic coatings, which are incompatible with low temperature, with zirconium. The latter was chosen to avoid any potential Wigner energy effect issues in the TRISO fuel element. The purpose of this study is to assess the goal of the AFPR-100 concept using the Cermet fuel, namely that the predicted core lifetime may be extended for a reasonably long period without on-site refueling. We investigated the neutronic parameters of the reactor core with the calculation code SRAC95. The results suggest that a core fuel lifetime beyond 12 equivalent full power years (EFPYs) is possible. Compared with the TRISO design, however, the adoption of the Cermet fuel concept shows a core lifetime decrease of about 3.1 EFPY

  9. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    Science.gov (United States)

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  10. Review of theoretical calculations of hydrogen storage in carbon-based materials

    Energy Technology Data Exchange (ETDEWEB)

    Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)

    2001-02-01

    In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)

  11. Volumetric Arterial Wall Shear Stress Calculation Based on Cine Phase Contrast MRI

    NARCIS (Netherlands)

    Potters, Wouter V.; van Ooij, Pim; Marquering, Henk; VanBavel, Ed; Nederveen, Aart J.

    2015-01-01

    Purpose: To assess the accuracy and precision of a volumetric wall shear stress (WSS) calculation method applied to cine phase contrast magnetic resonance imaging (PC-MRI) data. Materials and Methods: Volumetric WSS vectors were calculated in software phantoms. WSS algorithm parameters were optimized

  12. Evaluation of a rapid LMP-based approach for calculating marginal unit emissions

    International Nuclear Information System (INIS)

    Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.

    2013-01-01

    Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using the IEEE RTS-96 system used to evaluate the LMP approach. • Incorporating a membership function enhanced the reliability of the pollutant estimates. • Error in pollutant estimate typically low for CO2, NOX and SO2. - Abstract: To evaluate the sustainability of systems that draw power from electrical grids there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs, the type of the marginal generating unit can be identified and pollutant emissions are estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time based on typical system conditions (e.g. loads and fuel costs) without the use of an MF. With the addition of an MF, the LEEM was found to provide emission estimates with

  13. Bending Moment Calculations for Piles Based on the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-xin Jie

    2013-01-01

Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, pile, and sheet pile wall were made to investigate bending moment computational methods. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated from stress, and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grids across the pile section, it sometimes results in variation of the results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet pile wall, whereas the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; a comparison of results is therefore necessary when performing the analysis.
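The two bending-moment recovery routes the abstract contrasts can be made concrete with a short sketch: integrating the axial stress over the cross-section, and differentiating a displacement profile twice. The section dimensions, modulus, and assumed moment below are fabricated for illustration.

```python
import numpy as np

# Minimal sketch of the two bending-moment recovery routes discussed above,
# on a fabricated rectangular pile section; all numbers are illustrative.

# --- Route 1: integrate axial stress over the cross-section, M = int(sigma * y) dA
b, h, n = 0.5, 1.0, 50                      # section width, depth (m), grid cells
y = np.linspace(-h / 2, h / 2, n)           # fibre positions across the depth
M_true = 120e3                              # assumed bending moment, N*m
I = b * h**3 / 12                           # second moment of area
sigma = M_true * y / I                      # linear bending-stress profile
M_from_stress = np.trapz(sigma * y * b, y)  # recovered moment

# --- Route 2: differentiate a displacement profile, M = E*I * w''(x)
E = 30e9                                    # Young's modulus, Pa (concrete-like)
x = np.linspace(0.0, 10.0, 201)             # positions along the pile, m
w = (M_true / (2 * E * I)) * x**2           # deflection consistent with constant M
M_from_disp = E * I * np.gradient(np.gradient(w, x), x)

print(f"from stress:       {M_from_stress / 1e3:.1f} kN*m")
print(f"from displacement: {M_from_disp[100] / 1e3:.1f} kN*m")
```

The stress route depends directly on how many grid cells span the section (the quadrature), while the displacement route depends on the smoothness of the deflection profile, which matches the grid sensitivities reported in the abstract.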

  14. Proportion of U.S. Civilian Population Ineligible for U.S. Air Force Enlistment Based on Current and Previous Weight Standards

    National Research Council Canada - National Science Library

    D'Mello, Tiffany A; Yamane, Grover K

    2007-01-01

    .... Until recently, gender-specific weight standards based on height were in place. However, in June 2006 the USAF implemented a new set of height-weight limits utilizing body mass index (BMI) criteria...
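For context, a minimal sketch of a BMI-based eligibility screen of the kind described: BMI = weight / height². The cutoff values here are hypothetical placeholders, not the actual USAF accession limits.

```python
# Minimal sketch of a BMI-based eligibility screen; the cutoffs are
# hypothetical placeholders, not actual USAF height-weight limits.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight / height^2, in kg/m^2."""
    return weight_kg / height_m**2

def eligible(weight_kg: float, height_m: float,
             bmi_min: float = 17.5, bmi_max: float = 27.5) -> bool:
    """True if BMI falls inside the assumed accession window."""
    return bmi_min <= bmi(weight_kg, height_m) <= bmi_max

print(f"BMI = {bmi(80.0, 1.80):.1f} kg/m^2")  # ~24.7
print(eligible(80.0, 1.80))                   # True under the assumed cutoffs
```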

  15. Inductance calculation of submarine DC transmission line based on finite element analysis

    Directory of Open Access Journals (Sweden)

    WANG Qi

    2018-02-01

[Objectives] Because of the characteristics of submarine direct current (DC) transmission cables, traditional circuit inductance calculation methods do not fit the system, and there is a large error between the calculated value and the actual value. This paper studies the finite element method in order to reduce the calculation error. [Methods] The applicability of common line inductance calculation formulas to a submarine DC system is discussed first. Then a short-circuit testing system is set up, the inductance of the circuit in the system is measured, and a simulation model of the testing system is established. For comparison, the total inductance of the line is also calculated by finite element analysis in the ANSYS/Maxwell software. With these inductance values, the equivalent circuit model of the testing system is simulated in the Matlab/Simulink software, and the simulated short-circuit current waveforms are analyzed and compared with the measured waveform. [Results] The results show that the finite element analysis method improves the accuracy of the calculated equivalent inductance of a submarine DC transmission line and reduces the error in DC power system transient analysis. [Conclusions] This achievement can provide support for further simulation model development and calculation method research.

  16. Method for stability analysis based on the Floquet theory and Vidyn calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ganander, Hans

    2005-03-01

This report presents activity 3.7 of the STEM project Aerobig and deals with the aeroelastic stability of the complete wind turbine structure in operation. As wind turbine sizes increase, dynamic couplings become more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy by using optimisation methods lowers design margins, which in turn makes questions about the stability of the turbines more important. The main objective of the project is to develop a general stability analysis tool, based on the VIDYN methodology for the turbine dynamic equations and the Floquet theory for the stability analysis. The reason for selecting the Floquet theory is that it is independent of the number of blades, and thus can be used for 2- as well as 3-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of fatigue reduction also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed: the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. It is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version. Later versions would include state-space formulations of these forces, and likewise of the controllers of turbine rotation speed, yaw direction and pitch angle.

  17. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate a corresponding increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  18. Web-Based Simulation of Insurance Premium and Claims Calculation; Case Study: PT. Sinarmas Insurance, Padang

    OpenAIRE

    Rohendi, Keukeu; Putra, Ilham Eka

    2016-01-01

Sinarmas currently offers several featured insurance services. To perform its function as a good insurance company, there is a need for reform of its services: the calculation of insurance premiums is carried out by marketing staff using a calculator, which interferes with marketing activities; printing of insurance policies is slow; the automobile claims process requires the customer to come to the ASM office; printing of Work Orders (SPK) is slow; and it is difficult to recap custome...

  19. On the calculation of the topographic wetness index: evaluation of different methods based on field observations

    Directory of Open Access Journals (Sweden)

    R. Sørensen

    2006-01-01

The topographic wetness index (TWI), ln(a/tanβ), which combines local upslope contributing area and slope, is commonly used to quantify topographic control on hydrological processes. Methods of computing this index differ primarily in the way the upslope contributing area is calculated. In this study we compared a number of calculation methods for TWI and evaluated them in terms of their correlation with the following measured variables: vascular plant species richness, soil pH, groundwater level, soil moisture, and a constructed wetness degree. The TWI was calculated by varying six parameters affecting the distribution of accumulated area among downslope cells and by varying the way the slope was calculated. All possible combinations of these parameters were calculated for two separate boreal forest sites in northern Sweden. We did not find a calculation method that performed best for all measured variables; rather the best methods seemed to be variable and site specific. However, we were able to identify some general characteristics of the best methods for different groups of measured variables. The results provide guiding principles for choosing the best method for estimating species richness, soil pH, groundwater level, and soil moisture by the TWI derived from digital elevation models.
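A minimal per-cell evaluation of the index itself is sketched below. The methods the study compares differ in how the upslope contributing area is routed; this sketch takes that area as a given input, and the example values are illustrative.

```python
import numpy as np

# Per-cell evaluation of the topographic wetness index TWI = ln(a / tan(beta)).
# The routing of the contributing area `a`, which the study evaluates, is
# assumed to have been done upstream of this function.

def twi(specific_catchment_area, slope_rad, eps=1e-6):
    """specific_catchment_area: upslope area per unit contour width (m);
    slope_rad: local slope angle in radians; `eps` guards flat cells."""
    a = np.asarray(specific_catchment_area, dtype=float)
    tan_b = np.maximum(np.tan(slope_rad), eps)
    return np.log(np.maximum(a, eps) / tan_b)

print(f"{twi(500.0, np.deg2rad(5.0)):.2f}")   # wetter: large area, gentle slope
print(f"{twi(20.0, np.deg2rad(25.0)):.2f}")   # drier: small area, steep slope
```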

  20. Understanding the interfacial properties of graphene-based materials/BiOI heterostructures by DFT calculations

    International Nuclear Information System (INIS)

    Dai, Wen-Wu; Zhao, Zong-Yan

    2017-01-01

Highlights: • Heterostructure construction is an effective way to enhance photocatalytic performance. • Graphene-like materials and BiOI were in contact and formed van der Waals heterostructures. • Band edge positions of GO/g-C3N4 and BiOI changed to form a standard type-II heterojunction. • 2D materials can promote the separation of photo-generated electron-hole pairs in BiOI. - Abstract: Heterostructure construction is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and couple the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of the conduction band. To address this problem, it is meaningful to find materials that possess a suitable band gap, proper band edge position, and high carrier mobility to combine with BiOI to form a heterostructure. In this study, graphene-based materials (including graphene, graphene oxide, and g-C3N4) were chosen as candidates to achieve this purpose. The charge transfer, interface interaction, and band offsets are analyzed in detail by DFT calculations. Results indicated that graphene-based materials and BiOI were in contact and formed van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C3N4 and BiOI changed with the Fermi level and formed a standard type-II heterojunction. In addition, the overall analysis of charge density difference, Mulliken population, and band offsets indicated that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, combining BiOI with 2D materials to construct heterostructures not only makes use of their unique high electron mobility, but also can adjust the position of energy bands and promote the separation of

  2. Influence of previous provisional cementation on the bond strength between two definitive resin-based luting and dentin bonding agents and human dentin.

    Science.gov (United States)

    Erkut, Selim; Küçükesmen, Hakki Cenker; Eminkahyagil, Neslihan; Imirzalioglu, Pervin; Karabulut, Erdem

    2007-01-01

This study evaluated the effect of two different types of provisional luting agents (RelyX Temp E, eugenol-based; RelyX Temp NE, eugenol-free) on the shear bond strengths between human dentin and two different resin-based luting systems (RelyX ARC-Single Bond and Duo Link-One Step) after cementation with two different techniques (dual bonding and conventional technique). One hundred human molars were trimmed parallel to the original long axis, to expose flat dentin surfaces, and were divided into three groups. After related surface treatments for each specimen, the resin-based luting agent was applied in a silicone cylindrical mold (3.5 x 4 mm), placed on the bonding-agent-treated dentin surfaces and polymerized. In the control group (n = 20), the specimens were further divided into two groups (n = 10), and two different resin-based luting systems were immediately applied following the manufacturer's protocols: RelyX ARC-Single Bond (Group I C) and Duo Link-One Step (Group II C). In the provisionalization group (n = 40), the specimens were further divided into four subgroups of 10 specimens each (Group I N, I E and Group II N, II E). In Groups I N and II N, eugenol-free (RelyX NE), and in groups I E and II E, eugenol-based (RelyX E) provisional luting agents (PLA), were applied on the dentin surface. The dentin surfaces were cleaned with a fluoride-free pumice, and the resin-based luting systems RelyX ARC (Group I N and E) and Duo Link (Group II N and E) were applied. In the dual bonding group (n = 40), the specimens were divided into four subgroups of 10 specimens each (Group I ND, ED and Group II ND, ED). The specimens were treated with Single Bond (Groups I ND and ED) or One Step (Groups II ND and ED). After the dentin bonding agent treatment, RelyX Temp NE was applied to Groups I ND and II ND, and RelyX Temp E was applied to Groups I ED and II ED. The dentin surfaces were then cleaned as described in the provisionalization group, and the resin-based luting systems

  3. A kernel-based dose calculation algorithm for kV photon beams with explicit handling of energy and material dependencies.

    Science.gov (United States)

    Reinhart, Anna Merle; Fast, Martin F; Ziegenhein, Peter; Nill, Simeon; Oelfke, Uwe

    2017-01-01

Mimicking state-of-the-art patient radiotherapy with high-precision irradiators for small animals is expected to advance the understanding of dose-effect relationships and radiobiology in general. We work on the implementation of intensity-modulated radiotherapy-like irradiation schemes for small animals. As a first step, we present a fast analytical dose calculation algorithm for keV photon beams. We follow a superposition-convolution approach adapted to kV X-rays, based on previous work for microbeam therapy. We assume local energy deposition at the photon interaction point due to the short electron ranges in tissue. This allows us to separate the dose calculation into locally absorbed primary dose and the scatter contribution, calculated in a point kernel approach. We validate our dose model against Geant4 Monte Carlo (MC) simulations and compare the results to Muriplan (XStrahl Ltd, Camberley, UK). For field sizes of (1 mm)² to (1 cm)² in water, the depth dose curves show a mean disagreement of 1.7% with MC simulations, with the largest deviations in the entrance region (4%) and at large depths (5% at 7 cm). Larger discrepancies are observed at water-to-bone boundaries, in bone and at the beam edges in slab phantoms and a mouse brain. Calculation times are in the order of 5 s for a single beam. The algorithm shows good agreement with MC simulations in an initial validation. It has the potential to become an alternative to full MC dose calculation. Advances in knowledge: The presented algorithm demonstrates the potential of kernel-based dose calculation for kV photon beams. It will be valuable in intensity-modulated radiotherapy and inverse treatment planning for high-precision small-animal radiotherapy.

  4. X-ray tube output based calculation of patient entrance surface dose: validation of the method

    Energy Technology Data Exchange (ETDEWEB)

    Harju, O.; Toivonen, M.; Tapiovaara, M.; Parviainen, T. [Radiation and Nuclear Safety Authority, Helsinki (Finland)

    2003-06-01

X-ray departments need methods to monitor the doses delivered to patients in order to be able to compare their dose levels to established reference levels. For this purpose, patient dose per radiograph is described in terms of the entrance surface dose (ESD) or dose-area product (DAP). The actual measurement is often made by using a DAP meter or thermoluminescent dosimeters (TLD). The third possibility, the calculation of ESD from the examination technique factors, is likely to be a common method for x-ray departments that do not have the other methods at their disposal, or for examinations where the dose may be too low to be measured by the other means (e.g. chest radiography). We have developed a program for the determination of ESD by the calculation method and analysed the accuracy that can be achieved by this indirect method. The program calculates the ESD from the current-time product, x-ray tube voltage, beam filtration and focus-to-skin distance (FSD). Additionally, for calibrating the dose calculation method and thereby improving the accuracy of the calculation, the x-ray tube output should be measured for at least one x-ray tube voltage value in each x-ray unit. The aim of the present work is to point out the restrictions of the method and the details of its practical application. The first experiences from the use of the method are summarised. (orig.)

  5. Python-based framework for coupled MC-TH reactor calculations

    International Nuclear Information System (INIS)

    Travleev, A.A.; Molitor, R.; Sanchez, V.

    2013-01-01

We have developed a set of Python packages to provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be classified by their functionality into three groups: low-level interfaces, general model classes and high-level interfaces. A low-level interface describes an interface between Python and a particular code. General model classes can be used to describe the calculation geometry and the meshes used to represent system variables. High-level interface classes are used to convert geometry described with general model classes into instances of low-level interface classes, and to put the results of code calculations (read by the low-level interface classes) back into the general model. The implemented Python interfaces to the Monte Carlo neutronics code MCNP and the thermo-hydraulic code SCF allow efficient description of calculation models and provide a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces
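The three-layer structure described above can be sketched schematically as follows. The class and method names are illustrative inventions, not the actual API of the packages, and the "run" step returns fabricated data so the sketch stays runnable.

```python
# Schematic of the three-layer design described above; names are illustrative.

class McnpInterface:
    """Low-level interface: writes input for, runs, and reads one code."""
    def run(self, input_text: str) -> dict:
        # A real implementation would launch MCNP and parse its output;
        # fabricated tallies are returned here to keep the sketch runnable.
        return {"power_per_cell": [1.0, 1.2, 0.9]}

class PinModel:
    """General model: code-agnostic geometry and mesh description."""
    def __init__(self, radius_cm: float, n_axial: int):
        self.radius_cm = radius_cm
        self.n_axial = n_axial
        self.power = [0.0] * n_axial      # system variable on the mesh

class NeutronicsDriver:
    """High-level interface: maps the general model onto a low-level code
    and writes the results back into the model."""
    def __init__(self, code: McnpInterface):
        self.code = code

    def solve(self, model: PinModel) -> None:
        deck = f"pin r={model.radius_cm} nz={model.n_axial}"  # placeholder deck
        model.power = self.code.run(deck)["power_per_cell"]

pin = PinModel(radius_cm=0.475, n_axial=3)
NeutronicsDriver(McnpInterface()).solve(pin)
print(pin.power)
```

The point of the layering is that a thermal-hydraulics driver with the same shape could read `pin.power` and write temperatures back, giving the coupled MC-TH loop without either code knowing about the other.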

  6. Diagnostic Accuracy of Robot-Guided, Software Based Transperineal MRI/TRUS Fusion Biopsy of the Prostate in a High Risk Population of Previously Biopsy Negative Men.

    Science.gov (United States)

    Kroenig, Malte; Schaal, Kathrin; Benndorf, Matthias; Soschynski, Martin; Lenz, Philipp; Krauss, Tobias; Drendel, Vanessa; Kayser, Gian; Kurz, Philipp; Werner, Martin; Wetterauer, Ulrich; Schultze-Seemann, Wolfgang; Langer, Mathias; Jilg, Cordula A

    2016-01-01

Objective. In this study, we compared prostate cancer detection rates between MRI-TRUS fusion targeted and systematic biopsies using a robot-guided, software based transperineal approach. Methods and Patients. 52 patients received an MRI/TRUS fusion targeted biopsy followed by a systematic volume adapted biopsy using the same robot-guided transperineal approach. The primary outcome was the detection rate of clinically significant disease (Gleason grade ≥ 4). Secondary outcomes were detection rate of all cancers, sampling efficiency and utility, and serious adverse event rate. Patients received no antibiotic prophylaxis. Results. From 52 patients, 519 targeted biopsies from 135 lesions and 1561 random biopsies were generated (total n = 2080). Overall detection rate of clinically significant PCa was 44.2% (23/52) and 50.0% (26/52) for target and random biopsy, respectively. Sampling efficiency, as the median number of cores needed to detect clinically significant prostate cancer, was 9 for target (IQR: 6-14.0) and 32 (IQR: 24-32) for random biopsy. The utility, as the number of additionally detected clinically significant PCa cases by either strategy, was 0% (0/52) for target and 3.9% (2/52) for random biopsy. Conclusions. MRI/TRUS fusion based target biopsy did not show an advantage in the overall detection rate of clinically significant prostate cancer.

  7. Multiclass pesticide analysis in fruit-based baby food: A comparative study of sample preparation techniques previous to gas chromatography-mass spectrometry.

    Science.gov (United States)

    Petrarca, Mateus H; Fernandes, José O; Godoy, Helena T; Cunha, Sara C

    2016-12-01

With the aim of developing a new gas chromatography-mass spectrometry method to analyze 24 pesticide residues in baby foods at the levels imposed by established regulations, two simple, rapid and environmentally friendly sample preparation techniques based on QuEChERS (quick, easy, cheap, effective, robust and safe) were compared: QuEChERS with dispersive liquid-liquid microextraction (DLLME) and QuEChERS with dispersive solid-phase extraction (d-SPE). Both sample preparation techniques achieved suitable performance criteria, including selectivity, linearity, acceptable recovery (70-120%) and precision (⩽20%). A higher enrichment factor was observed for DLLME, and consequently better limits of detection and quantification were obtained. Nevertheless, d-SPE provided a more effective removal of matrix co-extractives from extracts than DLLME, which contributed to lower matrix effects. Twenty-two commercial fruit-based baby food samples were analyzed by the developed method, with procymidone detected in one sample at a level above the legal limit established by the EU. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Evaluation of allocation methods for calculation of carbon footprint of grass-based dairy production.

    Science.gov (United States)

    Rice, P; O'Brien, D; Shalloo, L; Holden, N M

    2017-11-01

A major methodological issue for life cycle assessment, commonly used to quantify greenhouse gas emissions from livestock systems, is allocation from multifunctional processes. When a process produces more than one output, the environmental burden has to be assigned between the outputs, such as milk and meat from a dairy cow. In the absence of an objective function for choosing an allocation method, a decision must be made considering a range of factors, one of which is the availability and quality of the necessary data. The objective of this study was to evaluate allocation methods for calculating the climate change impact of the economically average (€/ha) dairy farm in Ireland, considering both milk and meat outputs and focusing specifically on the pedigree of the available data for each method. The methods were: economic, energy, protein, emergy, mass of liveweight, mass of carcass weight and physical causality. The data quality for each method was expressed using a pedigree score based on reliability of the source, completeness, temporal applicability, geographical alignment and technological appropriateness. Scenario analysis was used to compare the normalised impact per functional unit (FU) from the different allocation methods, between the best and worst third of farms (in economic terms, €/ha) in the national farm survey. For the average farm, the allocation factors for milk ranged from 75% (physical causality) to 89% (mass of carcass weight), which in turn resulted in an impact per FU from 1.04 to 1.22 kg CO2-eq/kg (fat and protein corrected milk). Pedigree scores ranged from 6.0 to 17.1, with protein and economic allocation having the best pedigree. It was concluded that when making the choice of allocation method, the quality of the available data (pedigree) should be given greater emphasis during the decision-making process, because of the effect of allocation on the results. A range of allocation methods could be deployed to understand the uncertainty
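The arithmetic by which an allocation factor propagates to the footprint per functional unit can be made explicit with a short worked example. Only the 75-89% allocation range comes from the abstract; the whole-farm totals and the intermediate economic factor below are assumed for illustration.

```python
# Worked example: allocation factor -> footprint per functional unit.
# Only the 75% and 89% factors come from the abstract; the farm totals and
# the 84% economic factor are assumed for illustration.

total_emissions_kg_co2e = 500_000.0   # assumed whole-farm annual emissions
milk_output_kg_fpcm = 350_000.0       # assumed fat-and-protein-corrected milk

for method, milk_share in [("physical causality", 0.75),
                           ("economic (assumed)", 0.84),
                           ("mass of carcass weight", 0.89)]:
    footprint = milk_share * total_emissions_kg_co2e / milk_output_kg_fpcm
    print(f"{method:<24} {footprint:.2f} kg CO2-eq per kg FPCM")
```

The spread between methods (here roughly 1.07 to 1.27 kg CO2-eq/kg under the assumed totals) is of the same order as the 1.04-1.22 range the abstract reports, which is why the choice of allocation method matters.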

  9. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

The parallel burnup calculation program MCBMPI was developed. The program was modularized: the parallel MCNP5 program MCNP5MPI was employed as the neutron transport calculation module, and a composite of three solution methods was used to solve the burnup equations, i.e. the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI parallel zone decomposition strategy was included in the program. The program system consists only of MCNP5MPI and a burnup subroutine. The latter achieves three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program is applicable to burnup calculations of multiple zones, and that the computation efficiency could be significantly improved with the development of computer hardware. (authors)

  10. SU-E-T-632: Preliminary Study On Treating Nose Skin Using Energy and Intensity Modulated Electron Beams with Monte Carlo Based Dose Calculations

    International Nuclear Information System (INIS)

    Jin, L; Eldib, A; Li, J; Price, R; Ma, C

    2015-01-01

Purpose: Uneven nose surfaces and underlying air cavities, together with the use of bolus, introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatments of nose skin with a pencil beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy and intensity modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as an input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also shows the comparison of 2D dose distributions between a clinically used conventional single electron energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage as compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher normal tissue dose underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to the air gaps that may exist between the bolus and skin

  11. Quantitative and qualitative investigation of the fuel utilization and introducing a novel calculation idea based on transfer phenomena in a polymer electrolyte membrane (PEM) fuel cell

    International Nuclear Information System (INIS)

    Yousefkhani, M. Baghban; Ghadamian, H.; Massoudi, A.; Aminy, M.

    2017-01-01

Highlights: • Investigation of fuel utilization in a PEMFC within a transfer phenomena approach. • The main defect of the theoretical calculation of U_F depends on the Nernst equation. • U_F has a differential nature, so differential equations are employed in the theoretical calculation. - Abstract: In this study, the fuel utilization (U_F) of a PEMFC has been investigated within a transfer phenomena approach. The description of U_F and the measurement of fuel consumption are the main factors in obtaining U_F. The differences between experimental results and theoretical calculations in previous research articles reveal that the available theoretical equations should be studied further, based on the fundamentals of U_F. Hence, a substantial issue is that the description of U_F must satisfy these principles, so that it can then be validated by experimental results. The results of this study indicate that U_F and power grew by 1.1% and 1%, respectively, per one degree of temperature increase. In addition, for every 1 kPa pressure increment, U_F improved considerably, by 0.25% and 0.173% at 40 °C and 80 °C, respectively. Furthermore, at constant temperature, the power improved by 22% per one atmosphere of pressure increase. The results of this research show that U_F has a differential nature; therefore differential equations must be employed for an accurate theoretical calculation. Accordingly, it seems that the main defect of the theoretical calculation depends on the Nernst equation, which can be modified by a coefficient of differential nature.
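The quantity at stake has a simple definition: U_F is the ratio of hydrogen consumed to hydrogen supplied, with the consumed flow fixed by the stack current through Faraday's law (each H2 molecule delivers two electrons per cell). A minimal sketch follows; the stack current, cell count, and stoichiometry are illustrative, and this steady-state form is the baseline the paper's differential treatment refines.

```python
# Minimal steady-state fuel-utilization calculation for a PEM stack;
# operating numbers are illustrative.

F = 96485.0  # Faraday constant, C/mol

def fuel_utilization(current_a: float, n_cells: int,
                     h2_supply_mol_s: float) -> float:
    """U_F = H2 consumed / H2 supplied; consumption follows Faraday's law,
    with 2 electrons per H2 molecule per cell."""
    h2_consumed_mol_s = current_a * n_cells / (2.0 * F)
    return h2_consumed_mol_s / h2_supply_mol_s

# e.g. 50 A, 40 cells, hydrogen fed at 1.2x the stoichiometric rate
consumed = 50.0 * 40 / (2.0 * F)
print(f"U_F = {fuel_utilization(50.0, 40, 1.2 * consumed):.3f}")  # ~0.833
```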

  12. Optimum design calculations for detectors based on ZnSe(Te,O) scintillators

    International Nuclear Information System (INIS)

    Katrunov, K.; Ryzhikov, V.; Gavrilyuk, V.; Naydenov, S.; Lysetska, O.; Litichevskyi, V.

    2013-01-01

    Light collection in scintillators ZnSe(X), where X is an isovalent dopant, was studied using Monte Carlo calculations. Optimum design was determined for detectors of “scintillator—Si-photodiode” type, which can involve either one scintillation element or scintillation layers of large area made of small-crystalline grains. The calculations were carried out both for determination of the optimum scintillator shape and for design optimization of light guides, on the surface of which the layer of small-crystalline grains is formed

  13. Dose rate distribution calculation of elaborate head phantom for BNCT based on repeated structure card

    International Nuclear Information System (INIS)

    Li Xiaohua; Yu Tao; Xue Qing

    2009-01-01

Because of the coarse character of the head phantom geometry previously adopted in BNCT for glioma treatment, the head is filled using the Universe and Fill cards of the MCNP code, and a detailed description of the head phantom is accomplished in this paper. Then, dose distribution calculations in the head, with and without injected boron, are carried out with fast, super-thermal and thermal neutrons respectively. Finally, the curve of dose rate versus depth in the head is obtained. The calculation result is consistent with the related reference report, which proves that the elaborate head phantom constructed in this paper is correct. (authors)

  14. Research on Displacement Calculation of Dynamometer Card Based on Kalman Filter and Discrete Numerical Integration

    Directory of Open Access Journals (Sweden)

    Yan Nan

    2017-01-01

In order to calculate the dynamometer card of an oil well using an acceleration sensor, an algorithm combining a Kalman filter with discrete numerical integration is proposed. It can be applied to calculate the displacement, and to extract the displacement period, of an oil well dynamometer card. The Kalman filter not only filters out the noise of the acceleration signal, but also maintains its original shape features. The accurate extraction of the displacement period ensures the correctness of the displacement. The discrete numerical integration algorithm keeps the relative error of the displacement measurement below 1%, which meets the requirement for dynamometer card accuracy. It is suitable for different types of oil wells.
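A minimal sketch of the pipeline described above: a scalar random-walk Kalman filter smooths the noisy acceleration, and two trapezoidal integrations recover velocity and displacement. The signal shape and noise levels are fabricated, and the paper's period-extraction step is omitted.

```python
import numpy as np

# Acceleration -> (Kalman smoothing) -> velocity -> displacement.
# Signal and noise levels fabricated; period extraction omitted.

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 4.0, dt)                      # one assumed 4 s pumping cycle
w = 2 * np.pi / 4.0                              # stroke frequency, rad/s
acc_true = -w**2 * np.cos(w * t)                 # accel. of s(t) = cos(w t) - 1
acc_meas = acc_true + rng.normal(0.0, 0.05, t.size)

# Scalar Kalman filter with a random-walk model for the acceleration.
x, p, q, r = 0.0, 1.0, 1e-4, 0.05**2
acc_filt = np.empty_like(acc_meas)
for i, z in enumerate(acc_meas):
    p += q                                       # predict step
    k = p / (p + r)                              # Kalman gain
    x += k * (z - x)                             # measurement update
    p *= 1.0 - k
    acc_filt[i] = x

# Double trapezoidal integration (initial velocity assumed zero).
vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc_filt[1:] + acc_filt[:-1]) * dt)))
disp = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
print(f"estimated stroke: {disp.max() - disp.min():.2f} m (true 2.00 m)")
```

The filter's shape-preserving property matters here because any bias it introduced in the acceleration would grow quadratically through the double integration.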

  15. A PC-based program for routine calculation of monitoring units in radiotherapy

    International Nuclear Information System (INIS)

    Kenny, M.B.

    1992-01-01

The last step in the planning procedure for a megavoltage radiotherapy treatment is the calculation of the number of monitor units to be set. This may well be provided for in the planning system, but however it is done, it is essential that the final result be independently double-checked, either in planning or at the point of treatment. In an attempt to reduce the workload associated with manual checking, and to increase the independence of the checking process, a computer program is presented which performs the monitor unit calculation for both x-ray and electron beams. 2 figs
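A hand calculation of the kind such a checking program automates might look as follows. The factor names follow common convention and the numbers are illustrative; clinical formulas include further factors (SSD corrections, wedge and tray factors, and so on) that are omitted here.

```python
# Simplified monitor-unit hand calculation; names follow common convention
# and all numbers are illustrative, not a complete clinical formula.

def monitor_units(dose_cgy: float, output_cgy_per_mu: float,
                  field_output_factor: float, pdd_percent: float) -> float:
    """MU = prescribed dose / (reference output * output factor * PDD/100)."""
    return dose_cgy / (output_cgy_per_mu * field_output_factor
                       * pdd_percent / 100.0)

# 200 cGy prescribed at depth, 1 cGy/MU reference output, OF 0.985, PDD 86.4%
print(f"{monitor_units(200.0, 1.0, 0.985, 86.4):.0f} MU")  # ~235 MU
```

An independent check recomputes this number from the treatment parameters alone, so a transcription or planning-system error shows up as a disagreement between the two MU values.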

  16. MVP Based Calculation of Reactivity Loss Due to Gemstone Irradiation Facility of Thai Research Reactor

    International Nuclear Information System (INIS)

    Kajornrith, Varavuth; Konduangkaeo, Areeratt

    2007-08-01

Full text: The calculation of the initial core criticality of the Thai Research Reactor-1/Modification 1 was performed with the continuous energy Monte Carlo code MVP and material cross-sections from the JENDL-3.3 continuous-energy library. The model was then extended to include the gemstone irradiation facility in order to calculate the magnitude of the reactivity loss. The results showed that the total reactivity worth of the control system was 10.83. The reactivity effect associated with the insertion of the gemstone irradiation facility was about -0.43% δk/k

  17. Survey to determine the efficacy and safety of guideline-based pharmacological therapy for chronic obstructive pulmonary disease patients not previously receiving maintenance treatment.

    Science.gov (United States)

    Setoguchi, Yasuhiro; Izumi, Shinyu; Nakamura, Hidenori; Hanada, Shigeo; Marumo, Kazuyoshi; Kurosaki, Atsuko; Akata, Shouichi

    2015-01-01

To investigate the potential beneficial effects of guideline-based pharmacological therapy on pulmonary function and quality of life (QOL) in Japanese chronic obstructive pulmonary disease (COPD) patients without prior treatment. Multicenter, open-label survey study of 49 Japanese COPD patients aged ≥ 40 years: outpatients with >10 pack-years of smoking history and a ratio of forced expiratory volume in 1 s (FEV1) to forced vital capacity (FVC) below 0.7. Significant changes over time were not observed for FEV1 and FVC, indicating that lung function at initiation of treatment was maintained during the observation period. COPD assessment test scores showed statistical and clinical improvements. Cough, sputum, breathlessness, and shortness of breath were significantly improved. Lung function and QOL of untreated Japanese COPD patients improved, and the improvements were maintained, by performing a therapeutic intervention that conformed to published guidelines.

  18. Los Alamos benchmarks: calculations based on ENDF/B-V data

    International Nuclear Information System (INIS)

    Kidman, R.B.

    1981-11-01

    The new and revised benchmark specifications for nine Los Alamos National Laboratory critical assemblies are used to compute the entire set of parameters that were measured in the experiments. A comparison between the computed and experimental values provides a measure of the adequacy of the specifications, cross sections, and physics codes used in the calculations

  19. Development of a risk-based mine closure cost calculation model

    CSIR Research Space (South Africa)

    Du

    2006-06-01

The study summarised in this paper focused on expanding existing South African mine closure cost calculation models to provide a new model that incorporates risks, which could have an effect on the closure costs during the life cycle of the mine...

  20. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals

    Czech Academy of Sciences Publication Activity Database

    Červinka, C.; Fulem, Michal; Růžička, K.

    2016-01-01

Vol. 144, No. 6 (2016), 1-15, article No. 064505. ISSN 0021-9606. Institutional support: RVO:68378271. Keywords: density-functional theory * organic oxygen compounds * quantum-mechanical calculations. Subject RIV: BJ - Thermodynamics. Impact factor: 2.965, year: 2016

  1. Performance of SOPPA-based methods in the calculation of vertical excitation energies and oscillator strengths

    DEFF Research Database (Denmark)

    Sauer, Stephan P. A.; Pitzner-Frydendahl, Henrik Frank; Buse, Mogens

    2015-01-01

The performance of three SOPPA-based methods (the original SOPPA method as well as SOPPA(CCSD) and RPA(D)) in the calculation of vertical electronic excitation energies and oscillator strengths is investigated for a large benchmark set of 28 medium-size molecules with 139 singlet and 71 triplet excited states. The results are compared

  2. The prevalence of adult congenital heart disease, results from a systematic review and evidence based calculation

    NARCIS (Netherlands)

    van der Bom, Teun; Bouma, Berto J.; Meijboom, Folkert J.; Zwinderman, Aeilko H.; Mulder, Barbara J. M.

    2012-01-01

Purpose: The prevalence of adult patients with congenital heart disease (CHD) has been reported with a high degree of variability. Prevalence estimates have been calculated using birth rate, birth prevalence, and assumed survival, and derived from large administrative databases. To report more robust

  3. A web-based calculator for estimating the profit potential of grain segregation by protein concentration

    Science.gov (United States)

    By ignoring spatial variability in grain quality, conventional harvesting systems may increase the likelihood that growers will not capture price premiums for high quality grain found within fields. The Grain Segregation Profit Calculator was developed to demonstrate the profit potential of segregat...

  4. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

Data from follow-up of a Faroese birth cohort were used. Serum PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...

  5. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    Science.gov (United States)

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
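To make the quantity being GPU-accelerated concrete, here is a naive 1-D gamma-index evaluation under an assumed global 3%/3 mm criterion. Clinical implementations work in 3-D with pruned searches (hence the GPU); the profiles below are fabricated.

```python
import numpy as np

# Naive 1-D gamma-index evaluation; assumed global 3%/3 mm criterion,
# fabricated dose profiles. Real tools do this in 3-D with search pruning.

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
    """For each evaluated point, minimise over all reference points the
    combined dose-difference / distance-to-agreement metric; gamma <= 1 passes."""
    gammas = np.empty_like(d_eval)
    norm = dose_tol * d_ref.max()                 # global 3% dose criterion
    for i, xe in enumerate(x_eval):
        dose_term = (d_eval[i] - d_ref) / norm
        dist_term = (xe - x_ref) / dta_mm
        gammas[i] = np.sqrt(dist_term**2 + dose_term**2).min()
    return gammas

x = np.linspace(0.0, 100.0, 501)                  # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # reference profile
evald = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)  # slightly shifted and scaled
g = gamma_1d(x, ref, x, evald)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```

The exhaustive minimisation over reference points is embarrassingly parallel per evaluated voxel, which is why moving it to a GPU brings the 3-D evaluation down to the ~1 s reported above.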

  6. New Diagnosis of AIDS Based on Salmonella enterica subsp. I (enterica) Enteritidis (A) Meningitis in a Previously Immunocompetent Adult in the United States

    Directory of Open Access Journals (Sweden)

    Andrew C. Elton

    2017-01-01

Salmonella meningitis is a rare manifestation of meningitis, typically presenting in neonates and the elderly. This infection typically associates with foodborne outbreaks in developing nations and AIDS-endemic regions. We report a case of a 19-year-old male presenting with altered mental status after a 3-day absence from work at a Wisconsin tourist area. He was febrile, tachycardic, and tachypneic, with a GCS of 8. The patient was intubated and a presumptive diagnosis of meningitis was made. Treatment was initiated with ceftriaxone, vancomycin, acyclovir, dexamethasone, and fluid resuscitation. A lumbar puncture showed cloudy CSF with Gram-negative rods. He was admitted to the ICU. CSF culture confirmed Salmonella enterica subsp. I (enterica) Enteritidis (A). Based on this finding, a 4th-generation HIV antibody/p24 antigen test was sent. When this returned positive, a CD4 count was obtained and showed 3 cells/mm3, confirming AIDS. The patient ultimately received 38 days of ceftriaxone, was placed on elvitegravir, cobicistat, emtricitabine, and tenofovir alafenamide (Genvoya) for HIV/AIDS, and was discharged neurologically intact after a 44-day admission.

  7. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    International Nuclear Information System (INIS)

    Xu, H; Guerrero, M; Prado, K; Yi, B

    2016-01-01

Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to the applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs, which were then converted to air-gap factors for SSDs of 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The data set measured for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.

  8. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission...

    Science.gov (United States)

    2010-07-01

    ... HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206... POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for... Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for...

  9. A shock-layer theory based on thirteen-moment equations and DSMC calculations of rarefied hypersonic flows

    Science.gov (United States)

    Cheng, H. K.; Wong, Eric Y.; Dogra, V. K.

    1991-01-01

    Grad's thirteen-moment equations are applied to the flow behind a bow shock under the formalism of a thin shock layer. Comparison of this version of the theory with Direct Simulation Monte Carlo calculations of flows about a flat plate at finite attack angle has lent support to the approach as a useful extension of the continuum model for studying translational nonequilibrium in the shock layer. This paper reassesses the physical basis and limitations of the development with additional calculations and comparisons. The streamline correlation principle, which allows transformation of the 13-moment based system to one based on the Navier-Stokes equations, is extended to a three-dimensional formulation. The development yields a strip theory for planar lifting surfaces at finite incidences. Examples reveal that the lift-to-drag ratio is little influenced by planform geometry and varies with altitudes according to a 'bridging function' determined by correlated two-dimensional calculations.

  10. Calculations of reactivity based in the solution of the Neutron transport equation in X Y geometry and Lineal perturbation theory

    International Nuclear Information System (INIS)

    Valle G, E. del; Mugica R, C.A.

    2005-01-01

In our country, at previous congresses, Gomez et al. carried out reactivity calculations based on the solution of the diffusion equation for one energy group using nodal methods in one dimension and the TPL approach (Linear Perturbation Theory). Later on, Mugica extended the application to the multigroup case in both one and two dimensions (X Y geometry), with excellent results. In the present work similar calculations are carried out, but this time based on the solution of the neutron transport equation in X Y geometry, using nodal methods and again the TPL approximation. The idea is to provide a calculation method that allows the reactivity to be obtained quickly by solving the direct problem as well as the adjoint problem of the unperturbed system. A test problem is described for which results for the effective multiplication factor are provided, and some conclusions are offered. (Author)

  11. The use of approximation formulae in calculations of acid-base equilibria - I. Mono- and diprotic acids and bases.

    Science.gov (United States)

    Narasaki, H

    1979-07-01

    The pH of mono- and diprotic acids is calculated by use of approximation formulae and the theoretically exact equations. The regions for useful application of the approximation formulae (error <0.02 pH) have been identified.
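The comparison the paper performs can be illustrated for a single monoprotic case: the common approximation [H+] ≈ sqrt(Ka·C) against the exact positive root of the charge-balance cubic h³ + Ka·h² − (Kw + Ka·C)·h − Ka·Kw = 0. The Ka and concentration below are illustrative.

```python
import numpy as np

# Approximate vs exact pH for a monoprotic weak acid HA; Ka and C illustrative.

Ka, Kw, C = 1.8e-5, 1e-14, 0.01   # acetic-acid-like Ka, 0.01 M solution

# Approximation: [H+] = sqrt(Ka * C), i.e. pH = (pKa - log10 C) / 2
ph_approx = 0.5 * (-np.log10(Ka) - np.log10(C))

# Exact: charge and mass balance give h^3 + Ka h^2 - (Kw + Ka C) h - Ka Kw = 0
coeffs = [1.0, Ka, -(Kw + Ka * C), -Ka * Kw]
h = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-12 and r.real > 0)
ph_exact = -np.log10(h)

print(f"approximate pH {ph_approx:.3f}, exact pH {ph_exact:.3f}")
```

Here the two values differ by less than 0.02 pH, i.e. the approximation lies inside the paper's stated error criterion for this concentration; the exact root becomes necessary for very dilute or very weak acids.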

  12. Calculation of effective dose for external photon exposure based on ICRP new recommendations

    International Nuclear Information System (INIS)

    Yamaguchi, Y.; Yoshizawa, M.

    1992-01-01

The ICRP has adopted its new basic recommendations, in which the effective dose equivalent H_E has been renamed the effective dose E, and the tissue weighting factors w_T used in the calculation of H_E have also been changed. One of the most interesting problems in this context is how much the value of E differs from that of H_E. The values of H_E and E were calculated by the Monte Carlo method for external photon irradiations from 17 keV to 5.9 MeV in AP, PA and LAT geometries to quantify the difference between them. It was found from this comparison that E is smaller than H_E over the whole energy range of interest for these irradiation geometries, and that the ambient dose equivalent H*(10) leads to an overestimate of E, rather than merely a conservative one, when used as an operational quantity for external photons. (author)
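The underlying definition is the tissue-weighted sum E = Σ_T w_T · H_T over organ equivalent doses. The sketch below uses an illustrative, normalised subset of weights rather than the full ICRP 60 table, and fabricated organ doses.

```python
# Effective dose as the tissue-weighted sum E = sum_T w_T * H_T.
# The weights are an illustrative subset normalised to 1, not the full
# ICRP 60 table; the organ doses are fabricated.

w_T = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12, "colon": 0.12,
       "bone_marrow": 0.12, "remainder": 0.32}
assert abs(sum(w_T.values()) - 1.0) < 1e-12

def effective_dose(equivalent_dose_sv: dict) -> float:
    """Weighted sum over tissues; tissues missing from the input count as 0."""
    return sum(w * equivalent_dose_sv.get(t, 0.0) for t, w in w_T.items())

# Hypothetical AP photon irradiation giving per-organ equivalent doses (Sv)
H_T = {"gonads": 1.1e-3, "lung": 0.9e-3, "stomach": 1.0e-3,
       "colon": 1.0e-3, "bone_marrow": 0.8e-3, "remainder": 0.9e-3}
print(f"E = {effective_dose(H_T):.2e} Sv")
```

Changing the w_T table while keeping the same organ doses changes E directly, which is exactly why the ICRP revision raises the H_E versus E comparison the abstract investigates.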

  13. Calculation Scheme of Transformer Saturated Inductances based on Field Test Data

    Science.gov (United States)

    Nakachi, Yoshiki; Hatano, Ryosuke; Matsubara, Takumi; Uemura, Yoichi; Furukawa, Nobuhiko; Hirayama, Kaiichiro

In a small power system, such as during the black start of a power system, an overvoltage can be caused by core saturation on the energization of a transformer with residual flux. Such an overvoltage might damage equipment and delay power system restoration. Through an actual field test and EMTP simulations, we have found that such phenomena cannot be accurately simulated using a normal transformer model with inductance data measured during the factory test. This paper proposes a new calculation scheme for transformer inductances using actual field test data, in order to capture the saturated transformer characteristics accurately. We also show an analytical scheme for calculating the saturated inductances from the shunt currents in delta windings, which strongly influence the overvoltage under flux saturation.

  14. Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations

    DEFF Research Database (Denmark)

    Pettersen, E. E.; Demazire, C.; Jareteg, K.

    2015-01-01

This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one... The formulation of the equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real... stationary dynamic calculations, the presented method does not require any modification of the Monte Carlo code.

  15. Nuclear group constant set FUSION-J3 for fusion reactor nuclear calculations based on JENDL-3

    International Nuclear Information System (INIS)

    Maki, Koichi; Seki, Yasushi; Kosako, Kazuaki; Kawasaki, Hiromitsu.

    1991-03-01

Based on the evaluated nuclear data file JENDL-3, published in April 1990, we produced a nuclear group constant set, FUSION-J3, for fusion reactor nuclear calculations with the ANISN code, replacing GICX40, produced in 1977. FUSION-J3 is a coupled group constant set with a 125-group neutron and 40-group gamma-ray structure, and has a maximum order of 5 in the Legendre expansion of the scattering cross section. The forty nuclides included in FUSION-J3 can be used in fusion reactor nuclear calculations. Considering usability in two-dimensional calculations and the fixed group structure of the induced activity calculation code system (the GICX40 structure), we also composed the FUSION-40 group constant set, with a 42-group neutron and 21-group gamma-ray structure. FUSION-40 includes the same maximum order of the Legendre expansion and the same nuclides as FUSION-J3. From the results of experimental analyses and benchmark calculations, JENDL-3 proved to be more accurate than ENDF/B-IV and -V. The set FUSION-J3 is clearly applicable to fusion reactor nuclear calculations. (author)

  16. Calculations of the hurricane eye motion based on singularity propagation theory

    Directory of Open Access Journals (Sweden)

    Vladimir Danilov

    2002-02-01

We discuss the possibility of using singularity calculations to forecast the dynamics of hurricanes. Our basic model is the shallow-water system. By treating the hurricane eye as a vortex-type singularity and truncating the corresponding sequence of Hugoniot-type conditions, we carry out many numerical experiments. The comparison of our results with the tracks of three actual hurricanes shows that our approach is rather fruitful.

  17. QM/MM-Based Calculations of Absorption and Emission Spectra of LSSmOrange Variants.

    Science.gov (United States)

    Bergeler, Maike; Mizuno, Hideaki; Fron, Eduard; Harvey, Jeremy N

    2016-12-15

    The goal of this computational work is to gain new insight into the photochemistry of the fluorescent protein (FP) LSSmOrange. This FP is of interest because besides exhibiting the eponymous large spectral shift (LSS) between the absorption and emission energies, it has been experimentally observed that it can also undergo a photoconversion process, which leads to a change in the absorption wavelength of the chromophore (from 437 to 553 nm). There is strong experimental evidence that this photoconversion is caused by decarboxylation of a glutamate located in the close vicinity of the chromophore. Still, the exact chemical mechanism of the decarboxylation process as well as the precise understanding of structure-property relations in the measured absorption and emission spectra is not yet fully understood. Therefore, hybrid quantum mechanics/molecular mechanics (QM/MM) calculations are performed to model the absorption and emission spectra of the original and photoconverted forms of LSSmOrange. The necessary force-field parameters of the chromophore are optimized with CGenFF and the FFToolkit. A thorough analysis of QM methods to study the excitation energies of this specific FP chromophore has been carried out. Furthermore, the influence of the size of the QM region has been investigated. We found that QM/MM calculations performed with time-dependent density functional theory (CAM-B3LYP/D3/6-31G*) and QM calculations performed with the semiempirical ZIndo/S method including a polarizable continuum model can describe the excitation energies reasonably well. Moreover, already a small QM region size seems to be sufficient for the study of the photochemistry in LSSmOrange. Especially, the calculated ZIndo spectra are in very good agreement with the experimental ones. On the basis of the spectra obtained, we could verify the experimentally assigned structures.

  18. Health utility scores in Alzheimer's disease: differences based on calculation with American and Canadian preference weights.

    Science.gov (United States)

    Oremus, Mark; Tarride, Jean-Eric; Clayton, Natasha; Raina, Parminder

    2014-01-01

Health utility scores quantify health-related quality of life (HRQOL) in Alzheimer's disease (AD). These scores are calculated using preference weights derived from general population samples. We recruited persons with AD and their primary informal caregivers and examined differences in health utility scores calculated using two sets of published preference weights. We recruited participants from nine clinics across Canada and administered the EuroQol five-dimensional (EQ-5D) questionnaire HRQOL instrument. We converted participants' EQ-5D questionnaire responses into two sets of health utility scores by using US and Canadian preference weights. We assessed agreement between sets by using the intraclass correlation coefficient. Bland-Altman plots depicted individual-level differences between sets. For 216 persons with AD and their caregivers, mean health utility scores were higher when calculated with US instead of Canadian preference weights, in both the AD and the caregiver group. Ninety-five percent of the individual differences in utility score fell between -0.16 and 0.03 for persons with AD and -0.15 and 0.05 for caregivers. Forty-three percent of these differences exceeded a minimum clinically important threshold of 0.074. In AD studies, researchers should calculate health utility scores by using preference weights obtained in the general population of their country of interest. Using weights from other countries' populations could bias the utilities and adversely affect the results of economic evaluations of AD treatments. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.
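The mechanics behind the two score sets can be sketched as an additive tariff applied to an EQ-5D profile: the utility is 1 minus per-dimension decrements looked up in a country-specific table. The decrement tables below are fabricated and do not reproduce the actual US or Canadian value sets.

```python
# Sketch of why country-specific preference weights change EQ-5D utilities.
# Decrement tables are fabricated, not the US or Canadian tariffs.

DIMS = ("mobility", "self_care", "usual_activities", "pain", "anxiety")

# decrement[dim][level]; level 1 = no problems, 2 = some, 3 = extreme
US_LIKE = {d: {1: 0.0, 2: 0.07, 3: 0.20} for d in DIMS}
CA_LIKE = {d: {1: 0.0, 2: 0.09, 3: 0.26} for d in DIMS}

def utility(profile: dict, tariff: dict, constant: float = 0.0) -> float:
    """EQ-5D utility = 1 - fixed constant - sum of per-dimension decrements."""
    return 1.0 - constant - sum(tariff[d][lvl] for d, lvl in profile.items())

profile = {"mobility": 2, "self_care": 1, "usual_activities": 2,
           "pain": 2, "anxiety": 1}                  # a mild-impairment state
print(f"US-like weights: {utility(profile, US_LIKE):.3f}")
print(f"CA-like weights: {utility(profile, CA_LIKE):.3f}")
```

The same questionnaire responses produce different utilities under the two tariffs, which is exactly the individual-level gap the Bland-Altman analysis above quantifies.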

  19. Automated molecular simulation based binding affinity calculator for ligand-bound HIV-1 proteases.

    Science.gov (United States)

    Sadiq, S Kashif; Wright, David; Watson, Simon J; Zasada, Stefan J; Stoica, Ileana; Coveney, Peter V

    2008-09-01

    The successful application of high throughput molecular simulations to determine biochemical properties would be of great importance to the biomedical community if such simulations could be turned around in a clinically relevant timescale. An important example is the determination of antiretroviral inhibitor efficacy against varying strains of HIV through calculation of drug-protein binding affinities. We describe the Binding Affinity Calculator (BAC), a tool for the automated calculation of HIV-1 protease-ligand binding affinities. The tool employs fully atomistic molecular simulations alongside the well established molecular mechanics Poisson-Boltzmann solvent accessible surface area (MMPBSA) free energy methodology to enable the calculation of the binding free energy of several ligand-protease complexes, including all nine FDA approved inhibitors of HIV-1 protease and seven of the natural substrates cleaved by the protease. This enables the efficacy of these inhibitors to be ranked across several mutant strains of the protease relative to the wildtype. BAC is a tool that utilizes the power provided by a computational grid to automate all of the stages required to compute free energies of binding: model preparation, equilibration, simulation, postprocessing, and data-marshaling around the generally widely distributed compute resources utilized. Such automation enables the molecular dynamics methodology to be used in a high throughput manner not achievable by manual methods. This paper describes the architecture and workflow management of BAC and the function of each of its components. Given adequate compute resources, BAC can yield quantitative information regarding drug resistance at the molecular level within 96 h. Such a timescale is of direct clinical relevance and can assist in decision support for the assessment of patient-specific optimal drug treatment and the subsequent response to therapy for any given genotype.
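
    For orientation, the sketch below shows only the final arithmetic step of such a pipeline: combining MMPBSA energy components into a binding free energy. This is a hedged illustration, not BAC code; the class name, field names, and all numerical values are invented, and the entropy contribution is omitted for simplicity.

```python
# Illustrative MMPBSA combination step (not the BAC implementation):
# Delta G_bind = G(complex) - G(receptor) - G(ligand).
from dataclasses import dataclass

@dataclass
class MMPBSATerms:
    e_mm: float        # gas-phase molecular mechanics energy (kcal/mol)
    g_polar: float     # polar solvation energy from Poisson-Boltzmann (kcal/mol)
    g_nonpolar: float  # nonpolar solvation energy from the SASA term (kcal/mol)

    @property
    def total(self) -> float:
        return self.e_mm + self.g_polar + self.g_nonpolar

def binding_free_energy(complex_: MMPBSATerms, receptor: MMPBSATerms,
                        ligand: MMPBSATerms) -> float:
    """MMPBSA binding free energy (entropy term omitted for simplicity)."""
    return complex_.total - receptor.total - ligand.total

# Example with made-up ensemble-averaged values:
dg = binding_free_energy(MMPBSATerms(-8650.2, -3120.5, 45.1),
                         MMPBSATerms(-8100.7, -3010.9, 40.2),
                         MMPBSATerms(-510.3, -95.6, 3.8))
print(f"Delta G_bind = {dg:.1f} kcal/mol")
```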

  20. Analysis of the Relationship between Estimation Skills Based on Calculation and Number Sense of Prospective Classroom Teachers

    Science.gov (United States)

    Senol, Ali; Dündar, Sefa; Gündüz, Nazan

    2015-01-01

    The aim of this study are to examine the relationship between prospective classroom teachers' estimation skills based on calculation and their number sense and to investigate whether their number sense and estimation skills change according to their class level and gender. The participants of the study are 125 prospective classroom teachers…

  1. Reducing the Time Complexity and Identifying Ill-Posed Problem Instances of Minkowski Sum Based Similarity Calculations

    NARCIS (Netherlands)

    Bekker, H.; Brink, A.A.; Roerdink, J.B.T.M.

    2009-01-01

    To calculate the Minkowski-sum based similarity measure of two convex polyhedra, many relative orientations have to be considered. These relative orientations are characterized by the fact that some faces and edges of the polyhedra are parallel. For every relative orientation of the polyhedra, the

  2. Accurate pKa Calculation of the Conjugate Acids of Alkanolamines, Alkaloids and Nucleotide Bases by Quantum Chemical Methods

    NARCIS (Netherlands)

    Gangarapu, S.; Marcelis, A.T.M.; Zuilhof, H.

    2013-01-01

    The pKa of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum

  3. Accurate high level ab initio-based global potential energy surface and dynamics calculations for ground state of CH2(+).

    Science.gov (United States)

    Li, Y Q; Zhang, P Y; Han, K L

    2015-03-28

    A global many-body expansion potential energy surface is reported for the electronic ground state of CH₂⁺, obtained by fitting high-level ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pV6Z basis set. The topographical features of the new global potential energy surface are examined in detail and found to be in good agreement with those calculated directly from the raw ab initio energies, as well as with previous calculations available in the literature. In turn, in order to validate the potential energy surface, a test theoretical study of the reaction CH⁺(X¹Σ⁺) + H(²S) → C⁺(²P) + H₂(X¹Σg⁺) has been carried out with the time-dependent wavepacket method on the title potential energy surface. The total integral cross sections and the rate coefficients have been calculated; the results show that the new potential energy surface can be recommended both for dynamics studies of any type and as a building block for constructing the potential energy surfaces of larger C⁺/H containing systems.

  4. Basic Research about Calculation of the Decommissioning Unit Cost based on The KRR-2 Decommissioning Project

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Hee-Seong; Ha, Jea-Hyun; Jin, Hyung-Gon; Park, Seung-Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    KAERI calculates decommissioning costs and manages decommissioning experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). Some countries, such as Japan and the United States, have decommissioning experience with nuclear power plants (NPPs) and publish reports on decommissioning cost analysis; these reports are valuable data for comparison with decommissioning unit costs. Korea, in particular, needs a method to estimate NPP decommissioning costs because it has no NPP decommissioning experience of its own; such a method would make more precise predictions of the decommissioning unit cost possible. Still, there are many differences between domestic and foreign calculations of the decommissioning unit cost, and comparison is typically difficult because the published reports are not detailed. Therefore, the field of decommissioning cost estimation needs a unified framework so that accurate decommissioning costs can be provided.

  5. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    Science.gov (United States)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the year: ET ref was generally overestimated at small values and rather underestimated when ET was large, a pattern that is supported by other studies as well. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing the deviations.

  6. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    Science.gov (United States)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

    A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g⁽²⁾(r) and g⁽³⁾(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with the composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m⁻² is in good agreement with the value inferred by the precipitation model fit to experimental data. The proposed ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
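
    A minimal sketch of the central idea, not the authors' implementation: featurize a configuration with a histogram of pair distances (a stand-in for g⁽²⁾(r); the paper additionally concatenates the equilateral-triangle g⁽³⁾ histogram) and regress DFT energies on it. The toy configurations, the toy energy function, and the choice of kernel ridge regression are all assumptions for illustration.

```python
# Sketch: regress configurational energies on correlation-function histograms.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def feature_vector(positions: np.ndarray, r_bins: np.ndarray) -> np.ndarray:
    """Histogram of pair distances as a stand-in for g2(r); a fuller
    implementation would also concatenate the triplet g3 histogram."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(len(positions), k=1)]
    hist, _ = np.histogram(pair_d, bins=r_bins, density=True)
    return hist

# Synthetic training data: random 32-atom configurations with toy energies.
r_bins = np.linspace(0.0, 10.0, 21)
configs = [rng.uniform(0, 10, size=(32, 3)) for _ in range(200)]
X = np.array([feature_vector(c, r_bins) for c in configs])
y = np.array([np.sum(np.exp(-0.5 * np.linalg.norm(c - 5.0, axis=1)))
              for c in configs])  # invented energy model, for demo only

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0).fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```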

  7. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    International Nuclear Information System (INIS)

    Beres, D.A.; Hull, A.P.

    1991-12-01

    DEPDOSE is an interactive, menu-driven, microcomputer based program designed to rapidly calculate committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section; both are available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to provide the user easy access for reviewing data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer.
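
    A hedged sketch of the kind of arithmetic such a program performs for a single nuclide, not DEPDOSE itself: the committed dose is the time-integrated, decaying surface activity multiplied by a dose-rate conversion factor. The function name and all parameter values, including the conversion factor, are placeholders.

```python
# Committed dose from ground deposition, radioactive decay only (no weathering).
import math

def committed_dose(deposition_bq_m2: float, half_life_d: float,
                   dcf_sv_per_bq_s_m2: float, integration_d: float) -> float:
    """Dose (Sv) over `integration_d` days. The time integral of
    A0*exp(-lambda*t) from 0 to T is A0*(1 - exp(-lambda*T))/lambda."""
    lam = math.log(2.0) / (half_life_d * 86400.0)   # decay constant, 1/s
    t = integration_d * 86400.0                      # integration time, s
    time_integrated_activity = deposition_bq_m2 * (1.0 - math.exp(-lam * t)) / lam
    return time_integrated_activity * dcf_sv_per_bq_s_m2

# Example with illustrative numbers (the DCF is a placeholder, not a
# reference value): 1e4 Bq/m2 of a 30-year emitter, 50-year integration.
dose = committed_dose(1e4, 30 * 365.25, 5e-16, 50 * 365.25)
print(f"committed dose ~ {dose:.2e} Sv")
```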

  8. Alternate approach for calculating hardness based on residual indentation depth: Comparison with experiments

    Science.gov (United States)

    Ananthakrishna, G.; K, Srikanth

    2018-03-01

    It is well known that plastic deformation is a highly nonlinear, dissipative, irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by first computing the residual indentation depth and then taking the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach has the ability to adopt experimental parameters such as the indentation rates and the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths, and the model therefore avoids the divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that for a range of parameter values that physically represent different materials, the model predicts the three characteristic

  9. Comparison of Conductor-Temperature Calculations Based on Different Radial-Position-Temperature Detections for High-Voltage Power Cable

    Directory of Open Access Journals (Sweden)

    Lin Yang

    2018-01-01

    In this paper, the calculation of the conductor temperature is related to the temperature sensor position in high-voltage power cables, and four thermal circuits based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface are established to calculate the conductor temperature. To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath are built up, and thermocouples are placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures of the four positions. In the measurements, six cases of current heating tests under three laying environments (duct, water, and backfilled soil) were carried out. Both the errors of the conductor temperature calculation and of the simulation based on the temperature of the insulation shield were significantly smaller than the others under all laying environments. It is the uncertainty of the thermal resistivity, together with the difference in the initial temperature of each radial position caused by solar radiation, that led to the above results. The thermal capacitance of the air has little impact on the errors; the thermal resistance of the air gap is the largest error source. Balancing temperature-estimation accuracy against insulation-damage risk, the waterproof compound is the recommended sensor position for improving the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a recommended sensor position besides the waterproof compound.

  10. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Fei; Zhen, Zhao; Liu, Chun; Mi, Zengqiang; Hodge, Bri-Mathias; Shafie-khah, Miadreza; Catalão, João P. S.

    2018-02-01

    Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and it is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. Linear extrapolation-based ultra-short-term solar PV power forecasting approaches therefore depend on obtaining the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated with a Fourier phase correlation theory (FPCT) method from the frequency-domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) under common changes in cloud speed. Sometimes there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculations is exceedingly complex and difficult, because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-scale solar power forecasting. First, multiple different CMDVs are calculated from consecutive image pairs obtained through different synchronous rotation angles relative to the original images using the FPCT method. Second, the final CMDV is generated
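
    The FPCT core named above is a standard Fourier phase correlation; a minimal sketch is given below, assuming grayscale images and pure translation. The IPSI rotation averaging proposed in the paper is not reproduced here.

```python
# Phase correlation: the phase of the cross-power spectrum yields a
# delta-like peak whose coordinates give the displacement between images.
import numpy as np

def phase_correlation_shift(img1: np.ndarray, img2: np.ndarray):
    """Return (dy, dx) such that img1 ~ np.roll(img2, (dy, dx), axis=(0, 1))."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices above Nyquist to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Quick self-test: shift a random image by (7, -3) and recover the vector.
rng = np.random.default_rng(1)
a = rng.random((128, 128))
b = np.roll(a, shift=(7, -3), axis=(0, 1))
print(phase_correlation_shift(b, a))   # expected (7, -3)
```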

  11. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    Science.gov (United States)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane, at a distance of 1.2 Å. The results were compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI, and CTED, and were generally found to be in agreement with them. We therefore propose that the calculation of the average g-factor, Δg, can be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetic-based aromaticity index.

  12. Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object.

    Science.gov (United States)

    Kim, Hak Gu; Man Ro, Yong

    2017-11-27

    In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the hologram fringe pattern in each 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe patterns at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves 10-20 ms for 1024x1024 pixels while providing visually plausible results.
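
    A conceptual sketch of the sparse-template idea under stated assumptions (the patch size, wavelength, pixel pitch, and the zone-plate template itself are invented; the paper's template and layer handling may differ): precompute one small fringe patch per depth layer and accumulate shifted copies at the object points, instead of evaluating full-plane diffraction for every point.

```python
# Accumulate a small precomputed fringe template at each object point.
import numpy as np

N, patch = 1024, 128            # hologram size and sparse template size
wavelength, z = 532e-9, 0.1     # metres (illustrative values)
pitch = 8e-6                    # pixel pitch, metres

# Template: Fresnel zone plate phase exp(i*pi*(x^2+y^2)/(lambda*z)),
# truncated to a small support.
c = np.arange(patch) - patch // 2
X, Y = np.meshgrid(c * pitch, c * pitch)
template = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

hologram = np.zeros((N, N), dtype=complex)
points = [(200, 300, 1.0), (512, 512, 0.8), (700, 150, 0.5)]  # (row, col, amp)
for r, col, amp in points:
    r0, c0 = r - patch // 2, col - patch // 2
    hologram[r0:r0 + patch, c0:c0 + patch] += amp * template
fringe = np.angle(hologram)     # phase pattern of this layer
print(fringe.shape, fringe.dtype)
```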

  13. A Review of Solid-Solution Models of High-Entropy Alloys Based on Ab Initio Calculations

    Directory of Open Access Journals (Sweden)

    Fuyang Tian

    2017-11-01

    Similar to the importance of XRD in experiments, ab initio calculations are a powerful tool that has been applied to predict new potential materials and to investigate the intrinsic properties of materials in theory. As typical solid-solution materials, high-entropy alloys (HEAs) involve a large degree of configurational uncertainty, which makes the application of ab initio calculations to HEAs difficult. The present review focuses on the available ab initio based solid-solution models (virtual lattice approximation, coherent potential approximation, special quasirandom structure, similar local atomic environment, maximum-entropy method, and hybrid Monte Carlo/molecular dynamics) and their applications and limits in single-phase HEAs.

  14. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    Science.gov (United States)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2 and yielded strongly environment-dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE and the calculation of non-radiative exciton lifetimes, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by the Department of Energy under Contract No. DE-AC02-05CH11231 and by the National Science Foundation under Grant No. DMR10-1006184.

  15. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions that specify the underlying one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years that is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters.
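
    A hedged sketch of a one-compartment burden model of the kind described, not the report's FORTRAN code: a constant absorbed intake with first-order elimination has the closed-form solution B(t) = (U/λ)(1 − e^(−λt)). All parameter values below are placeholders.

```python
# One-compartment body burden under constant daily intake.
import math

def body_burden(ages, daily_intake_ug, absorbed_fraction=0.05,
                biological_half_life_y=20.0):
    """Return body burden (ug) at each age for a constant daily intake."""
    lam = math.log(2.0) / biological_half_life_y        # elimination, 1/year
    uptake_per_year = daily_intake_ug * 365.25 * absorbed_fraction
    # Constant input: B(t) = (U/lam) * (1 - exp(-lam*t))
    return [uptake_per_year / lam * (1.0 - math.exp(-lam * a)) for a in ages]

ages = range(0, 91, 15)
for age, burden in zip(ages, body_burden(ages, 15.0)):
    print(f"age {age:2d}: {burden:8.1f} ug")
```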

  16. A Bolus Calculator Based on Continuous-Discrete Unscented Kalman Filtering for Type 1 Diabetics

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Aradóttir, Tinna Björk; Hagdrup, Morten

    2015-01-01

    In patients with type 1 diabetes, the effects of meals intake on blood glucose level are usually mitigated by administering a large amount of insulin (bolus) at mealtime or even slightly before. This strategy assumes, among other things, a prior knowledge of the meal size and the postprandial...... after or 30 minutes after the beginning of the meal). We implement a continuous-discrete unscented Kalman filter to estimate the states and insulin sensitivity. These estimates are used in a bolus calculator. The numerical results demonstrate that administering the meal bolus 15 minutes after mealtime...
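
    For context, a minimal sketch of a conventional bolus calculation (carbohydrate coverage plus glucose correction) is shown below; the paper's contribution is to feed such a calculator with unscented-Kalman-filter state and insulin-sensitivity estimates, which this generic textbook formula does not include.

```python
# Generic textbook bolus formula (not the paper's UKF-based algorithm).
def meal_bolus(carbs_g: float, glucose_mmol_l: float, target_mmol_l: float,
               carb_ratio_g_per_u: float, isf_mmol_l_per_u: float,
               insulin_on_board_u: float = 0.0) -> float:
    """Units of insulin to cover a meal plus a correction toward target."""
    meal_part = carbs_g / carb_ratio_g_per_u
    correction_part = (glucose_mmol_l - target_mmol_l) / isf_mmol_l_per_u
    return max(0.0, meal_part + correction_part - insulin_on_board_u)

print(meal_bolus(carbs_g=60, glucose_mmol_l=9.0, target_mmol_l=6.0,
                 carb_ratio_g_per_u=10.0, isf_mmol_l_per_u=2.0))  # 7.5 U
```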

  17. Microscopic calculations of elastic scattering between light nuclei based on a realistic nuclear interaction

    Energy Technology Data Exchange (ETDEWEB)

    Dohet-Eraly, Jeremy [F.R.S.-FNRS (Belgium); Sparenberg, Jean-Marc; Baye, Daniel, E-mail: jdoheter@ulb.ac.be, E-mail: jmspar@ulb.ac.be, E-mail: dbaye@ulb.ac.be [Physique Nucleaire et Physique Quantique, CP229, Universite Libre de Bruxelles (ULB), B-1050 Brussels (Belgium)

    2011-09-16

    The elastic phase shifts for the α + α and α + ³He collisions are calculated in a cluster approach by the Generator Coordinate Method coupled with the Microscopic R-matrix Method. Two interactions are derived from the realistic Argonne potentials AV8′ and AV18 with the Unitary Correlation Operator Method. With a specific adjustment of the correlations on the α + α collision, the phase shifts for the α + α and α + ³He collisions agree rather well with experimental data.

  18. A Microsoft Excel® 2010 Based Tool for Calculating Interobserver Agreement

    Science.gov (United States)

    Azulay, Richard L

    2011-01-01

    This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work. PMID:22649578

  19. A microsoft excel(®) 2010 based tool for calculating interobserver agreement.

    Science.gov (United States)

    Reed, Derek D; Azulay, Richard L

    2011-01-01

    This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel(®)) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
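
    A Python analogue of two of the listed algorithms, written from their standard definitions rather than ported from the Excel tool: total count IOA (smaller total divided by larger total) and interval-by-interval IOA (proportion of intervals on which the two observers agree).

```python
# Two common interobserver agreement (IOA) calculations.
def total_count_ioa(counts_a, counts_b):
    """Smaller total divided by larger total, as a percentage."""
    small, large = sorted([sum(counts_a), sum(counts_b)])
    return 100.0 * small / large if large else 100.0

def interval_by_interval_ioa(record_a, record_b):
    """Percentage of intervals in which the two observers agree."""
    agreements = sum(a == b for a, b in zip(record_a, record_b))
    return 100.0 * agreements / len(record_a)

obs_a = [1, 0, 1, 1, 0, 1, 0, 0]   # interval recordings, observer A
obs_b = [1, 0, 0, 1, 0, 1, 0, 1]   # interval recordings, observer B
print(total_count_ioa(obs_a, obs_b))           # 100 * 4/4 = 100.0
print(interval_by_interval_ioa(obs_a, obs_b))  # 6/8 agree -> 75.0
```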

  20. Convergence study of isogeometric analysis based on Bezier extraction in electronic structure calculations

    Czech Academy of Sciences Publication Activity Database

    Cimrman, R.; Novák, Matyáš; Kolman, Radek; Tůma, Miroslav; Plešek, Jiří; Vackář, Jiří

    2018-01-01

    Roč. 319, Feb (2018), s. 138-152 ISSN 0096-3003 R&D Projects: GA ČR GA17-12925S; GA ČR(CZ) GAP108/11/0853 Institutional support: RVO:68378271; RVO:61388998; RVO:67985807 Keywords: electronic structure calculation * density functional theory * finite element method * isogeometric analysis Impact factor: 1.738, year: 2016

  1. Time reversed test particle calculations at Titan, based on CAPS-IMS measurements

    Science.gov (United States)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly; Young, David T.

    2013-04-01

    We used the theoretical approach of Kobel and Flückiger (1994) to construct a magnetic environment model in the vicinity of Titan, with the exception of placing the bow shock (which is not present at Titan) at infinity. The model has four free parameters to calibrate the shape and orientation of the field. We investigate the CAPS-IMS Singles data to calculate/estimate the location of origin of the detected cold ions at Titan, and we also use the measurements of the onboard magnetometer to set the parameters of the model magnetic field. A 4th-order Runge-Kutta method is applied to calculate the test particle trajectories in a time-reversed scenario in the curved magnetic environment. Several different ion species can be tracked by the model along their possible trajectories; as a first approach we considered three particle groups (1, 2 and 16 amu ions). In this initial study we show the results for some thoroughly discussed flybys such as TA, TB and T5, but we consider more recent tailside encounters as well. Reference: Kobel, E. and E.O. Flückiger, A model of the steady state magnetic field in the magnetosheath, JGR 99, Issue A12, 23617, 1994.
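
    A minimal sketch of the time-reversed tracing step, with the field model and all parameters invented for illustration (the actual study uses the four-parameter Kobel and Flückiger field, not a uniform one): integrate the Lorentz force with classical RK4 and a negative time step.

```python
# Time-reversed charged-particle tracing with 4th-order Runge-Kutta.
import numpy as np

Q_OVER_M = 9.58e7                  # charge-to-mass ratio of a proton, C/kg
B0 = np.array([0.0, 0.0, 5e-9])    # uniform 5 nT field as a stand-in model

def lorentz(t, y):
    """State y = [x, y, z, vx, vy, vz]; electric field neglected."""
    v = y[3:]
    a = Q_OVER_M * np.cross(v, B0)
    return np.concatenate([v, a])

def rk4_trace(y0, dt, n_steps):
    """dt < 0 gives the time-reversed trajectory."""
    y, path = np.asarray(y0, float), []
    for i in range(n_steps):
        t = i * dt
        k1 = lorentz(t, y)
        k2 = lorentz(t + dt / 2, y + dt / 2 * k1)
        k3 = lorentz(t + dt / 2, y + dt / 2 * k2)
        k4 = lorentz(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(y.copy())
    return np.array(path)

# Trace a 1 km/s proton backward for 1000 steps of -10 ms.
track = rk4_trace([0, 0, 0, 1e3, 0, 0], dt=-1e-2, n_steps=1000)
print("origin estimate:", track[-1, :3])
```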

  2. Consideration of relativistic effects in band structure calculations based on the empirical tight-binding method

    International Nuclear Information System (INIS)

    Hanke, M.; Hennig, D.; Kaschte, A.; Koeppen, M.

    1988-01-01

    The energy band structures of cadmium telluride and mercury telluride are investigated by means of the tight-binding (TB) method, considering relativistic effects and the spin-orbit interaction. Taking relativistic effects into account in the method is rather simple, although the size of the Hamiltonian matrix doubles. Such considerations are necessary for the interesting narrow-gap semiconductors, and the experimental results are reflected correctly in the band structures. The transformation behaviour of the eigenvectors within the Brillouin zone becomes more complicated but remains theoretically controllable. If, however, the matrix elements of the Green operator are to be calculated, one has to use formula manipulation programmes, in particular for the non-diagonal elements. For defect calculations by the Koster-Slater theory of scattering it is necessary to know these matrix elements. Knowledge of the transformation behaviour of the eigenfunctions saves frequent diagonalization of the Hamiltonian matrix and thus permits a numerical solution of the problem. Corresponding results for the sp³ basis are available.

  3. REVIEW OF ADVANCES IN COBB ANGLE CALCULATION AND IMAGE-BASED MODELLING TECHNIQUES FOR SPINAL DEFORMITIES

    Directory of Open Access Journals (Sweden)

    V. Giannoglou

    2016-06-01

    Scoliosis is a 3D deformity of the human spinal column caused by bending of the spine, leading to pain as well as aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.

  4. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This heavily traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios; as a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
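
    As a hedged illustration of the final aggregation step only (the paper's focus, generating exposure profiles by least squares Monte Carlo, is not reproduced here), unilateral CVA can be approximated as a discounted sum of expected exposure times marginal default probability; flat hazard and short rates are assumed below, and all numbers are invented.

```python
# CVA ~ (1 - R) * sum_i DF(t_i) * EE(t_i) * [S(t_{i-1}) - S(t_i)],
# with S(t) the survival probability and R the recovery rate.
import math

def cva(expected_exposure, times, hazard_rate=0.02, short_rate=0.01,
        recovery=0.4):
    surv = lambda t: math.exp(-hazard_rate * t)     # flat intensity assumed
    df = lambda t: math.exp(-short_rate * t)        # flat discounting assumed
    total, prev_t = 0.0, 0.0
    for ee, t in zip(expected_exposure, times):
        default_prob = surv(prev_t) - surv(t)       # default in (prev_t, t]
        total += df(t) * ee * default_prob
        prev_t = t
    return (1.0 - recovery) * total

times = [0.5 * k for k in range(1, 11)]             # 5-year swap, semiannual
ee = [120e3, 140e3, 150e3, 145e3, 130e3, 110e3, 85e3, 60e3, 35e3, 10e3]
print(f"CVA ~ {cva(ee, times):,.0f}")
```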

  5. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations.

    Science.gov (United States)

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-01

    Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge, with and without the addition of sulfur compounds, were combusted at 850 °C, and the partitioning of Pb between the solid phase (bottom ash) and the gas phase (fly ash and flue gas) was quantified. The results indicate that the three types of sulfur compounds added to the sludge (S, Na2S and Na2SO4) could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of adding S. In the bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures, and that Pb partitioning during the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si-, Ca- and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Model-based implementation of self-configurable intellectual property modules for image histogram calculation in FPGAs

    Directory of Open Access Journals (Sweden)

    Luis Manuel Garcés Socarrás

    2017-05-01

    This work presents the development of self-modifiable Intellectual Property (IP) modules for histogram calculation using the model-based design technique provided by Xilinx System Generator. An analysis and a comparison among histogram calculation architectures are presented, and the best solution for the design flow used is selected. The paper also emphasizes the use of generic architectures that can be adjusted by a self-configuration procedure to ensure a processing flow adequate to the application requirements. In addition, the implementation of a configurable IP module for histogram calculation using a model-based design flow is described, and some implementation results on a Xilinx Spartan-6 LX45 FPGA are shown.

  7. Transition state theory thermal rate constants and RRKM-based branching ratios for the N((2)D) + CH(4) reaction based on multi-state and multi-reference ab initio calculations of interest for the Titan's chemistry.

    Science.gov (United States)

    Ouk, Chanda-Malis; Zvereva-Loëte, Natalia; Scribano, Yohann; Bussery-Honvault, Béatrice

    2012-10-30

    Multireference single and double configuration interaction (MRCI) calculations including Davidson (+Q) or Pople (+P) corrections have been conducted in this work for the reactants, products, and extrema of the doublet ground state potential energy surface involved in the N(²D) + CH₄ reaction. Such highly correlated ab initio calculations are then compared with previous PMP4, CCSD(T), W1, and DFT/B3LYP studies. Large relative differences are observed, in particular for the transition state in the entrance channel, resolving the disagreement between previous ab initio calculations. We confirm the existence of a small but positive potential barrier (3.86 ± 0.84 kJ mol⁻¹ (MR-AQCC) and 3.89 kJ mol⁻¹ (MRCI+P)) in the entrance channel of the title reaction. The correlation is seen to change significantly the energetic positions of the two minima and five saddle points of this system, together with the dissociation channels, but not their relative order. The influence of the electronic correlation on the energetics of the system is clearly demonstrated by the evaluation of the thermal rate constant and its temperature dependence by means of transition state theory. Indeed, only MRCI values are able to reproduce the experimental rate constant of the title reaction and its behavior with temperature. Similarly, product branching ratios, evaluated by means of unimolecular RRKM theory, confirm the NH production of Umemoto et al., whereas previous works based on less accurate ab initio calculations failed. We confirm the previous findings that the N(²D) + CH₄ reaction proceeds via an insertion-dissociation mechanism and that the dominant product channels are CH₂NH + H and CH₃ + NH. Copyright © 2012 Wiley Periodicals, Inc.

  8. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    International Nuclear Information System (INIS)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-01-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer's specifications, the beam data commissioning needed for this model includes several in-air and in-water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations through comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. The MC calculated data show excellent agreement for field sizes from 18x18 to 100x100 mm²: gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12x12 and 6x6 mm²) only 92% of the data meet the criteria. Total scatter factors show good agreement except for the smallest fields, which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm². Special care must be taken for smaller fields.

  9. Web-based Tsunami Early Warning System with instant Tsunami Propagation Calculations in the GPU Cloud

    Science.gov (United States)

    Hammitzsch, M.; Spazier, J.; Reißland, S.

    2014-12-01

    Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems importantly include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, the concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) have to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype that opens up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of cloud-based GPU computation to analyze other types of geohazards and natural hazards and to react upon the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the

  10. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    International Nuclear Information System (INIS)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K

    2016-01-01

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.

  11. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
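
    A sketch of the probability-weighted HU assignment described in the abstract, with invented centroid values (in the study they come from the fuzzy c-means training on the planning-CT/day-1-CBCT pair) and the standard fuzzy-membership formula assumed for the classification step:

```python
# Map CBCT HU to synthetic-CT HU via fuzzy memberships to tissue classes.
import numpy as np

cbct_centroids = np.array([-950.0, -400.0, -60.0, 40.0, 700.0])  # per class
ct_centroids   = np.array([-980.0, -450.0, -80.0, 30.0, 900.0])  # per class
m = 2.0  # fuzzy c-means fuzzifier

def synthetic_ct(cbct_hu: np.ndarray) -> np.ndarray:
    """Probability-weighted sum of CT centroids, u_k = d_k^-2 / sum_j d_j^-2."""
    d = np.abs(cbct_hu[..., None] - cbct_centroids) + 1e-6   # class distances
    u = 1.0 / np.sum((d[..., :, None] / d[..., None, :]) ** (2 / (m - 1)),
                     axis=-1)                                 # memberships
    return np.sum(u * ct_centroids, axis=-1)                  # weighted HU

cbct_slice = np.array([[-900.0, -420.0], [10.0, 650.0]])
print(synthetic_ct(cbct_slice))
```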

  12. Molecular interactions of nucleic acid bases. From ab initio calculations to molecular dynamics simulations

    Czech Academy of Sciences Publication Activity Database

    Šponer, Jiří

    2002-01-01

    Roč. 223, - (2002), s. 212 ISSN 0065-7727. [Annual Meeting of the American Chemistry Society /223./. 07.04.2002-11.04.2002, Orlando ] Institutional research plan: CEZ:AV0Z5004920 Keywords : quantum chemistry * base pairing * base stacking Subject RIV: BO - Biophysics

  13. Optimization of air plasma reconversion of UF6 to UO2 based on thermodynamic calculations

    Science.gov (United States)

    Tundeshev, Nikolay; Karengin, Alexander; Shamanin, Igor

    2018-03-01

    The possibility of plasma-chemical conversion of uranium hexafluoride depleted in uranium-235 (DUHF) in air plasma, in the form of gas-air mixtures with hydrogen, is considered in the paper. The burning parameters of the gas-air mixtures are calculated, and the compositions of mixtures enabling energy-efficient conversion of DUHF in air plasma are determined. With the help of thermodynamic modeling of the plasma-chemical conversion, the optimal composition of UF6-H2-air mixtures and their burning parameters, as well as the modes for producing uranium dioxide in the condensed phase, are determined. The results of the conducted research can be used to create a technology for the plasma-chemical conversion of DUHF in the form of gas-air mixtures with hydrogen.

  14. A robust force field based method for calculating conformational energies of charged drug-like molecules

    DEFF Research Database (Denmark)

    Pøhlsgaard, Jacob; Harpsøe, Kasper; Jørgensen, Flemming Steen

    2012-01-01

    The binding affinity of a drug-like molecule depends among other things on the availability of the bioactive conformation. If the bioactive conformation has a significantly higher energy than the global minimum energy conformation, the molecule is unlikely to bind to its target. Determination...... of the global minimum energy conformation and calculation of conformational penalties of binding are prerequisites for prediction of reliable binding affinities. Here, we present a simple and computationally efficient procedure to estimate the global energy minimum for a wide variety of structurally diverse...... molecules, including polar and charged compounds. Identifying global energy minimum conformations of such compounds with force-field methods is problematic due to the exaggeration of intramolecular electrostatic interactions. We demonstrate that the global energy minimum conformations of zwitterionic

  15. An approach to calculating metal particle detection in lubrication oil based on a micro inductive sensor

    Science.gov (United States)

    Wu, Yu; Zhang, Hongpeng

    2017-12-01

    A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to calculating the change in coil inductance is introduced for metal particle detection in lubrication oil. Electromagnetic theory is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression for the change in coil inductance is obtained by means of the magnetic vector potential. Experimental verification is carried out. The results show that copper particles 50-52 µm in diameter have been detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% at particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for inductive sensors in metal particle detection in oil and other areas of application.

  16. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, together with a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the relative deviation from experimental data decreased from a mean value of 18% to 4%.

  17. CALCULATION METHOD OF ELECTRIC POWER LINES MAGNETIC FIELD STRENGTH BASED ON CYLINDRICAL SPATIAL HARMONICS

    Directory of Open Access Journals (Sweden)

    A.V. Erisov

    2016-05-01

    Purpose. To simplify the calculation relations used to determine the magnetic field strength of electric power lines and to assess their environmental safety. Methodology. The magnetic field of transmission lines is described using spatial harmonic analysis in a cylindrical coordinate system. Results. For engineering calculations, the magnetic field of electric power lines is described with sufficient accuracy by its first spatial harmonic. Originality. The assessment of the influence of transmission line tower design on the magnitude of the line's magnetic field and on the width of the right-of-way is substantially simplified. Practical value. Environmentally safe design of electric power lines with respect to the magnetic field level.
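
    As a rough numerical illustration of the underlying field model (geometry and currents invented, ground effects ignored), the quasi-static field of a three-phase line can be superposed from infinite line currents with H = I/(2πr) per conductor; at distances large compared with the phase spacing, the resultant is dominated by the first spatial harmonic and decays roughly as 1/r².

```python
# RMS magnetic field strength of a three-phase line by phasor superposition.
import numpy as np

I = 300.0                                  # phase current amplitude, A
phases = np.deg2rad([0.0, -120.0, 120.0])
# conductor positions (x, y) in metres, y above ground
conductors = np.array([[-3.0, 10.0], [0.0, 10.0], [3.0, 10.0]])

def h_field_rms(point):
    """RMS field strength H (A/m) of three infinite line currents."""
    h = np.zeros(2, dtype=complex)
    for (x0, y0), ph in zip(conductors, phases):
        r = np.array(point) - np.array([x0, y0])
        r2 = np.dot(r, r)
        # Infinite straight conductor: |H| = I/(2 pi r), azimuthal direction.
        phasor = I * np.exp(1j * ph) / (2.0 * np.pi * r2)
        h += phasor * np.array([-r[1], r[0]])
    return np.sqrt(0.5 * np.sum(np.abs(h) ** 2))

for x in (0.0, 10.0, 20.0, 40.0):
    print(f"x = {x:5.1f} m: H = {h_field_rms((x, 1.0)):.4f} A/m")
```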

  18. ACCENT-based web calculators to predict recurrence and overall survival in stage III colon cancer.

    Science.gov (United States)

    Renfro, Lindsay A; Grothey, Axel; Xue, Yuan; Saltz, Leonard B; André, Thierry; Twelves, Chris; Labianca, Roberto; Allegra, Carmen J; Alberts, Steven R; Loprinzi, Charles L; Yothers, Greg; Sargent, Daniel J

    2014-12-01

    Current prognostic tools in colon cancer use relatively few patient characteristics. We constructed and validated clinical calculators for overall survival (OS) and time to recurrence (TTR) for stage III colon cancer and compared their performance against an existing tool (Numeracy) and American Joint Committee on Cancer (AJCC) version 7 staging. Data from 15936 stage III patients accrued to phase III clinical trials since 1989 were used to construct Cox models for TTR and OS. Variables included age, sex, race, body mass index, performance status, tumor grade, tumor stage, ratio of positive lymph nodes to nodes examined, number and location of primary tumors, and adjuvant treatment (fluoropyrimidine single agent or in combination). Missing data were imputed, and final models internally validated for optimism-corrected calibration and discrimination and compared with AJCC. External validation and comparisons against Numeracy were performed using stage III patients from NSABP trial C-08. All statistical tests were two-sided. All variables were statistically and clinically significant for OS prediction, while age and race did not predict TTR. No meaningful interactions existed. Models for OS and TTR were well calibrated and associated with C-indices of 0.66 and 0.65, respectively, compared with C-indices of 0.58 and 0.59 for AJCC. These tools, available online, better predicted patient outcomes than Numeracy, both overall and within patient subgroups, in external validation. The proposed ACCENT calculators are internally and externally valid, better discriminate patient risk than AJCC version 7 staging, and better predict patient outcomes than Numeracy. These tools have replaced Numeracy for online clinical use and will aid prognostication and patient/physician communication. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
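
    A hedged sketch of fitting a Cox model of the kind behind such calculators, using the lifelines library on synthetic data (all column names, covariates, and the toy hazard model are invented; the real tools were built on pooled ACCENT trial data with imputation and internal validation):

```python
# Fit a Cox proportional hazards model on synthetic time-to-recurrence data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 300
age = rng.integers(30, 85, n)
node_ratio = rng.uniform(0.0, 0.8, n)
# Exponential event times whose rate increases with node ratio (toy model).
hazard = 0.02 * np.exp(1.5 * node_ratio)
t_event = rng.exponential(1.0 / hazard)
t_censor = rng.uniform(6, 84, n)

df = pd.DataFrame({
    "time_months": np.minimum(t_event, t_censor),
    "event": (t_event <= t_censor).astype(int),   # recurrence observed?
    "age": age,
    "node_ratio": node_ratio,
})

cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
cph.print_summary()
```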

  19. Score based procedures for the calculation of forensic likelihood ratios - Scores should take account of both similarity and typicality.

    Science.gov (United States)

    Morrison, Geoffrey Stewart; Enzinger, Ewald

    2018-01-01

    Score based procedures for the calculation of forensic likelihood ratios are popular across different branches of forensic science. They have two stages, first a function or model which takes measured features from known-source and questioned-source pairs as input and calculates scores as output, then a subsequent model which converts scores to likelihood ratios. We demonstrate that scores which are purely measures of similarity are not appropriate for calculating forensically interpretable likelihood ratios. In addition to taking account of similarity between the questioned-origin specimen and the known-origin sample, scores must also take account of the typicality of the questioned-origin specimen with respect to a sample of the relevant population specified by the defence hypothesis. We use Monte Carlo simulations to compare the output of three score based procedures with reference likelihood ratio values calculated directly from the fully specified Monte Carlo distributions. The three types of scores compared are: 1. non-anchored similarity-only scores; 2. non-anchored similarity and typicality scores; and 3. known-source anchored same-origin scores and questioned-source anchored different-origin scores. We also make a comparison with the performance of a procedure using a dichotomous "match"/"non-match" similarity score, and compare the performance of 1 and 2 on real data. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
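
    A toy illustration of the two-stage procedure itself (score distributions invented, Gaussian score-to-LR models assumed); the paper's argument concerns which inputs the first, score-producing stage should use, namely similarity and typicality rather than similarity alone:

```python
# Stage 2 of a score-based procedure: convert scores to likelihood ratios.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Simulated scores, e.g. -|difference| between questioned and known features.
same = -np.abs(rng.normal(0.0, 1.0, 5000))    # same-source pairs
diff = -np.abs(rng.normal(0.0, 3.0, 5000))    # different-source pairs

# Fit simple Gaussian score models to each set.
mu_s, sd_s = same.mean(), same.std()
mu_d, sd_d = diff.mean(), diff.std()

def score_lr(score: float) -> float:
    """LR as the ratio of same-source to different-source score densities."""
    return norm.pdf(score, mu_s, sd_s) / norm.pdf(score, mu_d, sd_d)

for s in (-0.5, -2.0, -5.0):
    print(f"score {s:5.1f}: LR = {score_lr(s):.3f}")
```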

  20. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
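
    The propagation mechanism the authors analyze can be illustrated numerically: solve the same discrete Poisson problem twice, once with a clean source term and once with a perturbed one standing in for PIV measurement error, and inspect how the perturbation appears in the pressure field. This is a toy sketch (the sinusoidal source term and 5% noise level are arbitrary choices, not data from the paper).

    import numpy as np

    def solve_poisson_dirichlet(f, h, iters=5000):
        """Jacobi iteration for the 2-D Poisson equation lap(p) = f with p = 0 on the boundary."""
        p = np.zeros_like(f)
        for _ in range(iters):
            p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                    + p[1:-1, :-2] + p[1:-1, 2:] - h ** 2 * f[1:-1, 1:-1])
        return p

    n = 33
    h = 1.0 / (n - 1)
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    f_clean = np.sin(np.pi * x) * np.sin(np.pi * y)   # stand-in for the PIV-derived source term
    rng = np.random.default_rng(0)
    f_noisy = f_clean + 0.05 * rng.standard_normal(f_clean.shape)  # "measurement" error

    err = np.abs(solve_poisson_dirichlet(f_noisy, h) - solve_poisson_dirichlet(f_clean, h)).max()
    print(f"max pressure error propagated from the noisy data: {err:.2e}")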

  1. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  2. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587

  3. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    Science.gov (United States)

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet-rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift
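
    For orientation, the three quasi-steady models named above reduce, in their simplest textbook forms, to one-line force estimates; the sketch below evaluates them side by side. Every numerical value is an invented placeholder (these are not the parrotlet measurements), and real analyses use more careful wake geometry.

    import math

    rho = 1.2          # air density, kg m^-3 (assumed)
    gamma = 0.12       # measured circulation, m^2 s^-1 (assumed)
    U = 3.0            # flight speed, m s^-1 (assumed)
    b = 0.20           # vortex span, m (assumed)
    T_down = 0.08      # downstroke duration, s (assumed)
    w = 1.5            # induced velocity through the disk, m s^-1 (assumed)
    A = math.pi * (b / 2) ** 2   # ring/disk area, m^2

    L_kj = rho * U * gamma * b           # Kutta-Joukowski: lift ~ rho U Gamma b
    L_ring = rho * gamma * A / T_down    # one planar vortex ring shed per downstroke
    L_disk = 2 * rho * A * w ** 2        # actuator disk: momentum flux through the disk
    print(f"K-J {L_kj:.3f} N, vortex ring {L_ring:.3f} N, actuator disk {L_disk:.3f} N")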

  4. Set of molecular models based on quantum mechanical ab initio calculations and thermodynamic data.

    Science.gov (United States)

    Eckl, Bernhard; Vrabec, Jadran; Hasse, Hans

    2008-10-09

    A parametrization strategy for molecular models on the basis of force fields is proposed, which allows a rapid development of models for small molecules by using results from quantum mechanical (QM) ab initio calculations and thermodynamic data. The geometry of the molecular models is specified according to the atom positions determined by QM energy minimization. The electrostatic interactions are modeled by reducing the electron density distribution to point dipoles and point quadrupoles located in the center of mass of the molecules. Dispersive and repulsive interactions are described by Lennard-Jones sites, for which the parameters are iteratively optimized to experimental vapor-liquid equilibrium (VLE) data, i.e., vapor pressure, saturated liquid density, and enthalpy of vaporization of the considered substance. The proposed modeling strategy was applied to a sample set of ten molecules from different substance classes. New molecular models are presented for iso-butane, cyclohexane, formaldehyde, dimethyl ether, sulfur dioxide, dimethyl sulfide, thiophene, hydrogen cyanide, acetonitrile, and nitromethane. Most of the models are able to describe the experimental VLE data with deviations of a few percent.

  5. Calculation of benefit reserves based on true m-thly benefit premiums

    Science.gov (United States)

    Riaman; Susanti, Dwi; Supriatna, Agus; Nurani Ruchjana, Budi

    2017-10-01

    Life insurance is a form of insurance that mitigates the financial risk attached to the life or death of a person; one example is term life insurance. Insurance companies must hold a sum of money as reserves for their customers. Benefit reserves are an alternative calculation that involves net and cost premiums. An insured may pay a series of benefit premiums to an insurer that is equivalent, at the date of policy issue, to the sum to be paid on the death of the insured, or on survival of the insured to the maturity date. A balancing item is required, and this item is a liability for one of the parties and an asset for the other. In a loan, the balancing item is the outstanding principal, an asset for the lender and a liability for the borrower. In this paper we examine the benefit reserve formulas corresponding to the formulas for true m-thly benefit premiums under the prospective method, which gives a reserve of zero at the date of policy issue. Several principles can be used to determine the benefit premiums; in our discussion an equivalence relation is established.
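
    A toy numerical illustration of the prospective method, with annual rather than m-thly premiums for brevity and an invented three-year mortality table: the benefit premium is set by the equivalence principle, and the reserve at any time is the expected present value of future benefits minus that of future premiums.

    # All rates below are assumptions chosen for illustration only.
    v = 1 / 1.05                      # discount factor at 5% interest
    q = [0.010, 0.012, 0.015]         # mortality rates q_x, q_{x+1}, q_{x+2}
    p = [1 - qi for qi in q]

    kp = [1.0]                        # k-year survival probabilities kp_x
    for pi in p:
        kp.append(kp[-1] * pi)

    # EPV of a benefit of 1 paid at the end of the year of death, and of premiums of 1/year
    apv_benefit = sum(v ** (k + 1) * kp[k] * q[k] for k in range(3))
    annuity_due = sum(v ** k * kp[k] for k in range(3))
    P = apv_benefit / annuity_due     # equivalence-principle benefit premium

    def prospective_reserve(t):
        fut_ben = sum(v ** (k - t + 1) * (kp[k] / kp[t]) * q[k] for k in range(t, 3))
        fut_prem = P * sum(v ** (k - t) * (kp[k] / kp[t]) for k in range(t, 3))
        return fut_ben - fut_prem

    print([round(prospective_reserve(t), 6) for t in range(4)])  # zero at policy issue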

  6. Calculation of Long-Term Averages of Surface Air Temperature Based on Insolation Data

    Science.gov (United States)

    Fedorov, V. M.; Grebennikov, P. B.

    2017-12-01

    The solar radiation arriving at the Earth's ellipsoid is considered without taking the atmosphere into account, on the basis of astronomical ephemerides for the time interval from 3000 BC to 3000 AD. Using regression equations between the Earth's insolation and near-surface air temperature, annual and semiannual insolation-based climatic norms of near-surface air temperature for the Earth as a whole and for each hemisphere are calculated over 30-year intervals for the period from 2930 BC to 2930 AD, with time steps of 100 and 900-1000 years. The analysis shows that the annual insolation-based norms of near-surface air temperature of the Earth and the hemispheres decrease over all intervals. The semiannual norms increase in winter and decrease in summer, which means that the seasonal difference decreases. The annual and semiannual insolation-based norms of near-surface air temperature increase in the equatorial regions and decrease in the polar regions; the latitudinal contrast increases. The interlatitudinal gradient is higher in the Southern Hemisphere: it is practically unchanged in winter and increases in summer, most strongly in the Southern Hemisphere.

  7. Dispersion calculation method based on S-transform and coordinate rotation for Love channel waves with two components

    Science.gov (United States)

    Feng, Lei; Zhang, Yugui

    2017-08-01

    Dispersion analysis is an important part of in-seam seismic data processing, and the accuracy of the calculated dispersion curve directly influences picking errors in channel wave travel times. To extract an accurate channel wave dispersion curve from in-seam seismic two-component signals, we propose a time-frequency analysis method based on single-trace signal processing; in addition, we formulate a dispersion calculation equation, based on the S-transform, with a freely adjustable filter window width. To unify the azimuth of seismic wave propagation received by a two-component geophone, the original in-seam seismic data undergo coordinate rotation. The rotation angle can be calculated from the characteristics of the P-wave, which has high energy in the wave propagation direction and weak energy in the vertical direction. Once this angle is acquired, a two-component signal can be converted into the horizontal and vertical directions. Because Love channel waves have a particle vibration track perpendicular to the wave propagation direction, the signal in the horizontal and vertical directions is mainly Love channel waves, and more accurate dispersion characteristics of the Love channel waves can be extracted after the coordinate rotation of the two-component signals.
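
    The coordinate rotation step can be sketched compactly. The polarization-based angle estimate below (dominant eigenvector of the two-component covariance in the P-wave window) is a common choice and is offered here as an assumption, not necessarily the exact procedure of the paper.

    import numpy as np

    def estimate_rotation_angle(h1_p, h2_p):
        """Estimate the propagation azimuth from P-wave particle motion on two components."""
        cov = np.cov(np.vstack([h1_p, h2_p]))
        eigvals, eigvecs = np.linalg.eigh(cov)
        v = eigvecs[:, np.argmax(eigvals)]     # dominant polarization direction
        return np.arctan2(v[1], v[0])

    def rotate_two_component(h1, h2, theta):
        """Rotate the two components so one axis follows the propagation direction."""
        c, s = np.cos(theta), np.sin(theta)
        along = c * h1 + s * h2                # P-wave energy maximized here
        perpendicular = -s * h1 + c * h2       # Love channel waves dominate here
        return along, perpendicular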

  8. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - and other computer science techniques) that have already been used successfully for a long time in other scientific and industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  9. Aromatic character of planar boron-based clusters revisited by ring current calculations

    NARCIS (Netherlands)

    Hung Tan Pham, [Unknown; Lim, Kie Zen; Havenith, Remco W. A.; Minh Tho Nguyen, [No Value

    2016-01-01

    The planarity of small boron-based clusters is the result of an interplay between geometry, electron delocalization, covalent bonding and stability. These compounds contain two different bonding patterns involving both sigma and pi delocalized bonds, and up to now, their aromaticity has been

  10. Calculating the Entropy of Solid and Liquid Metals, Based on Acoustic Data

    Science.gov (United States)

    Tekuchev, V. V.; Kalinkin, D. P.; Ivanova, I. V.

    2018-05-01

    The entropies of iron, cobalt, rhodium, and platinum are studied for the first time, based on acoustic data and using the Debye theory and rigid-sphere model, from 298 K up to the boiling point. A formula for the melting entropy of metals is validated. Good agreement between the research results and the literature data is obtained.
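
    A self-contained numerical sketch of the Debye-theory part of such a calculation: integrate the Debye heat capacity as S(T) = ∫ C_V(T')/T' dT'. The Debye temperature below is a rough literature value for iron used as an assumption; the paper's acoustic-data-based treatment is more elaborate.

    import numpy as np

    R = 8.314          # gas constant, J mol^-1 K^-1
    theta_D = 470.0    # Debye temperature, K (approximate value for iron, assumed)

    def trapezoid(y, x):
        y, x = np.asarray(y, float), np.asarray(x, float)
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def c_v(T, n=2000):
        """Debye heat capacity: 9R (T/theta)^3 times the standard Debye integral."""
        x = np.linspace(1e-8, theta_D / T, n)
        # integrand written with exp(-x) to stay finite for large x
        integrand = x ** 4 * np.exp(-x) / np.expm1(-x) ** 2
        return 9 * R * (T / theta_D) ** 3 * trapezoid(integrand, x)

    def entropy(T, n=400):
        Ts = np.linspace(1.0, T, n)
        return trapezoid([c_v(t) / t for t in Ts], Ts)

    print(f"Debye-model S(298 K) = {entropy(298.0):.1f} J mol^-1 K^-1")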

  11. One-dimensional thermal evolution calculation based on a mixing length theory: Application to Saturnian icy satellites

    Science.gov (United States)

    Kamata, S.

    2017-12-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.

  12. Verification study of thorium cross section in MVP calculation of thorium based fuel core using experimental data

    International Nuclear Information System (INIS)

    Mai, V. T.; Fujii, T.; Wada, K.; Kitada, T.; Takaki, N.; Yamaguchi, A.; Watanabe, H.; Unesaki, H.

    2012-01-01

    Considering the importance of thorium data and concerns about the accuracy of the Th-232 cross section library, a series of thorium critical core experiments carried out at the KUCA facility of the Kyoto Univ. Research Reactor Inst. has been analyzed. The core was composed of pure thorium plates and 93% enriched uranium plates with a solid polyethylene moderator, a hydrogen to U-235 ratio of 140, and a Th-232 to U-235 ratio of 15.2. Calculations of the effective multiplication factor, the control rod worth, and the reactivity worth of the Th plates were conducted with the MVP code using the JENDL-4.0 library [1]. At the experiment site, after achieving the critical state with 51 fuel rods inserted in the reactor, measurements of the reactivity worth of the control rods and of a thorium sample were carried out. Compared with the experimental data, the calculation overestimates the effective multiplication factor by about 0.90%. The MVP evaluation of the control rod reactivity worth is acceptable, with a maximum discrepancy of about the statistical error of the measured data. The calculated results agree with the measured ones within 3.1% for the reactivity worth of one Th plate. From this investigation, further experiments and research on the Th-232 cross section library need to be conducted to provide more reliable data for thorium based fuel core design and safety calculations. (authors)

  13. Comparison of the ESTRO formalism for monitor unit calculation with a Clarkson based algorithm of a treatment planning system and a traditional ''full-scatter'' methodology

    International Nuclear Information System (INIS)

    Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.

    2005-01-01

    The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z_m), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z_R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z_m and z_R), and the traditional ''full-scatter'' methodology. All methodologies, except for the ''full-scatter'' methodology, separated head-scatter from phantom-scatter effects, and none of the methodologies, except for the ESTRO formalism, utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at z_m (4% at 6 MV and 1.6% at 10 MV) than at z_R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The ''full-scatter'' methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were
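
    As a schematic of what such formalisms compute, the sketch below separates head scatter (Sc) from phantom scatter (Sp) in a monitor unit calculation and uses Sterling's rule for the equivalent square of a rectangular field. All factor values are invented placeholders, not commissioning data, and the real ESTRO formalism carries considerably more structure (wedge depth dose data, normalisation at z_R, etc.).

    def equivalent_square(a, b):
        """Sterling's rule: side of the square field equivalent to an a x b rectangle."""
        return 2 * a * b / (a + b)

    def monitor_units(dose_cgy, dref_cgy_per_mu, sc, sp, tpr, wedge_factor=1.0):
        """MU = prescribed dose / (reference output x head scatter x phantom scatter x depth factor)."""
        return dose_cgy / (dref_cgy_per_mu * sc * sp * tpr * wedge_factor)

    side = equivalent_square(10.0, 20.0)    # 13.3 cm equivalent square
    mu = monitor_units(dose_cgy=200.0, dref_cgy_per_mu=1.0,
                       sc=1.02, sp=1.01, tpr=0.78, wedge_factor=0.71)
    print(f"equivalent square {side:.1f} cm, {mu:.0f} MU")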

  14. Full Waveform Inversion Using an Energy-Based Objective Function with Efficient Calculation of the Gradient

    KAUST Repository

    Choi, Yun Seok

    2017-05-26

    Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long wavelength model information even without low frequency in the data. However, without the back-propagation method (adjoint-state method), its implementation is impractical for the model size of general seismic survey. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long wavelength structure model even without low-frequency information, which can be used as a good starting model for the subsequent conventional FWI.

  15. Activity-based Calculation Models for the Brazilian Air Force Cellular Unit of Intendancy

    Science.gov (United States)

    2013-03-01

  16. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    OpenAIRE

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

    The tilt and decentration of intraocular lens (IOL) result in defocussing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxation lens after operation, we fitted spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. By the established relationship between IOL tilt (decentrati...

  17. A calculation method for RF couplers design based on numerical simulation by microwave studio

    International Nuclear Information System (INIS)

    Wang Rong; Pei Yuanji; Jin Kai

    2006-01-01

    A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

  18. Calculation of microscopic exchange interactions and modelling of macroscopic magnetic properties in molecule-based magnets.

    Science.gov (United States)

    Novoa, J J; Deumal, M; Jornet-Somoza, J

    2011-06-01

    The state-of-the-art theoretical evaluation and rationalization of the magnetic interactions (J(AB)) in molecule-based magnets is discussed in this critical review, focusing first on isolated radical···radical pair interactions and afterwards on how these interactions cooperate in the solid phase. Concerning isolated radical pairwise magnetic interactions, an initial analysis is done on qualitative grounds, concentrating also on the validity of the most commonly used models to predict their size and angularity (namely, McConnell-I and McConnell-II models, overlap of magnetic orbitals,…). The failure of these models, caused by their oversimplified description of the magnetic interactions, prompted the introduction of quantitative approaches, whose basic principles and relative quality are also evaluated. Concerning the computation of magnetic interactions in solids, we resort to a sum of pairwise magnetic interactions within the Heisenberg Hamiltonian framework, and follow the First-principles Bottom-Up procedure, which allows the accurate study of the magnetic properties of any molecule-based magnet in an unbiased way. The basic principles of this approach are outlined, applied in detail to a model system, and finally demonstrated to properly describe the magnetic properties of molecule-based systems that show a variety of magnetic topologies, which range from 1D to 3D (152 references).
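
    The pairwise J(AB) values that enter such bottom-up studies are defined through the Heisenberg Hamiltonian; for two S = 1/2 radicals, H = -2 J(AB) S_A·S_B gives a singlet-triplet gap of exactly -2 J(AB). The sketch below verifies this by direct diagonalization (the J value is arbitrary; in practice J(AB) is extracted from computed open-shell energies, not assumed).

    import numpy as np

    J = -50.0  # exchange coupling, cm^-1 (illustrative, antiferromagnetic)

    sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    sy = 0.5 * np.array([[0, -1j], [1j, 0]])
    sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

    # H = -2 J (SxSx + SySy + SzSz) on the 4-dimensional two-spin space
    H = -2 * J * sum(np.kron(s, s) for s in (sx, sy, sz))
    E = np.sort(np.linalg.eigvalsh(H))
    E_singlet, E_triplet = E[0], E[-1]   # for J < 0 the singlet lies lowest
    print(f"E_T - E_S = {E_triplet - E_singlet:.1f} cm^-1 (expected -2J = {-2 * J:.1f})")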

  19. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    Science.gov (United States)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations that are used in the microscopic calculations of double-quantum-dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation can only worsen in dimensions higher than one.

  20. Thermodynamic properties calculation of the flue gas based on its composition estimation for coal-fired power plants

    International Nuclear Information System (INIS)

    Xu, Liang; Yuan, Jingqi

    2015-01-01

    Thermodynamic properties of the working fluid and the flue gas play an important role in the thermodynamic calculations for boiler design and operational optimization in power plants. In this study, a generic approach is proposed to calculate the thermodynamic properties of the flue gas online, based on an estimate of its composition. It covers the full operation scope of the flue gas, including the two-phase state when the temperature falls below the dew point. The composition of the flue gas is estimated online from the routine offline assays of the coal samples and the online measured oxygen mole fraction in the flue gas. The relative error of the proposed approach is found to be less than 1% when the standard data set of the dry and humid air and the typical flue gas is used for validation. Also, a sensitivity analysis of the individual components and the influence of the measurement error of the oxygen mole fraction on the thermodynamic properties of the flue gas are presented. - Highlights: • Flue gas thermodynamic properties in coal-fired power plants are calculated online. • Flue gas composition is estimated online using the measured oxygen mole fraction. • The proposed approach covers the full operation scope, including two-phase flue gas. • Component sensitivity to the thermodynamic properties of flue gas is presented.

  1. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    Energy Technology Data Exchange (ETDEWEB)

    Chen, C.L. [Faculty of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, 155, Li-Nong St., Section 2, Taipei 112, Taiwan (China); Wu, T.H. [Department of Medical Imaging and Radiological Sciences, Chung Shan Medical University, 110, Section 1, Chien-Kuo N. Road, Taichung 402, Taiwan (China); Cheng, M.C. [Faculty of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, 155, Li-Nong St., Section 2, Taipei 112, Taiwan (China); Huang, Y.H. [Faculty of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, 155, Li-Nong St., Section 2, Taipei 112, Taiwan (China); Sheu, C.Y. [Department of Radiology, Mackay Memorial Hospital, 92, Section 2, Chungshan North Road, Taipei 104, Taiwan (China); Hsieh, J.C. [Integrated Brain Research Unit, Taipei Veterans General Hospital, 201, Section 2, Shih-Pai Road, Taipei 112, Taiwan (China); Lee, J.S. [Faculty of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, 155, Li-Nong St., Section 2, Taipei 112, Taiwan (China)]. E-mail: jslee@ym.edu.tw

    2006-12-20

    Abacus-based mental calculation is a unique Chinese culture. The abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the brain activation differences between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. From the results, different ways of performing calculations between the two groups were seen. The experts tended to adopt efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on the virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, more involvement of the visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation and low level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming sequentially organized process was noted in the abacus expert group than in the non-expert group. We suggest that these findings may explain why abacus experts can reveal the exceptional computational skills compared to non-experts after intensive training.

  2. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    Science.gov (United States)

    Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.

    2006-12-01

    Abacus-based mental calculation is a unique Chinese culture. The abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the brain activation differences between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. From the results, different ways of performing calculations between the two groups were seen. The experts tended to adopt efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on the virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, more involvement of the visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation and low level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming sequentially organized process was noted in the abacus expert group than in the non-expert group. We suggest that these findings may explain why abacus experts can reveal the exceptional computational skills compared to non-experts after intensive training.

  3. Methodologie de conception numerique d'un ventilateur helico-centrifuge basee sur l'emploi du calcul meridien

    Science.gov (United States)

    Lallier-Daniels, Dominic

    Fan design is often based on a trial-and-error methodology of improving existing geometries, as well as on the design experience and the experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even in case of success, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the use of the meridional calculation for the preliminary design of mixed-flow turbomachines, and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method at the core of the proposed design process is presented first. The theoretical framework is developed; since the meridional calculation remains fundamentally an iterative process, the calculation procedure is also presented, including the numerical methods used to solve the fundamental equations. A validation of the meridional code written for this master's project against a meridional calculation algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code, is also carried out. The turbomachinery design methodology developed in this study is then presented in the form of a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed design of the blades, and finally a 3D numerical analysis for the validation and fine optimization of the geometry. The calculation results

  4. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted wasteform. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.

  5. Calculation of the information content of retrieval procedures applied to mass spectral data bases

    International Nuclear Information System (INIS)

    Marlen, G. van; Dijkstra, A.; Van't Klooster, H.A.

    1979-01-01

    A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity of the base peak, this typically results in an estimated information content of about 50 bits for 200 selected m/z values. It is shown that, because of errors occurring in the binary spectra, the actual information content is only about 12 bits. This explains the poor performance observed for retrieval systems with binary-coded mass spectra. (Auth.)
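
    The kind of estimate described can be reproduced in a few lines if one assumes independent m/z positions: the information content of a binary key is then the sum of per-position entropies computed from the reference file's peak probabilities. The random "library" below is a stand-in for a real reference file.

    import numpy as np

    rng = np.random.default_rng(1)
    library = rng.random((1000, 200)) < 0.15        # 1000 binary spectra, 200 m/z positions

    p = library.mean(axis=0).clip(1e-9, 1 - 1e-9)   # per-position peak probabilities
    bits = float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)).sum())
    print(f"estimated information content: {bits:.1f} bits")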

  6. Photon and electron data bases and their use in radiation transport calculations

    International Nuclear Information System (INIS)

    Cullen, D.E.; Perkins, S.T.; Seltzer, S.M.

    1992-01-01

    Traditionally, the data included in the ENDF/B photon interaction data base have been sufficient to describe the interaction of primary photons with matter. The data usually contained in this data base included: (1) cross sections: coherent and incoherent scattering, pair production as well as photoelectric absorption; and (2) form factors and scattering functions: to describe the angular distribution of coherently and incoherently scattered photons. These data were sufficient to describe the interaction of primary photons with matter. However, they were not adequate to uniquely define the emission of secondary photons following photoelectric effects such as fluorescence. Traditionally, it has been assumed that when a photoelectric event occurs, all of the energy of the incident photons is deposited at the point of the interaction. In fact, in the case of photons with energies near the K photoelectric edge of lead, almost 88% of the energy will be reradiated as fluorescence X rays. Traditional data also did not include the effect of anomalous scattering on coherent scattering. Including this effect predicts a coherent scattering cross section that approaches zero at low energy, as opposed to the constant low-energy limit predicted by simply using form factors. Lastly, traditional data did not differentiate between pair and triplet production

  7. Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations

    Science.gov (United States)

    Orellana, Walter

    The capture and conversion of solar energy into electricity is one of the most important challenges for the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins attract great attention due to their well-known absorption properties in the visible range. However, extended materials such as polymers with similar absorption properties are highly desirable. In this work, we investigate the stability, electronic and optical properties of polymeric nanostructures based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of time-dependent density functional perturbation theory. The aim of this work is the stability, electronic, and optical characterization of polymeric sheets and nanotubes obtained from H2P and H2Pc monomers. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different from those of the isolated molecules. However, the H2P and H2Pc nanotubes exhibit a wide absorption in the visible and near-UV range, with the largest peaks at 600 and 700 nm, respectively, suggesting good characteristics for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules are also discussed.

  8. Lattice and electronic properties of strongly correlated PuCoGa5 based on first principles calculations and thermodynamic modelling

    Science.gov (United States)

    Filanovich, A. N.; Povzner, A. A.

    2017-12-01

    In the framework of the density functional theory method, the ground state energy of the PuCoGa5 compound is calculated for different values of the unit cell volume. The obtained data were incorporated into a thermodynamic model, which was used to calculate the temperature dependences of the thermal and elastic properties of PuCoGa5. The parameters of the developed model were estimated based on the ab initio phonon spectrum. The Gruneisen parameters, which characterize the degree of anharmonicity of the acoustic and optical phonons, are obtained. Using experimental data, non-lattice contributions to the coefficient of thermal expansion and the heat capacity are determined. The nature of the observed anomalies in the properties of PuCoGa5 is discussed, in particular, the possibility of a valence phase transition.

  9. A method for calculation of forces acting on air cooled gas turbine blades based on the aerodynamic theory

    Directory of Open Access Journals (Sweden)

    Grković Vojin R.

    2013-01-01

    The paper presents the mathematical model and the procedure for calculating the resultant force acting on air cooled gas turbine blade(s), based on the aerodynamic theory and the computation of the circulation around the blade profile. The analysis examined the influence of the cooling air mass flow, expressed through the cooling air flow parameter λc, as well as of the values of the inlet and outlet angles β1 and β2, on the magnitudes of the tangential and axial forces. The procedure and analysis were exemplified by calculating the magnitudes of the tangential and axial forces. [Project of the Ministry of Science of the Republic of Serbia: Development and building of the demonstrative facility for combined heat and power with gasification]

  10. AxML: a fast program for sequential and parallel phylogenetic tree calculations based on the maximum likelihood method.

    Science.gov (United States)

    Stamatakis, Alexandros P; Ludwig, Thomas; Meier, Harald; Wolf, Marty J

    2002-01-01

    Heuristics for the NP-complete problem of calculating the optimal phylogenetic tree for a set of aligned rRNA sequences based on the maximum likelihood method are computationally expensive. In most existing algorithms, the tree evaluation and branch length optimization functions, which calculate the likelihood value for each tree topology examined in the search space, account for the greatest part of the overall computation time. This paper introduces AxML, a program derived from fastDNAml, incorporating a fast topology evaluation function. The algorithmic optimizations introduced represent a general approach for accelerating this function and are applicable to both sequential and parallel phylogeny programs, irrespective of their search space strategy. Their integration into three existing phylogeny programs therefore yielded encouraging results. Experimental results on conventional processor architectures show a global run time improvement of 35% up to 47% for the various test sets and program versions we used.
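
    The expensive inner step of such programs is the likelihood evaluation itself. A minimal stand-alone example, using the Jukes-Cantor model for a single pair of sequences (the sequences are invented), shows the shape of the computation: a per-branch-length log likelihood that the search must optimize over and over.

    import math

    s1 = "ACGTACGTACGTACGTACGT"
    s2 = "ACGTACGAACGTACTTACGT"
    n = len(s1)
    diffs = sum(a != b for a, b in zip(s1, s2))

    def log_likelihood(t):
        """log L(t): under Jukes-Cantor a site differs with probability 3/4 (1 - e^(-4t/3))."""
        p_diff = 0.75 * (1.0 - math.exp(-4.0 * t / 3.0))
        return diffs * math.log(p_diff / 3.0) + (n - diffs) * math.log(1.0 - p_diff)

    best_t = max((log_likelihood(k / 1000.0), k / 1000.0) for k in range(1, 2000))[1]
    t_closed = -0.75 * math.log(1.0 - 4.0 * (diffs / n) / 3.0)   # closed-form MLE
    print(f"grid-search ML branch length {best_t:.3f} vs closed form {t_closed:.3f}")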

  11. Relativistic mean-field interaction with density-dependent meson-nucleon vertices based on microscopical calculations

    International Nuclear Information System (INIS)

    Roca-Maza, X.; Vinas, X.; Centelles, M.; Ring, P.; Schuck, P.

    2011-01-01

    Although ab initio calculations of relativistic Brueckner theory lead to large scalar isovector fields in nuclear matter, at present, successful versions of covariant density functional theory neglect the interactions in this channel. A new high-precision density functional DD-MEδ is presented which includes four mesons, σ, ω, δ, and ρ, with density-dependent meson-nucleon couplings. It is based to a large extent on microscopic ab initio calculations in nuclear matter. Only four of its parameters are determined by adjusting to binding energies and charge radii of finite nuclei. The other parameters, in particular the density dependence of the meson-nucleon vertices, are adjusted to nonrelativistic and relativistic Brueckner calculations of symmetric and asymmetric nuclear matter. The isovector effective mass m*_p - m*_n derived from relativistic Brueckner theory is used to determine the coupling strength of the δ meson and its density dependence.

  12. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    Science.gov (United States)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g. C. Instead, we aim at achieving a scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread to CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU

  13. Effectiveness of variable-gain Kalman filter based on angle error calculated from acceleration signals in lower limb angle measurement with inertial sensors.

    Science.gov (United States)

    Teruyama, Yuta; Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, the variation in its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude that was used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy; it improved foot inclination angle measurement significantly and shank and thigh inclination angle measurements slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than that reported in other studies that fixed markers of a camera-based motion measurement system on a rigid plate together with a sensor, or directly on the sensor. The proposed method was found to be effective in angle measurement with inertial sensors.
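
    A one-dimensional sketch of the idea, with synthetic signals standing in for real sensor data: a Kalman filter integrates the gyro rate and corrects with the accelerometer-derived angle, and the measurement noise (hence the gain) is inflated when the acceleration-derived angle error is large. The noise levels and the gain schedule are assumptions for illustration, not the paper's tuning.

    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    true_angle = 0.5 * np.sin(2 * np.pi * 0.5 * t)
    rng = np.random.default_rng(0)
    gyro = np.gradient(true_angle, dt) + 0.02 * rng.standard_normal(t.size)
    acc_angle = true_angle + 0.05 * rng.standard_normal(t.size)   # accelerometer-derived angle

    x, P = 0.0, 1.0
    Q, R0 = 1e-5, 0.05 ** 2
    est = np.empty(t.size)
    for k in range(t.size):
        x += gyro[k] * dt                    # predict with the gyro
        P += Q
        err = abs(acc_angle[k] - x)          # acceleration-based angle error
        R = R0 * (1.0 + 100.0 * err ** 2)    # variable gain: trust the accelerometer less
        K = P / (P + R)
        x += K * (acc_angle[k] - x)
        P *= 1.0 - K
        est[k] = x
    print(f"RMS angle error: {np.sqrt(np.mean((est - true_angle) ** 2)):.4f} rad")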

  14. Actinide solubility in deep groundwaters - estimates for upper limits based on chemical equilibrium calculations

    International Nuclear Information System (INIS)

    Schweingruber, M.

    1983-12-01

    A chemical equilibrium model is used to estimate maximum upper concentration limits for some actinides (Th, U, Np, Pu, Am) in groundwaters. Eh/pH diagrams for solubility isopleths, dominant dissolved species and limiting solids are constructed for fixed parameter sets including temperature, thermodynamic database, ionic strength and total concentrations of most important inorganic ligands (carbonate, fluoride, phosphate, sulphate, chloride). In order to assess conservative conditions, a reference water is defined with high ligand content and ionic strength, but without competing cations. In addition, actinide oxides and hydroxides are the only solid phases considered. Recommendations for 'safe' upper actinide solubility limits for deep groundwaters are derived from such diagrams, based on the predicted Eh/pH domain. The model results are validated as far as the scarce experimental data permit. (Auth.)

  15. Calculation of critical fault recovery time for nonlinear systems based on region of attraction analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Blanke, Mogens

    2014-01-01

    In safety critical systems, the control system is composed of a core control system with a fault detection and isolation scheme together with a repair or a recovery strategy. The time that it takes to detect, isolate, and recover from the fault (fault recovery time) is a critical factor in the safety of a system. It must be guaranteed that the trajectory of a system subject to fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum of squares programming. The proposed algorithm is based on computation of the ROA of the recovered system and finite-time stability of the faulty system.
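
    The quantity being computed can be illustrated with a scalar toy problem in place of the paper's sum-of-squares machinery. For a post-fault system xdot = -x + x^3 the ROA of the origin is |x| < 1; if the fault makes the state grow as xdot = +x from x0, the critical recovery time is the longest fault duration for which the state is still inside that ROA (analytically ln(1/x0) here).

    import math

    x0 = 0.1        # pre-fault operating point (assumed)
    dt = 1e-4
    x, t = x0, 0.0
    while abs(x) < 1.0:   # integrate the faulty dynamics xdot = +x until the ROA boundary
        x += x * dt
        t += dt
    print(f"simulated critical recovery time {t:.3f} s vs analytic {math.log(1.0 / x0):.3f} s")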

  16. Challenges in calculating the bandgap of triazine-based carbon nitride structures

    KAUST Repository

    Steinmann, Stephan N.

    2017-02-08

    Graphitic carbon nitrides form a popular family of materials, particularly as photoharvesters in photocatalytic water splitting cells. Recently, relatively ordered g-C3N4 and g-C6N9H3 were characterized by X-ray diffraction and their ability to photogenerate excitons was subsequently estimated using density functional theory. In this study, the ability of triazine-based g-C3N4 and g-C6N9H3 to photogenerate excitons was studied using self-consistent GW computations followed by solving the Bethe–Salpeter Equation (BSE). In particular, monolayers, bilayers and 3D-periodic systems were characterized. The predicted optical band gaps are in the order of 1 eV higher than the experimentally measured ones, which is explained by a combination of shortcomings in the adopted model, small defects in the experimentally obtained structures and the particular nature of the experimental determination of the band gap.

  17. Method for calculating thermal properties of lightweight floor heating panels based on an experimental setup

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    Lightweight floor heating systems consist of a plastic tube connected to a heat distribution aluminium plate and are used in wooden floor constructions. The thermal properties of lightweight floor heating systems cannot be described accurately. The reason is a very complex interaction of convection, radiation and conduction in the heat transfer between pipe and surrounding materials. The European Standard for floor heating, EN1264, does not cover lightweight systems, while the supplemental Nordtest Method VVS127 is aimed at lightweight systems. The thermal properties can be found using tabulated values or experiments. Neither includes dynamic properties. This article describes a method to find steady-state and dynamical thermal properties in an experimental setup based on finding a characteristic thermal resistance between pipe and heat transfer plate, which can be directly implemented in a numerical...

  18. Site- and phase-selective x-ray absorption spectroscopy based on phase-retrieval calculation

    International Nuclear Information System (INIS)

    Kawaguchi, Tomoya; Fukuda, Katsutoshi; Matsubara, Eiichiro

    2017-01-01

    Understanding the chemical state of a particular element with multiple crystallographic sites and/or phases is essential to unlocking the origin of material properties. To this end, resonant x-ray diffraction spectroscopy (RXDS) achieved through a combination of x-ray diffraction (XRD) and x-ray absorption spectroscopy (XAS) techniques can allow for the measurement of diffraction anomalous fine structure (DAFS). This is expected to provide a peerless tool for electronic/local structural analyses of materials with complicated structures thanks to its capability to extract spectroscopic information about a given element at each crystallographic site and/or phase. At present, one of the major challenges for the practical application of RXDS is the rigorous determination of resonant terms from observed DAFS, as this requires somehow determining the phase change in the elastic scattering around the absorption edge from the scattering intensity. This is widely known in the field of XRD as the phase problem. The present review describes the basics of this problem, including the relevant background and theory for DAFS and a guide to a newly-developed phase-retrieval method based on the logarithmic dispersion relation that makes it possible to analyze DAFS without suffering from the intrinsic ambiguities of conventional iterative-fitting. Several matters relating to data collection and correction of RXDS are also covered, with a final emphasis on the great potential of powder-sample-based RXDS (P-RXDS) to be used in various applications relevant to practical materials, including antisite-defect-type electrode materials for lithium-ion batteries. (topical review)

  19. Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions

    International Nuclear Information System (INIS)

    Li, Jun; Yim, Man-Sung; McNelis, David N.

    2007-01-01

    explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three of the factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistical modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)

  20. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    Science.gov (United States)

    Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.

    2015-02-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
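
    The effect can be sketched with a purely geometric stand-in for the dosimetric criterion: compare the classic constant-sigma recipe of van Herk et al. (2.5 Sigma + 0.7 sigma) against a Monte Carlo margin in which each patient's random-error sigma is drawn from an inverse gamma distribution. The IG parameters and error magnitudes below are illustrative, not the cohort fits from the paper.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    n_pat = 10_000
    Sigma, sigma_mean = 2.0, 2.0                     # systematic / mean random error, mm
    mu = rng.normal(0.0, Sigma, n_pat)               # per-patient systematic error
    var_i = 3.0 * sigma_mean ** 2 / rng.gamma(4.0, 1.0, n_pat)  # IG(4, 3 sigma^2), mean sigma^2
    sigma_i = np.sqrt(var_i)

    norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

    def patients_covered(M):
        # per-patient probability that the displaced CTV stays within margin M each fraction
        p = norm_cdf((M - mu) / sigma_i) - norm_cdf((-M - mu) / sigma_i)
        return np.mean(p >= 0.95)    # fraction of patients meeting the 95%-of-fractions criterion

    M = next(m for m in np.arange(3.0, 12.01, 0.1) if patients_covered(m) >= 0.90)
    print(f"classic recipe {2.5 * Sigma + 0.7 * sigma_mean:.1f} mm, IG Monte Carlo {M:.1f} mm")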

  1. Molybdenum-99 production calculation analysis of SAMOP reactor based on thorium nitrate fuel

    Science.gov (United States)

    Syarip; Togatorop, E.; Yassar

    2018-03-01

    SAMOP (Subcritical Assembly for Molybdenum-99 Production) has the potential to use thorium as fuel to produce 99Mo after design modification, but its production performance has not yet been determined. A study is needed to obtain the correlation between 99Mo production, the composition of the mixed uranium-thorium fuel, and the SAMOP power in the modified design. The study aims to obtain the 99Mo production of the modified SAMOP design based on thorium nitrate fuel. Monte Carlo N-Particle eXtended (MCNPX) is used to simulate the operation of the assembly by varying the composition of the uranium-thorium nitrate mixed fuel, the geometry, and the power fraction in the modified SAMOP designs. The burnup command in MCNPX is used to confirm the 99Mo production result. The assembly is simulated to operate for 6 days with a subcritical neutron multiplication factor (keff = 0.97-0.99). The neutron multiplication factor of the modified design is keff = 0.97, and the 99Mo activity obtained is 18.58 Ci at 1 kW power operation.
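
    The reported end-of-irradiation activity can be cross-checked with the standard saturation buildup relation A(t) = R (1 - e^(-lambda t)) and the 99Mo half-life of about 66 h. Here the production rate R is back-calculated from the reported 18.58 Ci so the numbers are self-consistent; it is not an independent estimate.

    import math

    half_life_h = 65.9                      # 99Mo half-life, hours
    lam = math.log(2.0) / half_life_h
    t_h = 6 * 24                            # 6-day irradiation, hours
    buildup = 1.0 - math.exp(-lam * t_h)    # fraction of saturation reached
    A_end = 18.58                           # Ci, from the abstract
    R_sat = A_end / buildup                 # implied saturation activity, Ci
    print(f"buildup fraction {buildup:.3f}, implied saturation activity {R_sat:.1f} Ci")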

  2. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
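
    The key idea, sampling an electron's post-step state from pre-computed PDFs rather than simulating every collision, can be sketched with simple inverse-transform sampling from a tabulated distribution. The energy-loss grid below is invented for illustration; in the RMMC approach it would come from an analog Monte Carlo pre-calculation over a small region.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical tabulated response PDF: probability of each energy-loss
      # bin after an electron traverses one small region.
      energy_loss_kev = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      pdf = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
      cdf = np.cumsum(pdf)

      def sample_energy_loss(n):
          # Inverse-transform sampling on the tabulated CDF.
          u = rng.random(n)
          return energy_loss_kev[np.searchsorted(cdf, u)]

      samples = sample_energy_loss(100000)
      print(f"mean energy loss per region: {samples.mean():.3f} keV")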

  3. Global land surface climate analysis based on the calculation of a modified Bowen ratio

    Science.gov (United States)

    Han, Bo; Lü, Shihua; Li, Ruiqing; Wang, Xin; Zhao, Lin; Zhao, Cailing; Wang, Danyun; Meng, Xianhong

    2017-05-01

    A modified Bowen ratio (BRm), the sign of which is determined by the direction of the surface sensible heat flux, was used to represent the major divisions in climate across the globe, and the usefulness of this approach was evaluated. Five reanalysis datasets and the results of an offline land surface model were investigated. We divided the global continents into five major BRm zones using the climatological means of the sensible and latent heat fluxes during the period 1980-2010: extremely cold, extremely wet, semi-wet, semi-arid and extremely arid. These zones had BRm ranges of (-∞, 0), (0, 0.5), (0.5, 2), (2, 10) and (10, +∞), respectively. The climatological mean distribution of the Bowen ratio zones corresponded well with the Köppen-like climate classification, and it reflected well the seasonal variation of each climate subdivision. The features of climate change over the mean climatological BRm zones were also investigated. In addition to giving a map-like classification of climate, the BRm also reflects temporal variations in different climatic zones based on land surface processes. An investigation of the coverage of the BRm zones showed that the extremely wet and extremely arid regions expanded, whereas a reduction in area was seen for the semi-wet and semi-arid regions in boreal spring during the period 1980-2010. This indicates that the arid regions may have become drier and the wet regions wetter over this period of time.
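
    A direct transcription of the published zone boundaries is straightforward; the sketch below only encodes the BRm intervals quoted above, assuming a positive latent heat flux so that the sign of BRm follows the sensible heat flux.

      def bowen_zone(h_flux, le_flux):
          """Classify a location by the modified Bowen ratio BRm = H / LE.
          Assumes LE > 0, so the sign of BRm is carried by the sensible
          heat flux H (both fluxes in W m^-2)."""
          brm = h_flux / le_flux if le_flux != 0 else float("inf")
          if brm < 0:
              return "extremely cold"
          if brm < 0.5:
              return "extremely wet"
          if brm < 2:
              return "semi-wet"
          if brm < 10:
              return "semi-arid"
          return "extremely arid"

      print(bowen_zone(-5.0, 40.0))   # extremely cold
      print(bowen_zone(60.0, 20.0))   # semi-arid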

  4. Enzymatic logic calculation systems based on solid-state electrochemiluminescence and molecularly imprinted polymer film electrodes.

    Science.gov (United States)

    Lian, Wenjing; Liang, Jiying; Shen, Li; Jin, Yue; Liu, Hongyun

    2018-02-15

    The molecularly imprinted polymer (MIP) films were electropolymerized on the surface of Au electrodes with luminol and pyrrole (PY) as the two monomers and ampicillin (AM) as the template molecule. The electrochemiluminescence (ECL) intensity peak of polyluminol (PL) of the AM-free MIP films at 0.7 V vs Ag/AgCl could be greatly enhanced by AM rebinding. In addition, the ECL signals of the MIP films could also be enhanced by the addition of glucose oxidase (GOD)/glucose and/or ferrocenedicarboxylic acid (Fc(COOH)2) in the testing solution. Moreover, Fc(COOH)2 exhibited a cyclic voltammetric (CV) response at the AM-free MIP film electrodes. Based on these results, a binary 3-input/6-output biomolecular logic gate system was established with AM, GOD and Fc(COOH)2 as inputs and the ECL responses at different levels and the CV signal as outputs. Some functional non-Boolean logic devices such as an encoder, a decoder and a demultiplexer were also constructed on the same platform. In particular, on the basis of the same system, a ternary AND logic gate was established. The present work combined MIP film electrodes, solid-state ECL, and enzymatic reactions, and various types of biomolecular logic circuits and devices were developed, opening a novel avenue to the construction of more complicated bio-logic gate systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A New Sensorless MRAS Based on Active Power Calculations for Rotor Position Estimation of a DFIG

    Directory of Open Access Journals (Sweden)

    Gil Domingos Marques

    2011-01-01

    Full Text Available A sensorless method for estimating the rotor position of the wound-rotor induction machine is described in this paper. The method is based on the MRAS methodology and consists in the comparison of two models for evaluating the active power transferred across the air gap: a reference model and an adaptive model. The reference model obtains the power transferred across the air gap using directly available and measured stator variables. The adaptive model obtains the same quantity as a function of electromotive forces and rotor currents, which are measurable, and of the rotor position, which is under estimation. The method does not need any information about the stator or rotor flux and can be implemented in the rotor or in the stator reference frame with a hysteresis or with a PI controller. The stability analysis gives an unstable region in the rotor current dq plane. Simulation and experimental results show that the method is appropriate for the vector control of the doubly fed induction machine within the stability region.
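
    The MRAS principle, adjusting the estimated angle with a PI law until the adaptive model's output matches the reference model's, can be caricatured with a scalar toy loop. The plant below is a stand-in, not the DFIG air-gap power equations, and all gains are arbitrary.

      import numpy as np

      theta_true = 0.8                # actual rotor angle (rad), unknown to observer
      p_ref = np.cos(theta_true)      # 'measured' quantity from the reference model

      theta_hat, integ = 0.0, 0.0
      kp, ki, dt = 2.0, 20.0, 1e-3

      for _ in range(5000):
          p_adaptive = np.cos(theta_hat)       # adaptive model using estimated angle
          err = p_ref - p_adaptive             # MRAS error signal
          integ += err * dt
          # PI adaptation law drives the error to zero.
          theta_hat -= kp * err + ki * integ

      print(f"estimated angle: {theta_hat:.3f} rad (true {theta_true} rad)")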

  6. Novel Displacement Agents for Aqueous 2-Phase Extraction Can Be Estimated Based on Hybrid Shortcut Calculations.

    Science.gov (United States)

    Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph

    2016-10-01

    The purification of therapeutic proteins is a challenging task with an immediate need for optimization. Among other techniques, aqueous 2-phase extraction (ATPE) of proteins has been shown to be a promising alternative to cost-intensive state-of-the-art chromatographic protein purification. To enable a selective extraction, protein partitioning generally has to be influenced using a displacement agent that isolates the target protein from the impurities. In this work, a new displacement agent (lithium bromide [LiBr]) allowing for the selective separation of the target protein IgG from human serum albumin (representing the impurity) within a citrate-polyethylene glycol (PEG) aqueous 2-phase system (ATPS) is presented. In order to characterize the displacement suitability of LiBr toward IgG, the mutual influence of LiBr and the phase formers on the ATPS and on partitioning is investigated. Using osmotic virial coefficients (B22 and B23) accessible by composition-gradient multi-angle light-scattering measurements, the precipitating effect of LiBr on both proteins is characterized and both protein partition coefficients are estimated. The stabilizing effect of LiBr on both proteins was estimated based on B22 and experimentally validated within the citrate-PEG ATPS. Our approach contributes to an efficient implementation of ATPE within the downstream processing development of therapeutic proteins. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

   7. A tessellation-based model for intensity estimation and laser-plasma interaction calculations in three dimensions

    Science.gov (United States)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and the risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  8. Model based multi-wavelength spectrophotometric method for calculation of formation constants of phenanthrenequinone thiosemicarbazone complexes with some metallic cations

    Directory of Open Access Journals (Sweden)

    Naser Samadi

    2013-04-01

    Full Text Available In the traditional spectrophotometric determination of complexation stability constants, it is necessary to find a wavelength at which only one of the components absorbs, without spectroscopic interference from the other reaction components. In the present work, a simple multi-wavelength model-based method has been developed to determine stability constants of complexation reactions regardless of the spectral overlap of the components. The pure spectra and concentration profiles of all components are also extracted using the multi-wavelength model-based method. Spectrophotometric titrations of several cationic metal ions with a newly synthesized ligand were studied in order to calculate the formation constant(s). To estimate the formation constants, a chemometric method, model-based analysis, was applied.
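
    As a generic illustration of model-based multi-wavelength fitting (not the authors' algorithm), the sketch below recovers a 1:1 formation constant by separable least squares on synthetic titration spectra: for a trial K the concentrations follow from mass balance, the pure spectra follow linearly, and K is tuned to minimize the residual. All species, spectra, and concentrations are invented.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)

      # Synthetic 1:1 titration M + L <-> ML with true log10(K) = 4.
      K_true = 1e4
      M_tot = 1e-4
      L_tot = np.linspace(0.0, 3e-4, 15)

      def ml_conc(K, m_tot, l_tot):
          # Solve K = [ML] / ([M][L]) with mass balances (quadratic root).
          b = m_tot + l_tot + 1.0 / K
          return 0.5 * (b - np.sqrt(b * b - 4.0 * m_tot * l_tot))

      wavelengths = np.linspace(400, 600, 50)
      eps_m = np.exp(-((wavelengths - 450) / 40.0) ** 2) * 800
      eps_ml = np.exp(-((wavelengths - 520) / 30.0) ** 2) * 1500

      ml = ml_conc(K_true, M_tot, L_tot)
      C_true = np.column_stack([M_tot - ml, ml])           # [M], [ML]
      A = C_true @ np.vstack([eps_m, eps_ml])
      A += rng.normal(0.0, 2e-4, A.shape)                  # measurement noise

      def misfit(log10_k):
          # Separable least squares: concentrations from K, spectra linearly.
          ml = ml_conc(10.0 ** log10_k, M_tot, L_tot)
          C = np.column_stack([M_tot - ml, ml])
          S, *_ = np.linalg.lstsq(C, A, rcond=None)        # fitted pure spectra
          return np.sum((A - C @ S) ** 2)

      res = minimize_scalar(misfit, bounds=(2.0, 6.0), method="bounded")
      print(f"fitted log10(K) = {res.x:.2f} (true 4.00)")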

  9. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    Energy Technology Data Exchange (ETDEWEB)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H., E-mail: Ahmed.hussienabdelazim@hotmil.com

    2016-04-01

    Computational study has been done electronically and geometrically to select the most suitable ionophore to design a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study has revealed that sodium tetraphenylborate (NaTPB) fits better with PAP than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10⁻²–1.0 × 10⁻⁵ M with a detection limit of 8.5 × 10⁻⁶ M. The sensor exhibits a very good selectivity for PAP with respect to a large number of interfering species such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied for the selective determination of PAP in a pharmaceutical formulation. Also, the obtained results have been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigating the degradation pathway of phenazopyridine with enough confirmation scans. • To avoid time-consuming experimental trials, computational studies have been applied. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.

  10. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    International Nuclear Information System (INIS)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H.

    2016-01-01

    Computational study has been done electronically and geometrically to select the most suitable ionophore to design a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study has revealed that sodium tetraphenylborate (NaTPB) fits better with PAP than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10⁻²–1.0 × 10⁻⁵ M with a detection limit of 8.5 × 10⁻⁶ M. The sensor exhibits a very good selectivity for PAP with respect to a large number of interfering species such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied for the selective determination of PAP in a pharmaceutical formulation. Also, the obtained results have been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigating the degradation pathway of phenazopyridine with enough confirmation scans. • To avoid time-consuming experimental trials, computational studies have been applied. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.

  11. Film based verification of calculation algorithms used for brachytherapy planning-getting ready for upcoming challenges of MBDCA

    Directory of Open Access Journals (Sweden)

    Grzegorz Zwierzchowski

    2016-08-01

    Full Text Available Purpose: A well-known defect of the TG-43 based algorithms used in brachytherapy is the lack of information about interaction cross-sections, which are determined not only by electron density but also by atomic number. The TG-186 recommendations, with their use of model-based dose calculation algorithms (MBDCAs), accurate tissue segmentation, and the structures' elemental composition, continue to create difficulties in brachytherapy dosimetry. For the clinical use of the new algorithms, it is necessary to introduce reliable and repeatable methods of treatment planning system (TPS) verification. The aim of this study is the verification of the calculation algorithm used in the TPS for shielded vaginal applicators, as well as the development of verification procedures for current and further use, based on the film dosimetry method. Material and methods: Calibration data were collected by separately irradiating 14 sheets of Gafchromic® EBT films with doses from 0.25 Gy to 8.0 Gy using an HDR 192Ir source. Standard vaginal cylinders of three diameters were used in a water phantom. Measurements were performed without any shields and with three shield combinations. Gamma analyses were performed using the VeriSoft® package. Results: The calibration curve was determined as a third-degree polynomial. For all cylinder diameters without shielding and for all shield combinations, gamma analyses showed that over 90% of the analyzed points meet the gamma criteria (3%, 3 mm). Conclusions: The gamma analysis showed good agreement between dose distributions calculated using the TPS and measured by Gafchromic films, thus showing the viability of using film dosimetry in brachytherapy.
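
    The gamma analysis mentioned above compares a measured and a calculated dose map point by point. A minimal 1D version with the (3%, 3 mm) criteria is sketched below, using a brute-force search and global dose normalization; VeriSoft's actual implementation is more elaborate, and the profiles here are synthetic.

      import numpy as np

      def gamma_pass_rate(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
          """1D global gamma: dd = dose criterion (fraction of max dose),
          dta = distance-to-agreement criterion (mm)."""
          d_norm = dd * dose_ref.max()
          gammas = np.empty_like(dose_ref)
          for i, (xi, di) in enumerate(zip(x, dose_ref)):
              dist2 = ((x - xi) / dta) ** 2
              dose2 = ((dose_eval - di) / d_norm) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return np.mean(gammas <= 1.0)

      x = np.arange(0.0, 100.0, 1.0)                    # positions in mm
      dose_tps = np.exp(-((x - 50.0) / 20.0) ** 2)      # calculated profile
      dose_film = dose_tps * 1.02 + 0.005               # measured, small offset
      print(f"gamma pass rate: {gamma_pass_rate(dose_tps, dose_film, x):.1%}")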

  12. A robust cooperative spectrum sensing scheme based on Dempster-Shafer theory and trustworthiness degree calculation in cognitive radio networks

    Science.gov (United States)

    Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru

    2014-12-01

    Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also preserve the current difference for each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms existing ones under the impact of different attack patterns and different numbers of malicious SUs.
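
    Dempster's rule of combination, which the final step above applies, fuses basic probability assignments (BPAs) from independent SUs. The minimal sketch below combines two BPAs over the hypotheses H0 (channel idle) and H1 (channel busy) with the usual conflict normalization; the mass values are illustrative.

      def dempster_combine(m1, m2):
          """Combine two BPAs whose focal elements are frozensets."""
          combined, conflict = {}, 0.0
          for a, p in m1.items():
              for b, q in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + p * q
                  else:
                      conflict += p * q
          # Normalize by the non-conflicting mass.
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      H0, H1 = frozenset({"H0"}), frozenset({"H1"})
      BOTH = H0 | H1                      # ignorance mass

      m_su1 = {H1: 0.6, H0: 0.1, BOTH: 0.3}
      m_su2 = {H1: 0.5, H0: 0.2, BOTH: 0.3}
      fused = dempster_combine(m_su1, m_su2)
      for k in (H1, H0, BOTH):
          print(sorted(k), round(fused.get(k, 0.0), 3))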

  13. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    Science.gov (United States)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  14. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    Energy Technology Data Exchange (ETDEWEB)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  15. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results using ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g., NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.

  16. Boron neutron capture therapy design calculation of a 3H(p,n) reaction based BSA for brain cancer setup

    Directory of Open Access Journals (Sweden)

    Bassem Elshahat

    2015-09-01

    Full Text Available Purpose: Boron neutron capture therapy (BNCT) is a promising technique for the treatment of malignant disease in targeted organs of the human body. Monte Carlo simulations were carried out to calculate optimum design parameters of an accelerator-based beam shaping assembly (BSA) for a BNCT brain cancer setup. Methods: An epithermal neutron beam was obtained through moderation of fast neutrons from the 3H(p,n) reaction in a high-density polyethylene moderator and a graphite reflector. The dimensions of the moderator and the reflector were optimized by optimizing the epithermal/fast neutron intensity ratio as a function of the geometric parameters of the setup. Results: The results of our calculation showed the capability of our setup to treat tumors within 4 cm of the head surface. The calculated peak therapeutic ratio for the setup was found to be 2.15. Conclusion: With further improvement in the polyethylene moderator design and the brain phantom irradiation arrangement, the setup capabilities can be improved to reach deeper-seated tumors.

  17. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
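
    A toy illustration of the Gibbs sampler idea on a two-state harmonic system (not the authors' implementation): the chain alternates between sampling the coordinate given λ and sampling λ given the coordinate, and the Rao-Blackwell estimator averages the conditional λ probabilities rather than the noisier sampled indicators. The exact answer for this toy model is known, so the estimate can be checked.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy alchemical system: two harmonic 'ligand states' with spring
      # constants k0, k1 (units of kT = 1). Exact dF = 0.5 * ln(k1 / k0).
      k = np.array([1.0, 4.0])

      lam, p1_sum, n_steps = 0, 0.0, 200000
      for _ in range(n_steps):
          # Gibbs step 1: sample x from p(x | lam), a Gaussian.
          x = rng.normal(0.0, 1.0 / np.sqrt(k[lam]))
          # Gibbs step 2: sample lam from p(lam | x) ~ exp(-U_lam(x)).
          w = np.exp(-0.5 * k * x * x)
          p = w / w.sum()
          lam = rng.choice(2, p=p)
          # Rao-Blackwell estimator: accumulate the conditional probability
          # of state 1 instead of the sampled indicator.
          p1_sum += p[1]

      pi1 = p1_sum / n_steps
      dF = -np.log(pi1 / (1.0 - pi1))
      print(f"estimated dF = {dF:.3f}, exact = {0.5 * np.log(k[1] / k[0]):.3f}")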

  18. Dynamics study of the OH + NH3 hydrogen abstraction reaction using QCT calculations based on an analytical potential energy surface.

    Science.gov (United States)

    Monge-Palacios, M; Corchado, J C; Espinosa-Garcia, J

    2013-06-07

    To understand the reactivity and mechanism of the OH + NH3 → H2O + NH2 gas-phase reaction, which evolves through wells in the entrance and exit channels, a detailed dynamics study was carried out using quasi-classical trajectory calculations. The calculations were performed on an analytical potential energy surface (PES) recently developed by our group, PES-2012 [Monge-Palacios et al. J. Chem. Phys. 138, 084305 (2013)]. Most of the available energy appeared as H2O product vibrational energy (54%), reproducing the only experimental evidence, while only 21% of this energy appeared as NH2 co-product vibrational energy. Both products appeared with cold and broad rotational distributions. The excitation function (constant collision energy in the range 1.0-14.0 kcal mol(-1)) increases smoothly with energy, contrasting with the only theoretical information available (reduced-dimensional quantum scattering calculations based on a simplified PES), which presented a peak at low collision energies, related to quantized states. Analysis of the individual reactive trajectories showed that different mechanisms operate depending on the collision energy. Thus, while at high energies (E(coll) ≥ 6 kcal mol(-1)) all trajectories are direct, at low energies about 20%-30% of trajectories are indirect, i.e., with the mediation of a trapping complex, mainly in the product well. Finally, the effect of the zero-point energy constraint on the dynamical properties was analyzed.

  19. Multi-scale calculation of the electric properties of organic-based devices from the molecular structure

    KAUST Repository

    Li, Haoyuan

    2016-03-24

    A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using the snapshots from the dynamic trajectory of the simulated molecular system. Kinetic Monte Carlo simulations are carried out to calculate the current characteristics. A widely used hole-transporting material, N,N′-diphenyl-N,N′-bis(1-naphthyl)-1,1′-biphenyl-4,4′-diamine (NPB) is studied as an application of this method, and the properties of its hole-only device are investigated. The calculated current densities and dependence on the applied voltage without an injection barrier are close to those obtained by the Mott-Gurney equation. The results with injection barriers are also in good agreement with experiment. This method can be used to aid the design of molecules and guide the optimization of devices. © 2016 Elsevier B.V. All rights reserved.
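
    The Mott-Gurney comparison invoked above is easy to reproduce: for a trap-free, barrier-free single-carrier device, J = (9/8)·ε·μ·V²/L³. The NPB-like parameter values below are placeholders for illustration, not the values computed in the paper.

      EPS0 = 8.854e-12          # vacuum permittivity, F/m

      def mott_gurney_j(v_volts, mu, eps_r, thickness_m):
          """Space-charge-limited current density (A/m^2)."""
          return 9.0 * EPS0 * eps_r * mu * v_volts ** 2 / (8.0 * thickness_m ** 3)

      # Assumed illustrative values for an NPB-like hole-only device.
      mu = 3e-8                 # hole mobility, m^2/(V s)
      j = mott_gurney_j(v_volts=5.0, mu=mu, eps_r=3.0, thickness_m=100e-9)
      print(f"J = {j / 10.0:.2f} mA/cm^2")   # 1 A/m^2 = 0.1 mA/cm^2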

  20. Trans Hoogsteen/sugar edge base pairing in RNA. Structures, energies, and stabilities from quantum chemical calculations

    Czech Academy of Sciences Publication Activity Database

    Mládek, Arnošt; Sharma, P.; Mitra, A.; Bhattacharyya, D.; Šponer, Jiří; Šponer, Judit E.

    2009-01-01

    Vol. 113, No. 6 (2009), pp. 1743-1755 ISSN 1520-6106 R&D Projects: GA AV ČR(CZ) IAA400550701; GA AV ČR(CZ) IAA400040802; GA AV ČR(CZ) 1QS500040581; GA MŠk(CZ) LC06030 Grant - others: GA MŠk(CZ) LC512 Program: LC Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702; CEZ:AV0Z40550506 Keywords: quantum chemical calculations * base pairing * RNA Subject RIV: BO - Biophysics Impact factor: 3.471, year: 2009

  1. Validation of LWR calculation methods and JEF-1 based data libraries by TRX and BAPL critical experiments

    International Nuclear Information System (INIS)

    Pelloni, S.; Grimm, P.; Mathews, D.; Paratte, J.M.

    1989-06-01

    In this report the capability of various code systems widely used at PSI (such as WIMS-D, BOXER, and the AARE modules TRAMIX and MICROX-2 in connection with the one-dimensional transport code ONEDANT) and JEF-1 based nuclear data libraries to compute LWR lattices is analysed by comparing results from thermal reactor benchmarks TRX and BAPL with experiment and with previously published values. It is shown that with the JEF-1 evaluation eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and that all methods give reasonable results for the measured reaction rate within or not too far from the experimental uncertainty. This is consistent with previous similar studies. (author) 7 tabs., 36 refs

  2. Optical Coherence Tomography–Based Corneal Power Measurement and Intraocular Lens Power Calculation Following Laser Vision Correction (An American Ophthalmological Society Thesis)

    Science.gov (United States)

    Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.

    2013-01-01

    Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323

  3. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very

   4. Development of a micro-depletion model to use WIMS properties in history-based local-parameter calculations in RFSP

    International Nuclear Information System (INIS)

    Shen, W.

    2004-01-01

    A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR fuel. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation. The relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)

  5. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  6. Compact All-optical Parity calculator based on a single all-active Mach-Zehnder Interferometer with an all-SOA amplified feedback

    DEFF Research Database (Denmark)

    Nielsen, Mads Lønstrup; Petersen, Martin Nordal; Nord, Martin

    2003-01-01

    An all-optical signal processing circuit capable of parity calculations is demonstrated using a single integrated all-active SOA-based MZI, exploiting the integrated SOAs for feedback amplification.

  7. Fast and scalable prediction of local energy at grain boundaries: machine-learning based modeling of first-principles calculations

    Science.gov (United States)

    Tamura, Tomoyuki; Karasuyama, Masayuki; Kobayashi, Ryo; Arakawa, Ryuichi; Shiihara, Yoshinori; Takeuchi, Ichiro

    2017-10-01

    We propose a new scheme based on machine learning for the efficient screening in grain-boundary (GB) engineering. A set of results obtained from first-principles calculations based on density functional theory (DFT) for a small number of GB systems is used as a training data set. In our scheme, by partitioning the total energy into atomic energies using a local-energy analysis scheme, we can increase the training data set significantly. We use atomic radial distribution functions and additional structural features as atom descriptors to predict atomic energies and GB energies simultaneously using the least absolute shrinkage and selection operator (LASSO), which is a recent standard regression technique in statistical machine learning. In the test study with fcc-Al [110] symmetric tilt GBs, we could achieve enough predictive accuracy to understand energy changes at and near GBs at a glance, even if we collected training data from only 10 GB systems. The present scheme can emulate time-consuming DFT calculations for large GB systems with negligible computational costs, and thus enable the fast screening of possible alternative GB systems.
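
    The regression step can be sketched generically: predict per-atom energies from structural descriptors with LASSO, which zeroes out irrelevant features. The synthetic descriptors and coefficients below are stand-ins, not the radial-distribution features of the paper.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)

      # Synthetic stand-in: predict atomic energies from structural
      # descriptors (e.g. radial-distribution features) with LASSO.
      n_atoms, n_features = 400, 60
      X = rng.normal(size=(n_atoms, n_features))          # atom descriptors
      w_true = np.zeros(n_features)
      w_true[:5] = [0.8, -0.5, 0.3, 0.2, -0.1]            # few relevant features
      e_atomic = X @ w_true + rng.normal(0.0, 0.01, n_atoms)

      model = Lasso(alpha=0.01).fit(X, e_atomic)
      print("non-zero coefficients:", np.sum(model.coef_ != 0))

      # A GB energy would follow by summing the predicted atomic energies
      # of atoms at/near the boundary, relative to the bulk reference.
      print("example atomic-energy prediction:", model.predict(X[:1])[0])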

  8. A Methodology for Calculating EGS Electricity Generation Potential Based on the Gringarten Model for Heat Extraction From Fractured Rock

    Energy Technology Data Exchange (ETDEWEB)

    Augustine, Chad

    2017-05-01

    Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving the minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications for reservoir design are discussed.
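
    The heat-in-place bookkeeping that such recovery factors multiply can be shown in a few lines; this is not the Gringarten solution itself, and every number below (rock properties, recovery factor, conversion efficiency, lifetime) is an assumption chosen only to illustrate the order of magnitude.

      # Heat-in-place electricity potential for 1 km^3 of rock.
      RHO_C = 2.5e6        # volumetric heat capacity of rock, J m^-3 K^-1
      VOLUME = 1.0e9       # m^3 (one cubic kilometre)
      T_ROCK, T_REJECT = 200.0, 80.0      # degC
      RECOVERY = 0.20      # assumed fraction of heat-in-place extracted
      ETA = 0.12           # assumed thermal-to-electric efficiency
      LIFETIME_S = 30 * 365.25 * 24 * 3600

      heat_j = RHO_C * VOLUME * (T_ROCK - T_REJECT) * RECOVERY
      power_mwe = heat_j * ETA / LIFETIME_S / 1e6
      print(f"average electric potential: {power_mwe:.1f} MWe per km^3")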

  9. Ab initio electron propagator calculations of transverse conduction through DNA nucleotide bases in 1-nm nanopore corroborate third generation sequencing.

    Science.gov (United States)

    Kletsov, Aleksey A; Glukhovskoy, Evgeny G; Chumakov, Aleksey S; Ortiz, Joseph V

    2016-01-01

    The conduction properties of the DNA molecule, particularly its transverse conductance (electron transfer through nucleotide bridges), represent a point of interest for the DNA chemistry community, especially for DNA sequencing. However, there is no fully developed first-principles theory for molecular conductance and current that allows one to analyze the transverse flow of electrical charge through a nucleotide base. We theoretically investigate the transverse electron transport through all four DNA nucleotide bases by implementing an unbiased ab initio theoretical approach, namely, electron propagator theory. The electrical conductance and current through DNA nucleobases (guanine [G], cytosine [C], adenine [A] and thymine [T]) inserted into a model 1-nm Ag-Ag nanogap are calculated. The magnitudes of the calculated conductance and current are ordered in the following hierarchies: gA>gG>gC>gT and IG>IA>IT>IC, respectively. A new distinguishing parameter for nucleobase identification is proposed, namely, the onset bias magnitude. The nucleobases exhibit a distinct hierarchy with respect to this parameter, with Vonset(A) the smallest, which can be exploited during DNA translocation through an electrode-equipped nanopore. The results are of interest to theorists and practitioners in the field of third-generation sequencing techniques as well as in the field of DNA chemistry. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Power Flow Calculation for Weakly Meshed Distribution Networks with Multiple DGs Based on Generalized Chain-table Storage Structure

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2014-01-01

    Based on a generalized chain-table storage structure (GCTSS), a novel power flow method is proposed, which can be used to solve the power flow of weakly meshed distribution networks with multiple distributed generators (DGs). GCTSS is designed based on chain-table technology, and its target is to describe the topology of radial distribution networks with a clear logic and a small memory size. The strategies of compensating the equivalent currents of break-point branches and the reactive power outputs of PV-type DGs are presented on the basis of the superposition theorem. Validation was done on a modified version of the IEEE 69-bus distribution system. The results verify that the proposed method can keep a good efficiency level. Hence, it is promising for calculating the power flow of weakly meshed distribution networks with multiple DGs.
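
    The radial backbone such methods build on is the backward/forward sweep; the break-point compensation and PV-type DG handling described above are omitted in this minimal sketch of a 4-bus per-unit feeder, whose impedances and loads are invented.

      import numpy as np

      # Minimal backward/forward sweep on a 4-bus radial feeder (per-unit).
      # parent[i] gives the upstream bus of bus i; bus 0 is the slack.
      parent = {1: 0, 2: 1, 3: 2}
      z = {1: 0.02 + 0.04j, 2: 0.03 + 0.06j, 3: 0.02 + 0.05j}   # branch impedances
      s_load = {1: 0.10 + 0.05j, 2: 0.08 + 0.04j, 3: 0.06 + 0.03j}

      v = {i: 1.0 + 0.0j for i in range(4)}
      for _ in range(20):
          # Backward sweep: accumulate branch currents from leaves to root.
          i_inj = {i: np.conj(s_load[i] / v[i]) for i in s_load}
          i_branch = {b: 0.0j for b in z}
          for b in sorted(z, reverse=True):          # leaves first on this chain
              i_branch[b] += i_inj[b]
              if parent[b] in i_branch:
                  i_branch[parent[b]] += i_branch[b]
          # Forward sweep: update voltages from root to leaves.
          for b in sorted(z):
              v[b] = v[parent[b]] - z[b] * i_branch[b]

      for b in range(4):
          print(f"bus {b}: |V| = {abs(v[b]):.4f} pu")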

  11. Highlights from the previous volumes

    Science.gov (United States)

    Vergini, Eduardo G.; Pan, Y.; Vardi, R., et al.; Akkermans, Eric, et al.; et al.

    2014-01-01

    Semiclassical propagation up to the Heisenberg time; Superconductivity and magnetic order in the half-Heusler compound ErPdBi; An experimental evidence-based computational paradigm for new logic-gates in neuronal activity; Universality in the symmetric exclusion process and diffusive systems.

  12. Clinical impact of dosimetric changes for volumetric modulated arc therapy in log file-based patient dose calculations.

    Science.gov (United States)

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2017-10-01

    A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, the clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were used. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for the leaf, gantry, and collimator at the general tolerance level were ±0.5 mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linacs were ±0.3 mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on the patient anatomy using the log file data. Changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% for the planning target volume (PTV) and 2.4% for organs at risk (OARs) in both plan types. These changes at the tighter tolerance level improved to 1.0% for the PTV and 1.5% for the OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that the tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. SU-E-T-416: Experimental Evaluation of a Commercial GPU-Based Monte Carlo Dose Calculation Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Paudel, M R; Beachey, D J; Sarfehnia, A; Sahgal, A; Keller, B [Sunnybrook Odette Cancer Center, Toronto, ON (Canada); University of Toronto, Department of Radiation Oncology, Toronto, ON (Canada); Kim, A; Ahmad, S [Sunnybrook Odette Cancer Center, Toronto, ON (Canada)

    2015-06-15

    Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm's ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data for a 6MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic was designed/built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm³ voxel size, 0.5% statistical uncertainty) and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm², 5×5 cm², and 10×10 cm² field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% compared to measurements for XVMC and GPUMCD. For tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for the 2×2 cm² field size where only the CCC algorithm differed from film by 5% in the lung region. The analysis for the bone-in-tissue and prosthetic phantoms is ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm

  14. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for the parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate the theoretical MGCS performance acceleration and intelligently determine the workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods for many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
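
    The form of such an analytical model is not given in the record; a common minimal sketch divides the single-GPU compute time across N GPUs and adds a communication cost that grows with GPU count, so speedup approaches proportionality to N only while compute dominates. All constants below are assumptions for illustration.

      def mgcs_time(n_gpus, compute_s=120.0, comm_per_gpu_s=0.8, fixed_s=2.0):
          """Toy analytical model: single-GPU compute time divided across
          GPUs, plus a network transfer cost that grows with GPU count."""
          return compute_s / n_gpus + comm_per_gpu_s * n_gpus + fixed_s

      for n in (1, 2, 4, 8, 14):
          t = mgcs_time(n)
          print(f"{n:2d} GPUs: {t:6.1f} s (speedup {mgcs_time(1) / t:4.1f}x)")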

  15. Use of a Web-Based Calculator and a Structured Report Generator to Improve Efficiency, Accuracy, and Consistency of Radiology Reporting.

    Science.gov (United States)

    Towbin, Alexander J; Hawkins, C Matthew

    2017-10-01

    While medical calculators are common, they are infrequently used in day-to-day radiology practice. We hypothesized that a calculator coupled with a structured report generator would decrease the time required to interpret and dictate a study, in addition to decreasing the number of errors in interpretation. A web-based application was created to help radiologists calculate leg-length discrepancies. A time motion study was performed to evaluate whether the calculator helped to decrease the time for interpretation and dictation of leg-length radiographs. Two radiologists each evaluated two sets of ten radiographs, one set using the traditional pen and paper method and the other set using the calculator. The time to interpret each study and the time to dictate each study were recorded. In addition, each calculation was checked for errors. When comparing the two methods of calculating the leg lengths, the manual method was significantly slower than the calculator for all time points measured, including the mean time to calculate the leg-length discrepancy (131.8 vs. 59.7 s). Reports created with the calculator were also more accurate than reports created via the manual method (100% vs. 90%), although this result was not significant (p = 0.16). A calculator with a structured report generator significantly reduced the time required to calculate and dictate leg-length discrepancy studies.

  16. Internationally comparable diagnosis-specific survival probabilities for calculation of the ICD-10-based Injury Severity Score.

    Science.gov (United States)

    Gedeborg, Rolf; Warner, Margaret; Chen, Li-Hui; Gulliver, Pauline; Cryer, Colin; Robitaille, Yvonne; Bauer, Robert; Ubeda, Clotilde; Lauritsen, Jens; Harrison, James; Henley, Geoff; Langley, John

    2014-02-01

    The International Statistical Classification of Diseases, 10th Revision (ICD-10)-based Injury Severity Score (ICISS) performs well but requires diagnosis-specific survival probabilities (DSPs), which are empirically derived, for its calculation. The objective was to examine if DSPs based on data pooled from several countries could increase accuracy, precision, utility, and international comparability of DSPs and ICISS. Australia, Argentina, Austria, Canada, Denmark, New Zealand, and Sweden provided ICD-10-coded injury hospital discharge data, including in-hospital mortality status. Data from the seven countries were pooled using four different methods to create an international collaborative effort ICISS (ICE-ICISS). The ability of the ICISS to predict mortality using the country-specific DSPs and the pooled DSPs was estimated and compared. The pooled DSPs were based on a total of 3,966,550 observations of injury diagnoses from the seven countries. The proportion of injury diagnoses having at least 100 discharges to calculate the DSP varied from 12% to 48% in the country-specific data set and was 66% in the pooled data set. When compared with using a country's own DSPs for ICISS calculation, the pooled DSPs resulted in somewhat reduced discrimination in predicting mortality (difference in c statistic varied from 0.006 to 0.04). Calibration was generally good when the predicted mortality risk was less than 20%. When Danish and Swedish data were used, ICISS was combined with age and sex in a logistic regression model to predict in-hospital mortality. Including age and sex improved both discrimination and calibration substantially, and the differences from using country-specific or pooled DSPs were minor. Pooling data from seven countries generated empirically derived DSPs. These pooled DSPs facilitate international comparisons and enables the use of ICISS in all settings where ICD-10 hospital discharge diagnoses are available. The modest reduction in performance of
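
    ICISS itself is simply the product of the diagnosis-specific survival probabilities over a patient's injury diagnoses; the sketch below applies this definition with hypothetical pooled DSP values (the ICD-10 codes and probabilities are invented for illustration).

      def iciss(diagnoses, dsp_table):
          """ICD-10-based Injury Severity Score: product of the
          diagnosis-specific survival probabilities (DSPs)."""
          score = 1.0
          for code in diagnoses:
              score *= dsp_table[code]
          return score

      # Hypothetical pooled DSPs (survival probability per ICD-10 code).
      pooled_dsp = {"S06.5": 0.82, "S27.3": 0.91, "S72.0": 0.97}
      patient = ["S06.5", "S72.0"]
      print(f"ICISS = {iciss(patient, pooled_dsp):.3f}")   # predicted survival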

  17. Investigating Heavy Water Zero Power Reactors with a New Core Configuration Based on Experiment and Calculation Results

    Directory of Open Access Journals (Sweden)

    Zahra Nasrazadani

    2017-02-01

    Full Text Available The heavy water zero power reactor (HWZPR), which is a critical assembly with a maximum power of 100 W, can be used with different lattice pitches. The last change of core configuration was from a lattice pitch of 18 cm to 20 cm. Based on regulations, prior to the first operation of the reactor, the new core was simulated with the MCNP-4C (Monte Carlo N-Particle) and WIMS (Winfrith Improved Multigroup Scheme)-CITATION codes. To investigate the criticality of this core, the effective multiplication factor (Keff) versus heavy water level, and the critical water level, were calculated. Then, for safety considerations, the reactivity worth of D2O, the reactivity worth of the safety and control rods, and the temperature reactivity coefficients for the fuel and the moderator were calculated. The results show that the relevant criteria in the safety analysis report were satisfied in the new core. Therefore, with the permission of the reactor safety committee, the first criticality operation was conducted, and important physical parameters were measured experimentally. The results were compared with the corresponding values in the original core.

  18. Effects of B site doping on electronic structures of InNbO4 based on hybrid density functional calculations

    Science.gov (United States)

    Lu, M. F.; Zhou, C. P.; Li, Q. Q.; Zhang, C. L.; Shi, H. F.

    2018-01-01

    In order to improve the photocatalytic activity under visible-light irradiation, we performed first-principles calculations based on density functional theory (DFT) of the electronic structures of B-site transition metal doped InNbO4. The results indicated that the complete hybridization of Nb 4d states and some Ti 3d states contributed to the new conduction band of Ti-doped InNbO4, barely changing the position of the band edge. For Cr doping, some localized Cr 3d states were introduced into the band gap. Nonetheless, the potential of the localized levels was too positive to enable a visible-light response. For Cu doping, the band gap was almost the same as that of InNbO4, and some localized Cu 3d states appeared above the top of the valence band (VB). The introduction of localized energy levels helps electrons migrate from the VB to the conduction band (CB) by absorbing lower-energy photons, realizing a visible-light response.

  19. Photonuclear reactions in astrophysical p-process: Theoretical calculations and experiment simulation based on ELI-NP

    Directory of Open Access Journals (Sweden)

    Xu Yi

    2017-01-01

    Full Text Available The astrophysical p-process is an important way of nucleosynthesis for producing the stable, proton-rich nuclei beyond Fe which cannot be reached by the s- and r-processes. In the present study, the astrophysical reaction rates of (γ,n), (γ,p), and (γ,α) reactions are computed with the modern reaction code TALYS for about 3000 stable and proton-rich nuclei with 12 < Z < 110. The nuclear structure ingredients involved in the calculation are determined from experimental data whenever available and, if not, from global microscopic nuclear models. In particular, both the Woods-Saxon potential and the double folding potential with the density-dependent M3Y (DDM3Y) effective interaction are used for the calculations. It is found that the photonuclear reaction rates are very sensitive to the nuclear potential, and a better determination of the nuclear potential would be important to reduce the uncertainties of the reaction rates. Meanwhile, the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) facility is being developed, which will provide a great opportunity to experimentally study the photonuclear reactions of the p-process. Simulations of the experimental setup for the measurements of the photonuclear reactions 96Ru(γ,p) and 96Ru(γ,α) are performed. It is shown that experiments on p-process photonuclear reactions based on ELI-NP are quite promising.

  20. The grid-based fast multipole method--a massively parallel numerical scheme for calculating two-electron interaction energies.

    Science.gov (United States)

    Toivanen, Elias A; Losilla, Sergio A; Sundholm, Dage

    2015-12-21

    Algorithms and working expressions for a grid-based fast multipole method (GB-FMM) have been developed and implemented. The computational domain is divided into cubic subdomains, organized in a hierarchical tree. The contribution to the electrostatic interaction energies from pairs of neighboring subdomains is computed using numerical integration, whereas the contributions from further apart subdomains are obtained using multipole expansions. The multipole moments of the subdomains are obtained by numerical integration. Linear scaling is achieved by translating and summing the multipoles according to the tree structure, such that each subdomain interacts with a number of subdomains that are almost independent of the size of the system. To compute electrostatic interaction energies of neighboring subdomains, we employ an algorithm which performs efficiently on general purpose graphics processing units (GPGPU). Calculations using one CPU for the FMM part and 20 GPGPUs consisting of tens of thousands of execution threads for the numerical integration algorithm show the scalability and parallel performance of the scheme. For calculations on systems consisting of Gaussian functions (α = 1) distributed as fullerenes from C20 to C720, the total computation time and relative accuracy (ppb) are independent of the system size.

  1. Calculation procedure for formulating lauric and palmitic fat blends based on the grouping of triacylglycerol melting points

    International Nuclear Information System (INIS)

    Nusantoro, B.P.; Yanty, N.A.M.; Van de Walle, D.; Hidayat, C.; Danthine, S.; Dewettinck, K.

    2017-01-01

    A calculation procedure for formulating lauric and palmitic fat blends has been developed based on grouping TAG melting points. This procedure offers more flexibility in choosing the initial fats and oils, gives deeper insight into the existing chemical compositions, and allows better prediction of the physicochemical properties and microstructure of the fat blends. The amounts of high-, medium- and low-melting TAGs could be adjusted using the given calculation procedure to obtain the desired functional properties in the fat blends. The solid fat contents and melting behavior of the formulated fat blends showed particular patterns with respect to ratio adjustments of the melting TAG groups. These outcomes also suggested that both the TAG species and their quantity have a significant influence on the crystallization behavior of the fat blends. Palmitic fat blends, in general, were found to exhibit higher SFC values than lauric fat blends. Despite the similarity in crystal microstructure, lauric fat blends stabilized in the β polymorph, while palmitic fat blends stabilized in the β′ polymorph.

  2. A Brief User's Guide to the Excel®-Based DF Calculator

    Energy Technology Data Exchange (ETDEWEB)

    Jubin, Robert Thomas [ORNL

    2016-06-01

    To understand the importance of capturing penetrating forms of iodine as well as the other volatile radionuclides, a calculation tool was developed in the form of an Excel® spreadsheet to estimate the overall plant decontamination factor (DF). The tool requires the user to estimate the splits of the volatile radionuclides within the major portions of the reprocessing plant, the speciation of iodine, and individual DFs for each off-gas stream within the used nuclear fuel reprocessing plant. The impact on the overall plant DF for each volatile radionuclide is then calculated by the tool based on the specific user choices. The Excel® spreadsheet tracks both elemental and penetrating forms of iodine separately and allows changes in the speciation of iodine at each processing step. It also tracks 3H, 14C and 85Kr. This document provides a basic user's guide to the manipulation of this tool.
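
    The roll-up such a spreadsheet performs can be reduced to a one-line formula: if a radionuclide splits into off-gas streams with fractions f_i, each treated with its own decontamination factor DF_i, the released fraction is Σ f_i/DF_i, and the plant DF is its reciprocal. A sketch with invented stream splits, not values from the guide:

      def overall_df(streams):
          # streams: list of (fraction_of_inventory, stream_DF) pairs
          released = sum(f / df for f, df in streams)
          return 1.0 / released

      # hypothetical iodine routing: dissolver off-gas, vessel off-gas, cell ventilation
      iodine_streams = [(0.95, 1000.0), (0.04, 100.0), (0.01, 10.0)]
      print(f"plant DF for iodine: {overall_df(iodine_streams):.0f}")   # ~426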

  3. Accurate finite element method for atomic calculations based on density functional theory and Hartree-Fock method

    Science.gov (United States)

    Ozaki, Taisuke; Toyoda, Masayuki

    2011-06-01

    An accurate finite element method is developed for atomic calculations based on density functional theory (DFT) within the local density approximation (LDA) and on the Hartree-Fock (HF) method. The radial wave functions are expanded by cubic Hermite spline functions on a uniform mesh for x = √r, and all the associated integrals are analytically evaluated, in conjunction with fitting procedures of the Hartree and exchange-correlation potentials to the same cubic Hermite spline functions, using a set of recurrence formulas. The total energy of atoms converges systematically from above, and the error decays algebraically as the mesh spacing decreases. When the mesh spacing d is taken to be 0.025/√Z bohr, the total energy for an atom of atomic number Z can be calculated within an error of 10^-8 hartree for both the LDA and HF methods. The equal applicability of the method to DFT and the HF method with a similarly high accuracy makes the method a reliable platform for the development of new functionals in DFT, such as hybrid functionals.

  4. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    Science.gov (United States)

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). The G3, SCS-MP2 and M11-L methods coupled with the SMD and SM8 solvation models perform well for alkanolamines, with mean unsigned errors below 0.20 pKa units in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between the experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach.
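
    The final arithmetic step in such predictions converts an aqueous deprotonation free energy, assembled from a gas-phase term and solvation corrections, into a pKa via pKa = ΔG_aq/(RT ln 10). A sketch of that thermodynamic cycle with placeholder energies; the proton solvation free energy of -265.9 kcal/mol is a commonly used literature value, not a value from this paper:

      import math

      R = 1.98720425e-3   # kcal/(mol K)
      T = 298.15          # K

      def pka_from_cycle(dG_gas, dG_solv_base, dG_solv_acid, dG_solv_proton=-265.9):
          # BH+ -> B + H+, all free energies in kcal/mol;
          # dG_solv_proton is a commonly used experimental value
          dG_aq = dG_gas + dG_solv_base + dG_solv_proton - dG_solv_acid
          return dG_aq / (R * T * math.log(10))

      # placeholder energies for an amine-like base, not values from the paper
      print(f"pKa = {pka_from_cycle(dG_gas=225.0, dG_solv_base=-5.0, dG_solv_acid=-60.0):.2f}")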

  5. ARS-Media for Excel: A Spreadsheet Tool for Calculating Media Recipes Based on Ion-Specific Constraints.

    Science.gov (United States)

    Niedz, Randall P

    2016-01-01

    ARS-Media for Excel is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem, and Excel's Solver add-in solves the linear programming equations to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least two types of problems: 1) formulating relevant ecological/biological ionic solutions, such as those from a specific lake, soil, cell, tissue, or organ, and 2) designing ion-confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line.
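
    The same linear program is easy to state outside Excel: find non-negative salt amounts x with A·x equal to the ion targets, where A holds the ion content of each salt. A sketch with a deliberately small, hypothetical 3-ion/4-salt system (the actual tool constrains the full ion set, including the counter-ions left free here):

      import numpy as np
      from scipy.optimize import linprog

      # rows: ions (K+, Na+, NO3-); columns: salts (KNO3, NaNO3, KCl, NaCl)
      # entries: mmol of ion contributed per mmol of salt
      A = np.array([
          [1, 0, 1, 0],   # K+
          [0, 1, 0, 1],   # Na+
          [1, 1, 0, 0],   # NO3-
      ], dtype=float)
      target = np.array([5.0, 2.0, 6.0])   # target mmol/L of each ion

      res = linprog(c=np.ones(4),           # minimize total salt used
                    A_eq=A, b_eq=target,
                    bounds=[(0, None)] * 4)
      print("feasible:", res.success, "salt amounts (mmol/L):", res.x.round(3))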

  6. Using Inverted Indices for Accelerating LINGO Calculations

    DEFF Research Database (Denmark)

    Kristensen, Thomas Greve; Nielsen, Jesper; Pedersen, Christian Nørgaard Storm

    2011-01-01

    The ever-growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method, called LINGO, is based on fragmenting the SMILES string representation of molecules; comparison of molecules can then be performed by calculating … queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialised hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices it is possible to calculate LINGOsim …
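
    The two ingredients are simple to sketch: LINGOs are the overlapping 4-character substrings of a (normalised) SMILES string, LINGOsim is a Tanimoto coefficient on the resulting multisets, and an inverted index restricts comparisons to molecules sharing at least one LINGO. A minimal Python illustration, omitting the SMILES preprocessing used by the actual method:

      from collections import Counter, defaultdict

      def lingos(smiles, q=4):
          # multiset of overlapping q-character substrings
          return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

      def lingosim(a, b):
          # multiset Tanimoto: min-counts over max-counts
          inter = sum((a & b).values())
          union = sum((a | b).values())
          return inter / union if union else 0.0

      smiles_db = ["CCO", "CC(=O)O", "CC(=O)OC1=CC=CC=C1C(=O)O", "c1ccccc1O"]
      profiles = [lingos(s) for s in smiles_db]

      index = defaultdict(set)              # LINGO -> ids of molecules containing it
      for i, prof in enumerate(profiles):
          for g in prof:
              index[g].add(i)

      query = lingos("CC(=O)Oc1ccccc1")
      candidates = set().union(*(index[g] for g in query if g in index))
      for i in sorted(candidates):          # only molecules sharing a LINGO
          print(smiles_db[i], round(lingosim(query, profiles[i]), 3))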

  7. Methods for calculating the absolute entropy and free energy of biological systems based on ideas from polymer physics.

    Science.gov (United States)

    Meirovitch, Hagai

    2010-01-01

    The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability P_i^B, while the value of P_i^B itself is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ≈ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea also applies to bulk systems such as fluids or magnets. This approach led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently …

  8. Uterine rupture without previous caesarean delivery

    DEFF Research Database (Denmark)

    Thisted, Dorthe L. A.; H. Mortensen, Laust; Krebs, Lone

    2015-01-01

    OBJECTIVE: To determine the incidence and patient characteristics of women with uterine rupture during singleton births at term without a previous caesarean delivery. STUDY DESIGN: Population-based cohort study. Women with term singleton birth, no record of previous caesarean delivery and planned vaginal delivery (n=611,803) were identified in the Danish Medical Birth Registry (1997-2008). Medical records from women recorded with uterine rupture during labour were reviewed to ascertain events of complete uterine rupture. Relative risk (RR) and adjusted relative risk ratio (aRR) of complete uterine rupture with 95% confidence intervals (95% CI) were ascertained according to characteristics of the women and of the delivery. RESULTS: We identified 20 cases with complete uterine rupture. The incidence of complete uterine rupture among women without previous caesarean delivery was about 3 …

  9. A study of displacement-based fluid finite elements for calculating frequencies of fluid and fluid-structure systems

    International Nuclear Information System (INIS)

    Olson, L.G.; Bathe, K.J.

    1983-01-01

    The widely used displacement-based finite element formulation for inviscid, compressible, small-displacement fluid motions is examined, with the specific objective of calculating fluid-structure frequencies. It is shown that the formulation can be employed with confidence to predict the static response of fluids. The resonant frequencies of fluids in rigid cavities and the frequencies of fluids in flexible boundaries are also solved successfully if a penalty on rotations is included in the formulation. However, the motivation for this paper is that problems involving structures moving through fluids that behave almost incompressibly - such as an ellipse vibrating on a spring in water - could not be solved satisfactorily, for which a general explanation is given. (orig.)

  10. Methods For The Calculation Of Pebble Bed High Temperature Reactors With High Burnup Plutonium And Minor Actinide Based Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Meier, Astrid; Bernnat, Wolfgang; Lohnert, Guenter [Institute for Nuclear Technology and Energy Systems, University of Stuttgart, Pfaffenwaldring 31, 70569 Stuttgart (Germany)

    2008-07-01

    The graphite-moderated Modular High Temperature Pebble Bed Reactor enables very flexible loading strategies and is one candidate for the Generation IV reactors. For this reactor, fuel cycles with high burnup (about 600 MWd/kg HM) based on plutonium (Pu) and minor actinide (MA) fuel will be investigated. The composition of this fuel is defined in the EU PuMA project, which aims at the reduction of high-level waste. Almost no neutronic full-core calculations exist for this fuel composition at high burnup. Two methods (deterministic and Monte Carlo) will be used to determine the neutronics in a full core. The detailed results will be compared with respect to their influence on criticality and safety-related parameters. (authors)

  11. Some considerations about bond indices in non-orthogonal bases and the MO calculation of valence and oxidation number

    International Nuclear Information System (INIS)

    Giambiagi, M.S. de; Giambiagi, M.; Jorge, F.E.

    1984-01-01

    In order to guarantee the desired invariance properties of bond indices, it is shown that the tensor character of the matrices concerned must be made explicit, so that one deals with a contraction, in the tensor sense, between a covariant index and a contravariant one. An MO valence definition using Wiberg's indices is generalized to non-orthogonal bases, and a straightforward definition of oxidation numbers is proposed. IEH calculations of their magnitudes for some appropriate examples are performed: they emphasize the role of 'secondary' bonds in N- and C-containing compounds; the hydrogen behaviour in half-bonds and strong H-bonds is satisfactorily accounted for; and valence and oxidation number values are assigned to Fe, Co and Ni in a few complexes. (Author)

  12. A phylogeny-based sampling strategy and power calculator informs genome-wide associations study design for microbial pathogens.

    Science.gov (United States)

    Farhat, Maha R; Shapiro, B Jesse; Sheppard, Samuel K; Colijn, Caroline; Murray, Megan

    2014-01-01

    Whole genome sequencing is increasingly used to study phenotypic variation among infectious pathogens and to evaluate their relative transmissibility, virulence, and immunogenicity. To date, relatively little has been published on how, and how many, pathogen strains should be selected for studies associating phenotype and genotype. There are specific challenges when identifying genetic associations in bacteria, which often comprise highly structured populations. Here we consider general methodological questions related to sampling and analysis, focusing on clonal to moderately recombining pathogens. We propose that a matched sampling scheme constitutes an efficient study design, and provide a power calculator based on phylogenetic convergence. We demonstrate this approach by applying it to genomic datasets for two microbial pathogens: Mycobacterium tuberculosis and Campylobacter species.

  13. Methods For The Calculation Of Pebble Bed High Temperature Reactors With High Burnup Plutonium And Minor Actinide Based Fuel

    International Nuclear Information System (INIS)

    Meier, Astrid; Bernnat, Wolfgang; Lohnert, Guenter

    2008-01-01

    The graphite moderated Modular High Temperature Pebble Bed Reactor enables very flexible loading strategies and is one candidate of the Generation IV reactors. For this reactor fuel cycles with high burnup (about 600 MWd/kg HM) based on plutonium (Pu) and minor actinides (MA) fuel will be investigated. The composition of this fuel is defined in the EU-PuMA-project which aims the reduction of high level waste. There exist nearly no neutronic full core calculations for this fuel composition with high burnup. Two methods (deterministic and Monte Carlo) will be used to determine the neutronics in a full core. The detailed results will be compared with respect to the influence on criticality and safety related parameters. (authors)

  14. Derivation of airfoil characteristics for the LM 19.1 blade based on 3D CFD rotor calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bak, C.; Soerensen, N.N.; Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Airfoil characteristics for the LM 19.1 blade are derived from 3D CFD computations on a full-scale 41-m rotor. Based on 3D CFD, the force distributions on the blades are determined, from which airfoil characteristics are derived using momentum theory. The final airfoil characteristics are constructed using both wind tunnel measurements and 3D CFD. Compared with 2D wind tunnel measurements, they show a low lift in stall for the airfoil sections at the tip and a high lift in stall for the airfoil sections at the inner part of the blade. At about 60% radius, the lift agrees well with 2D wind tunnel measurements. Aero-elastic calculations using the final airfoil characteristics show good agreement with measured power and flap moments. Furthermore, a fatigue load analysis shows a reduction of up to 15% of the load compared to commonly used data. (au)

  15. Proposed equations and reference values for calculating bone health in children and adolescents based on age and sex.

    Directory of Open Access Journals (Sweden)

    Rossana Gómez-Campos

    Full Text Available The Dual Energy X-Ray Absorptiometry (DXA is the gold standard for measuring BMD and bone mineral content (BMC. In general, DXA is ideal for pediatric use. However, the development of specific standards for particular geographic regions limits its use and application for certain socio-cultural contexts. Additionally, the anthropometry may be a low cost and easy to use alternative method in epidemiological contexts. The goal of our study was to develop regression equations for predicting bone health of children and adolescents based on anthropometric indicators to propose reference values based on age and sex.3020 students (1567 males and 1453 females ranging in ages 4.0 to 18.9 were studied from the Maule Region (Chile. Anthropometric variables evaluated included: weight, standing height, sitting height, forearm length, and femur diameter. A total body scan (without the head was conducted by means of the Dual Energy X-Ray Absorptiometry. Bone mineral density (BMD and the bone mineral content (BMC were also determined. Calcium consumption was controlled for by recording the intake of the three last days prior to the evaluation. Body Mass Index (BMI was calculated, and somatic maturation was determined by using the years of peak growth rate (APHV.Four regression models were generated to calculate bone health: for males BMD = (R2 = 0.79 and BMC = (R2 = 0.84 and for the females BMD = (R2 = 0.76 and BMC = (R2 = 0.83. Percentiles were developed by using the LMS method (p3, p5, p15, p25, p50, p75, p85, p95 and p97.Regression equations and reference curves were developed to assess the bone health of Chilean children and adolescents. These instruments help identify children with potential underlying problems in bone mineralization during the growth stage and biological maturation.

  16. Concomitant and previous osteoporotic vertebral fractures.

    Science.gov (United States)

    Lenski, Markus; Büser, Natalie; Scherer, Michael

    2017-04-01

    Background and purpose - Patients with osteoporosis who present with an acute onset of back pain often have multiple fractures on plain radiographs, and differentiation of an acute osteoporotic vertebral fracture (AOVF) from previous fractures is difficult. The aim of this study was to investigate the incidence of concomitant AOVFs and previous OVFs in patients with symptomatic AOVFs, and to identify risk factors for concomitant AOVFs. Patients and methods - This was a prospective epidemiological study based on the Registry of Pathological Osteoporotic Vertebral Fractures (REPAPORA), with 1,005 patients and 2,874 osteoporotic vertebral fractures, which has been running since February 1, 2006. Concomitant fractures are defined as at least 2 acute short-tau inversion recovery (STIR-) positive vertebral fractures that occur concomitantly; a previous fracture is a STIR-negative fracture at the time of initial diagnostics. Logistic regression was used to examine the influence of various variables on the incidence of concomitant fractures. Results - More than 99% of the osteoporotic vertebral fractures occurred in the thoracic and lumbar spine. The incidence of concomitant fractures at the time of first patient contact was 26%, and that of previous fractures was 60%. The odds ratio (OR) for concomitant fractures decreased with a higher number of previous fractures (OR = 0.86; p = 0.03) and a higher dual-energy X-ray absorptiometry T-score (OR = 0.72; p = 0.003). Interpretation - Concomitant and previous osteoporotic vertebral fractures are common. Risk factors for concomitant fractures are a low T-score and a low number of previous vertebral fractures in cases of osteoporotic vertebral fracture. An MRI scan of the complete thoracic and lumbar spine with a STIR sequence reduces the risk of under-diagnosis and under-treatment.

  17. Optimal definition of inter-residual contact in globular proteins based on pairwise interaction energy calculations, its robustness, and applications.

    Science.gov (United States)

    Fačkovec, Boris; Vondrášek, Jiří

    2012-10-25

    Although a contact is an essential measure of the topology as well as the strength of non-covalent interactions in biomolecules and their complexes, there is no general agreement on the definition of this feature. Most definitions work with simple geometric criteria which do not fully reflect the energy content or the ability of the biomolecular building blocks to arrange their environment. We offer a reasonable solution to this problem by distinguishing between "productive" and "non-productive" contacts based on their interaction energy strength and properties. We propose a method which converts the protein topology into a contact map that represents interactions with statistically significant, high interaction energies. We do not prove that these contacts are exclusively stabilizing, but they represent a gateway to thermodynamically important, rather than geometry-based, contacts. The process is based on protein fragmentation and the calculation of interaction energies using the OPLS force field, and relies on the pairwise additivity of amino acid interactions. Our approach integrates the treatment of different types of interactions, avoiding the problems resulting from their different contributions to the overall stability and the different effects of the environment. First applications to a set of homologous proteins have shown the usefulness of this classification for a sound estimate of protein stability.
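
    The classification step itself is small once the pairwise energies are in hand: keep only the pairs whose interaction energy is significantly more attractive than typical. A sketch with random placeholder energies and an illustrative threshold rule; the paper's statistical criterion differs in detail:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      E = rng.normal(0.0, 1.5, (n, n))   # placeholder pairwise energies, kcal/mol
      E = np.triu(E, k=2)                # ignore the diagonal and i, i+1 neighbours
      E = E + E.T                        # symmetrize

      neg = E[E < 0]
      threshold = neg.mean() - 2 * neg.std()   # "productive" = unusually attractive
      contact_map = (E < threshold).astype(int)
      print("productive contacts:", contact_map.sum() // 2)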

  18. Real-time model-based image reconstruction with a prior calculated database for electrical capacitance tomography

    Science.gov (United States)

    Rodriguez Frias, Marco A.; Yang, Wuqiang

    2017-04-01

    Image reconstruction for electrical capacitance tomography is a challenging task due to the severely underdetermined nature of the inverse problem. A model-based algorithm tackles this problem by reducing the number of unknowns to be calculated from the limited number of independent measurements. The conventional model-based algorithm is implemented with a finite element method to solve the forward problem at each iteration and can produce good results; however, it is time-consuming, and hence the algorithm can be used for off-line image reconstruction only. In this paper, a solution to this limitation is proposed: the model-based algorithm is implemented with a database containing a set of previously solved forward problems. In this way, the time required to perform image reconstruction is drastically reduced without sacrificing accuracy, and real-time image reconstruction is achieved at up to 100 frames s-1. Further enhancement in speed may be accomplished by implementing the reconstruction algorithm on a parallel-processing general-purpose graphics processing unit (GPGPU).

  19. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    International Nuclear Information System (INIS)

    Na, Y; Kapp, D; Kim, Y; Xing, L; Suh, T

    2014-01-01

    Purpose: To report the first experience in the development of a cloud-based treatment planning system and to investigate the performance improvement in dose calculation and treatment plan optimization on the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and the computational efficiency. All plans generated from the cc-TPS were compared to those obtained with a PC-based TPS (pc-TPS). The performance of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs), i.e., the amount of performance improvement compared to the pc-TPS. Results: Speedup factors of up to 14.0-fold were achieved, depending on the clinical case and plan type. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PR ≤ 10.6 for the head and neck case, 1.2 ≤ PR ≤ 13.3 for the lung case, and 1.0 ≤ PR ≤ 10.3 for the prostate case) than for IMRT plans. The isodose curves of plans on both the cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as that obtained with the pc-TPS.

  20. Bolus calculators.

    Science.gov (United States)

    Schmidt, Signe; Nørgaard, Kirsten

    2014-09-01

    Matching meal insulin to carbohydrate intake, blood glucose, and activity level is recommended in type 1 diabetes management. Calculating an appropriate insulin bolus size several times per day is, however, challenging and resource-demanding. Accordingly, there is a need for bolus calculators to support patients in insulin treatment decisions. Currently, bolus calculators are available integrated in insulin pumps, as stand-alone devices, and in the form of software applications that can be downloaded to, for example, smartphones. The functionality and complexity of bolus calculators vary greatly, and the few handfuls of published bolus calculator studies are heterogeneous with regard to study design, intervention, duration, and outcome measures. Furthermore, many factors unrelated to the specific device affect outcomes from bolus calculator use, so comparisons between bolus calculator studies should be conducted cautiously. Despite these reservations, there seems to be increasing evidence that bolus calculators may improve glycemic control and treatment satisfaction in patients who use the devices actively and as intended.
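
    Most bolus calculators share the same core arithmetic: a meal term from the insulin-to-carbohydrate ratio, a correction term from the insulin sensitivity factor, minus insulin still on board. A generic sketch (device rules, rounding and IOB models differ; the parameter values are illustrative, not clinical advice):

      def suggested_bolus(carbs_g, bg, target_bg, icr, isf, iob=0.0):
          # carbs_g: grams of carbohydrate; bg/target_bg: blood glucose in mmol/L
          # (or mg/dL, as long as isf uses the same unit); icr: g covered per unit
          # of insulin; isf: bg drop per unit of insulin; iob: insulin on board (U)
          meal = carbs_g / icr
          correction = (bg - target_bg) / isf
          return max(0.0, meal + correction - iob)

      print(f"{suggested_bolus(carbs_g=60, bg=9.5, target_bg=6.0, icr=10, isf=2.5, iob=1.0):.1f} U")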

  1. Integrated design of Nb-based superalloys: Ab initio calculations, computational thermodynamics and kinetics, and experimental results

    International Nuclear Information System (INIS)

    Ghosh, G.; Olson, G.B.

    2007-01-01

    An optimal integration of modern computational tools and efficient experimentation is presented for the accelerated design of Nb-based superalloys. Integrated within a systems engineering framework, we have used ab initio methods along with alloy theory tools to predict the phase stability of solid solutions and intermetallics, accelerating the assessment of thermodynamic and kinetic databases and enabling comprehensive predictive design of multicomponent multiphase microstructures as dynamic systems. Such an approach is also applicable to the accelerated design and development of other high-performance materials. Based on established principles underlying Ni-based superalloys, the central microstructural concept is a precipitation-strengthened system in which coherent cubic aluminide phase(s) provide both creep strengthening and a source of Al for Al2O3 passivation, enabled by a Nb-based alloy matrix with the required ductile-to-brittle transition temperature, atomic transport kinetics and oxygen solubility behaviors. Ultrasoft and PAW pseudopotentials, as implemented in VASP, are used to calculate the total energy, density of states and bonding charge densities of aluminides with B2 and L21 structures relevant to this research. Characterization of prototype alloys by transmission and analytical electron microscopy demonstrates the precipitation of the B2 or L21 aluminide in a (Nb) matrix. Employing the Thermo-Calc and DICTRA software systems, thermodynamic and kinetic databases are developed for substitutional alloying elements and interstitial oxygen to enhance the diffusivity ratio of Al to O for the promotion of Al2O3 passivation. However, an oxidation study at 1300 deg. C of a Nb-Hf-Al alloy, with enhanced solubility of Al in (Nb) compared with binary Nb-Al alloys, shows the presence of a mixed oxide layer of NbAlO4 and HfO2 exhibiting parabolic growth.

  2. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools.

    Science.gov (United States)

    Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte

    2016-01-01

    The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based funds is expected to steer the attention of faculty members towards quality and perpetuate higher standards. At present, however, there is a lack of suitable algorithms for calculating exam quality. In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm based on the results of the most common type of exam in medical education, the multiple-choice test. It includes item difficulty and discrimination, reliability, and the distribution of grades achieved. This algorithm quantitatively describes the exam quality of multiple-choice exams, but it can also be applied to exams involving short-essay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and, in analogy to impact factors and third-party grants, a ranking among faculty. Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, the reliability of the exam, and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for the performance-based allocation of funds.
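
    The abstract names the ingredients (item difficulty, discrimination, reliability, grade distribution) but not their exact weighting, so the composite score below is a hypothetical stand-in for the Rostock algorithm, not the algorithm itself:

      import numpy as np

      def exam_quality(answers):
          # answers: binary matrix, rows = students, columns = items
          p = answers.mean(axis=0)                       # item difficulty
          total = answers.sum(axis=1)
          disc = np.array([np.corrcoef(answers[:, j], total - answers[:, j])[0, 1]
                           for j in range(answers.shape[1])])   # item-rest correlation
          k = answers.shape[1]
          kr20 = k / (k - 1) * (1 - (p * (1 - p)).sum() / total.var(ddof=1))
          frac_difficulty_ok = np.mean((p >= 0.4) & (p <= 0.85))
          frac_disc_ok = np.mean(disc >= 0.2)
          # equal weighting of the three criteria is an arbitrary choice here
          return np.mean([frac_difficulty_ok, frac_disc_ok, max(kr20, 0.0)])

      # synthetic exam: 200 students, 40 items, one latent ability dimension
      rng = np.random.default_rng(1)
      ability = rng.normal(size=(200, 1))
      noise = rng.normal(0.0, 0.5, size=(200, 40))
      p_correct = 1.0 / (1.0 + np.exp(-(ability + noise)))
      demo = (rng.random((200, 40)) < p_correct).astype(int)
      print(f"quality score: {exam_quality(demo):.2f}")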

  3. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools

    Directory of Open Access Journals (Sweden)

    Kirschstein, Timo

    2016-05-01

    Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based funds is expected to steer the attention of faculty members towards quality and perpetuate higher standards. At present, however, there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm based on the results of the most common type of exam in medical education, the multiple-choice test. It includes item difficulty and discrimination, reliability, and the distribution of grades achieved. Results: This algorithm quantitatively describes the exam quality of multiple-choice exams, but it can also be applied to exams involving short-essay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and, in analogy to impact factors and third-party grants, a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, the reliability of the exam, and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for the performance-based allocation of funds.

  4. Screening nitrogen-rich bases and oxygen-rich acids by theoretical calculations for forming highly stable salts.

    Science.gov (United States)

    Zhang, Xueli; Gong, Xuedong

    2014-08-04

    Nitrogen-rich heterocyclic bases and oxygen-rich acids react to produce energetic salts with potential application in the field of composite explosives and propellants. In this study, 12 salts formed by the reaction of the bases 4-amino-1,2,4-triazole (A), 1-amino-1,2,4-triazole (B), and 5-aminotetrazole (C) with the acids HNO3 (I), HN(NO2)2 (II), HClO4 (III), and HC(NO2)3 (IV) are studied using DFT calculations at the B97-D/6-311++G** level of theory. For the reactions with the same base, those of HClO4 are the most exothermic and spontaneous, and the most negative ΔrGm of the formation reaction also corresponds to the highest decomposition temperature of the resulting salt. The ability of the anions and cations to form hydrogen bonds decreases in the order NO3- > N(NO2)2- > ClO4- > C(NO2)3-, and C+ > B+ > A+. The different cation abilities are mainly due to their different conformations and charge distributions. For the salts with the same anion, a larger total hydrogen-bond energy (EH,tot) leads to a higher melting point. The ordering of cations and anions by charge transfer (q), second-order perturbation energy (E2), and binding energy (Eb) is the same as that by EH,tot, so a larger q leads to larger E2, Eb, and EH,tot. All salts have similar frontier orbital distributions: the HOMO and LUMO derive from the anion and the cation, respectively, and the molecular orbital shapes are retained as the ions form a salt. To produce energetic salts, 5-aminotetrazole and HClO4 are the preferred base and acid, respectively.

  5. A flow-based methodology for the calculation of TSO to TSO compensations for cross-border flows

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.; Lekane, Th.; Marien, A.; Mees, E.; Naef, U.

    2004-01-01

    In the context of the development of the European internal electricity market, several methods for the tarification of cross-border flows have been proposed. This paper presents a flow-based method for the calculation of TSO-to-TSO compensations for cross-border flows. The basic principle of this approach is the allocation of the costs of cross-border flows to the TSOs who are responsible for these flows. This method is cost-reflective, non-transaction-based and compatible with domestic tariffs, and it can be applied when only limited data are available. Each internal transmission network is then modelled as an aggregated node, called a 'supernode', and the European network is synthesized by a graph of supernodes and arcs, each arc representing all cross-border lines between two adjacent countries. When detailed data are available, the proposed methodology is also applicable to all the nodes and lines of the transmission network. Costs associated with flows transiting through supernodes or network elements are forwarded through the network in a way that reflects how the flows make use of the network. The costs can be charged either towards loads and exports or towards generation and imports. A combination of the two charging directions can also be considered. (author)

  6. MEMS Calculator

    Science.gov (United States)

    SRD 166 MEMS Calculator (Web, free access)   This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.

  7. Synthesis, characterization, nano-sized binuclear nickel complexes, DFT calculations and antibacterial evaluation of new macrocyclic Schiff base compounds

    Science.gov (United States)

    Parsaee, Zohreh; Mohammadi, Khosro

    2017-06-01

    Some new macrocyclic, dianiline-bridged tetradentate Schiff base ligands with an N4 coordination sphere and their nickel(II) complexes, with the general formula [Ni2LCl4], where L = (C20H14N2X)2 and X = SO2, O, or CH2, have been synthesized. The compounds have been characterized by FT-IR, 1H and 13C NMR, mass spectrometry, TGA, elemental analysis, molar conductivity and magnetic moment techniques. Scanning electron microscopy (SEM) shows nano-sized structures under 100 nm for the nickel(II) complexes. NiO nanoparticles were obtained via the thermal decomposition method and analyzed by FT-IR, SEM and X-ray powder diffraction, which indicates close accordance with the standard pattern of NiO nanoparticles. All the Schiff bases and their complexes were tested in vitro for antibacterial activity against two Gram-negative and two Gram-positive bacteria. The nickel(II) complexes were found to be more active than the free macrocyclic Schiff bases. In addition, computational studies of the three ligands have been carried out at the DFT-B3LYP/6-31G+(d,p) level of theory on the spectroscopic properties, including IR, 1H NMR and 13C NMR spectroscopy. The correlations between the theoretical and experimental vibrational frequencies, 1H NMR and 13C NMR data of the ligands were 0.999, 0.930-0.973 and 0.917-0.995, respectively. The energy gap was also determined, and from the HOMO and LUMO energy values, the chemical hardness-softness, electronegativity and electrophilicity index were calculated.

  8. Designing a Method for an Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without travelling to the affected locations; however, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective, and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used: a classifier based on supervised learning techniques, namely a decision tree algorithm, applied to a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), with each node splitting the space according to the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of the polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, basing it on just one scale, for instance the MMI. Furthermore, the integration of this automatic seismic intensity methodology, once it has a low error probability, with a basic georeferencing system will allow the generation of preliminary isoseismal maps.
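
    The same pipeline can be reproduced with any C4.5-style decision tree; the sketch below uses scikit-learn in place of Weka's J48, with randomly generated stand-ins for encoded poll answers and intensity classes:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 500
      severity = rng.integers(0, 4, size=n)            # latent "true" intensity class
      X = np.column_stack([
          (severity + rng.integers(-1, 2, n)).clip(0, 3),   # perception question
          (severity + rng.integers(-1, 2, n)).clip(0, 3),   # object-effects question
          rng.integers(0, 2, n),                            # indoors/outdoors
      ])
      y = severity + 4                                 # map to hypothetical MMI IV-VII

      clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
      scores = cross_val_score(clf, X, y, cv=5)
      print(f"cross-validated accuracy: {scores.mean():.2f}")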

  9. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    Science.gov (United States)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking, and further exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
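
    The PSO ingredient of such hybrids is compact: each particle moves with inertia toward its own best and the swarm's best positions on a fitness landscape. A generic minimisation sketch, not the PSO-Snake code itself:

      import numpy as np

      def pso_minimize(f, dim=2, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_val = np.apply_along_axis(f, 1, x)
          g = pbest[pbest_val.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              # inertia + cognitive pull (own best) + social pull (swarm best)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              val = np.apply_along_axis(f, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()

      best, fval = pso_minimize(lambda p: (p ** 2).sum())   # toy objective
      print("best position:", best.round(4), "f =", round(fval, 6))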

  10. Gibbs energy calculation of electrolytic plasma channel with inclusions of copper and copper oxide with Al-base

    Science.gov (United States)

    Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.

    2017-03-01

    An oxide ceramic coating with copper inclusions was synthesized by the method of plasma electrolytic oxidation (PEO). Calculations of the Gibbs energies of reactions between the plasma channel elements and inclusions of copper and copper oxide were carried out. Two methods of forming oxide-ceramic coatings with copper inclusions on an aluminum base in electrolytic plasma were established: the first consists in introducing copper into the aluminum matrix, the second, copper oxide. In the first case, during the synthesis of the oxide ceramic coating, the plasma channel does not react with the copper, which is included in the oxide-ceramic coating; in the second case, the copper oxide is reduced in interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray and X-ray microelement analysis, and inclusions of copper, CuAl2 and Cu9Al4 were found in the oxide-ceramic coatings. It was established that in the spark plasma channels, alongside the oxidation reaction, an aluminothermic reduction of the metal also occurs, which allows doping the oxide-ceramic coating with a metal whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.

  11. Electronic, Magnetic, and Transport Properties of Polyacrylonitrile-Based Carbon Nanofibers of Various Widths: Density-Functional Theory Calculations

    Science.gov (United States)

    Partovi-Azar, P.; Panahian Jand, S.; Kaghazchi, P.

    2018-01-01

    Edge termination of graphene nanoribbons is a key factor in determination of their physical and chemical properties. Here, we focus on nitrogen-terminated zigzag graphene nanoribbons resembling polyacrylonitrile-based carbon nanofibers (CNFs) which are widely studied in energy research. In particular, we investigate magnetic, electronic, and transport properties of these CNFs as functions of their widths using density-functional theory calculations together with the nonequilibrium Green's function method. We report on metallic behavior of all the CNFs considered in this study and demonstrate that the narrow CNFs show finite magnetic moments. The spin-polarized electronic states in these fibers exhibit similar spin configurations on both edges and result in spin-dependent transport channels in the narrow CNFs. We show that the partially filled nitrogen dangling-bond bands are mainly responsible for the ferromagnetic spin ordering in the narrow samples. However, the magnetic moment becomes vanishingly small in the case of wide CNFs where the dangling-bond bands fall below the Fermi level and graphenelike transport properties arising from the π orbitals are recovered. The magnetic properties of the CNFs as well as their stability have also been discussed in the presence of water molecules and the hexagonal boron nitride substrate.

  12. Cooling Capacity Optimization: Calculation of Hardening Power of Aqueous Solution Based on Poly(N-Vinyl-2-Pyrrolidone)

    Science.gov (United States)

    Koudil, Z.; Ikkene, R.; Mouzali, M.

    2013-11-01

    Polymer quenchants are becoming increasingly popular as substitutes for traditional quenching media in the hardening of metallic alloys. Water-soluble organic polymers offer a number of environmental, economic, and technical advantages, as well as eliminating the quench-oil fire hazard. Close control of polymer quenchant solutions is essential for their successful application, in order to avoid structural defects in steels such as shrinkage cracks and deformations. The aim of the present paper is to evaluate and optimize the experimental parameters of the polymer quenching bath which give the best quenching behavior and a homogeneous microstructure in the final workpiece. This study was carried out on a water-soluble polymer based on poly(N-vinyl-2-pyrrolidone), PVP K30, which does not exhibit inverse solubility phenomena in water. The studied parameters include polymer concentration, bath temperature, and agitation speed. Cooling power and hardening performance were evaluated with the IVF SmartQuench apparatus, using the standard ISO Inconel-600 alloy probe. The original numerical evaluation method was introduced in the computation software called SQ Integra. The heat transfer coefficients were used as input data for the calculation of the microstructural constituents and the hardness profile of a cylindrical sample.

  13. Multi-scale modelling of radiation damage in Fe-Cr based on ab initio electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, Paer

    2004-04-01

    The efficiency of fast neutron reactors, such as those for fusion, breeding and transmutation, depends strongly on the neutron radiation resistance of the materials used in the reactors. The binary Fe-Cr alloy, which has many attractive properties in this regard, is the base for the best steels of today, which are, however, still not up to the required standards. Therefore, substantial effort has been devoted to finding new materials that can cope with the demands better. Experimental studies must be complemented with extensive theoretical modelling in order to understand the effects that different alloying elements have on the resistance properties of materials. To this end, the first steps of multi-scale modelling have been taken, starting out with ab initio calculations of the electronic structure over the complete concentration range of the disordered binary Fe-Cr alloy. The mixing enthalpy of Fe-Cr has been quantitatively predicted and has, together with data from the literature, been used to fit two sets of interatomic potentials for the purpose of simulating defect evolution with molecular dynamics and kinetic Monte Carlo codes. These dedicated Fe-Cr alloy potentials are new and represent important additions to the pure-element potentials that can be found in the literature.

  14. Calculation of the overlap factor for scanning LiDAR based on the tridimensional ray-tracing method.

    Science.gov (United States)

    Chen, Ruiqiang; Jiang, Yuesong; Wen, Luhong; Wen, Donghai

    2017-06-01

    The overlap factor is used to evaluate a LiDAR's light collection ability. For ranging LiDAR it is mainly determined by the optical configuration. Scanning LiDAR, however, which is equipped with a scanning mechanism to acquire a 3D point cloud of a specified target, must also take the scanning effect into account; otherwise, the light collection ability is reduced, and no echo may be received at all. From this point of view, we propose a scanning LiDAR overlap factor calculation method based on the tridimensional ray-tracing method, which can be applied to scanning LiDAR with any special laser intensity distribution, any type of telescope (reflective, refractive, or mixed), and any shape of obstruction (e.g., the reflector of a coaxial optical system). A case study for our LiDAR with a scanning mirror is carried out, and a MATLAB program is written to analyze the laser emission and reception process. A sensitivity analysis is carried out as a function of scanning mirror rotation speed and detector position, and the results guide how to optimize the overlap factor for our LiDAR. The results of this research will have guiding significance for scanning LiDAR design and assembly.

  15. Anisotropic thermal expansion of SnSe from first-principles calculations based on Grüneisen's theory.

    Science.gov (United States)

    Liu, Gang; Zhou, Jian; Wang, Hui

    2017-06-14

    Based on Grüneisen's theory, the elastic properties and thermal expansion of bulk SnSe in the Pnma phase are investigated using first-principles calculations. Our numerical results indicate that the linear thermal expansion coefficient along the a direction is smaller than that along the b direction, while the coefficient along the c direction takes a significantly negative value, even at high temperature; these results are in good agreement with experiment. In addition, generalized and macroscopic Grüneisen parameters are presented, and SnSe is found to possess a negative Poisson's ratio. The contributions of different phonon modes to the negative thermal expansion (NTE) along the c direction are investigated, and it is found that the two modes making the most important contributions to the NTE are transverse vibrations perpendicular to the c direction. Finally, we analyze the relation of the elastic constants to the negative thermal expansion, and demonstrate that NTE can occur even when all macroscopic Grüneisen parameters are positive.

  16. First-principles equation-of-state table of beryllium based on density-functional theory calculations

    Science.gov (United States)

    Ding, Y. H.; Hu, S. X.

    2017-06-01

    Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) for beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities ρ = 0.001 to 500 g/cm3 and temperatures T = 2000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the latter two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish between these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied the EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.

  17. Multi-scale modelling of radiation damage in Fe-Cr based on ab initio electronic structure calculations

    International Nuclear Information System (INIS)

    Olsson, Paer

    2004-04-01

    The efficiency of fast neutron reactors, such as those for fusion, breeding and transmutation, depends strongly on the neutron radiation resistance of the materials used in the reactors. The binary Fe-Cr alloy, which has many attractive properties in this regard, is the base for the best steels of today, which are, however, still not up to the required standards. Therefore, substantial effort has been devoted to finding new materials that can cope with the demands better. Experimental studies must be complemented with extensive theoretical modelling in order to understand the effects that different alloying elements have on the resistance properties of materials. To this end, the first steps of multi-scale modelling have been taken, starting out with ab initio calculations of the electronic structure over the complete concentration range of the disordered binary Fe-Cr alloy. The mixing enthalpy of Fe-Cr has been quantitatively predicted and has, together with data from the literature, been used to fit two sets of interatomic potentials for the purpose of simulating defect evolution with molecular dynamics and kinetic Monte Carlo codes. These dedicated Fe-Cr alloy potentials are new and represent important additions to the pure-element potentials that can be found in the literature.

  18. DFT and TD-DFT calculation of new thienopyrazine-based small molecules for organic solar cells.

    Science.gov (United States)

    Bourass, Mohamed; Benjelloun, Adil Touimi; Benzakour, Mohammed; Mcharfi, Mohammed; Hamidi, Mohammed; Bouzzine, Si Mohamed; Bouachrine, Mohammed

    2016-01-01

    Six novel organic donor-π-acceptor molecules (D-π-A) for bulk-heterojunction (BHJ) organic solar cells, based on thienopyrazine, were studied by density functional theory (DFT) and time-dependent DFT (TD-DFT) approaches, to shed light on how the order of π-conjugation influences the performance of the solar cells. The electron-acceptor group was 2-cyanoacrylic acid for all compounds, whereas the electron-donor unit was varied and its influence investigated. The TD-DFT method, combined with a hybrid exchange-correlation functional using the Coulomb-attenuating method (CAM-B3LYP), in conjunction with a polarizable continuum model of solvation (PCM) and a 6-31G(d,p) basis set, was used to predict the excitation energies and the absorption and emission spectra of all molecules. The trend of the calculated HOMO-LUMO gaps compares nicely with the spectral data. In addition, the estimated values of the open-circuit photovoltage (Voc) of these compounds are presented for two cases, blends with PC60BM and with PC71BM. The study of the structural, electronic and optical properties of these compounds could help in designing more efficient functional photovoltaic organic materials.

  19. 3D atlas-based registration can calculate malalignment of femoral shaft fractures in six degrees of freedom.

    Science.gov (United States)

    Crookshank, Meghan C; Beek, Maarten; Hardisty, Michael R; Schemitsch, Emil H; Whyne, Cari M

    2014-01-01

    This study presents and evaluates a semi-automated algorithm for quantifying malalignment in complex femoral shaft fractures from a single intraoperative cone-beam CT (CBCT) image of the fractured limb. CBCT images were acquired of complex comminuted diaphyseal fractures created in 9 cadaveric femora (27 cases). Scans were segmented using intensity-based thresholding, yielding image stacks of the proximal, distal and comminuted bone. Semi-deformable and rigid affine registrations to an intact femur atlas (synthetic or cadaveric-based) were performed to transform the distal fragment to its neutral alignment. Leg length was calculated from the volume of bone within the comminution fragment. The transformations were compared to the physical input malalignments. Using the synthetic atlas, translations were within 1.71 ± 1.08 mm (medial/lateral) and 2.24 ± 2.11 mm (anterior/posterior). The varus/valgus, flexion/extension and periaxial rotation errors were 3.45 ± 2.6°, 1.86 ± 1.5° and 3.4 ± 2.0°, respectively. The cadaveric-based atlas yielded similar results in medial/lateral and anterior/posterior translation (1.73 ± 1.28 mm and 2.15 ± 2.13 mm, respectively). Varus/valgus, flexion/extension and periaxial rotation errors were 2.3 ± 1.3°, 2.0 ± 1.6° and 3.4 ± 2.0°, respectively. Leg length errors were 1.41 ± 1.01 mm (synthetic) and 1.26 ± 0.94 mm (cadaveric). The cadaveric model demonstrated a small improvement in flexion/extension and the synthetic atlas performed slightly faster (6 min 24 s ± 50 s versus 8 min 42 s ± 2 min 25 s). This atlas-based algorithm quantified malalignment in complex femoral shaft fractures within clinical tolerances from a single CBCT image of the fractured limb.
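    The rigid part of such a registration can be summarized as recovering a six-degree-of-freedom transform between matched point sets. The sketch below uses the standard Kabsch algorithm on synthetic points; it illustrates the principle, not the authors' atlas pipeline.

```python
# Sketch: least-squares rigid (6-DOF) alignment of matched point sets via
# the Kabsch algorithm -- the kind of rigid step an atlas registration
# relies on. Synthetic points; not the authors' implementation.
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimizing ||R @ p + t - q||."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(axis=0) - R @ P.mean(axis=0)

rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))                   # synthetic fragment surface
a = np.deg2rad(10.0)                            # known 10 deg periaxial rotation
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
Q = P @ Rz.T + np.array([2.0, -1.5, 0.5])       # rotated + translated copy
R, t = kabsch(P, Q)
print(f"recovered rotation: {np.rad2deg(np.arctan2(R[1, 0], R[0, 0])):.2f} deg")
print(f"recovered translation: {np.round(t, 3)}")
```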

  20. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    Science.gov (United States)

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

    Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters from data acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results consistent with published data.
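    The core matrix-algebra idea can be shown on a two-pool toy problem restricted to longitudinal magnetization: write the coupled equations as dM/dt = A·M + b and propagate exactly with a matrix exponential. This is a minimal sketch, not the published algorithm, which also handles transverse components and saturation pulses.

```python
# Two-pool (water a / macromolecule b) longitudinal toy model:
#   dM/dt = A @ M + b,  solved exactly over an interval with a matrix
# exponential of the augmented system d/dt [M; 1] = [[A, b], [0, 0]] [M; 1].
# Parameter values are illustrative.
import numpy as np
from scipy.linalg import expm

R1a, R1b = 1.0 / 1.5, 1.0 / 1.0      # longitudinal relaxation rates [1/s]
f = 0.15                             # relative pool size M0b / M0a
kab = 2.0                            # exchange rate a -> b [1/s]
kba = kab / f                        # detailed balance
M0a, M0b = 1.0, f

A = np.array([[-(R1a + kab),        kba],
              [        kab, -(R1b + kba)]])
b = np.array([R1a * M0a, R1b * M0b])

Aug = np.zeros((3, 3))               # augmented homogeneous generator
Aug[:2, :2], Aug[:2, 2] = A, b

M = np.array([0.0, 0.0, 1.0])        # both pools saturated at t = 0
for t in (0.1, 0.5, 2.0):            # recovery times [s]
    Mza, Mzb = (expm(Aug * t) @ M)[:2]
    print(f"t = {t:4.1f} s: Mza = {Mza:.3f}, Mzb = {Mzb:.3f}")
```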

  1. A kinematic-based methodology for radiological protection: Runoff analysis to calculate the effective dose for internal exposure caused by ingestion of radioactive isotopes

    Science.gov (United States)

    Sasaki, Syota; Yamada, Tadashi; Yamada, Tomohito J.

    2014-05-01

    We aim to propose a kinematic-based methodology, similar to runoff analysis, for readily understandable radiological protection. A merit of this methodology is that it produces sufficiently accurate effective doses from basic analysis. The great earthquake struck the north-east area of Japan on March 11, 2011. The electrical facilities needed to control the Fukushima Daiichi nuclear power plant were completely destroyed by the tsunamis that followed. From the damaged reactor containment vessels, a quantity of radioactive isotopes leaked and was dispersed in the vicinity of the plant. Radiological internal exposure caused by ingestion of food containing radioactive isotopes has become an issue of great interest to the public, and has caused excessive anxiety because of a deficiency of fundamental knowledge concerning radioactivity. Concentrations of radioactivity in the human body and internal exposure have been studied extensively. Previous radiological studies, for example those by the International Commission on Radiological Protection (ICRP), employ large-scale computational simulations that include the actual mechanisms of metabolism in the human body. While computational simulation is the standard method for calculating exposure doses among radiology specialists, these methods, although exact, are too sophisticated for non-specialists to grasp as a whole. In this study, the human body is treated as a vessel. The number of radioactive atoms in the human body can be described by an equation of continuity, which is the only governing equation. The half-life, the time required for the amount of a substance to decrease by half, is the only parameter needed to calculate the number of radioactive isotopes in the human body. Because the half-life depends only on the nuclide, there are no arbitrary parameters. It is known that the number of radioactive isotopes decreases exponentially through radioactive decay (physical outflow). It is also known that radioactive isotopes
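    The vessel picture reduces to a single balance equation, integrated below with an intake term and both outflow channels. Because the excerpt above is cut off, the biological-outflow term and all numerical values are assumptions for illustration.

```python
# The body as a vessel: one continuity equation,
#   dN/dt = intake - (lam_phys + lam_bio) * N,   lam = ln(2) / half-life.
# The biological-outflow term and all values are illustrative assumptions.
import numpy as np

T_PHYS = 30.1 * 365.25               # Cs-137 physical half-life [days]
T_BIO = 110.0                        # assumed biological half-life [days]
lam = np.log(2) / T_PHYS + np.log(2) / T_BIO

intake = 100.0                       # atoms ingested per day (illustrative)
dt, days = 0.1, 1000.0
N = 0.0
for _ in range(int(days / dt)):      # forward-Euler "runoff" balance
    N += dt * (intake - lam * N)

print(f"steady-state estimate: {intake / lam:.0f} atoms")
print(f"N after {days:.0f} days: {N:.0f} atoms")
```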

  2. [The influence of energy on X-ray voxel Monte Carlo algorithm based on kilovoltage cone beam computed tomography images for dose calculation].

    Science.gov (United States)

    Wu, Kui; Li, Guangjun; Bai, Sen

    2012-06-01

    This paper investigates how different photon energies affect the accuracy of the X-ray voxel Monte Carlo (XVMC) algorithm when it is applied to dose calculation on kilovoltage cone-beam CT (kV-CBCT) images. The CIRS model 062 phantom was used to calibrate the CT-number-to-relative-electron-density tables of the CT and CBCT images. CT and CBCT scans of a head-and-neck simulation phantom placed in the same position were performed to simulate locally advanced nasopharyngeal carcinoma. 6 MV and 15 MV photon beams were selected in the Monaco TPS to design intensity-modulated radiotherapy (IMRT) plans. The XVMC algorithm was used for dose calculation; the results were then compared and the impact of energy on calculation accuracy analyzed. Comparisons of dose-volume histograms (DVHs), doses received by targets and organs at risk, and the conformity and homogeneity indices of the targets indicate high agreement between CT-based and CBCT-based plans. More of the evaluation indicators showed higher accuracy when the 15 MV beam was selected for dose calculation. Gamma-index analysis with a 2 mm/2% criterion and a 10% threshold was used to compare dose distributions. The average per-plane pass rate was 99.3% +/- 0.47% for 6 MV and 99.4% +/- 0.44% for 15 MV. After calibration, CBCT images support accurate dose calculation, with higher accuracy when the 15 MV beam is selected.
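    For reference, the pass-rate metric quoted above comes from a gamma analysis. A simplified one-dimensional, globally normalized version with the 2%/2 mm criterion and 10% threshold is sketched below; the study itself evaluates full dose planes.

```python
# Simplified 1-D, globally normalized gamma analysis (2%/2 mm, 10% threshold).
# The study evaluates full dose planes; this toy shows the metric only.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dta=2.0, dd=0.02, thr=0.10):
    """Fraction of reference points (above thr * max dose) with gamma <= 1."""
    d_max = dose_ref.max()
    gammas = []
    for i in np.where(dose_ref >= thr * d_max)[0]:
        dist2 = ((x - x[i]) / dta) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / (dd * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + diff2)))
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50.0, 50.0, 201)               # positions [mm], 0.5 mm grid
ref = np.exp(-((x / 25.0) ** 2))                # reference profile
ev = 1.01 * np.exp(-(((x - 0.4) / 25.0) ** 2))  # shifted, rescaled copy
print(f"gamma pass rate (2%/2 mm): {100.0 * gamma_pass_rate(ref, ev, x):.1f}%")
```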

  3. A Simple Method for the Calculation of Lattice Energies of Inorganic Ionic Crystals Based on the Chemical Hardness.

    Science.gov (United States)

    Kaya, Savaş; Kaya, Cemal

    2015-09-08

    This paper presents a new technique for estimating the lattice energies of inorganic ionic compounds using a simple formula. The method demonstrates the relationship between chemical hardness and the lattice energies of ionic compounds; the chemical hardness values of the compounds are calculated via our molecular hardness equation. Comparisons with experimental data and with other theoretical methods in the literature show that the new method allows easy evaluation of the lattice energies of inorganic ionic crystals without the need for ab initio or other complex calculations.

  4. A virtual photon source model of an Elekta linear accelerator with integrated mini MLC for Monte Carlo based IMRT dose calculation.

    Science.gov (United States)

    Sikora, M; Dohm, O; Alber, M

    2007-08-07

    A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity-modulated stereotactic radiosurgery treatment planning is needed to afford highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC), the Elekta Beam Modulator (EBM), is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large-field measurements in air and in water. The original commissioning procedure of the VEF, based on large-field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM is used to generate the source phase-space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up the generation of the phase-space data below the mini-MLC that are used for calculating the dose distribution in the patient. The improved VSM was commissioned for 6 and 15 MV beams. The results of the MC simulation are in very good agreement with measurements: less than 2% local difference between the MC simulation and diamond-detector measurements of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm in high-dose regions and 3%/2 mm in low-dose regions between measurement and MC simulation was achieved for field sizes from 0.8 x 0.8 cm² to 16 x 21 cm². An IMRT plan film verification

  5. Pulse arrival time (PAT) measurement based on arm ECG and finger PPG signals - comparison of PPG feature detection methods for PAT calculation.

    Science.gov (United States)

    Rajala, Satu; Ahmaniemi, Teemu; Lindholm, Harri; Taipalus, Tapio

    2017-07-01

    In this study, pulse arrival time (PAT) was measured using a simple setup consisting of an arm electrocardiogram (ECG) and a finger photoplethysmogram (PPG). Four methods of calculating PAT from the measured signals were evaluated. PAT was calculated as the time delay between the ECG R peak and one of the following points in the PPG waveform: peak (maximum value of the PPG waveform), foot (minimum value of the PPG waveform), dpeak (maximum value of the first derivative of the PPG waveform) and ddpeak (maximum value of the second derivative of the PPG waveform). In addition to PAT calculation, pulse period (PP) intervals based on the detected features were determined and compared with RR intervals derived from the ECG signal. Based on the results obtained here, the most promising method for PAT or PP calculation appears to be dpeak detection.
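    The four PPG reference points are straightforward to extract once the signals are clean. The sketch below demonstrates them on a single synthetic beat, with the R-peak time given rather than detected; a real pipeline would first filter both signals and segment beats.

```python
# The four PPG reference points for PAT, on one synthetic beat.
# R-peak detection is reduced to a given timestamp; values illustrative.
import numpy as np

fs = 500.0                                   # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
onset = 0.40                                 # assumed pulse onset [s]
pulse = np.where((t >= onset) & (t <= onset + 0.30),
                 np.sin(np.pi * (t - onset) / 0.30) ** 2, 0.0)
ppg = 0.3 * np.exp(-t / 0.2) + pulse         # tail of previous beat + new pulse
t_r_peak = 0.20                              # assumed ECG R-peak time [s]

d1 = np.gradient(ppg, 1 / fs)                # first derivative
d2 = np.gradient(d1, 1 / fs)                 # second derivative

i_peak = int(np.argmax(ppg))
i_start = int(t_r_peak * fs)
features = {
    "peak":   t[i_peak],                               # PPG maximum
    "foot":   t[i_start + int(np.argmin(ppg[i_start:i_peak]))],
    "dpeak":  t[int(np.argmax(d1))],                   # max first derivative
    "ddpeak": t[int(np.argmax(d2))],                   # max second derivative
}
for name, t_f in features.items():
    print(f"PAT ({name:6s}) = {(t_f - t_r_peak) * 1e3:6.1f} ms")
```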

  6. Calculation and decomposition of indirect carbon emissions from residential consumption in China based on the input–output model

    International Nuclear Information System (INIS)

    Zhu Qin; Peng Xizhe; Wu Kaiya

    2012-01-01

    Based on the input–output model and comparable-price input–output tables, the current paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005, and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise of the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline of the carbon emission intensity of industrial sectors had a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors resulted in an overall positive effect, except in the initial years. The increase in population raised the indirect emissions to a certain extent; however, population size is no longer the main reason for the growth of the emissions. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China of controlling and slowing the increase in the emissions while optimizing the residential consumption structure. The results imply that China should pursue its targets for energy conservation and emission reduction by restructuring the economy and improving efficiency, rather than by lowering the scale of consumption. - Highlights: We build the input–output model of indirect carbon emissions from residential consumption. We calculate the indirect emissions using the comparable-price input–output tables. We examine the impacts on the indirect emissions using the structural decomposition method. The change in the consumption structure showed a weak positive effect on the emissions. China's population size is no longer the main reason for the growth of the emissions.
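    The core input–output step behind such estimates multiplies sectoral emission intensities by the Leontief inverse and residential final demand. A toy three-sector sketch follows; all numbers are illustrative, not the paper's Chinese tables.

```python
# Toy three-sector version of the input-output step:
#   indirect emissions E = f @ (I - A)^-1 @ y.
# All numbers are illustrative, not the paper's Chinese IO tables.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # technical-coefficient matrix
              [0.15, 0.10, 0.10],
              [0.05, 0.15, 0.20]])
f = np.array([2.5, 0.8, 0.3])        # direct emission intensities [tCO2/unit]
y = np.array([100.0, 250.0, 400.0])  # residential final demand by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
print(f"indirect residential emissions: {f @ L @ y:.1f} tCO2")
```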

  7. New conformity indices based on the calculation of distances between the target volume and the volume of reference isodose

    Science.gov (United States)

    Park, J M; Park, S-Y; Ye, S-J; Kim, J H; Carlson, J

    2014-01-01

    Objective: To present conformity indices (CIs) based on the distance differences between the target volume (TV) and the volume of the reference isodose (VRI). Methods: Points on the three-dimensional surfaces of the TV and the VRI were generated, and the averaged distances between the points on the TV and the VRI were calculated (CIdistance). The performance of the presented CIs was evaluated by analysing six situations: a perfect match; an expansion and a reduction of the distance from the centroid to the VRI, relative to the distance from the centroid to the TV, by 10%; a lateral shift of the VRI by 3 cm; a rotation of the VRI by 45°; and a spherical VRI having the same volume as the TV. The presented CIs were applied to clinical prostate and head and neck (H&N) plans. Results: For the perfect match, CIdistance was 0 with a standard deviation (SD) of 0. For the expansion and the reduction, CIdistance was 10 and −10 with SDs of 11. The average value of the CIdistance in the prostate and H&N plans was 0.13 ± 7.44 and 6.04 ± 23.27, respectively. Conclusion: The performance of the CIdistance was equal to or better than that of the conventional CIs. Advances in knowledge: Evaluating target conformity by the distances between the surfaces of the TV and the VRI could be more accurate than evaluation based on volume information. PMID:25225915
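    One plausible reading of CIdistance is sketched below: compare centroid-to-surface radii of the VRI and the TV along matched directions and average the percentage difference. The paper's exact surface-point generation and matching scheme is not reproduced; a 10% spherical expansion recovers the quoted value of 10.

```python
# One plausible reading of CIdistance: average percentage difference of
# centroid-to-surface radii along matched directions. Spherical TV with a
# 10% expanded VRI reproduces the quoted value of 10; the paper's exact
# point generation and matching are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions

r_tv, r_vri = 30.0, 33.0            # mm; VRI radius expanded by 10%
pts_tv = r_tv * dirs                # surface points of the TV
pts_vri = r_vri * dirs              # matched surface points of the VRI

r1 = np.linalg.norm(pts_tv, axis=1)
r2 = np.linalg.norm(pts_vri, axis=1)
ci = (r2 - r1) / r1 * 100.0
print(f"CIdistance = {ci.mean():.1f} +/- {ci.std():.1f}")
```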

  8. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    Science.gov (United States)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

    In this research, the target volume of lung cancer was calculated from thorax computed tomography (CT) images. The target volume was calculated for the purposes of radiotherapy treatment planning. The calculation covered the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was obtained by summing the target area on each slice and multiplying the result by the slice thickness. Areas were calculated with digital image processing techniques using the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes were 577.2 cm³ for GTV, 769.9 cm³ for CTV, 877.8 cm³ for PTV, 618.7 cm³ for OAR 1, 1,162 cm³ for the right OAR 2, and 1,597 cm³ for the left OAR 2. These values indicate that the image processing techniques developed can be used to calculate the lung cancer target volume from thorax CT images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
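    The volume rule itself is simple: sum the segmented area on each slice and multiply by the slice thickness. The sketch below applies it to a synthetic mask stack standing in for the active-contour output; pixel spacing and slice thickness are assumed values.

```python
# Volume rule: sum the segmented area on each slice, multiply by thickness.
# The binary masks below are synthetic stand-ins for active-contour output;
# pixel spacing and slice thickness are assumed values.
import numpy as np

pixel_mm = 0.98                      # in-plane pixel spacing [mm] (assumed)
thickness_mm = 5.0                   # CT slice thickness [mm] (assumed)

yy, xx = np.mgrid[:512, :512]        # synthetic 40-slice mask stack
masks = np.stack([(xx - 256) ** 2 + (yy - 256) ** 2 < (60 + s) ** 2
                  for s in range(40)])

areas_mm2 = masks.sum(axis=(1, 2)) * pixel_mm ** 2    # area per slice
volume_cm3 = areas_mm2.sum() * thickness_mm / 1000.0  # mm^3 -> cm^3
print(f"target volume: {volume_cm3:.1f} cm^3")
```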

  9. Extension and validation of ARTM (atmospheric radionuclide transportation model) for the application as dispersion calculation model in AVV (general administrative provision) and SBG (incident calculation bases); Erweiterung und Validierung von ARTM fuer den Einsatz als Ausbreitungsmodell in AVV und SBG

    Energy Technology Data Exchange (ETDEWEB)

    Martens, Reinhard; Bruecher, Wenzel; Richter, Cornelia; Sentuc, Florence; Sogalla, Martin; Thielen, Harald

    2012-02-15

    In the medium term, the Gaussian plume model used so far for atmospheric dispersion calculations in the General Administrative Provision (AVV) relating to Section 47 of the Radiation Protection Ordinance (StrlSchV), as well as in the Incident Calculation Bases (SBG) relating to Section 49 StrlSchV, is to be replaced by a Lagrangian particle model. Meanwhile the Atmospheric Radionuclide Transportation Model (ARTM) is available, which allows the simulation of the atmospheric dispersion of operational releases from nuclear installations. ARTM is based on the program package AUSTAL2000, which is designed to simulate the atmospheric dispersion of non-radioactive operational releases from industrial plants, and was adapted to handle airborne radioactive releases. In the context of research project 3608S05005, possibilities for an upgrade of ARTM were investigated and, as far as possible, implemented in the program system. The work program comprises the validation and evaluation of ARTM, the implementation of technical-scientific extensions of the model system and the continuation of the exchange of experience between developers and users. In particular, the suitability of the model approach for simulating radiological consequences according to the German SBG, and the representation of the influence of buildings typical of nuclear power stations, have been validated and further evaluated. Moreover, post-processing modules for the calculation of dose-relevant decay products and for dose calculations have been developed and implemented. In order to continue the feedback and exchange of experience, a web page has been established and maintained, questions and other feedback from users have been dealt with, and a joint workshop has been held. The continued development and validation of ARTM has strengthened the basis for applications of this model system in line with the German regulations AVV and SBG. Further activity in this field can contribute to maintain and

  10. PS1-22: A Systematic Review of Web-Based Cancer Prognostic Calculators: Can They Support Patient-Centered Communication with Cancer Patients?

    Science.gov (United States)

    Rabin, Borsika; Gaglio, Bridget; Sanders, Tristan; Nekhlyudov, Larissa; Bull, Sheana; Marcus, Alfred; Dearing, James

    2013-01-01

    Background/Aims: Information about cancer prognosis is a main topic of interest for cancer patients and clinicians alike. Prognostic information can help with decisions about treatment, lessen patients' uncertainty and empower them to participate in the decision-making process. Calculating and communicating cancer prognostic information can be challenging owing to the high complexity and probabilistic nature of the information, and prognosis is further complicated by the potential interplay between cancer and other comorbid medical conditions. The purpose of this presentation is to report findings from a systematic review of web-based interactive prognostic calculators and to assess how they might support patient-centered communication of prognostic information with cancer patients. Methods: A systematic review of web-based cancer prognostic calculators was conducted using web search engines, peer-reviewed manuscripts, and expert input. Calculators had to be interactive, focus on cancer, be available in English, and provide information about the probabilities of survival/mortality, recurrence, spread, or clinical response to treatment. Eligible calculators were reviewed and abstracted for content, format, and functions of patient-centered communication, and findings were summarized in tabular format for comparison. The abstraction guide was pilot tested by all abstractors and refined using a consensus approach. Results: A total of 22 eligible web-based cancer prognostic tools, comprising 95 individual calculators for 88 distinct cancer sites, were identified and abstracted. Thirteen of the tools recommended patients as potential direct users; all other tools were designed for clinicians. Outcomes presented will include: 1) a general description of the calculators, including cancer type, designated users, types of data elements used in prognosis prediction, and validation; 2) calculator interface, data-entry features, and graphic output; 3) interpretation of

  11. A school-based program implemented by community providers previously trained for the prevention of eating and weight-related problems in secondary-school adolescents: the MABIC study protocol