Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate how sensitive Monte Carlo simulations of X-ray and electron beams are to the elemental proportions and mass densities of the materials used to represent the patient's anatomical structure. The human body can be adequately represented by air, lung, adipose tissue, muscle, soft bone and hard bone when calculating dose distributions with the Monte Carlo method. Based on our investigation, the effects of the calibration curves established with various CT scanners are not clinically significant. For the given target, the deviation in cumulative dose-volume histogram values derived from the CT-based voxel phantoms is less than 1%.
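The segmentation described above (CT numbers mapped to a small set of tissue classes with assigned densities) can be sketched as follows. The HU thresholds and nominal densities here are illustrative assumptions, not the calibration used in the paper:

```python
import numpy as np

# Hypothetical HU windows and nominal mass densities (g/cm^3) for the six
# tissue classes named in the abstract; the real calibration curve is
# scanner-specific and is an assumption here, not the authors' values.
TISSUE_CLASSES = [
    ("air",       -1050, -950, 0.0012),
    ("lung",       -950, -700, 0.26),
    ("adipose",    -700,  -20, 0.95),
    ("muscle",      -20,  120, 1.05),
    ("soft bone",   120,  500, 1.25),
    ("hard bone",   500, 3000, 1.85),
]

def classify_voxels(hu: np.ndarray):
    """Map a CT volume (in HU) to tissue labels and mass densities."""
    labels = np.zeros(hu.shape, dtype=np.int8)
    density = np.zeros(hu.shape, dtype=np.float32)
    for idx, (_, lo, hi, rho) in enumerate(TISSUE_CLASSES):
        mask = (hu >= lo) & (hu < hi)
        labels[mask] = idx
        density[mask] = rho
    return labels, density
```

A Monte Carlo engine would then assign each labelled voxel the elemental composition of its class and the per-voxel density.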
Initial clinical results for breath-hold CT-based processing of respiratory-gated PET acquisitions
Energy Technology Data Exchange (ETDEWEB)
Fin, Loic; Daouk, Joel; Morvan, Julie; Esper, Isabelle El; Saidi, Lazhar; Meyer, Marc-Etienne [Amiens University Hospital, Nuclear Medicine Department, Amiens (France); Bailly, Pascal [Amiens University Hospital, Nuclear Medicine Department, Amiens (France); CHU d' Amiens, Service de Medecine Nucleaire, unite TEP, Hopital Sud, Amiens cedex (France)
2008-11-15
Respiratory motion causes uptake in positron emission tomography (PET) images of chest structures to spread out and misregister with the CT images. This misregistration can alter the attenuation correction and thus the quantification of PET images. In this paper, we present the first clinical results for a respiratory-gated PET (RG-PET) processing method based on a single breath-hold CT (BH-CT) acquisition, which seeks to improve diagnostic accuracy via better PET-to-CT co-registration. We refer to this method as "CT-based" RG-PET processing. Thirteen lesions were studied. Patients underwent a standard clinical PET protocol and then the CT-based protocol, which consists of a 10-min list-mode RG-PET acquisition followed by a shallow end-expiration BH-CT. The respective performances of the CT-based and clinical PET methods were evaluated by comparing the distances between the lesions' centroids on PET and CT images. SUVmax and volume variations were also investigated. The CT-based method showed significantly lower (p=0.027) centroid distances (mean change relative to the clinical method = -49%; range = -100% to 0%). This led to higher SUVmax (mean change = +33%; range = -4% to 69%). Lesion volumes were significantly lower (p=0.022) in CT-based PET volumes (mean change = -39%; range = -74% to -1%) compared with clinical ones. A CT-based RG-PET processing method can be implemented in clinical practice with a small increase in radiation exposure. It improves PET-CT co-registration of lung lesions and should lead to more accurate attenuation correction and thus SUV measurement.
Pourmorteza, Amir; Chen, Marcus Y; van der Pals, Jesper; Arai, Andrew E; McVeigh, Elliot R
2016-05-01
The objective of this study was to investigate the correlation between local myocardial function estimates from CT and myocardial strain from tagged MRI in the same heart. Accurate detection of regional myocardial dysfunction can be an important finding in the diagnosis of functionally significant coronary artery disease. Tagged MRI is currently a reference standard for noninvasive regional myocardial function analysis; however, it has practical drawbacks. We have developed a CT imaging protocol and automated image analysis algorithm for estimating regional cardiac function from a few heartbeats. This method tracks the motion of the left ventricular (LV) endocardial surface to produce local function maps; we call the method Stretch Quantification of Endocardial Engraved Zones (SQUEEZ). Myocardial infarction was created in canine models by ligation of the left anterior descending coronary artery for 2 h followed by reperfusion. Tagged and cine MRI scans were performed during the reperfusion phase, and first-pass contrast-enhanced CT scans were acquired. Circumferential myocardial strain (Ecc) was calculated from the tagged MRI data. The agreement between peak systolic Ecc and SQUEEZ was investigated in 162 segments in the 9 hearts. Linear regression and Bland-Altman analysis were used to assess the correlation between the two metrics of local LV function. The results show good agreement between SQUEEZ and Ecc (r = 0.71, slope = 0.78). The good agreement between the estimates of local myocardial function obtained from CT SQUEEZ and tagged MRI provides encouragement to investigate the use of SQUEEZ for measuring regional cardiac function at a low clinical dose in humans.
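SQUEEZ, as published by this group, is defined from the ratio of local endocardial patch areas over the cardiac cycle; a minimal sketch of that definition (patch construction and surface tracking are omitted, and the function name is mine):

```python
import numpy as np

def squeez(area_t: np.ndarray, area_ed: np.ndarray) -> np.ndarray:
    """SQUEEZ for each endocardial surface patch.

    area_ed: patch areas at end-diastole; area_t: the same patches at
    time t after surface tracking. Values < 1 indicate local contraction.
    Sketch based on the published definition SQUEEZ = sqrt(A(t)/A(ED)).
    """
    return np.sqrt(np.asarray(area_t) / np.asarray(area_ed))
```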
Directory of Open Access Journals (Sweden)
Andrzej Kotela
2015-01-01
Total knee arthroplasty (TKA) is a frequently performed procedure in orthopaedic surgery. Recently, patient-specific instrumentation was introduced to facilitate correct positioning of implants. The aim of this study was to compare the early clinical results of TKA performed with patient-specific CT-based instrumentation and with the conventional technique. A prospective, randomized controlled trial was performed between January 2011 and December 2011. A group of 112 patients who met the inclusion and exclusion criteria were enrolled in this study and randomly assigned to an experimental or control group. The experimental group comprised 52 patients who received the Signature CT-based implant positioning system, and the control group consisted of 60 patients operated on with conventional instrumentation. Clinical outcomes were evaluated with the KSS and WOMAC scales, and with VAS scales to assess knee pain severity and patient satisfaction with the surgery. Specified in-hospital data were recorded. Patients were followed up for 12 months. At one year after surgery, there were no statistically significant differences between the groups with respect to clinical outcomes or in-hospital data, including operative time, blood loss, hospital length of stay, intraoperative observations, and postoperative complications. Further high-quality investigations of various patient-specific systems and longer follow-up may be helpful in assessing their utility for TKA.
Energy Technology Data Exchange (ETDEWEB)
Kruis, Matthijs F.; Kamer, Jeroen B. van de; Houweling, Antonetta C.; Sonke, Jan-Jakob; Belderbos, José S.A.; Herk, Marcel van, E-mail: m.v.herk@nki.nl
2013-10-01
Purpose: Four-dimensional positron emission tomography (4D PET) imaging of the thorax produces sharper images with reduced motion artifacts. Current radiation therapy planning systems, however, do not facilitate 4D plan optimization. When images are acquired in a 2-minute time slot, the signal-to-noise ratio of each 4D frame is low, compromising image quality. The purpose of this study was to implement and evaluate the construction of mid-position 3D PET scans, with motion compensated using a 4D computed tomography (CT)-derived motion model. Methods and Materials: All voxels of the 4D PET were registered to the time-averaged position by using a motion model derived from the 4D CT frames. After the registration the scans were summed, resulting in a motion-compensated 3D mid-position PET scan. The method was tested with a phantom dataset as well as data from 27 lung cancer patients. Results: PET motion compensation using a CT-based motion model improved image quality of both phantoms and patients in terms of increased maximum SUV (SUVmax) values and decreased apparent volumes. In homogeneous phantom data, a strong relationship was found between the amplitude-to-diameter ratio and the effects of the method. In heterogeneous patient data, the effect correlated better with the motion amplitude. In cases of large amplitudes, motion compensation may increase SUVmax by up to 25% and reduce the diameter of the 50% SUVmax volume by 10%. Conclusions: 4D CT-based motion-compensated mid-position PET scans provide improved quantitative data in terms of uptake values and volumes at the time-averaged position, thereby facilitating more accurate radiation therapy treatment planning of pulmonary lesions.
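The warp-then-sum step described above can be sketched as follows. This is an illustrative pull-back warp with nearest-neighbour sampling; the paper's registration and interpolation model is more sophisticated, and the function name and field convention are assumptions:

```python
import numpy as np

def midposition_pet(frames, dvfs_to_midpos):
    """Warp each 4D-PET frame to the time-averaged (mid) position using
    CT-derived displacement fields, then sum into one 3D volume.

    dvfs_to_midpos[i] has shape (3, *frame.shape) and gives, for every
    output voxel, the offset to the sampling location in the input frame
    (pull-back convention). Nearest-neighbour sampling keeps the sketch
    dependency-free.
    """
    shape = frames[0].shape
    grid = np.indices(shape)
    out = np.zeros(shape, dtype=np.float64)
    for frame, dvf in zip(frames, dvfs_to_midpos):
        coords = np.rint(grid + dvf).astype(np.int64)
        for ax, size in enumerate(shape):  # clamp to the volume bounds
            np.clip(coords[ax], 0, size - 1, out=coords[ax])
        out += frame[tuple(coords)]
    return out
```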
CT-based interstitial HDR brachytherapy
Energy Technology Data Exchange (ETDEWEB)
Kolotas, C.; Baltas, D.; Zamboglou, N. [Staedtische Kliniken Offenbach (Germany). Strahlenklinik
1999-09-01
Purpose: Development, application and evaluation of a CT-guided implantation technique and a fully CT-based treatment planning procedure for brachytherapy. Methods and Materials: A brachytherapy procedure based on a CT-guided implantation technique and CT-based treatment planning has been developed and clinically evaluated. For this purpose a software system (PROMETHEUS) for the 3D reconstruction of brachytherapy catheters and patient anatomy using only CT scans has been developed. An interface to the Nucletron PLATO BPS treatment planning system for optimization and calculation of dose distributions has been devised. The planning target volume(s) are defined as sets of points using contouring tools and are used for optimization of the 3D dose distribution. Dose-volume histogram based analysis of the dose distribution (COIN analysis) enables a clinically realistic evaluation of the brachytherapy application to be made. The CT-guided implantation of catheters and the CT-based treatment planning procedure were performed for interstitial brachytherapy at different tumor sites in 197 patients between 1996 and 1997. Results: The accuracy of the CT reconstruction was tested using first a quality assurance phantom and second a simulated interstitial implant of 12 needles. These were compared with the results of reconstruction using radiographs. Both methods gave comparable accuracy, but the CT-based reconstruction was faster. Clinical feasibility was proved in pre-irradiated recurrences of brain tumors, in pretreated recurrences or metastatic disease, and in breast carcinomas. The tumor volumes treated were in the range 5.1 to 2,741 cm^3. Analysis of implant quality showed a slightly, though significantly, lower COIN value for the bone implants, but no differences with respect to the planning target volume. Conclusions: The Offenbach system, incorporating the PROMETHEUS software for interstitial HDR brachytherapy, has proved to be extremely valuable.
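The COIN analysis mentioned above combines target coverage with how much of the reference isodose volume actually lies inside the target. A minimal sketch of the commonly cited definition COIN = c1 × c2 (the argument names are mine; volume extraction from the DVH is omitted):

```python
def coin(ptv_covered_by_ref, ptv_volume, ref_isodose_volume):
    """Conformal index COIN = c1 * c2, where
    c1 = fraction of the PTV receiving at least the reference dose,
    c2 = fraction of the reference isodose volume lying inside the PTV.
    All volumes in one consistent unit (e.g. cm^3)."""
    c1 = ptv_covered_by_ref / ptv_volume
    c2 = ptv_covered_by_ref / ref_isodose_volume
    return c1 * c2
```

COIN = 1 only for perfect conformity; both under-coverage of the target and dose spilling outside it lower the index.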
DEFF Research Database (Denmark)
Klausen, Thomas Levin; Mortensen, Jann; de Nijs, Robin;
2015-01-01
BACKGROUND: CT-based attenuation correction (CT-AC) using contrast-enhanced CT impacts (111)In-SPECT image quality and quantification. In this study we assessed and evaluated the effect. METHODS: A phantom (5.15 L) was filled with an aqueous solution of In-111. Three SPECT/CT scans were ... in a central volume. Ten patients referred for (111)In-octreotide scintigraphy were scanned according to our clinical (111)In-SPECT/CT protocol including a topogram, a LD (140 kVp), and a FD (120 kVp). The FD/contrast-enhanced CT was acquired in both arterial (FDAP) and venous phase (FDVP) following a mono... RESULTS: ..., and (C) 110 ± 9. For all attenuation correction (AC) scans, the mean values increased with increasing iodine concentration. PATIENTS: there were no visible artifacts in single photon emission computed tomography (SPECT) following CT-AC with contrast-enhanced CT. The average score of image quality was 4...
CT-Based Attenuation Correction in I-123-Ioflupane SPECT
Lange, Catharina; Seese, Anita; Schwarzenböck, Sarah; Steinhoff, Karen; Umland-Seidler, Bert; Krause, Bernd J.; Brenner, Winfried; Sabri, Osama
2014-01-01
Purpose Attenuation correction (AC) based on low-dose computed tomography (CT) could be more accurate in brain single-photon emission computed tomography (SPECT) than the widely used Chang method, and, therefore, has the potential to improve both semi-quantitative analysis and visual image interpretation. The present study evaluated CT-based AC for dopamine transporter SPECT with I-123-ioflupane. Materials and methods Sixty-two consecutive patients in whom I-123-ioflupane SPECT including low-dose CT had been performed were recruited retrospectively at 3 centres. For each patient, 3 different SPECT images were reconstructed: without AC, with Chang AC and with CT-based AC. Distribution volume ratio (DVR) images were obtained by scaling voxel intensities using the whole brain without striata as reference. For assessing the impact of AC on semi-quantitative analysis, specific-to-background ratios (SBR) in caudate and putamen were obtained by fully automated SPM8-based region of interest (ROI) analysis and tested for their diagnostic power using receiver operating characteristic (ROC) analysis. For assessing the impact of AC on visual image reading, screenshots of stereotactically normalized DVR images presented in randomized order were interpreted independently by two raters at each centre. Results CT-based AC resulted in intermediate SBRs about halfway between no AC and Chang AC. Maximum area under the ROC curve was achieved by the putamen SBR, with negligible impact of AC (0.924, 0.935 and 0.938 for no, CT-based and Chang AC). Diagnostic accuracy of visual interpretation also did not depend on AC. Conclusions The impact of CT-based versus Chang AC on the interpretation of I-123-ioflupane SPECT is negligible. Therefore, CT-based AC cannot be recommended for routine use in clinical patient care, not least because of the additional radiation exposure. PMID:25268228
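The specific-to-background ratio used for the semi-quantitative analysis above is a simple contrast measure; a minimal sketch (the ROI masks themselves would come from the SPM8 pipeline, which is not reproduced here):

```python
import numpy as np

def specific_binding_ratio(striatal_counts, reference_counts):
    """SBR = (mean striatal uptake - mean reference uptake) / mean reference.

    reference = whole brain without striata, as in the abstract.
    Inputs are the voxel values inside each ROI.
    """
    c_s = np.mean(striatal_counts)
    c_r = np.mean(reference_counts)
    return (c_s - c_r) / c_r
```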
Energy Technology Data Exchange (ETDEWEB)
Garin, Etienne [Cancer Institute Eugene Marquis, Department of Nuclear Medicine, Rennes (France); University of Rennes 1, Rennes (France); INSERM, U-991, Liver Metabolisms and Cancer, Rennes (France); Rolland, Yan [Cancer Institute Eugene Marquis, Department of Medical Imaging, Rennes (France); Laffont, Sophie [University of Rennes 1, Rennes (France); Edeline, Julien [University of Rennes 1, Rennes (France); INSERM, U-991, Liver Metabolisms and Cancer, Rennes (France); Cancer Institute Eugene Marquis, Department of Medical Oncology, Rennes (France)
2016-03-15
Radioembolization with 90Y-loaded microspheres is increasingly used in the treatment of primary and secondary liver cancer. Technetium-99m macroaggregated albumin (MAA) scintigraphy is used as a surrogate for microsphere distribution to assess lung or digestive shunting, tumoral targeting, and dosimetry prior to therapy. To date, this has been the sole pre-therapeutic tool available for such evaluation. Several dosimetric approaches have been described using both glass and resin microspheres in hepatocellular carcinoma (HCC) and liver metastasis. Given that each product offers different specific activities and numbers of spheres injected, their radiobiological properties are believed to differ slightly. This paper summarizes and discusses the available studies focused on MAA-based dosimetry, particularly concentrating on potential confounding factors like clinical context, tumor size, cirrhosis, previous or concomitant therapy, and product used. In terms of the impact of tumoral dose in HCC, the results were concordant, and a dose-response relationship and a tumoral threshold dose were clearly identified, especially in studies using glass microspheres. Tumoral dose has also been found to influence survival. The concept of treatment intensification has recently been introduced, yet despite several studies publishing interesting findings on the tumor dose-metastasis relationship, no consensus has been reached, and further clarification is thus required. Nor has the maximal tolerated dose to the liver been well documented, requiring more accurate evaluation. Lung dose was well described, despite recently identified factors influencing its evaluation, requiring further assessment. MAA SPECT/CT dosimetry is accurate in HCC and can now be used in order to achieve a fully customized approach, including treatment intensification. Yet further studies are warranted for the metastasis setting and for evaluating the maximal tolerated liver dose.
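The compartmental dose estimates discussed above typically rest on the standard 90Y absorbed-dose relation under the assumption of complete local energy deposition. A minimal sketch (the function name and arguments are mine; the 49.67 J/GBq constant is the widely used 90Y mean-energy factor):

```python
def y90_absorbed_dose_gy(activity_gbq, uptake_fraction, mass_kg):
    """MIRD-type estimate for 90Y microspheres assuming all beta energy
    is deposited locally:

        D [Gy] ~= 49.67 * A [GBq] * uptake_fraction / m [kg]

    uptake_fraction: fraction of injected activity in the compartment
    (e.g. tumor or non-tumoral liver, from MAA SPECT/CT);
    mass_kg: compartment mass.
    """
    return 49.67 * activity_gbq * uptake_fraction / mass_kg
```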
Evaluation of CT-based SUV normalization
Devriese, Joke; Beels, Laurence; Maes, Alex; Van de Wiele, Christophe; Pottel, Hans
2016-09-01
The purpose of this study was to determine patients' lean body mass (LBM) and lean tissue (LT) mass using a computed tomography (CT)-based method, and to compare standardized uptake values (SUV) normalized by these parameters to conventionally normalized SUVs. Head-to-toe positron emission tomography (PET)/CT examinations were retrospectively retrieved and semi-automatically segmented into tissue types based on thresholding of CT Hounsfield units (HU). The following HU ranges were used for determination of CT-estimated LBM and LT (LBM_CT and LT_CT): -180 to -7 for adipose tissue (AT), -6 to 142 for LT, and 143 to 3010 for bone tissue (BT). Formula-estimated LBMs were calculated using the formulas of James (1976 Research on Obesity: a Report of the DHSS/MRC Group (London: HMSO)) and Janmahasatian et al (2005 Clin. Pharmacokinet. 44 1051-65), and body surface area (BSA) was calculated using the DuBois formula (DuBois and DuBois 1989 Nutrition 5 303-11). The CT segmentation method was validated by comparing total patient body weight (BW) to CT-estimated BW (BW_CT). LBM_CT was compared to the formula-based estimates (LBM_James and LBM_Janma). SUVs in two healthy reference tissues, liver and mediastinum, were normalized for the aforementioned parameters and compared to each other in terms of variability and dependence on normalization factors and BW. Comparison of actual BW to BW_CT shows a non-significant difference of 0.8 kg. LBM_James estimates are significantly higher than LBM_Janma, with differences of 4.7 kg for female and 1.0 kg for male patients. Formula-based LBM estimates do not differ significantly from LBM_CT, neither for men nor for women. The coefficient of variation (CV) of SUV normalized for LBM_James (SUV_LBM-James) (12.3%) was significantly reduced in liver compared to SUV_BW (15.4%). All SUV variances in mediastinum were significantly reduced (CVs 11.1-12.2%) compared to SUV_BW (15.5%), except SUV_BSA (15.2%). Only SUV_BW and SUV_LBM-James show
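The formula-based normalization factors compared above can be sketched from their commonly cited published forms. These are my reconstructions of the James, Janmahasatian and DuBois formulas, not values taken from this paper's text:

```python
def lbm_james(weight_kg, height_cm, sex):
    """James (1976) lean body mass, as widely cited."""
    if sex == "male":
        return 1.10 * weight_kg - 128 * (weight_kg / height_cm) ** 2
    return 1.07 * weight_kg - 148 * (weight_kg / height_cm) ** 2

def lbm_janmahasatian(weight_kg, height_cm, sex):
    """Janmahasatian et al. (2005) fat-free mass via BMI."""
    bmi = weight_kg / (height_cm / 100.0) ** 2
    if sex == "male":
        return 9270 * weight_kg / (6680 + 216 * bmi)
    return 9270 * weight_kg / (8780 + 244 * bmi)

def bsa_dubois(weight_kg, height_cm):
    """DuBois & DuBois body surface area in m^2."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725
```

The normalized SUVs then follow as SUV_X = concentration / (injected activity / X) for X in {BW, LBM, BSA}.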
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects who were clinically categorized as Alzheimer's Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction compared with those corrected using a uniform correction map.
Intensity modulation with electrons: calculations, measurements and clinical applications.
Karlsson, M G; Karlsson, M; Zackrisson, B
1998-05-01
Intensity modulation of electron beams is one step towards truly conformal therapy. This can be realized with the MM50 racetrack microtron that utilizes a scanning beam technique. By adjusting the scan pattern it is possible to obtain arbitrary fluence distributions. Since the monitor chambers in the treatment head are segmented in both x- and y-directions it is possible to verify the fluence distribution to the patient at any time during the treatment. Intensity modulated electron beams have been measured with film and a plane parallel chamber and compared with calculations. The calculations were based on a pencil beam method. An intensity distribution at the multileaf collimator (MLC) level was calculated by superposition of measured pencil beams over scan patterns. By convolving this distribution with a Gaussian pencil beam, which has propagated from the MLC to the isocentre, a fluence distribution at isocentre level was obtained. The agreement between calculations and measurements was within 2% in dose or 1 mm in distance in the penumbra zones. A standard set of intensity modulated electron beams has been developed. These beams have been implemented in a treatment planning system and are used for manual optimization. A clinical example (prostate) of such an application is presented and compared with a standard irradiation technique.
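The final step described above (convolving the MLC-level intensity distribution with a Gaussian pencil beam that has spread on its way to the isocentre) can be sketched with an FFT convolution. The kernel width is an assumed in-air scattering parameter, not a value from the paper:

```python
import numpy as np

def fluence_at_isocentre(fluence_mlc, sigma_mm, pixel_mm):
    """Convolve the MLC-level fluence with a normalized Gaussian kernel of
    width sigma_mm, approximating pencil-beam spread from the MLC plane to
    the isocentre. Uses circular FFT convolution; pad in practice to avoid
    wrap-around at the field edges."""
    ny, nx = fluence_mlc.shape
    y = (np.arange(ny) - ny // 2) * pixel_mm
    x = (np.arange(nx) - nx // 2) * pixel_mm
    yy, xx = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_mm ** 2))
    kernel /= kernel.sum()
    spectrum = np.fft.fft2(fluence_mlc) * np.fft.fft2(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft2(spectrum))
```

Because the kernel is normalized, a uniform fluence is left unchanged; only gradients and penumbrae are blurred.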
Energy Technology Data Exchange (ETDEWEB)
Bissonnette, Jean-Pierre; Balter, Peter A.; Dong Lei; Langen, Katja M.; Lovelock, D. Michael; Miften, Moyed; Moseley, Douglas J.; Pouliot, Jean; Sonke, Jan-Jakob; Yoo, Sua [Task Group 179, Department of Radiation Physics, Princess Margaret Hospital, University of Toronto, Toronto, Ontario, M5G 2M9 (Canada); Department of Radiation Physics, University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, M. D. Anderson Cancer Center Orlando, Orlando, Florida 32806 (United States); Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States); Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado 80045 (United States); Department of Radiation Physics, Princess Margaret Hospital, University of Toronto, Toronto, Ontario, M5G 2M9 (Canada); Department of Radiation Oncology, UCSF Comprehensive Cancer Center, 1600 Divisadero St., Suite H 1031, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Department of Radiation Oncology, Duke University, Durham, North Carolina 27710 (United States)
2012-04-15
Purpose: Commercial CT-based image-guided radiotherapy (IGRT) systems allow widespread management of geometric variations in patient setup and internal organ motion. This document provides consensus recommendations for quality assurance protocols that ensure patient safety and patient treatment fidelity for such systems. Methods: The AAPM TG-179 reviews clinical implementation and quality assurance aspects for commercially available CT-based IGRT, each with its unique capabilities and underlying physics. The systems described are kilovolt and megavolt cone-beam CT, fan-beam MVCT, and CT-on-rails. A summary of the literature describing current clinical usage is also provided. Results: This report proposes a generic quality assurance program for CT-based IGRT systems in an effort to provide a vendor-independent program for clinical users. Published data from long-term, repeated quality control tests form the basis of the proposed test frequencies and tolerances. Conclusion: A program for quality control of CT-based image-guidance systems has been produced, with focus on geometry, image quality, image dose, system operation, and safety. Agreement and clarification with respect to reports from the AAPM TG-101, TG-104, TG-142, and TG-148 has been addressed.
PET/CT Based Dose Planning in Radiotherapy
DEFF Research Database (Denmark)
Berthelsen, Anne Kiil; Jakobsen, Annika Loft; Sapru, Wendy;
2011-01-01
This mini-review describes how to perform PET/CT based radiotherapy dose planning and the advantages and possibilities obtained with the technique for radiation therapy. Our own experience since 2002 is briefly summarized from more than 2,500 patients with various malignant diseases undergoing radiotherapy planning with PET/CT prior to the treatment. The PET/CT, including the radiotherapy planning process as well as the radiotherapy process, is outlined in detail. The demanding collaboration between mould technicians, nuclear medicine physicians and technologists, radiologists and radiology technologists, radiation oncologists, physicists, and dosimetrists is emphasized. We strongly believe that PET/CT based radiotherapy planning will improve the therapeutic output in terms of target definition and non-target avoidance and will play an important role in future therapeutic interventions in many...
SU-E-J-92: On-Line Cone Beam CT Based Planning for Emergency and Palliative Radiation Therapy
Energy Technology Data Exchange (ETDEWEB)
Held, M; Morin, O; Pouliot, J [UC San Francisco, San Francisco, CA (United States)
2014-06-01
Purpose: To evaluate and develop the feasibility of on-line cone beam CT based planning for emergency and palliative radiotherapy treatments. Methods: Subsequent to phantom studies, a case library of 28 clinical megavoltage cone beam CT (MVCBCT) scans was built to assess dose-planning accuracy on MVCBCT for all anatomical sites. A simple emergency treatment plan was created on the MVCBCT and copied to its reference CT. The agreement between the dose distributions of each image pair was evaluated by the mean dose difference over the dose volume and the gamma index of the central 2D axial plane. An array of common urgent and palliative cases was also evaluated for imaging component clearance and field of view. Results: The treatment cases were categorized into four groups (head and neck, thorax/spine, pelvis, and extremities). Dose distributions for head and neck treatments were predicted accurately in all cases, with a gamma index of >95% for 2%/2 mm criteria. Thoracic spine treatments had a gamma index as low as 60%, indicating a need for better uniformity correction and tissue density calibration. Small anatomy changes between CT and MVCBCT could contribute to local errors. Pelvis and sacral spine treatment cases had a gamma index between 90% and 98% for 3%/3 mm criteria. The limited field of view became an issue for large pelvis patients. Imaging clearance was difficult for cases where the tumor was positioned far off midline. Conclusion: The MVCBCT based dose planning and delivery approach is feasible in many treatment cases. Dose distributions for head and neck patients can be predicted without restriction. Some field-of-view restrictions apply to other treatment sites. Lung tissue is most challenging for accurate dose calculations given the current imaging filters and corrections. Additional clinical cases for extremities need to be included in the study to assess the full range of site-specific planning accuracies. This work is supported by Siemens.
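The gamma index used above combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1-D sketch of the global criterion (clinical tools such as those behind this study work on 2-D/3-D grids; the function name is mine):

```python
import numpy as np

def gamma_pass_rate(ref, test, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D global gamma analysis (default 3%/3 mm).

    For each reference point, gamma is the minimum over all test points of
    sqrt((dose diff / dose tolerance)^2 + (distance / distance tolerance)^2);
    the point passes if gamma <= 1. Dose tolerance is taken relative to the
    reference maximum (global normalisation). Returns the pass rate in %.
    """
    n = len(ref)
    x = np.arange(n) * spacing_mm
    dmax = ref.max()
    gammas = np.empty(n)
    for i in range(n):
        dose_term = (test - ref[i]) / (dose_tol * dmax)
        dist_term = (x - x[i]) / dist_tol_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)
```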
Assurance calculations for planning clinical trials with time-to-event outcomes.
Ren, Shijie; Oakley, Jeremy E
2014-01-15
We consider the use of the assurance method in clinical trial planning. In the assurance method, which is an alternative to a power calculation, we calculate the probability of a clinical trial resulting in a successful outcome, via eliciting a prior probability distribution about the relevant treatment effect. This is typically a hybrid Bayesian-frequentist procedure, in that it is usually assumed that the trial data will be analysed using a frequentist hypothesis test, so that the prior distribution is only used to calculate the probability of observing the desired outcome in the frequentist test. We argue that assessing the probability of a successful clinical trial is a useful part of the trial planning process. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models. We also develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. We have made free software available for implementing our methods.
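The hybrid Bayesian-frequentist idea above (average the frequentist success probability over a prior on the treatment effect) is easy to sketch by simulation. This generic two-arm normal-outcome version is my illustration, not the survival-outcome methodology the paper develops:

```python
import numpy as np

def assurance_normal(n_per_arm, sigma, prior_mean, prior_sd,
                     alpha=0.05, n_sims=100_000, seed=0):
    """Monte Carlo assurance for a two-arm trial with a normal outcome.

    Draw the true effect from its elicited prior, simulate the observed
    z-statistic of the two-sided test at level alpha, and average the
    rejection indicator over the prior draws. With a point-mass prior
    this reduces to ordinary power.
    """
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_sims)   # prior draws
    se = sigma * np.sqrt(2.0 / n_per_arm)              # SE of the mean difference
    z_obs = rng.normal(delta / se, 1.0)                # simulated z-statistics
    return np.mean(np.abs(z_obs) > 1.959964)           # two-sided 5% test
```

Unlike a power calculation at a single assumed effect, the result is deflated by the prior probability of small or null effects, which is exactly the point of the assurance method.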
Mathematical knowledge and drug dosage calculation: Necessary clinical skills for the nurse
Directory of Open Access Journals (Sweden)
Athanasakis Efstratios
2013-01-01
Full Text Available When nurses perform their tasks, they manage situations in which mathematical knowledge is required. One such situation is the calculation of medication dosage. Aim: To review the literature on the mathematical knowledge and drug calculation skills of nurses and nursing students. Material-Method: A search of research and review articles published from January 1989 to March 2012 was conducted in the PubMed database. The search terms used were: nurses, mathematics skills, numeracy skills and medication dosology calculation skills. Results: The literature review showed that many studies focus on the mathematical knowledge and drug dosage calculation competency of nursing students. Results from these studies revealed that nursing students had poor mathematical knowledge and drug dosage calculation skills. In contrast with students, professional nurses are more likely to have sufficient skills in drug calculations. Apart from the papers analyzing the assessment of calculation skills, several studies examined educational interventions aimed at enhancing calculation skills. Accuracy and proficiency in the dosage calculation of medications is a preventive factor for errors made during medication preparation and administration. Conclusion: Mathematical knowledge and drug dosage calculation abilities are interrelated concepts and essential clinical skills for the nurse. The fact that nursing students do not have adequate skills for calculating medication dosages is an issue that schools of nursing should focus on. Further research on drug dosage calculation skills is considered essential.
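The dosage calculations at issue in this record typically reduce to the standard "desired over have" formula: volume to administer = (prescribed dose / stock strength) × stock volume. A minimal sketch (the function name is illustrative):

```python
def dose_volume_ml(prescribed_mg: float, stock_mg: float, stock_ml: float) -> float:
    """Volume to draw up: (desired dose / stock strength) * stock volume."""
    if prescribed_mg < 0 or stock_mg <= 0 or stock_ml <= 0:
        raise ValueError("doses and stock quantities must be positive")
    return prescribed_mg / stock_mg * stock_ml

# e.g. 125 mg prescribed from an ampoule of 250 mg in 5 mL -> 2.5 mL
```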
Do calculation errors by nurses cause medication errors in clinical practice? A literature review.
Wright, Kerri
2010-01-01
This review aims to examine the available literature to ascertain whether medication errors in clinical practice are the result of nurses miscalculating drug dosages. Research studies highlighting the poor calculation skills of nurses and student nurses have assessed these skills using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389-395; Hutton, M., 1998. Nursing Mathematics: the importance of application. Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21): 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically look to see whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of the American Medical Association (JAMA) and Archives, and Cochrane reviews were searched for research studies or systematic reviews reporting the incidence or causes of drug errors in clinical practice. In total, 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the cause of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor calculation skills.
A stepwise approach to the visual interpretation of CT-based myocardial perfusion.
Mehra, Vishal C; Valdiviezo, Carolina; Arbab-Zadeh, Armin; Ko, Brian S; Seneviratne, Sujith K; Cerci, Rodrigo; Lima, Joao A C; George, Richard T
2011-01-01
Cardiovascular anatomic and functional testing has long been a key component of cardiac risk assessment. As part of that strategy, CT-based imaging has made steady progress, with coronary computed tomography angiography (CTA) now established as the most sensitive noninvasive strategy for assessment of significant coronary artery disease. Myocardial CT perfusion imaging (CTP), as the functional equivalent of coronary CTA, is being tested in currently ongoing multicenter trials and is proposed to enhance the accuracy of coronary CTA alone. However, unlike coronary CTA, which has published guidelines for interpretation and is rapidly gaining applicability in noninvasive risk assessment paradigms, myocardial CTP is rapidly evolving, and guidance on a standard approach to its interpretation is lacking. In this article we describe a practical stepwise approach for interpretation of myocardial CTP that should add to the clinical applicability of this modality. These steps include (1) coronary CTA interpretation for potentially obstructive atherosclerosis, (2) reconstruction and preprocessing of myocardial CTP images, (3) image quality assessment and the identification of potentially confounding artifacts, (4) rest and stress image interpretation for enhancement patterns and areas of hypoattenuation, and (5) correlation of coronary anatomy and myocardial perfusion deficits. This systematic review uses already published methods from multiple clinical studies and is intended for general usage, independent of the platform used for image acquisition.
SU-F-207-06: CT-Based Assessment of Tumor Volume in Malignant Pleural Mesothelioma
Energy Technology Data Exchange (ETDEWEB)
Qayyum, F; Armato, S; Straus, C; Husain, A; Vigneswaran, W; Kindler, H [The University of Chicago, Chicago, IL (United States)
2015-06-15
Purpose: To determine the potential utility of computed tomography (CT) scans in the assessment of physical tumor bulk in malignant pleural mesothelioma patients. Methods: Twenty-eight patients with malignant pleural mesothelioma were used for this study. A CT scan was acquired for each patient prior to surgical resection of the tumor (median time between scan and surgery: 27 days). After surgery, the ex-vivo tumor volume was measured by a pathologist using a water displacement method. Separately, a radiologist identified and outlined the tumor boundary on each CT section that demonstrated tumor. These outlines were then analyzed to determine the total volume of disease present, the number of sections with outlines, and the mean volume of disease per outlined section. Subsets of the initial patient cohort were defined based on these parameters, e.g., cases with at least 30 sections of disease and a mean disease volume of at least 3 mL per section. For each subset, the R-squared correlation between CT-based tumor volume and physical ex-vivo tumor volume was calculated. Results: The full cohort of 28 patients yielded a modest correlation between CT-based tumor volume and ex-vivo tumor volume, with an R-squared value of 0.66. In general, as the mean tumor volume per section increased, the correlation of CT-based volume with physical tumor volume improved substantially. For example, when cases with at least 40 CT sections presenting a mean of at least 2 mL of disease per section were evaluated (n=20), the R-squared correlation increased to 0.79. Conclusion: While image-based volumetry for mesothelioma may not generally capture physical tumor volume as accurately as one might expect, there exists a set of conditions in which CT-based volume is highly correlated with physical tumor volume. SGA receives royalties and licensing fees through the University of Chicago for computer-aided diagnosis technology.
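The subset analysis above filters cases by section count and mean disease volume per section, then correlates CT-based with ex-vivo volume. A hedged sketch of that procedure (the record field names `ct_ml`, `ex_vivo_ml`, and `n_sections` are hypothetical):

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation between two volume series."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def subset_r2(cases, min_sections, min_ml_per_section):
    """R^2 for the subset of cases meeting section-count and per-section-volume cuts."""
    sel = [c for c in cases
           if c["n_sections"] >= min_sections
           and c["ct_ml"] / c["n_sections"] >= min_ml_per_section]
    return r_squared([c["ct_ml"] for c in sel],
                     [c["ex_vivo_ml"] for c in sel])
```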
Energy Technology Data Exchange (ETDEWEB)
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Results of 1 year of clinical experience with independent dose calculation software for VMAT fields
Directory of Open Access Journals (Sweden)
Juan Fernando Mata Colodro
2014-01-01
Full Text Available It is widely accepted that a redundant independent dose calculation (RIDC) must be included in any treatment planning verification procedure. Specifically, the volumetric modulated arc therapy (VMAT) technique implies a comprehensive quality assurance (QA) program in which RIDC should be included. In this paper, the results obtained in 1 year of clinical experience are presented. Eclipse from Varian is the treatment planning system (TPS) in use here. RIDC were performed with the commercial software Diamond® (PTW), which is capable of calculating VMAT fields. Once the plan is clinically accepted, it is exported via Digital Imaging and Communications in Medicine (DICOM) to the RIDC, together with the body contour, and a point dose calculation is performed, usually at the isocenter. A total of 459 plans were evaluated. The total average deviation was -0.3 ± 1.8% (one standard deviation, 1SD). For greater clarity, the plans were grouped by location: prostate, pelvis, abdomen, chest, head and neck, brain, stereotactic radiosurgery, lung stereotactic body radiation therapy, and miscellaneous. The highest absolute deviation was -0.8 ± 1.5%, corresponding to the prostate group. A linear fit between doses calculated by the RIDC and by the TPS produced a correlation coefficient of 0.9991 and a slope of 1.0023. These results are very close to those obtained in the validation process. This agreement led us to consider this RIDC software a valuable tool for QA of VMAT plans.
Results of 1 year of clinical experience with independent dose calculation software for VMAT fields.
Colodro, Juan Fernando Mata; Berna, Alfredo Serna; Puchades, Vicente Puchades; Amores, David Ramos; Baños, Miguel Alcaraz
2014-10-01
It is widely accepted that a redundant independent dose calculation (RIDC) must be included in any treatment planning verification procedure. Specifically, the volumetric modulated arc therapy (VMAT) technique implies a comprehensive quality assurance (QA) program in which RIDC should be included. In this paper, the results obtained in 1 year of clinical experience are presented. Eclipse from Varian is the treatment planning system (TPS) in use here. RIDC were performed with the commercial software Diamond® (PTW), which is capable of calculating VMAT fields. Once the plan is clinically accepted, it is exported via Digital Imaging and Communications in Medicine (DICOM) to the RIDC, together with the body contour, and a point dose calculation is performed, usually at the isocenter. A total of 459 plans were evaluated. The total average deviation was -0.3 ± 1.8% (one standard deviation, 1SD). For greater clarity, the plans were grouped by location: prostate, pelvis, abdomen, chest, head and neck, brain, stereotactic radiosurgery, lung stereotactic body radiation therapy, and miscellaneous. The highest absolute deviation was -0.8 ± 1.5%, corresponding to the prostate group. A linear fit between doses calculated by the RIDC and by the TPS produced a correlation coefficient of 0.9991 and a slope of 1.0023. These results are very close to those obtained in the validation process. This agreement led us to consider this RIDC software a valuable tool for QA of VMAT plans.
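The summary statistics reported in this record — percentage deviation of the RIDC point dose from the TPS, plus a linear fit between the two dose sets — can be reproduced as follows (a sketch only; real QA software would add per-plan tolerance checks):

```python
import numpy as np

def ridc_summary(d_ridc, d_tps):
    """Mean +/- SD percentage deviation and linear fit of RIDC vs TPS point doses."""
    d_ridc, d_tps = np.asarray(d_ridc, float), np.asarray(d_tps, float)
    dev = 100.0 * (d_ridc - d_tps) / d_tps           # per-plan deviation in %
    slope, intercept = np.polyfit(d_tps, d_ridc, 1)  # RIDC = slope * TPS + intercept
    r = np.corrcoef(d_tps, d_ridc)[0, 1]             # correlation coefficient
    return dev.mean(), dev.std(ddof=1), slope, r
```

For identical dose sets the deviation is 0% and the fit recovers slope 1 with correlation 1, mirroring the near-unity slope (1.0023) and correlation (0.9991) reported above.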
Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J
2013-03-04
A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. Beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and with all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC-calculated monitor units (MUs) was found to increase with increasing 3D Gaussian smoothing level. 3D Gaussian smoothing greatly improved the visual usability of the dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed.
Additive scales in degenerative disease - calculation of effect sizes and clinical judgment
Directory of Open Access Journals (Sweden)
Riepe Matthias W
2011-12-01
Full Text Available Abstract Background: The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast on the clinical relevance of statistically significant findings that rely on p values, which carry the potential to report chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. Methods: We simulate therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of effect size calculations for additive scales, which are proven mathematically. Results: We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases, cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Conclusions: Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.
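The pooled effect sizes discussed in this record are typically standardized mean differences computed from cumulative global scores; the abstract's point is that the resulting number is distorted when the underlying subscales decline non-linearly. A minimal sketch of the calculation itself (Cohen's d with pooled SD, the common meta-analysis form):

```python
import numpy as np

def cohens_d(treated, control):
    """Standardized mean difference with pooled standard deviation (Cohen's d)."""
    t, c = np.asarray(treated, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                        / (nt + nc - 2))
    return (t.mean() - c.mean()) / pooled_sd
```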
Doucet, R.; Olivares, M.; DeBlois, F.; Podgorsak, E. B.; Kawrakow, I.; Seuntjens, J.
2003-08-01
Calculations of dose distributions in heterogeneous phantoms in clinical electron beams, carried out using the fast voxel Monte Carlo (MC) system XVMC and the conventional MC code EGSnrc, were compared with measurements. Irradiations were performed using the 9 MeV and 15 MeV beams from a Varian Clinac-18 accelerator with a 10 × 10 cm² applicator and an SSD of 100 cm. Depth doses were measured with thermoluminescent dosimetry techniques (TLD 700) in phantoms consisting of slabs of Solid Water™ (SW) and bone and slabs of SW and lung tissue-equivalent materials. Lateral profiles in water were measured using an electron diode at different depths behind one and two immersed aluminium rods. The accelerator was modelled using the EGS4/BEAM system and optimized phase-space files were used as input to the EGSnrc and the XVMC calculations. Also, for the XVMC, an experiment-based beam model was used. All measurements were corrected by the EGSnrc-calculated stopping power ratios. Overall, there is excellent agreement between the corrected experimental and the two MC dose distributions. Small remaining discrepancies may be due to the non-equivalence between physical and simulated tissue-equivalent materials and to detector fluence perturbation effect correction factors that were calculated for the 9 MeV beam at selected depths in the heterogeneous phantoms.
Energy Technology Data Exchange (ETDEWEB)
Doucet, R [Medical Physics Unit, McGill University, Montreal General Hospital, 1650 Ave Cedar, Montreal H3G 1A4 (Canada); Olivares, M [Medical Physics Unit, McGill University, Montreal General Hospital, 1650 Ave Cedar, Montreal H3G 1A4 (Canada); DeBlois, F [Medical Physics Unit, McGill University, Montreal General Hospital, 1650 Ave Cedar, Montreal H3G 1A4 (Canada); Podgorsak, E B [Medical Physics Unit, McGill University, Montreal General Hospital, 1650 Ave Cedar, Montreal H3G 1A4 (Canada); Kawrakow, I [National Research Council Canada, Ionizing Radiation Standards Group, Ottawa K1A 0R6, Canada (Canada); Seuntjens, J [Medical Physics Unit, McGill University, Montreal General Hospital, 1650 Ave Cedar, Montreal H3G 1A4 (Canada)
2003-08-07
Calculations of dose distributions in heterogeneous phantoms in clinical electron beams, carried out using the fast voxel Monte Carlo (MC) system XVMC and the conventional MC code EGSnrc, were compared with measurements. Irradiations were performed using the 9 MeV and 15 MeV beams from a Varian Clinac-18 accelerator with a 10 × 10 cm² applicator and an SSD of 100 cm. Depth doses were measured with thermoluminescent dosimetry techniques (TLD 700) in phantoms consisting of slabs of Solid Water™ (SW) and bone and slabs of SW and lung tissue-equivalent materials. Lateral profiles in water were measured using an electron diode at different depths behind one and two immersed aluminium rods. The accelerator was modelled using the EGS4/BEAM system and optimized phase-space files were used as input to the EGSnrc and the XVMC calculations. Also, for the XVMC, an experiment-based beam model was used. All measurements were corrected by the EGSnrc-calculated stopping power ratios. Overall, there is excellent agreement between the corrected experimental and the two MC dose distributions. Small remaining discrepancies may be due to the non-equivalence between physical and simulated tissue-equivalent materials and to detector fluence perturbation effect correction factors that were calculated for the 9 MeV beam at selected depths in the heterogeneous phantoms.
MicroCT-Based Skeletal Models for Use in Tomographic Voxel Phantoms for Radiological Protection
Energy Technology Data Exchange (ETDEWEB)
Bolch, Wesley [Univ. of Florida, Gainesville, FL (United States)
2010-03-30
The University of Florida (UF) proposes to develop two high-resolution image-based skeletal dosimetry models for direct use by ICRP Committee 2's Task Group on Dose Calculation in their forthcoming Reference Voxel Male (RVM) and Reference Voxel Female (RVF) whole-body dosimetry phantoms. These two phantoms are CT-based, and thus do not have the image resolution to delineate and perform radiation transport modeling of the individual marrow cavities and bone trabeculae throughout their skeletal structures. Furthermore, new and innovative 3D microimaging techniques will now be required for the skeletal tissues following Committee 2's revision of the target tissues of relevance for radiogenic bone cancer induction. This target tissue had been defined in ICRP Publication 30 as a 10-μm cell layer on all bone surfaces of trabecular and cortical bone. The revised target tissue is now a 50-μm layer within the marrow cavities of trabecular bone only and is exclusive of the marrow adipocytes. Clearly, this new definition requires the use of 3D microimages of the trabecular architecture not available from past 2D optical studies of the adult skeleton. With our recent acquisition of two relatively young cadavers (males aged 18 and 40 years), we will develop a series of reference skeletal models that can be directly applied to (1) the new ICRP reference voxel male and female phantoms developed for the ICRP, and (2) pediatric phantoms developed to target the ICRP reference children. Dosimetry data to be developed will include absorbed fractions for internal beta and alpha-particle sources, as well as photon and neutron fluence-to-dose response functions for direct use in external dosimetry studies of the ICRP reference workers and members of the general public.
Dietrich, Johannes W.; Landgrafe-Mende, Gabi; Wiora, Evelin; Chatzitomaris, Apostolos; Klein, Harald H.; Midgley, John E. M.; Hoermann, Rudolf
2016-01-01
Although technical problems of thyroid testing have largely been resolved by modern assay technology, biological variation remains a challenge. This applies to subclinical thyroid disease, non-thyroidal illness syndrome, and those 10% of hypothyroid patients, who report impaired quality of life, despite normal thyrotropin (TSH) concentrations under levothyroxine (L-T4) replacement. Among multiple explanations for this condition, inadequate treatment dosage and monotherapy with L-T4 in subjects with impaired deiodination have received major attention. Translation to clinical practice is difficult, however, since univariate reference ranges for TSH and thyroid hormones fail to deliver robust decision algorithms for therapeutic interventions in patients with more subtle thyroid dysfunctions. Advances in mathematical and simulative modeling of pituitary–thyroid feedback control have improved our understanding of physiological mechanisms governing the homeostatic behavior. From multiple cybernetic models developed since 1956, four examples have also been translated to applications in medical decision-making and clinical trials. Structure parameters representing fundamental properties of the processing structure include the calculated secretory capacity of the thyroid gland (SPINA-GT), sum activity of peripheral deiodinases (SPINA-GD) and Jostel’s TSH index for assessment of thyrotropic pituitary function, supplemented by a recently published algorithm for reconstructing the personal set point of thyroid homeostasis. In addition, a family of integrated models (University of California-Los Angeles platform) provides advanced methods for bioequivalence studies. This perspective article delivers an overview of current clinical research on the basis of mathematical thyroid models. In addition to a summary of large clinical trials, it provides previously unpublished results of validation studies based on simulation and clinical samples. PMID:27375554
Dietrich, Johannes W; Landgrafe-Mende, Gabi; Wiora, Evelin; Chatzitomaris, Apostolos; Klein, Harald H; Midgley, John E M; Hoermann, Rudolf
2016-01-01
Although technical problems of thyroid testing have largely been resolved by modern assay technology, biological variation remains a challenge. This applies to subclinical thyroid disease, non-thyroidal illness syndrome, and those 10% of hypothyroid patients, who report impaired quality of life, despite normal thyrotropin (TSH) concentrations under levothyroxine (L-T4) replacement. Among multiple explanations for this condition, inadequate treatment dosage and monotherapy with L-T4 in subjects with impaired deiodination have received major attention. Translation to clinical practice is difficult, however, since univariate reference ranges for TSH and thyroid hormones fail to deliver robust decision algorithms for therapeutic interventions in patients with more subtle thyroid dysfunctions. Advances in mathematical and simulative modeling of pituitary-thyroid feedback control have improved our understanding of physiological mechanisms governing the homeostatic behavior. From multiple cybernetic models developed since 1956, four examples have also been translated to applications in medical decision-making and clinical trials. Structure parameters representing fundamental properties of the processing structure include the calculated secretory capacity of the thyroid gland (SPINA-GT), sum activity of peripheral deiodinases (SPINA-GD) and Jostel's TSH index for assessment of thyrotropic pituitary function, supplemented by a recently published algorithm for reconstructing the personal set point of thyroid homeostasis. In addition, a family of integrated models (University of California-Los Angeles platform) provides advanced methods for bioequivalence studies. This perspective article delivers an overview of current clinical research on the basis of mathematical thyroid models. In addition to a summary of large clinical trials, it provides previously unpublished results of validation studies based on simulation and clinical samples.
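Of the structure parameters listed in the two records above, Jostel's TSH index has the simplest published form, TSHI = ln TSH + 0.1345 · FT4. A hedged sketch; the units (TSH in mIU/L, FT4 in pmol/L) and the coefficient should be verified against the primary source before any clinical use:

```python
import math

def jostel_tshi(tsh_miu_per_l: float, ft4_pmol_per_l: float) -> float:
    """Jostel's TSH index: ln(TSH) + 0.1345 * FT4 (assumed units: mIU/L, pmol/L)."""
    if tsh_miu_per_l <= 0:
        raise ValueError("TSH must be positive for the log transform")
    return math.log(tsh_miu_per_l) + 0.1345 * ft4_pmol_per_l
```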
Nazarian, Ara; Entezari, Vahid; Zurakowski, David; Calderon, Nathan; Hipp, John A.; Villa-Camacho, Juan C.; Lin, Patrick P.; Cheung, Felix H.; Aboulafia, Albert J.; Turcotte, Robert; Anderson, Megan E.; Gebhardt, Mark C.; Cheng, Edward Y.; Terek, Richard M.; Yaszemski, Michael; Damron, Timothy A.; Snyder, Brian D.
2015-01-01
Background: Pathological fractures could be prevented if reliable methods of fracture risk assessment were available. A multi-center, prospective study was conducted to identify significant predictors of physicians' treatment plans for skeletal metastasis based on clinical fracture risk assessments and the proposed CT-based Rigidity Analysis (CTRA). Methods: Orthopaedic oncologists selected a treatment plan for 124 patients with 149 metastatic lesions based on the Mirels method. Then, CTRA was performed and the results were provided to the physicians, who were asked to reassess their treatment plan. The pre- and post-CTRA treatment plans were compared to identify cases where the treatment plan was changed based on the CTRA report. Patients were followed for a 4-month period to establish the incidence of pathological fractures. Results: Pain, lesion type and lesion size were significant predictors of the pre-CTRA plan. After providing the CTRA results, physicians changed their plan for 36 patients. CTRA results, pain and primary source of metastasis were significant predictors of the post-CTRA plan. Follow-up of patients who did not undergo fixation resulted in 7 fractures; CTRA predicted these fractures with 100% sensitivity and 90% specificity, whereas the Mirels method was 71% sensitive and 50% specific. Conclusions: Lesion type and size and pain level influenced the physicians' plans for management of metastatic lesions. Physicians' treatment plans and fracture risk predictions were significantly influenced by the availability of CTRA results. Due to its high sensitivity and specificity, CTRA could potentially be used as a screening method for pathological fractures. PMID:25724521
Cleophas, Ton J
2012-01-01
The first part of this title contained all statistical tests relevant to starting clinical investigations, and included tests for continuous and binary data, power, sample size, multiple testing, variability, confounding, interaction, and reliability. The current part 2 of this title reviews methods for handling missing data, manipulated data, multiple confounders, predictions beyond observation, uncertainty of diagnostic tests, and the problems of outliers. Also reviewed are robust tests, non-linear modeling, goodness-of-fit testing, Bhattacharya models, item response modeling, superiority testing, and variability assessment.
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates.
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca; Brink, Carsten; Fogliata, Antonella; Albers, Dirk; Nyström, Håkan; Lassen, Søren
2006-11-21
A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung cancer cases. The TPSs were installed locally at different institutions and commissioned for clinical use based on local procedures. For the evaluation, beam qualities as identical as possible were used: low energy (6 MV) and high energy (15 or 18 MV) x-rays. All relevant anatomical structures were outlined and simple treatment plans were set up. Images, structures and plans were exported, anonymized and distributed to the participating institutions using the DICOM protocol. The plans were then re-calculated locally and exported back for evaluation. The TPSs cover dose calculation techniques from correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very inhomogeneous lung region resulted in less accurate dose distributions. Improvements in the calculated dose were shown when models consider volume scatter and changes in electron transport, especially when the extent of the irradiated volume was limited and when low densities were present in or adjacent to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced, which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type (b) group exhibit changes in both absorbed dose and its distribution which are congruent with the simulations performed by the Monte Carlo-based virtual accelerator.
Is CT-based perfusion and collateral imaging sensitive to time since stroke onset?
Directory of Open Access Journals (Sweden)
Smriti eAgarwal
2015-04-01
Full Text Available Purpose: CT-based perfusion and collateral imaging is increasingly used in the assessment of patients with acute stroke. Time of stroke onset is a critical factor in determining eligibility for and benefit from thrombolysis. Animal studies predict that the volume of ischemic penumbra decreases with time. Here we evaluate whether CT can detect a relationship between perfusion or collateral status and time since stroke onset. Materials and Methods: We studied fifty-three consecutive patients with proximal vessel occlusions, mean (SD) age 71.3 (14.9) years, at a mean (SD) of 125.2 (55.3) minutes from onset, using whole-brain CT perfusion (CTP) imaging. Penumbra was defined using voxel-based thresholds for cerebral blood flow (CBF) and mean transit time (MTT); core was defined by cerebral blood volume (CBV). Normalized penumbra fraction was calculated as Penumbra volume / (Penumbra volume + Core volume) for both CBF and MTT (PenCBF and PenMTT, respectively). Collaterals were assessed on CT angiography (CTA). The CTP ASPECTS score was applied visually, with lower scores indicating larger lesions. ASPECTS ratios were calculated corresponding to penumbra fractions. Results: Both PenCBF and PenMTT showed decremental trends with increasing time since onset (Kendall's tau-b = -0.196, p = 0.055, and -0.187, p = 0.068, respectively). The CBF/CBV ASPECTS ratio, which showed a relationship to PenCBF (Kendall's tau-b = 0.190, p = 0.070), decreased with increasing time since onset (Kendall's tau-b = -0.265, p = 0.006). Collateral response did not relate to time (Kendall's tau-b = -0.039, p = 0.724). Conclusion: Even within 4.5 h of stroke onset, a decremental relationship between penumbra and time, but not between collateral status and time, may be detected using perfusion CT imaging. The trends we demonstrate merit evaluation in larger datasets to confirm our results, and may have wider applications, e.g. in the setting of strokes of unknown onset time.
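The record above computes a normalized penumbra fraction and Kendall rank correlations against time. A minimal stand-alone sketch of both quantities (the function names and the times/volumes below are illustrative only; the sketch uses Kendall's tau-a, which coincides with the tau-b reported in the study when there are no ties):

```python
def penumbra_fraction(penumbra_ml, core_ml):
    """Normalized penumbra fraction: Pen / (Pen + Core)."""
    total = penumbra_ml + core_ml
    return penumbra_ml / total if total > 0 else 0.0

def kendall_tau(x, y):
    """Kendall's tau-a (equals tau-b when there are no ties)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Made-up onset-to-imaging times (min) and penumbra/core volumes (ml):
times = [60, 90, 120, 150]
fractions = [penumbra_fraction(p, c) for p, c in [(45, 5), (40, 10), (35, 15), (30, 20)]]
tau = kendall_tau(times, fractions)  # negative tau: penumbra shrinks with time
```

A strictly shrinking penumbra over time gives tau = -1; the study's much weaker tau values (around -0.2) reflect noisy clinical data.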
Directory of Open Access Journals (Sweden)
Huijun Xu
2016-01-01
Full Text Available Many clinics still use monitor unit (MU) calculations for electron treatment planning and/or quality assurance (QA). This work (1) investigates the clinical implementation of a dosimetry system comprising a modified American Association of Physicists in Medicine Task Group 71 (TG-71)-based electron MU calculation protocol (modified TG-71 electron [mTG-71E]) and an independent commercial calculation program, and (2) provides practice recommendations for clinical usage. Following the recently published TG-71 guidance, an organized mTG-71E databook was developed to facilitate data access and subsequent MU computation according to our clinical need. A recently released commercial secondary calculation program, Mobius3D (version 1.5.1) Electron Quick Calc (EQC; Mobius Medical Systems, LP, Houston, TX, USA), with an inherent pencil beam algorithm and independent beam data, was used to corroborate the calculation results. For various setups, the calculation consistency and accuracy of mTG-71E and EQC were validated by cross-comparison and by ion chamber measurements in a solid water phantom. Our results show good agreement between mTG-71E and EQC calculations, with an average difference of 2%. Both mTG-71E and EQC calculations match measurements within 3%. In general, these differences increase with decreased cutout size, increased extended source-to-surface distance, and lower energy. It is feasible to use TG-71 and Mobius3D clinically as primary and secondary electron MU calculations, or vice versa. We recommend a practice that requires patient-specific measurements only in the rare cases when mTG-71E and EQC calculations differ by 5% or more.
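The recommended 5% action level reduces to a simple cross-check between the primary and secondary MU calculations. A minimal sketch (the function names are mine, not from the paper):

```python
def percent_difference(mu_primary, mu_secondary):
    """Secondary-vs-primary MU difference, as a percentage of the primary."""
    return 100.0 * (mu_secondary - mu_primary) / mu_primary

def needs_patient_specific_measurement(mu_primary, mu_secondary, action_level=5.0):
    """Flag a plan for measurement when the two calculations disagree by >= action_level %."""
    return abs(percent_difference(mu_primary, mu_secondary)) >= action_level

# Example: 204 MU vs 200 MU is a 2% difference (no measurement needed);
# 213 MU vs 200 MU is 6.5% and would trigger a patient-specific measurement.
```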
Energy Technology Data Exchange (ETDEWEB)
Jo, Sun Mi; Chun, Mi Sun; Kim, Mi Hwa; Oh, Young Taek; Noh, O Kyu [Ajou University School of Medicine, Seoul (Korea, Republic of); Kang, Seung Hee [Inje University, Ilsan Paik Hospital, Ilsan (Korea, Republic of)
2010-11-15
Simulation using computed tomography (CT) is now widely available for radiation treatment planning for breast cancer. It is an important tool to help define the tumor target and normal tissue based on the anatomical features of an individual patient. In Korea, most patients have small-sized breasts, and the purpose of this study was to review the margin of the treatment field between conventional two-dimensional (2D) planning and CT-based three-dimensional (3D) planning in patients with small breasts. Twenty-five consecutive patients with early breast cancer undergoing breast conservation therapy were selected. All patients underwent 3D CT-based planning with a conventional breast tangential field design. In 2D planning, the treatment field margins were determined by palpation of the breast parenchyma (in general, superior: base of the clavicle; medial: midline; lateral: mid-axillary line; inferior: 2 cm below the inframammary fold). In 3D planning, the clinical target volume (CTV) comprised all glandular breast tissue, and the planning target volume (PTV) was obtained by adding a 3D margin of 1 cm around the CTV except in the skin direction. The differences in treatment field margin and equivalent field size between 2D and 3D planning were evaluated. The association between radiation field margins and factors such as body mass index (BMI), menopause status, and bra size was determined. Lung volume and heart volume were examined on the basis of the prescribed breast radiation dose and the 3D dose distribution. The margins of the treatment field were smaller in 3D planning except for two patients. The superior margin was especially variable (average, 2.5 cm; range, -2.5 to 4.5 cm; SD, 1.85). The margin of these targets did not vary equally across BMI class, menopause status, or bra size. The average irradiated lung volume was significantly lower for 3D planning. The average irradiated heart volume did not decrease significantly. The use of 3D CT based planning reduced the
Energy Technology Data Exchange (ETDEWEB)
Chen, S; Le, Q; Mutaf, Y; Yi, B; D’Souza, W [University of Maryland School of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: To assess the dose calculation accuracy of cone-beam CT (CBCT) based treatment plans using a patient-specific stepwise CT-density conversion table, in comparison to conventional CT-based treatment plans. Methods: Unlike CT-based treatment planning, which uses a fixed CT-density table, this study used a patient-specific CT-density table to minimize errors in reconstructed mass densities due to CBCT Hounsfield unit (HU) uncertainties. The patient-specific CT-density table was a stepwise function mapping HUs to only six classes of materials with different mass densities: air (0.00121 g/cm3), lung (0.26 g/cm3), adipose (0.95 g/cm3), tissue (1.05 g/cm3), cartilage/bone (1.6 g/cm3), and other (3 g/cm3). The HU thresholds defining the materials were adjusted for each CBCT via best match with the known tissue types in those images. Dose distributions were compared between CT-based plans and CBCT-based plans (IMRT/VMAT) for four treatment sites: head and neck (HN), lung, pancreas, and pelvis. For dosimetric comparison, the PTV mean dose in both plans was compared. A gamma analysis was also performed to directly compare dosimetry in the two plans. Results: Compared to CT-based plans, the differences in PTV mean dose were 0.1% for pelvis, 1.1% for pancreas, 1.8% for lung, and −2.5% for HN in CBCT-based plans. The gamma passing rate was 99.8% for pelvis, 99.6% for pancreas, and 99.3% for lung with 3%/3 mm criteria, and 80.5% for head and neck with 5%/3 mm criteria. Different dosimetric accuracy levels were observed: 1% for pelvis, 3% for lung and pancreas, and 5% for head and neck. Conclusion: By converting CBCT data to six classes of materials for dose calculation, 3% dose calculation accuracy can be achieved for the anatomical sites studied here, except HN, which had 5% accuracy. CBCT-based treatment planning using a patient-specific stepwise CT-density table can facilitate the evaluation of dosimetry changes resulting from variation in patient anatomy.
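The stepwise CT-density table described above amounts to a threshold lookup from HU to one of six mass densities. A minimal sketch; the HU edges below are illustrative assumptions (the study tuned them per CBCT by matching known tissue types), while the six densities are taken from the abstract:

```python
import bisect

# Upper HU edges separating the six classes (illustrative values, not the paper's):
HU_EDGES = [-850, -200, -20, 150, 1500]
# Densities in g/cm3: air, lung, adipose, tissue, cartilage/bone, other
DENSITIES = [0.00121, 0.26, 0.95, 1.05, 1.6, 3.0]

def hu_to_density(hu):
    """Map a CBCT Hounsfield unit to one of six material mass densities."""
    return DENSITIES[bisect.bisect_right(HU_EDGES, hu)]

# Example: hu_to_density(-1000) gives air, hu_to_density(0) gives tissue.
```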
Energy Technology Data Exchange (ETDEWEB)
An, Chansik; Lee, Hye-Jeong; Ahn, Sung Soo; Choi, Byoung Wook; Kim, Myeong-Jin; Chung, Yong Eun [Severance Hospital, Yonsei University College of Medicine, Department of Radiology, Research Institute of Radiological Science, 50 Yonsei-Ro, Seodaemun-Gu, Seoul (Korea, Republic of); Lee, Hye Sun [Yonsei University College of Medicine, Biostatistics Collaboration Unit, Department of Research Affairs, Seoul (Korea, Republic of)
2014-10-15
To assess the value of a CT-based abdominal aortic calcification (AAC) score as a surrogate marker for the presence of asymptomatic coronary artery disease (CAD). The AAC scores of 373 patients without cardiac symptoms who underwent both screening coronary CT angiography and abdominal CT within one year were calculated according to the Agatston method. Logistic regression was used to derive two multivariate models from traditional cardiovascular risk factors, with and without AAC scores, to predict the presence of CAD. The AAC score and the two multivariate models were compared by calculating the area under the receiver operating characteristic curve (AUC) and the net reclassification improvement (NRI). The AAC score alone showed a marginally higher AUC (0.823 vs. 0.767, P = 0.061) and significantly better risk classification (NRI = 0.158, P = 0.048) than the multivariate model without AAC. The multivariate model using traditional factors and AAC did not show a significantly higher AUC (0.832 vs. 0.823, P = 0.616) or NRI (0.073, P = 0.13) than the AAC score alone. The optimal cutoff value of the AAC score for predicting CAD was 1025.8 (sensitivity, 79.5 %; specificity, 75.9 %). AAC scores may serve as a surrogate marker for the presence or absence of asymptomatic CAD. (orig.)
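The discrimination (AUC) and cutoff (sensitivity/specificity) analysis above can be reproduced with a rank-based AUC and a simple threshold classifier. A hedged sketch with toy data, not the study's 373-patient cohort:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, cutoff):
    """Classify score >= cutoff as disease-positive; return (sensitivity, specificity)."""
    tp = sum(s >= cutoff and l == 1 for s, l in zip(scores, labels))
    fn = sum(s < cutoff and l == 1 for s, l in zip(scores, labels))
    tn = sum(s < cutoff and l == 0 for s, l in zip(scores, labels))
    fp = sum(s >= cutoff and l == 0 for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

The study's optimal AAC cutoff (1025.8) would be found by sweeping `cutoff` over observed scores and maximizing, e.g., Youden's index.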
Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C
2011-01-01
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
Energy Technology Data Exchange (ETDEWEB)
Fotina, Irina; Kragl, Gabriele; Kroupa, Bernhard; Trausmuth, Robert; Georg, Dietmar [Medical Univ. Vienna (Austria). Division of Medical Radiation Physics, Dept. of Radiotherapy
2011-07-15
Comparison of the dosimetric accuracy of the enhanced collapsed cone (eCC) algorithm with commercially available Monte Carlo (MC) dose calculation for complex treatment techniques. A total of 8 intensity-modulated radiotherapy (IMRT) and 2 stereotactic body radiotherapy (SBRT) lung cases were calculated with the eCC and MC algorithms using the treatment planning systems (TPS) Oncentra MasterPlan 3.2 (Nucletron) and Monaco 2.01 (Elekta/CMS). Fluence optimization as well as sequencing of IMRT plans was primarily performed using Monaco. Dose prediction errors were calculated using MC as reference. The dose-volume histogram (DVH) analysis was complemented with 2D and 3D gamma evaluation. Both algorithms were compared to measurements using the Delta4 system (ScandiDos). IMRT plans recalculated with eCC resulted in lower planning target volume (PTV) coverage, as well as in organ-at-risk (OAR) doses lower by up to 8%. Small deviations between MC and eCC in PTV dose (1-2%) were detected for IMRT cases, while larger deviations were observed for SBRT (up to 5%). Conformity indices of both calculations were similar; however, the homogeneity of the eCC-calculated plans was slightly better. Delta4 measurements confirmed the high dosimetric accuracy of both TPS. Mean dose prediction errors < 3% for the PTV suggest that both algorithms enable highly accurate dose calculations under clinical conditions. However, users should be aware of slightly underestimated OAR doses when using the eCC algorithm. (orig.)
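The gamma evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force 1-D sketch of the global gamma index (a deliberate simplification of the 2D/3D analysis actually used; function names are mine):

```python
import math

def gamma_1d(ref, eva, positions, dose_pct=3.0, dta_mm=3.0):
    """Global 1-D gamma: dose criterion expressed as % of the reference maximum.

    For each reference point, minimize the combined dose/distance metric
    over all evaluated points; gamma <= 1 means the point passes."""
    d_crit = dose_pct / 100.0 * max(ref)
    gammas = []
    for xr, dr in zip(positions, ref):
        g = min(math.hypot((xe - xr) / dta_mm, (de - dr) / d_crit)
                for xe, de in zip(positions, eva))
        gammas.append(g)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

Identical profiles give gamma = 0 everywhere (a 100% passing rate); real 2D/3D implementations additionally interpolate the evaluated distribution between grid points.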
Institute of Scientific and Technical Information of China (English)
于洁; 张娜; 陈诚豪; 曹隽; 曾骐
2014-01-01
calculated by CT-based pulmonary volumetric evaluation is significantly correlated with PFT results and this method is a useful way to evaluate the lung volume in clinical practice.
Brons, S; Elsässer, T; Ferrari, A; Gadioli, E; Mairani, A; Parodi, K; Sala, P; Scholz, M; Sommerer, F
2010-01-01
Monte Carlo codes are rapidly spreading in the hadron therapy community due to their sophisticated nuclear and electromagnetic models, which allow an improved description of the complex mixed radiation field produced by nuclear reactions during therapeutic irradiation. In this contribution, results obtained with the Monte Carlo code FLUKA are presented, focusing on the production of secondary fragments in carbon-ion interactions with water and on CT-based calculations of absorbed and biological effective dose for typical clinical situations. The results of the simulations are compared with the available experimental data and with the predictions of the GSI analytical treatment planning code TRiP.
4D cone beam CT-based dose assessment for SBRT lung cancer treatment
Cai, Weixing; Dhou, Salam; Cifter, Fulya; Myronakis, Marios; Hurwitz, Martina H.; Williams, Christopher L.; Berbeco, Ross I.; Seco, Joao; Lewis, John H.
2016-01-01
The purpose of this research is to develop a 4DCBCT-based dose assessment method for calculating actual delivered dose for patients with significant respiratory motion or anatomical changes during the course of SBRT. To address the limitation of 4DCT-based dose assessment, we propose to calculate the delivered dose using time-varying (‘fluoroscopic’) 3D patient images generated from a 4DCBCT-based motion model. The method includes four steps: (1) before each treatment, 4DCBCT data is acquired with the patient in treatment position, based on which a patient-specific motion model is created using a principal components analysis algorithm. (2) During treatment, 2D time-varying kV projection images are continuously acquired, from which time-varying ‘fluoroscopic’ 3D images of the patient are reconstructed using the motion model. (3) Lateral truncation artifacts are corrected using planning 4DCT images. (4) The 3D dose distribution is computed for each timepoint in the set of 3D fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach is validated using six modified XCAT phantoms with lung tumors and different respiratory motions derived from patient data. The estimated doses are compared to those calculated using ground-truth XCAT phantoms. For each XCAT phantom, the calculated delivered tumor dose values generally follow the same trend as the ground truth, and at most timepoints the difference is less than 5%. For the overall delivered dose, the normalized error of the calculated 3D dose distribution is generally less than 3% and the tumor D95 error is less than 1.5%. XCAT phantom studies indicate the potential of the proposed method to accurately estimate 3D tumor dose distributions for SBRT lung treatment based on 4DCBCT imaging and motion modeling. Further research is necessary to investigate its performance for clinical patient data.
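In step (2) above, each ‘fluoroscopic’ 3D image is synthesized from the PCA motion model as the mean plus a weighted sum of principal components. A toy sketch of that reconstruction step (in practice the vectors are flattened 3D images or displacement fields, and the per-timepoint weights are estimated by matching the 2D kV projection; the function name is mine):

```python
def reconstruct(mean, components, weights):
    """Synthesize one 'fluoroscopic' volume: mean + sum_k w_k * PC_k.

    mean: flattened mean volume; components: list of principal components
    (each the same length as mean); weights: per-timepoint PCA scores."""
    return [m + sum(w * c[i] for w, c in zip(weights, components))
            for i, m in enumerate(mean)]

# Toy example with a 2-voxel "volume" and two components:
volume = reconstruct([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0])  # [3.0, 4.0]
```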
Energy Technology Data Exchange (ETDEWEB)
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)
2008-10-15
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map ({mu}map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated {mu}maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique
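The final step above, converting CT numbers to linear attenuation coefficients at 511 keV, is commonly implemented as a piecewise (bilinear) calibration curve. A sketch with illustrative slope values in the range of published bilinear curves, not the calibration actually used in this study:

```python
MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

def hu_to_mu_511(hu, slope_soft=9.6e-5, slope_bone=5.1e-5):
    """Piecewise (bilinear) CT-number-to-mu conversion at 511 keV.

    Below water (HU <= 0) mu scales linearly from air to water; above water
    a shallower slope accounts for the different energy dependence of bone.
    Both slopes here are illustrative assumptions."""
    if hu <= 0:
        return max(0.0, slope_soft * (hu + 1000.0))
    return slope_soft * 1000.0 + slope_bone * hu
```

With these values, air (-1000 HU) maps to 0, water (0 HU) to 0.096 cm^-1, and dense bone (+1000 HU) to 0.147 cm^-1. The SCC step described above matters precisely because contrast-enhanced pixels would otherwise fall on the bone branch of this curve.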
Energy Technology Data Exchange (ETDEWEB)
Abdoli, Mehrsima; Jong, Johan R. de; Pruim, Jan; Dierckx, Rudi A.J.O. [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Zaidi, Habib [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland)
2011-12-15
Metallic prosthetic replacements, such as hip or knee implants, are known to cause strong streaking artefacts in CT images. These artefacts likely induce over- or underestimation of the activity concentration near the metallic implants when applying CT-based attenuation correction of positron emission tomography (PET) images. Since this degrades the diagnostic quality of the images, metal artefact reduction (MAR) prior to attenuation correction is required. The proposed MAR method, referred to as the virtual sinogram-based technique, replaces the projection bins of the sinogram that are influenced by metallic implants using a 2-D Clough-Tocher cubic interpolation scheme performed on an irregular grid (a Delaunay triangulation). To assess the performance of the proposed method, a physical phantom and 30 clinical PET/CT studies including hip prostheses were used. The results were compared to the method implemented on the Siemens Biograph mCT PET/CT scanner. Both phantom and clinical studies revealed that the proposed method performs as well as the Siemens MAR method in the regions corresponding to bright streaking artefacts and the artefact-free regions. However, in regions corresponding to dark streaking artefacts, the Siemens method does not seem to appropriately correct the tracer uptake, while the proposed method consistently increased the uptake in the underestimated regions, thus bringing it to the expected level. This observation is corroborated by the experimental phantom study, which demonstrates that the proposed method approaches the true activity concentration more closely. The proposed MAR method allows more accurate CT-based attenuation correction of PET images and prevents misinterpretation of tracer uptake, which might be biased owing to the propagation of bright and dark streaking artefacts from CT images to the PET data following the attenuation correction procedure. (orig.)
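The MAR method above replaces metal-affected sinogram bins with values interpolated from unaffected neighbours. As a simple stand-in for the paper's 2-D Clough-Tocher scheme on a Delaunay grid, here is a 1-D linear gap fill over a single sinogram row (the function name and the 1-D simplification are mine):

```python
def interpolate_metal_bins(row, metal_mask):
    """Fill metal-flagged bins in one sinogram row by linear interpolation
    between the nearest unaffected neighbours on either side of each gap."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if metal_mask[i]:
            j = i
            while j < n and metal_mask[j]:
                j += 1  # find end of the contiguous metal gap
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            gap = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / gap
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out
```

The 2-D scheme used in the paper interpolates smoothly across neighbouring projection angles as well, which avoids the streak-like residuals a row-by-row fill can leave.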
Energy Technology Data Exchange (ETDEWEB)
Nichols, Trent L. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37901 (United States)], E-mail: tnichol2@utk.edu; Kabalka, George W. [Department of Chemistry, University of Tennessee, Knoxville, TN 37901 (United States); Miller, Laurence F. [Department of Nuclear and Radiological Engineering, University of Tennessee, Knoxville, TN 37901 (United States); McCormack, Michael T. [Department of Medicine, University of Tennessee Graduate School of Medicine, Knoxville, TN 37920 (United States); Johnson, Andrew [Rush University Medical Center, Chicago, IL 60612 (United States)
2009-07-15
Boron neutron capture therapy has now been used for several malignancies. Most clinical trials have addressed its use for the treatment of glioblastoma multiforme. A few trials have focused on the treatment of malignant melanoma with brain metastases. Trial results for the treatment of glioblastoma multiforme have been encouraging, but have not achieved the success anticipated. Results of trials for the treatment of malignant melanoma have been very promising, though with too few patients for conclusions to be drawn. Subsequent to these trials, regimens for undifferentiated thyroid carcinoma, hepatic metastases from adenocarcinoma of the colon, and head and neck malignancies have been developed. These tumors have also responded well to boron neutron capture therapy. Glioblastoma is an infiltrative tumor with distant individual tumor cells that might create a mechanism for therapeutic failure though recurrences are often local. The microdosimetry of boron neutron capture therapy can provide an explanation for this observation. Codes written to examine the micrometer scale energy deposition in boron neutron capture therapy have been used to explore the effects of near neighbor cells. Near neighbor cells can contribute a significantly increased dose depending on the geometric relationships. Different geometries demonstrate that tumors which grow by direct extension have a greater near neighbor effect, whereas infiltrative tumors lose this near neighbor dose which can be a significant decrease in dose to the cells that do not achieve optimal boron loading. This understanding helps to explain prior trial results and implies that tumors with small, closely packed cells that grow by direct extension will be the most amenable to boron neutron capture therapy.
Energy Technology Data Exchange (ETDEWEB)
Littooij, Annemieke S. [University Medical Centre Utrecht/Wilhelmina Children' s Hospital, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); KK Women' s and Children' s Hospital, Department of Diagnostic and Interventional Imaging, Singapore (Singapore); Kwee, Thomas C.; Vermoolen, Malou A.; Keizer, Bart de; Beek, Frederik J.A.; Hobbelink, Monique G.; Nievelstein, Rutger A.J. [University Medical Centre Utrecht/Wilhelmina Children' s Hospital, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); Barber, Ignasi; Enriquez, Goya [Hospital Materno-Infantil Vall d' Hebron, Department of Paediatric Radiology, Barcelona (Spain); Granata, Claudio [IRCCS Giannina Gaslini Hospital, Department of Radiology, Genoa (Italy); Zsiros, Jozsef [University of Amsterdam, Department of Paediatric Oncology, Emma Children' s Hospital, Academic Medical Centre, Amsterdam (Netherlands); Soh, Shui Yen [KK Women' s and Children' s Hospital, Haematology and Oncology service, Department of Paediatric Subspecialities, Singapore (Singapore); Bierings, Marc B. [University Medical Centre Utrecht/Wilhelmina Children' s Hospital, Department of Paediatric Haematology-Oncology, Utrecht (Netherlands); Stoker, Jaap [University of Amsterdam, Department of Radiology, Academic Medical Centre, Amsterdam (Netherlands)
2014-05-15
To compare whole-body MRI, including diffusion-weighted imaging (whole-body MRI-DWI), with FDG-PET/CT for staging newly diagnosed paediatric lymphoma. A total of 36 children with newly diagnosed lymphoma prospectively underwent both whole-body MRI-DWI and FDG-PET/CT. Whole-body MRI-DWI was successfully performed in 33 patients (mean age 13.9 years). Whole-body MRI-DWI was independently evaluated by two blinded observers. After consensus reading, an unblinded expert panel evaluated the discrepant findings between whole-body MRI-DWI and FDG-PET/CT and used bone marrow biopsy, other imaging data and clinical information to derive an FDG-PET/CT-based reference standard. Interobserver agreement of whole-body MRI-DWI was good [all nodal sites together (κ = 0.79); all extranodal sites together (κ = 0.69)]. There was very good agreement between consensus whole-body MRI-DWI and the FDG-PET/CT-based reference standard for nodal (κ = 0.91) and extranodal (κ = 0.94) staging. The sensitivity and specificity of consensus whole-body MRI-DWI were 93 % and 98 % for nodal staging and 89 % and 100 % for extranodal staging, respectively. Following removal of MRI reader errors, the disease stage according to whole-body MRI-DWI agreed with the reference standard in 28 of 33 patients. Our results indicate that whole-body MRI-DWI is feasible for staging paediatric lymphoma and could potentially serve as a good radiation-free alternative to FDG-PET/CT. (orig.)
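The κ statistics above measure chance-corrected agreement between raters (or between a rater and the reference standard). A minimal sketch of Cohen's kappa over per-site ratings (the toy labels are illustrative, not the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement expected from each rater's marginal label frequencies."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Example: 4/5 raw agreement but kappa ~0.62 once chance agreement is removed.
k = cohens_kappa([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])
```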
Energy Technology Data Exchange (ETDEWEB)
Yang, Xiaofeng, E-mail: xyang43@emory.edu; Rossi, Peter; Ogunleye, Tomi; Marcus, David M.; Jani, Ashesh B.; Curran, Walter J.; Liu, Tian [Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322 (United States); Mao, Hui [Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322 (United States)
2014-11-01
.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. Conclusions: The authors have developed a novel approach to improve prostate contour utilizing intraoperative TRUS-based prostate volume in the CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineations, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
Effect of adult weight and CT-based selection on rabbit meat quality
Directory of Open Access Journals (Sweden)
Zsolt Szendrő
2010-01-01
Full Text Available This study compared the meat quality of different genotypes. Maternal (M; adult weight [AW] = 4.0-4.5 kg; selected for the number of kits born alive), Pannon White (P; AW = 4.3-4.8 kg) and Large type (L; AW = 4.8-5.4 kg) rabbits were analysed. The P and L genotypes were selected for carcass traits based on CT (computed tomography) data. Rabbits were slaughtered at 11 wk of age, and hindleg (HL) meat and M. longissimus dorsi (LD) were analysed for proximate composition and fatty acid (FA) profile. Proximate composition was unaffected by the selection programme, even though the meat of P rabbits was leaner and had higher ash content (P<0.10). The LD meat of P rabbits exhibited significantly lower MUFA content than M and L rabbits (25.4 vs 28.0 vs 27.7%; P<0.01) and higher PUFA content than M rabbits (31.9 vs 24.9%; P<0.05). This study revealed that long-term CT-based selection is effective in increasing meat leanness and PUFA content.
What is the benefit of CT-based attenuation correction in myocardial perfusion SPET?
Apostolopoulos, Dimitrios J; Savvopoulos, Christos
2016-01-01
In multimodality imaging, CT-derived transmission maps are used for attenuation correction (AC) of SPET or PET data. Regarding SPET myocardial perfusion imaging (MPI), however, the benefit of CT-based AC (CT-AC) has been questioned. Although most attenuation-related artifacts are removed by this technique, new false defects may appear while some true perfusion abnormalities may be masked. The merits and the drawbacks of CT-AC in MPI SPET are reviewed and discussed in this editorial. In conclusion, CT-AC is most helpful in men, overweight in particular, and in those with low or low to intermediate pre-test probability of coronary artery disease (CAD). It is also useful for the evaluation of myocardial viability. In high-risk patients though, CT-AC may underestimate the presence or the extent of CAD. In any case, corrected and non-corrected images should be viewed side-by-side and both considered in the interpretation of the study.
Current concepts in F18 FDG PET/CT-based Radiation Therapy planning for Lung Cancer
Directory of Open Access Journals (Sweden)
Percy eLee
2012-07-01
Full Text Available Radiation therapy is an important component of cancer therapy for early stage as well as locally advanced lung cancer. The use of F18 FDG PET/CT has come to the forefront of lung cancer staging and overall treatment decision-making. FDG PET/CT parameters such as standard uptake value and metabolic tumor volume provide important prognostic and predictive information in lung cancer. Importantly, FDG PET/CT for radiation planning has added biological information in defining the gross tumor volume as well as involved nodal disease. For example, accurate target delineation between tumor and atelectasis is facilitated by utilizing PET and CT imaging. Furthermore, there has been meaningful progress in incorporating metabolic information from FDG PET/CT imaging in radiation treatment planning strategies such as radiation dose escalation based on standard uptake value thresholds as well as using respiratory gated PET and CT planning for improved target delineation of moving targets. In addition, PET/CT based follow-up after radiation therapy has provided the possibility of early detection of local as well as distant recurrences after treatment. More research is needed to incorporate other biomarkers such as proliferative and hypoxia biomarkers in PET as well as integrating metabolic information in adaptive, patient-centered, tailored radiation therapy.
CT based treatment planning system of proton beam therapy for ocular melanoma
Energy Technology Data Exchange (ETDEWEB)
Nakano, Takashi E-mail: tnakano@med.gunma-u.ac.jp; Kanai, Tatsuaki; Furukawa, Shigeo; Shibayama, Kouichi; Sato, Sinichiro; Hiraoka, Takeshi; Morita, Shinroku; Tsujii, Hirohiko
2003-09-01
A computed tomography (CT) based treatment planning system for proton beam therapy was established specifically for ocular melanoma treatment. Collimated proton beams with a maximum energy of 70 MeV are used for treating ocular melanoma. The vertical proton beam line has a range modulator for spreading out the beam, a multi-leaf collimator, an aperture, a light-beam localizer, a field light, and an X-ray verification system. The treatment planning program includes an eye model, selection of the best direction of gaze, design of the aperture shape, determination of the proton range and range modulation necessary to encompass the target volume, and indication of the relative positions of the eyes, the beam center, and the beam aperture. Tumor contours are extracted from CT/MRI images of 1 mm slice thickness, assisted by information from fundus photography and ultrasonography. The CT image-based treatment system for ocular melanoma is useful for Japanese patients, who tend to have a thick choroid membrane, in terms of sparing dose to the skin and normal organs in the eye. The characteristics of the system and its merits and demerits are reported.
Energy Technology Data Exchange (ETDEWEB)
Faught, A [UT MD Anderson Cancer Center, Houston, TX (United States); University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, TX (United States); Davidson, S [University of Texas Medical Branch of Galveston, Galveston, TX (United States); Kry, S; Ibbott, G; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States); Fontenot, J [Mary Bird Perkins Cancer Center, Baton Rouge, LA (United States); Etzel, C [Consortium of Rheumatology Researchers of North America (CORRONA), Inc., Southborough, MA (United States)
2014-06-01
Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck IMRT using a custom phantom designed to utilize multiple dosimetry devices. Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. Energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions so that calculated percent depth dose (PDD) data matched water-tank measurements for a 10×10 cm2 field. Off-axis effects were modeled by a 3rd-degree polynomial describing the off-axis half-value layer as a function of off-axis angle, and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm2 field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm2 to 30×30 cm2 to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2 mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2 mm gamma criterion for the 6 and 10 MV models, respectively. Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10 MV therapeutic x-ray beams has been developed as a
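The 2%/2 mm gamma comparison used to validate such models can be illustrated with a minimal 1-D sketch. This is a generic illustration, not the authors' code; the function name and the global-normalisation choice are assumptions:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.02, dist_tol=2.0):
    """1-D gamma analysis with global dose normalisation.

    ref_dose, eval_dose : dose samples on the same spatial grid
    positions           : sample positions in mm
    dose_tol            : dose criterion as a fraction of the reference maximum
    dist_tol            : distance-to-agreement criterion in mm
    Returns the fraction of reference points with gamma <= 1.
    """
    ref_max = ref_dose.max()
    passed = 0
    for x_r, d_r in zip(positions, ref_dose):
        # gamma at a reference point = minimum combined dose/distance
        # penalty over all evaluated points
        dose_term = ((eval_dose - d_r) / (dose_tol * ref_max)) ** 2
        dist_term = ((positions - x_r) / dist_tol) ** 2
        if np.sqrt(dose_term + dist_term).min() <= 1.0:
            passed += 1
    return passed / len(ref_dose)
```

A curve rescaled by 1% passes everywhere under a 2%/2 mm criterion, while a 10% rescaling fails near the peak, which matches the intuition behind the pass rates quoted above.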
Energy Technology Data Exchange (ETDEWEB)
Christersson, Albert; Larsson, Sune [Uppsala University, Department of Orthopaedics, Uppsala (Sweden); Nysjoe, Johan; Malmberg, Filip; Sintorn, Ida-Maria; Nystroem, Ingela [Uppsala University, Centre for Image Analysis, Uppsala (Sweden); Berglund, Lars [Uppsala University, Uppsala Clinical Research Centre, UCR Statistics, Uppsala (Sweden)
2016-06-15
The aim of the present study was to compare the reliability and agreement of a computed tomography-based method (CT) and digitalised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT at six time points over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors and repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were ±4.4°, ±1.9° and ±6.8°, respectively. For scientific purposes, the reliability of XR seems unacceptably low when measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher; the CT-based method is therefore preferable when a more precise method is required. (orig.)
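Bland-Altman limits of agreement of the kind reported above reduce to the mean and standard deviation of the paired differences. A generic sketch; the sample angulation values below are invented, not the study's data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman analysis of paired measurements.

    Returns (bias, lower, upper): the mean difference a - b and the
    95% limits of agreement, bias +/- 1.96 * SD of the differences.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # sample SD of the paired differences
    return bias, bias - half_width, bias + half_width
```

A narrower interval between the lower and upper limits corresponds to the better (CT) reliability reported above.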
Energy Technology Data Exchange (ETDEWEB)
Maneru, F; Gracia, M; Gallardo, N; Olasolo, J; Fuentemilla, N; Bragado, L; Martin-Albina, M; Lozares, S; Pellejero, S; Miquelez, S; Rubio, A [Complejo Hospitalario de Navarra, Pamplona, Navarra (Spain); Otal, A [Hospital Clinica Benidorm, Benidorm, Alicante (Spain)
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the pre-treatment simulation were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring: small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes: export and import of images and other DICOM files, creating and calculating a dummy external radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to be carried out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than
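The core of the VSV method is the convolution of the cumulated-activity map with a local dose-deposition kernel. A direct-convolution sketch, not the authors' implementation; array shapes and units are assumptions:

```python
import numpy as np

def vsv_dose(activity, s_kernel):
    """Voxel-S-value dose: convolve a cumulated-activity map (e.g. MBq*s
    per voxel) with an S-value kernel (Gy per MBq*s) of odd side lengths.
    Written as a correlation, which equals convolution for the symmetric
    kernels used in VSV dosimetry."""
    kz, ky, kx = s_kernel.shape
    padded = np.pad(activity, ((kz // 2,) * 2, (ky // 2,) * 2, (kx // 2,) * 2))
    nz, ny, nx = activity.shape
    dose = np.zeros(activity.shape, dtype=float)
    for dz in range(kz):
        for dy in range(ky):
            for dx in range(kx):
                # shift-and-accumulate: each kernel element spreads dose
                # from neighbouring voxels into the current voxel
                dose += s_kernel[dz, dy, dx] * padded[dz:dz + nz,
                                                      dy:dy + ny,
                                                      dx:dx + nx]
    return dose
```

For clinical matrix sizes an FFT-based convolution is much faster, but the direct form makes the voxel-wise bookkeeping explicit.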
Accuracy of the phase space evolution dose calculation model for clinical 25 MeV electron beams
Energy Technology Data Exchange (ETDEWEB)
Korevaar, Erik W. [Daniel den Hoed Cancer Center, University Hospital Rotterdam, PO Box 5201, 3008 AE Rotterdam (Netherlands). E-mail: korevaar at kfih.azr.nl; Akhiat, Abdelhafid; Heijmen, Ben J.M. [Daniel den Hoed Cancer Center, University Hospital Rotterdam, PO Box 5201, 3008 AE Rotterdam (Netherlands); Huizenga, Henk [Joint Center for Radiation Oncology Arnhem-Nijmegen, University Medical Center Nijmegen, PO Box 9101, 6500 HB Nijmegen (Netherlands)
2000-10-01
The phase space evolution (PSE) model is a dose calculation model for electron beams in radiation oncology developed with the aim of a higher accuracy than the commonly used pencil beam (PB) models and with shorter calculation times than needed for Monte Carlo (MC) calculations. In this paper the accuracy of the PSE model has been investigated for 25 MeV electron beams of a MM50 racetrack microtron (Scanditronix Medical AB, Sweden) and compared with the results of a PB model. Measurements have been performed for tests like non-standard SSD, irregularly shaped fields, oblique incidence and in phantoms with heterogeneities of air, bone and lung. MC calculations have been performed as well, to reveal possible errors in the measurements and/or possible inaccuracies in the interaction data used for the bone and lung substitute materials. Results show a good agreement between PSE calculated dose distributions and measurements. For all points the differences - in absolute dose - were generally well within 3% and 3 mm. However, the PSE model was found to be less accurate in large regions of low-density material and errors of up to 6% were found for the lung phantom. Results of the PB model show larger deviations, with differences of up to 6% and 6 mm and of up to 10% for the lung phantom; at shortened SSDs the dose was overestimated by up to 6%. The agreement between MC calculations and measurement was good. For the bone and the lung phantom maximum deviations of 4% and 3% were found, caused by uncertainties about the actual interaction data. In conclusion, using the phase space evolution model, absolute 3D dose distributions of 25 MeV electron beams can be calculated with sufficient accuracy in most cases. The accuracy is significantly better than for a pencil beam model. In regions of lung tissue, a Monte Carlo model yields more accurate results than the current implementation of the PSE model. (author)
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation; therefore, patient-specific CT-DICOM files were converted to the PHITS input format. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file was used for MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
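A CT-to-MC conversion of the kind described hinges on mapping each voxel's CT number to a material class and mass density. A minimal sketch; the thresholds and calibration breakpoints are illustrative, not the authors' values (a real conversion uses a measured scanner calibration curve):

```python
# Illustrative HU thresholds for coarse material classes.
MATERIAL_BINS = [(-2000, 'air'), (-850, 'lung'), (-200, 'adipose'),
                 (-30, 'soft_tissue'), (200, 'bone')]

def hu_to_material(hu):
    """Return the coarse material class for one CT number (HU)."""
    label = MATERIAL_BINS[0][1]
    for threshold, name in MATERIAL_BINS:
        if hu >= threshold:
            label = name
    return label

def hu_to_density(hu):
    """Piecewise-linear mass density in g/cm^3 (illustrative breakpoints,
    clipped to a small positive floor for air)."""
    if hu <= 0:
        return max(0.001, 1.0 + hu / 1000.0)
    return 1.0 + hu / 1600.0
```

Each voxel then contributes a (material, density) pair to the MC input deck, which is exactly the information a PHITS-style geometry needs per lattice cell.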
Directory of Open Access Journals (Sweden)
Anders Chen
BACKGROUND: Oral pre-exposure prophylaxis (PrEP) can be clinically effective and cost-effective for HIV prevention in high-risk men who have sex with men (MSM). However, individual patients have different risk profiles, real-world populations vary, and no practical tools exist to guide clinical decisions or public health strategies. We introduce a practical model of HIV acquisition, including both a personalized risk calculator for clinical management and a cost-effectiveness calculator for population-level decisions. METHODS: We developed a decision-analytic model of PrEP for MSM. The primary clinical effectiveness and cost-effectiveness outcomes were the number needed to treat (NNT) to prevent one HIV infection, and the cost per quality-adjusted life-year (QALY) gained. We characterized patients according to risk factors including PrEP adherence, condom use, sexual frequency, background HIV prevalence and antiretroviral therapy use. RESULTS: With standard PrEP adherence and national epidemiologic parameters, the estimated NNT was 64 (95% uncertainty range: 26, 176) at a cost of $160,000 (cost saving, $740,000) per QALY, comparable to other published models. With high (35%) HIV prevalence, the NNT was 35 (21, 57) and the cost per QALY was $27,000 (cost saving, $160,000); with high PrEP adherence, the NNT was 30 (14, 69) and the cost per QALY was $3,000 (cost saving, $200,000). In contrast, for monogamous, serodiscordant relationships with partner antiretroviral therapy use, the NNT was 90 (39, 157) and the cost per QALY was $280,000 ($14,000, $670,000). CONCLUSIONS: PrEP results vary widely across individuals and populations. Risk calculators may aid in patient education, clinical decision-making, and cost-effectiveness evaluation.
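The two headline quantities, NNT and cost per QALY, follow from simple arithmetic. A generic sketch with invented example numbers, not the model's internals:

```python
def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / absolute risk reduction, where the absolute risk
    reduction is the baseline infection risk times the relative risk
    reduction attributable to the intervention."""
    return 1.0 / (baseline_risk * relative_risk_reduction)

def cost_per_qaly(incremental_cost, qalys_gained):
    """Incremental cost-effectiveness ratio (here: PrEP vs. no PrEP)."""
    return incremental_cost / qalys_gained
```

For instance, a 5% baseline risk halved by the intervention gives an NNT of 40, illustrating why NNT falls sharply as prevalence or adherence rises.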
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Hwan; Kim, Hyuk Jung; Jang, Suk Ki; Yeon, Jae Woo [Dept. of Radiology, Daejin Medical Center Bundang Jesaeng General Hospital, Seongnam (Korea, Republic of); Ko, You Sun; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)
2015-08-15
The purpose of this report is to retrospectively analyze the need for surgery, and the recurrence rate, using a CT-based method in patients with right colonic diverticulitis. For the purposes of our study, we included 416 patients with a mean age of 41.9 years (238 of them men) with a diagnosis of colonic diverticulitis based on CT findings. These findings were reviewed by two independent radiologists, who localized the diverticulitis and staged it using a modified Hinchey classification. We were able to follow up with 384 patients over a period of 30 months. Of the 416 patients, 396 had right colonic diverticulitis. In right colonic diverticulitis, the κ value for determining the modified Hinchey classification was 0.80, and 98.2% (389/396) of the patients had stage Ia-II disease. The surgery rate was 4.6% (17/366) for right and 28% (5/18) for left colonic diverticulitis (p < 0.001). In right colonic diverticulitis, the surgery rate was 2.8% (10/359) for stages Ia-II, while all seven patients with stage III or IV underwent surgery. The recurrence rate was 6.5% (23/356) for right and 15% (2/13) for left colonic diverticulitis (p = 0.224). The CT-based modified Hinchey classification of right colonic diverticulitis showed good interobserver agreement. Most patients with right colonic diverticulitis had lower stages (Ia-II) at the time of CT, rarely needed surgery, and had a low recurrence rate.
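The interobserver κ reported above is an unweighted Cohen's kappa, computable directly from the two readers' stage assignments. A generic sketch; the example labels are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from the two raters' marginal label frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement gives κ = 1; a κ of 0.80, as reported, indicates agreement well above chance.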
CT based computerized identification and analysis of human airways: a review.
Pu, Jiantao; Gu, Suicheng; Liu, Shusen; Zhu, Shaocheng; Wilson, David; Siegfried, Jill M; Gur, David
2012-05-01
As one of the most prevalent chronic disorders, airway disease is a major cause of morbidity and mortality worldwide. In order to understand its underlying mechanisms and to enable assessment of therapeutic efficacy of a variety of possible interventions, noninvasive investigation of the airways in a large number of subjects is of great research interest. Due to its high resolution in temporal and spatial domains, computed tomography (CT) has been widely used in clinical practices for studying the normal and abnormal manifestations of lung diseases, albeit there is a need to clearly demonstrate the benefits in light of the cost and radiation dose associated with CT examinations performed for the purpose of airway analysis. Whereas a single CT examination consists of a large number of images, manually identifying airway morphological characteristics and computing features to enable thorough investigations of airway and other lung diseases is very time-consuming and susceptible to errors. Hence, automated and semiautomated computerized analysis of human airways is becoming an important research area in medical imaging. A number of computerized techniques have been developed to date for the analysis of lung airways. In this review, we present a summary of the primary methods developed for computerized analysis of human airways, including airway segmentation, airway labeling, and airway morphometry, as well as a number of computer-aided clinical applications, such as virtual bronchoscopy. Both successes and underlying limitations of these approaches are discussed, while highlighting areas that may require additional work.
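As a flavour of the segmentation step surveyed above, airway lumen extraction is commonly seeded region growing over low-HU voxels. A deliberately simplified fixed-threshold sketch; real methods adapt the threshold to avoid leaking into the lung parenchyma:

```python
import numpy as np
from collections import deque

def region_grow(volume_hu, seed, threshold=-950.0):
    """Seeded 6-connected region growing for an air-filled lumen:
    collect connected voxels at or below an HU threshold, starting
    from a seed placed inside the trachea."""
    mask = np.zeros(volume_hu.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or volume_hu[z, y, x] > threshold:
            continue  # already visited, or not air-like
        mask[z, y, x] = True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                    and 0 <= nx < mask.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```

The labeling and morphometry steps discussed in the review then operate on the resulting binary airway mask.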
Energy Technology Data Exchange (ETDEWEB)
Kaufmann, S., E-mail: sascha.kaufmann@med.uni-tuebingen.de [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Strasse 3, 72076 Tübingen (Germany); Horger, T., E-mail: horger@ma.tum.de [Technische Universität München, Boltzmannstraße 3, 85748 Garching (Germany); Oelker, A., E-mail: oelker@ma.tum.de [Technische Universität München, Boltzmannstraße 3, 85748 Garching (Germany); Kloth, C., E-mail: christopher.kloth@med.uni-tuebingen.de [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Strasse 3, 72076 Tübingen (Germany); Nikolaou, K., E-mail: Konstantin.Nikolaou@med.uni-tuebingen.de [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Strasse 3, 72076 Tübingen (Germany); Schulze, M., E-mail: maximilian.schulze@med.uni-tuebingen.de [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Strasse 3, 72076 Tübingen (Germany); Horger, M., E-mail: marius.horger@med.uni-tuebingen.de [Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Strasse 3, 72076 Tübingen (Germany)
2015-06-15
Highlights: • Quantification of perfusion with VPCT has great potential for functional imaging. • We present our preliminary results for perfusion parameters (blood flow, blood volume and k-trans) of hepatocellular carcinoma (HCC) obtained using VPCT and two different calculation methods, compare their results, and look for a correlation between tumor arterialization and lesion size. • VPCT can measure tumor volume perfusion non-invasively and enables quantification of the degree of HCC arterialization. Results are dependent on the technique used, with the best inter-method correlation for blood flow. • Tumor arterialization did not prove to be size-dependent. - Abstract: Objective: To characterize hepatocellular carcinoma (HCC) in terms of perfusion parameters using volume perfusion CT (VPCT) and two different calculation methods, compare their results, and look for interobserver agreement of measurements and a correlation between tumor arterialization and lesion size. Material and methods: This study was part of a prospective monitoring study in patients with HCC undergoing TACE, which was approved by the local Institutional Review Board. 79 HCC patients (mean age, 64.7 years) with liver cirrhosis were enrolled. VPCT was performed for 40 s covering the involved liver (80 kV, 100/120 mAs) using 64 mm × 0.6 mm collimation, 26 consecutive volume measurements, 50 mL iodinated contrast IV and a 5 mL/s flow rate. Mean/maximum blood flow (BF; mL/100 mL/min), blood volume (BV) and k-trans were determined with both the maximum slope + Patlak and the deconvolution method. Additionally, the portal venous liver perfusion (PVP), the arterial liver perfusion (ALP) and the hepatic perfusion index (HPI) were determined for each tumor, including size measurements. Interobserver agreement for all perfusion parameters was calculated using intraclass correlation coefficients (ICC). Results: The max. slope + Patlak method yielded: BFmean/max = 37.8/57 mL/100 g tissue/min, BVmean/max = 9.8/11.1 mL/100 g
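The maximum-slope blood flow estimate used above divides the steepest tissue upslope by the peak arterial enhancement. A minimal sketch; the function name, example curve and unit handling are assumptions, not the vendor's implementation:

```python
import numpy as np

def max_slope_bf(tissue_hu, times_s, aif_peak_hu):
    """Maximum-slope blood flow estimate.

    BF = max upslope of the tissue enhancement curve (HU/s) divided by
    the peak arterial enhancement (HU), rescaled to mL/100 mL/min.
    """
    slopes = np.diff(tissue_hu) / np.diff(times_s)  # HU per second
    return slopes.max() / aif_peak_hu * 60.0 * 100.0
```

The deconvolution method instead recovers the full tissue residue function, which is one reason the two techniques agree best only for blood flow.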
Validation of a deformable image registration technique for cone beam CT-based dose verification
Energy Technology Data Exchange (ETDEWEB)
Moteabbed, M., E-mail: mmoteabbed@partners.org; Sharp, G. C.; Wang, Y.; Trofimov, A.; Efstathiou, J. A.; Lu, H.-M. [Massachusetts General Hospital, Boston, Massachusetts 02114 and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2015-01-15
Purpose: As radiation therapy evolves toward more adaptive techniques, image guidance plays an increasingly important role, not only in patient setup but also in monitoring the delivered dose and adapting the treatment to patient changes. This study aimed to validate a method for evaluation of delivered intensity modulated radiotherapy (IMRT) dose based on multimodal deformable image registration (DIR) for prostate treatments. Methods: A pelvic phantom was scanned with CT and cone-beam computed tomography (CBCT). Both images were digitally deformed using two realistic patient-based deformation fields. The original CT was then registered to the deformed CBCT, resulting in a secondary deformed CT. The registration quality was assessed as the ability of the DIR method to recover the artificially induced deformations. The primary and secondary deformed CT images as well as the vector fields were compared to evaluate the efficacy of the registration method and its suitability for dose calculation. PLASTIMATCH, a free and open-source software package, was used for deformable image registration. A B-spline algorithm with optimized parameters was used to achieve the best registration quality. Geometric image evaluation was performed through voxel-based Hounsfield unit (HU) and vector field comparison. For dosimetric evaluation, IMRT treatment plans were created and optimized on the original CT image and recomputed on the two warped images to be compared. The dose volume histograms were compared for the warped structures that were identical in both warped images. This procedure was repeated for the phantom with full, half-full, and empty bladder. Results: The results indicated mean HU differences of up to 120 between registered and ground-truth deformed CT images. However, when the CBCT intensities were calibrated using a region of interest (ROI)-based calibration curve, these differences were reduced by up to 60%. Similarly, the mean differences in average vector field
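Quantifying how well registration recovers a known, artificially applied deformation reduces to comparing the two vector fields voxel by voxel. A sketch; the field layout and units are assumptions:

```python
import numpy as np

def recovery_error(true_field, recovered_field):
    """Mean Euclidean error between the artificially applied deformation
    field and the field recovered by registration. Fields are arrays of
    shape (3, nz, ny, nx) holding z/y/x displacements, e.g. in mm."""
    diff = np.asarray(true_field, float) - np.asarray(recovered_field, float)
    # per-voxel error magnitude, then averaged over the volume
    return np.sqrt((diff ** 2).sum(axis=0)).mean()
```

A small mean error justifies reusing the recovered field to warp structures and recompute dose, as done in the study.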
SURGICAL TREATMENT OF DIASTEMATOMYELIA USING CT-BASED NAVIGATION SYSTEM (CASE REPORT)
Directory of Open Access Journals (Sweden)
S. V. Vissarionov
2013-01-01
The authors present the clinical observation of a 14-year-old patient with a congenital malformation of the spinal canal associated with congenital scoliosis and multiple vertebral malformations. The main congenital malformation was diastematomyelia type I at the Th11-Th12 level, with fixed spinal cord syndrome and flail legs. The surgery was performed as follows: removal of the bone septum of the spinal canal and elimination of the spinal cord fixation using 3D computer navigation. The 3D navigation allowed the location of the bone septum to be detected exactly, creating conditions for reducing the extent of surgical access and minimizing the area of the approach to the bone spicule. These factors made it possible to manage the postoperative period without additional external orthotics. The patient was observed for 1 year and 7 months after surgery.
Abdoli, Mehrsima; Ay, Mohammad Reza; Ahmadian, Alireza; Dierckx, Rudi A. J. O.; Zaidi, Habib
2010-01-01
Purpose: The presence of metallic dental fillings is prevalent in head and neck PET/CT imaging and generates bright and dark streaking artifacts in reconstructed CT images. The resulting artifacts would propagate to the corresponding PET images following CT-based attenuation correction (CTAC). This
Harnish, Roy; Prevrhal, Sven; Alavi, Abass; Zaidi, Habib; Lang, Thomas F.
2014-01-01
To determine if metal artefact reduction (MAR) combined with a priori knowledge of prosthesis material composition can be applied to obtain CT-based attenuation maps with sufficient accuracy for quantitative assessment of F-18-fluorodeoxyglucose uptake in lesions near metallic prostheses. A custom h
Abdoli, Mehrsima; de Jong, Johan R.; Pruim, Jan; Dierckx, Rudi A. J. O.; Zaidi, Habib
2011-01-01
Purpose: Metallic prosthetic replacements, such as hip or knee implants, are known to cause strong streaking artefacts in CT images. These artefacts likely induce over- or underestimation of the activity concentration near the metallic implants when applying CT-based attenuation correction of positron
de Souza, Leonardo Cordeiro; Lugon, Jocemir Ronaldo
2015-01-01
OBJECTIVE: The use of the rapid shallow breathing index (RSBI) is recommended in ICUs, where it is used as a predictor of mechanical ventilation (MV) weaning success. The aim of this study was to compare the performance of the RSBI calculated by the traditional method (described in 1991) with that of the RSBI calculated directly from MV parameters. METHODS: This was a prospective observational study involving patients who had been on MV for more than 24 h and were candidates for weaning. The RSBI was obtained by the same examiner using the two different methods (employing a spirometer and the parameters from the ventilator display) at random. In comparing the values obtained with the two methods, we used the Mann-Whitney test, Pearson's linear correlation test, and Bland-Altman plots. The performance of the methods was compared by evaluation of the areas under the ROC curves. RESULTS: Of the 109 selected patients (60 males; mean age, 62 ± 20 years), 65 were successfully weaned, and 36 died. There were statistically significant differences between the two methods for respiratory rate, tidal volume, and RSBI (p < 0.001 for all). However, when the two methods were compared, the concordance and the intra-observer variation coefficient were 0.94 (0.92-0.96) and 11.16%, respectively. The area under the ROC curve was similar for both methods (0.81 ± 0.04 vs. 0.82 ± 0.04; p = 0.935), which is relevant in the context of this study. CONCLUSIONS: The satisfactory performance of the RSBI as a predictor of weaning success, regardless of the method employed, demonstrates the utility of the method using the mechanical ventilator. PMID:26785962
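Computed either way, the index itself is just respiratory rate over tidal volume. A one-function sketch; the threshold in the comment is the classic 1991 cut-off, not this study's result:

```python
def rsbi(respiratory_rate_bpm, tidal_volume_ml):
    """Rapid shallow breathing index f/VT in breaths/min/L; values
    below ~105 are the classic predictor of weaning success."""
    return respiratory_rate_bpm / (tidal_volume_ml / 1000.0)
```

The study's point is that f and VT read off the ventilator display feed this same formula as well as spirometer measurements do.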
Directory of Open Access Journals (Sweden)
Vahid Moslemi
2011-03-01
Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo based dose calculation for brachytherapy treatment planning, using interface software to create the geometry file required by the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient's anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, the preparation time for the simulations of the applicator and surrounding structures was approximately 3 minutes; the corresponding time needed for conventional MCNP geometry entry was approximately 1 hour. The discrepancy between the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in the rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a less than 1
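The interface step, turning a CT slice into MCNP geometry, amounts to binning HU values into material numbers and writing them out in lattice form. A toy sketch; the thresholds, material ids and the FILL-style layout are assumptions, and a real converter also emits the matching cell, material and density cards:

```python
import numpy as np

def hu_slice_to_fill(hu_slice, thresholds=(-850, -200, 200),
                     material_ids=(1, 2, 3, 4)):
    """Map a 2-D HU array to material numbers (here: air, lung, soft
    tissue, bone) and return them as rows of a FILL-card-like block."""
    idx = np.digitize(hu_slice, thresholds)   # bin index 0..3 per voxel
    ids = np.asarray(material_ids)[idx]
    return '\n'.join(' '.join(str(m) for m in row) for row in ids)
```

Automating this per-voxel bookkeeping is exactly what cuts the geometry set-up time from about an hour of manual MCNP entry to a few minutes.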
Energy Technology Data Exchange (ETDEWEB)
England, Andrew, E-mail: a.england@liv.ac.u [Directorate of Medical Imaging and Radiotherapy, University of Liverpool, Johnston Building, Quadrangle, Brownlow Hill, Liverpool L69 3GB (United Kingdom); Best, Abigail; Friend, Charlotte [Directorate of Medical Imaging and Radiotherapy, University of Liverpool, Johnston Building, Quadrangle, Brownlow Hill, Liverpool L69 3GB (United Kingdom)
2010-11-15
Aim: To evaluate the variability of CT AAA measurements undertaken by radiologists and radiographers. Methods: 19 observers (4 radiologists, 15 radiographers) were invited to independently measure maximum aneurysm diameter (Dmax) on ten CT scans. Each CT scan was presented randomly to each observer; four were duplicates testing intra-observer variability. All measurements were undertaken from axial CT images using electronic callipers, and all observers were blinded to any previous measurements. Both the slice number and the maximum AAA diameter (in any plane) were recorded. Results: Intra-observer variability was lower for radiographers, with a mean paired difference of -0.18 ± 2.6 mm compared to -2.1 ± 3.5 mm (P = 0.054). Inter-observer variability within each observer group was comparable: radiographers 0.1 ± 5.0 mm; radiologists -0.1 ± 3.1 mm (P = 0.680). When directly comparing the two groups, the mean difference was -2.0 ± 4.0 mm, with 43% of paired measurements differing by ≤2 mm and 78% by ≤5 mm. Slice selection was less variable between the two groups, with 88% of repeat radiographer measurements within ±1 slice and 91% of radiologists' measurements within ±1 slice (P = 0.228). Conclusion: The accuracy of radiographers in performing AAA CT measurements is encouraging. Variability exists for both professions, and in some instances may be clinically significant. Observers should be aware of measurement variability issues and have an understanding of the factors responsible. Careful and repeat measurements of AAAs around 5.5 cm are recommended in order to define treatment.
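The intra- and inter-observer statistics quoted above are mean paired differences with their standard deviations (Bland-Altman style), which can be computed as in this small sketch; the Dmax values are invented for illustration.

```python
import statistics

def paired_variability(first, second):
    """Mean and SD of paired differences (second - first) between
    repeat measurements, as used for observer-variability reporting."""
    diffs = [b - a for a, b in zip(first, second)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical repeat Dmax measurements (mm) by one observer
first_read = [55.0, 48.2, 61.5, 52.3]
second_read = [54.6, 48.0, 61.9, 52.1]
mean_diff, sd_diff = paired_variability(first_read, second_read)
print(f"{mean_diff:+.2f} ± {sd_diff:.2f} mm")
```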
Kozłowski, Sławomir; Pietraszek, Andrzej; Pietrzykowska-Kuncman, Malwina; Danielska, Justyna; Sobotkowski, Janusz; Łuniewska-Bury, Jolanta; Fijuth, Jacek
2016-01-01
Purpose Brachytherapy (BT), owing to its rapid dose fall-off and minor set-up errors, should be superior to external beam radiotherapy (EBRT) for treatment of lesions in difficult locations such as the nose and earlobe. Evidence in this field is scarce. We describe computed tomography (CT) based surface mould BT for non-melanoma skin cancers (NMSC), and compare its conformity, dose coverage, and tissue sparing ability to EBRT. Material and methods We describe the procedure for preparing the surface mould applicator and the dosimetry parameters of the BT plans, which were implemented in 10 individuals with NMSC of the nose and earlobe. We evaluated dose coverage by the minimal dose to 90% of the planning target volume (PTV) (D90), volumes of PTV receiving 90-150% of the prescribed dose (PD) (VPTV90-150), the conformal index for 90 and 100% of PD (COIN90, COIN100), the dose homogeneity index (DHI), the dose nonuniformity ratio (DNR), and exposure of organs. Prospectively, we created CT-based photon and electron plans. We compared conformity (COIN90, COIN100), dose coverage of the PTV (D90, VPTV90, VPTV100), and volumes of the body receiving 10-90% of PD (V10-V90) for the EBRT and BT plans. Results We obtained mean BT-DHI = 0.76, BT-DNR = 0.23, EBRT-DHI = 1.26. We observed no significant differences in VPTV90 and D90 between BT and EBRT. Mean BT-VPTV100 (89.4%) was higher than EBRT-VPTV100 (71.2%). Both COIN90 (BT-COIN90 = 0.46 vs. EBRT-COIN90 = 0.21) and COIN100 (BT-COIN100 = 0.52 vs. EBRT-COIN100 = 0.26) were superior for BT plans. We observed more exposure of normal tissues at low doses in the BT plans (V10, V20) and at high doses in the EBRT plans (V70, V90). Conclusions Computed tomography-based surface mould brachytherapy for superficial lesions on irregular surfaces is a highly conformal method with good homogeneity. Brachytherapy is superior to EBRT in these locations in terms of conformity and normal tissue sparing ability at high doses. PMID:27504128
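For readers unfamiliar with the indices above: DNR is commonly defined as the ratio of the volume receiving ≥150% of the prescribed dose to that receiving ≥100%, DHI as 1 − DNR (note the reported BT-DHI of 0.76 and BT-DNR of 0.23 are consistent with this), and COIN as the product of PTV coverage and the fraction of the reference-dose volume falling inside the PTV. A minimal sketch with hypothetical volumes; the authors' exact definitions may differ slightly.

```python
def dnr(v100, v150):
    """Dose nonuniformity ratio: fraction of the reference-dose volume
    that also receives >=150% of the prescribed dose."""
    return v150 / v100

def dhi(v100, v150):
    """Dose homogeneity index, commonly DHI = 1 - DNR."""
    return 1.0 - dnr(v100, v150)

def coin(ptv_ref, v_ptv, v_ref):
    """Conformal index: (PTV covered by reference dose / PTV volume)
    * (PTV covered / total volume covered by reference dose)."""
    return (ptv_ref / v_ptv) * (ptv_ref / v_ref)

# Hypothetical volumes (cm^3)
v100, v150 = 10.0, 2.3
ptv_ref, v_ptv, v_ref = 8.9, 10.0, 15.0
print(dhi(v100, v150), coin(ptv_ref, v_ptv, v_ref))
```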
Directory of Open Access Journals (Sweden)
Charlotte A. Brassey
2016-01-01
Full Text Available The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r² = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based
Brassey, Charlotte A; O'Mahoney, Thomas G; Kitchener, Andrew C; Manning, Phillip L; Sellers, William I
2016-01-01
The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit 'convex hulls' around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r² = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8-10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6-14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a means of
O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.
2016-01-01
The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r² = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a
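The reported coefficients (a = −2.31, b = 0.90) suggest a log-log regression of body mass on convex hull volume. A sketch of applying such a power-law model follows; the log10 form and the units (kg, cm³) are assumptions, as the abstract does not state them.

```python
import math

def predict_mass(hull_volume, a=-2.31, b=0.90):
    """Power-law mass prediction: log10(mass) = a + b * log10(volume).
    Units (kg, cm^3) and the log-log form are assumed for illustration."""
    return 10 ** (a + b * math.log10(hull_volume))

# A hypothetical convex-hull volume in the range that would yield the
# reported 8-10.8 kg eviscerated mass estimates
m = predict_mass(4500.0)
print(round(m, 2))
```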
Nizard, Remy S; Porcher, Raphael; Ravaud, Philippe; Vangaver, Edouard; Hannouche, Didier; Bizot, Pascal; Sedel, Laurent
2004-08-01
Most of the early failures of total knee replacements are related to technical flaws. Conventional ancillary devices achieve good alignment in the frontal plane in only an average of 75% of total knee replacements. Computer-assisted surgery may improve the technical quality of implantation surgery. The aim of our study was to evaluate the use of computer-assisted surgery using a quality control process. Seventy-eight total knee arthroplasties were done with a CT-based computer-assisted surgery system. The outcomes studied were alignment of the lower limb, implant positioning, and operative time. The target for alignment was 180° ± 3°. Cusum analysis showed that the three outcomes were controlled during the study, and the cusum test flagged any outliers as they occurred. Because few data were available at the beginning of this study regarding computer-assisted surgery for total knee replacement, a randomized study was not relevant. However, control of the procedure was mandatory. The cusum technique allowed continuous evaluation of the performance of the new procedure, and is a useful tool in assessing new technology. The results of this study showed that it is possible to do a randomized study to determine if computer-assisted surgery can improve the technical result of total knee replacement.
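A tabular CUSUM of the kind used for such quality control accumulates deviations from the target (here 180°) and raises an alarm when a decision limit is exceeded. The slack and limit parameters below are illustrative choices, not the study's values.

```python
def cusum(values, target=180.0, slack=1.5, limit=5.0):
    """Tabular CUSUM for monitoring limb alignment against a target.
    slack (k) and limit (h) are illustrative tuning parameters.
    Returns the final upper/lower sums and a per-sample alarm flag."""
    hi = lo = 0.0
    alarms = []
    for x in values:
        hi = max(0.0, hi + (x - target) - slack)  # upward drift
        lo = max(0.0, lo + (target - x) - slack)  # downward drift
        alarms.append(hi > limit or lo > limit)
    return hi, lo, alarms

# Hypothetical post-operative alignment angles (degrees)
angles = [181, 179, 180, 182, 178, 181, 180]
hi, lo, alarms = cusum(angles)
print(alarms)
```

For this well-controlled series the sums never exceed the decision limit, so no alarm is raised.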
Qin, Le; Li, Mei; Yao, Weiwu; Shen, Ji
2017-01-01
We aimed to assess CT-based bony tunnel evaluations and their correlation with knee function after triple surgery for patellar dislocation. A retrospective study was performed on 66 patients (70 knees) who underwent triple surgery for patellar dislocation. The surgery consisted primarily of MPFL reconstruction, combined with lateral retinaculum release and tibial tubercle osteotomy. CT examinations were performed 3 days and more than 1 year after the operation to determine the femoral tunnel position, along with the patellar and femoral tunnel widths. Functional evaluation based on Kujala and Lysholm scores was also implemented. We compared tunnel widths between the first and last examinations and correlated the femoral tunnel position at the last examination with knee function. At the last follow-up, femoral tunnel position in the anterior-posterior direction was moderately correlated with knee function. Femoral tunnel position in the proximal-distal direction was not associated with postoperative knee function. Patellar and femoral tunnel widths had increased significantly at the last follow-up. However, no significant functional difference was found between patients with and without femoral tunnel enlargement. Our results suggest that femoral tunnel malposition in the anterior-posterior direction, as assessed on CT, is related to impaired knee function at follow-up.
Energy Technology Data Exchange (ETDEWEB)
Schadewaldt, N; Schulz, H; Helle, M; Renisch, S [Philips Research Laboratories Hamburg, Hamburg (Germany); Frantzen-Steneker, M; Heide, U [The Netherlands Cancer Institute, Amsterdam (Netherlands)
2014-06-01
Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images, and a subsequent automatic threshold divided bone into cortical and cancellous classes. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle³. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, in which HU values were sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters for the PTV and organs at risk as used for clinical approval were compared. All deviations were below 1.1%, except the anal sphincter mean dose, which was at most 2.2% but well below the clinical acceptance threshold. Average deviations were below 0.4% for the PTV and organs at risk and 1.3% for the anal sphincter. The deviations of the stratified CT were in the same range as those of the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a
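The simCT synthesis step amounts to replacing each tissue-class label with an average CT number. A minimal sketch follows; the HU values per class are assumptions, since the paper states only that "known average CT values" were assigned.

```python
import numpy as np

# Hypothetical mean CT numbers (HU) per tissue class
CLASS_HU = {
    "air": -1000,
    "water": 0,             # water / soft-tissue class
    "fat": -90,
    "cancellous_bone": 300,
    "cortical_bone": 1200,
}

def labels_to_simct(label_map):
    """Build a simulated-CT HU volume from a tissue-class label map."""
    hu = np.empty(label_map.shape, dtype=float)
    for name, value in CLASS_HU.items():
        hu[label_map == name] = value
    return hu

labels = np.array([["air", "fat"], ["water", "cortical_bone"]])
out = labels_to_simct(labels)
print(out)
```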
Korreman, Stine; Rasch, Coen; McNair, Helen; Verellen, Dirk; Oelfke, Uwe; Maingon, Philippe; Mijnheer, Ben; Khoo, Vincent
2010-02-01
The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statements with evidence-based and pragmatic guidelines on topics of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It provides an overview and the current standing of 3DCT-IGRT systems, addressing the rationale, objectives, principles, applications, and process pathways, both clinical and technical, for treatment delivery and quality assurance. These are reviewed for four categories of solutions: kV CT and kV CBCT (cone-beam CT) as well as MV CT and MV CBCT. It also provides a framework and checklist to consider the capability and functionality of these systems as well as the resources needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these clinical centres demonstrates a wide variability based on local practices. This report, whilst comprehensive, is not exhaustive, as this area of development remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the field, focussed on clinicians, physicists and radiation therapy technologists interested in IGRT.
Energy Technology Data Exchange (ETDEWEB)
Izquierdo-Garcia, David [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA (United States); Sawiak, Stephen J. [University of Cambridge, Wolfson Brain Imaging Centre, Cambridge (United Kingdom); Knesaurek, Karin; Machac, Joseph [Mount Sinai School of Medicine, Division of Nuclear Medicine, Department of Radiology, New York, NY (United States); Narula, Jagat [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Fuster, Valentin [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); The Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid (Spain); Fayad, Zahi A. [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States)
2014-08-15
The objective of this study was to evaluate the performance of the built-in MR-based attenuation correction (MRAC) included in the combined whole-body Ingenuity TF PET/MR scanner and compare it to the performance of CT-based attenuation correction (CTAC) as the gold standard. Included in the study were 26 patients who underwent clinical whole-body FDG PET/CT imaging and subsequently PET/MR imaging (mean delay 100 min). Patients were separated into two groups: the alpha group (14 patients) without MR coils during PET/MR imaging and the beta group (12 patients) with MR coils present (neurovascular, spine, cardiac and torso coils). All images were coregistered to the same space (PET/MR). The two PET images from PET/MR reconstructed using MRAC and CTAC were compared by voxel-based and region-based methods (with ten regions of interest, ROIs). Lesions were also compared by an experienced clinician. Body mass index and lung density showed significant differences between the alpha and beta groups. Right and left lung densities were also significantly different within each group. The percentage differences in uptake values using MRAC in relation to those using CTAC were greater in the beta group than in the alpha group (alpha group -0.2 ± 33.6 %, R² = 0.98, p < 0.001; beta group 10.31 ± 69.86 %, R² = 0.97, p < 0.001). In comparison to CTAC, MRAC led to underestimation of the PET values by less than 10 % on average, although some ROIs and lesions did differ by more (including the spine, lung and heart). The beta group (imaged with coils present) showed increased overall PET quantification as well as increased variability compared to the alpha group (imaged without coils). PET data reconstructed with MRAC and CTAC showed some differences, mostly in relation to air pockets, metallic implants and attenuation differences in large bone areas (such as the pelvis and spine) due to the segmentation limitation of the MRAC method. (orig.)
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose difference was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation which resulted in pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%). The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in
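One of the confounding factors quantified above is the calibration curve converting CT numbers into relative electron density, typically applied as a piecewise-linear lookup. The calibration points below are hypothetical; clinical curves are scanner-specific and measured with a density phantom.

```python
import numpy as np

# Hypothetical CT-number-to-relative-electron-density calibration points
HU_POINTS = [-1000, -90, 0, 300, 1200, 3000]
RED_POINTS = [0.001, 0.95, 1.00, 1.18, 1.70, 2.50]

def hu_to_red(hu):
    """Convert Hounsfield units to relative electron density by
    piecewise-linear interpolation of the calibration curve."""
    return np.interp(hu, HU_POINTS, RED_POINTS)

print(hu_to_red(0), hu_to_red(150))
```

Two dose calculations that differ only in which such curve they apply will show systematic dose differences of the kind the study isolates.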
Impact of metallic dental implants on CT-based attenuation correction in a combined PET/CT scanner
Energy Technology Data Exchange (ETDEWEB)
Kamel, Ehab M.; Burger, Cyrill; Buck, Alfred; Schulthess, Gustav K. von; Goerres, Gerhard W. [Division of Nuclear Medicine, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland)
2003-04-01
Our objective was to study the effect of metal-induced artifacts on the accuracy of the CT-based anatomic map as a prerequisite for attenuation correction of the positron emission tomography (PET) emission data. Twenty-seven oncology patients with dental metalwork were enrolled in the present study. Data acquisition was performed on a PET/CT in-line system (Discovery LS, GE Medical Systems, Milwaukee, Wis.). Attenuation correction of the emission data was done twice, using an 80-mA CT scan (PET_CT80) and a ⁶⁸Ge transmission scan (PET_68Ge). The average count in kBq/cc was measured in regions with and without artifacts and compared for PET_CT80 and PET_68Ge. Analysis of regions of interest (ROIs) revealed that both the ratio (ROIs PET_CT80/ROIs PET_68Ge) and the difference (ROIs PET_CT80 minus ROIs PET_68Ge) had a higher mean value in regions with artifacts than in regions without artifacts (1.2 ± 0.17 vs 1.06 ± 0.06 and 0.68 ± 0.67 vs 0.15 ± 0.17 kBq/cc, respectively). For most of the artifactual ROIs studied, the PET_CT80 values were higher than those of PET_68Ge. Attenuation correction of PET emission data using an artifactual CT map yields false values in regions near artifacts caused by dental metalwork. This may bias quantitative PET studies and may disturb the visual interpretation of the PET scan. (orig.)
Influence of CT-based attenuation correction on dopamine transporter SPECT with [(123)I]FP-CIT.
Lapa, Constantin; Spehl, Timo S; Brumberg, Joachim; Isaias, Ioannis U; Schlögl, Susanne; Lassmann, Michael; Herrmann, Ken; Meyer, Philipp T
2015-01-01
Dopamine transporter (DAT) imaging using single-photon emission computed tomography (SPECT) and (123)I-labelled radiopharmaceuticals like [(123)I]FP-CIT is an established part of the diagnostic work-up of parkinsonism. Guidelines recommend attenuation correction (AC), either by a calculated uniform attenuation matrix (calAC) or by a measured attenuation map (nowadays obtained by low-dose CT; CTAC). We explored the impact of CTAC compared to conventional calAC on diagnostic accuracy and the use of DAT availability as a biomarker of nigrostriatal integrity. Integrated SPECT/CT studies with [(123)I]FP-CIT were performed in patients with Parkinson's disease (PD; n = 15) and essential tremor (ET; n = 15). SPECT data were reconstructed with calAC, CTAC and without AC (noAC). Regional DAT availability was assessed by uniform volume-of-interest analyses providing striatal binding potential (BP_ND) estimates. BP_ND values were compared among methods and correlated with clinical parameters. Compared to calAC, both CTAC and noAC provided significantly lower, but highly linearly correlated, BP_ND estimates (R² = 0.96). Diagnostic performance in distinguishing between patients with PD and those with ET was very high and did not differ between AC methods. CTAC and noAC data tended to show a stronger correlation with severity and duration of disease in PD, and with age in ET, than did calAC. Defining the reference region on low-dose CT instead of SPECT did not consistently alter the findings. [(123)I]FP-CIT SPECT provides a very high diagnostic accuracy for differentiation between PD and ET that is not dependent on the AC method employed. Preliminary correlation analyses suggest that BP_ND estimates derived from CTAC represent a superior biomarker of nigrostriatal integrity.
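The striatal binding potential used here is commonly estimated from VOI count densities as the specific-to-nondisplaceable ratio; the count values below are invented for illustration.

```python
def binding_potential(striatal_counts, reference_counts):
    """Non-displaceable binding potential (BP_ND) as commonly estimated
    in [123I]FP-CIT SPECT: (striatal - reference) / reference, with a
    reference region (e.g. occipital cortex) assumed devoid of DAT."""
    return (striatal_counts - reference_counts) / reference_counts

# Hypothetical mean counts per voxel in striatal and reference VOIs
bp = binding_potential(6.0, 2.0)
print(bp)
```

Because the ratio divides out a common scale factor, a uniform change in reconstructed counts (as between AC methods) affects BP_ND only insofar as the scaling differs between the striatal and reference regions.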
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
Lung Dose Calculation With SPECT/CT for ⁹⁰Y Radioembolization of Liver Cancer
Energy Technology Data Exchange (ETDEWEB)
Yu, Naichang, E-mail: yun@ccf.org [Department of Radiation Oncology, Cleveland Clinic, Cleveland, OH (United States); Srinivas, Shaym M.; DiFilippo, Frank P.; Shrikanthan, Sankaran [Department of Nuclear Medicine, Cleveland Clinic, Cleveland, OH (United States); Levitin, Abraham; McLennan, Gordon; Spain, James [Department of Interventional Radiology, Cleveland Clinic, Cleveland, OH (United States); Xia, Ping; Wilkinson, Allan [Department of Radiation Oncology, Cleveland Clinic, Cleveland, OH (United States)
2013-03-01
Purpose: To propose a new method to estimate lung mean dose (LMD) using technetium-99m labeled macroaggregated albumin (⁹⁹ᵐTc-MAA) single photon emission CT (SPECT)/CT for ⁹⁰Y radioembolization of liver tumors, and to compare the LMD estimated using SPECT/CT with clinical estimates of LMD using planar gamma scintigraphy (PS). Methods and Materials: Images of 71 patients who had SPECT/CT and PS images of ⁹⁹ᵐTc-MAA acquired before TheraSphere radioembolization of liver cancer were analyzed retrospectively. LMD was calculated from the PS-based lung shunt assuming a lung mass of 1 kg and 50 Gy per GBq of injected activity shunted to the lung. For the SPECT/CT-based estimate, the LMD was calculated with the activity concentration and lung volume derived from SPECT/CT. The effects of attenuation correction and the patient's breathing on the calculated LMD were studied with SPECT/CT. With these effects taken into account in a more rigorous fashion, we compared the LMD calculated with SPECT/CT with the LMD calculated with PS. Results: The mean dose to the central region of the lung leads to a more accurate estimate of LMD. Inclusion of the lung region around the diaphragm in the calculation leads to an overestimate of LMD because of misregistration of liver activity to the lung caused by the patient's breathing. LMD calculated from PS is a poor predictor of the actual LMD. For the subpopulation with a large lung shunt, the PS method overestimated the lung shunt by a mean of 170%. Conclusions: A new method of calculating the LMD for TheraSphere and SIR-Spheres radioembolization of liver cancer based on ⁹⁹ᵐTc-MAA SPECT/CT is presented. The new method provides a more accurate estimate of the radiation risk to the lungs. For patients with a large lung shunt calculated from PS, recalculation of LMD based on SPECT/CT is recommended.
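The planar-scintigraphy LMD estimate described above (50 Gy per GBq shunted, 1 kg lung mass) and a SPECT/CT-style estimate from activity concentration and CT-derived lung volume can be sketched as follows. The lung density and the exact SPECT/CT dose formula are assumptions, not the paper's implementation.

```python
def lmd_planar(injected_gbq, shunt_fraction):
    """Planar-scintigraphy lung mean dose (Gy): 50 Gy per GBq shunted
    to the lung, assuming a 1 kg lung mass (the clinical convention)."""
    return 50.0 * injected_gbq * shunt_fraction

def lmd_spect(activity_conc_gbq_per_cc, lung_volume_cc,
              gy_per_gbq_per_kg=50.0, lung_density_g_per_cc=0.3):
    """SPECT/CT-style estimate from measured lung activity concentration
    and CT-derived lung volume. Density and dose factor are illustrative
    assumptions, not the paper's exact method."""
    lung_mass_kg = lung_volume_cc * lung_density_g_per_cc / 1000.0
    shunted_gbq = activity_conc_gbq_per_cc * lung_volume_cc
    return gy_per_gbq_per_kg * shunted_gbq / lung_mass_kg

# Hypothetical case: 2 GBq injected with a 5% planar lung shunt
print(lmd_planar(2.0, 0.05))
```

The two estimates diverge when the planar shunt fraction is inflated by liver activity misregistered into the lung, which is the effect the SPECT/CT method corrects.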
Influence of CT-based attenuation correction on dopamine transporter SPECT with [123I]FP-CIT
Lapa, Constantin; Spehl, Timo S; Brumberg, Joachim; Isaias, Ioannis U; Schlögl, Susanne; Lassmann, Michael; Herrmann, Ken; Meyer, Philipp T
2015-01-01
Dopamine transporter (DAT) imaging using single-photon emission computed tomography (SPECT) and 123I-labelled radiopharmaceuticals like [123I]FP-CIT is an established part in the diagnostic work-up of parkinsonism. Guidelines recommend attenuation correction (AC), either by a calculated uniform attenuation matrix (calAC) or by a measured attenuation map (nowadays done by low-dose CT; CTAC). We explored the impact of CTAC compared to conventional calAC on diagnostic accuracy and the use of DAT...
Directory of Open Access Journals (Sweden)
Hiram A. Gay
2013-01-01
Full Text Available Background. To characterize the lung tumor volume response during conventional and hypofractionated radiotherapy (RT) based on diagnostic quality CT images acquired prior to each treatment fraction. Methods. Out of 26 consecutive patients who had received CT-on-rails IGRT to the lung from 2004 to 2008, 18 were selected because they had lung lesions that could be easily distinguished. The time course of the tumor volume for each patient was individually analyzed using a computer program. Results. The model fits for group L (conventional fractionation) patients were very close to the experimental data, with a median Δ% (average percent difference between data and fit) of 5.1% (range 3.5–10.2%). The fits obtained for group S (hypofractionation) patients were generally good, with a median Δ% of 7.2% (range 3.7–23.9%) for the best-fitting model. Four types of tumor responses were observed. Type A: “high” kill and “slow” dying rate; Type B: “high” kill and “fast” dying rate; Type C: “low” kill and “slow” dying rate; and Type D: “low” kill and “fast” dying rate. Conclusions. The models used in this study performed well in fitting the available dataset. The models provided useful insights into the possible underlying mechanisms responsible for the RT tumor volume response.
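The abstract does not reproduce the model equations, but the "kill" and "dying rate" parameters suggest a two-compartment form in which each fraction kills a fixed proportion of viable cells and killed cells are cleared exponentially. One plausible sketch, with purely illustrative parameter values:

```python
import math

def volume_course(n_fractions, kill=0.3, clearance=0.1, v0=100.0):
    """Illustrative two-compartment tumor volume model: each daily
    fraction kills a fixed proportion of viable cells; killed cells are
    cleared exponentially. All parameters are invented for illustration;
    the study's actual equations are not given in the abstract.
    Returns total (viable + dead) volume after each fraction."""
    viable, dead = v0, 0.0
    volumes = []
    for _ in range(n_fractions):
        killed = kill * viable
        viable -= killed
        dead = dead * math.exp(-clearance) + killed
        volumes.append(viable + dead)
    return volumes

course = volume_course(10)
print(course[0], course[-1])
```

A "high kill, slow dying rate" case (Type A) would show a large viable-cell depletion but a slow decline in imaged volume, since killed cells linger; varying the two parameters reproduces the four qualitative response types.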
Energy Technology Data Exchange (ETDEWEB)
De Boever, Wesley, E-mail: Wesley.deboever@ugent.be [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Bultreys, Tom; Derluyn, Hannelore [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Van Hoorebeke, Luc [UGCT/Radiation Physics, Dept. of Physics & Astronomy, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium); Cnudde, Veerle [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium)
2016-06-01
In this paper, we examine the possibility to use on-site permeability measurements for cultural heritage applications as an alternative for traditional laboratory tests such as determination of the capillary absorption coefficient. These on-site measurements, performed with a portable air permeameter, were correlated with the pore network properties of eight sandstones and one granular limestone that are discussed in this paper. The network properties of the 9 materials tested in this study were obtained from micro-computed tomography (μCT) and compared to measurements and calculations of permeability and the capillary absorption rate of the stones under investigation, in order to find the correlation between pore network characteristics and fluid management characteristics of these sandstones. Results show a good correlation between capillary absorption, permeability and network properties, opening the possibility of using on-site permeability measurements as a standard method in cultural heritage applications. - Highlights: • Measurements of capillary absorption are compared to in-situ permeability. • We obtain pore size distribution and connectivity by using micro-CT. • These properties explain correlation between permeability and capillarity. • Correlation between both methods is good to excellent. • Permeability measurements could be a good alternative to capillarity measurement.
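The correlation between pore-network properties, permeability and capillary absorption reported above can be quantified with a plain Pearson coefficient; the sample values below are hypothetical, not the authors' measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical per-stone log-permeability vs. capillary absorption coefficient:
log_k = [0.5, 1.1, 1.8, 2.4, 3.0]
cap_a = [0.9, 2.0, 3.4, 4.6, 5.9]
r = pearson_r(log_k, cap_a)
```

An r close to 1 is what would justify substituting the on-site permeameter reading for the laboratory capillary absorption test.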
Energy Technology Data Exchange (ETDEWEB)
Castro, Robson C. de; Silva, Ademir X. da; Crispim, Verginia R. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: rcastro@con.ufrj.br; Facure, Alessandro; Falcao, Rossana C. [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)]. E-mail: afsoares@cnen.gov.br; Lima, Marco A.F. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Biologia Geral. Lab. de Radiobiologia e Radiometria]. E-mail: egbakel@vm.uff.br
2005-07-01
Radiotherapy with photon and electron beams remains the most widely used technique for controlling and treating tumours. Increasing the treatment efficiency of this technique is linked to increasing the beam energy, which produces fast neutrons in the radiotherapy beams that contribute an undesired dose to the patient. In this work, the equivalent doses in organs arising from photoneutrons generated in the heads of medical linear accelerators operating at 15 MV, 18 MV, 20 MV and 25 MV were calculated using the MCNP4B radiation transport code and a mathematical anthropomorphic phantom. The calculated equivalent doses in the organs defined by ICRP Publication 74 showed variations between 0.11 mSv.n Gy{sup -1} and 7.03 mSv.n Gy{sup -1} for the accelerator operating with 18 MV therapeutic beams, in good agreement with values in the literature. (author)
Institute of Scientific and Technical Information of China (English)
Shintaro; Shirai; Morio; Sato; Yasutaka; Noda; Yoshitaka; Kumayama; Noritaka; Shimizu
2014-01-01
In single photon emission computed tomography-based three-dimensional radiotherapy (SPECT-B-3DCRT), images of Tc-99m galactosyl human serum albumin (GSA), which binds to receptors on functional liver cells, are merged with the computed tomography simulation images. Functional liver is defined as the area of normal liver where GSA accumulation exceeds that of hepatocellular carcinoma (HCC). In cirrhotic patients with a gigantic, proton-beam-untreatable HCC of ≥ 14 cm in diameter, the use of SPECT-B-3DCRT in combination with transcatheter arterial chemoembolization achieved a 2-year local tumor control rate of 78.6% and a 2-year survival rate of 33.3%. SPECT-B-3DCRT was applied to HCC to preserve as much functional liver as possible. Sixty-four patients with HCC, including 30 with Child B liver cirrhosis, received SPECT-B-3DCRT and none experienced fatal radiation-induced liver disease (RILD). The Child-Pugh score deteriorated by 1 or 2 when > 20% of the functional liver volume was irradiated with ≥ 20 Gy. The deterioration in the Child-Pugh score decreased when the radiation plan was designed so that ≤ 20% of the functional liver volume received ≥ 20 Gy (FLV20Gy). Therefore, FLV20Gy ≤ 20% may represent a safety index to prevent RILD during 3DCRT for HCC. To supplement FLV20Gy as a qualitative index, we propose a quantitative indicator, F20Gy, calculated as F20Gy = 100% × (the GSA count in the area irradiated with ≥ 20 Gy)/(the GSA count in the whole liver).
McCarty, George
1982-01-01
How This Book Differs. This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x), for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...
Cortés-Giraldo, M A; Carabe, A
2015-04-07
We compare unrestricted dose average linear energy transfer (LET) maps calculated with three different Monte Carlo scoring methods in voxelized geometries irradiated with proton therapy beams. Simulations were done with the Geant4 (Geometry ANd Tracking) toolkit. The first method corresponds to a step-by-step computation of LET, which has been reported previously in the literature. We found that this scoring strategy is influenced by spurious high-LET components, whose relative contribution to the dose average LET calculations increases significantly as the voxel size becomes smaller. Dose average LET values calculated for primary protons in water with a voxel size of 0.2 mm were a factor of ~1.8 higher than those obtained with a size of 2.0 mm at the plateau region for a 160 MeV beam. Such high-LET components are a consequence of proton steps in which the condensed-history algorithm determines an energy transfer to an electron of the material close to the maximum value, while the step length remains limited due to voxel boundary crossing. Two alternative methods were derived to overcome this problem. The second scores LET along the entire path described by each proton within the voxel. The third follows the same approach as the first method, but the LET is evaluated at each step from stopping power tables according to the proton kinetic energy. We carried out microdosimetry calculations with the aim of deriving reference dose average LET values from microdosimetric quantities. Significant differences between the methods were found for both pristine and spread-out Bragg peaks (SOBPs). The first method reported values systematically higher than the other two at depths proximal to the SOBP, by about 15% for a 5.9 cm wide SOBP and about 30% for an 11.0 cm one. At the distal SOBP, the second method gave values about 15% lower than the others. Overall, we found that the third method gave the most consistent
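The dose average LET scored in each voxel is the energy-deposit-weighted mean of the per-step LET values. A minimal sketch of that weighting, with hypothetical step data rather than Geant4 output:

```python
def dose_averaged_let(edep, let):
    """Dose-averaged LET: energy-deposit-weighted mean of per-step LET values."""
    return sum(e * l for e, l in zip(edep, let)) / sum(edep)

# Hypothetical per-step energy deposits (MeV) and LET values (keV/um) in one voxel:
edep = [1.0, 1.0, 2.0]
let = [2.0, 4.0, 5.0]
let_d = dose_averaged_let(edep, let)
```

Because the weighting is by deposited energy, a single spurious high-LET step with a large deposit can dominate the voxel value, which is the artifact the abstract describes for small voxels.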
TU-F-18C-08: Micro-Calcification Detectability Using Spectral Breast CT Based On a Si Strip Detector
Energy Technology Data Exchange (ETDEWEB)
Cho, H; Ding, H; Molloi, S [University of California, Irvine, CA (United States); Barber, W; Iwanczyk, J [DxRay Inc., Northridge, CA (United States)
2014-06-15
Purpose: To investigate the feasibility of micro-calcification (μCa) detection using an energy-resolved photon-counting Si strip detector for spectral breast computed tomography (CT). Methods: A bench-top CT system was constructed using a tungsten anode x-ray source with a focal spot size of 0.8 mm and a single-line 256-pixel Si strip photon-counting detector with a pixel pitch of 100 μm. The slice thickness was 0.5 mm. Five different size groups of calcium carbonate grains, from 105 to 215 μm in diameter, were embedded in a cylindrical resin phantom with a diameter of 16 mm to simulate μCas. The phantoms were imaged at 65 kVp with an entrance skin air kerma (ESAK) of 1.2, 3, 6, and 8 mGy. The images were reconstructed using standard filtered back projection (FBP) with a ramp filter. A total of 200 μCa images (5 μCa sizes × 4 doses × 10 images per setting) were combined with another 200 control images without μCas to form 400 images for the reader study. The images were displayed in random order to three blinded observers, who were asked to give a binary score on each image regarding the presence of μCas. The μCa detectability for each image was evaluated in terms of binary decision theory metrics: sensitivity, specificity, and accuracy were calculated to study the size and dose dependence of μCa detectability. Additionally, the influence of the partial volume effect on μCa detectability was investigated by simulation. Results: For μCas larger than 140 μm in diameter, a detection accuracy above 90% was achieved with the investigated prototype spectral CT system at an ESAK of 1.2 mGy. Conclusion: The proposed Si strip detector is expected to offer superior image quality with the capability to detect μCas for low-dose breast imaging.
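The binary decision theory metrics used in the reader study above reduce to counting the four outcomes of each ground-truth/score pair. A minimal sketch with hypothetical reader scores:

```python
def binary_metrics(truth, scores):
    """Sensitivity, specificity and accuracy from paired ground-truth/reader booleans."""
    tp = sum(t and s for t, s in zip(truth, scores))            # lesion present, seen
    tn = sum((not t) and (not s) for t, s in zip(truth, scores))  # absent, not reported
    fp = sum((not t) and s for t, s in zip(truth, scores))       # absent, reported
    fn = sum(t and (not s) for t, s in zip(truth, scores))       # present, missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy

# Hypothetical scores: 10 images with a uCa, 10 control images
truth = [True] * 10 + [False] * 10
scores = [True] * 9 + [False] + [False] * 8 + [True] * 2
sens, spec, acc = binary_metrics(truth, scores)
```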
Held, Mareike; Cremers, Florian; Sneed, Penny K; Braunstein, Steve; Fogh, Shannon E; Nakamura, Jean; Barani, Igor; Perez-Andujar, Angelica; Pouliot, Jean; Morin, Olivier
2016-03-08
A clinical workflow was developed for urgent palliative radiotherapy treatments that integrates patient simulation, planning, quality assurance, and treatment in one 30-minute session. This has been successfully tested and implemented clinically on a linac with MV CBCT capabilities. To make this approach available to all clinics equipped with common imaging systems, dose calculation accuracy based on treatment sites was assessed for other imaging units. We evaluated the feasibility of palliative treatment planning using on-board imaging with respect to image quality and technical challenges. The purpose was to test multiple systems using their commercial setup, disregarding any additional in-house development. kV CT, kV CBCT, MV CBCT, and MV CT images of water and anthropomorphic phantoms were acquired on five different imaging units (Philips MX8000 CT Scanner, and Varian TrueBeam, Elekta VersaHD, Siemens Artiste, and Accuray Tomotherapy linacs). Image quality (noise, contrast, uniformity, spatial resolution) was evaluated and compared across all machines. Using individual image value to density calibrations, dose calculation accuracies for simple treatment plans were assessed for the same phantom images. Finally, image artifacts on clinical patient images were evaluated and compared among the machines. Image contrast to visualize bony anatomy was sufficient on all machines. Despite a high noise level and low contrast, MV CT images provided the most accurate treatment plans relative to kV CT-based planning. Spatial resolution was poorest for MV CBCT, but did not limit the visualization of small anatomical structures. A comparison of treatment plans showed that monitor units calculated based on a prescription point were within 5% of kV CT-based plans for all machines and all studied treatment sites (brain, neck, and pelvis). Local dose differences > 5% were found near the phantom edges. The gamma index for 3%/3 mm criteria was ≥ 95% in most
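The 3%/3 mm gamma criterion quoted above can be illustrated with a minimal 1-D global gamma computation. The profiles and the 1 mm sampling below are hypothetical; real analyses are done on 2-D/3-D dose grids with interpolation:

```python
import math

def gamma_pass_rate(ref, evl, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Global 1-D gamma analysis: percent of reference points with gamma <= 1,
    using a dose criterion dd (fraction of the reference maximum) and a
    distance-to-agreement criterion dta_mm."""
    dose_tol = dd * max(ref)
    n_pass = 0
    for i, rd in enumerate(ref):
        # Minimum generalized distance over all evaluated points
        gamma = min(
            math.sqrt((((j - i) * spacing_mm) / dta_mm) ** 2
                      + ((ed - rd) / dose_tol) ** 2)
            for j, ed in enumerate(evl)
        )
        if gamma <= 1.0:
            n_pass += 1
    return 100.0 * n_pass / len(ref)

# Hypothetical depth-dose profiles sampled every 1 mm:
ref_profile = [50.0, 80.0, 100.0, 90.0, 60.0]
eval_profile = [51.0, 82.0, 99.0, 88.0, 61.0]
rate = gamma_pass_rate(ref_profile, eval_profile)
```

A rate ≥ 95%, as in the abstract, indicates clinically acceptable agreement between the on-board-imaging plan and the kV CT reference.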
Energy Technology Data Exchange (ETDEWEB)
Van Bael, S., E-mail: Simon.Vanbael@mech.kuleuven.be [Department of Mechanical Engineering, Division of Production Engineering, Machine Design and Automation, Katholieke Universiteit Leuven, Celestijnenlaan 300B, B-3001 Leuven (Belgium); Department of Mechanical Engineering, Division of Biomechanics and Engineering Design, Katholieke Universiteit Leuven, Celestijnenlaan 300C, B-3001 Leuven (Belgium); Prometheus, Division of Skeletal Tissue Engineering, Katholieke Universiteit Leuven, O and N 1, Minderbroedersstraat 8A, B-3000 Leuven (Belgium); Kerckhofs, G., E-mail: Greet.Kerckhofs@mtm.kuleuven.be [Department of Metallurgy and Materials Engineering, Katholieke Universiteit Leuven, Kasteelpark Arenberg 44, B-3001 Leuven (Belgium); Prometheus, Division of Skeletal Tissue Engineering, Katholieke Universiteit Leuven, O and N 1, Minderbroedersstraat 8A, B-3000 Leuven (Belgium); Moesen, M., E-mail: Maarten.Moesen@mtm.kuleuven.be [Department of Metallurgy and Materials Engineering, Katholieke Universiteit Leuven, Kasteelpark Arenberg 44, B-3001 Leuven (Belgium); Prometheus, Division of Skeletal Tissue Engineering, Katholieke Universiteit Leuven, O and N 1, Minderbroedersstraat 8A, B-3000 Leuven (Belgium); Pyka, G., E-mail: Gregory.Pyka@mtm.kuleuven.be [Department of Metallurgy and Materials Engineering, Katholieke Universiteit Leuven, Kasteelpark Arenberg 44, B-3001 Leuven (Belgium); Prometheus, Division of Skeletal Tissue Engineering, Katholieke Universiteit Leuven, O and N 1, Minderbroedersstraat 8A, B-3000 Leuven (Belgium); Schrooten, J., E-mail: Jan.Schrooten@mtm.kuleuven.be [Department of Metallurgy and Materials Engineering, Katholieke Universiteit Leuven, Kasteelpark Arenberg 44, B-3001 Leuven (Belgium); Prometheus, Division of Skeletal Tissue Engineering, Katholieke Universiteit Leuven, O and N 1, Minderbroedersstraat 8A, B-3000 Leuven (Belgium); and others
2011-09-15
Highlights: {yields} Selective laser melting as a production tool for porous Ti6Al4V structures. {yields} Significant mismatch between designed and as-produced properties. {yields} Decreasing mismatch using a micro-CT-based protocol. {yields} Mismatch of pore size decreased from 45% to 5%. {yields} Increased morphological controllability increases mechanical controllability. - Abstract: Although additive manufacturing (AM) techniques make it possible to manufacture complex porous parts with a controlled architecture, differences can occur between designed and as-produced morphological properties. This study therefore aimed at optimizing the robustness and controllability of the production of porous Ti6Al4V structures using selective laser melting (SLM) by reducing the mismatch between designed and as-produced morphological and mechanical properties in two runs. In the first run, porous Ti6Al4V structures with different pore sizes were designed, manufactured by SLM, analyzed by microfocus X-ray computed tomography (micro-CT) image analysis and compared to the original design. The comparison was based on the following morphological parameters: pore size, strut thickness, porosity, surface area and structure volume. Integrating the mismatch between designed and measured properties into a second run decreased the mismatch; for the average pore size, for example, it dropped from 45% to 5%. The demonstrated protocol is furthermore applicable to other 3D structures, properties and production techniques.
Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo
2014-01-01
The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.
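The Cobb angle measured above reduces to the angle between the line along the cranial end plate and the line along the caudal end plate. A minimal sketch with hypothetical landmark coordinates (not the study's data):

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Cobb angle (degrees) between the line p1-p2 drawn along the cranial
    end plate and the line q1-q2 drawn along the caudal end plate."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (q2[0] - q1[0], q2[1] - q1[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return min(angle, 180.0 - angle)  # end plates are lines, not rays: take <= 90

# Hypothetical end-plate landmarks (pixel coordinates on a lateral radiograph):
kyphotic = cobb_angle((0, 0), (10, 0), (0, 5), (10, 15))
```

Against the study's finding, a kyphotic value above 35° would flag a dog as likely to show neurological deficits.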
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a
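The mean absolute relative error of roughly 3.5% reported above can be computed per region of interest as sketched below; the SUVr values are hypothetical, with CTAC taken as the gold standard as in the study:

```python
def mean_abs_rel_error(test_vals, ref_vals):
    """Mean absolute relative error (%) of test values against a reference."""
    return 100.0 * sum(abs(t - r) / r
                       for t, r in zip(test_vals, ref_vals)) / len(ref_vals)

# Hypothetical regional SUVr values (CTAC reference vs. MLACF reconstruction):
suvr_ctac = [1.20, 0.95, 1.40, 1.10]
suvr_mlacf = [1.25, 0.92, 1.38, 1.13]
err = mean_abs_rel_error(suvr_mlacf, suvr_ctac)
```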
Energy Technology Data Exchange (ETDEWEB)
Strydhorst, Jared H., E-mail: jared.strydhorst@gmail.com; Ruddy, Terrence D.; Wells, R. Glenn [Cardiac Imaging, University of Ottawa Heart Institute, 40 Ruskin Street, Ottawa, Ontario K1Y 4W7 (Canada)
2015-04-15
Purpose: Our goal in this work was to investigate the impact of CT-based attenuation correction on measurements of rat myocardial perfusion with {sup 99m}Tc and {sup 201}Tl single photon emission computed tomography (SPECT). Methods: Eight male Sprague-Dawley rats were injected with {sup 99m}Tc-tetrofosmin and scanned in a small animal pinhole SPECT/CT scanner. Scans were repeated weekly over a period of 5 weeks. Eight additional rats were injected with {sup 201}Tl and also scanned following a similar protocol. The images were reconstructed with and without attenuation correction, and the relative perfusion was analyzed with the commercial cardiac analysis software. The absolute uptake of {sup 99m}Tc in the heart was also quantified with and without attenuation correction. Results: For {sup 99m}Tc imaging, relative segmental perfusion changed by up to +2.1%/−1.8% as a result of attenuation correction. Relative changes of +3.6%/−1.0% were observed for the {sup 201}Tl images. Interscan and inter-rat reproducibilities of relative segmental perfusion were 2.7% and 3.9%, respectively, for the uncorrected {sup 99m}Tc scans, and 3.6% and 4.3%, respectively, for the {sup 201}Tl scans, and were not significantly affected by attenuation correction for either tracer. Attenuation correction also significantly increased the measured absolute uptake of tetrofosmin and significantly altered the relationship between the rat weight and tracer uptake. Conclusions: Our results show that attenuation correction has a small but statistically significant impact on the relative perfusion measurements in some segments of the heart and does not adversely affect reproducibility. Attenuation correction had a small but statistically significant impact on measured absolute tracer uptake.
Energy Technology Data Exchange (ETDEWEB)
Reinhardt, Michael J.; Joe, Alexius Y.; Mallek, Dirk von; Ezziddin, Samer; Palmedo, Holger [Department of Nuclear Medicine, University Hospital of Bonn, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Brink, Ingo [Department of Nuclear Medicine, University Hospital of Freiburg (Germany); Krause, Thomas M. [Department of Nuclear Medicine, Inselspital Bern (Switzerland)
2002-09-01
This study was performed with three aims. The first was to analyse the effectiveness of radioiodine therapy in Graves' disease patients with and without goitres under conditions of mild iodine deficiency using several tissue-absorbed doses. The second aim was to detect further parameters which might be predictive for treatment outcome. Finally, we wished to determine the deviation of the therapeutically achieved dose from that intended. Activities of 185-2,220 MBq radioiodine were calculated by means of Marinelli's formula to deliver doses of 150, 200 or 300 Gy to the thyroids of 224 patients with Graves' disease and goitres up to 130 ml in volume. Control of hyperthyroidism, change in thyroid volume and thyrotropin-receptor antibodies were evaluated 15{+-}9 months after treatment for each dose. The results were further evaluated with respect to pre-treatment parameters which might be predictive for therapy outcome. Thyroidal radioiodine uptake was measured every day during therapy to determine the therapeutically achieved target dose and its coefficient of variation. There was a significant dose dependency in therapeutic outcome: frequency of hypothyroidism increased from 27.4% after 150 Gy to 67.7% after 300 Gy, while the frequency of persistent hyperthyroidism decreased from 27.4% after 150 Gy to 8.1% after 300 Gy. Patients who became hypothyroid had a maximum thyroid volume of 42 ml and received a target dose of 256{+-}80 Gy. The coefficient of variation for the achieved target dose ranged between 27.7% for 150 Gy and 17.8% for 300 Gy. When analysing further factors which might influence therapeutic outcome, only pre-treatment thyroid volume showed a significant relationship to the result of treatment. It is concluded that a target dose of 250 Gy is essential to achieve hypothyroidism within 1 year after radioiodine therapy in Graves' disease patients with goitres up to 40 ml in volume. Patients with larger goitres might need higher doses
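Marinelli's formula is named above but not written out. A commonly quoted form for radioiodine therapy planning is sketched below; the conversion constant k, the function name, and the example patient values are assumptions for illustration, not the authors' parameters:

```python
def marinelli_activity_mbq(dose_gy, mass_g, uptake_pct, t_eff_days, k=24.67):
    """Administered activity (MBq) to deliver a target thyroid dose, per a
    commonly quoted form of Marinelli's formula (k is an assumed constant):
    activity = k * dose * mass / (maximal uptake [%] * effective half-life [d])."""
    return k * dose_gy * mass_g / (uptake_pct * t_eff_days)

# Hypothetical patient: 250 Gy target dose, 40 ml (~40 g) goitre,
# 50% maximal uptake, 5.5-day effective half-life:
activity = marinelli_activity_mbq(250.0, 40.0, 50.0, 5.5)
```

With these assumed inputs the result falls inside the 185-2,220 MBq activity range used in the study.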
Angarita, Fernando A.; University Health Network; Acuña, Sergio A.; Mount Sinai Hospital; Jimenez, Carolina; University of Toronto; Garay, Javier; Pontificia Universidad Javeriana; Gömez, David; University of Toronto; Domínguez, Luis Carlos; Pontificia Universidad Javeriana
2010-01-01
Acute calculous cholecystitis is the most important cause of cholecystectomies worldwide. We review the physiopathology of the inflammatory process in this organ secondary to biliary tract obstruction, as well as its clinical manifestations, workup, and the treatment it requires.
Energy Technology Data Exchange (ETDEWEB)
Mast, M.E.; Kempen-Harteveld, M.L. van; Petoukhova, A.L. [Centre West, Radiotherapy, The Hague (Netherlands); Heijenbrok, M.W. [Medical Center Haaglanden, Department of Radiology, The Hague (Netherlands); Scholten, A.N. [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Department of Radiation Oncology, Amsterdam (Netherlands); Wolterbeek, R. [Leiden University Medical Centre, Department of Medical Statistics and Bioinformatics, Leiden (Netherlands); Schreur, J.H.M. [Medical Center Haaglanden, Department of Cardiology, The Hague (Netherlands); Struikmans, H. [Centre West, Radiotherapy, The Hague (Netherlands); Leiden University Medical Centre, Department of Clinical Oncology, Leiden (Netherlands)
2016-10-15
The aim of this prospective longitudinal study was to compare coronary artery calcium (CAC) scores determined before the start of whole breast irradiation with those determined 3 years afterwards. Changes in CAC scores were analysed in 99 breast cancer patients. Three groups were compared: patients receiving left-sided radiotherapy, right-sided radiotherapy, and left-sided radiotherapy with breath-hold. We analysed overall CAC scores as well as CAC scores of the left anterior descending artery (LAD) and the right coronary artery (RCA). Between the three groups, changes in each patient's LAD minus RCA CAC score were also compared. Three years after breath-hold-based whole breast irradiation, a less pronounced increase of CAC scores was noted. Furthermore, LAD minus RCA scores in patients treated for left-sided breast cancer without breath-hold were higher than those of patients with right-sided breast cancer and those with left-sided breast cancer treated with breath-hold. Breath-hold in breast-conserving radiotherapy leads to a less pronounced increase of CT-based CAC scores. Therefore, breath-hold probably prevents the development of radiation-induced coronary artery disease. However, the sample size of this study is limited and the follow-up period relatively short. (orig.)
Jamalludin, Z.; Min, U. N.; Ishak, W. Z. Wan; Malik, R. Abdul
2016-03-01
This study presents our preliminary work on the implementation of computed tomography (CT) image guided brachytherapy (IGBT) in cervical cancer patients. We developed a protocol in which patients undergo two magnetic resonance imaging (MRI) examinations: a) prior to external beam radiotherapy (EBRT) and b) prior to intra-cavitary brachytherapy, for tumour identification and delineation during IGBT planning and dosimetry. For each fraction, patients were simulated using a CT simulator and the images were transferred to the treatment planning system. The HR-CTV, IR-CTV, bladder and rectum were delineated following CT-based contouring recommendations for cervical cancer. Plans were optimised to achieve HR-CTV and IR-CTV doses (D90) of total EQD2 80 Gy and 60 Gy respectively, while limiting the minimum dose to the most irradiated 2 cm3 volume (D2cc) of the bladder and rectum to total EQD2 90 Gy and 75 Gy respectively. Data from seven insertions were analysed by comparing the volume-based doses with traditional point-based doses. Based on our data, there were differences between the volume and point doses for the HR-CTV, bladder and rectum. As the number of patients receiving CT-based IGBT in our centre increases from day to day, treatment and dosimetry accuracy is expected to improve with this implementation.
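The total EQD2 targets quoted above sum EBRT and brachytherapy contributions via the linear-quadratic model. A minimal sketch; the fraction numbers (25 x 1.8 Gy EBRT, 5 x 6 Gy HDR) and the tumour alpha/beta of 10 Gy are illustrative assumptions, since the abstract does not state them:

```python
def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equieffective dose in 2-Gy fractions (linear-quadratic model):
    EQD2 = n*d * (d + alpha/beta) / (2 + alpha/beta)."""
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Assumed course: 25 x 1.8 Gy EBRT plus 5 x 6 Gy HDR, tumour alpha/beta = 10 Gy:
ebrt_eqd2 = eqd2(25, 1.8, 10.0)   # ~44.25 Gy
hdr_eqd2 = eqd2(5, 6.0, 10.0)     # 40.0 Gy
total_eqd2 = ebrt_eqd2 + hdr_eqd2
```

Under these assumptions the combined EQD2 lands near the 80 Gy HR-CTV objective; OAR constraints would use a late-effect alpha/beta (typically 3 Gy) instead.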
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...
Institute of Scientific and Technical Information of China (English)
刘毅
2015-01-01
AIM: To investigate the clinical effect of phacoemulsification combined with intraocular lens (IOL) implantation in cataract patients with a history of corneal refractive surgery, and to compare the accuracy of the IOL power obtained with different calculation methods. METHODS: The data of 120 myopia cases (160 eyes) in our hospital who underwent cataract surgery after previous corneal refractive surgery were analysed. The corneal curvature (K) value from before the corneal refractive surgery was obtained and calculated using the clinical history method; for patients with incomplete records, the corrected corneal curvature method and corneal topography were used to determine the K value. The K value was entered into the IOL power formula, and by comparing the actual postoperative refractive state with the expected refractive state (-0.50 D), the accuracy of the IOL power obtained from the three methods was compared. RESULTS: Mean best corrected visual acuity improved from 0.25±0.05 before cataract surgery to 0.80±0.05 afterwards; the mean spherical equivalent (SE) was -1.98±1.75 D before surgery and +0.85±3.38 D after surgery. CONCLUSION: For patients with a history of corneal refractive surgery, selecting a suitable method according to the clinical symptoms and history allows the IOL power to be calculated accurately: the CHM should be used to provide the corneal K value for patients with complete records, and the AKM and CHM should be used to calculate the K value for those with incomplete records.
Energy Technology Data Exchange (ETDEWEB)
Nair, M; Li, C; White, M; Davis, J [Joe Arrington Cancer Center, Lubbock, TX (United States)
2014-06-15
Purpose: We analyzed the dose-volume histograms of 140 CT-based HDR brachytherapy plans and evaluated the doses received by the organs at risk (OARs): rectum, bladder, and sigmoid colon, based on recommendations from the ICRU and the image-guided brachytherapy working group for cervical cancer. Methods: Our treatment protocol consists of external beam radiotherapy (XRT) to the whole pelvis with 45 Gy at 1.8 Gy/fraction, followed by 30 Gy at 6 Gy/fraction delivered by HDR brachytherapy over 2 weeks. CT-compatible tandem and ovoid applicators were used and stabilized with radio-opaque packing material. The patient was immobilized using a special re-locatable implant table and stirrups to keep the geometry reproducible during treatment. CT images were acquired at 3 mm slice thickness and exported to the treatment planning computer. The OAR structures (bladder, rectum, and sigmoid colon) were outlined on the images along with the applicators. The prescription dose was targeted to points A-left and A-right as defined in the Manchester system and optimized on geometry. Dosimetry was compared across all plans using the parameter Ci·s·cGy⁻¹. Using the dose-volume histograms (DVHs) obtained from the plans, the doses to the rectum, sigmoid colon, and bladder at the ICRU-defined points and to the most irradiated 2 cc volume (D2cc) were analyzed and reported. The rectum and sigmoid colon doses were limited to <75 Gy, and the bladder dose to <90 Gy, from XRT and HDR brachytherapy combined. Results: The average total (XRT+HDR) BED to the prescription volume was 120 Gy. The D2cc to the rectum was 70 ± 17 Gy and the D2cc to the bladder was 82 ± 32 Gy. The average Ci·s·cGy⁻¹ for the HDR plans was 6.99 ± 0.5. Conclusion: Image-based treatment planning enabled evaluation of volume-based doses to critical structures for clinical interpretation.
Energy Technology Data Exchange (ETDEWEB)
Toyama, Shingo [Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Department of Heavy Particle Therapy and Radiation Oncology, Faculty of Medicine, Saga University, Saga (Japan); Tsuji, Hiroshi, E-mail: h_tsuji@nirs.go.jp [Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Mizoguchi, Nobutaka; Nomiya, Takuma; Kamada, Tadashi [Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Tokumaru, Sunao [Department of Heavy Particle Therapy and Radiation Oncology, Faculty of Medicine, Saga University, Saga (Japan); Mizota, Atsushi [Department of Ophthalmology, Teikyo University School of Medicine, Tokyo (Japan); Ohnishi, Yoshitaka [Department of Ophthalmology, Wakayama Medical University, Wakayama (Japan); Tsujii, Hirohiko [Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan)
2013-06-01
Purpose: To determine the long-term results of carbon ion radiation therapy (C-ion RT) in patients with choroidal melanoma, and to assess the usefulness of CT-based 2-port irradiation in reducing the risk of neovascular glaucoma (NVG). Methods and Materials: Between January 2001 and February 2012, a total of 116 patients with locally advanced or unfavorably located choroidal melanoma received CT-based C-ion RT. Of these patients, 114 were followed up for more than 6 months and their data analyzed. The numbers of T3 and T2 patients (International Union Against Cancer [UICC], 5th edition) were 106 and 8, respectively. The total dose of C-ion RT varied from 60 to 85 GyE, with each dose given in 5 fractions. Since October 2005, 2-port therapy (51 patients) has been used in an attempt to reduce the risk of NVG. A dose-volume histogram analysis was also performed in 106 patients. Results: The median follow-up was 4.6 years (range, 0.5-10.6 years). The 5-year overall survival, cause-specific survival, local control, distant metastasis-free survival, and eye retention rates were 80.4% (95% confidence interval 71.8%-89.0%), 82.2% (73.8%-90.6%), 92.8% (87.1%-98.5%), 72.1% (62.3%-81.9%), and 92.8% (87.5%-98.1%), respectively. The overall 5-year NVG incidence rate was 35.9% (25.9%-45.9%); the rates in the 1-port and 2-port groups were 41.6% (29.3%-54.0%) and 13.9% (3.2%-24.6%), respectively, a statistically significant difference (P<.001). The dose-volume histogram analysis showed that the average irradiated volume of the iris-ciliary body was significantly lower in the non-NVG group than in the NVG group at all dose levels, and significantly lower in the 2-port group than in the 1-port group at high dose levels. Conclusions: The long-term results of C-ion RT for choroidal melanoma are satisfactory. CT-based 2-port C-ion RT can be used to reduce the high-dose irradiated volume of the iris-ciliary body and the resulting risk of NVG.
Sniderman, A.D.; Tremblay, A.J.; Graaf, J. de; Couture, P.
2014-01-01
OBJECTIVES: This study tests the validity of the Hattori formula to calculate LDL apoB based on plasma lipids and total apoB. METHODS: In 2178 patients in a tertiary care lipid clinic, LDL apoB calculated as suggested by Hattori et al. was compared to directly measured LDL apoB isolated by ultracentrifugation.
DEFF Research Database (Denmark)
Ottosson, Wiviann; Rahma, Fatma; Sjöström, David;
2016-01-01
were calculated. Results: For the spine, the smallest residual misalignments were observed in FB, independently of registration method. For GTV-T and GTV-N, soft-tissue registrations were superior to bony registration, independently of FB or DIBH. Compared to FB, PTV-Totals were during DIBH reduced...... uncertainties compared to FB, DIBH resulted in smaller PTV-Totals for all registration methods. Soft-tissue registrations were superior to bony registration, independently of FB and DIBH. During DIBH, undesirable arching of the back was identified. Daily CBCT pre-treatment target verification is advised....
Energy Technology Data Exchange (ETDEWEB)
Zuca Aparicio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrila, J.; Minambres Moro, A.
2013-07-01
At present it is not common to find commercial planning systems that incorporate Monte Carlo-based dose calculation algorithms for photon beams [1,2]. This paper summarizes the process followed in the evaluation of a Monte Carlo dose calculation algorithm for 6 MV photon beams from an accelerator dedicated to radiosurgery (SRS), cranial stereotactic radiotherapy (SRT), and extracranial stereotactic body radiotherapy (SBRT). (Author)
Geochemical Calculations Using Spreadsheets.
Dutch, Steven Ian
1991-01-01
Spreadsheets are well suited to many geochemical calculations, especially those that are highly repetitive. Some of the kinds of problems that can be conveniently solved with spreadsheets include elemental abundance calculations, equilibrium abundances in nuclear decay chains, and isochron calculations. (Author/PR)
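One of the repetitive calculations mentioned, equilibrium abundances in a decay chain, translates directly from a spreadsheet column formula into code. A sketch using the Bateman solution for a two-member chain; the decay constants below are illustrative values, not data from the paper:

```python
import math

def daughter_atoms(n1_0, lam1, lam2, t):
    """Bateman solution for a two-member decay chain.

    n1_0: initial number of parent atoms; lam1, lam2: parent and daughter
    decay constants; returns the number of daughter atoms at time t.
    """
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Illustrative long-lived parent / short-lived daughter pair: after several
# daughter half-lives the activity ratio approaches lam2 / (lam2 - lam1)
# (transient equilibrium), which a spreadsheet plot of this column would show.
n2 = daughter_atoms(1e6, 0.01, 1.0, 20.0)
```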
Autistic Savant Calendar Calculators.
Patti, Paul J.
This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…
How Do Calculators Calculate Trigonometric Functions?
Underwood, Jeremy M.; Edwards, Bruce H.
How does your calculator quickly produce values of trigonometric functions? You might be surprised to learn that it does not use series or polynomial approximations, but rather the so-called CORDIC method. This paper will focus on the geometry of the CORDIC method, as originally developed by Volder in 1959. This algorithm is a wonderful…
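A minimal software sketch of the rotation-mode CORDIC iteration the paper discusses (floating point for clarity; a real calculator uses fixed-point shifts and a stored table of arctangents):

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Compute (cos theta, sin theta) for theta in [-pi/2, pi/2] via CORDIC.

    Each step rotates by +/- atan(2^-i); the sign is chosen to drive the
    residual angle z toward zero.  The constant rotation gain is divided
    out up front by starting x at k instead of 1.
    """
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```

Note that the per-step multiplications by 2^-i are exactly the binary shifts that make the method attractive in calculator hardware.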
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1998-03-01
In materials testing reactors like the JMTR (Japan Materials Testing Reactor), a 50 MW reactor at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. To advance core calculation at the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux, and spectra has been investigated. In this study, in order to reduce computation time and variance, the results of calculations using the K-code and a fixed source, with and without the Weight Window technique, were compared. As to the calculation method, the modeling of the whole JMTR core, the conditions for the calculation, and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed in the calculated neutron fluxes arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the results of the neutron flux calculation is also described. (K.I.)
Electrical installation calculations advanced
Kitcher, Christopher
2013-01-01
All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installation...
Electrical installation calculations basic
Kitcher, Christopher
2013-01-01
All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For...
DEFF Research Database (Denmark)
Bahr, Patrick; Hutton, Graham
2015-01-01
In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers...... falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...
Radar Signature Calculation Facility
Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...
Electronics Environmental Benefits Calculator
U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...
Directory of Open Access Journals (Sweden)
Andreasen Annette
2011-01-01
Background: Coronary angiography is the current standard method to evaluate coronary atherosclerosis in patients with suspected angina pectoris, but non-invasive CT scanning of the coronary arteries is increasingly used for the same purpose. Low-density lipoprotein (LDL) cholesterol and other lipid and lipoprotein variables are major risk factors for coronary artery disease. Small dense LDL particles may be of particular importance, but clinical studies evaluating their predictive value for coronary atherosclerosis are few. Methods: We performed a study of 194 consecutive patients with chest pain, a priori considered at low to intermediate risk of significant coronary stenosis (>50% lumen obstruction), who were referred for elective coronary angiography. Plasma lipids and lipoproteins were measured, including the subtype pattern of LDL particles, and all patients were examined by coronary CT scanning before coronary angiography. Results: The proportion of small dense LDL was a strong univariate predictor of significant coronary artery stenosis as evaluated by both methods. After adjustment for age, gender, smoking, and waist circumference, only the results obtained by traditional coronary angiography remained statistically significant. Conclusion: Small dense LDL particles may add to risk stratification of patients with suspected angina pectoris.
Rafiq, Ashiq Mahmood; Udagawa, Jun; Lundh, Torbjörn; Jahan, Esrat; Matsumoto, Akihiro; Sekine, Joji; Otani, Hiroki
2012-02-01
Prenatal development of the mandible is an important factor in its postnatal function. To examine quantitatively normal and abnormal developmental changes of the mandible, we here evaluated morphological changes in mineralizing mandibles by thin-plate spline (TPS) including bending energy (BE) and Procrustes distance (PD), and by Procrustes analyses including warp analysis, regression analysis, and discriminant function analysis. BE and PD were calculated from lateral views of the mandibles of mice or of human fetuses using scanned micro-computed tomography (CT) images or alizarin red S-stained specimens, respectively. BE and PD were compared (1) between different developmental stages, and further, to detect abnormalities in the data sets and to evaluate the deviation from normal development in mouse fetuses, (2) at embryonic day (E) 18.5 between the normal and deformed mandibles, the latter being caused by suturing the jaw at E15.5, (3) at E15.5 and E18.5 between normal and knockout mutant mice of receptor tyrosine kinase-like orphan receptor (Ror) 2. In mice, BE and PD were large during the prenatal period and small after postnatal day 3, suggesting that the mandibular shape changes rapidly during the prenatal and early postnatal periods. In humans, BE of the mandibles peaked at 16-19 weeks of gestation, suggesting the time-dependent change in the mandibular shape. TPS and Procrustes analyses statistically separated the abnormal mandibles of the sutured or Ror2 mutant mouse fetuses from the normal mandible. These results suggest that TPS and Procrustes analyses are useful for assessing the morphogenesis and deformity of the mandible.
Directory of Open Access Journals (Sweden)
Mehrdad Bakhshayeshkaram
2016-01-01
Background: In the era of well-developed site-specific treatment strategies in cancer, identification of the occult primary is of paramount importance in patients with cancer of unknown primary (CUP). Furthermore, exact determination of the extent of the disease may help in optimizing treatment planning. The aim of the present study was to investigate the additional value of F-18 FDG PET/CT in CUP patients as an appropriate imaging tool in the early phase of the initial standard work-up. Materials and Methods: Sixty-two newly diagnosed CUP patients with an inconclusive diagnostic CT scan of the chest, abdomen, and pelvis referred for F-18 FDG PET/CT were enrolled in this study. The standard of reference was defined as histopathology, other diagnostic procedures, and a 3-month formal clinical follow-up. The results of PET/CT were categorized as suggestive of a primary site or of additional metastases, and classified as true positive, false positive, false negative, or true negative. The impact of additional metastases revealed by F-18 FDG PET/CT on treatment planning and the time contribution of F-18 FDG PET/CT to the diagnostic pathway were investigated. Results: In the sixty-two patients (mean age 62; 30 men, 32 women), PET/CT correctly identified the primary origin in 32%, with a false-positive rate of 14.8%. No primary lesion was detected after a negative PET/CT according to the standard of reference. Sensitivity, specificity, and accuracy were 100%, 78%, and 85%, respectively. An additional metastatic site was found in 56%, with a 22% impact on treatment planning. The time contribution of PET/CT was 10% of the total diagnostic pathway. Conclusion: Providing a higher detection rate of the primary origin with excellent diagnostic performance, shortening the diagnostic pathway, and improving treatment planning, F-18 FDG PET/CT may play a major role in the diagnostic work-up of CUP patients and may be recommended as an alternative imaging tool in the early phase of investigation.
Energy Technology Data Exchange (ETDEWEB)
Tang, Chun Xiang [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Zhang, Long Jiang, E-mail: kevinzhlj@163.com [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Han, Zong Hong; Zhou, Chang Sheng [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Krazinski, Aleksander W.; Silverman, Justin R. [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Schoepf, U. Joseph [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Lu, Guang Ming, E-mail: cjr.luguangming@vip.163.com [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China)
2013-12-01
Purpose: To evaluate the performance of dual-energy CT (DECT) based vascular iodine analysis for the detection of acute peripheral pulmonary embolism (PE) in a canine model, with histopathological findings as the reference standard. Materials and methods: The study protocol was approved by our institutional animal committee. Thrombi (n = 12) or saline (n = 4) were injected intravenously via the right femoral vein in sixteen dogs. CT pulmonary angiography (CTPA) in DECT mode was performed, and conventional CTPA images and DECT-based vascular iodine studies using the Lung Vessels application were reconstructed. Two radiologists visually evaluated the number and location of PEs on the conventional CTPA and DECT series on a per-animal and a per-clot basis. Detailed histopathological examination of lung specimens and catheter angiography served as the reference standard. Sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of DECT and CTPA were calculated on a segmental and a subsegmental-or-more-distal pulmonary artery basis. Weighted κ values were computed to evaluate inter-modality and inter-reader agreement. Results: Thirteen dogs were included in the final image analysis (experimental group = 9, control group = 4). Histopathology revealed 237 emboli in 45 lung lobes in the 9 experimental dogs: 11 emboli in segmental pulmonary arteries, 49 in subsegmental pulmonary arteries, and 177 in fifth-order or more distal pulmonary arteries. Overall sensitivity, specificity, accuracy, PPV, and NPV for CTPA plus DECT were 93.1%, 76.9%, 87.8%, 89.4%, and 84.2% for the detection of pulmonary emboli. For CTPA versus DECT, sensitivities, specificities, accuracies, PPVs, and NPVs were all 100% on a segmental pulmonary artery basis; on a subsegmental basis they were 88.9%, 100%, 96.0%, 100%, and 94.1% for CTPA and 90.4%, 93.0%, 92.0%, 88.7%, and 94.1% for DECT; 23.8%, 96.4%, 50.4%, 93
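The per-vessel performance figures reported above follow from the standard 2×2 confusion-table definitions. A sketch with illustrative counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, PPV and NPV from a 2x2 table.

    tp/fp/fn/tn: true-positive, false-positive, false-negative and
    true-negative counts against the reference standard.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts for a per-clot comparison
metrics = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
```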
Calculators and Polynomial Evaluation.
Weaver, J. F.
The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
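The calculator-friendly evaluation scheme implied here is Horner's rule, which needs only one multiply and one add per coefficient and so suits a machine with very limited data storage. A sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs runs from the highest-degree term down, e.g.
    [2, -6, 2, -1] represents 2x^3 - 6x^2 + 2x - 1.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c   # one keystroke-sized step per coefficient
    return result
```

On a non-programmable calculator the same nesting, ((2x - 6)x + 2)x - 1, lets the running value stay in the display register throughout.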
Energy Technology Data Exchange (ETDEWEB)
Jong, Evelyn E.C. de; Elmpt, Wouter van; Leijenaar, Ralph T.H.; Lambin, Philippe [Maastricht University Medical Centre, Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht (Netherlands); Hoekstra, Otto S. [VU University Medical Center, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Groen, Harry J.M. [University of Groningen and University Medical Center Groningen, Department of Pulmonary Diseases, Groningen (Netherlands); Smit, Egbert F. [VU University Medical Center, Department of Pulmonary Diseases, Amsterdam (Netherlands); The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Department of Thoracic Oncology, Amsterdam (Netherlands); Boellaard, Ronald [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); Noort, Vincent van der [The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Department of Biometrics, Amsterdam (Netherlands); Troost, Esther G.C. [Maastricht University Medical Centre, Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht (Netherlands); Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiooncology, Dresden (Germany); Medical Faculty and University Hospital Carl Gustav Carus of Technische Universitaet Dresden, Department of Radiotherapy and Radiation Oncology, Dresden (Germany); Dingemans, Anne-Marie C. [Maastricht University Medical Centre, Department of Pulmonology, GROW-School for Oncology and Developmental Biology, Maastricht (Netherlands)
2017-01-15
Nitroglycerin (NTG) is a vasodilating drug which increases tumor blood flow and consequently decreases hypoxia; changes in [18F]fluorodeoxyglucose positron emission tomography ([18F]FDG PET) uptake patterns may therefore occur. In this analysis, we investigated the feasibility of [18F]FDG PET for assessing response to paclitaxel-carboplatin-bevacizumab (PCB) treatment with and without NTG patches, and we compared the [18F]FDG PET response assessment to RECIST response assessment and survival. A total of 223 stage IV non-small cell lung cancer (NSCLC) patients were included in a phase II study (NCT01171170) randomizing between PCB treatment with or without NTG patches. For 60 participating patients, a baseline and a second [18F]FDG PET/computed tomography (CT) scan, performed between days 22 and 24 after the start of treatment, were available. Tumor response was defined as a 30% decrease in the CT and PET parameters, and was compared to the RECIST response at week 6. The predictive value of these assessments for progression-free survival (PFS) and overall survival (OS) was assessed with and without NTG. A 30% decrease in SUVpeak identified more patients as responders than a 30% decrease in CT diameter (73% vs. 18%); however, this was not correlated with OS (SUVpeak30 p = 0.833; CTdiameter30 p = 0.557). Changes in the PET parameters between the baseline and the second scan were not significantly different for the NTG group compared to the control group (p value range 0.159-0.634). The CT-based (part of the [18F]FDG PET/CT) parameters showed a significant difference between the baseline and the second scan for the NTG group compared to the control group (CT diameter decrease of 7 ± 23% vs. 19 ± 14%, p = 0.016, respectively). The decrease in tumoral FDG uptake in advanced NSCLC patients treated with chemotherapy with and without NTG did not differ between the treatment arms. Early PET-based response assessment showed more tumor responders
Interval arithmetic in calculations
Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima
2016-10-01
Interval arithmetic is the mathematical structure, which for real intervals defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given math model is convenient for investigating various applied objects: the quantities, the approximate values of which are known; the quantities obtained during calculations, the values of which are not exact because of rounding errors; random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we considered the definition of interval mathematics, investigated its properties, proved a theorem, and showed the efficiency of the new interval arithmetic. Besides, we briefly reviewed the works devoted to interval analysis and observed basic tendencies of development of integral analysis and interval calculations.
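The idea of using intervals as basic data objects can be sketched with a small class implementing the usual endpoint rules. This is a teaching sketch: it does no outward rounding, which a production interval library must add to guarantee enclosure under floating point:

```python
class Interval:
    """Closed real interval [lo, hi] with endpoint-rule arithmetic."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # All four endpoint products; the result spans their extremes.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains 0")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

A quantity known only to lie between 1 and 2 added to one between 3 and 4 then yields the enclosure [4, 6], and every further operation keeps carrying the uncertainty along.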
Unit Cost Compendium Calculations
U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...
DEFF Research Database (Denmark)
Frederiksen, Morten
2014-01-01
Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust....... Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs...... to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...
EFFECTIVE DISCHARGE CALCULATION GUIDE
Institute of Scientific and Technical Information of China (English)
D.S.BIEDENHARN; C.R.THORNE; P.J.SOAR; R.D.HEY; C.C.WATSON
2001-01-01
This paper presents a procedure for calculating the effective discharge for rivers with alluvial channels. An alluvial river adjusts the bankfull shape and dimensions of its channel to the wide range of flows that mobilize the boundary sediments. It has been shown that time-averaged river morphology is adjusted to the flow that, over a prolonged period, transports most sediment. This is termed the effective discharge. The effective discharge may be calculated provided that the necessary data are available or can be synthesized. The procedure for effective discharge calculation presented here is designed to have general applicability, the capability to be applied consistently, and to represent the effects of the physical processes responsible for determining the channel dimensions. An example of the necessary calculations and applications of the effective discharge concept are presented.
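The procedure described, partitioning the flow record into discharge classes and weighting each class's frequency by a sediment-transport rating, can be sketched as follows. The power-law rating Qs = a·Q^b and its coefficients are illustrative assumptions, not values from the paper:

```python
def effective_discharge(flows, n_classes=25, a=0.01, b=1.5):
    """Discharge class (midpoint) that transports the most sediment.

    flows: observed discharges, e.g. mean daily flows in m^3/s.
    Each class's load is its frequency times the rating a*Q^b evaluated
    at the class midpoint; the effective discharge is the midpoint of
    the class with the largest load.
    """
    lo, hi = min(flows), max(flows)
    width = (hi - lo) / n_classes or 1.0
    counts = [0] * n_classes
    for q in flows:
        idx = min(int((q - lo) / width), n_classes - 1)
        counts[idx] += 1
    best_mid, best_load = None, -1.0
    for i, freq in enumerate(counts):
        mid = lo + (i + 0.5) * width
        load = freq * a * mid ** b   # sediment moved by this class
        if load > best_load:
            best_mid, best_load = mid, load
    return best_mid
```

With a record dominated by frequent moderate flows, the maximum of frequency times transport falls at those moderate flows rather than at the rare flood peak, which is the central point of the effective-discharge concept.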
Magnetic Field Grid Calculator
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator will compute the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...
Current interruption transients calculation
Peelo, David F
2014-01-01
Provides an original, detailed, and practical description of current interruption transients, their origins, the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation, and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,
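For the single-frequency circuits such texts begin with, the first quantity calculated is the natural frequency of the TRV oscillation across the opening breaker. A sketch under the textbook undamped L-C approximation; the component values are illustrative assumptions:

```python
import math

def trv_frequency_hz(inductance_h, capacitance_f):
    """Natural frequency of the L-C recovery-voltage oscillation.

    Undamped single-frequency approximation: f = 1 / (2*pi*sqrt(L*C)).
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example: 10 mH source-side inductance and 10 nF stray capacitance
f_trv = trv_frequency_hz(10e-3, 10e-9)
```

For these example values the recovery voltage rings at roughly 16 kHz, which is why TRV stresses are far faster than the power-frequency waveform they ride on.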
Source and replica calculations
Energy Technology Data Exchange (ETDEWEB)
Whalen, P.P.
1994-02-01
The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem.
Scientific calculating peripheral
Energy Technology Data Exchange (ETDEWEB)
Ethridge, C.D.; Nickell, J.D. Jr.; Hanna, W.H.
1979-09-01
A scientific calculating peripheral for small intelligent data acquisition and instrumentation systems and for distributed-task processing systems is established with a number-oriented microprocessor controlled by a single component universal peripheral interface microcontroller. A MOS/LSI number-oriented microprocessor provides the scientific calculating capability with Reverse Polish Notation data format. Master processor task definition storage, input data sequencing, computation processing, result reporting, and interface protocol is managed by a single component universal peripheral interface microcontroller.
Neyrinck, Marleen M; Vrielink, Hans
2015-02-01
It is important to work smoothly with your apheresis equipment when you are an apheresis nurse. Attention should be paid to your donor/patient and to the product you are collecting. It adds value to your work when you are able to calculate the efficiency of your procedures. You must be able to obtain an optimal product without putting your donor/patient at risk. Not only does the total blood volume (TBV) of the donor/patient play an important role; specific blood values also influence the apheresis procedure. Therefore, not all donors/patients should be addressed in the same way. Calculation of the TBV, the extracorporeal volume, and the total plasma volume is needed. Many issues determine your procedure time. By knowing the collection efficiency (CE) of your apheresis machine, you can calculate the number of blood volumes to be processed to obtain specific results, and whether you need one procedure or more. It is not always necessary to process 3× the TBV; in this way, needlessly long connection of the donor/patient to the apheresis device can be avoided. By calculating the CE of each device, you can also compare the various devices for quality control reasons, as well as the nurses/operators.
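The volume and efficiency calculations described can be sketched as below. Nadler's formula is a common TBV estimate but the abstract does not name a specific one, and this CE definition divides by the pre-count alone where practice often averages pre- and post-procedure counts; both are assumptions of this sketch:

```python
def nadler_tbv_liters(sex, height_m, weight_kg):
    """Total blood volume (litres) by Nadler's formula; sex is 'M' or 'F'."""
    if sex == "M":
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833

def collection_efficiency(yield_cells, precount_per_ml, volume_processed_ml):
    """CE: cells collected divided by cells that passed through the device.

    Uses the pre-procedure count only; many centres use the mean of the
    pre- and post-counts instead.
    """
    return yield_cells / (precount_per_ml * volume_processed_ml)

# Illustrative donor: 1.80 m, 80 kg male -> TBV around 5.3 L, so
# "processing 3x the TBV" would mean roughly 16 L through the device.
tbv = nadler_tbv_liters("M", 1.80, 80.0)
```

Knowing the device's CE, the required processed volume for a target yield follows by rearranging: volume = yield / (CE × pre-count), which is how one decides whether a single procedure suffices.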
Trani, Daniela; Reniers, Brigitte; Persoon, Lucas; Podesta, Mark; Nalbantov, Georgi; Leijenaar, Ralph T H; Granzier, Marlies; Yaromina, Ala; Dubois, Ludwig; Verhaegen, Frank; Lambin, Philippe
2015-05-01
Advancements made over the past decades in both molecular imaging and radiotherapy planning and delivery have enabled studies that explore the efficacy of heterogeneous radiation treatment ("dose painting") of solid cancers based on biological information provided by different imaging modalities. In addition to clinical trials, preclinical studies may help contribute to identifying promising dose painting strategies. The goal of this current study was twofold: to develop a reproducible positioning and set-up verification protocol for a rat tumor model to be imaged and treated on a clinical platform, and to assess the dosimetric accuracy of dose planning and delivery for both uniform and positron emission tomography-computed tomography (PET-CT) based heterogeneous dose distributions. We employed a syngeneic rat rhabdomyosarcoma model, which was irradiated by volumetric modulated arc therapy (VMAT) with uniform or heterogeneous 6 MV photon dose distributions. Mean dose to the gross tumor volume (GTV) as a whole was kept at 12 Gy for all treatment arms. For the nonuniform plans, the dose was redistributed to treat the 30% of the GTV representing the biological target volume (BTV) with a dose 40% higher than the rest of the GTV (GTV - BTV) (~15 Gy was delivered to the BTV vs. ~10.7 Gy was delivered to the GTV - BTV). Cone beam computed tomography (CBCT) images acquired for each rat prior to irradiation were used to correctly reposition the tumor and calculate the delivered 3D dose. Film quality assurance was performed using a water-equivalent rat phantom. A comparison between CT or CBCT doses and film measurements resulted in passing rates >98% with a gamma criterion of 3%/2 mm using 2D dose images. Moreover, between the CT and CBCT calculated doses for both uniform and heterogeneous plans, we observed maximum differences of <2% for mean dose to the tumor and mean dose to the biological target volumes. In conclusion, we have developed a robust method for dose painting
Landoni, V; Borzì, G R; Strolin, S; Bruzzaniti, V; Soriani, A; D'Alessio, D; Ambesi, F; Di Grazia, A M; Strigari, L
2015-06-01
The purpose of this study is to evaluate the differences between dose distributions calculated with the pencil beam (PB) and X-ray voxel Monte Carlo (MC) algorithms for patients with lung cancer treated using intensity-modulated radiotherapy (IMRT) or HybridArc techniques. The two algorithms were compared in terms of dose-volume histograms, under normal breathing and deep-inspiration breath hold, and in terms of the tumor control probability (TCP). The dependence of the differences on tumor volume and location was investigated. Dosimetric validation was performed using Gafchromic EBT3 film (International Specialty Products, ISP, Wayne, NJ). Forty-five computed tomography (CT) data sets were used for this study; 40 Gy at 8 Gy/fraction was prescribed with 5 noncoplanar 6-MV IMRT beams or 3 to 4 dynamic conformal arcs with 3 to 5 IMRT beams distributed per arc. The plans were first calculated with PB and then recalculated with MC. The difference between the mean tumor doses was approximately 10% ± 4%; these differences were even larger under deep-inspiration breath hold. Differences between the mean tumor doses correlated with tumor volume and the path length of the beams. The TCP values changed from 99.87% ± 0.24% to 96.78% ± 4.81% for the PB- and MC-calculated plans, respectively (P = .009). When a fraction of hypoxic cells was considered, the mean TCP values changed from 76.01% ± 5.83% to 34.78% ± 18.06% for the two sets of plans (P < .0001). When the plans were renormalized to the same mean dose at the tumor, the mean TCP for oxic cells was 99.05% ± 1.59% and for hypoxic cells was 60.20% ± 9.53%. This study confirms that the MC algorithm adequately accounts for inhomogeneities. The inclusion of MC in the process of IMRT optimization could represent a further step in the complex problem of determining the optimal treatment plan.
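The TCP figures above come from a Poisson-type model. A standard linear-quadratic Poisson sketch illustrates the idea; the clonogen number and radiosensitivity values below are hypothetical, and this is not the authors' exact implementation:

```python
import math

def tcp_poisson(n_clonogens, total_dose, dose_per_fraction, alpha, beta):
    """LQ-Poisson tumour control probability: TCP = exp(-N0 * SF),
    with surviving fraction SF = exp(-alpha*D - beta*d*D) for total dose D
    delivered in fractions of size d. Illustrative model only."""
    sf = math.exp(-alpha * total_dose - beta * dose_per_fraction * total_dose)
    return math.exp(-n_clonogens * sf)

# Assumed parameters: 1e7 clonogens, alpha = 0.3 /Gy, beta = 0.03 /Gy^2,
# and the study's schedule of 40 Gy at 8 Gy/fraction.
print(tcp_poisson(1e7, 40.0, 8.0, 0.3, 0.03))  # close to 1 for this schedule
```

Lowering the delivered dose (as the MC recalculation effectively reveals) drives the surviving fraction, and hence N0·SF, up and the TCP down, which is why modest dose differences translate into large TCP differences.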
INVAP's Nuclear Calculation System
Directory of Open Access Journals (Sweden)
Ignacio Mochi
2011-01-01
Since its origins in 1976, INVAP has continuously developed the calculation system it uses for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as they were needed or became useful for the new challenges that the market imposed. The current state of the code packages enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent application.
Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim
2003-01-01
We calculate the probability (the "quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus collisions.
OFTIFEL PERSONALIZED NUTRITIONAL CALCULATOR
Directory of Open Access Journals (Sweden)
Malte BETHKE
2016-11-01
A food calculator for elderly people was developed by Centiv GmbH, an active partner in the European FP7 OPTIFEL project, based on the functional requirement specifications and the existing recommendations for daily allowances across Europe; these data were synthesized and used to set target amounts per portion. The OPTIFEL Personalised Nutritional Calculator is the only available online tool that allows the required nutrients for elderly people (65+) to be determined on a personalised level. It has been developed mainly to support nursing homes in providing the best possible (personalised) nutrient-enriched food to their patients. The European FP7 OPTIFEL project "Optimised Food Products for Elderly Populations" aims to develop innovative products based on vegetables and fruits for elderly populations, to increase the length of independence. The OPTIFEL Personalised Nutritional Calculator is recommended for use by nursing homes.
Spin Resonance Strength Calculations
Courant, E. D.
2009-08-01
In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.
Curvature calculations with GEOCALC
Energy Technology Data Exchange (ETDEWEB)
Moussiaux, A.; Tombal, P.
1987-04-01
A new method for calculating the curvature tensor has recently been proposed by D. Hestenes. This method is a particular application of geometric calculus, which has been implemented in an algebraic programming language in the form of a package called GEOCALC. The authors show how to apply this package to the Schwarzschild case and discuss the different results.
Haida Numbers and Calculation.
Cogo, Robert
Experienced traders in furs, blankets, and other goods, the Haidas of the 1700's had a well-developed decimal system for counting and calculating. Their units of linear measure included the foot, yard, and fathom, or six feet. This booklet lists the numbers from 1 to 20 in English and Haida; explains the Haida use of ten, hundred, and thousand…
Daylight calculations in practice
DEFF Research Database (Denmark)
Iversen, Anne; Roy, Nicolas; Hvass, Mette;
programs can give different results. This can be due to restrictions in the program itself and/or be due to the skills of the persons setting up the models. This is crucial as daylight calculations are used to document that the demands and recommendations to daylight levels outlined by building authorities...
Institute of Scientific and Technical Information of China (English)
Anonymous
2011-01-01
Compared with the elliptical cavity, the spoke cavity has many advantages, especially at low and medium beam energies, and it is likely to be widely used in future superconducting accelerators. Based on the spoke cavity, we design and calculate an accelerator
Ohno, Tatsuya; Wakatsuki, Masaru; Toita, Takafumi; Kaneyasu, Yuko; Yoshida, Ken; Kato, Shingo; Li, Noriko; Tokumaru, Sunao; Ikushima, Hitoshi; Uno, Takashi; Noda, Shin-Ei; Kazumoto, Tomoko; Harima, Yoko
2016-11-10
Our purpose was to develop recommendations for contouring the computed tomography (CT)-based high-risk clinical target volume (CTVHR) for 3D image-guided brachytherapy (3D-IGBT) for cervical cancer. A 15-member Japanese Radiation Oncology Study Group (JROSG) committee with expertise in gynecological radiation oncology initiated guideline development for CT-based CTVHR (based on a comprehensive literature review as well as clinical experience) in July 2014. Extensive discussions occurred during four face-to-face meetings and frequent email communication until a consensus was reached. The CT-based CTVHR boundaries were defined by each anatomical plane (cranial-caudal, lateral, or anterior-posterior) with or without tumor progression beyond the uterine cervix at diagnosis. Since the availability of magnetic resonance imaging (MRI) with applicator insertion for 3D planning is currently limited, T2-weighted MRI obtained at diagnosis and just before brachytherapy without applicator insertion was used as a reference for accurately estimating the tumor size and topography. Furthermore, utilizing information from clinical examinations performed both at diagnosis and brachytherapy is strongly recommended. In conclusion, these recommendations will serve as a brachytherapy protocol to be used at institutions with limited availability of MRI for 3D treatment planning.
Radioprotection calculations for MEGAPIE.
Zanini, L
2005-01-01
The MEGAwatt PIlot Experiment (MEGAPIE) liquid lead-bismuth spallation neutron source will commence operation in 2006 at the SINQ facility of the Paul Scherrer Institut. Such an innovative system presents radioprotection concerns peculiar to a liquid spallation target. Several radioprotection issues have been addressed and studied by means of the Monte Carlo transport code FLUKA. The dose rates in the room above the target, where personnel access may occasionally be needed, arising from the activated lead-bismuth and from the volatile species produced, were calculated. Results indicate that the dose rate level is of the order of 40 mSv/h 2 h after shutdown, but it can be reduced below the mSv/h level with slight modifications to the shielding. Neutron spectra and dose rates from neutron transport, of interest for possible damage to radiation-sensitive components, have also been calculated.
PIC: Protein Interactions Calculator.
Tina, K G; Bhadra, R; Srinivasan, N
2007-07-01
Interactions within a protein structure and interactions between proteins in an assembly are essential considerations in understanding the molecular basis of the stability and functions of proteins and their complexes. Several weak and strong interactions render stability to a protein structure or an assembly. Protein Interactions Calculator (PIC) is a server which, given the coordinate set of the 3D structure of a protein or an assembly, computes various interactions such as disulphide bonds, interactions between hydrophobic residues, ionic interactions, hydrogen bonds, aromatic-aromatic interactions, aromatic-sulphur interactions and cation-pi interactions within a protein or between proteins in a complex. Interactions are calculated on the basis of standard, published criteria. The identified interactions between residues can be visualized using a RasMol and Jmol interface. The advantage of the PIC server is the easy availability of inter-residue interaction calculations in a single site. It also determines the accessible surface area and residue depth, which is the distance of a residue from the surface of the protein. Users can also select specific kinds of interactions, such as apolar-apolar residue interactions or ionic interactions, formed between buried or exposed residues, near the surface or deep inside.
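The kind of distance-based screening PIC performs can be illustrated with a toy pairwise search. The 2.2 Å disulphide cutoff and the coordinates below are illustrative assumptions, not PIC's published criteria:

```python
import math

# Toy coordinate set: (residue_id, atom_name, x, y, z) in angstroms
atoms = [
    (1, 'SG', 0.00, 0.0, 0.0),   # Cys1 sulphur
    (2, 'SG', 2.05, 0.0, 0.0),   # Cys2 sulphur, ~disulphide bond distance
    (3, 'NZ', 8.00, 0.0, 0.0),   # Lys3 side-chain nitrogen
]

def pairs_within(atoms, name, cutoff):
    """Residue pairs whose atoms of a given name lie within `cutoff` angstroms.
    (Cutoff values here are illustrative only.)"""
    sel = [a for a in atoms if a[1] == name]
    out = []
    for i in range(len(sel)):
        for j in range(i + 1, len(sel)):
            d = math.dist(sel[i][2:], sel[j][2:])
            if d <= cutoff:
                out.append((sel[i][0], sel[j][0], round(d, 2)))
    return out

print(pairs_within(atoms, 'SG', 2.2))  # -> [(1, 2, 2.05)]
```

Each interaction type in PIC amounts to a selection of atom types plus a published distance (and sometimes geometry) criterion applied in this pairwise fashion.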
Energy Technology Data Exchange (ETDEWEB)
Park, Y; Winey, B; Sharp, G [Massachusetts General Hospital, Boston, MA (United States)
2014-06-01
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected CBCT images for the purpose of adaptive proton therapy. Methods: Two CBCT image sets were acquired from a prostate cancer patient and a thorax phantom using the on-board imaging system of an Elekta Infinity linear accelerator. 2D scatter maps were estimated using a previously introduced CT-based technique and were subtracted from each raw projection image. A CBCT image set was then reconstructed with an open-source reconstruction toolkit (RTK). Conversion from CBCT number to HU was performed by soft-tissue-based shifting with reference to the plan CT. Passively scattered proton plans were simulated on the plan CT and on the corrected/uncorrected CBCT images using the XiO treatment planning system. For quantitative evaluation, the water-equivalent path length (WEPL) was compared across those treatment plans. Results: The scatter correction method significantly improved image quality and HU accuracy in the prostate case, where large scatter artifacts were obvious. However, the correction technique showed limited effects on the thorax case, which was associated with fewer scatter artifacts. Mean absolute WEPL errors from the plans with the uncorrected and corrected images were 1.3 mm and 5.1 mm in the thorax case and 13.5 mm and 3.1 mm in the prostate case. The prostate plan dose distribution of the corrected image demonstrated better agreement with the reference one than that of the uncorrected image. Conclusion: A priori CT-based CBCT scatter correction can reduce the proton dose calculation error when large scatter artifacts are involved. If scatter artifacts are low, an uncorrected CBCT image is also promising for proton dose calculation when it is calibrated with the soft-tissue-based shifting.
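Water-equivalent path length is, in essence, the line integral of relative stopping power (RSP) along the beam, which in a voxelized image becomes a sum over samples. A minimal sketch (the lung RSP value is an assumption for illustration):

```python
def wepl(rsp_values, step_mm):
    """Water-equivalent path length (mm): sum of relative stopping power
    samples along a ray, times the sampling step length (midpoint sum)."""
    return sum(rsp_values) * step_mm

# 100 mm of water-like tissue (RSP 1.0) followed by 50 mm of lung
# (RSP ~0.3, an assumed value), sampled every 1 mm:
ray = [1.0] * 100 + [0.3] * 50
print(wepl(ray, 1.0))  # ~115 mm water-equivalent
```

HU errors from CBCT scatter propagate into the RSP samples, which is why the WEPL difference between plan CT and CBCT is a natural accuracy metric here.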
Institute of Scientific and Technical Information of China (English)
赵艳群; 尹刚; 王先良; 王培; 祁国海; 吴大可; 肖明勇; 黎杰; 康盛伟
2016-01-01
…(P = 0.00, 0.00, 0.00, 0.00, 0.00), but the effect is not obvious in 3DCRT plans (P = 0.18, 0.08, 0.62, 0.08, 0.97); similarly, the same effect was found in the differences between PBC and MC for IMRT plans, and the differences in dose volume are larger than those between CCC and MC. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose for the whole lung, and the PBC algorithm overestimated V20 (P = 0.00, 0.00) but underestimated V5 (P = 0.00, 0.00); the difference in V10 was not statistically significant (P = 0.47). Conclusions: It is recommended that treatment plans for lung cancer be calculated with an advanced algorithm rather than PBC. MC can calculate the dose distribution of lung cancer accurately and provides a very good tool for benchmarking the performance of other dose calculation algorithms.
Calculations in furnace technology
Davies, Clive; Hopkins, DW; Owen, WS
2013-01-01
Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi
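The fuel-and-air bookkeeping such a text covers can be illustrated with a stoichiometric-air estimate from the fuel's ultimate analysis. The 23.2% oxygen mass fraction of air is the usual engineering convention; this is a sketch of the standard calculation, not the book's own worked method:

```python
def stoichiometric_air(c, h, s, o=0.0):
    """kg of air per kg of fuel for complete combustion, from mass fractions
    of carbon, hydrogen, sulphur and oxygen in the fuel.
    O2 demand: C + O2 -> CO2 needs 32/12 kg O2 per kg C;
               2H2 + O2 -> 2H2O needs 8 kg O2 per kg H;
               S + O2 -> SO2 needs 1 kg O2 per kg S.
    Fuel-bound oxygen is credited; air is ~23.2% O2 by mass."""
    o2_needed = (32.0 / 12.0) * c + 8.0 * h + s - o
    return o2_needed / 0.232

print(round(stoichiometric_air(1.0, 0.0, 0.0), 2))   # pure carbon: ~11.49 kg air/kg
print(round(stoichiometric_air(0.75, 0.25, 0.0), 2)) # CH4-like fuel: ~17.24 kg air/kg
```

Comparing the air actually supplied with this stoichiometric figure gives the excess-air ratio, the starting point for the flue-gas and heat-loss efficiency calculations the book describes.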
Zero Temperature Hope Calculations
Energy Technology Data Exchange (ETDEWEB)
Rozsnyai, B F
2002-07-26
The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since experimental data in the high-temperature region are scarce, comparisons of predictions with the ample zero-temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an ''average atom'' (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the ''aufbau'' principle applies, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an ''ion-sphere'' model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the ''detailed configuration accounting'' (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, into which electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations in both the initial and final states connected by photoexcitation, an enormous computational task
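The finite-temperature vs. zero-temperature population rule described above is just the Fermi-Dirac occupation and its step-function limit; a small sketch:

```python
import math

def fermi_occupation(e, mu, kT):
    """Fermi-Dirac occupation of a one-electron level of energy e, for
    chemical potential mu. At kT = 0 this reduces to a step function:
    levels below mu are filled (the 'aufbau' filling), levels above are empty."""
    if kT == 0.0:
        return 1.0 if e < mu else 0.0
    return 1.0 / (1.0 + math.exp((e - mu) / kT))

print(fermi_occupation(-1.0, 0.0, 0.0))   # -> 1.0 (bound level filled at T = 0)
print(fermi_occupation(+1.0, 0.0, 0.0))   # -> 0.0 (empty at T = 0)
print(fermi_occupation(0.0, 0.0, 0.1))    # -> 0.5 (level at mu, finite T)
```

In an average-atom code, mu is adjusted so that the summed occupations reproduce the required electron number inside the ion sphere.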
Linewidth calculations and simulations
Strandberg, Ingrid
2016-01-01
We are currently developing a new technique to further enhance the sensitivity of collinear laser spectroscopy in order to study the most exotic nuclides available at radioactive ion beam facilities, such as ISOLDE at CERN. The overall goal is to evaluate the feasibility of the new method. This report will focus on the determination of the expected linewidth (hence resolution) of this approach. Different effects which could lead to a broadening of the linewidth, e.g. the ions' energy spread and their trajectories inside the trap, are studied with theoretical calculations as well as simulations.
Lopez, Cesar
2015-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. This book is designed for use as a scientific/business calculator so that you can get numerical solutions to problems involving a wide array of mathematics using MATLAB. Just look up the function y
Multilayer optical calculations
Byrnes, Steven J
2016-01-01
When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
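The transfer-matrix recipe the paper derives can be sketched, for normal incidence and real refractive indices, with the standard characteristic-matrix formulation (a simplified illustration, not the paper's full code):

```python
import cmath

def stack_reflectance(n_in, layers, n_sub, wavelength):
    """Normal-incidence reflectance of a planar stack via the characteristic
    (transfer) matrix method. `layers` is a list of (refractive index,
    thickness) pairs ordered from the incidence side; thickness and
    wavelength share units."""
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0  # start from the identity matrix
    for n, d in layers:
        delta = 2 * cmath.pi * n * d / wavelength  # layer phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    b = m00 + m01 * n_sub        # (B, C) = M * (1, n_sub)
    c_ = m10 + m11 * n_sub
    r = (n_in * b - c_) / (n_in * b + c_)
    return abs(r) ** 2

# A quarter-wave layer of index sqrt(1.5) on glass (n = 1.5) acts as an
# antireflection coating: reflectance drops from 4% to ~0.
print(stack_reflectance(1.0, [], 1.5, 550.0))  # bare glass, ~0.04
print(stack_reflectance(1.0, [(1.5 ** 0.5, 550.0 / (4 * 1.5 ** 0.5))], 1.5, 550.0))
```

Oblique incidence, polarization, and absorbing media (the subtle cases the paper discusses) modify the matrix entries but keep this same multiply-through structure.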
Bhatnagar, Shalabh
2017-01-01
Sound is an emerging source of renewable energy, but it has some limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of sound, which changes with the medium. If we could increase the velocity of sound in a medium, we would probably be able to extract more energy from sound and to transfer it at a higher rate. To increase the velocity of sound we must first know it. In classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of the particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is taken to be 343.2 meters per second, and it would not be wrong to say that 343.2 m/s is the velocity of sound rather than the speed, since it reflects the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound the whole path of the wave should be considered, not just the distance travelled by the sound. In this paper I focus on calculating the actual speed of the sound wave, which can help us to extract more energy and make sound travel with a faster velocity.
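The quoted 343.2 m/s figure follows from the standard temperature dependence of the speed of sound in dry air. A quick check using the common ideal-gas approximation (this formula is a textbook convention, not taken from the paper):

```python
import math

def speed_of_sound_air(temp_c):
    """Speed of sound in dry air (m/s), ideal-gas approximation:
    v = 331.3 * sqrt(1 + T/273.15), with T in degrees Celsius."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

print(round(speed_of_sound_air(20.0), 1))  # -> 343.2
```

The square-root temperature dependence is why the medium matters so much: warmer (or lighter) gases carry sound faster.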
Calculating reliability measures for ordinal data.
Gamsu, C V
1986-11-01
Establishing the reliability of measures taken by judges is important in both clinical and research work. Unfortunately, calculating the statistic of choice, the kappa coefficient, is not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification the program will also run on a Sinclair Spectrum home computer.
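The weighted kappa that the guide walks through by hand can be checked with a short program. This sketch uses linear weights and the standard textbook formula; it is not the published CP/M program itself:

```python
def weighted_kappa(ratings_a, ratings_b, k):
    """Linearly weighted kappa for two judges rating n subjects on an
    ordinal scale 0..k-1: kappa_w = (po - pe) / (1 - pe), where po and pe
    are the weighted observed and chance-expected agreement."""
    n = len(ratings_a)
    w = [[1.0 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]          # observed proportion matrix
    for a, b in zip(ratings_a, ratings_b):
        obs[a][b] += 1.0 / n
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # judge A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # judge B marginals
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

print(weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # perfect agreement -> 1.0
```

Quadratic weights (1 - ((i-j)/(k-1))^2) are a common alternative; only the `w` table changes.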
Molecular Dynamics Calculations
1996-01-01
The development of thermodynamics and statistical mechanics is very important in the history of physics, and it underlines the difficulty in dealing with systems involving many bodies, even if those bodies are identical. Macroscopic systems of atoms typically contain so many particles that it would be virtually impossible to follow the behavior of all of the particles involved. Therefore, the behavior of a complete system can only be described or predicted in statistical ways. Under a grant to the NASA Lewis Research Center, scientists at the Case Western Reserve University have been examining the use of modern computing techniques that may be able to investigate and find the behavior of complete systems that have a large number of particles by tracking each particle individually. This is the study of molecular dynamics. In contrast to Monte Carlo techniques, which incorporate uncertainty from the outset, molecular dynamics calculations are fully deterministic. Although it is still impossible to track, even on high-speed computers, each particle in a system of a trillion trillion particles, it has been found that such systems can be well simulated by calculating the trajectories of a few thousand particles. Modern computers and efficient computing strategies have been used to calculate the behavior of a few physical systems and are now being employed to study important problems such as supersonic flows in the laboratory and in space. In particular, an animated video (available in mpeg format--4.4 MB) was produced by Dr. M.J. Woo, now a National Research Council fellow at Lewis, and the G-VIS laboratory at Lewis. This video shows the behavior of supersonic shocks produced by pistons in enclosed cylinders by following exactly the behavior of thousands of particles. The major assumptions made were that the particles involved were hard spheres and that all collisions with the walls and with other particles were fully elastic. The animated video was voted one of two
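The "fully deterministic" character of molecular dynamics is easy to see in a minimal integrator: given the same initial conditions, the trajectory is reproduced exactly. A velocity-Verlet sketch for a single particle (the hard-sphere piston simulation described above is far more involved):

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Deterministic velocity-Verlet integration of one particle's trajectory;
    molecular dynamics applies the same update to every particle in the system."""
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass           # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# Harmonic oscillator F = -k*x: total energy should be (nearly) conserved.
k, m = 1.0, 1.0
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, 0.01, 10000)
energy = 0.5 * m * v * v + 0.5 * k * x * x
print(energy)  # stays close to the initial 0.5
```

Velocity Verlet is the workhorse of MD precisely because it is time-reversible and keeps the energy bounded over long runs, in contrast to naive Euler stepping.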
Ahrens, Thomas J.; Okeefe, J. D.; Smither, C.; Takata, T.
1991-01-01
In the course of carrying out finite difference calculations, it was discovered that for large craters a previously unrecognized type of crater (diameter) growth occurred, which was called lip wave propagation. This type of growth is illustrated for the impact of a 1000 km (2a) silicate bolide at 12 km/sec (U) onto a silicate half-space at earth gravity (1 g). The von Mises crustal strength is 2.4 kbar. The motion at the crater lip associated with this wave-type phenomenon is up, outward, and then down, similar to the particle motion of a surface wave. It is shown that the crater diameter has grown from d/a of approximately 25 to d/a of approximately 4 via lip propagation from Ut/a = 5.56 to 17.0, during the time when rebound occurs. A new code is being used to study the partitioning of energy and momentum and the cratering efficiency, with self-gravity, for finite-sized objects rather than the previously discussed planetary half-space problems. These are important and fundamental subjects which can be addressed with smoothed particle hydrodynamics (SPH) codes. The SPH method was used to model various problems in astrophysics and planetary physics. The initial work demonstrates that the energy budgets for normal and oblique impacts are distinctly different from earlier calculations for a silicate projectile impacting a silicate half-space. Motivated by the first striking radar images of Venus obtained by Magellan, the effect of the atmosphere on impact cratering was studied. In order to further quantify the processes of meteor break-up and trajectory scattering upon break-up, the reentry physics of meteors striking Venus' atmosphere versus that of the Earth was studied.
On the Origins of Calculation Abilities
Directory of Open Access Journals (Sweden)
A. Ardila
1993-01-01
A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimination disturbances, semantic aphasia, and acalculia are proposed to comprise a single neuropsychological syndrome associated with left angular gyrus damage. A classification of calculation disturbances resulting from brain damage is presented. It is emphasized that, using historical/anthropological analysis, it becomes evident that acalculia, finger agnosia, and disorders in right–left discrimination (and, in general, in the use of spatial concepts) must constitute a single clinical syndrome, resulting from the disruption of some common brain activity and the impairment of common cognitive mechanisms.
Giantomassi, Matteo; Huhs, Georg; Waroquiers, David; Gonze, Xavier
2014-03-01
Many-Body Perturbation Theory (MBPT) defines a rigorous framework for the description of excited-state properties based on the Green's function formalism. Within MBPT, one can calculate charged excitations using e.g. Hedin's GW approximation for the electron self-energy. In the same framework, neutral excitations are also well described through the solution of the Bethe-Salpeter equation (BSE). In this talk, we report on the recent developments concerning the parallelization of the MBPT algorithms available in the ABINIT code (www.abinit.org). In particular, we discuss how to improve the parallel efficiency thanks to a hybrid version that employs MPI for the coarse-grained parallelization and OpenMP (a de facto standard for parallel programming on shared-memory architectures) for the fine-grained parallelization of the most CPU-intensive parts. Benchmark results obtained with the new implementation are discussed. Finally, we present results for the GW corrections of amorphous SiO2 in the presence of defects and the BSE absorption spectrum. This work has been supported by the PRACE project (Partnership for Advanced Computing in Europe, http://www.prace-ri.eu).
The rating reliability calculator
Directory of Open Access Journals (Sweden)
Solomon David J
2004-04-01
Abstract Background: Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods: The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server to calculate the reliability and other statistics describing the ratings. Results: When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion: This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
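The Spearman-Brown prophecy formula mentioned in the results is simple enough to state directly; a sketch (not the PHP utility itself):

```python
def spearman_brown(r, n):
    """Predicted reliability of the mean of n ratings, given the
    reliability r of a single rating: r_n = n*r / (1 + (n-1)*r)."""
    return n * r / (1.0 + (n - 1.0) * r)

# If one judge's ratings have reliability 0.5, averaging two judges
# is predicted to raise reliability to about 0.67.
print(spearman_brown(0.5, 2))
```

The formula also answers the inverse question: how many judges are needed to reach a target reliability, by solving for n.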
Cosmological Calculations on the GPU
Bard, Deborah; Allen, Mark T; Yepremyan, Hasmik; Kratochvil, Jan M
2012-01-01
Cosmological measurements require the calculation of nontrivial quantities over large datasets. The next generation of survey telescopes (such as DES, PanSTARRS, and LSST) will yield measurements of billions of galaxies. The scale of these datasets, and the nature of the calculations involved, make cosmological calculations ideal models for implementation on graphics processing units (GPUs). We consider two cosmological calculations, the two-point angular correlation function and the aperture mass statistic, and aim to improve the calculation time by constructing code for calculating them on the GPU. Using CUDA, we implement the two algorithms on the GPU and compare the calculation speeds to comparable code run on the CPU. We obtain speed-ups of 10-180x relative to the same calculations performed on the CPU. The code has been made publicly available.
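The kernel at the heart of the two-point angular correlation function is a count of angular separations over all galaxy pairs; a naive O(N^2) CPU sketch in NumPy (the GPU version parallelizes exactly this loop over pairs; the function name and binning choices are our assumptions):

```python
import numpy as np

def pair_count_histogram(ra, dec, bins):
    """Count angular separations between all galaxy pairs and
    histogram them into angular bins (radians). ra/dec in degrees."""
    ra, dec = np.radians(ra), np.radians(dec)
    # Unit vectors on the sphere for each galaxy.
    xyz = np.column_stack([np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)])
    # Angular separation from dot products of unit vectors.
    cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)
    theta = np.arccos(cosang)
    iu = np.triu_indices(len(ra), k=1)  # each pair counted once
    counts, _ = np.histogram(theta[iu], bins=bins)
    return counts
```

In practice, histograms for data-data (DD), data-random (DR), and random-random (RR) catalogues are combined with an estimator such as Landy-Szalay, w(theta) = (DD - 2DR + RR)/RR.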
New Arsenic Cross Section Calculations
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-04
This report presents calculations for the new arsenic cross section. Cross sections for ^{73,74,75}As above the resonance range were calculated with a newly developed Hauser-Feshbach code, CoH3.
Paramedics’ Ability to Perform Drug Calculations
Directory of Open Access Journals (Sweden)
Eastwood, Kathryn J
2009-11-01
Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000; for nurses, the research dates back to the late 1930s. Yet there have been no studies investigating undergraduate paramedic students' ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We also reviewed references from retrieved articles. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from the reference lists of retrieved articles. Of these, 59 were considered relevant; after review, only three met the inclusion criteria. All three noted some level of mathematical deficiency amongst their subjects. Conclusions: Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify whether undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. [WestJEM. 2009;10:240-243.]
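For readers outside the field, the calculations at issue are typically of the standard "desired over stock" form; a minimal Python sketch (the formula is the textbook one, but the function name and the example figures are purely illustrative):

```python
def volume_to_draw(desired_dose_mg: float,
                   stock_dose_mg: float,
                   stock_volume_ml: float) -> float:
    """Classic dose-to-volume formula:
    volume (mL) = desired dose / stock dose * stock volume."""
    if stock_dose_mg <= 0 or stock_volume_ml <= 0:
        raise ValueError("stock dose and volume must be positive")
    return desired_dose_mg / stock_dose_mg * stock_volume_ml

# 2.5 mg required from a 10 mg / 1 mL ampoule:
print(volume_to_draw(2.5, 10, 1))  # 0.25 (mL)
```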
Institute of Scientific and Technical Information of China (English)
魏兴瑜; 周涛; 陆惠玲; 王文文
2015-01-01
PET/CT medical image fusion has important application value for medical image analysis and clinical diagnosis; fusing PET/CT images enriches the image information and improves its accuracy. Aiming at the PET/CT fusion problem, this paper proposes an adaptive PET/CT fusion algorithm based on the dual-tree complex wavelet transform. Registered PET and CT images are decomposed by the dual-tree complex wavelet transform (DTCWT) into low- and high-frequency components. Because the low-frequency sub-images concentrate most of the source images' energy and determine the image contours, a fusion rule based on an adaptive Gaussian membership function is adopted for them. For the high-frequency sub-images, the correlation and fuzziness between neighbouring pixels are considered: on the first decomposition level, a fusion rule combining a Gaussian membership function with a 3x3 neighbourhood window is used; on the second level, a region-variance fusion rule is used. Finally, to verify the effectiveness and feasibility of the algorithm, three sets of experiments were carried out: comparison with other pixel-level fusion algorithms; evaluation of fusion quality using information entropy, mean, standard deviation, and mutual information; and comparison of different fusion rules within the DTCWT. The experimental results show that the algorithm improves information entropy by 7.23% and mutual information by 17.98%, indicating that it is an effective multimodal medical image fusion method.
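The region-variance selection used for the second-level high-frequency sub-bands can be sketched generically; a minimal NumPy illustration of variance-based coefficient selection, not the authors' exact implementation (window handling and names are our assumptions):

```python
import numpy as np

def fuse_by_region_variance(coef_a, coef_b, win=3):
    """Fuse two high-frequency sub-bands by picking, per pixel, the
    coefficient whose local (win x win, win odd) neighbourhood has the
    larger variance: the more 'active' band wins at each location."""
    def local_var(c):
        pad = win // 2
        p = np.pad(c, pad, mode="reflect")
        # Stack every window shift and take the variance across them.
        shifts = [p[i:i + c.shape[0], j:j + c.shape[1]]
                  for i in range(win) for j in range(win)]
        return np.var(np.stack(shifts), axis=0)
    mask = local_var(coef_a) >= local_var(coef_b)
    return np.where(mask, coef_a, coef_b)
```

The same structure accommodates the first-level rule by swapping the variance measure for a Gaussian membership score.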
Global nuclear-structure calculations
Energy Technology Data Exchange (ETDEWEB)
Moeller, P.; Nix, J.R.
1990-04-20
The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the {epsilon}{sub 2} and {epsilon}{sub 4} used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and {beta}-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential.
Equilibrium calculations of firework mixtures
Energy Technology Data Exchange (ETDEWEB)
Hobbs, M.L. [Sandia National Labs., Albuquerque, NM (United States); Tanaka, Katsumi; Iida, Mitsuaki; Matsunaga, Takehiro [National Inst. of Materials and Chemical Research, Tsukuba, Ibaraki (Japan)
1994-12-31
Thermochemical equilibrium calculations have been used to calculate detonation conditions for typical firework components including three report charges, two display charges, and black powder which is used as a fuse or launch charge. Calculations were performed with a modified version of the TIGER code which allows calculations with 900 gaseous and 600 condensed product species at high pressure. The detonation calculations presented in this paper are thought to be the first report on the theoretical study of firework detonation. Measured velocities for two report charges are available and compare favorably to predicted detonation velocities. However, the measured velocities may not be true detonation velocities. Fast deflagration rather than an ideal detonation occurs when reactants contain significant amounts of slow reacting constituents such as aluminum or titanium. Despite such uncertainties in reacting pyrotechnics, the detonation calculations do show the complex nature of condensed phase formation at elevated pressures and give an upper bound for measured velocities.
CALCULATION OF LASER CUTTING COSTS
Directory of Open Access Journals (Sweden)
Bogdan Nedic
2016-09-01
The paper presents a description of methods of metal cutting and the calculation of treatment costs, based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables calculating the cost of laser cutting, comparing it with the costs of other unconventional methods, and producing documentation that consists of reports on estimated costs.
IOL Power Calculation after Corneal Refractive Surgery
Directory of Open Access Journals (Sweden)
Maddalena De Bernardo
2014-01-01
Purpose. To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients who underwent corneal refractive surgery (CRS). Methods. A PubMed literature search of all published articles on keywords associated with IOL power calculation and corneal refractive surgery, as well as of the reference lists of retrieved articles, was performed. Results. A total of 33 peer-reviewed articles dealing with methods that try to overcome the problem of calculating the IOL power in patients who underwent CRS were found. According to the information needed to overcome this problem, the methods were divided into two main categories: 18 methods based on knowledge of the patient's clinical history and 15 methods that do not require such knowledge. The first group was further divided into five subgroups based on the parameters needed to make the calculation. Conclusion. In the light of our findings, to avoid unpleasant postoperative surprises, we suggest using only those methods that have shown good results in a large number of patients, possibly by averaging the results obtained with these methods.
Present Status of Radiotherapy in Clinical Practice
Duehmke, Eckhart
Aims of radiation oncology are cure of malignant diseases and - at the same time - preservation of anatomy (e.g. female breast, uterus, prostate) and organ functions (e.g. brain, eye, voice, sphincter ani). At present, methods and results of clinical radiotherapy (RT) are based on experiences with the natural history and radiobiology of malignant tumors in properly defined situations, as well as on technical developments since World War II in geometrical and biological treatment planning in teletherapy and brachytherapy. Radiobiological research revealed tolerance limits of healthy tissues to be respected, effective total treatment doses of high cure probability depending on histology and tumor volume, and - more recently - altered fractionation schemes to be adapted to specific growth fractions and intrinsic radiosensitivities of clonogenic tumor cells. In addition, Biological Response Modifiers (BRM), such as cis-platinum, oxygen and hyperthermia, may steepen cell survival curves of hypoxic tumor cells; others - such as tetrachlorodecaoxide (TCDO) - may enhance repair of normal tissues. Computer-assisted techniques in geometrical RT planning based on individual healthy and pathologic anatomy (CT, MRT) provide high-precision RT for well-defined brain lesions using dedicated linear accelerators (stereotaxy). CT-based individual tissue compensators help with homogenization of distorted dose distributions in magna-field irradiation for malignant lymphomas and with total body irradiation (TBI) before allogeneic bone marrow transplantation, e.g. for leukemia. RT with fast neutrons, Boron Neutron Capture Therapy (BNCT), and RT with protons and heavy ions need to be tested in randomized trials before implementation into clinical routine.
Calculator. Owning a Small Business.
Parma City School District, OH.
Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…
Calculation of Spectra of Solids:
DEFF Research Database (Denmark)
Lindgård, Per-Anker
1975-01-01
The Gilat-Raubenheimer method simplified to tetrahedron division is used to calculate the real and imaginary part of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test for numerical accuracy. The condu...
Closure and Sealing Design Calculation
Energy Technology Data Exchange (ETDEWEB)
T. Lahnalampi; J. Case
2005-08-26
The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for shafts and ramps, and to identify boreholes that require sealing in order to limit the potential for water infiltration. In addition, this calculation provides a description of the magma bulkhead, a feature that can reduce the consequences of an igneous event intersecting the repository, and includes a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes that are found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures which show the location and geometry of the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. The closure and sealing calculation also provides input for Post Closure Activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository, and the detailed design of the magma bulkhead, will be the subject of separate analyses or calculations. Post
SU-F-303-12: Implementation of MR-Only Simulation for Brain Cancer: A Virtual Clinical Trial
Energy Technology Data Exchange (ETDEWEB)
Glide-Hurst, C; Zheng, W; Kim, J; Wen, N; Chetty, I J [Henry Ford Health System, Detroit, MI (United States)
2015-06-15
Purpose: To perform a retrospective virtual clinical trial using an MR-only workflow for a variety of brain cancer cases by incorporating novel imaging sequences, tissue segmentation using phase images, and an innovative synthetic CT (synCT) solution. Methods: Ten patients (16 lesions) were evaluated using a 1.0T MR-SIM including UTE-DIXON imaging (TE = 0.144/3.4/6.9ms). Bone-enhanced images were generated from DIXON-water/fat and inverted UTE. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating intersection and Dice similarity coefficients (DSC) using CT-SIM as ground truth. SynCTs were generated using voxel-based weighted summation incorporating T2, FLAIR, UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized HU differences between synCT and CT-SIM. Dose was recalculated on synCTs; differences were quantified using planar gamma analysis (2%/2 mm dose difference/distance to agreement) at isocenter. Digitally reconstructed radiographs (DRRs) were compared. Results: On average, air maps intersected 80.8 ±5.5% (range: 71.8–88.8%) between MR-SIM and CT-SIM yielding DSCs of 0.78 ± 0.04 (range: 0.70–0.83). Whole-brain MAE between synCT and CT-SIM was 160.7±8.8 HU, with the largest uncertainty arising from bone (MAE = 423.3±33.2 HU). Gamma analysis revealed pass rates of 99.4 ± 0.04% between synCT and CT-SIM for the cohort. Dose volume histogram analysis revealed that synCT tended to yield slightly higher doses. Organs at risk such as the chiasm and optic nerves were most sensitive due to their proximities to air/bone interfaces. DRRs generated via synCT and CT-SIM were within clinical tolerances. Conclusion: Our approach for MR-only simulation for brain cancer treatment planning yielded clinically acceptable results relative to the CT-based benchmark. While slight dose differences were observed, reoptimization of treatment plans and improved image registration can address
Practical astronomy with your calculator
Duffett-Smith, Peter
1989-01-01
Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr
Transfer Area Mechanical Handling Calculation
Energy Technology Data Exchange (ETDEWEB)
B. Dianda
2004-06-23
This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only, to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of Section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use
Clinical Competence/Clinical Credibility.
Goorapah, David
1997-01-01
In interviews with 10 nurse teachers and 10 clinicians, respondents could describe clinical competence more fluently than clinical credibility. Responses raised the question of whether nursing teachers must be clinically competent/credible to teach nursing. (SK)
MFTF-B performance calculations
Energy Technology Data Exchange (ETDEWEB)
Thomassen, K.I.; Jong, R.A.
1982-12-06
In this report we document the operating scenario models and calculations as they exist and comment on those aspects of the models where performance is sensitive to the assumptions that are made. We also focus on areas where improvements need to be made in the mathematical descriptions of phenomena, work which is in progress. To illustrate the process of calculating performance, and to be very specific in our documentation, part 2 of this report contains the complete equations and sequence of calculations used to determine parameters for the MARS mode of operation in MFTF-B. Values for all variables for a particular set of input parameters are also given there. The point design so described is typical, but should be viewed as a snapshot in time of our ongoing estimations and predictions of performance.
Insertion device calculations with mathematica
Energy Technology Data Exchange (ETDEWEB)
Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high-level languages like Fortran. In the present era, there are higher-level programming environments like IDL(R), MATLAB(R), and Mathematica(R) in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.
The Collective Practice of Calculation
DEFF Research Database (Denmark)
Schrøder, Ida
The calculation of costs plays an increasingly large role in the decision-making processes of public sector human service organizations. This has brought scholars of management accounting to investigate the relationship between caring professions and demands to make economic entities of the services... builds on the idea that professions are hybrids by introducing the notion of qualculation as an entry point to investigate decision-making in child protection work as an extreme case of calculating on the basis of elements other than quantitative numbers. The analysis reveals that it takes both calculation and judgement to reach decisions to invest in social services. The line is not drawn between the two, but between the material arrangements that make decisions possible. This implies that insisting on qualitatively based decisions gives the professionals agency to collectively engage in practical...
Marco-Ruiz, Luis; Maldonado, J Alberto; Karlsen, Randi; Bellika, Johan G
2015-01-01
Clinical Decision Support Systems (CDSS) help to improve health care and reduce costs. However, the lack of knowledge management and modelling hampers their maintenance and reuse. Current EHR standards and terminologies allow the semantic representation of the data and knowledge of CDSSs, boosting their interoperability, reuse and maintenance. This paper presents the modelling process of respiratory conditions' symptoms and signs by a multidisciplinary team of clinicians and information architects with the help of openEHR, SNOMED CT and clinical information modelling tools for a CDSS. The information model of the CDSS was defined by means of an archetype, and the knowledge model was implemented by means of a SNOMED CT-based ontology.
Friction and wear calculation methods
Kragelsky, I V; Kombalov, V S
1981-01-01
Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a
Multifragmentation calculated with relativistic forces
Feldmeier, H; Papp, G
1995-01-01
A saturating hamiltonian is presented in a relativistically covariant formalism. The interaction is described by scalar and vector mesons, with coupling strengths adjusted to nuclear matter. No explicit density dependence is assumed. The hamiltonian is applied in a QMD calculation to determine the fragment distribution in O + Br collisions at different energies (50-200 MeV/u) to test the applicability of the model at low energies. The results are compared with experiment and with previous non-relativistic calculations. PACS: 25.70Mn, 25.75.+r
Molecular calculations with B functions
Steinborn, E O; Ema, I; López, R; Ramírez, G
1998-01-01
A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals, and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules.
Clinical reasoning as social deliberation
DEFF Research Database (Denmark)
2014-01-01
In this paper I will challenge the individualistic model of clinical reasoning. I will argue that sometimes clinical practice is rather machine-like, and information is called to mind and weighed, but the clinician is not just calculating how to use particular means to reach fixed ends. Often...
Methods for Melting Temperature Calculation
Hong, Qi-Jun
Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
Ab Initio Calculations of Oxosulfatovanadates
DEFF Research Database (Denmark)
Frøberg, Torben; Johansen, Helge
1996-01-01
Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with second-order perturbation theory have been used to study the geometry, the electron density, and the electronic spectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stable...
Dead reckoning calculating without instruments
Doerfler, Ronald W
1993-01-01
No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner
ITER Port Interspace Pressure Calculations
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan J [ORNL; Van Hove, Walter A [ORNL
2016-01-01
The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.
Calculations for cosmic axion detection
Krauss, L.; Moody, J.; Wilczek, F.; Morris, D. E.
1985-01-01
Calculations are presented, using properly normalized couplings and masses for Dine-Fischler-Srednicki axions, of power rates and signal temperatures for axion-photon conversion in microwave cavities. The importance of the galactic-halo axion line shape is emphasized. Spin-coupled detection as an alternative to magnetic-field-coupled detection is mentioned.
Theoretical Calculation of MMF's Bandwidth
Institute of Scientific and Technical Information of China (English)
LI Xiao-fu; JIANG De-sheng; YU Hai-hu
2004-01-01
The difference between over-filled launch bandwidth (OFL BW) and restricted mode launch bandwidth (RML BW) is described. A theoretical model is established to calculate the OFL BW of graded-index multimode fiber (GI-MMF), and the result is useful in guiding the modification of the manufacturing method.
Data Acquisition and Flux Calculations
DEFF Research Database (Denmark)
Rebmann, C.; Kolle, O; Heinesch, B;
2012-01-01
In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....
Fluence-convolution broad-beam (FCBB) dose calculation.
Lu, Weiguo; Chen, Mingli
2010-12-07
IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF, followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
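The two-step structure described in the abstract (lateral convolution, then central-axis lookup with divergence correction) can be sketched as follows; a heavily simplified NumPy illustration under our own assumptions about the CAX table format and units, not the commissioned algorithm itself:

```python
import numpy as np

def fcbb_dose_plane(fluence, lsf, cax_table, radiological_depth, sad=1000.0):
    """Toy FCBB-style dose plane. Step 1: 2-D convolution of the
    fluence map with the lateral spread function (naive direct
    cross-correlation; identical to convolution for a symmetric LSF).
    Step 2: scale each pixel by the central-axis depth dose looked up
    at its radiological depth, with an inverse-square divergence
    correction. cax_table maps integer depth (mm) -> relative dose;
    sad is the source-axis distance in mm. All names and table
    conventions here are our assumptions."""
    kh, kw = lsf.shape  # odd-sized kernel assumed
    ph, pw = kh // 2, kw // 2
    padded = np.pad(fluence, ((ph, ph), (pw, pw)))
    lateral = np.zeros_like(fluence, dtype=float)
    for i in range(kh):
        for j in range(kw):
            lateral += lsf[i, j] * padded[i:i + fluence.shape[0],
                                          j:j + fluence.shape[1]]
    depths = np.clip(radiological_depth.astype(int), 0, len(cax_table) - 1)
    cax = cax_table[depths]                               # CAX lookup
    inv_sq = (sad / (sad + radiological_depth)) ** 2      # divergence
    return lateral * cax * inv_sq
```

A real implementation would do the convolution in the beam's eye view and trace rays through the density grid to obtain the radiological depths.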
An electronic application for rapidly calculating Charlson comorbidity score
Directory of Open Access Journals (Sweden)
Jani Ashesh B
2004-12-01
Full Text Available Abstract Background Uncertainty regarding comorbid illness and the ability to tolerate aggressive therapy has led to minimal enrollment of elderly cancer patients into clinical trials and often substandard treatment. Increasingly, comorbid illness scales have proven useful in identifying subgroups of elderly patients who are more likely to tolerate and benefit from aggressive therapy. Unfortunately, the use of such scales has yet to be widely integrated into either clinical practice or clinical trials research. Methods This article reviews evidence for the validity of the Charlson Comorbidity Index (CCI) in oncology and provides a Microsoft Excel (MS Excel) macro for the rapid and accurate calculation of CCI score. The interaction of comorbidity and malignant disease and the validation of the Charlson Index in oncology are discussed. Results The CCI score is based on one-year mortality data from internal medicine patients admitted to an inpatient setting and is the most widely used comorbidity index in oncology. An MS Excel macro file was constructed for calculating the CCI score using Microsoft Visual Basic. The macro is provided for download and dissemination. The CCI has been widely used and validated throughout the oncology literature and has demonstrated utility for most major cancers. The MS Excel CCI macro provides a rapid method for calculating CCI score with or without age adjustments. The calculator removes difficulty of score calculation as a limitation for integration of the CCI into clinical research. The simple nature of the MS Excel CCI macro and the CCI itself makes it ideal for integration into emerging electronic medical records systems. Conclusions The increasing elderly population and concurrent increase in oncologic disease has made understanding the interaction between age and comorbid illness on life expectancy increasingly important. The MS Excel CCI macro provides a means of increasing the use of the CCI scale in clinical
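The macro's core computation, summing condition weights with an optional age adjustment, can be sketched in Python. The weights below are the commonly published 1987 Charlson weights and the age rule (+1 per decade from age 50, capped at 4) is one common variant; the article's Excel macro may differ in detail.

```python
# Commonly published Charlson weights (Charlson et al., 1987); the Excel
# macro described in the article may differ in detail.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1, "cerebrovascular_disease": 1,
    "dementia": 1, "chronic_pulmonary_disease": 1,
    "connective_tissue_disease": 1, "peptic_ulcer_disease": 1,
    "mild_liver_disease": 1, "diabetes": 1,
    "hemiplegia": 2, "moderate_severe_renal_disease": 2,
    "diabetes_with_end_organ_damage": 2, "any_tumor": 2,
    "leukemia": 2, "lymphoma": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6, "aids": 6,
}

def charlson_score(conditions, age=None):
    """Sum the weights of the comorbidities present; optionally add the
    age adjustment (+1 per decade from age 50, capped at 4 in this variant)."""
    score = sum(CHARLSON_WEIGHTS[c] for c in conditions)
    if age is not None and age >= 50:
        score += min((age - 40) // 10, 4)
    return score
```

For example, a 72-year-old with AIDS scores 6 for the condition plus 3 for age.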
CONTRIBUTION FOR MINING ATMOSPHERE CALCULATION
Directory of Open Access Journals (Sweden)
Franica Trojanović
1989-12-01
Full Text Available Humid air is an unavoidable feature of the mining atmosphere, playing a significant role in defining the climatic conditions as well as the permitted circumstances for normal mining work. Saturated humid air prevents heat conduction from the human body by means of evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity by either direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures, including tables, diagrams or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure should also be taken from the steam table. A new method without the use of diagrams or tables, established on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
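The classical calculation the abstract mentions can be sketched directly: Sprung's psychrometric formula combined with a saturation-pressure approximation (the Magnus formula stands in here for the steam-table lookup). The psychrometer constant A = 6.62e-4 per kelvin is the commonly used value, an assumption for illustration rather than a detail from the paper.

```python
import math

def saturation_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water (hPa),
    standing in for the steam-table lookup of the classical procedure."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity_sprung(t_dry, t_wet, p_hpa=1013.25):
    """Relative humidity (%) from dry- and wet-bulb temperatures via
    Sprung's formula: e = E'(t_wet) - A * p * (t_dry - t_wet),
    with the common psychrometer constant A = 6.62e-4 per kelvin."""
    e = saturation_pressure_hpa(t_wet) - 6.62e-4 * p_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_pressure_hpa(t_dry)

rh = relative_humidity_sprung(25.0, 20.0)  # dry bulb 25 C, wet bulb 20 C
```

With equal dry- and wet-bulb readings the formula returns saturation (100%), as it should.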
Archimedes' calculations of square roots
Davies, E B
2011-01-01
We reconsider Archimedes' evaluations of several square roots in 'Measurement of a Circle'. We show that several methods proposed over the last century or so for his evaluations fail one or more criteria of plausibility. We also provide internal evidence that he probably used an interpolation technique. The conclusions are relevant to the precise calculations by which he obtained upper and lower bounds on pi.
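Archimedes' bounds can be checked exactly with rational arithmetic. The sketch below verifies 265/153 < sqrt(3) < 1351/780 and tightens bounds by plain bisection over the rationals, which is a stand-in for demonstration, not the interpolation technique the paper reconstructs.

```python
from fractions import Fraction

def tighten_sqrt_bounds(n, lo, hi, steps=20):
    """Tighten rational bounds lo < sqrt(n) < hi by bisection. This is NOT
    Archimedes' method (the paper argues he used interpolation); it only
    shows that such rational bounds can be verified and refined by pure
    integer arithmetic, squaring and comparing."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = tighten_sqrt_bounds(3, Fraction(1), Fraction(2))

# Archimedes' own bounds from 'Measurement of a Circle', checkable exactly:
arch_lo, arch_hi = Fraction(265, 153), Fraction(1351, 780)
```

Squaring the fractions reduces each bound to an integer comparison (265^2 < 3*153^2 and 1351^2 > 3*780^2), the same kind of check available in antiquity.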
Parallel plasma fluid turbulence calculations
Energy Technology Data Exchange (ETDEWEB)
Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.
1994-12-31
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2004-09-10
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the
Calculation of gas turbine characteristic
Mamaev, B. I.; Murashko, V. L.
2016-04-01
The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.
Rate calculation with colored noise
Bartsch, Thomas; Benito, R M; Borondo, F
2016-01-01
The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system under the action of a stochastic colored driving. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we have obtained a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.
Electronics reliability calculation and design
Dummer, Geoffrey W A; Hiller, N
1966-01-01
Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea
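A minimal example of the kind of calculation such a text covers: the constant-failure-rate (exponential) model and a series system. The component failure rates below are invented for illustration.

```python
import math

def reliability(failure_rate_per_hr, hours):
    """Exponential (constant-hazard) reliability model, R(t) = exp(-lambda*t).
    A constant failure rate is the standard simplifying assumption of
    classical electronics reliability prediction."""
    return math.exp(-failure_rate_per_hr * hours)

def series_system(failure_rates, hours):
    """In a series system every part must work, so failure rates add:
    R_sys(t) = exp(-(sum of lambdas) * t)."""
    return reliability(sum(failure_rates), hours)

# Hypothetical board: three components with per-hour failure rates
rates = [2e-6, 5e-6, 1e-6]
r_1000h = series_system(rates, 1000.0)  # system reliability over 1000 h
mtbf = 1.0 / sum(rates)                 # mean time between failures, hours
```

Note that adding components always lowers series reliability, which is why parts-count predictions of this kind become pessimistic for large assemblies.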
Band calculation of lonsdaleite Ge
Chen, Pin-Shiang; Fan, Sheng-Ting; Lan, Huang-Siang; Liu, Chee Wee
2017-01-01
The band structure of Ge in the lonsdaleite phase is calculated using first principles. Lonsdaleite Ge has a direct band gap at the Γ point. For the conduction band, the Γ valley is anisotropic with the low transverse effective mass on the hexagonal plane and the large longitudinal effective mass along the c axis. For the valence band, both heavy-hole and light-hole effective masses are anisotropic at the Γ point. The in-plane electron effective mass also becomes anisotropic under uniaxial tensile strain. The strain response of the heavy-hole mass is opposite to the light hole.
Semiclassical calculation of decay rates
Bessa, A; Fraga, E S
2008-01-01
Several relevant aspects of quantum-field processes can be well described by semiclassical methods. In particular, the knowledge of non-trivial classical solutions of the field equations, and the thermal and quantum fluctuations around them, provide non-perturbative information about the theory. In this work, we discuss the calculation of the one-loop effective action from the semiclassical viewpoint. We intend to use this formalism to obtain an accurate expression for the decay rate of non-static metastable states.
Digital calculations of engine cycles
Starkman, E S; Taylor, C Fayette
1964-01-01
Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var
The Dental Trauma Internet Calculator
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg
2012-01-01
Background/Aim Prediction tools are increasingly used to inform patients about the future dental health outcome. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods The Internet risk calculator at the Dental Trauma Guide...... provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org The database includes 2191 traumatized permanent teeth from 1282 patients that were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark...
Calculational Tool for Skin Contamination Dose Assessment
Hill, R L
2002-01-01
A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.
Calculation of sound propagation in fibrous materials
DEFF Research Database (Denmark)
Tarnow, Viggo
1996-01-01
Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of fibres and the mass density of glass wools as parameters. The calculations are compared with measurements.
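For comparison, attenuation in fibrous absorbers is often estimated with the empirical Delany-Bazley model, which is parameterized by airflow resistivity rather than the fibre diameter and mass density used in the paper. The coefficients below are the standard published ones; treat the sketch as illustrative only.

```python
import math

def delany_bazley(freq_hz, sigma, rho0=1.204, c0=343.0):
    """Empirical Delany-Bazley estimate for sound in a fibrous absorber.
    sigma is the airflow resistivity (Pa*s/m^2); the model is considered
    valid roughly for 0.01 < rho0*f/sigma < 1. Returns the attenuation
    coefficient (Np/m) and the real part of the wavenumber (rad/m), i.e.
    the imaginary and real parts of the complex propagation constant."""
    X = rho0 * freq_hz / sigma
    k0 = 2.0 * math.pi * freq_hz / c0
    attenuation = k0 * 0.189 * X ** -0.595
    wavenumber = k0 * (1.0 + 0.0978 * X ** -0.700)
    return attenuation, wavenumber

# Hypothetical glass wool with resistivity 20000 Pa*s/m^2 at 1 kHz
att, k_real = delany_bazley(1000.0, sigma=20000.0)
```

The real part of the wavenumber exceeds the free-air value, i.e. the phase velocity in the wool is below c0, consistent with measurements on glass wools.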
Flow Field Calculations for Afterburner
Institute of Scientific and Technical Information of China (English)
ZhaoJianxing; LiuQuanzhong; 等
1995-01-01
In this paper a calculation procedure for simulating the combustion flow in the afterburner with the heat shield, flame stabilizer and the contracting nozzle is described and evaluated by comparison with experimental data. The modified two-equation κ-ε model is employed to consider the turbulence effects, and the κ-ε-g turbulent combustion model is used to determine the reaction rate. To take into account the influence of heat radiation on the gas temperature distribution, a heat flux model is applied to predictions of heat flux distributions. The solution domain spanned the entire region between the centerline and the afterburner wall, with the heat shield represented as a blockage to the mesh. The enthalpy equation and the wall boundary of the heat shield require special handling for the two passages in the afterburner. In order to make the computer program suitable for engineering applications, a subregional scheme is developed for calculating flow fields of complex geometries. The computational grids employed are 100×100 and 333×100 (non-uniformly distributed). The numerical results are compared with experimental data. Agreement between predictions and measurements shows that the numerical method and the computational program used in the study are fairly reasonable and appropriate for preliminary design of the afterburner.
47 CFR 1.1623 - Probability calculation.
2010-10-01
... Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be... determine their new intermediate probabilities. (g) Multiply each applicant's probability pursuant...
Verification of Oncentra brachytherapy planning using independent calculation
Safian, N. A. M.; Abdullah, N. H.; Abdullah, R.; Chiang, C. S.
2016-03-01
This study investigated a verification technique for treatment-plan quality assurance in brachytherapy. It aimed to verify point doses in 192Ir high-dose-rate (HDR) brachytherapy between the Oncentra Masterplan treatment planning system (TPS) and independent calculation software at the rectum, bladder and prescription points, for both paired-ovoid and full-catheter set-ups. The Oncentra TPS output text files were automatically loaded into a verification programme developed on spreadsheets. The output consists of the source coordinates, the desired calculation-point coordinates and the dwell times of a patient plan. The source strength and reference dates were entered into the programme, and the point doses were then calculated independently. The programme compares its calculated point doses with the corresponding Oncentra TPS values. For the 40 clinical cases, consisting of two fractions for each of 20 patients, the results, given as percentage differences, show agreement between the TPS and the independent calculation within 2%. The programme takes only a few minutes to use and is recommended as a verification technique in clinical brachytherapy dosimetry.
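A spreadsheet-style independent point-dose check of this kind reduces to an inverse-square sum over dwell positions. The dose-rate constant used below, and the simplification of the TG-43 radial-dose and anisotropy functions to unity, are assumptions for illustration, not details taken from the paper.

```python
import math

def point_dose_cgy(point, dwells, sk, dose_rate_const=1.109):
    """Simplified secondary check for an HDR 192Ir plan:
    D = Sk * Lambda * sum_i(t_i / r_i^2), with Sk in U (cGy*cm^2/h),
    dwell times in seconds, and coordinates in cm. The TG-43 radial dose
    and anisotropy functions are approximated as 1, a common shortcut
    for a quick independent point-dose estimate."""
    total = 0.0
    for (x, y, z), t_s in dwells:
        r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
        total += (t_s / 3600.0) / r2     # dwell hours over squared distance
    return sk * dose_rate_const * total  # dose in cGy

# Hypothetical plan fragment: two dwell positions, a point 2 cm away on axis
dwells = [((0.0, 0.0, 0.0), 10.0), ((0.5, 0.0, 0.0), 12.0)]
dose = point_dose_cgy((2.0, 0.0, 0.0), dwells, sk=40000.0)
```

Comparing such a sum against the TPS point dose, with an agreement tolerance of a few percent, is exactly the role the paper's spreadsheet programme plays.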
Energy Technology Data Exchange (ETDEWEB)
Goodman, Karyn A., E-mail: goodmank@mskcc.org [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Khalid, Najma [Quality Research in Radiation Oncology, American College of Radiology Clinical Research Center, Philadelphia, Pennsylvania (United States); Kachnic, Lisa A. [Department of Radiation Oncology, Boston University Medical Center, Boston, Massachusetts (United States); Minsky, Bruce D. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Crozier, Cheryl; Owen, Jean B. [Quality Research in Radiation Oncology, American College of Radiology Clinical Research Center, Philadelphia, Pennsylvania (United States); Devlin, Phillip M. [Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women's Hospital, Boston, Massachusetts (United States); Thomas, Charles R. [Department of Radiation Medicine, Knight Cancer Institute at the Oregon Health and Science University, Portland, Oregon (United States)
2013-02-01
Background: The specific aim was to determine national patterns of radiation therapy (RT) practice in patients treated for stage IB-IV (nonmetastatic) gastric cancer (GC). Methods and Materials: A national process survey of randomly selected US RT facilities was conducted which retrospectively assessed demographics, staging, geographic region, practice setting, and treatment by using on-site record review of eligible GC cases treated from 2005 to 2007. Three clinical performance measures (CPMs), (1) use of computed tomography (CT)-based treatment planning; (2) use of dose volume histograms (DVHs) to evaluate RT dose to the kidneys and liver; and (3) completion of RT within the prescribed time frame; and emerging quality indicators, (i) use of intensity modulated RT (IMRT); (ii) use of image-guided tools (IGRT) other than CT for RT target delineation; and (iii) use of preoperative RT, were assessed. Results: CPMs were computed for 250 eligible patients at 45 institutions (median age, 62 years; 66% male; 60% Caucasian). Using 2000 American Joint Committee on Cancer criteria, 13% of patients were stage I, 29% were stage II, 32% were stage IIIA, 10% were stage IIIB, and 12% were stage IV. Most patients (43%) were treated at academic centers, 32% were treated at large nonacademic centers, and 25% were treated at small to medium sized facilities. Almost all patients (99.5%) underwent CT-based planning, and 75% had DVHs to evaluate normal tissue doses to the kidneys and liver. Seventy percent of patients completed RT within the prescribed time frame. IMRT and IGRT were used in 22% and 17% of patients, respectively. IGRT techniques included positron emission tomography (n=20), magnetic resonance imaging (n=1), respiratory gating and 4-dimensional CT (n=22), and on-board imaging (n=10). Nineteen percent of patients received preoperative RT. Conclusions: This analysis of radiation practice patterns for treating nonmetastatic GC indicates widespread adoption of CT-based
Painless causality in defect calculations
Cheung, C; Cheung, Charlotte; Magueijo, Joao
1997-01-01
Topological defects must respect causality, a statement leading to restrictive constraints on the power spectrum of the total cosmological perturbations they induce. Causality constraints have long been known to require the presence of an under-density in the surrounding matter compensating the defect network on large scales. This so-called compensation can never be neglected and significantly complicates calculations in defect scenarios, e.g. computing cosmic microwave background fluctuations. A quick and dirty way to implement the compensation is via the so-called compensation fudge factors. Here we derive the complete photon-baryon-CDM backreaction effects in defect scenarios. The fudge factor comes out as an algebraic identity and so we drop the negative qualifier ``fudge''. The compensation scale is computed and physically interpreted. Secondary backreaction effects exist, and neglecting them constitutes the well-defined approximation scheme within which one should consider compensation factor calculatio...
Dyscalculia and the Calculating Brain.
Rapin, Isabelle
2016-08-01
Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder such as dyslexia, attention-deficit disorder, anxiety disorder, visuospatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single-digit sets, as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers, and school-age children to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate-number-system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and the overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visuospatial and verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals.
Study of dose calculation on breast brachytherapy using prism TPS
Energy Technology Data Exchange (ETDEWEB)
Fendriani, Yoza; Haryanto, Freddy [Nuclear Physics and Biophysics Research Division, FMIPA Institut Teknologi Bandung, Physics Buildings, Jl. Ganesha 10, Bandung 40132 (Indonesia)
2015-09-30
PRISM is a non-commercial treatment planning system (TPS) developed at the University of Washington. In Indonesia, many cancer hospitals use expensive commercial TPSs. This study investigates Prism TPS as applied to the dose distribution of brachytherapy, taking into account the effects of source position and inhomogeneities; the results will be applicable to clinical treatment planning. Dose calculation was implemented for a water phantom and CT scan images of breast cancer using point and line sources, divided into two cases. In the first case, the Ir-192 seed source is located at the centre of the treatment volume; in the second, the source position is gradually changed. The dose calculation for every case was performed on homogeneous and inhomogeneous phantoms of dimension 20 × 20 × 20 cm³, the inhomogeneous phantom containing an inhomogeneity volume of 2 × 2 × 2 cm³. The dose calculations from PRISM TPS were compared with literature data: the dose rates show good agreement with Plato TPS and with another study published by Ramdhani, with no deviation greater than ±4% in any case. Dose calculations in the inhomogeneous and homogeneous cases show similar results, indicating that Prism TPS performs well for brachytherapy dose calculation but is not sensitive to inhomogeneities. The dose calculation parameters developed in this study were thus found to be applicable for clinical treatment planning of brachytherapy.
Energy Technology Data Exchange (ETDEWEB)
Mah, K.; Van Dyk, J.; Braban, L.E.; Hao, Y.; Keane, T.J. (Univ. of Toronto, Ontario (Canada)); Poon, P.Y. (Univ. of British Columbia (Canada))
1994-02-01
The objective of this work was to assess the incidence of radiological changes compatible with radiation-induced lung damage as determined by computed tomography (CT), and subsequently to calculate the dose effect factors (DEF) for specified chemotherapeutic regimens. Radiation treatments were administered once daily, 5 days per week. Six clinical protocols were evaluated: ABVD (adriamycin, bleomycin, vincristine, and DTIC) followed by 35 Gy in 20 fractions; MOPP (nitrogen mustard, vincristine, procarbazine, and prednisone) followed by 35 Gy in 20 fractions; MOPP/ABVD followed by 35 Gy in 20 fractions; CAV (cyclophosphamide, adriamycin, and vincristine) followed by 25 Gy in 10 fractions; and 5-FU (5-fluorouracil) concurrent with either 50-52 Gy in 20-21 fractions or 30-36 Gy in 10-15 fractions. CT examinations were taken before and at predetermined intervals following radiotherapy. CT evidence for the development of radiation-induced damage was defined as an increase in lung density within the irradiated volume. The radiation dose to lung was calculated using a CT-based algorithm to account for tissue inhomogeneities. Different fractionation schedules were converted using two isoeffect models, the estimated single dose (ED) and the normalized total dose (NTD). The actuarial incidence of radiological pneumonitis was 71% for the ABVD, 49% for the MOPP, 52% for the MOPP/ABVD, 67% for the CAV, 73% for the 5-FU radical, and 58% for the 5-FU palliative protocols. Depending on the isoeffect model selected and the method of analysis, the DEF was 1.11-1.14 for the ABVD, 0.96-0.97 for the MOPP, 0.96-1.02 for the MOPP/ABVD, 1.03-1.10 for the CAV, 0.74-0.79 for the 5-FU radical, and 0.94 for the 5-FU palliative protocols. DEF were measured by comparing the incidence of CT-observed lung damage in patients receiving chemotherapy and radiotherapy to that in patients receiving radiotherapy alone. The addition of ABVD or CAV appeared to reduce the tolerance of lung to radiation. 40 refs., 3 figs., 3 tabs.
Factors affecting calculation of L
Ciotola, Mark P.
2001-08-01
A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include the quantity of fossil fuel reserves, solar energy potential, the number of regimes over time, the lifecycle patterns of regimes, the proportion of a regime's lifecycle during which it is actually detectable, and the downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, using values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge of the terrestrial case is used as a baseline for other components, including fossil fuel reserves, the number of regimes over time, their lifecycle patterns and detectable proportions, and the gaps between regimes due to recovery from catastrophic war or resource exhaustion. A range of values for L is calculated by establishing parameters for each component so as to determine the lowest and highest values of L.
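The bookkeeping behind such an estimate can be sketched as a sum over successive regimes. All numbers below are illustrative placeholders, not values from the paper.

```python
def detectable_lifetime(regimes, downtime_between):
    """Toy version of the regime model: L is the total time a site hosts
    a detectable civilization, summed over successive regimes, each of
    which is detectable for only a fraction of its own lifecycle. Also
    returns the total span including the downtime between regimes."""
    L = sum(lifetime * detectable_fraction
            for lifetime, detectable_fraction in regimes)
    span = (sum(lifetime for lifetime, _ in regimes)
            + downtime_between * (len(regimes) - 1))
    return L, span

# Three hypothetical regimes: (lifetime in years, detectable fraction)
regimes = [(400, 0.25), (600, 0.5), (300, 0.2)]
L, span = detectable_lifetime(regimes, downtime_between=150)
```

Varying each component between plausible extremes, as the paper does, then yields a low and a high estimate for L.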
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has previously also funded development of an RTU comparison calculator (RTUCC), a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare the energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Selfconsistent calculations for hyperdeformed nuclei
Energy Technology Data Exchange (ETDEWEB)
Molique, H.; Dobaczewski, J.; Dudek, J.; Luo, W.D. [Universite Louis Pasteur, Strasbourg (France)
1996-12-31
Properties of the hyperdeformed nuclei in the A ~ 170 mass range are re-examined using the self-consistent Hartree-Fock method with the SkP parametrization. A comparison is made with previous predictions that were based on a non-self-consistent approach. The existence of "hyperdeformed shell closures" at the proton and neutron numbers Z=70 and N=100, and their very weak dependence on the rotational frequency, is suggested; the corresponding single-particle energy gaps are predicted to play a role similar to that of the Z=66 and N=86 gaps in the superdeformed nuclei of the A ~ 150 mass range. Self-consistent calculations also suggest that the A ~ 170 hyperdeformed structures have negligible mass asymmetry in their shapes. Very importantly for experimental studies, both the fission barriers and the "inner" barriers (those separating the hyperdeformed structures from those with smaller deformations) are predicted to be relatively high, up to a factor of ~2 higher than the corresponding barriers in the ^152Dy superdeformed nucleus used as a reference.
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D.; Wang, Weimin; Katipamula, Srinivas
2014-03-31
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has also previously funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so that building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Quantification of Proton Dose Calculation Accuracy in the Lung
Energy Technology Data Exchange (ETDEWEB)
Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Daartz, Juliane; Dowdell, Stephen; Ruggieri, Thomas; Sharp, Greg; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States)
2014-06-01
Purpose: To quantify the accuracy of a clinical proton treatment planning system (TPS) as well as Monte Carlo (MC)–based dose calculation through measurements and to assess the clinical impact in a cohort of patients with tumors located in the lung. Methods and Materials: A lung phantom and ion chamber array were used to measure the dose to a plane through a tumor embedded in the lung, and to determine the distal fall-off of the proton beam. Results were compared with TPS and MC calculations. Dose distributions in 19 patients (54 fields total) were simulated using MC and compared to the TPS algorithm. Results: MC increased dose calculation accuracy in lung tissue compared with the TPS and reproduced dose measurements in the target to within ±2%. The average difference between measured and predicted dose in a plane through the center of the target was 5.6% for the TPS and 1.6% for MC. MC recalculations in patients showed a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. For large tumors, MC also predicted consistently higher V5 and V10 to the normal lung, because of a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target could show large deviations, although this effect was highly patient specific. Range measurements showed that MC can reduce range uncertainty by a factor of ∼2: the average (maximum) difference to the measured range was 3.9 mm (7.5 mm) for MC and 7 mm (17 mm) for the TPS in lung tissue. Conclusion: Integration of Monte Carlo dose calculation techniques into the clinic would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. In addition, the ability to confidently reduce range margins would benefit all patients by potentially lowering toxicity.
Fast Electron Beam Simulation and Dose Calculation
Trindade, A; Peralta, L; Lopes, M C; Alves, C; Chaves, A
2003-01-01
A flexible multiple-source model capable of fast reconstruction of clinical electron beams is presented in this paper. The source model considers multiple virtual sources that emulate the effect of the accelerator head components. A reference configuration (10 MeV and 10x10 cm2 field size) for a Siemens KD2 linear accelerator was simulated in full detail using the GEANT3 Monte Carlo code. Our model allows the reconstruction of other beam energies and field sizes, as well as other beam configurations for similar accelerators, using only the reference beam data. Electron dose calculations were performed with the reconstructed beams in a water phantom and compared with experimental data. An agreement of 1-2% / 1-2 mm was obtained, equivalent to the accuracy of full Monte Carlo accelerator simulation. The source model reduces accelerator simulation CPU time by a factor of 7500 relative to full Monte Carlo approaches. The developed model was then interfaced with DPM, a fast radiation transport Monte Carlo code for dose calculati...
DEFF Research Database (Denmark)
Christensen, Irene
2016-01-01
This paper is about the logic of problem solving and the production of scientific knowledge through the utilisation of a clinical research perspective. Ramp-up effectiveness, productivity, efficiency and organizational excellence are topics that continue to engage research and will continue doing s...... for years to come. This paper seeks to provide insights into ramp-up management studies by providing an agenda for conducting collaborative clinical research, and extends this area by proposing how clinical research could be designed and executed in the ramp-up management setting....
CT-based manual segmentation and evaluation of paranasal sinuses.
Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G
2009-04-01
Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices, with 24 landmarks being set. Three different segmentation colors represent diverse risk areas. Extension and volumetric measurements were performed, and three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm(3), left side 17.9 cm(3), right frontal sinus 4.2 cm(3), left side 4.0 cm(3), total frontal sinuses 7.9 cm(3), sphenoid sinus right side 5.3 cm(3), left side 5.5 cm(3), total sphenoid sinus volume 11.2 cm(3). Our manually segmented 3D models present the patient's individual anatomy, with a special focus on structures in danger according to the diversely colored risk areas. For safe robot assistance, high-accuracy models are required; the models represent an average of the population regarding anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods described so far provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.
FDG-PET/CT based response-adapted treatment
DEFF Research Database (Denmark)
de Geus-Oei, Lioe-Fee; Vriens, Dennis; Arens, Anne I J
2012-01-01
and adenocarcinoma of the esophagogastric junction, in order to investigate whether the use of PET-guided treatment individualization results in a survival benefit. In Hodgkin lymphoma and aggressive non-Hodgkin lymphoma, several trials are ongoing. Some studies aim to investigate the use of PET in early...... identification of metabolic non-responders in order to intensify treatment to improve survival. Other studies aim at reducing toxicity without adversely affecting cure rates by safely de-escalating therapy in metabolic responders. In solid tumors the first PET response-adjusted treatment trials have been...... realized in adenocarcinoma of the esophagogastric junction. These trials showed that patients with an early metabolic response to neoadjuvant chemotherapy benefit from this treatment, whereas metabolic non-responders should switch early to surgery, thus reducing the risk of tumor progression during...
Vortical Structures in CT-based Breathing Lung Models
Choi, Jiwoong; Lee, Changhyun; Hoffman, Eric; Lin, Ching-Long
2016-11-01
The 1D-3D coupled computational fluid dynamics (CFD) lung model is applied to study vortical structures in the human airways during normal breathing cycles. During inhalation, small vortical structures form around the turbulent laryngeal jet, and Taylor-Görtler-like vortices form near the curved walls in the supraglottal region and at airway bifurcations. On exhalation, elongated vortical tubes are formed in the left main bronchus, whereas a relatively slower stream is observed in the right main bronchus. These structures result in helical motions in the trachea, producing long-lasting high shear stress on the wall. The current study elucidates that the correct employment of image-based airway deformation and lung deflation information is crucial for capturing physiologically consistent regional airflow structures. The pathophysiological implications of these structures in destruction of the tracheal wall will be discussed.
Jakowenko, Janelle
2009-01-01
Digital cameras, when used correctly, can provide the basis for telemedicine services. The increasing sophistication of digital cameras, combined with the improved speed and availability of the Internet, make them an instrument that every health-care professional should be familiar with. Taking satisfactory images of patients requires clinical photography skills. Photographing charts, monitors, X-ray films and specimens also requires expertise. Image capture using digital cameras is often done with insufficient attention, which can lead to inaccurate study results. The procedures in clinical photography should not vary from camera to camera, or from country to country. Taking a photograph should be a standardised process. There are seven main scenarios in clinical photography and health professionals who use cameras should be familiar with all of them. Obtaining informed consent prior to photography should be a normal part of the clinical photography routine.
1979-09-30
Hiller, D.A., Elliott, J.P.: Tubal Ligation Syndrome Myth or Reality. Presented: Armed Forces Division of ACOG, New Orleans, Louisiana, October 1977...Molecular Weight Immunoreactive Glucagon Levels in Patients with the Post Prandial Syndrome. (Abst.) Western Society for Clinical Research, 1979. (3...Glucagon Levels in Patients with the Post Prandial Syndrome. Presented: Western Society Meetings, Western Society for Clinical Research, February 1979. (3
Goorapah, D
1997-05-01
The introduction of clinical supervision to a wider sphere of nursing is being considered from a professional and organizational point of view. Positive views are being expressed about adopting this concept, although there are indications to suggest that there are also strong reservations. This paper examines the potential for its success amidst the scepticism that exists. One important question raised is whether clinical supervision will replace or run alongside other support systems.
Explosion Calculations of SN1987A
Wooden, Diane H.; Morrison, David (Technical Monitor)
1994-01-01
Explosion calculations of SN1987A generate pictures of Rayleigh-Taylor fingers of radioactive Ni-56 which are boosted to velocities of several thousand km/s. From the KAO observations of the mid-IR iron lines, a picture of the iron in the ejecta emerges which is consistent with the "frothy iron fingers" having expanded to fill about 50% of the metal-rich volume of the ejecta. The ratio of the nickel line intensities yields a high ionization fraction of greater than or equal to 0.9 in the volume associated with the iron-group elements at day 415, before dust condenses in the ejecta. From the KAO observations of the dust's thermal emission, it is deduced that when the grains condense their infrared radiation is trapped, their apparent opacity is gray, and they have a surface area filling factor of about 50%. The dust emission from SN1987A is featureless: no 9.7 micrometer silicate feature, nor PAH features, nor dust emission features of any kind are seen at any time. The total dust opacity increases with time even though the surface area filling factor and the dust/gas ratio remain constant. This suggests that the dust forms along coherent structures which can maintain their radial line-of-sight opacities, i.e., along fat fingers. The coincidence of the filling factor of the dust and the filling factor of the iron strongly suggests that the dust condenses within the iron, and therefore the dust is iron-rich. It only takes approximately 4 x 10^-4 solar mass of dust for the ejecta to be optically thick out to approximately 100 micrometers; a lower limit of 4 x 10^-4 solar mass of condensed grains exists in the metal-rich volume, but much more dust could be present. The episode of dust formation started at about 530 days and proceeded rapidly, so that by 600 days 45% of the bolometric luminosity was being emitted in the IR; by 775 days, 86% of the bolometric luminosity was being reradiated by the dust. Measurements of the bolometric luminosity of SN1987A from
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for this purpose. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations.
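A sketch of one such explicit formula, using the variance of the log rate ratio evaluated under the alternative hypothesis (this is an illustrative variant of the approach, not necessarily the paper's exact formulation):

```python
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(r0, r1, k, t=1.0, alpha=0.05, power=0.90):
    """Per-group n for a two-sided test of the rate ratio r1/r0 with
    negative binomial counts (dispersion parameter k, exposure time t).

    Uses Var[log rate-ratio] evaluated at the alternative; this is one of
    several possible variance choices, shown here for illustration.
    """
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    # per-subject variance of log(rate): 1/(mu*t) + 1/k under an NB2 model
    v = (1 / (r0 * t) + 1 / k) + (1 / (r1 * t) + 1 / k)
    return ceil((z_a + z_b) ** 2 * v / log(r1 / r0) ** 2)

print(nb_sample_size(r0=1.0, r1=0.7, k=2.0))  # subjects per group
```

Note how both the dispersion parameter `k` and the exposure time `t` enter the variance term explicitly, which is the characteristic the abstract highlights.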
A New Approach for Calculating Vacuum Susceptibility
Institute of Scientific and Technical Information of China (English)
宗红石; 平加伦; 顾建中
2004-01-01
Based on the Dyson-Schwinger approach, we propose a new method for calculating vacuum susceptibilities. As an example, the vector vacuum susceptibility is calculated. A comparison with the results of the previous approaches is presented.
Dynamics Calculation of Traveling Wave Tube
Institute of Scientific and Technical Information of China (English)
None
2011-01-01
During the dynamics calculation of the traveling wave tube, we must obtain the field map in the tube. The field map is affected not only by the beam loading, but also by the attenuation coefficient. The calculation of the attenuation coefficient
Pressure Vessel Calculations for VVER-440 Reactors
Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.
2003-06-01
Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.
A general formalism for phase space calculations
Norbury, John W.; Deutchman, Philip A.; Townsend, Lawrence W.; Cucinotta, Francis A.
1988-01-01
General formulas for calculating the interactions of galactic cosmic rays with target nuclei are presented. Methods for calculating the appropriate normalization volume elements and phase space factors are presented. Particular emphasis is placed on obtaining correct phase space factors for 2- and 3-body final states. Calculations for both Lorentz-invariant and noninvariant phase space are presented.
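For reference, the standard two-body final-state result that correctly normalized Lorentz-invariant phase-space factors reduce to (PDG-style conventions; textbook material, not a formula quoted from the paper) is:

```latex
\frac{d\Gamma}{d\Omega^{*}}
  = \frac{1}{32\pi^{2}}\,\frac{|\mathbf{p}^{*}|}{M^{2}}\,\overline{|\mathcal{M}|^{2}},
\qquad
|\mathbf{p}^{*}| = \frac{\lambda^{1/2}\!\left(M^{2}, m_{1}^{2}, m_{2}^{2}\right)}{2M},
```

where $M$ is the decaying (or compound) system's invariant mass, $\mathbf{p}^{*}$ the final-state momentum in the center-of-mass frame, and $\lambda(a,b,c) = a^{2} + b^{2} + c^{2} - 2ab - 2bc - 2ca$ the Källén triangle function.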
Status Report of NNLO QCD Calculations
Klasen, M
2005-01-01
We review recent progress in next-to-next-to-leading order (NNLO) perturbative QCD calculations with special emphasis on results ready for phenomenological applications. Important examples are new results on structure functions and jet or Higgs boson production. In addition, we describe new calculational techniques based on twistors and their potential for efficient calculations of multiparticle amplitudes.
Mathematical Creative Activity and the Graphic Calculator
Duda, Janina
2011-01-01
Teaching mathematics using graphic calculators has been a topic of didactic discussion for years. The focus of this article is finding ways in which graphic calculators can enrich the development of creative activity in mathematically gifted students between the ages of 16 and 17. Research was conducted using graphic calculators with…
Decimals, Denominators, Demons, Calculators, and Connections
Sparrow, Len; Swan, Paul
2005-01-01
The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…
Clinical Implementation of Intensity Modulated Proton Therapy for Thoracic Malignancies
Energy Technology Data Exchange (ETDEWEB)
Chang, Joe Y., E-mail: jychang@mdanderson.org [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, Heng; Zhu, X. Ronald [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao, Zhongxing; Zhao, Lina [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liu, Amy [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, Yupeng [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Applied Research, Varian Medical Systems, Palo Alto, California (United States); Sahoo, Narayan; Poenisch, Falk [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Gomez, Daniel R. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Wu, Richard; Gillin, Michael [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Xiaodong, E-mail: xizhang@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)
2014-11-15
Purpose: Intensity modulated proton therapy (IMPT) can improve dose conformality and better spare normal tissue over passive scattering techniques, but range uncertainties complicate its use, particularly for moving targets. We report our early experience with IMPT for thoracic malignancies in terms of motion analysis and management, plan optimization and robustness, and quality assurance. Methods and Materials: Thirty-four consecutive patients with lung/mediastinal cancers received IMPT to a median 66 Gy (relative biological effectiveness [RBE]). All patients were able to undergo definitive radiation therapy. IMPT was used when the treating physician judged that IMPT conferred a dosimetric advantage; all patients had minimal tumor motion (<5 mm) and underwent individualized tumor-motion dose-uncertainty analysis and 4-dimensional (4D) computed tomographic (CT)-based treatment simulation and motion analysis. Plan robustness was optimized by using a worst-case scenario method. All patients had 4D CT repeated simulation during treatment. Results: IMPT produced lower mean lung dose (MLD), lung V{sub 5} and V{sub 20}, heart V{sub 40}, and esophageal V{sub 60} than did IMRT (P<.05) and lower MLD, lung V{sub 20}, and esophageal V{sub 60} than did passive scattering proton therapy (PSPT) (P<.05). D{sub 5} to the gross tumor volume and clinical target volume was higher with IMPT than with IMRT or PSPT (P<.05). All cases were analyzed for beam-angle-specific motion, water-equivalent thickness, and robustness. Beam angles were chosen to minimize the effect of respiratory motion and avoid previously treated regions, and the maximum deviation from the nominal dose-volume histogram values was kept at <5% for the target dose and met the normal tissue constraints under a worst-case scenario. Patient-specific quality assurance measurements showed that a median 99% (range, 95% to 100%) of the pixels met the 3% dose/3 mm distance criteria for the
Energy Technology Data Exchange (ETDEWEB)
Hegazy, Neamat [Dept. of Radiotherapy, Comprehensive Cancer Centre Vienna, Medical Univ. of Vienna, Vienna (Austria); Dept. of Clinical Oncology, Medical Univ. of Alexandria, Alexandria (Egypt); Poetter, Richard; Kirisits, Christian [Dept. of Radiotherapy, Comprehensive Cancer Centre Vienna, Medical Univ. of Vienna, Vienna (Austria); Christian Doppler Lab. for Medical Radiation Research for Radiation Oncology, Medical Univ. Vienna (Austria); Berger, Daniel; Federico, Mario; Sturdza, Alina; Nesvacil, Nicole [Dept. of Radiotherapy, Comprehensive Cancer Centre Vienna, Medical Univ. of Vienna, Vienna (Austria)], e-mail: nicole.nesvacil@meduniwien.ac.at
2013-10-15
Purpose: The aim of the study was to improve computed tomography (CT)-based high-risk clinical target volume (HR CTV) delineation protocols for cervix cancer patients, in settings without any access to magnetic resonance imaging (MRI) at the time of brachytherapy. Therefore the value of a systematic integration of comprehensive three-dimensional (3D) documentation of repetitive gynecological examination for CT-based HR CTV delineation protocols, in addition to information from FIGO staging, was investigated. In addition to a comparison between reference MRI contours and two different CT-based contouring methods (using complementary information from FIGO staging with or without additional 3D clinical drawings), the use of standardized uterine heights was also investigated. Material and methods: Thirty-five cervix cancer patients with CT- and MR-images and 3D clinical drawings at time of diagnosis and brachytherapy were included. HR CTV{sub stage} was based on CT information and FIGO stage. HR CTV{sub stage} {sub +3Dclin} was contoured on CT using FIGO stage and 3D clinical drawing. Standardized HR CTV heights were: 1/1, 2/3 and 1/2 of uterine height. MRI-based HR CTV was delineated independently. Resulting widths, thicknesses, heights, and volumes of HR CTV{sub stage}, HR CTV{sub stage+3Dclin} and MRI-based HR CTV contours were compared. Results: The overall normalized volume ratios (mean{+-}SD of CT/MRI{sub ref} volume) of HR CTV{sub stage} and HR{sub stage+3Dclin} were 2.6 ({+-}0.6) and 2.1 ({+-}0.4) for 1/1 and 2.3 ({+-}0.5) and 1.8 ({+-}0.4), for 2/3, and 1.9 ({+-}0.5) and 1.5 ({+-}0.3), for 1/2 of uterine height. The mean normalized widths were 1.5{+-}0.2 and 1.2{+-}0.2 for HR CTV{sub stage} and HR CTV{sub stage+3Dclin}, respectively (p < 0.05). The mean normalized heights for HR CTV{sub stage} and HR CTV{sub stage+3Dclin} were both 1.7{+-}0.4 for 1/1 (p < 0.05.), 1.3{+-}0.3 for 2/3 (p < 0.05) and 1.1{+-}0.3 for 1/2 of uterine height. Conclusion: CT-based HR
Energy Technology Data Exchange (ETDEWEB)
Zucca Aparicio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrilla, J.; Pinto Monedero, M.; Marti Asensjo, J.; Alonso Iracheta, L.
2015-07-01
Treatment of lung lesions with SBRT requires great dosimetric accuracy; the presence of heterogeneities increases the clinical importance of dose calculation algorithms that adequately model the transport of narrow particle beams in low-density media, as Monte Carlo calculation does. (Author)
Microscopic Calculations of 240Pu Fission
Energy Technology Data Exchange (ETDEWEB)
Younes, W; Gogny, D
2007-09-11
Hartree-Fock-Bogoliubov calculations have been performed with the Gogny finite-range effective interaction for {sup 240}Pu out to scission, using a new code developed at LLNL. A first set of calculations was performed with constrained quadrupole moment along the path of most probable fission, assuming axial symmetry but allowing for the spontaneous breaking of reflection symmetry of the nucleus. At a quadrupole moment of 345 b, the nucleus was found to spontaneously scission into two fragments. A second set of calculations, with all nuclear moments up to hexadecapole constrained, was performed to approach the scission configuration in a controlled manner. Calculated energies, moments, and representative plots of the total nuclear density are shown. The present calculations serve as a proof-of-principle, a blueprint, and starting-point solutions for a planned series of more comprehensive calculations to map out a large set of scission configurations, and the associated fission-fragment properties.
Maths anxiety and medication dosage calculation errors: A scoping review.
Williams, Brett; Davis, Samantha
2016-09-01
A student's accuracy on drug calculation tests may be influenced by maths anxiety, which can impede one's ability to understand and complete mathematical problems. It is important for healthcare students to overcome this barrier when calculating drug dosages in order to avoid administering an incorrect dose to a patient in the clinical setting. The aim of this study was to examine the effects of maths anxiety on healthcare students' ability to accurately calculate drug dosages by performing a scoping review of the existing literature. This review utilised a six-stage methodology using the following databases: CINAHL, Embase, Medline, Scopus, PsycINFO, Google Scholar, Trip database (http://www.tripdatabase.com/) and Grey Literature report (http://www.greylit.org/). After an initial title/abstract review of relevant papers, and then a full-text review of the remaining papers, six articles were selected for inclusion in this study. Of the six articles included, there were three experimental studies, two quantitative studies and one mixed-method study. All studies addressed nursing students and the presence of maths anxiety. No relevant studies from other disciplines were identified in the existing literature. Three studies took place in the U.S., the remainder in Canada, Australia and the United Kingdom. Upon analysis of these studies, four factors, including maths anxiety, were identified as having an influence on a student's drug dosage calculation abilities. Ultimately, the results from this review suggest more research is required in nursing and other relevant healthcare disciplines regarding the effects of maths anxiety on drug dosage calculations. This additional knowledge will be important to further inform the development of strategies to decrease the potentially serious effects of drug dosage calculation errors on patient safety.
Calculation of the Moments of Polygons.
1987-06-01
[OCR-garbled Fortran listing; the recoverable comment lines indicate sections that calculate the polygon AREA, the centroid (ACENT), and the second moments (SECNON).]
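The quantities named in the listing's comments (area, centroid, second moments) can be computed for a simple polygon with the standard shoelace-type formulas; the sketch below is illustrative and not a reconstruction of the original Fortran:

```python
def polygon_moments(pts):
    """Area, centroid, and second moments (about the x and y axes) of a
    simple polygon via shoelace-type formulas; counter-clockwise vertex
    order gives positive area."""
    a = cx = cy = ixx = iyy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0          # twice the signed triangle area
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
        ixx += (y0 * y0 + y0 * y1 + y1 * y1) * cross   # about the x-axis
        iyy += (x0 * x0 + x0 * x1 + x1 * x1) * cross   # about the y-axis
    a *= 0.5
    return a, cx / (6 * a), cy / (6 * a), ixx / 12, iyy / 12

area, cx, cy, ixx, iyy = polygon_moments([(0, 0), (2, 0), (2, 2), (0, 2)])
print(area, cx, cy)  # 2x2 square: area 4.0, centroid (1.0, 1.0)
```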
Surface Tension Calculation of Undercooled Alloys
Institute of Scientific and Technical Information of China (English)
None
2001-01-01
Based on the Butler equation and thermodynamic data of undercooled alloys extrapolated from those of stable liquid alloys, a method for calculating the surface tension of undercooled alloys is proposed. The surface tensions of stable liquid and undercooled Ni-Cu (xNi=0.42) and Ni-Fe (xNi=0.3 and 0.7) alloys are calculated using the STCBE (Surface Tension Calculation based on the Butler Equation) program. The agreement between calculated values and experimental data is good, and the temperature dependence of the surface tension remains reasonable down to 150-200 K below the liquidus temperature of the alloys.
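The Butler equation underlying the method can be written in its standard form (notation assumed here, not quoted from the paper):

```latex
\sigma = \sigma_i + \frac{RT}{A_i}\,
         \ln\!\left(\frac{a_i^{\mathrm{S}}}{a_i^{\mathrm{B}}}\right),
\qquad i = 1, \dots, n,
```

where $\sigma_i$ is the surface tension of pure component $i$, $A_i$ its molar surface area, and $a_i^{\mathrm{S}}$, $a_i^{\mathrm{B}}$ its activities in the surface and bulk phases. The same $\sigma$ must result for every component, which fixes the surface composition at each temperature.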
The conundrum of calculating carbon footprints
DEFF Research Database (Denmark)
Strobel, Bjarne W.; Erichsen, Anders Christian; Gausset, Quentin
2016-01-01
A pre-condition for reducing global warming is to minimise the emission of greenhouse gases (GHGs). A common approach to informing people about the link between behaviour and climate change rests on developing GHG calculators that quantify the 'carbon footprint' of a product, a sector or an actor....... There is, however, an abundance of GHG calculators that rely on very different premises and give very different estimates of carbon footprints. In this chapter, we compare and analyse the main principles of calculating carbon footprints, and discuss how calculators can inform (or misinform) people who wish...
MATNORM: Calculating NORM using composition matrices
Pruseth, Kamal L.
2009-09-01
This paper discusses the implementation of an entirely new set of formulas to calculate the CIPW norm. MATNORM does not involve any sophisticated programming skill and has been developed using Microsoft Excel spreadsheet formulas. These formulas are easy to understand and a mere knowledge of the if-then-else construct in MS-Excel is sufficient to implement the whole calculation scheme outlined below. The sequence of calculation used here differs from that of the standard CIPW norm calculation, but the results are very similar. The use of MS-Excel macro programming and other high-level programming languages has been deliberately avoided for simplicity.
Pile Load Capacity – Calculation Methods
Directory of Open Access Journals (Sweden)
Wrana Bogumił
2015-12-01
The article is a review of the current problems of foundation pile capacity calculations. It considers the main principles of pile capacity calculations presented in Eurocode 7 and other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the pile capacity problem based on static tests.
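The two shaft-resistance methods can be sketched as follows (the coefficients and dimensions below are hypothetical illustrations, not Eurocode 7 design values, and end bearing is omitted):

```python
# Illustrative sketch of the alpha- and beta-methods for pile shaft capacity.
# Coefficients and dimensions are hypothetical, not design values.
def shaft_capacity_alpha(c_u, alpha, perimeter, length):
    """alpha-method (cohesive soil, short term):
    unit shaft resistance f_s = alpha * c_u (undrained shear strength)."""
    return alpha * c_u * perimeter * length

def shaft_capacity_beta(sigma_v_eff_avg, beta, perimeter, length):
    """beta-method (long term, cohesive or cohesionless soil):
    f_s = beta * sigma'_v (average effective vertical stress)."""
    return beta * sigma_v_eff_avg * perimeter * length

# Hypothetical 10 m pile with a 1.5 m perimeter (SI units: Pa -> N):
q_alpha = shaft_capacity_alpha(c_u=50e3, alpha=0.7, perimeter=1.5, length=10.0)
q_beta = shaft_capacity_beta(sigma_v_eff_avg=90e3, beta=0.3, perimeter=1.5, length=10.0)
print(q_alpha / 1e3, q_beta / 1e3)  # shaft capacities in kN
```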
Consultants' forum: should post hoc sample size calculations be done?
Walters, Stephen J
2009-01-01
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produces a "non-statistically significant result" then investigators are frequently tempted to ask the question "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed.
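The pre-study calculation the article refers to can be illustrated with the usual normal-approximation formula for comparing two means; the effect size, standard deviation and group size below are textbook placeholders, not values from the article.

```python
# A hedged sketch of a pre-study power calculation for a two-sample
# comparison of means, using the normal approximation.
from math import sqrt, erf

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(delta, sigma, n_per_group, z_crit=1.959964):
    """Approximate power for a two-sided test at the 5% level by default."""
    ncp = delta / (sigma * sqrt(2.0 / n_per_group))  # non-centrality
    return phi(ncp - z_crit)

# Textbook case: standardized effect size 0.5, 64 patients per group
# gives roughly the conventional 80% power.
print(round(power_two_sample(0.5, 1.0, 64), 3))
```

A "post hoc power" computed by plugging the observed effect into the same formula is exactly the practice the article debates.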
A simplified analytical random walk model for proton dose calculation
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
Calculation of rotational deformity in pediatric supracondylar humerus fractures
Energy Technology Data Exchange (ETDEWEB)
Henderson, Eric R.; Egol, Kenneth A.; Bosse, Harold J.P. van; Schweitzer, Mark E.; Pettrone, Sarah K. [NYU Hospital for Joint Diseases, New York, NY (United States); Feldman, David S. [NYU Hospital for Joint Diseases, New York, NY (United States); NYU Hospital for Joint Diseases, Pediatric Orthopaedic Surgery, Center for Children, New York, NY (United States)
2007-03-15
Supracondylar humerus fractures (SCHF) are common in the pediatric population. Cubitus varus deformity (CVD) is the most common long-term complication of SCHFs and may lead to elbow instability and deficits in throwing or extension. Distal fragment malrotation in the axial plane disposes to fragment tilt and CVD; however, no simple method of assessing fracture malrotation exists. This study tested a mathematical method of measuring axial plane malrotation in SCHFs based on plain radiographs. A pediatric SCHF model was made, and x-rays were taken at known intervals of rotation. Five independent, blinded observers measured these films. Calculated rotation for each data set was compared to the known rotation. The identical protocol was performed for an aluminum phantom. The reliability and agreement of the rotation values were good for both models. This method is a reliable, accurate, and cost-effective means of calculating SCHF distal fragment malrotation and warrants clinical application. (orig.)
Supporting the development of calculating skills in nurses.
Wright, Kerri
This article discusses a well-known model in mathematical problem solving developed by Polya (1957) and suggests that this could be a beneficial framework to support the development of medication calculation skills. The model outlines four stages to problem solving: understanding the problem, devising a plan, carrying out the plan and examining the solution. These four stages are discussed in relation to the teaching and assessing of medication skills, drawing on literature from nursing, mathematics education and cognitive psychology. The article emphasizes the importance of clinical experience and knowledge and the cognitive structures that support the development of medication skills. This is the first part of a three-part series. Part two will examine the different methods that can be used to solve medication calculations and part three the resources that are required to support use of these methods.
Alexander, W. C.; Leach, C. S.; Fischer, C. L.
1975-01-01
The objectives of the biochemical studies conducted for the Apollo program were (1) to provide routine laboratory data for assessment of preflight crew physical status and for postflight comparisons; (2) to detect clinical or pathological abnormalities which might have required remedial action preflight; (3) to discover as early as possible any infectious disease process during the postflight quarantine periods following certain missions; and (4) to obtain fundamental medical knowledge relative to man's adjustment to and return from the space flight environment. The accumulated data presented suggest that these requirements were met by the program described. All changes ascribed to the space flight environment were subtle, whereas clinically significant changes were consistent with infrequent illnesses unrelated to the space flight exposure.
DEFF Research Database (Denmark)
Pallesen, Ulla
Within the last 25 years composite resin materials have in many countries successively replaced amalgam as a restorative for posterior teeth. Resin materials and bonding systems are continuously being improved by the manufacturers, adhesive procedures are now included in the curriculum of most universities, and practicing dentists restore millions of teeth throughout the world with composite resin materials. Do we know enough about the clinical performance of these restorations over time? Numerous in vitro studies are being published on resin materials and adhesion, some of them attempting to imitate ... and repair? Have new materials improved longevity? Are there still clinical and material problems to be solved? And what has the highest impact on the longevity of posterior resin restorations: the material, the dentist, the patient or the tooth? These matters will be discussed on the basis of the literature.
Jolley, D; Benbow, S M; Grizzell, M
2006-01-01
Memory clinics were first described in the 1980s. They have become accepted worldwide as useful vehicles for improving practice in the identification, investigation, and treatment of memory disorders, including dementia. They are provided in various settings, the setting determining clientele and practice. All aim to facilitate referral from GPs, other specialists, or by self referral, in the early stages of impairment, and to avoid the stigma associated with psychiatric services. They bring ...
2011-01-01
Cerebral palsy (CP) is the most common physical disability in early childhood. The worldwide prevalence of CP is approximately 2–2.5 per 1,000 live births. It has been clinically defined as a group of motor, cognitive, and perceptive impairments secondary to a non-progressive defect or lesion of the developing brain. Children with CP can have swallowing problems with severe drooling as one of the consequences. Malnutrition and recurrent aspiration pneumonia can increase the risk of morbidity ...
Atomic Structure Calculations for Neutral Oxygen
Norah Alonizan; Rabia Qindeel; Nabil Ben Nessib
2016-01-01
Energy levels and oscillator strengths for neutral oxygen have been calculated using the Cowan (CW), SUPERSTRUCTURE (SS), and AUTOSTRUCTURE (AS) atomic structure codes. The results obtained with these atomic codes have been compared with MCHF calculations and experimental values from the National Institute of Standards and Technology (NIST) database.
10 CFR 766.102 - Calculation methodology.
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology....
Calculation of cohesive energy of actinide metals
Institute of Scientific and Technical Information of China (English)
钱存富; 陈秀芳; 余瑞璜; 耿平; 段占强
1997-01-01
According to the empirical electron theory of solids and molecules (EET), an equation for calculating the cohesive energy of actinide metals is given. The cohesive energy of 9 actinide metals with known crystal structure is calculated and agrees on the whole with experimental values, and the cohesive energy of 6 actinide metals with unknown crystal structure is forecast.
Calculation reliability in vehicle accident reconstruction.
Wach, Wojciech
2016-06-01
The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26] a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article is focused on the calculation reliability since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculations generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty.
Calculating "g" from Acoustic Doppler Data
Torres, Sebastian; Gonzalez-Espada, Wilson J.
2006-01-01
Traditionally, the Doppler effect for sound is introduced in high school and college physics courses. Students calculate the perceived frequency for several scenarios relating a stationary or moving observer and a stationary or moving sound source. These calculations assume a constant velocity of the observer and/or source. Although seldom…
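One such classroom calculation can be pushed a little further to extract g itself: for a source dropped from rest and receding from a stationary observer, f_obs = f0·v/(v + g·t), which inverts to g = v·(f0/f_obs − 1)/t. The sketch below (not from the article) round-trips a synthetic reading.

```python
# Recovering g from acoustic Doppler data: a source falling away from the
# observer at v_s = g*t shifts a tone of frequency f0 down to
# f_obs = f0 * v / (v + g*t). Inverting gives g. Values are synthetic.
V_SOUND = 343.0  # m/s, speed of sound in dry air near 20 °C

def g_from_doppler(f0, f_obs, t):
    """Infer g from one Doppler reading taken t seconds after release."""
    return V_SOUND * (f0 / f_obs - 1.0) / t

# Synthetic reading 0.5 s after releasing a 1000 Hz source, with g = 9.81:
f_obs = 1000.0 * V_SOUND / (V_SOUND + 9.81 * 0.5)
print(round(g_from_doppler(1000.0, f_obs, 0.5), 2))  # → 9.81
```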
Efficient Calculation of Earth Penetrating Projectile Trajectories
2006-09-01
Efficient Calculation of Earth Penetrating Projectile Trajectories, by Daniel F. Youch, Lieutenant Commander, United States Navy (B.S., Temple). Thesis, Naval Postgraduate School, Monterey, CA 93943-5000, September 2006. Thesis advisor: Joshua Gordis.
Direct calculation of wind turbine tip loss
DEFF Research Database (Denmark)
Wood, D.H.; Okulov, Valery; Bhattacharjee, D.
2016-01-01
We develop three methods for the direct calculation of the tip loss. The first is the computationally expensive calculation of the velocities induced by the helicoidal wake, which requires the evaluation of infinite sums of products of Bessel functions. The second uses the asymptotic evaluation ...
Calculating Electromagnetic Fields Of A Loop Antenna
Schieffer, Mitchell B.
1987-01-01
Approximate field values computed rapidly. MODEL computer program developed to calculate electromagnetic field values of large loop antenna at all distances to observation point. Antenna assumed to be in x-y plane with center at origin of coordinate system. Calculates field values in both rectangular and spherical components. Also solves for wave impedance. Written in Microsoft FORTRAN 77.
New tool for standardized collector performance calculations
DEFF Research Database (Denmark)
Perers, Bengt; Kovacs, Peter; Olsson, Marcus
2011-01-01
A new tool for standardized calculation of solar collector performance has been developed in cooperation between SP Technical Research Institute Sweden, DTU Denmark and SERC Dalarna University. The tool is designed to calculate the annual performance for a number of representative cities in Europe...
Calculation of Temperature Rise in Calorimetry.
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
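The extrapolation-to-zero-time method mentioned above can be sketched numerically: fit the pre- and post-reaction drift lines by least squares and evaluate both at the instant of mixing. The temperature readings below are invented.

```python
# Sketch of the "extrapolation to zero time" calorimetry correction:
# fit linear drift before and after the reaction, then take the jump
# between the two fitted lines at the moment of mixing. Data invented.
def fit_line(ts, Ts):
    """Ordinary least-squares slope and intercept."""
    n = len(ts)
    tm, Tm = sum(ts) / n, sum(Ts) / n
    slope = (sum((t - tm) * (T - Tm) for t, T in zip(ts, Ts))
             / sum((t - tm) ** 2 for t in ts))
    return slope, Tm - slope * tm

t_mix = 120.0  # s, instant of mixing
pre = fit_line([0, 30, 60, 90], [25.00, 25.01, 25.02, 25.03])
post = fit_line([180, 210, 240, 270], [27.50, 27.48, 27.46, 27.44])
# corrected temperature rise = gap between the extrapolated lines at t_mix
dT = (post[0] * t_mix + post[1]) - (pre[0] * t_mix + pre[1])
print(round(dT, 3))  # → 2.5
```

The equal-area method discussed in the article chooses the evaluation time differently but rests on the same two fitted drift lines.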
Investment Return Calculations and Senior School Mathematics
Fitzherbert, Richard M.; Pitt, David G. W.
2010-01-01
The methods for calculating returns on investments are taught to undergraduate level business students. In this paper, the authors demonstrate how such calculations are within the scope of senior school students of mathematics. In providing this demonstration the authors hope to give teachers and students alike an illustration of the power and the…
40 CFR 1065.850 - Calculations.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the...
Teaching Discrete Mathematics with Graphing Calculators.
Masat, Francis E.
Graphing calculator use is often thought of in terms of pre-calculus or continuous topics in mathematics. This paper contains examples and activities that demonstrate useful, interesting, and easy ways to use a graphing calculator with discrete topics. Examples are given for each of the following topics: functions, mathematical induction and…
Using Calculators in Mathematics 12. Student Text.
Rising, Gerald R.; And Others
This student textbook is designed to incorporate programable calculators in grade 12 mathematics. The seven chapters contained in this document are: (1) Using Calculators in Mathematics; (2) Sequences, Series, and Limits; (3) Iteration, Mathematical Induction, and the Binomial Theorem; (4) Applications of the Fundamental Counting Principle; (5)…
46 CFR 154.520 - Piping calculations.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Piping calculations. 154.520 Section 154.520 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Process Piping Systems § 154.520 Piping calculations. A piping system must be designed to meet...
Data base to compare calculations and observations
Energy Technology Data Exchange (ETDEWEB)
Tichler, J.L.
1985-01-01
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
76 FR 71431 - Civil Penalty Calculation Methodology
2011-11-17
... Uniform Fine Assessment (UFA) algorithm, which FMCSA currently uses for calculation of civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The evaluation will... will impose a minimum civil penalty that is calculated by UFA. In many cases involving small...
TH-A-19A-09: Towards Sub-Second Proton Dose Calculation On GPU
Energy Technology Data Exchange (ETDEWEB)
Silva, J da [University of Cambridge, Cambridge, Cambridgeshire (United Kingdom)
2014-06-15
Purpose: To achieve sub-second dose calculation for clinically relevant proton therapy treatment plans. Rapid dose calculation is a key component of adaptive radiotherapy, necessary to take advantage of the better dose conformity offered by hadron therapy. Methods: To speed up proton dose calculation, the pencil beam algorithm (PBA; the clinical standard) was parallelised and implemented to run on a graphics processing unit (GPU). The implementation constitutes the first PBA to run all steps on the GPU, and each part of the algorithm was carefully adapted for efficiency. Monte Carlo (MC) simulations of individual beams, with energies representative of the clinical range, impinging on simple geometries were obtained using FLUKA and used to tune the PBA. For benchmarking, a typical skull base case with a spot scanning plan consisting of a total of 8872 spots divided between two beam directions of 49 energy layers each was provided by CNAO (Pavia, Italy). The calculations were carried out on an Nvidia GeForce GTX 680 desktop GPU with 1536 cores running at 1006 MHz. Results: The PBA reproduced results obtained from MC simulations within ±3% of maximum dose for a range of pencil beams impinging on a water tank. Additional analysis of more complex slab geometries is currently under way to fine-tune the algorithm. Full calculation of the clinical test case took 0.9 seconds in total, with the majority of the time spent in the kernel superposition step. Conclusion: The PBA lends itself well to implementation on many-core systems such as GPUs. Using the presented implementation and current hardware, sub-second dose calculation for a clinical proton therapy plan was achieved, opening the door for adaptive treatment. The successful parallelisation of all steps of the calculation indicates that further speedups can be expected with new hardware, brightening the prospects for real-time dose calculation. This work was funded by ENTERVISION, European Commission FP7 grant 264552.
Heat Calculation of Borehole Heat Exchangers
Directory of Open Access Journals (Sweden)
S. Filatov
2013-01-01
Full Text Available The paper considers a heat calculation method for borehole heat exchangers (BHE) which can be used for designing and optimizing their design values and can be included in a comprehensive mathematical model of a heat supply system with a heat pump based on utilization of low-grade heat from the ground. The developed calculation method is based on reducing the general solution of the heat transfer problem in a BHE, with due account of heat transfer between the top-down and bottom-up flows of the heat carrier, to the solution for a boundary condition of one kind on the borehole wall. The electrothermal analogy method has been used to calculate the thermal resistance, and the shape factors required for calculating the thermal resistance of the borehole filler have been obtained numerically. The paper presents results of heat calculations for various BHE designs in accordance with the proposed method.
Classification of non-aneurysmal subarachnoid haemorrhage: CT correlation to the clinical outcome
Energy Technology Data Exchange (ETDEWEB)
Nayak, S., E-mail: sanjeevnayak@hotmail.co [Department of Neuroradiology, University Hospital of North Staffordshire, North Staffordshire Royal Infirmary, Princes Road, Stoke-on-Trent, Staffordshire, ST4 7LN (United Kingdom); Kunz, A.B.; Kieslinger, K. [University Clinic of Neurology, Paracelsus Medical University Salzburg (Austria); Ladurner, G.; Killer, M. [University Clinic of Neurology, Paracelsus Medical University Salzburg (Austria); Neuroscience Institute, Christian Doppler Clinic, Paracelsus Medical University Salzburg (Austria)
2010-08-15
Aim: To propose a new computed tomography (CT)-based classification system for non-aneurysmal subarachnoid haemorrhage (SAH), which predicts patients' discharge clinical outcome and helps to prioritize appropriate patient management. Methods and materials: A 5-year, retrospective, two-centre study was carried out involving 1486 patients presenting with SAH. One hundred and ninety patients with non-aneurysmal SAH were included in the study. Initial cranial CT findings at admission were correlated with the patients' discharge outcomes measured using the Modified Rankin Scale (MRS). A CT-based classification system (types 1-4) was devised based on the topography of the initial haemorrhage pattern. Results: Seventy-five percent of the patients had type 1 haemorrhage, and all these patients had a good clinical outcome with a discharge MRS of ≤1. Eight percent of the patients presented with type 2 haemorrhage, of whom 62% were discharged with MRS of ≤1 and 12% had MRS 3 or 4. Type 3 haemorrhage was found in 10%, of whom 16% had a good clinical outcome, but 53% had moderate to severe disability (MRS 3 and 4) and 5% were discharged with severe disability (MRS 5). Six percent of patients presented with type 4 haemorrhage, of whom 42% had moderate to severe disability (MRS 3 and 4), 42% had severe disability and one-sixth died. Highly significant differences were found between type 1 (1a and 1b) and type 2 (p = 0.003); type 2 and type 3 (p = 0.002); and type 3 and type 4 (p = 0.001). Conclusion: Haemorrhages of the type 1 category are usually benign and do not warrant an extensive battery of clinical and radiological investigations. Type 2 haemorrhages have a varying prognosis and need to be investigated and managed along similar lines to an aneurysmal haemorrhage, with emphasis on radiological investigation. Type 3 and type 4 haemorrhages need to be extensively investigated to find an underlying cause.
Institute of Scientific and Technical Information of China (English)
LIU Ya-jun; TIAN Wei; LIU Bo; LI Qin; HU Lin; LI Zhi-yu; YUAN Qiang; L(U) Yan-wei; SUN Yu-zhen
2010-01-01
...-fluoroscopy and CT-based navigation systems in future clinical applications.
[Reading a clinical trial report].
Bergmann, J F; Chassany, O
2000-04-15
To improve medical knowledge by reading clinical trial reports, it is necessary to check that the methodological rules have been respected, and to analyze and criticize the results. A control group and randomisation are always necessary. Double-blind assessment, sample size calculation, intention-to-treat analysis and a unique primary end point are also important. The conclusions of the trial are valid only for the population included, and the clinical significance of the results, which depends on the control treatment, has to be evaluated. Respect of the reading rules is necessary to assess the reliability of the conclusions, in order to promote evidence-based practice.
Spreadsheet Based Scaling Calculations and Membrane Performance
Energy Technology Data Exchange (ETDEWEB)
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and follow the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes, which is used to calculate an effective ion product "Q." The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI).
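The final step described above reduces to SI = log10(Q/Ksp). The sketch below shows that comparison for gypsum with assumed free-ion activities and an order-of-magnitude Ksp; the real TFSP computes Q iteratively from the full ion-complex distribution.

```python
# A hedged sketch of a saturation index check, the last step the abstract
# describes. Activities and the Ksp value are illustrative placeholders,
# not taken from the PHREEQE/WATEQ4F databases.
from math import log10

def saturation_index(q, ksp):
    """SI = log10(Q / Ksp); SI > 0 indicates supersaturation (scaling risk)."""
    return log10(q / ksp)

a_ca, a_so4 = 1.0e-2, 1.2e-2     # assumed free-ion activities
ksp_gypsum = 10 ** -4.58         # order-of-magnitude literature value
si = saturation_index(a_ca * a_so4, ksp_gypsum)
print(round(si, 2))  # → 0.66, i.e. supersaturated in this made-up water
```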
Ti-84 Plus graphing calculator for dummies
McCalla
2013-01-01
Get up to speed on the functionality of your TI-84 Plus calculator Completely revised to cover the latest updates to the TI-84 Plus calculators, this bestselling guide will help you become the most savvy TI-84 Plus user in the classroom! Exploring the standard device, the updated device with USB plug and upgraded memory (the TI-84 Plus Silver Edition), and the upcoming color screen device, this book provides you with clear, understandable coverage of the TI-84's updated operating system. Details the new apps that are available for download to the calculator via the USB cable.
Energy of plate tectonics calculation and projection
Directory of Open Access Journals (Sweden)
N. H. Swedan
2013-02-01
Full Text Available Mathematics and observations suggest that the energy of the geological activities resulting from plate tectonics is equal to the latent heat of melting, calculated at mantle pressure, of the new ocean crust created at mid-ocean ridges following seafloor spreading. This energy varies with the temperature of the ocean floor, which is correlated with surface temperature. The objective of this manuscript is to calculate the force that drives plate tectonics, estimate the energy released, verify the calculations against experiments and observations, and project the increase in geological activities with the surface temperature rise caused by climate change.
Assessment of seismic margin calculation methods
Energy Technology Data Exchange (ETDEWEB)
Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.
1989-03-01
Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs.
Program Calculates Current Densities Of Electronic Designs
Cox, Brian
1996-01-01
PDENSITY computer program calculates current densities for use in calculating power densities of electronic designs. Reads parts-list file for given design, file containing current required for each part, and file containing size of each part. For each part in design, program calculates current density in units of milliamperes per square inch. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19588). PC version of program (NPO-19171).
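The core of the calculation described above is a per-part division of current by footprint area; the sketch below re-expresses it in Python (the original is an AWK utility, and the part names and values here are invented).

```python
# A re-sketch of the per-part current density calculation: current drawn
# by each part divided by its footprint, in milliamperes per square inch
# as stated in the abstract. Parts list is invented.
parts = {            # part -> (current_mA, footprint_in2)
    "U1": (250.0, 0.36),
    "R5": (12.0, 0.01),
}
for name, (current_ma, area_in2) in sorted(parts.items()):
    density = current_ma / area_in2  # mA per square inch
    print(f"{name}: {density:.1f} mA/in^2")
```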
Hamming generalized corrector for reactivity calculation
Energy Technology Data Exchange (ETDEWEB)
Suescun-Diaz, Daniel; Ibarguen-Gonzalez, Maria C.; Figueroa-Jimenez, Jorge H. [Pontificia Universidad Javeriana Cali, Cali (Colombia). Dept. de Ciencias Naturales y Matematicas
2014-06-15
This work presents the Hamming method generalized corrector for numerically solving the differential equation of delayed neutron precursor concentration from the point kinetics equations for reactivity calculation, without using the nuclear power history or the Laplace transform. A study was carried out of several correctors with their respective modifiers at different calculation time steps, to offer stability and greater precision. Better results are obtained with some correctors than with other existing methods. Reactivity can be calculated with precision of the order h^5, where h is the time step. (orig.)
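The authors' scheme is specific to the precursor-concentration equations, but the underlying Hamming predictor-corrector can be illustrated on a plain test equation y′ = −y. This is a generic sketch (Milne predictor plus Hamming corrector, no modifier), not the generalized corrector of the paper.

```python
# Classic Hamming predictor-corrector on y' = -y, started from exact
# values. A generic illustration only; the paper's scheme targets the
# delayed neutron precursor equations and adds modifiers.
import math

def f(t, y):
    return -y

h = 0.05
ts = [i * h for i in range(4)]
ys = [math.exp(-t) for t in ts]          # exact starting values
fs = [f(t, y) for t, y in zip(ts, ys)]

for n in range(3, 200):
    t_next = ts[n] + h
    # Milne predictor: y_{n+1} = y_{n-3} + (4h/3)(2f_n - f_{n-1} + 2f_{n-2})
    yp = ys[n-3] + (4 * h / 3) * (2 * fs[n] - fs[n-1] + 2 * fs[n-2])
    # Hamming corrector: y_{n+1} = (9y_n - y_{n-2})/8
    #                              + (3h/8)(f_{n+1} + 2f_n - f_{n-1})
    yc = ((9 * ys[n] - ys[n-2]) / 8
          + (3 * h / 8) * (f(t_next, yp) + 2 * fs[n] - fs[n-1]))
    ts.append(t_next); ys.append(yc); fs.append(f(t_next, yc))

print(abs(ys[-1] - math.exp(-ts[-1])))  # error at t = 10; small, as the
                                        # local truncation error is O(h^5)
```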
Pressure vessel calculations for VVER-440 reactors.
Hordósy, G; Hegyi, Gy; Keresztúri, A; Maráczy, Cs; Temesvári, E; Vértes, P; Zsolnay, E
2005-01-01
For the determination of the fast neutron load of the reactor pressure vessel a mixed calculational procedure was developed. The procedure was applied to the Unit II of Paks NPP, Hungary. The neutron source on the outer surfaces of the reactor was determined by a core design code, and the neutron transport calculations outside the core were performed by the Monte Carlo code MCNP. The reaction rate in the activation detectors at surveillance positions and at the cavity were calculated and compared with measurements. In most cases, fairly good agreement was found.
The WFIRST Galaxy Survey Exposure Time Calculator
Hirata, Christopher M.; Gehrels, Neil; Kneib, Jean-Paul; Kruk, Jeffrey; Rhodes, Jason; Wang, Yun; Zoubian, Julien
2013-01-01
This document describes the exposure time calculator for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey. The calculator works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and SN determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The source code is made available for public use.
Radiation therapy calculations using an on-demand virtual cluster via cloud computing
Keyes, Roy W; Arnold, Dorian; Luan, Shuang
2010-01-01
Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...
Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.
Yeh, Bih-Chyun; Lin, Wen-Piao
2016-03-01
Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in clinical practice. We propose an artificial neural network that uses the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the system produces a danger warning, which is helpful for clinicians. If a non-irregular interbeat is detected, the system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measured the calculated pulse rates and evaluated their error percentage; using the calculated pulse rates to detect irregular interbeats, we found such irregular interbeats in eight participants.
Directory of Open Access Journals (Sweden)
A Gulia
2016-01-01
Background: Although conventional four-field radiotherapy based on bony landmarks has traditionally been used, areas of geographical miss due to individual variation in pelvic anatomy have been identified with advanced imaging techniques. Aims: The primary aim of this study was to evaluate the geographical miss in patients when using conventional four-field planning and to determine the impact of 3-D conformal CT-based planning in patients with locally advanced carcinoma of the cervix. Materials and Methods: In 50 patients, target volume delineation was done on planning computed tomography (CT) scans, according to the guidelines of Taylor et al. Patients were treated with a modified four-field plan, except that the superior field border was kept at the L4-L5 interspace. A dosimetric comparison was made between the conventional four fields based on bony landmarks and the target volume delineated on CT. Disease-free survival, pelvic and para-aortic nodal failure-free survival, and distant failure-free survival were calculated using the Kaplan-Meier product-limit method. Results: Patients were followed up for a median period of 11 months. The median V95 for the conventional and modified extended four-field plans was 89.4% and 91.3%, respectively. Patients with a V95 for modified extended pelvic fields of less than 91.3% had a trend toward inferior disease-free survival (mean DFS 9.8 vs. 13.9 months), though the difference was not statistically significant (log-rank test). Conclusions: Our preliminary data show a trend toward lower DFS in patients with inadequate target volume coverage. We recommend routine use of CT-based planning for the four-field technique.
Temperature calculation in fire safety engineering
Wickström, Ulf
2016-01-01
This book provides a consistent scientific background to engineering calculation methods applicable to analyses of materials reaction-to-fire, as well as fire resistance of structures. Several new and unique formulas and diagrams which facilitate calculations are presented. It focuses on problems involving high temperature conditions and, in particular, defines boundary conditions in a suitable way for calculations. A large portion of the book is devoted to boundary conditions and measurements of thermal exposure by radiation and convection. The concepts and theories of adiabatic surface temperature and measurements of temperature with plate thermometers are thoroughly explained. Also presented is a renewed method for modeling compartment fires, with the resulting simple and accurate prediction tools for both pre- and post-flashover fires. The final chapters deal with temperature calculations in steel, concrete and timber structures exposed to standard time-temperature fire curves. Useful temperature calculat...
Measured and Calculated Volumes of Wetland Depressions
U.S. Environmental Protection Agency — Measured and calculated volumes of wetland depressions This dataset is associated with the following publication: Wu, Q., and C. Lane. Delineation and quantification...
Spectra: Time series power spectrum calculator
Gallardo, Tabaré
2017-01-01
Spectra calculates the power spectrum of a time series, equally spaced or not, based on the Spectral Correlation Coefficient (Ferraz-Mello 1981, Astron. Journal 86 (4), 619). It is very efficient for the detection of low frequencies.
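The Spectral Correlation Coefficient method itself is not reproduced here; as an illustration of power-spectrum estimation for unevenly sampled series, a minimal sketch of the closely related classical Lomb-Scargle periodogram (function names are my own, not from the Spectra code):

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for an unevenly sampled series.

    t, y  : sample times and values (t need not be equally spaced)
    freqs : trial frequencies in cycles per unit time
    """
    y = y - y.mean()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # phase offset tau makes the estimate invariant to time shifts
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[k] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power
```

A peak in `power` at a trial frequency indicates a periodic component at that frequency even when the sampling is irregular.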
Large Numbers and Calculators: A Classroom Activity.
Arcavi, Abraham; Hadas, Nurit
1989-01-01
Described is an activity demonstrating how a scientific calculator can be used in a mathematics classroom to introduce new content while studying a conventional topic. Examples of reading and writing large numbers, and reading hidden results are provided. (YP)
Fair and Reasonable Rate Calculation Data -
Department of Transportation — This dataset provides guidelines for calculating the fair and reasonable rates for U.S. flag vessels carrying preference cargoes subject to regulations contained at...
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Multigrid Methods in Electronic Structure Calculations
Briggs, E L; Bernholc, J
1996-01-01
We describe a set of techniques for performing large scale ab initio calculations using multigrid accelerations and a real-space grid as a basis. The multigrid methods provide effective convergence acceleration and preconditioning on all length scales, thereby permitting efficient calculations for ill-conditioned systems with long length scales or high energy cut-offs. We discuss specific implementations of multigrid and real-space algorithms for electronic structure calculations, including an efficient multigrid-accelerated solver for Kohn-Sham equations, compact yet accurate discretization schemes for the Kohn-Sham and Poisson equations, optimized pseudopotentials for real-space calculations, efficacious computation of ionic forces, and a complex-wavefunction implementation for arbitrary sampling of the Brillouin zone. A particular strength of a real-space multigrid approach is its ready adaptability to massively parallel computer architectures, and we present an implementation for the Cray-T3D with essen...
46 CFR 170.090 - Calculations.
2010-10-01
... necessary to compute and plot any of the following curves as part of the calculations required in this subchapter, these plots must also be submitted: (1) Righting arm or moment curves. (2) Heeling arm or...
Representation and calculation of economic uncertainties
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2002-01-01
Management and decision making when certain information is available may be a matter of rationally choosing the optimal alternative by calculation of the utility function. When only uncertain information is available (which is most often the case) decision-making calls for more complex methods...... of representation and calculation and the basis for choosing the optimal alternative may become obscured by uncertainties of the utility function. In practice, several sources of uncertainties of the required information impede optimal decision making in the classical sense. In order to be able to better handle...... to uncertain economic numbers are discussed. When solving economic models for decision-making purposes calculation of uncertain functions will have to be carried out in addition to the basic arithmetical operations. This is a challenging numerical problem since improper methods of calculation may introduce...
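One common representation of uncertain economic numbers is the closed interval, whose arithmetic propagates uncertainty through a calculation; a minimal sketch (illustrative only; the note considers several representations, and the class and method names here are my own):

```python
class Interval:
    """Closed interval [lo, hi] representing an uncertain economic number,
    with the basic arithmetic needed to propagate the uncertainty."""

    def __init__(self, lo, hi):
        if lo > hi:
            raise ValueError("lo must not exceed hi")
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # subtraction pairs opposite endpoints to bound the result
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # all endpoint products are candidates when signs can differ
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

Note that naive repeated use of these rules can overstate uncertainty (the dependency problem), which is one reason the choice of calculation method matters.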
Note about socio-economic calculations
DEFF Research Database (Denmark)
Landex, Alex; Andersen, Jonas Lohmann Elkjær; Salling, Kim Bang
2006-01-01
these effects must be described qualitatively. This note describes the socio-economic evaluation based on market prices and not factor prices which has been the tradition in Denmark till now. This is due to the recommendation from the Ministry of Transport to start using calculations based on market prices......This note gives a short introduction of how to make socio-economic evaluations in connection with the teaching at the Centre for Traffic and Transport (CTT). It is not a manual for making socio-economic calculations in transport infrastructure projects – in this context we refer to the guidelines...... for socio-economic calculations within the transportation area (Ministry of Traffic, 2003). The note also explains the theory of socio-economic calculations – reference is here made to ”Road Infrastructure Planning – a Decision-oriented approach” (Leleur, 2000). Socio-economic evaluations of infrastructure...
Obliged to Calculate: "My School", Markets, and Equipping Parents for Calculativeness
Gobby, Brad
2016-01-01
This paper argues neoliberal programs of government in education are equipping parents for calculativeness. Regimes of testing and the publication of these results and other organizational data are contributing to a public economy of numbers that increasingly oblige citizens to calculate. Using the notions of calculative and market devices, this…
Benchmarking analytical calculations of proton doses in heterogeneous matter.
Ciangaru, George; Polf, Jerimy C; Bues, Martin; Smith, Alfred R
2005-12-01
A proton dose computational algorithm, performing an analytical superposition of infinitely narrow proton beamlets (ASPB), is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slab-like varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found to be very comparable to experimental and MC data. For a layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There was also reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, where discrepancies were larger than 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did the MC simulations. Overall
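When the beamlet lateral spread is Gaussian, pencil-beam superposition over a uniform field has a well-known closed form: the lateral profile is a difference of error functions. A sketch of that generic identity (not the ASPB code itself; names and parameters are illustrative):

```python
from math import erf, sqrt

def lateral_profile(x, d0, a, sigma):
    """Dose at lateral position x for a uniform field of half-width a,
    obtained by superposing Gaussian pencil beams of spread sigma and
    central-axis broad-beam dose d0 (closed form of the convolution)."""
    return 0.5 * d0 * (erf((x + a) / (sqrt(2.0) * sigma))
                       - erf((x - a) / (sqrt(2.0) * sigma)))
```

Deep inside a wide field the expression recovers the broad-beam dose d0, and at the geometric field edge it gives d0/2, which is why the 50% isodose line is conventionally taken to mark the field border.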
Uppal, Elaine
2016-01-01
This article is part of the Advancing practice series which is aimed at exploring practice issues in more depth, considering topics that are frequently encountered and facilitating the development of new insights. Elaine Uppal focuses on the importance of all midwives developing guideline writing skills to ensure that local, national and international midwifery/maternity guidelines are up to date, relevant and reflect midwifery knowledge alongside 'gold' standard evidence. The article aims to consider the development, use and critical appraisal of clinical guidelines. It will define and explain guidelines; discuss their development and dissemination; and consider issues relating to their use in practice. Techniques to critique and develop guidelines using the AGREE tool will be outlined in the form of practice challenges to be undertaken by the individual or in a group.
A revised calculational model for fission
Energy Technology Data Exchange (ETDEWEB)
Atchison, F.
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to the evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation energy, and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: nuclei from Ta to Cf, and interactions involving nucleons up to medium energy as well as light ions. (author)
A Java Interface for Roche Lobe Calculations
Leahy, D. A.; Leahy, J. C.
2015-09-01
A Java interface for calculating various properties of the Roche lobe has been created. The geometry of the Roche lobe is important for studying interacting binary stars, particularly those in which a compact object has a companion that fills the Roche lobe. There is no known analytic solution to the Roche lobe problem. Here the geometry of the Roche lobe is calculated numerically to high accuracy and made available to the user for an arbitrary input mass ratio, q.
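While the lobe geometry itself must be computed numerically, the volume-equivalent Roche-lobe radius has a widely used fitting formula due to Eggleton (1983), accurate to about 1% for all mass ratios. A sketch, independent of the Java interface described above:

```python
import math

def eggleton_rl(q):
    """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
    radius R_L/a (a = orbital separation) of the star of mass M1, for
    mass ratio q = M1/M2. Accurate to roughly 1% over all q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))
```

For an equal-mass binary (q = 1) the formula gives R_L/a of about 0.38, and R_L grows monotonically with q, as expected for an increasingly dominant star.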
Realistic level density calculation for heavy nuclei
Energy Technology Data Exchange (ETDEWEB)
Cerf, N. [Institut de Physique Nucleaire, Orsay (France); Pichon, B. [Observatoire de Paris, Meudon (France); Rayet, M.; Arnould, M. [Institut d`Astronomie et d`Astrophysique, Bruxelles (Belgium)
1994-12-31
A microscopic calculation of the level density is performed, based on a combinatorial evaluation using a realistic single-particle level scheme. The calculation relies on a fast Monte Carlo algorithm that allows consideration of heavy nuclei (i.e., large shell-model spaces) which could not previously be treated in combinatorial approaches. An exhaustive comparison of the predicted neutron s-wave resonance spacings with experimental data for a wide range of nuclei is presented.
Flow calculation of a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-01-01
In recent years remarkable progress has been made in the field of theoretical flow calculation. Studying the relevant literature, one might receive the impression that most problems have been solved, but probing more deeply into the details one becomes aware that by no means all questions are answered. The report tries to point out what may be expected of the quasi-three-dimensional flow calculation method employed and, much more importantly, what it must not be expected to accomplish. (orig.)
Green's function calculations of light nuclei
Sun, ZhongHao; Wu, Qiang; Xu, FuRong
2016-09-01
The influence of short-range correlations in nuclei was investigated with a realistic nuclear force. The nucleon-nucleon interaction was renormalized with the V_lowk technique and applied to Green's function calculations. The Dyson equation was reformulated with the algebraic diagrammatic construction. We also analyzed the binding energy of 4He calculated with the chiral potential and the CD-Bonn potential. The properties of the Green's function with realistic nuclear forces are also discussed.
Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems....... is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric...
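Because Wallenius' distribution arises from sequential weighted draws without replacement, it can be illustrated by direct simulation of the urn process (a sketch of the defining mechanism, not one of the calculation methods the paper derives; names are my own):

```python
import random

def wallenius_draw(m1, m2, n, omega, rng=random):
    """Simulate one Wallenius trial: n sequential draws without replacement
    from an urn holding m1 'red' balls of weight omega and m2 'white' balls
    of weight 1. Each draw selects a colour with probability proportional
    to the total remaining weight of that colour. Returns the red count."""
    x = 0                                   # red balls drawn so far
    for i in range(n):
        red_mass = (m1 - x) * omega
        white_mass = float(m2 - (i - x))    # white balls remaining
        if rng.random() * (red_mass + white_mass) < red_mass:
            x += 1
    return x
```

With omega = 1 the process reduces to the central hypergeometric distribution; omega != 1 models the biased sampling the abstract mentions.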
Users enlist consultants to calculate costs, savings
Energy Technology Data Exchange (ETDEWEB)
1982-05-24
Consultants who calculate payback provide expertise and a second opinion to back up energy managers' proposals. They can lower the costs of an energy-management investment by making complex comparisons of systems and recommending the best system for a specific application. Examples of payback calculations include simple payback for a school system, a university, and a Disneyland hotel, as well as internal rate of return for a corporate office building and a chain of clothing stores. (DCK)
DOWNSCALE APPLICATION OF BOILER THERMAL CALCULATION APPROACH
Zelený, Zbynĕk; Hrdlička, Jan
2016-01-01
Commonly used thermal calculation methods are intended primarily for large-scale boilers. Hot-water small-scale boilers, which are commonly used for home heating, have many specifics that distinguish them from large-scale boilers, especially steam boilers. This paper focuses on the application of a thermal calculation procedure designed for large-scale boilers to a small-scale biomass-combustion boiler of load capacity 25 kW. A special issue solved here is the influence of the formation of dep...
Reciprocity Theorems for Ab Initio Force Calculations
Wei, C; Mele, E J; Rappe, A M; Lewis, Steven P.; Rappe, Andrew M.
1996-01-01
We present a method for calculating ab initio interatomic forces which scales quadratically with the size of the system and provides a physically transparent representation of the force in terms of the spatial variation of the electronic charge density. The method is based on a reciprocity theorem for evaluating an effective potential acting on a charged ion in the core of each atom. We illustrate the method with calculations for diatomic molecules.
R-matrix calculation for photoionization
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
We have employed the R-matrix method to calculate differential cross sections for photoionization of helium leaving the helium ion in an excited state, for incident photon energies between the N=2 and N=3 thresholds (69-73 eV) of the He+ ion. Differential cross sections for photoionization into the N=2 level at an emission angle of 0° are provided. Our results are in good agreement with available experimental data and theoretical calculations.
Efficient Finite Element Calculation of Nγ
DEFF Research Database (Denmark)
Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.
2007-01-01
This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
Computerized calculation of material balances in carbonization
Energy Technology Data Exchange (ETDEWEB)
Chistyakov, A.M.
1980-09-01
Charge formulations and carbonisation schedules are described by empirical formulae used to calculate the yield of coking products. An algorithm is proposed for calculating the material balance, and associated computer program. The program can be written in conventional languages, e.g. Fortran, Algol etc. The information obtained can be used for on-line assessment of the effects of charge composition and properties on the coke and by-products yields, as well as the effects of the carbonisation conditions.
Calculating Cumulative Binomial-Distribution Probabilities
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
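CUMBIN itself is written in C; a minimal Python equivalent of the cumulative-binomial computation, together with the k-out-of-n reliability it supports (function names here are illustrative, not CUMBIN's API):

```python
from math import comb

def cumbin(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p), by direct summation of the
    point probabilities C(n, i) * p**i * (1-p)**(n-i)."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k + 1))

def k_out_of_n_reliability(k, n, p):
    """Reliability of a k-out-of-n system whose components each work
    independently with probability p: at least k of n must work, i.e.
    1 minus the probability of k-1 or fewer working components."""
    return 1.0 - cumbin(n, p, k - 1)
```

Direct summation is adequate for moderate n; for large n or extreme tail probabilities, numerically stabler recurrences (as used in programs like CUMBIN) are preferable.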
PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION
Directory of Open Access Journals (Sweden)
Marian ŢAICU
2014-11-01
Progress in improving production technology requires appropriate measures to achieve efficient cost management. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.
Linear Response Calculations of Spin Fluctuations
Savrasov, S. Y.
1998-09-01
A variational formulation of the time-dependent linear response based on the Sternheimer method is developed in order to make practical ab initio calculations of dynamical spin susceptibilities of solids. Using gradient density functional and a muffin-tin-orbital representation, the efficiency of the approach is demonstrated by applications to selected magnetic and strongly paramagnetic metals. The results are found to be consistent with experiment and are compared with previous theoretical calculations.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
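As an illustration of the kind of statistic computed from a daily series (not EFASC's actual algorithm, which users are advised to review), a minimal 7-day minimum flow, a common low-flow statistic:

```python
def seven_day_min(flows):
    """Minimum 7-day moving-average flow (the low-flow '7Q' statistic)
    from a list of daily streamflow values, in the input's flow units."""
    if len(flows) < 7:
        raise ValueError("need at least 7 daily values")
    window_means = [sum(flows[i:i + 7]) / 7.0
                    for i in range(len(flows) - 6)]
    return min(window_means)
```

Published low-flow statistics often attach a recurrence interval to this value (e.g. 7Q10), which requires fitting a frequency distribution to annual minima rather than a single series scan.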
Energy Technology Data Exchange (ETDEWEB)
Park, J; Lee, J [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of); Kim, H [Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of); Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Kim, I [Department of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Ye, S [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul (Korea, Republic of); Advanced Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of)
2015-06-15
Purpose: To evaluate the effect of a tungsten eye-shield on the dose distribution of a patient. Methods: A 3D scanner was used to extract the dimensions and shape of a tungsten eye-shield in the STL format. The scanned data were transferred into a 3D printer. A dummy eye-shield was then produced using bio-resin (3D Systems, VisiJet M3 Proplast). For a patient with mucinous carcinoma, the planning CT was obtained with the dummy eye-shield placed on the patient's right eye. Field shaping for 6 MeV was performed using a patient-specific cerrobend block on the 15 × 15 cm² applicator. The gantry angle was 330° to cover the planning target volume near the lens. EGS4/BEAMnrc was commissioned against our measurement data from a Varian 21EX. For the CT-based dose calculation using EGS4/DOSXYZnrc, the CT images were converted to a phantom file through the ctcreate program. The phantom file had the same resolution as the planning CT images. By assigning the CT numbers of the dummy eye-shield region to 17000, the real dose distributions below the tungsten eye-shield were calculated in EGS4/DOSXYZnrc. In the TPS, the CT number of the dummy eye-shield region was assigned to the maximum allowable CT number (3000). Results: Relative to the maximum dose, the MC dose on the right lens or below the eye-shield area was less than 2%, while the corresponding RTP-calculated dose was an unrealistic value of approximately 50%. Conclusion: Utilizing a 3D scanner and a 3D printer, a dummy eye-shield for electron treatment can be easily produced. The artifact-free CT images were successfully incorporated into the CT-based Monte Carlo simulations. The developed method was useful in predicting the realistic dose distributions around the lens blocked with the tungsten shield.
A decision tool to adjust the prescribed dose after change in the dose calculation algorithm
Directory of Open Access Journals (Sweden)
Abdulhamid Chaikh
2014-12-01
Purpose: This work introduces a method to quantify and assess the differences in monitor units (MUs) when changing to new dose calculation software that uses a different algorithm, and to evaluate the need for, and extent of, an adjustment of the prescribed dose to maintain the same clinical results. Methods: Doses were calculated using two classical algorithms based on the Pencil Beam Convolution (PBC) model for 6 patients presenting with lung cancer. For each patient, 3 treatment plans were generated: plan 1 was calculated using the reference algorithm (PBC without heterogeneity correction), plan 2 was calculated using the test algorithm with heterogeneity correction, and in plan 3 the dose was recalculated using the test algorithm with the monitor units (MUs) obtained from plan 1 as input. To assess the differences, the calculated MUs, isocenter doses, and spatial dose distributions (using a gamma index) were compared. Statistical analysis was based on a Wilcoxon signed-rank test. Results: The test algorithm in plan 2 calculated significantly fewer MUs than the reference algorithm in plan 1, by 5% on average (p < 0.001). We also found an underestimation of the dose to target volumes using 3D gamma index analysis. In this example, in order to obtain the same clinical outcomes with the two algorithms, the prescribed dose should be adjusted by 5%. Conclusion: This method provides a quantitative evaluation of the differences between two dose calculation algorithms and of the consequences for the prescribed dose. It could be used to adjust the prescribed dose when changing calculation software, to maintain the same clinical results as obtained with the former software. In particular, the gamma evaluation could be applied to any situation where changes in the dose calculation occur in radiotherapy.
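The gamma index used for such dose-distribution comparisons combines a dose-difference criterion with a distance-to-agreement (DTA) criterion (Low et al. 1998). A minimal 1-D sketch under a global 3%/3 mm criterion (a generic illustration, not the paper's 3D implementation; parameter names are my own):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dta_mm=3.0, dd_frac=0.03):
    """1-D gamma index: for each reference point, take the minimum over
    all evaluated points of the combined dose-difference / DTA metric.
    A reference point passes the criterion when its gamma <= 1."""
    d_norm = dd_frac * max(ref_dose)   # global dose-difference criterion
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(
            math.hypot((ep - rp) / dta_mm, (ed - rd) / d_norm)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas
```

Identical distributions give gamma = 0 everywhere, and a uniform dose scaling equal to the dose-difference tolerance drives the worst point to gamma = 1, the pass/fail boundary.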
[Hydration in clinical practice].
Maristany, Cleofé Pérez-Portabella; Segurola Gurruchaga, Hegoi
2011-01-01
Water is an essential foundation for life, having both a regulatory and a structural function. The former results from its active and passive participation in all metabolic reactions, and from its role in conserving and maintaining body temperature. Structurally speaking, it is the major contributor to tissue mass, accounting for some 60% of it, and is the basis of blood plasma, intracellular and interstitial fluid. Water is also part of the primary structures of life such as genetic material and proteins. It is therefore necessary that the nurse make an early assessment of the patient's water needs to detect any signs of electrolyte imbalance. Dehydration can be a very serious problem, especially in children and the elderly. Treating dehydration with oral rehydration solution decreases the risk of developing hydration disorders, but even so, it is recommended to follow preventive measures to reduce the incidence and severity of dehydration. The key to proper hydration is prevention. Artificial nutrition requires precise calculation of water needs in enteral as well as parenteral nutrition, so the nurse should be part of this process and use the tools for calculating the patient's requirements. All this helps to ensure an optimal nutritional status in patients at risk. Ethical dilemmas are becoming increasingly common in clinical practice. On the subject of artificial nutrition and hydration, there is not yet any unanimous agreement regarding hydration as basic care. It is necessary to take decisions in consensus with the health team, always considering the best interests of the patient.
Good Practices in Free-energy Calculations
Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher
2013-01-01
As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations are reviewed here, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, can markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
Comparison of Polar Cap (PC) index calculations.
Stauning, P.
2012-04-01
The Polar Cap (PC) index introduced by Troshichev and Andrezen (1985) is derived from polar magnetic variations and is mainly a measure of the intensity of the transpolar ionospheric currents. These currents relate to the polar cap antisunward ionospheric plasma convection driven by the dawn-dusk electric field, which in turn is generated by the interaction of the solar wind with the Earth's magnetosphere. Coefficients to calculate PCN and PCS index values from polar magnetic variations recorded at Thule and Vostok, respectively, have been derived by several different procedures in the past. The first published set of coefficients for Thule was derived by Vennerstrøm (1991) and is still in use for calculations of PCN index values by DTU Space. Errors in the program used to calculate index values were corrected in 1999 and again in 2001. In 2005 DMI adopted a unified procedure proposed by Troshichev for calculations of the PCN index. Thus there exist four different series of PCN index values. Similarly, at AARI three different sets of coefficients have been used to calculate PCS indices in the past. The presentation discusses the principal differences between the various PC index procedures and provides comparisons between index values derived from the same magnetic data sets using the different procedures. Examples from published papers are examined to illustrate the differences.
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory but difficult in practice, because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its non-hydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials to those in the final state. Through a series of geometrical optimization steps, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of the beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method for accurate free energy calculation.
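For reference, the two path-based identities named above are, in standard notation (\(\lambda\) the coupling parameter switching the Hamiltonian between end states, \(\Delta U\) the potential-energy difference between them):

\[
\Delta F_{\mathrm{TI}} = \int_0^1
\left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
\,\mathrm{d}\lambda ,
\qquad
\Delta F_{\mathrm{FEP}} = -k_{\mathrm{B}} T
\ln \left\langle e^{-\Delta U / k_{\mathrm{B}} T} \right\rangle_{0} .
\]

Both estimators converge only when neighbouring states along the path overlap well, which is why a short, smooth geometric path of the kind constructed here helps in practice.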
2014-01-01
related metrics for detecting sepsis and multiorgan failure, improvement of HRC calculations may help detect significant changes from baseline values ... calculations. Equivalence tests between mean HRC values derived from manually verified sequences and those derived from automatically detected peaks ... assessment of HRC in critically ill patients. Keywords: signal detection analysis; electrocardiography; heart rate; clinical decision support
A GPU implementation of a track-repeating algorithm for proton radiotherapy dose calculations
Yepes, Pablo P; Taddei, Phillip J
2010-01-01
An essential component in proton radiotherapy is the algorithm to calculate the radiation dose to be delivered to the patient. The most common dose algorithms are fast but they are approximate analytical approaches. However their level of accuracy is not always satisfactory, especially for heterogeneous anatomic areas, like the thorax. Monte Carlo techniques provide superior accuracy, however, they often require large computation resources, which render them impractical for routine clinical use. Track-repeating algorithms, for example the Fast Dose Calculator, have shown promise for achieving the accuracy of Monte Carlo simulations for proton radiotherapy dose calculations in a fraction of the computation time. We report on the implementation of the Fast Dose Calculator for proton radiotherapy on a card equipped with graphics processor units (GPU) rather than a central processing unit architecture. This implementation reproduces the full Monte Carlo and CPU-based track-repeating dose calculations within 2%, w...
Perturbation calculation of thermodynamic density of states.
Brown, G; Schulthess, T C; Nicholson, D M; Eisenbach, M; Stocks, G M
2011-12-01
The density of states g (ε) is frequently used to calculate the temperature-dependent properties of a thermodynamic system. Here a derivation is given for calculating the warped density of states g*(ε) resulting from the addition of a perturbation. The method is validated for a classical Heisenberg model of bcc Fe and the errors in the free energy are shown to be second order in the perturbation. Taking the perturbation to be the difference between a first-principles quantum-mechanical energy and a corresponding classical energy, this method can significantly reduce the computational effort required to calculate g(ε) for quantum systems using the Wang-Landau approach.
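The key property stated above (free-energy errors that are second order in the perturbation) can be checked on any small system by exact enumeration. The sketch below uses a periodic 1D Ising chain with a magnetic-field perturbation rather than the paper's Heisenberg bcc Fe model; system size, couplings and field strengths are illustrative choices only.

```python
import itertools
import math

def free_energy(n, j, h, t):
    """Exact Helmholtz free energy F = -T ln Z of a periodic 1D Ising chain,
    E = -j * sum_i s_i s_{i+1} - h * sum_i s_i, by brute-force enumeration."""
    z = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        e = -j * sum(s[i] * s[(i + 1) % n] for i in range(n))
        e -= h * sum(s)
        z += math.exp(-e / t)
    return -t * math.log(z)

def first_order_slope(n, j, t):
    """<dE/dh>_0 = -<sum_i s_i> averaged in the unperturbed (h = 0) ensemble."""
    z = acc = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        e0 = -j * sum(s[i] * s[(i + 1) % n] for i in range(n))
        w = math.exp(-e0 / t)
        z += w
        acc += w * (-sum(s))
    return acc / z

# The first-order estimate F0 + h * <dE/dh>_0 misses the exact free energy
# only at second order in the perturbation strength h (here the slope is
# zero by spin-flip symmetry, so the error itself scales as h^2).
n, j, t = 8, 1.0, 1.0
f0 = free_energy(n, j, 0.0, t)
slope = first_order_slope(n, j, t)
for h in (0.05, 0.1):
    err = abs(free_energy(n, j, h, t) - (f0 + h * slope))
    print(f"h = {h}: error of first-order estimate = {err:.5f}")
```

Doubling h should roughly quadruple the error, which is what the loop above demonstrates.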
Using Inverted Indices for Accelerating LINGO Calculations
DEFF Research Database (Denmark)
Kristensen, Thomas Greve; Nielsen, Jesper; Pedersen, Christian Nørgaard Storm
2011-01-01
The ever growing size of chemical data bases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating the Tanimoto coefficient which is called the LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialised hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices it is possible to calculate LINGOsim ...
Using inverted indices for accelerating LINGO calculations.
Kristensen, Thomas G; Nielsen, Jesper; Pedersen, Christian N S
2011-03-28
The ever growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating the Tanimoto coefficient, which is called LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets, which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialized hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices, it is possible to calculate LINGOsim similarity matrices roughly 2.6 times faster than existing methods without relying on specialized hardware.
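A minimal sketch of the approach described above (an inverted index over LINGO multisets, with the multiset Tanimoto coefficient as the similarity), assuming 4-character LINGOs and omitting the SMILES preprocessing steps of the original method:

```python
from collections import Counter, defaultdict

def lingos(smiles, q=4):
    """Multiset of overlapping q-character substrings (LINGOs) of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingosim_matrix(smiles_list, q=4):
    """All-pairs multiset Tanimoto via an inverted index LINGO -> [(mol id, count)].

    Multiset Tanimoto = sum(min) / (|A| + |B| - sum(min)); only molecule
    pairs that share at least one LINGO are ever visited."""
    mols = [lingos(s, q) for s in smiles_list]
    sizes = [sum(m.values()) for m in mols]
    index = defaultdict(list)
    for i, m in enumerate(mols):
        for g, c in m.items():
            index[g].append((i, c))
    n = len(mols)
    inter = [[0] * n for _ in range(n)]
    for postings in index.values():
        for a, (i, ci) in enumerate(postings):
            for j, cj in postings[a + 1:]:
                inter[i][j] += min(ci, cj)  # accumulate multiset intersection
    sim = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            t = inter[i][j] / (sizes[i] + sizes[j] - inter[i][j])
            sim[i][j] = sim[j][i] = t
    return sim
```

For example, `lingosim_matrix(["CCOCC", "CCOCC", "CCCCC"])` gives 1.0 for the identical pair and 0.0 for the pair sharing no 4-character LINGO. The inverted index is what makes the similarity-matrix computation scale with the number of shared LINGOs rather than with all molecule pairs.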
Automated one-loop calculations with GOSAM
Energy Technology Data Exchange (ETDEWEB)
Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)
2011-11-15
We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)
Benchmarking calculations of excitonic couplings between bacteriochlorophylls
Kenny, Elise P
2015-01-01
Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. We also caution ...
Detailed Burnup Calculations for Research Reactors
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, F. [Centro Atomico Bariloche (CNEA), 8400 S. C. de Bariloche (Argentina)
2011-07-01
A general method (RRMCQ) has been developed by introducing a microscopic burnup scheme which uses the Monte Carlo calculated spatial power distribution of a research reactor core and a depletion code for burnup calculations, as a basis for solving the nuclide material balance equations for each spatial region into which the system is divided. Continuous-energy cross-section libraries and the full 3D geometry of the system are input to the calculations. The resulting predictions for the system at successive burnup time steps are thus based on a calculation route where both geometry and cross-sections are accurately represented, without geometry simplifications and with continuous energy data. The main advantage of this method over the classical deterministic methods currently used is that RRMCQ is a direct 3D method, without the limitations and errors introduced by the homogenization of geometry and condensation of energy in deterministic methods. The Monte Carlo and burnup codes adopted until now are the widely used MCNP5 and ORIGEN2 codes, but other codes can be used as well. To use this method, a well-known set of nuclear data is needed for the isotopes involved in the burnup chains, including burnable poisons, fission products and actinides. To fix the data to be included in this set, a study of the present status of nuclear data was performed as part of the development of the RRMCQ method. This study begins with a review of the available cross-section data for isotopes involved in burnup chains for research nuclear reactors. The main data needs for burnup calculations are neutron cross-sections, decay constants, branching ratios, fission energy and yields. The present work includes results of selected experimental benchmarks and conclusions about the sensitivity of different sets of cross-section data for burnup calculations, using some of the main available evaluated nuclear data files. Basically, the RRMCQ detailed burnup method includes four
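The nuclide material balance equations mentioned above are, per spatial region, coupled first-order ODEs (the Bateman equations). A two-member decay chain, solved analytically and cross-checked with explicit Euler integration, illustrates the structure that a depletion code such as ORIGEN2 solves for hundreds of nuclides; the rate constants below are arbitrary illustrative values, not reactor data.

```python
import math

def bateman_two(n0, lam1, lam2, t):
    """Analytic solution of a two-member chain N1 -> N2 -> (removal):
    dN1/dt = -lam1*N1,  dN2/dt = lam1*N1 - lam2*N2,  with N2(0) = 0."""
    n1 = n0 * math.exp(-lam1 * t)
    n2 = n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def euler_two(n0, lam1, lam2, t, steps=100000):
    """Explicit-Euler integration of the same chain, as a numerical check."""
    n1, n2 = n0, 0.0
    dt = t / steps
    for _ in range(steps):
        # simultaneous update: right-hand sides use the old values
        n1, n2 = n1 - lam1 * n1 * dt, n2 + (lam1 * n1 - lam2 * n2) * dt
    return n1, n2

print(bateman_two(1.0, 0.3, 0.1, 5.0))
print(euler_two(1.0, 0.3, 0.1, 5.0))
```

In a Monte Carlo burnup scheme the production and removal rates in these equations come from the flux and cross-sections tallied per region, and the chain is re-solved at every burnup time step.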
Dose calculations for intakes of ore dust
Energy Technology Data Exchange (ETDEWEB)
O'Brien, R.S.
1998-08-01
This report describes a methodology for calculating the committed effective dose for mixtures of radionuclides, such as those which occur in natural radioactive ores and dusts. The formulae are derived from first principles, with the use of reasonable assumptions concerning the nature and behaviour of the radionuclide mixtures. The calculations are complicated because these 'ores' contain a range of particle sizes, have different degrees of solubility in blood and other body fluids, and also have different biokinetic clearance characteristics from the organs and tissues in the body. The naturally occurring radionuclides also tend to occur in series, i.e. one is produced by the radioactive decay of another 'parent' radionuclide. The formulae derived here can be used, in conjunction with a model such as LUDEP, for calculating the total dose resulting from inhalation and/or ingestion of a mixture of radionuclides, and also for deriving annual limits on intake and derived air concentrations for these mixtures. 15 refs., 14 tabs., 3 figs.
Numerical inductance calculations based on first principles.
Shatz, Lisa F; Christensen, Craig W
2014-01-01
A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
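The same first-principles approach can be reproduced in any numerical language. The sketch below applies Neumann's double line integral for mutual inductance, M = (mu0 / 4 pi) * double loop integral of dl1 . dl2 / r, to two coaxial circular loops; the geometry is chosen here purely for illustration, whereas the paper's Mathematica implementation handles a wider range of configurations.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def mutual_inductance_coaxial(a, b, d, n=256):
    """Mutual inductance of two coaxial circular loops (radii a, b; axial
    separation d) by midpoint-rule evaluation of Neumann's formula."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t1 = (i + 0.5) * h
        for j in range(n):
            t2 = (j + 0.5) * h
            # dl1 . dl2 per unit (d_theta1 d_theta2) for circular loops
            dot = a * b * math.cos(t1 - t2)
            # distance between the two source points
            r = math.sqrt(a * a + b * b - 2 * a * b * math.cos(t1 - t2) + d * d)
            total += dot / r
    return MU0 / (4 * math.pi) * total * h * h

print(mutual_inductance_coaxial(0.1, 0.2, 0.1))  # henries
```

As expected on physical grounds, the result is symmetric in the two radii and decreases as the loops are moved apart.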
Challenges in Large Scale Quantum Mechanical Calculations
Ratcliff, Laura E; Huhs, Georg; Deutsch, Thierry; Masella, Michel; Genovese, Luigi
2016-01-01
During the past decades, quantum mechanical methods have undergone an amazing transition from pioneering investigations of experts into a wide range of practical applications, made by a vast community of researchers. First principles calculations of systems containing up to a few hundred atoms have become a standard in many branches of science. The sizes of the systems which can be simulated have increased even further during recent years, and quantum-mechanical calculations of systems up to many thousands of atoms are nowadays possible. This opens up new appealing possibilities, in particular for interdisciplinary work, bridging together communities of different needs and sensibilities. In this review we will present the current status of this topic, and will also give an outlook on the vast multitude of applications, challenges and opportunities stimulated by electronic structure calculations, making this field an important working tool and bringing together researchers of many different domains.
Cosmology calculations almost without general relativity
Jordan, T F
2003-01-01
The Friedmann equation can be derived for a Newtonian universe. Changing mass density to energy density gives exactly the Friedmann equation of general relativity. Accounting for work done by pressure then yields the two Einstein equations that govern the expansion of the universe. Descriptions and explanations of radiation pressure and vacuum pressure are added to complete a basic kit of cosmology tools. It provides a basis for teaching cosmology to undergraduates in a way that quickly equips them to do basic calculations. This is demonstrated with calculations involving: characteristics of the expansion for densities dominated by radiation, matter, or vacuum; the closeness of the density to the critical density; how much vacuum energy compared to matter energy is needed to make the expansion accelerate; and how little is needed to make it stop. Travel time and luminosity distance are calculated in terms of the redshift and the densities of matter and vacuum energy, using a scaled Friedmann equation with the...
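The two equations referred to above take the standard forms (written here with mass density \(\rho\); replacing \(\rho c^2\) by the energy density gives the general-relativistic versions):

\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{kc^{2}}{a^{2}},
\qquad
\dot\rho = -3\,\frac{\dot a}{a}\left(\rho + \frac{p}{c^{2}}\right),
\]

where the first follows from Newtonian energy conservation for a comoving shell and the second from applying \(\mathrm{d}E = -p\,\mathrm{d}V\) to a comoving volume. Together they reproduce the expansion histories (radiation-, matter-, and vacuum-dominated) that the article uses for its undergraduate calculations.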
Parallel scalability of Hartree–Fock calculations
Energy Technology Data Exchange (ETDEWEB)
Chow, Edmond, E-mail: echow@cc.gatech.edu; Liu, Xing [School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0765 (United States); Smelyanskiy, Mikhail; Hammond, Jeff R. [Parallel Computing Lab, Intel Corporation, Santa Clara, California 95054-1549 (United States)
2015-03-14
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
Lagrange interpolation for the radiation shielding calculation
Isozumi, Y; Miyatake, H; Kato, T; Tosaki, M
2002-01-01
Based on some formulas of Lagrange interpolation derived in this paper, a computer program for table calculations has been prepared. The main features of the program are as follows: 1) the maximum degree of the polynomial in the Lagrange interpolation is 10; 2) tables with either one or two variables can be used; 3) logarithmic transformations of function and/or variable values can be included; and 4) tables with discontinuities and cusps can be handled. The program has been carefully tested using the data tables in the manual of shielding calculation for radiation facilities. For all available tables in the manual, calculations with the program were performed satisfactorily under the conditions of 1) logarithmic transformation of both function and variable values and 2) polynomial degree 4 or 5.
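The core of such a table-interpolation program is small enough to sketch: Lagrange polynomial evaluation plus the log-log transformation option the abstract describes (degree limits, two-variable tables and cusp handling omitted here).

```python
import math

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

def loglog_interp(xs, ys, x):
    """Table lookup with logarithmic transformation of both the variable and
    the function values, as recommended for attenuation-type shielding tables
    (all xs, ys, x must be positive)."""
    return math.exp(lagrange([math.log(v) for v in xs],
                             [math.log(v) for v in ys],
                             math.log(x)))
```

For example, `loglog_interp([1, 2, 4], [1, 4, 16], 3)` recovers the power law y = x^2 exactly, since it is linear in log-log coordinates.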
eQuilibrator--the biochemical thermodynamics calculator.
Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron
2012-01-01
The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
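The core transformation behind such a calculator is the concentration correction of the standard reaction Gibbs energy. In the sketch below the standard value for ATP hydrolysis is a rough illustrative number, not eQuilibrator output; the pH- and ionic-strength-dependent standard value is precisely what eQuilibrator computes from its database.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol K)

def delta_g_prime(dg0, q, temp=298.15):
    """Transformed reaction Gibbs energy at given metabolite concentrations:
    dG' = dG'0 + R * T * ln(Q), with Q the reaction quotient."""
    return dg0 + R * temp * math.log(q)

# ATP + H2O -> ADP + Pi; dG'0 is roughly -30 kJ/mol near pH 7 (illustrative).
dg0 = -30.0
q = (1e-4 * 1e-3) / 1e-3  # [ADP][Pi]/[ATP] in molar units
print(delta_g_prime(dg0, q))  # substantially more negative than dG'0
```

At physiological-like concentrations the reaction quotient is far below 1, so the released Gibbs energy is considerably larger in magnitude than the standard value.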
Daylight calculations using constant luminance curves
Energy Technology Data Exchange (ETDEWEB)
Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda
2005-02-01
This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed by Richard Perez et al. at the Atmospheric Sciences Research Center of the State University of New York (ASRC). Work with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis to establish conclusions concerning topics related to energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)
Calculation of Radiation Damage in SLAC Targets
Energy Technology Data Exchange (ETDEWEB)
Wirth, B D; Monasterio, P; Stein, W
2008-04-03
Ti-6Al-4V alloys are being considered as a positron-producing target in the Next Linear Collider, with an incident photon beam and operating temperatures between room temperature and 300 C. Calculations of displacement damage in Ti-6Al-4V alloys have been performed by combining high-energy particle FLUKA simulations with SPECTER calculations of the displacement cross section for the resulting energy-dependent neutron flux, plus the displacements calculated with the Lindhard model for the resulting energy-dependent ion flux. The radiation damage calculations have investigated two cases, namely the damage produced in a Ti-6Al-4V SLAC positron target where the irradiation source is a photon beam with energies between 5 and 11 MeV, and the radiation damage dose in displacements per atom (dpa) for a mono-energetic 196 MeV proton irradiation experiment performed at Brookhaven National Laboratory (BLIP experiment). The calculated damage rate is 0.8 dpa/year for the Ti-6Al-4V SLAC photon irradiation target, and the total damage exposure is 0.06 dpa in the BLIP irradiation experiment. In both cases, the displacements are predominantly (~80%) produced by recoiling ions (atomic nuclei) from photo-nuclear or proton-nuclear collisions, respectively. Approximately 25% of the displacement damage results from neutrons in both cases. Irradiation effects studies in titanium alloys have shown substantial increases in the yield and ultimate strength of up to 500 MPa and a corresponding decrease in uniform ductility for neutron and high-energy proton irradiation at temperatures between 40 and 300 C. Although the data are limited, there is an indication that the strength increases will saturate at doses on the order of a few dpa. Microstructural investigations indicate that the dominant features responsible for the strength increases were dense precipitation of a β (body-centered cubic) phase precipitate along with a high number density
Precise calculations of the deuteron quadrupole moment
Energy Technology Data Exchange (ETDEWEB)
Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2016-06-01
Recently, two calculations of the deuteron quadrupole moment have given predictions that agree with the measured value to within 1%, resolving a long-standing discrepancy. One of these uses the covariant spectator theory (CST) and the other chiral effective field theory (cEFT). In this talk I will first briefly review the foundations and history of the CST, and then compare these two calculations, with emphasis on how the same physical processes are being described using very different language. The comparison of the two methods gives new insights into the dynamics of the low-energy NN interaction.
Local orbitals in electron scattering calculations*
Winstead, Carl L.; McKoy, Vincent
2016-05-01
We examine the use of local orbitals to improve the scaling of calculations that incorporate target polarization in a description of low-energy electron-molecule scattering. After discussing the improved scaling that results, we consider the results of a test calculation that treats scattering from a two-molecule system using both local and delocalized orbitals. Initial results are promising. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.
Numerical calculation of impurity charge state distributions
Energy Technology Data Exchange (ETDEWEB)
Crume, E. C.; Arnurius, D. E.
1977-09-01
The numerical calculation of impurity charge state distributions using the computer program IMPDYN is discussed. The time-dependent corona atomic physics model used in the calculations is reviewed, and general and specific treatments of electron impact ionization and recombination are referenced. The complete program and two examples relating to tokamak plasmas are given on a microfiche so that a user may verify that his version of the program is working properly. In the discussion of the examples, the corona steady-state approximation is shown to have significant defects when the plasma environment, particularly the electron temperature, is changing rapidly.
The new pooled cohort equations risk calculator
DEFF Research Database (Denmark)
Preiss, David; Kristensen, Søren L
2015-01-01
total cardiovascular risk score. During development of joint guidelines released in 2013 by the American College of Cardiology (ACC) and American Heart Association (AHA), the decision was taken to develop a new risk score. This resulted in the ACC/AHA Pooled Cohort Equations Risk Calculator. This risk ... disease and any measure of social deprivation. An early criticism of the Pooled Cohort Equations Risk Calculator has been its alleged overestimation of ASCVD risk which, if confirmed in the general population, is likely to result in statin therapy being prescribed to many individuals at lower risk than ...
Idiot savant calendrical calculators: maths or memory?
O'Connor, N; Hermelin, B
1984-11-01
Eight idiot savant calendrical calculators were tested on dates in the years 1963, 1973, 1983, 1986 and 1993. The study was carried out in 1983. Speeds of correct response were minimal in 1983 and increased markedly into the past and the future. The response time increase was matched by an increase in errors. Speeds of response were uncorrelated with measured IQ, but the numbers were insufficient to justify any inference in terms of IQ-independence. Results are interpreted as showing that memory alone is inadequate to explain the calendrical calculating performance of the idiot savant subjects.
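For context on the task studied above: the weekday rule the subjects are computing is algorithmically shallow. Zeller's congruence for the Gregorian calendar fits in a few lines, which is part of what makes the maths-versus-memory question interesting; this sketch is a standard textbook formula, not the study's model of the subjects' strategy.

```python
def day_of_week(year, month, day):
    """Zeller's congruence for the Gregorian calendar.
    Returns 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if month < 3:  # January and February count as months 13/14 of the prior year
        month += 12
        year -= 1
    k, j = year % 100, year // 100
    return (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7

print(day_of_week(1983, 11, 1))  # a date from the study's test range
```

The formula involves only small-integer arithmetic, so fast, uniform response times across distant years would point toward calculation, whereas the observed slowdown away from the present is what motivates the memory interpretation.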
Calculated Electron Fluxes at Airplane Altitudes
Schaefer, R K; Stanev, T
1993-01-01
A precision measurement of atmospheric electron fluxes has been performed on a Japanese commercial airliner (Enomoto et al., 1991). We have performed a Monte Carlo calculation of the cosmic-ray secondary electron fluxes expected in this experiment. The Monte Carlo uses the hadronic portion of our neutrino flux cascade program combined with the electromagnetic cascade portion of the CERN library program GEANT. Our results give good agreement with the data, provided we boost the overall normalization of the primary cosmic ray flux by 12% over the normalization used in the neutrino flux calculation.
Program Calculates Power Demands Of Electronic Designs
Cox, Brian
1995-01-01
CURRENT computer program calculates power requirements of electronic designs. For given design, CURRENT reads in applicable parts-list file and file containing current required for each part. Program also calculates power required for circuit at supply potentials of 5.5, 5.0, and 4.5 volts. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19590). PC version of program (NPO-19111).
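The calculation itself is simple enough to sketch. The function below mirrors the behaviour described (sum the per-part supply currents from a parts list, then report power at the three supply potentials); the parts-list file parsing is omitted and the function name is invented here, not taken from the CURRENT program.

```python
def total_power(currents_ma, supply_volts=(5.5, 5.0, 4.5)):
    """Sum per-part supply currents (in mA) and return the circuit power
    demand (in W) at each candidate supply-rail voltage."""
    total_a = sum(currents_ma) / 1000.0  # convert mA to A
    return {v: v * total_a for v in supply_volts}

# three parts drawing 120, 45 and 30 mA (illustrative values)
print(total_power([120, 45, 30]))
```

Evaluating at the nominal rail plus its tolerance extremes (here 5.0 V, plus or minus 10%) gives the worst-case power budget directly.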
Calculated optical absorption of different perovskite phases
DEFF Research Database (Denmark)
Castelli, Ivano Eligio; Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2015-01-01
We present calculations of the optical properties of a set of around 80 oxides, oxynitrides, and organometal halide cubic and layered perovskites (Ruddlesden-Popper and Dion-Jacobson phases) with a bandgap in the visible part of the solar spectrum. The calculations show that for different classes of perovskites the solar light absorption efficiency varies greatly depending not only on bandgap size and character (direct/indirect) but also on the dipole matrix elements. The oxides generally exhibit a fairly weak absorption efficiency due to indirect bandgaps, while the most efficient absorbers are found in the classes of oxynitride and organometal halide perovskites with strong direct transitions.
Relaxation Method For Calculating Quantum Entanglement
Tucci, R R
2001-01-01
In a previous paper, we showed how entanglement of formation can be defined as a minimum of the quantum conditional mutual information (a.k.a. quantum conditional information transmission). In classical information theory, the Arimoto-Blahut method is one of the preferred methods for calculating extrema of mutual information. We present a new method akin to the Arimoto-Blahut method for calculating entanglement of formation. We also present several examples computed with a computer program called Causa Comun that implements the ideas of this paper.
DFT calculations with the exact functional
Burke, Kieron
2014-03-01
I will discuss several works in which we calculate the exact exchange-correlation functional of density functional theory, mostly using the density-matrix renormalization group method invented by Steve White, our collaborator. We demonstrate that a Mott-Hubbard insulator is a band metal. We also perform Kohn-Sham DFT calculations with the exact functional and prove that a simple algorithm always converges. But we find convergence becomes harder as correlations get stronger. An example from transport through molecular wires may also be discussed. Work supported by DOE grant DE-SC008696.
Improving on calculation of martensitic phenomenological theory
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
Taking the martensitic transformation from DO3 to 18R in a Cu-14.2Al-4.3Ni alloy as an example, and following the principle that an invariant habit plane can be obtained by self-accommodation between variants with twin relationships, the displacement vectors, the volume fractions of the two twin-related variants in the martensitic transformation, the habit-plane indices, and the orientation relationships between martensite and austenite after the phase transformation can all be calculated. Because no additional rotation matrices need to be considered and mirror-symmetry operations are used, the calculation process is simple and the results are accurate.
Transmission pipeline calculations and simulations manual
Menon, E Shashi
2014-01-01
Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f
Pumping slots: Coupling impedance calculations and estimates
Energy Technology Data Exchange (ETDEWEB)
Kurennoy, S.
1993-08-01
Coupling impedances of small pumping holes in vacuum-chamber walls have been calculated at low frequencies, i.e., for wavelengths large compared to a typical hole size, in terms of the electric and magnetic polarizabilities of the hole. The polarizabilities can be found by solving an electro- or magnetostatic problem and are known analytically for the case of an elliptic hole in a thin wall. The present paper studies the case of pumping slots. Using results of numerical calculations and analytical approximations of the polarizabilities, we give formulae for practically important estimates of the slot contribution to low-frequency coupling impedances.
Necessity of Exact Calculation for Transition Probability
Institute of Scientific and Technical Information of China (English)
LIU Fu-Sui; CHEN Wan-Fang
2003-01-01
This paper shows that exact calculation of the transition probability can make some systems deviate significantly from the Fermi golden rule. It also shows that the corresponding exact calculation of the phonon-induced hopping rate for deuterons in the Pd-D system with many-body electron screening, proposed by Ichimaru, can explain the experimental facts observed in the Pd-D system, and predicts that perfection and low dimensionality of the Pd lattice are very important for the phonon-induced hopping rate enhancement in the Pd-D system.
[Clinical research VI. Clinical relevance].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2011-01-01
Usually, in clinical practice the maneuver selected is the one that achieves a favorable outcome with a direct percentage of superiority of at least 10 %, or when the number needed to treat is approximately equal to 10. While this percentage difference is practical for estimating the magnitude of an association, we need to differentiate the impact measures (attributable risk, preventable fraction), measures of association (RR, OR, HR), and frequency measures (incidence and prevalence) applicable when the outcome is nominal. And we must identify ways to measure the strength of association and the magnitude of the association when the outcome variable is quantitative. It is not uncommon to interpret the measures of association as if they were impact measures. For example, for a RR of 0.68, it is common to assume a 32 % reduction of the outcome, but we must consider that this is a relative reduction, which comes from relations of 0.4/0.6, 0.04/0.06, or 0.00004/0.00006. However the direct reduction is 20 % (60 % - 40 %), 2 %, and 2 per 100,000, respectively. Therefore, to estimate the impact of a maneuver it is important to have the direct difference and/or NNT.
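The distinction the abstract draws between relative and absolute measures can be made concrete with a short calculation; the risk pairs are the ones quoted in the text, and the function name is illustrative:

```python
# Same relative risk (RR), very different absolute risk reductions (ARR)
# and numbers needed to treat (NNT). Risk pairs are those cited in the
# abstract (treated vs. control event risks).

def measures(risk_treated, risk_control):
    rr = risk_treated / risk_control   # measure of association
    arr = risk_control - risk_treated  # direct (absolute) reduction
    nnt = 1.0 / arr                    # number needed to treat
    return rr, arr, nnt

for rt, rc in [(0.4, 0.6), (0.04, 0.06), (0.00004, 0.00006)]:
    rr, arr, nnt = measures(rt, rc)
    print(f"RR={rr:.2f}  ARR={arr:.5f}  NNT={nnt:.0f}")
```

All three pairs share the same RR, but only the first corresponds to the 20 % direct reduction and the single-digit NNT that make a maneuver clinically attractive.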
Calculation of U-value for Concrete Element
DEFF Research Database (Denmark)
Rose, Jørgen
1997-01-01
This report is a U-value calculation of a typical concrete element used in industrial buildings. The calculations are performed using a 2-dimensional finite difference calculation programme.
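As a sketch of what a U-value calculation involves (the report itself uses a 2-dimensional finite difference programme to capture thermal bridges; the simple 1-dimensional layered formula and the material data below are illustrative assumptions only):

```python
# 1-D U-value of a layered element: U = 1 / (Rsi + sum(d/lambda) + Rse).
# Layer thicknesses (m) and conductivities (W/mK) are illustrative, not
# taken from the report; surface resistances follow EN ISO 6946 values
# for a wall (0.13 inside, 0.04 outside).

def u_value(layers, rsi=0.13, rse=0.04):
    r_total = rsi + rse + sum(d / lam for d, lam in layers)
    return 1.0 / r_total  # W/m2K

wall = [(0.10, 1.7),    # concrete
        (0.15, 0.039),  # mineral wool insulation
        (0.10, 1.7)]    # concrete
print(f"U = {u_value(wall):.3f} W/m2K")
```

A 2-D calculation of the same element would give a somewhat higher U-value wherever concrete ribs bridge the insulation, which is exactly why such reports use finite differences.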
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
The improved form of calculation formula for the activities of the components in binary liquids and solid alloys has been derived based on the free volume theory considering excess entropy and Miedema's model for calculating the formation heat of binary alloys. A calculation method of excess thermodynamic functions for binary alloys, the formulas of integral molar excess properties and partial molar excess properties for solid ordered or disordered binary alloys have been developed. The calculated results are in good agreement with the experimental values.
Engineering calculations in radiative heat transfer
Gray, W A; Hopkins, D W
1974-01-01
Engineering Calculations in Radiative Heat Transfer is a six-chapter book that first explains the basic principles of thermal radiation and direct radiative transfer. Total exchange of radiation within an enclosure containing an absorbing or non-absorbing medium is then described. Subsequent chapters detail the radiative heat transfer applications and measurement of radiation and temperature.
Net analyte signal calculation for multivariate calibration
Ferre, J.; Faber, N.M.
2003-01-01
A unifying framework for calibration and prediction in multivariate calibration is shown based on the concept of the net analyte signal (NAS). From this perspective, the calibration step can be regarded as the calculation of a net sensitivity vector, whose length is the amount of net signal when the
Towards the exact calculation of medium nuclei
Energy Technology Data Exchange (ETDEWEB)
Gandolfi, Stefano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Joseph Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lonardoni, Diego [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Xiaobao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-12-19
The prediction of the structure of light and medium nuclei is crucial to test our knowledge of nuclear interactions. The calculation of nuclei from two- and three-nucleon interactions obtained from first principles is, however, one of the most challenging problems in many-body nuclear physics.
Complex Kohn calculations on an overset grid
Greenman, Loren; Lucchese, Robert; McCurdy, C. William
2016-05-01
An implementation of the overset grid method for complex Kohn scattering calculations is presented, along with static exchange calculations of electron-molecule scattering for small molecules including methane. The overset grid method uses multiple numerical grids, for instance Finite Element Method - Discrete Variable Representation (FEM-DVR) grids, expanded radially around multiple centers (corresponding to the individual atoms in each molecule as well as the center-of-mass of the molecule). The use of this flexible grid allows the complex angular dependence of the wavefunctions near the atomic centers to be well-described, but also allows scattering wavefunctions that oscillate rapidly at large distances to be accurately represented. Additionally, due to the use of multiple grids (and also grid shells), the method is easily parallelizable. The method has been implemented in ePolyscat, a multipurpose suite of programs for general molecular scattering calculations. It is interfaced with a number of quantum chemistry programs (including MolPro, Gaussian, GAMESS, and Columbus), from which it can read molecular orbitals and wavefunctions obtained using standard computational chemistry methods. The preliminary static exchange calculations serve as a test of the applicability.
Calculation of Nucleon Electromagnetic Form Factors
Renner, D B; Dolgov, D S; Eicker, N; Lippert, T; Negele, J W; Pochinsky, A V; Schilling, K; Lippert, Th.
2002-01-01
The formalism is developed to express nucleon matrix elements of the electromagnetic current in terms of form factors consistent with the translational, rotational, and parity symmetries of a cubic lattice. We calculate the number of these form factors and show how appropriate linear combinations approach the continuum limit.
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
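The central relation, that the free-energy profile is minus the integral of the average force along the selected coordinate, can be sketched numerically; the mean-force data below are synthetic (a harmonic well), standing in for forces sampled in constrained or unconstrained simulations:

```python
import numpy as np

# Average-force route to a free-energy profile:
# F(xi) - F(xi0) = -integral of <f(xi')> dxi' from xi0 to xi.
# Synthetic mean forces for the harmonic profile F(xi) = xi**2.

xi = np.linspace(-1.0, 1.0, 201)   # generalized coordinate
mean_force = -2.0 * xi             # <f> = -dF/dxi for F = xi**2

# Cumulative trapezoidal integration, with F(-1) set to zero.
segments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
free_energy = -np.concatenate(([0.0], np.cumsum(segments)))

# Recovers xi**2 - 1, i.e. the harmonic profile up to a constant.
print(free_energy[100])  # F at xi = 0, relative to xi = -1
```

In an actual simulation each `mean_force` entry would be the time average of the instantaneous force at that value of the coordinate, obtained exactly as the abstract describes.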
Calculating Traffic based on Road Sensor Data
Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Sikora, Monika
2014-01-01
Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for la
Computational chemistry: Making a bad calculation
Winter, Arthur
2015-06-01
Computations of the energetics and mechanism of the Morita-Baylis-Hillman reaction are "not even wrong" when compared with experiments. While computational abstinence may be the purest way to calculate challenging reaction mechanisms, taking prophylactic measures to avoid regrettable outcomes may be more realistic.
Ammonia synthesis from first principles calculations
DEFF Research Database (Denmark)
Honkala, Johanna Karoliina; Hellman, Anders; Remediakis, Ioannis
2005-01-01
The rate of ammonia synthesis over a nanoparticle ruthenium catalyst can be calculated directly on the basis of a quantum chemical treatment of the problem using density functional theory. We compared the results to measured rates over a ruthenium catalyst supported on magnesium aluminum spinel...
Calculation of tubular joints as compound shells
Golovanov, A. I.
A scheme for joining isoparametric finite shell elements with a bend in the middle surface is described. A solution is presented for the problem of the stress-strain state of a T-joint loaded by internal pressure. A refined scheme is proposed for calculating structures of this kind with allowance for the stiffness of the welded joint.
Gaseous Nitrogen Orifice Mass Flow Calculator
Ritrivi, Charles
2013-01-01
The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data was used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and puncture diameter that simulates the measured GN2 system pressure drop.
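A minimal sketch of the kind of relation such a calculator evaluates, assuming isentropic choked flow of nitrogen through a sharp-edged orifice; the orifice size, discharge coefficient, and upstream conditions below are illustrative, not the WSB system parameters:

```python
import math

# Choked (critical) orifice mass flow from upstream stagnation
# conditions p0, T0:
#   mdot = Cd * A * p0 * sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
# For nitrogen: gamma = 1.4, R = 296.8 J/(kg K).

def choked_mdot(cd, d_orifice_m, p0_pa, t0_k, gamma=1.4, r=296.8):
    """Mass flow (kg/s) through a choked orifice."""
    area = math.pi * (d_orifice_m / 2.0) ** 2
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * crit * math.sqrt(gamma / (r * t0_k))

# Example leak: 1 mm hole, Cd = 0.8, 276 kPa (about 40 psia), 293 K.
# Venting to ambient, the pressure ratio exceeds the critical ratio,
# so the choked assumption holds.
print(choked_mdot(0.8, 1e-3, 276e3, 293.0), "kg/s")
```

Dividing the source mass by this rate gives the depletion-time estimate the abstract mentions; a spreadsheet version simply tabulates the same formula over orifice sizes and discharge coefficients.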
Block Tridiagonal Matrices in Electronic Structure Calculations
DEFF Research Database (Denmark)
Petersen, Dan Erik
This thesis focuses on some of the numerical aspects of the treatment of the electronic structure problem, in particular that of determining the ground state electronic density for the non–equilibrium Green’s function formulation of two–probe systems and the calculation of transmission in the Lan...
Vibrational Spectra and Quantum Calculations of Ethylbenzene
Institute of Scientific and Technical Information of China (English)
Jian Wang; Xue-jun Qiu; Yan-mei Wang; Song Zhang; Bing Zhang
2012-01-01
Normal vibrations of ethylbenzene in the first excited state have been studied using resonant two-photon ionization spectroscopy. The band origin of the S1←S0 transition of ethylbenzene appeared at 37586 cm-1. A vibrational spectrum up to 2000 cm-1 above the band origin in the first excited state has been obtained. Several chain torsions and normal vibrations are observed in the spectrum. The energies of the first excited state are calculated by the time-dependent density functional theory and configuration interaction singles (CIS) methods with various basis sets. The optimized structures and vibrational frequencies of the S0 and S1 states are calculated using Hartree-Fock and CIS methods with the 6-311++G(2d,2p) basis set. The calculated geometric structures in the S0 and S1 states are gauche conformations in which the symmetric plane of the ethyl group is perpendicular to the ring plane. All the observed spectral bands have been successfully assigned with the help of our calculations.
Calculation of Thermochemical Constants of Propellants
Directory of Open Access Journals (Sweden)
K. P. Rao
1979-01-01
Full Text Available A method for calculation of thermochemical constants and products of explosion of propellants from the knowledge of molecular formulae and heats of formation of the ingredients is given. A computer programme in AUTOMATH-400 has been written for the method. The results of applying the method to a number of propellants are given.
Calculations of dietary exposure to acrylamide
Boon, P.E.; Mul, de A.; Voet, van der H.; Donkersgoed, van G.; Brette, M.; Klaveren, van J.D.
2005-01-01
In this paper we calculated the usual and acute exposure to acrylamide (AA) in the Dutch population and young children (1-6 years). For this AA levels of different food groups were used as collected by the Institute for Reference Materials and Measurements (IRMM) of the European Commission's Directo
Precipitates/Salts Model Sensitivity Calculation
Energy Technology Data Exchange (ETDEWEB)
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO{sub 2}) on the chemical evolution of water in the drift.
Heat pipe thermosyphon heat performance calculation
Novomestský, Marcel; Kapjor, Andrej; Papučík, Štefan; Siažik, Ján
2016-06-01
In this article the heat performance of a heat pipe thermosiphon is obtained from a numerical model. The heat performance is calculated from a few simplified equations that depend on the working fluid and the geometry. The effective thermal conductivity is also worth mentioning, because the differences between heat pipes and fully solid surfaces are strikingly large.
Conductance calculations with a wavelet basis set
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel
2003-01-01
The linear-response conductance is calculated from the Green's function which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...
40 CFR 1065.650 - Emission calculations.
2010-07-01
... into the system boundary, this work flow rate signal becomes negative; in this case, include these negative work rate values in the integration to calculate total work from that work path. Some work paths... interval. When power flows into the system boundary, the power/work flow rate signal becomes negative;...
7 CFR 760.307 - Payment calculation.
2010-01-01
...) The monthly feed cost calculated by using the normal carrying capacity of the eligible grazing land of...) By 56. (j) The monthly feed cost using the normal carrying capacity of the eligible grazing land... pastureland by (ii) The normal carrying capacity of the specific type of eligible grazing land or...
Tubular stabilizer bars – calculations and construction
Directory of Open Access Journals (Sweden)
Adam-Markus WITTEK
2011-01-01
Full Text Available The article outlines the calculation methods for tubular stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction, selection and manufacturing of tubular stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.
Stabilizer bars: Part 1. Calculations and construction
Directory of Open Access Journals (Sweden)
Adam-Markus WITTEK
2010-01-01
Full Text Available The article outlines the calculation methods for stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction and manufacturing of stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.
7 CFR 1416.704 - Payment calculation.
2010-01-01
... for: (1) Seedlings or cuttings, for trees, bushes or vine replanting; (2) Site preparation and debris...) Replacement, rehabilitation, and pruning; and (6) Labor used to transplant existing seedlings established..., the county committee shall calculate payment based on the number of qualifying trees, bushes or...
On the calculation of Mossbauer isomer shift
Filatov, Michael
2007-01-01
A quantum chemical computational scheme for the calculation of isomer shift in Mossbauer spectroscopy is suggested. Within the described scheme, the isomer shift is treated as a derivative of the total electronic energy with respect to the radius of a finite nucleus. The explicit use of a finite nuc
Normalisation of database expressions involving calculations
Denneheuvel, S. van; Renardel de Lavalette, G.R.
2008-01-01
In this paper we introduce a relational algebra extended with a calculate operator and derive, for expressions in the corresponding language PCSJL, a normalisation procedure. PCSJL plays a role in the implementation of the Rule Language RL; the normalisation is to be used for query optimisation.
Using Angle calculations to demonstrate vowel shifts
DEFF Research Database (Denmark)
Fabricius, Anne
2008-01-01
This paper gives an overview of the long-term trends of diachronic changes evident within the short vowel system of RP during the 20th century. More specifically, it focusses on changing juxtapositions of the TRAP, STRUT and LOT, FOOT vowel centroid positions. The paper uses geometric calculation...
Procedures for Calculating Residential Dehumidification Loads
Energy Technology Data Exchange (ETDEWEB)
Winkler, Jon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Booten, Chuck [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-06-01
Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it's becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% DP design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
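A sketch of the Manual J-style ventilation latent-load estimate that such sizing procedures build on; the 0.68 grains-based factor is the conventional rule-of-thumb constant, and the design conditions below are illustrative assumptions, not values from the report:

```python
# Ventilation latent load (Btu/h), Manual J / ASHRAE style:
#   Q_lat ~ 0.68 * cfm * (W_out - W_in)
# with humidity ratios in grains of moisture per pound of dry air.
# Airflow and humidity ratios below are illustrative only.

def latent_load_btuh(cfm, w_out_gr, w_in_gr):
    """Latent cooling load of a ventilation airstream, in Btu/h."""
    return 0.68 * cfm * (w_out_gr - w_in_gr)

# 60 cfm of ventilation air, outdoor air at 110 gr/lb, indoor target
# 64 gr/lb (roughly 75 F / 50% RH).
print(latent_load_btuh(60, 110, 64), "Btu/h")
```

The project's point is that which outdoor humidity ratio goes into `w_out_gr` (peak DP design condition vs. a part-load condition) changes this number substantially, and with it the supplemental dehumidifier size.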
Radionuclide release calculations for SAR-08
Energy Technology Data Exchange (ETDEWEB)
Thomson, Gavin; Miller, Alex; Smith, Graham; Jackson, Duncan (Enviros Consulting Ltd, Wolverhampton (United Kingdom))
2008-04-15
Following a review by the Swedish regulatory authorities of the post-closure safety assessment of the SFR 1 disposal facility for low- and intermediate-level waste (L/ILW), SAFE, SKB has prepared an updated assessment called SAR-08. This report describes the radionuclide release calculations that have been undertaken as part of SAR-08. The information, assumptions and data used in the calculations are reported and the results are presented. The calculations address issues raised in the regulatory review, but also take account of new information including revised inventory data. The scenarios considered include the main case of expected behaviour of the system, with variants; low probability releases; and so-called residual scenarios. Apart from these scenario uncertainties, data uncertainties have been examined using a probabilistic approach. Calculations have been made using the AMBER software. This allows all the component features of the assessment model to be included in one place. AMBER has previously been used to reproduce the results of the corresponding calculations in the SAFE assessment. It has also been used in demonstrating the IAEA's near surface disposal assessment methodology ISAM, has been subject to very substantial verification tests, and has been used in verifying other assessment codes. Results are presented as a function of time for the release of radionuclides from the near field, and then from the far field into the biosphere. Radiological impacts of the releases are reported elsewhere. Consideration is given to each radionuclide and to each component part of the repository. The releases from the entire repository are also presented. The peak release rates are, for most scenarios, due to organic C-14. Other radionuclides which contribute to peak release rates include inorganic C-14, Ni-59 and Ni-63. (author)
A case of scrub typhus complicated by acute calculous cholecystitis.
Lee, Su Jin; Cho, Young Hye; Lee, Sang Yeoup; Jeong, Dong Wook; Choi, Eun Jung; Kim, Yun Jin; Lee, Jeong Gyu; Lee, Yu Hyun
2012-07-01
We report a case of acute calculous cholecystitis associated with scrub typhus. A 69-year-old woman presented with a history of general myalgia, fever, and right abdominal pain. She was referred to our hospital for surgical treatment of clinically suspected acute cholecystitis. Physicians initially attributed the cholecystitis to a gall bladder (GB) stone, and appropriate antibiotic treatment for scrub typhus was started only later. The patient developed acute respiratory distress syndrome and multi-organ failure due to scrub typhus. Five days after admission, the patient was started on appropriate antibiotics, and she was discharged on the 13th day after starting doxycycline treatment without any sequelae. In areas endemic for tsutsugamushi disease, even when a patient with a GB stone presents with symptoms of acute cholecystitis, a careful history and physical examination are required to reveal the existence of eschars or skin eruptions.
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
Energy Technology Data Exchange (ETDEWEB)
Moriya, S; Sato, M [Komazawa University, Setagaya, Tokyo (Japan); Tachibana, H [National Cancer Center Hospital East, Kashiwa, Chiba (Japan)
2015-06-15
Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on Graphic Processing Units (GPU). Methods: The calculation was performed on AMD graphics hardware (dual FirePro D700) and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-thread computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150{sup 3} voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU, as well as the accuracy of the PDD, were compared. Results: The calculation times for the GPU and the CPU were 3.3 sec and 4.4 hours, respectively. The calculation speed on the GPU was 4800 times faster than on the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in an inhomogeneous region. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
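The TERMA-convolve-kernel step the abstract describes can be sketched on a toy homogeneous phantom; the grid size, beam shape, and kernel below are assumptions, and the FFT shortcut stands in for the point-by-point sum that the GPU evaluates with one thread per calculation point:

```python
import numpy as np

# Toy convolution dose calculation: dose = TERMA (x) kernel on a small
# homogeneous "water" grid. The kernel is a made-up isotropic falloff,
# not a real energy-deposition kernel.

n = 32
terma = np.zeros((n, n, n))
terma[:, 12:20, 12:20] = 1.0  # crude 8x8 "beam" of unit TERMA

# Point kernel centered at the grid midpoint, normalized to unity so
# total energy is conserved by the convolution.
ax = np.arange(n) - n // 2
r2 = ax[:, None, None]**2 + ax[None, :, None]**2 + ax[None, None, :]**2
kernel = 1.0 / (1.0 + r2)
kernel /= kernel.sum()

# Convolution via FFT (circular wrap-around, acceptable for a toy case).
dose = np.real(np.fft.ifftn(np.fft.fftn(terma)
                            * np.fft.fftn(np.fft.ifftshift(kernel))))
print(dose.shape, dose.sum())
```

On a real 150-cubed grid the same sum over station points is what makes the CPU loop take hours and parallelizes so well on the GPU.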
Calculating Contained Firing Facility (CFF) explosive
Energy Technology Data Exchange (ETDEWEB)
Lyle, J W.
1998-10-20
The University of California awarded LLNL contract No. B345381 for the design of the facility to Parsons Infrastructure Technology, Inc., of Pasadena, California. The Laboratory specified that the firing chamber be able to withstand repeated firings of 60 Kg of explosive located in the center of the chamber, 4 feet above the floor, and repeated firings of 35 Kg of explosive at the same height and located anywhere within 2 feet of the edge of a region on the floor called the anvil. Other requirements were that the chamber be able to accommodate the penetrations of the existing bullnose of the Bunker 801 flash X-ray machine and the roof of the underground camera room. These requirements and provisions for blast-resistant doors formed the essential basis for the design. The design efforts resulted in a steel-reinforced concrete structure measuring (on the inside) 55 x 51 feet by 30 feet high. The walls and ceiling are to be approximately 6 feet thick. Because the 60-Kg charge is not located in the geometric center of the volume and a 35-Kg charge could be located anywhere in a prescribed area, there will be different dynamic pressures and impulses on the various walls, floor, and ceiling, depending upon the weights and locations of the charges. The detailed calculations and specifications to achieve the design criteria were performed by Parsons and are included in Reference 1. Reference 2, Structures to Resist the Effects of Accidental Explosions (TM5-1300), is the primary design manual for structures of this type. It includes an analysis technique for the calculation of blast loadings within a cubicle or containment-type structure. Parsons used the TM5-1300 methods to calculate the loadings on the various firing chamber surfaces for the design criteria explosive weights and locations. At LLNL the same methods were then used to determine the firing zones for other weights and elevations that would give the same or lesser loadings. Although very laborious, a hand
2001-09-01
on your PDA. Some calculations that might be performed are Apgar score, Bayes Theorem, Oxygen Index, Pregnancy due date, and Vascular Resistances... Calculators – They allow numerous formulas and clinical scores to be rapidly calculated. • Patient Billing and Coding Databases – They provide Evaluation... clinical scores, the majority of which have accompanying references and clinical-use hints, all listed in alphabetical order; MedCalc is a favorite among
Wang, Guang-Ji
2007-07-01
A slide rule has been designed to calculate the ocular accommodation of an ametrope corrected with a spectacle lens. The slide rule makes the calculation easier to perform than traditional methods and is easily applicable in a clinical setting. The slide rule has 3 scales, indicating the power of the spectacle lens, the viewing distance, and the ocular accommodation. The most accurate accommodative unit was used to design the slide rule. The ocular accommodation is the product of the accommodative unit and the dioptric viewing distance. The calculated results are accurate from +21 diopters through all minus powers of the spectacle lens. In a clinical setting, patients can be advised how much accommodation they will exert before and after refractive surgery.
Leaf trajectory calculation for dynamic multileaf collimation to realize optimized fluence profiles
Energy Technology Data Exchange (ETDEWEB)
Dirkx, M.L.P.; Heijmen, B.J.M.; Santvoort, J.P.C. van [University Hospital Rotterdam/Daniel den Hoed Cancer Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)
1998-05-01
An algorithm for the calculation of the required leaf trajectories to generate optimized intensity modulated beam profiles by means of dynamic multileaf collimation is presented. This algorithm iteratively accounts for leaf transmission and collimator scatter and fully avoids tongue-and-groove underdosage effects. Tests on a large number of intensity modulated fields show that only a limited number of iterations, generally less than 10, are necessary to minimize the differences between optimized and realized fluence profiles. To assess the accuracy of the algorithm in combination with the dose calculation algorithm of the Cadplan 3D treatment planning system, predicted absolute dose distributions for optimized fluence profiles were compared with dose distributions measured on the MM50 Racetrack Microtron and resulting from the calculated leaf trajectories. Both theoretical and clinical cases yield an agreement within 2%, or within 2 mm in regions with a high dose gradient, showing that the accuracy is adequate for clinical application. (author)
Leaf trajectory calculation for dynamic multileaf collimation to realize optimized fluence profiles
Dirkx, M. L. P.; Heijmen, B. J. M.; van Santvoort, J. P. C.
1998-05-01
An algorithm for the calculation of the required leaf trajectories to generate optimized intensity modulated beam profiles by means of dynamic multileaf collimation is presented. This algorithm iteratively accounts for leaf transmission and collimator scatter and fully avoids tongue-and-groove underdosage effects. Tests on a large number of intensity modulated fields show that only a limited number of iterations, generally less than 10, are necessary to minimize the differences between optimized and realized fluence profiles. To assess the accuracy of the algorithm in combination with the dose calculation algorithm of the Cadplan 3D treatment planning system, predicted absolute dose distributions for optimized fluence profiles were compared with dose distributions measured on the MM50 Racetrack Microtron and resulting from the calculated leaf trajectories. Both theoretical and clinical cases yield an agreement within 2%, or within 2 mm in regions with a high dose gradient, showing that the accuracy is adequate for clinical application.
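The iterative correction loop both records describe can be sketched in one dimension; the uniform transmission value and the delivery model below are toy assumptions for illustration, not the authors' leaf-sequencing algorithm:

```python
import numpy as np

# Toy 1-D version of the iteration: the delivered fluence includes a
# leaf-transmission component, so the profile handed to the leaf
# sequencer is corrected each pass until delivered ~ optimized.

transmission = 0.02  # assumed fractional leakage through closed leaves

def delivered(requested):
    # Simplistic delivery model: requested opening plus leakage
    # wherever the leaves are (partly) closed. A real algorithm would
    # compute leaf trajectories, collimator scatter, and tongue-and-
    # groove effects here.
    return requested + transmission * (1.0 - np.clip(requested, 0.0, 1.0))

desired = np.array([0.2, 0.5, 1.0, 0.5, 0.2])  # optimized fluence
request = desired.copy()
for _ in range(10):                 # abstract: < 10 iterations suffice
    request += desired - delivered(request)  # shrink the residual
request = np.clip(request, 0.0, None)
print(np.max(np.abs(delivered(request) - desired)))
```

Because each pass scales the residual by roughly the transmission fraction, the error shrinks geometrically, which is consistent with the small iteration counts reported.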
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2005-04-07
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for
Independent calculation of monitor units for VMAT and SPORT
Energy Technology Data Exchange (ETDEWEB)
Chen, Xin; Bush, Karl; Ding, Aiping; Xing, Lei, E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)
2015-02-15
Purpose: Dose and monitor units (MUs) represent two important facets of a radiation therapy treatment. In current practice, verification of a treatment plan is commonly done in the dose domain, in which a phantom measurement or forward dose calculation is performed to examine the dosimetric accuracy and the MU settings of a given treatment plan. While it is desirable to verify the MU settings directly, a computational framework for obtaining the MU values from a known dose distribution has yet to be developed. This work presents a strategy to independently calculate the MUs from a given dose distribution of volumetric modulated arc therapy (VMAT) and station parameter optimized radiation therapy (SPORT). Methods: The dose at a point can be expressed as a sum of contributions from all the station points (or control points). This relationship forms the basis of the proposed MU verification technique. To proceed, the authors first obtain the matrix elements that characterize the dosimetric contribution of the involved station points by computing the doses at a series of voxels, typically on the prescription surface of the VMAT/SPORT treatment plan, with a unit MU setting for all the station points. In-house Monte Carlo (MC) software is used for the dose matrix calculation. The MUs of the station points are then derived by minimizing the least-squares difference between the doses computed by the treatment planning system (TPS) and those of the MC calculation for the selected set of voxels on the prescription surface. The technique is applied to 16 clinical cases with a variety of energies, disease sites, and TPS dose calculation algorithms. Results: For all plans except the lung cases with large tissue density inhomogeneity, the independently computed MUs agree with those of the TPS to within 2.7% for all the station points. In the dose domain, no significant difference between the MC and Eclipse Anisotropic Analytical Algorithm (AAA) dose distributions is found in terms of isodose contours
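The least-squares MU recovery described above can be sketched as follows. This is a minimal illustration with a random, hypothetical dose-influence matrix standing in for the paper's Monte Carlo dose matrix; all sizes and values are made up:

```python
import numpy as np

# Hypothetical dose-influence matrix: A[i, j] is the dose at voxel i
# delivered by station point j at a setting of 1 MU (random numbers here,
# standing in for the Monte Carlo dose matrix of the paper).
rng = np.random.default_rng(0)
n_voxels, n_stations = 200, 20
A = rng.uniform(0.1, 1.0, size=(n_voxels, n_stations))

mu_true = rng.uniform(5.0, 50.0, size=n_stations)  # "TPS" MU settings
dose_tps = A @ mu_true                             # doses on the prescription surface

# Recover the MUs by minimizing ||dose_tps - A @ mu||^2.
mu_fit, *_ = np.linalg.lstsq(A, dose_tps, rcond=None)

max_rel_err = np.max(np.abs(mu_fit - mu_true) / mu_true)
print(f"max relative MU error: {max_rel_err:.2e}")
```

With a consistent, well-conditioned system the MUs are recovered essentially exactly; the interest in practice lies in how TPS/MC dose differences propagate into the fitted MUs.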
Methods of calculating radiation absorbed dose.
Wegst, A V
1987-01-01
The new tumoricidal radioactive agents being developed will require a careful estimate of radiation absorbed tumor and critical organ dose for each patient. Clinical methods will need to be developed using standard imaging or counting instruments to determine cumulated organ activities with tracer amounts before the therapeutic administration of the material. Standard MIRD dosimetry methods can then be applied.
First-principles calculations of novel materials
Sun, Jifeng
Computational materials simulation is becoming more and more important as a branch of materials science. Depending on the scale of the system, there are many simulation methods, i.e. first-principles calculation (or ab-initio), molecular dynamics, mesoscale methods and continuum methods. Among them, first-principles calculation, which involves density functional theory (DFT) and is based on quantum mechanics, has become a reliable tool in condensed matter physics. DFT is a single-electron approximation for solving many-body problems. Intrinsically speaking, both DFT and ab-initio belong to first-principles calculation, since the theoretical background of ab-initio is the Hartree-Fock (HF) approximation and both aim at solving the Schrodinger equation of the many-body system using the self-consistent field (SCF) method and calculating ground state properties. The difference is that DFT introduces parameters, either from experiments or from other molecular dynamics (MD) calculations, to approximate the expressions of the exchange-correlation terms, whereas in HF the exchange term is calculated exactly but the correlation term is neglected. In this dissertation, DFT-based first-principles calculations were performed for all of the novel materials introduced. Specifically, DFT theory, together with the rationale behind related properties (e.g. electronic, optical, defect, thermoelectric, magnetic), is introduced in Chapter 2. From Chapter 3 to Chapter 5, several representative materials are studied. In particular, a new semiconducting oxytelluride, Ba2TeO, is studied in Chapter 3. Our calculations indicate a direct semiconducting character with a band gap value of 2.43 eV, which agrees well with the optical experiment (~2.93 eV). Moreover, the optical and defect properties of Ba2TeO are also systematically investigated with a view to understanding its potential as an optoelectronic or transparent conducting material. We find
Institute of Scientific and Technical Information of China (English)
黄爱兵; 罗骁; 宋长辉; 齐岩松; 王少杰; 张正政; 张继英; 杨永强; 余家阔
2016-01-01
with the three-dimensional models. An anthropometric analysis of variance was used with the models to detect differences between genders both before and after resecting the patellae. Results: There was good intra- and inter-observer reliability regarding the dimensional measurements in this study (intra-class correlation coefficient, ICC > 0.75). Significant gender differences in patellar width, height, thickness and length of the articulating facet were found (P < 0.05). The average patellar width/thickness ratio was 1.95±0.11, regardless of gender, and there was a good correlation between patellar width and thickness (male r=0.67, P=0.00; female r=0.63, P=0.00). After virtual resection, the mean thickness of the resected patellae was 9.59±1.53 mm. The resected thickness in the male and female groups was 10.21±1.53 mm and 8.98±1.27 mm, respectively. It was found that 89% of female patellae were resected between 8 and 11 mm and 82% of male patellae were resected between 9 and 12 mm. Conclusion: CT-based 3D computer-assisted technology is an accurate and efficient tool for evaluating anthropometric patellar dimensions. The anthropometric dimensions of this study could provide basic data for guiding surgical management of the patella in TKA.
High-Power Wind Turbine: Performance Calculation
Directory of Open Access Journals (Sweden)
Goldaev Sergey V.
2015-01-01
Full Text Available The paper is devoted to high-power wind turbine performance calculation, using Pearson's chi-squared test to verify the statistical hypothesis that the population of air velocities follows a Weibull-Gnedenko distribution. The distribution parameters are found by numerical solution of a transcendental equation, with the gamma function defined by an interpolation formula. Values of the incomplete gamma function appearing in the operating characteristic are obtained by numerical integration using Weddle's rule. Comparison of the results calculated with the proposed methodology against those obtained by other authors reveals significant differences in the values of the sample variance and the empirical Pearson statistic. The influence of the initial and maximum wind speeds on the performance of the high-power wind turbine is also analysed.
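The core statistical procedure, fitting a Weibull distribution to wind-speed data and checking it with Pearson's chi-squared test, can be sketched as below. This is a minimal illustration on synthetic data with made-up parameters, using SciPy's maximum-likelihood fit rather than the authors' transcendental-equation and Weddle's-rule approach:

```python
import numpy as np
from scipy import stats

# Synthetic wind speeds from a Weibull distribution (illustrative parameters).
rng = np.random.default_rng(1)
k_true, c_true = 2.0, 8.0  # shape and scale (m/s)
v = stats.weibull_min.rvs(k_true, scale=c_true, size=5000, random_state=rng)

# Fit Weibull parameters by maximum likelihood (location fixed at 0).
k_fit, loc, c_fit = stats.weibull_min.fit(v, floc=0)

# Pearson chi-squared goodness-of-fit on equiprobable bins.
n_bins = 10
edges = stats.weibull_min.ppf(np.linspace(0, 1, n_bins + 1), k_fit, scale=c_fit)
observed, _ = np.histogram(v, bins=edges)
expected = np.full(n_bins, len(v) / n_bins)
chi2 = np.sum((observed - expected) ** 2 / expected)

# Critical value: degrees of freedom reduced by the 2 fitted parameters.
crit = stats.chi2.ppf(0.95, df=n_bins - 1 - 2)
print(f"k = {k_fit:.2f}, c = {c_fit:.2f}, chi2 = {chi2:.2f}, crit = {crit:.2f}")
```

If chi2 stays below the critical value, the Weibull hypothesis is not rejected at the 5% level.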
Isogeometric analysis in electronic structure calculations
Cimrman, Robert; Kolman, Radek; Tůma, Miroslav; Vackář, Jiří
2016-01-01
In electronic structure calculations, various material properties can be obtained by computing the total energy of a system as well as derivatives of the total energy with respect to atomic positions. The derivatives, also known as Hellmann-Feynman forces, require, for practical computational reasons, that the discretized charge density and wave functions have continuous second derivatives in the whole solution domain. We describe an application of isogeometric analysis (IGA), a spline modification of the finite element method (FEM), to achieve the required continuity. The novelty of our approach lies in employing the technique of Bézier extraction to add IGA capabilities to our FEM-based code for ab-initio calculations of electronic states of non-periodic systems within the density-functional framework, built upon the open source finite element package SfePy. We compare FEM and IGA in benchmark problems and several numerical results are presented.
Equation of State from Lattice QCD Calculations
Energy Technology Data Exchange (ETDEWEB)
Gupta, Rajan [Los Alamos National Laboratory
2011-01-01
We provide a status report on the calculation of the Equation of State (EoS) of QCD at finite temperature using lattice QCD. Most of the discussion will focus on comparison of recent results obtained by the HotQCD and Wuppertal-Budapest collaborations. We will show that very significant progress has been made towards obtaining high precision results over the temperature range of T = 150-700 MeV. The various sources of systematic uncertainties will be discussed and the differences between the two calculations highlighted. Our final conclusion is that these lattice results of EoS are precise enough to be used in the phenomenological analysis of heavy ion experiments at RHIC and LHC.
Labview virtual instruments for calcium buffer calculations.
Reitz, Frederick B; Pollack, Gerald H
2003-01-01
Labview VIs based upon the calculator programs of Fabiato and Fabiato (J. Physiol. Paris 75 (1979) 463) are presented. The VIs comprise the necessary computations for the accurate preparation of multiple-metal buffers, for the back-calculation of buffer composition given known free metal concentrations and stability constants used, for the determination of free concentrations from a given buffer composition, and for the determination of apparent stability constants from absolute constants. As implemented, the VIs can concurrently account for up to three divalent metals, two monovalent metals and four ligands thereof, and the modular design of the VIs facilitates further extension of their capacity. As Labview VIs are inherently graphical, these VIs may serve as useful templates for those wishing to adapt this software to other platforms.
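For the simplest case of one metal and one 1:1 ligand, the free-concentration computation underlying such buffer calculators reduces to a quadratic in the free metal concentration. A minimal sketch; the function name and the example concentrations are illustrative, not taken from the Fabiato programs:

```python
import math

def free_metal(total_metal, total_ligand, kd):
    """Free metal concentration [Mf] for a single 1:1 metal-ligand buffer.

    From mass balance and [ML] = [Mf][Lf]/Kd one gets
    Mf^2 + (Kd + Lt - Mt) * Mf - Kd * Mt = 0; we take the positive root.
    All concentrations and Kd in molar.
    """
    b = kd + total_ligand - total_metal
    c = -kd * total_metal
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

# Illustrative numbers: 10 mM ligand, 5 mM total metal, Kd = 100 nM.
# At half-saturation the free concentration is close to Kd.
mf = free_metal(5e-3, 10e-3, 1e-7)
print(f"free metal ~ {mf:.3e} M")
```

Multi-metal, multi-ligand buffers as handled by the VIs require solving the coupled mass-balance equations iteratively rather than in closed form.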
Tearing mode stability calculations with pressure flattening
Ham, C J; Cowley, S C; Hastie, R J; Hender, T C; Liu, Y Q
2013-01-01
Calculations of tearing mode stability in tokamaks split conveniently into an external region, where marginally stable ideal MHD is applicable, and a resonant layer around the rational surface where sophisticated kinetic physics is needed. These two regions are coupled by the stability parameter Δ′. Pressure and current perturbations localized around the rational surface alter the stability of tearing modes. Equations governing the changes in the external solution and Δ′ are derived for arbitrary perturbations in axisymmetric toroidal geometry. The relationship between Δ′ with and without pressure flattening is obtained analytically for four pressure flattening functions. Resistive MHD codes do not contain the appropriate layer physics and therefore cannot predict stability directly. They can, however, be used to calculate Δ′. Existing methods (Ham et al. 2012 Plasma Phys. Control. Fusion 54 025009) for extracting Δ′ from resistive codes are unsatisfactory when there is a finite pressure gradient at the rational surface ...
Normal mode calculations of trigonal selenium
DEFF Research Database (Denmark)
Hansen, Flemming Yssing; McMurry, H. L.
1980-01-01
The phonon dispersion relations for trigonal selenium have been calculated on the basis of a short-range potential field model. Electrostatic long-range forces have not been included. The force field is defined in terms of symmetrized coordinates which reflect partly the symmetry of the space group. In this way we have eliminated the ambiguity in the choice of valence coordinates, which has been a problem in previous models that used valence-type interactions. The intrachain force field is projected from a valence-type field including a bond stretch, angle bend, and dihedral torsion. With these coordinates we obtain the strong dispersion of the upper optic modes as observed by neutron scattering, where other models have failed and give flat bands. Calculated sound velocities and elastic moduli are also given.
A Methodology for Calculating Radiation Signatures
Energy Technology Data Exchange (ETDEWEB)
Klasky, Marc Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wilcox, Trevor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bathke, Charles G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); James, Michael R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-01
A rigorous formalism is presented for calculating radiation signatures from both Special Nuclear Material (SNM) and radiological sources. The use of MCNP6 in conjunction with CINDER/ORIGEN is described to allow for the determination of both neutron and photon leakages from objects of interest. In addition, a description of the use of MCNP6 to properly model the background neutron and photon sources is presented. The physics issues encountered in the modeling are examined so as to guide the user in discerning the relevant physics to incorporate into general radiation signature calculations. Furthermore, examples are provided to assist in delineating the pertinent physics that must be accounted for. Finally, examples of detector modeling utilizing MCNP are provided, along with a discussion of the generation of Receiver Operating Curves, which are the suggested means by which to determine the detectability of radiation signatures emanating from objects.
Numerical calculations of magnetic properties of nanostructures
Kapitan, Vitalii; Nefedev, Konstantin
2015-01-01
Magnetic force microscopy and scanning tunneling microscopy data can be used to test numerical models of magnetism. The elaborated numerical model of face-centered-lattice Ising spins is based on the pixel distribution in images of magnetic nanostructures obtained with a scanning microscope. Monte Carlo simulation of the magnetic structure model allowed us to define the temperature dependence of magnetization and to calculate magnetic hysteresis curves and the distribution of magnetization on the surface of submonolayer and monolayer cobalt nanofilms, depending on the experimental conditions. Our package of parallel supercomputer software is designed for numerical simulation of magnetic-force experiments and allows one to obtain the distribution of magnetization in one-dimensional arrays of nanodots. An interpretation of magnetic-force microscopy images of magnetic nanodot states has been determined. The results of supercomputer simulations and numerical calculations are in...
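As a toy stand-in for the Monte Carlo magnetization calculations described above, the following sketch samples a 2D square-lattice Ising model with the Metropolis algorithm. The actual work uses a face-centered lattice built from microscope images; the lattice size, temperature and step count here are illustrative:

```python
import numpy as np

def ising_magnetization(L=16, T=1.5, steps=200_000, seed=0):
    """Metropolis sampling of a 2D square-lattice Ising model (J = kB = 1),
    started from the ordered state. Illustrative stand-in only, not the
    FCC image-based model of the paper."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)  # all spins up
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb  # energy change if spin (i, j) is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return abs(s.mean())

m = ising_magnetization()
print(f"|m| at T=1.5 (below Tc ~ 2.27): {m:.3f}")
```

Sweeping T across the critical temperature reproduces the qualitative magnetization-versus-temperature behaviour the paper extracts from its model.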
A priori calculations for the rotational stabilisation
Directory of Open Access Journals (Sweden)
Iwata Yoritaka
2013-12-01
Full Text Available The synthesis of chemical elements is mostly realised by low-energy heavy-ion reactions. The synthesis of exotic and heavy nuclei, as well as of superheavy nuclei, is essential not only for finding the origin and the limit of the chemical elements but also for clarifying the historical/chemical evolution of our universe. Although the lifetimes of exotic nuclei are not long, their indispensable role in chemical evolution has been pointed out. Here we are interested in examining the rotational stabilisation. In this paper an a priori calculation (prior to microscopic density functional calculations) is carried out for the rotational stabilisation effect, in which the balance between the nuclear force, the Coulomb force and the centrifugal force is taken into account.
Energy Technology Data Exchange (ETDEWEB)
Gamiz, E.; /CAFPE, Granada /Granada U., Theor. Phys. Astrophys. /Fermilab; DeTar, C.; /Utah U.; El-Khadra, A.X.; /Illinois U., Urbana; Kronfeld, A.S.; /Fermilab; Mackenzie, P.B.; /Fermilab; Simone, J.; /Fermilab
2011-11-01
We report on the status of the Fermilab-MILC calculation of the form factor f{sub +}{sup K}{pi}(q{sup 2} = 0), needed to extract the CKM matrix element |V{sub us}| from experimental data on K semileptonic decays. The HISQ formulation is used in the simulations for the valence quarks, while the sea quarks are simulated with the asqtad action (MILC N{sub f} = 2 + 1 configurations). We discuss the general methodology of the calculation, including the use of twisted boundary conditions to obtain values of the momentum transfer close to zero and the different techniques applied for the correlator fits. We present initial results for lattice spacings a {approx} 0.12 fm and a {approx} 0.09 fm, and several choices of the light quark masses.
Pressure Correction in Density Functional Theory Calculations
Lee, S H
2008-01-01
First-principles calculations based on density functional theory have been widely used in studies of the structural, thermoelastic, rheological, and electronic properties of earth-forming materials. The exchange-correlation term, however, is implemented based on various approximations, and this is believed to be the main reason for discrepancies between experiments and theoretical predictions. In this work, using periclase MgO as a prototype system, we examine the discrepancies in pressure and Kohn-Sham energy that are due to the choice of the exchange-correlation functional. For instance, we choose the local density approximation and the generalized gradient approximation. We perform extensive first-principles calculations at various temperatures and volumes and find that the exchange-correlation-based discrepancies in Kohn-Sham energy and pressure should be independent of temperature. This implies that the physical quantities, such as the equation of state, heat capacity, and the Grüneisen parameter, estimat...
The Gravity- Powered Calculator, a Galilean Exhibit
Cerreta, Pietro
2014-04-01
The Gravity-Powered Calculator is an exhibit at the Exploratorium in San Francisco. It is presented by its American creators as an amazing device that extracts the square roots of numbers using only the force of gravity. But if one analyzes its conceptual construction, one cannot help but recall Galileo's research on falling bodies, the inclined plane and projectile motion; exactly what the American creators did not put into prominence with their exhibit. Considering the equipment only for what it does is, in my opinion, very reductive compared to the historical roots of the Galilean mathematical physics contained therein. Moreover, following the accurate deductions contained in S. Drake's famous study of the Galilean drawings, in particular of Folio 167v, the parabolic paths of the ball leaping from its launch pad after descending a slope really actualize Galileo's experiments. The exhibit therefore may be better known as a 'Galilean calculator'.
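The physics such a device can exploit is the time-squared law of free fall: from d = g t^2 / 2, the descent time scales as the square root of the distance, so a scale graduated in distances can be read off as square roots. A minimal numeric sketch of that scaling (not a model of the exhibit's actual mechanism):

```python
import math

G = 9.81  # m/s^2, standard gravity

def fall_time(distance_m):
    """Time for a body to fall a given distance from rest: d = g t^2 / 2."""
    return math.sqrt(2.0 * distance_m / G)

# Fall times scale as the square root of the distance:
t1, t9 = fall_time(1.0), fall_time(9.0)
print(f"t(9)/t(1) = {t9 / t1:.1f}")  # -> 3.0, i.e. sqrt(9)
```

This is exactly Galileo's odd-number law of fall viewed the other way around: distance quadratic in time means time reads out the square root of distance.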
Scaling Calculations for a Relativistic Gyrotron.
2014-09-26
a relativistic gyrotron. The results of calculations are given in Section 3. The non-linear, slow-time-scale equations of motion used for these ... corresponds to a cylindrical resonator and a thin annular electron beam, with the beam radius chosen to coincide with a maximum of the resonator ... entering the cavity. A tractable set of non-linear equations based on a slow-time-scale formulation developed previously was used. For this
A Paleolatitude Calculator for Paleoclimate Studies.
van Hinsbergen, Douwe J J; de Groot, Lennart V; van Schaik, Sebastiaan J; Spakman, Wim; Bijl, Peter K; Sluijs, Appy; Langereis, Cor G; Brinkhuis, Henk
2015-01-01
Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high latitudes may be in part
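The calculator itself is built on plate circuits and apparent polar wander paths, but the basic paleomagnetic link between a measured inclination and latitude is the geocentric axial dipole relation tan I = 2 tan λ. A minimal sketch of that relation (the function name is illustrative; this is not the www.paleolatitude.org implementation):

```python
import math

def paleolatitude_from_inclination(inclination_deg):
    """Paleolatitude (degrees) from paleomagnetic inclination (degrees)
    via the geocentric axial dipole relation tan(I) = 2 tan(lambda)."""
    tan_i = math.tan(math.radians(inclination_deg))
    return math.degrees(math.atan(tan_i / 2.0))

# A measured inclination of 45 degrees corresponds to ~26.6 degrees latitude.
lat = paleolatitude_from_inclination(45.0)
print(f"paleolatitude ~ {lat:.1f} deg")
```

The plate-circuit approach of the paper effectively propagates this dipole assumption through relative plate motions rather than applying it site by site.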
Prediction and calculation for new energy development
Institute of Scientific and Technical Information of China (English)
Fu Yuhua; Fu Anjie
2008-01-01
Some important questions for new energy development are discussed, such as the prediction and calculation of sea surface temperature, ocean waves, offshore platform price, typhoon track, fire status, vibration due to earthquake, energy price, stock market trends and so on, with fractal methods (including the four methods of constant dimension fractal, variable dimension fractal, complex number dimension fractal and fractal series) and the improved rescaled range analysis (R/S analysis).
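The improved R/S analysis itself is not specified in the abstract, but classical rescaled-range analysis, which estimates the Hurst exponent from how the range of cumulative deviations scales with window size, can be sketched as follows (a minimal version on synthetic white noise, for which H is near 0.5; the implementation details are illustrative):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of series x by rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations
            r = dev.max() - dev.min()              # range R
            s = chunk.std()                        # standard deviation S
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(2)
h = hurst_rs(rng.standard_normal(4096))
print(f"Hurst exponent of white noise: {h:.2f}")
```

Values of H above 0.5 indicate persistence (trends tend to continue), which is what makes R/S analysis attractive for the forecasting problems listed above.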
Calculation and application of liquidus projection
Institute of Scientific and Technical Information of China (English)
CHEN Shuanglin; CAO Weisheng; YANG Ying; ZHANG Fan; WU Kaisheng; DU Yong; Y.Austin Chang
2006-01-01
Liquidus projection usually refers to a two-dimensional projection of ternary liquidus univariant lines at constant pressure. The algorithms used in Pandat for the calculation of liquidus projection with isothermal lines and invariant reaction equations in a ternary system are presented. These algorithms have been extended to multicomponent liquidus projections and have also been implemented in Pandat. Some examples on ternary and quaternary liquidus projections are presented.
Flow calculation in a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-02-01
In recent years remarkable progress has been made in the field of computational fluid dynamics. When reading the relevant literature, the impression may sometimes arise that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.
Calculation of reactor antineutrino spectra in TEXONO
Chen Dong Liang; Mao Ze Pu; Wong, T H
2002-01-01
In low-energy reactor antineutrino physics experiments, whether for studies of antineutrino oscillation and antineutrino reactions or for the measurement of the anomalous magnetic moment of the antineutrino, the flux and spectra of reactor antineutrinos must be described accurately. The method of calculating reactor antineutrino spectra is discussed in detail. Furthermore, based on the actual circumstances of the NP2 reactors and the arrangement of detectors, the flux and spectra of reactor antineutrinos in TEXONO are worked out.
Perturbative calculation of quasi-normal modes
Siopsis, G
2005-01-01
I discuss a systematic method of analytically calculating the asymptotic form of quasi-normal frequencies. In the case of a four-dimensional Schwarzschild black hole, I expand around the zeroth-order approximation to the wave equation proposed by Motl and Neitzke. In the case of a five-dimensional AdS black hole, I discuss a perturbative solution of the Heun equation. The analytical results are in agreement with the results from numerical analysis.
Theoretical Calculations of Atomic Data for Spectroscopy
Bautista, Manuel A.
2000-01-01
Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics better understand the qualities and limitations of various data sources and assess the reliability of spectral models based on those data.
Calculation of Loudspeaker Cabinet Diffraction and Correction
Institute of Scientific and Technical Information of China (English)
LE Yi; SHEN Yong; XIA Jie
2011-01-01
A method of calculating the cabinet edge diffraction for a loudspeaker driver mounted in an enclosure is proposed, based on the extended Biot-Tolstoy-Medwin model. Up to the third order, cabinet diffractions are discussed in detail and the diffractive effects on the radiated sound field of the loudspeaker system are quantitatively described, with a correction function built to compensate for the diffractive interference. The method is applied to a practical loudspeaker enclosure that has rectangular facets. The diffractive effects of the cabinet on the forward sound radiation are investigated, and predictions of the calculations show quite good agreement with experimental measurements. Most loudspeaker systems employ box-like cabinets. The response of a loudspeaker mounted in a box is much rougher than that of the same driver mounted on a large baffle. Although resonances in the box are partly responsible for the lack of smoothness, a major contribution is the diffraction at the cabinet edges, which aggravates the final response performance. Consequently, an analysis of the cabinet diffraction problem is required.
Configuration mixing calculations in soluble models
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
A Paleolatitude Calculator for Paleoclimate Studies.
Directory of Open Access Journals (Sweden)
Douwe J J van Hinsbergen
Full Text Available Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high
Index calculation by means of harmonic expansion
Imamura, Yosuke
2015-01-01
We review the derivation of superconformal indices by means of supersymmetric localization and spherical harmonic expansion for 3d N=2, 4d N=1, and 6d N=(1,0) supersymmetric gauge theories. We demonstrate the calculation of indices for vector multiplets in each dimension by analysing energy eigenmodes on S^p x R. For the 6d index we consider the perturbative contribution only. We focus on the technical details of the harmonic expansion rather than physical applications.
Bias in Dynamic Monte Carlo Alpha Calculations
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
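The mechanism, bias from taking the logarithm of a stochastic quantity, is easy to demonstrate outside MCATK: by Jensen's inequality E[log X] < log E[X], with a leading bias term proportional to the variance of X that shrinks like 1/N for an N-sample mean. A minimal Monte Carlo sketch with illustrative numbers (not MCATK data):

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean = 2.0
biases = []
for n in (10, 100, 1000):
    # 10,000 independent n-sample means of an exponential population; the
    # average of log(sample mean) falls below log(true mean) by ~1/(2n) here.
    sample_means = rng.exponential(true_mean, size=(10_000, n)).mean(axis=1)
    biases.append(np.log(sample_means).mean() - np.log(true_mean))
    print(f"N = {n:4d}: bias of log(sample mean) = {biases[-1]:+.4f}")
```

The bias is always negative and shrinks roughly as 1/N, matching the 1/N dependence reported for the α estimate.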
Preconditioned iterations to calculate extreme eigenvalues
Energy Technology Data Exchange (ETDEWEB)
Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
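A plain power iteration, whose convergence rate is set by the ratio of the subdominant to the dominant eigenvalue (the separation mentioned above), can be sketched as follows. This is a minimal illustration on a small diagonal test matrix, without the preconditioning the abstract discusses:

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Power iteration for the dominant eigenpair of A.

    The error shrinks like |lambda_2 / lambda_1|**k, so well separated
    extreme eigenvalues converge quickly; preconditioning aims to
    improve exactly this separation.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v  # Rayleigh quotient and eigenvector

# Test matrix with a well separated extreme eigenvalue (10 vs 2).
A = np.diag([10.0, 2.0, 1.0, 0.5])
lam, v = power_iteration(A)
print(f"dominant eigenvalue ~ {lam:.6f}")
```

With a 10:2 separation the iteration converges in a handful of steps; a near-degenerate pair (say 10 and 9.9) would need hundreds, which is the regime where Davidson-type preconditioning pays off.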
CALCULATION OF KAON ELECTROMAGNETIC FORM FACTOR
Institute of Scientific and Technical Information of China (English)
WANG ZHI-GANG; WAN SHAO-LONG; WANG KE-LIN
2001-01-01
The kaon electromagnetic form factor is calculated in the framework of the coupled Schwinger-Dyson and Bethe-Salpeter formulation in the simplified impulse approximation (dressed vertex) with a modified flat-bottom potential, i.e. a flat-bottom potential adjusted to take into account the infrared and ultraviolet asymptotic behaviours of the effective quark-gluon coupling. All the numerical results give a good fit to experimental values.
TINTE. Nuclear calculation theory description report
Energy Technology Data Exchange (ETDEWEB)
Gerwin, H.; Scherer, W.; Lauer, A. [Forschungszentrum Juelich GmbH (DE). Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6); Clifford, I. [Pebble Bed Modular Reactor (Pty) Ltd. (South Africa)
2010-01-15
The Time Dependent Neutronics and Temperatures (TINTE) code system deals with the nuclear and the thermal transient behaviour of the primary circuit of the High-temperature Gas-cooled Reactor (HTGR), taking into consideration the mutual feedback effects in two-dimensional axisymmetric geometry. This document contains a complete description of the theoretical basis of the TINTE nuclear calculation, including the equations solved, the solution methods and the nuclear data used in the solution. (orig.)
Warhead Performance Calculations for Threat Hazard Assessment
1996-08-01
correlation can be drawn between an explosive’s heat of combustion, heat of detonation, and its EWF. The method of Baroody and Peters [41] was used to calculate...from air-blast tests can be rationalized to a combination of an explosive’s heat of combustion and heat of detonation ratioed to the heat of...Center, China Lake, California, NWC TM 3754, February 1979. 41. Baroody, E. and Peters, S., Heats of Explosion, Heat of Detonation, and Reaction
Toward a nitrogen footprint calculator for Tanzania
Hutton, Mary Olivia; Leach, Allison M.; Leip, Adrian; Galloway, James N.; Bekunda, Mateete; Sullivan, Clare; Lesschen, Jan Peter
2017-03-01
We present the first nitrogen footprint model for a developing country: Tanzania. Nitrogen (N) is a crucial element for agriculture and human nutrition, but in excess it can cause serious environmental damage. The Sub-Saharan African nation of Tanzania faces a two-sided nitrogen problem: while there is not enough soil nitrogen to produce adequate food, excess nitrogen that escapes into the environment causes a cascade of ecological and human health problems. To identify, quantify, and contribute to solving these problems, this paper presents a nitrogen footprint tool for Tanzania. The nitrogen footprint tool is a concept originally designed for the United States of America (USA) and other developed countries. It uses personal resource consumption data to calculate a per-capita nitrogen footprint. The Tanzania N footprint tool is a version adapted to reflect the low-input, integrated agricultural system of Tanzania. This is reflected by calculating two sets of virtual N factors to describe N losses during food production: one for fertilized farms and one for unfertilized farms. Soil mining factors are also calculated for the first time to address the amount of N removed from the soil to produce food. The average per-capita nitrogen footprint of Tanzania is 10 kg N yr^-1. Food consumption and production account for 88% of this footprint, while energy use accounts for only 12%. Although 91% of farms in Tanzania are unfertilized, the large contribution of fertilized farms to N losses means that unfertilized farms make up just 83% of the food production N footprint. In a developing country like Tanzania, the main audiences for the N footprint tool are community leaders, planners, and developers who can impact decision-making and use the calculator to plan positive changes for nitrogen sustainability in the developing world.
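The footprint arithmetic described above, consumed N plus production losses scaled by a virtual N factor, plus an energy term, can be sketched as follows. All item names, consumption figures and factors are hypothetical placeholders, not values from the Tanzania tool:

```python
# Hypothetical per-capita food items: N consumed (kg/yr) and virtual N factor.
# The virtual N factor (vnf) expresses N lost to the environment during
# production per unit of N in the food actually consumed.
foods = {
    "cereal": {"n_consumed_kg": 1.2, "vnf": 2.0},
    "pulses": {"n_consumed_kg": 0.8, "vnf": 1.5},
    "meat":   {"n_consumed_kg": 0.5, "vnf": 4.0},
}

def food_n_footprint(foods):
    """Per-capita food N footprint: consumed N plus virtual production losses."""
    total = 0.0
    for item in foods.values():
        consumed = item["n_consumed_kg"]
        total += consumed + consumed * item["vnf"]
    return total

energy_footprint = 1.0  # hypothetical kg N/yr attributed to energy use
total_footprint = food_n_footprint(foods) + energy_footprint
print(total_footprint)
```

The two-farm-type refinement in the paper would simply carry two vnf values per food (fertilized and unfertilized) weighted by the share of production from each.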
Automation of 2-loop Amplitude Calculations
Jones, S P
2016-01-01
Some of the tools and techniques that have recently been used to compute Higgs boson pair production at NLO in QCD are discussed. The calculation relies on the use of integral reduction, to reduce the number of integrals which must be computed, and expressing the amplitude in terms of a quasi-finite basis, which simplifies their numeric evaluation. Emphasis is placed on sector decomposition and Quasi-Monte Carlo (QMC) integration which are used to numerically compute the master integrals.
Uncertainty calculation in (operational) modal analysis
Pintelon, R.; Guillaume, P.; Schoukens, J.
2007-08-01
In (operational) modal analysis the modal parameters of a structure are identified from the response of that structure to (unmeasurable operational) perturbations. A key issue that remains to be solved is the calculation of uncertainty bounds on the estimated modal parameters. The present paper fills this gap. The theory is illustrated by means of a simulation and a real measurement example (operational modal analysis of a bridge).
Eigenvalue translation method for mode calculations.
Gerck, E; Cruz, C H
1979-05-01
A new method is described for calculating the first few modes of an interferometer; it has several advantages over the Allmat subroutine, the Prony method, and the Fox and Li method. The illustrative results shown for some cases indicate that the eigenvalue translation method is typically 100 times faster than the usual Fox and Li method and ten times faster than Allmat.
Inductance Calculations of Variable Pitch Helical Inductors
2015-08-01
current. Using the classical skin depth definition, we can adjust the effective diameters used to calculate the inductances. The classical skin depth can...are not. The definition of classical skin depth is an approximation that assumes that all the current is flowing evenly within the region encompassed...inductance can be applied to other more complex forms of geometry, including tapered coils, by simply using the more general forms of the self- and
Practical Rhumb Line Calculations on the Spheroid
Bennett, G. G.
About ten years ago this author wrote the software for a suite of navigation programmes which was resident in a small hand-held computer. In the course of this work it became apparent that the standard text books of navigation were perpetuating a flawed method of calculating rhumb lines on the Earth considered as an oblate spheroid. On further investigation it became apparent that these incorrect methods were being used in programming a number of calculator/computers and satellite navigation receivers. Although the discrepancies were not large, it was disquieting to compare the results of the same rhumb line calculations from a number of such devices and find variations of some miles when the output was given, and therefore purported to be accurate, to a tenth of a mile in distance and/or a tenth of a minute of arc in position. The problem has been highlighted in the past, and the references at the end of this paper show that a number of methods have been proposed for the amelioration of this problem. This paper summarizes formulae that the author recommends should be used for accurate solutions. Most of these may be found in standard geodetic text books, but new formulae and schemes of solution are also provided which are suitable for use with computers or tables. The latter also take into account situations when a near-indeterminate solution may arise. Some examples are provided in an appendix which demonstrate the methods. The data for these problems do not refer to actual terrestrial situations but have been selected for illustrative purposes only. Practising ships' navigators will find the methods described in detail in this paper to be directly applicable to their work, and they should find ready acceptance because they are similar to current practice. In none of the references cited at the end of this paper has the practical task of calculating, using either a computer or tabular techniques, been addressed.
TEA: A Code Calculating Thermochemical Equilibrium Abundances
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
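The Gibbs free-energy minimisation that TEA performs can be illustrated on the smallest possible case: a single A <-> B reaction at fixed temperature and pressure with RT = 1. The brute-force scan below stands in for TEA's iterative Lagrangian scheme, and the reference energies are made up for illustration:

```python
import math

def gibbs(x, g_a=0.0, g_b=-1.0):
    """Dimensionless Gibbs energy (RT = 1) of an ideal mixture with
    n_A = 1 - x, n_B = x (total moles fixed at 1), for the reaction A -> B.
    g_a, g_b are illustrative standard-state free energies."""
    na, nb = 1.0 - x, x
    return na * (g_a + math.log(na)) + nb * (g_b + math.log(nb))

# Brute-force minimisation over the extent of reaction; the analytic
# minimum satisfies ln(x/(1-x)) = g_a - g_b, i.e. x = e/(1+e) here.
xs = [i / 100000 for i in range(1, 100000)]
x_eq = min(xs, key=gibbs)
print(x_eq)  # ~ e/(1+e) = 0.7311
```

Real codes like TEA solve the same minimisation for many species under elemental-balance constraints via Lagrange multipliers, but the equilibrium condition being enforced is the one this toy scan finds.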
Coupled-cluster calculations of nucleonic matter
Hagen, G; Ekström, A; Wendt, K A; Baardsen, G; Gandolfi, S; Hjorth-Jensen, M; Horowitz, C J
2014-01-01
Background: The equation of state (EoS) of nucleonic matter is central for the understanding of bulk nuclear properties, the physics of neutron star crusts, and the energy release in supernova explosions. Purpose: This work presents coupled-cluster calculations of infinite nucleonic matter using modern interactions from chiral effective field theory (EFT). It assesses the role of correlations beyond particle-particle and hole-hole ladders, and the role of three-nucleon-forces (3NFs) in nuclear matter calculations with chiral interactions. Methods: This work employs the optimized nucleon-nucleon NN potential NNLOopt at next-to-next-to leading-order, and presents coupled-cluster computations of the EoS for symmetric nuclear matter and neutron matter. The coupled-cluster method employs up to selected triples clusters and the single-particle space consists of a momentum-space lattice. We compare our results with benchmark calculations and control finite-size effects and shell oscillations via twist-averaged bound...
Modified embedded atom method calculations of interfaces
Energy Technology Data Exchange (ETDEWEB)
Baskes, M.I.
1996-05-01
The Embedded Atom Method (EAM) is a semi-empirical calculational method developed a decade ago to calculate the properties of metallic systems. By including many-body effects this method has proven to be quite accurate in predicting bulk and surface properties of metals and alloys. Recent modifications have extended this applicability to a large number of elements in the periodic table. For example the modified EAM (MEAM) is able to include the bond-bending forces necessary to explain the elastic properties of semiconductors. This manuscript will briefly review the MEAM and its application to the binary systems discussed below. Two specific examples of interface behavior will be highlighted to show the wide applicability of the method. In the first example a thin overlayer of nickel on silicon will be studied. Note that this example is representative of an important technological class of materials, a metal on a semiconductor. Both the structure of the Ni/Si interface and its mechanical properties will be presented. In the second example the system aluminum on sapphire will be examined. Again the class of materials is quite different, a metal on an ionic material. The calculated structure and energetics of a number of (111) Al layers on the (0001) surface of sapphire will be compared to recent experiments.
Starting Time Calculation for Induction Motor.
Directory of Open Access Journals (Sweden)
Abhishek Garg
2015-05-01
Full Text Available This paper presents the starting time calculation for a squirrel cage induction motor. The importance of the starting time lies in determining the duration of the large current which flows during the starting of an induction motor. Normally, the starting current of an induction motor is six to eight times the full load current. Plenty of methods have been devised to start a motor quickly, but their use is limited because they are uneconomic. Hence, for large motors direct on-line (DOL) starting is the most popular of all, due to its economic and feasible nature. But the large current drawn with DOL starting results in a heavy potential drop in the power system. Thus, special care and attention are required in order to design a healthy system. A very simple method to calculate the starting time of a motor is proposed in this paper. A simulation study has been carried out in the Matlab 7.8.0 environment, which demonstrates the effectiveness of the starting time calculation.
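A starting-time calculation of this kind amounts to integrating the rotor equation of motion, J dw/dt = T_e(w) - T_L(w), from standstill to running speed: t_start = integral of J dw / (T_e - T_L). A numerical sketch with illustrative linearised torque curves (not a specific motor model, and not the paper's Matlab implementation):

```python
def starting_time(j, t_motor, t_load, w_final, steps=20000):
    """Midpoint-rule integration of J*dw/(Te(w) - TL(w)) from 0 to w_final.
    j: rotor inertia (kg m^2); torque curves in N m; w in rad/s."""
    t, dw = 0.0, w_final / steps
    for i in range(steps):
        w = (i + 0.5) * dw
        accel_torque = t_motor(w) - t_load(w)  # net accelerating torque
        t += j * dw / accel_torque
    return t

J = 0.05                               # hypothetical rotor inertia
t_motor = lambda w: 30.0 - 0.08 * w    # simplified linear torque-speed curve
t_load = lambda w: 5.0 + 0.02 * w      # linearised load torque
t_start = starting_time(J, t_motor, t_load, w_final=150.0)
print(t_start)  # about 0.46 s for these made-up curves
```

With these linear curves the integral has the closed form 0.5*ln(2.5) ≈ 0.458 s, which the midpoint rule reproduces; a real calculation would substitute the motor's measured torque-slip characteristic.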
How Accurately can we Calculate Thermal Systems?
Energy Technology Data Exchange (ETDEWEB)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.
Vestibule and Cask Preparation Mechanical Handling Calculation
Energy Technology Data Exchange (ETDEWEB)
N. Ambre
2004-05-26
The scope of this document is to develop the size, operational envelopes, and major requirements of the equipment to be used in the vestibule, cask preparation area, and the crane maintenance area of the Fuel Handling Facility. This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Ref. 167124). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (Ref. 168751). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process.
Methods for Calculation of Geogenetic Depth
Institute of Scientific and Technical Information of China (English)
Liu Ruixun; Lü Guxian; Wang Fangzheng; Wei Changshan; Guo Chusun
2004-01-01
Some current methods for calculating geogenetic depth are based on the hydrostatic model, from which it is deduced that the depth at a given underground location equals the pressure divided by the specific weight of the rock, on the assumption that the rock is hydrostatic and overlain by no force other than gravity. However, most rock is in a deformational environment and a non-hydrostatic state, especially in an orogenic belt, so the depth calculated from the hydrostatic formula may be exaggerated in comparison with the actual depth. In the finite slight-deformation, elastic model, the relative actual depth was obtained from 3-axis strain data, with the measured strain including that of superimposed tectonic forces but excluding the time factor of the strain. If data on the strain rate are obtained, the depth can be calculated more realistically according to the rheological model, because a geological body often experiences long-term creep strain.
Calculation of sulfide capacities of multicomponent slags
Pelton, Arthur D.; Eriksson, Gunnar; Romero-Serrano, Antonio
1993-10-01
The Reddy-Blander model for the sulfide capacities of slags has been modified for the case of acid slags and to include Al2O3 and TiO2 as components. The model has been extended to calculate a priori sulfide capacities of multicomponent slags, from a knowledge of the thermodynamic activities of the component oxides, with no adjustable parameters. Agreement with measurements is obtained within experimental uncertainty for binary, ternary, and quinary slags involving the components SiO2-Al2O3-TiO2-CaO-MgO-FeO-MnO over wide ranges of composition. The oxide activities used in the computations are calculated from a database of model parameters obtained by optimizing thermodynamic and phase equilibrium data for oxide systems. Sulfur has now been included in this database. A computing system with automatic access to this and other databases has been developed to permit the calculation of the sulfur content of slags in multicomponent slag/metal/gas/solid equilibria.
Quantum mechanical calculations and mineral spectroscopy
Kubicki, J. D.
2006-05-01
Interpretation of spectra in systems of environmental interest is not generally straightforward due to the lack of close analogs and a clear structure of some components of the system. Computational chemistry can be used as an objective method to test interpretations of spectra. This talk will focus on applying ab initio methods to complement vibrational, NMR, and EXAFS spectroscopic information. Examples of systems studied include phosphate/Fe-hydroxides, arsenate/Al- and Fe-hydroxides, and fractured silica surfaces. Phosphate interactions with Fe-hydroxides are important in controlling nutrient availability in soils and transport within streams. In addition, organo-phosphate bonding may be a key attachment mechanism for bacteria at Fe-oxide surfaces. Interpretation of IR spectra is enhanced by model predictions of vibrational frequencies for various surface complexes. Ab initio calculations were used to help explain As(V) and As(III) adsorption behavior onto amorphous Al- and Fe-hydroxides in conjunction with EXAFS measurements. Fractured silica surfaces have been implicated in silicosis. These calculations test structures that could give rise to radical formation on silica surfaces. Calculations simulating the creation of Si and SiO radical species on surfaces and their subsequent production of OH radicals will be discussed.
Treatment Registration and Nuclide Decay Calculation System
Institute of Scientific and Technical Information of China (English)
WU Jian-guo; XU Bo; CHEN Zhi-jun; ZHOU Ai-qing; WANG Xue-qin; ZHANG Bin; MA Tao; SHEN Jun-jin; LIU Jie; JIN Hai-xia
2008-01-01
Objective: To design software to perform the complicated and multiple calculations of routine internal radionuclide irradiation therapy automatically, to avoid mistakes and shorten patients' waiting times. Methods: The software was designed on the Microsoft Windows XP operating system, with Visual Basic 5.0 as the programming language and Microsoft Access 2000 as the database system. The Data and DBGrid controls and the VB data window guide were used to access the Access database. Results: Not only can the radioactivity of any radionuclide be calculated, but also the total administered iodine dose for therapy of hyperthyroidism or thyroid cancer and the total administered 153Sm-EDTMP solution for the treatment of bone metastases of malignant tumors. Conclusion: The work becomes easier, faster, more correct and more interesting when the software performs the complicated and multiple calculations automatically. Patients' information, diagnoses and treatments can be recorded for further study.
Ability of medical students to calculate drug doses in children after their paediatric attachment
Directory of Open Access Journals (Sweden)
Oshikoya KA
2008-12-01
Full Text Available Dose calculation errors constitute a significant part of prescribing errors, and may result from informal teaching of the topic in medical schools. Objectives: To determine the adequacy of the knowledge and skills of drug dose calculation in children acquired by medical students during their clinical attachment in paediatrics. Methods: Fifty-two 5th year medical students of the Lagos State University College of Medicine (LASUCOM), Ikeja, were examined on drug dose calculations from a vial and ampoules of injections, syrup and suspension, and a tablet formulation. The examination used a structured questionnaire, mostly in the form of multiple choice questions. Results: Thirty-six (69.2%) and 30 (57.7%) students were taught drug dose calculation in the neonatal posting and during ward rounds/bed-side teaching, respectively. Less than 50% of the students were able to calculate the correct doses of each of adrenaline, gentamicin, chloroquine and sodium bicarbonate injections required by the patient. Dose calculation was, however, relatively better with adrenaline than with the other injections. The proportions of female students who calculated the correct doses of quinine syrup and cefuroxime suspension were significantly higher than those of their male counterparts (p<0.05 and p<0.01, respectively; Chi-square test). When the doses calculated in mg/dose and mL/dose were compared for adrenaline injection and each of quinine syrup and cefuroxime suspension, there were significant differences (adrenaline and quinine, p=0.005; adrenaline and cefuroxime, p=0.003; Fisher's exact test). Dose calculation errors of similar magnitude to those for injections, syrup and suspension were also observed with the tablet formulation. Conclusions: LASUCOM medical students lacked the basic knowledge of paediatric drug dose calculations but were willing to learn if the topic were formally taught. Drug dose calculations should be given a prominent consideration in the undergraduate medical
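The mg/dose and mL/dose conversions the students were examined on follow one pattern: a weight-based dose in mg, then division by the preparation's concentration to get the volume to draw up. A sketch with hypothetical drug strengths, for illustration only and not dosing guidance:

```python
def dose_and_volume(dose_mg_per_kg, weight_kg, conc_mg_per_ml):
    """Weight-based paediatric dose: returns (dose in mg, volume in mL).
    All numeric inputs below are illustrative, not clinical recommendations."""
    dose_mg = dose_mg_per_kg * weight_kg
    volume_ml = dose_mg / conc_mg_per_ml
    return dose_mg, volume_ml

# e.g. a 2.5 mg/kg dose for a 12 kg child, drawn from a vial
# labelled 40 mg/mL (hypothetical strength):
dose_mg, volume_ml = dose_and_volume(2.5, 12.0, 40.0)
print(dose_mg, volume_ml)  # 30.0 mg -> 0.75 mL
```

The common error modes in such exams are mixing up mg/dose with mg/kg/day and forgetting the concentration division, both of which this two-step structure makes explicit.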
Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo
2016-03-01
Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy, compared with the pencil beam algorithm (PBA). However, since the FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphic processing unit (GPU) boosts the calculation speed by 13 times for the treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of the complex dose distribution more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.
DEFF Research Database (Denmark)
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca;
2006-01-01
A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung...... correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very...
Lattice QCD Calculation of Nucleon Structure
Energy Technology Data Exchange (ETDEWEB)
Liu, Keh-Fei [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy; Draper, Terrence [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy
2016-08-30
It is emphasized in the 2015 NSAC Long Range Plan that "understanding the structure of hadrons in terms of QCD's quarks and gluons is one of the central goals of modern nuclear physics." Over the last three decades, lattice QCD has developed into a powerful tool for ab initio calculations of strong-interaction physics. Up until now, it is the only theoretical approach to solving QCD with controlled statistical and systematic errors. Since 1985, we have proposed and carried out first-principles calculations of nucleon structure and hadron spectroscopy using lattice QCD which entails both algorithmic development and large-scale computer simulation. We started out by calculating the nucleon form factors -- electromagnetic, axial-vector, πNN, and scalar form factors, the quark spin contribution to the proton spin, the strangeness magnetic moment, the quark orbital angular momentum, the quark momentum fraction, and the quark and glue decomposition of the proton momentum and angular momentum. The first round of calculations were done with Wilson fermions in the `quenched' approximation where the dynamical effects of the quarks in the sea are not taken into account in the Monte Carlo simulation to generate the background gauge configurations. Beginning in 2000, we have started implementing the overlap fermion formulation into the spectroscopy and structure calculations. This is mainly because the overlap fermion honors chiral symmetry as in the continuum. It is going to be more and more important to take the symmetry into account as the simulations move closer to the physical point where the u and d quark masses are as light as a few MeV only. We began with lattices which have quark masses in the sea corresponding to a pion mass at ~ 300 MeV and obtained the strange form factors, charm and strange quark masses, the charmonium spectrum and the D_{s} meson decay constant f_{Ds}, the strangeness and charmness, the meson mass
Likelihood ratios: Clinical application in day-to-day practice
Directory of Open Access Journals (Sweden)
Parikh Rajul
2009-01-01
Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
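The step-by-step calculation the article describes (pretest probability to odds, multiply by the likelihood ratio, convert back to a posttest probability) is short enough to write down directly; the pretest probability and LR values below are illustrative:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' rule in odds form: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A positive result on a test with LR+ = 10, applied at 20% pretest probability:
p = posttest_probability(0.20, 10.0)
print(p)  # about 0.71 -- a 20% suspicion becomes ~71% after the positive test
```

The same function handles a negative result by passing the test's LR- (a value below 1), which lowers rather than raises the probability.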
Basu, Sandip; Alavi, Abass
2016-07-01
It is imperative that the thrust of clinical practice in the ensuing years will be to develop personalized management models for various disorders. PET-computed tomography (PET-CT) based molecular functional imaging has been increasingly utilized for the assessment of tumors and other nonmalignant disorders, and has the ability to explore disease phenotype on an individual basis and address critical clinical decision-making questions related to the practice of personalized medicine. Hence, it is essential to make a concerted systematic effort to explore and define the appropriate place of PET-CT in personalized clinical practice for each malignancy, which would strengthen the concept further. The potential advantages of PET-based disease management can be classified into broad categories: (1) traditional, which includes assessment of disease extent such as initial disease staging and restaging, and treatment response evaluation, particularly early in the course, and thus PET-CT response-adaptive decisions to continue the same regimen or switch to salvage schedules; newer applications of PET-based disease restaging continue to be added in oncological parlance (eg, Richter transformation); (2) recent and emerging developments, which include exploring tumor biology with FDG and non-FDG PET tracers. The potential of multitracer PET imaging (particularly new and novel tracers, eg, 68Ga-DOTA-TOC/NOC/TATE in NET, 68Ga-PSMA and 18F-fluorocholine in prostate carcinoma, and 18F-fluoroestradiol in breast carcinoma) has provided a scientific basis to stratify and select appropriate targeted therapies (both radionuclide and nonradionuclide treatment), a major boost for individualized disease management in clinical oncology. Integrating the molecular-level information obtained from PET with structural imaging further individualizes treatment planning in radiation oncology, improves the precision of interventions and biopsies of a particular lesion, and aids in forecasting disease prognosis.
Limitations of analytical dose calculations for small field proton radiosurgery
Geng, Changran; Daartz, Juliane; Lam-Tin-Cheung, Kimberley; Bussiere, Marc; Shih, Helen A.; Paganetti, Harald; Schuemann, Jan
2017-01-01
The purpose of the work was to evaluate the dosimetric uncertainties of an analytical dose calculation engine and the impact on treatment plans using small fields in intracranial proton stereotactic radiosurgery (PSRS) for a gantry based double scattering system. 50 patients were evaluated including 10 patients for each of 5 diagnostic indications of: arteriovenous malformation (AVM), acoustic neuroma (AN), meningioma (MGM), metastasis (METS), and pituitary adenoma (PIT). Treatment plans followed standard prescription and optimization procedures for PSRS. We performed comparisons between delivered dose distributions, determined by Monte Carlo (MC) simulations, and those calculated with the analytical dose calculation algorithm (ADC) used in our current treatment planning system in terms of dose volume histogram parameters and beam range distributions. Results show that the difference in the dose to 95% of the target (D95) is within 6% when applying measured field size output corrections for AN, MGM, and PIT. However, for AVM and METS, the differences can be as great as 10% and 12%, respectively. Normalizing the MC dose to the ADC dose based on the dose of voxels in a central area of the target reduces the difference of the D95 to within 6% for all sites. The generally applied margin to cover uncertainties in range (3.5% of the prescribed range + 1 mm) is not sufficient to cover the range uncertainty for ADC in all cases, especially for patients with high tissue heterogeneity. The root mean square of the R90 difference, the difference in the position of distal falloff to 90% of the prescribed dose, is affected by several factors, especially the patient geometry heterogeneity, modulation and field diameter. In conclusion, implementation of Monte Carlo dose calculation techniques into the clinic can reduce the uncertainty of the target dose for proton stereotactic radiosurgery. If MC is not available for treatment planning, using MC dose distributions to
Numerical precision calculations for LHC physics
Energy Technology Data Exchange (ETDEWEB)
Reuschle, Christian Andreas
2013-02-05
In this thesis I present aspects of QCD calculations, which are related to the fully numerical evaluation of next-to-leading order (NLO) QCD amplitudes, especially of the one-loop contributions, and the efficient computation of associated collider observables. Two interrelated topics have thereby been of concern to the thesis at hand, which give rise to two major parts. One large part is focused on the general group-theoretical behavior of one-loop QCD amplitudes, with respect to the underlying SU(N{sub c}) theory, in order to correctly and efficiently handle the color degrees of freedom in QCD one-loop amplitudes. To this end a new method is introduced that can be used in order to express color-ordered partial one-loop amplitudes with multiple quark-antiquark pairs as shuffle sums over cyclically ordered primitive one-loop amplitudes. The other large part is focused on the local subtraction of divergences off the one-loop integrands of primitive one-loop amplitudes. A method for local UV renormalization has thereby been developed, which uses local UV counterterms and efficient recursive routines. Together with suitable virtual soft and collinear subtraction terms, the subtraction method is extended to the virtual contributions in the calculations of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions. The method has been successfully applied to the calculation of jet rates in electron-positron annihilation to NLO accuracy in the large-N{sub c} limit.
Future requirements. Clinical investigations
DEFF Research Database (Denmark)
Qvist, V.
2002-01-01
Biocompatibility, Cariology, Clinical trials, Dental materials, Health services research, Human, Pedodontics
Using reciprocity in Boundary Element Calculations
DEFF Research Database (Denmark)
Juhl, Peter Møller; Cutanda Henriquez, Vicente
2010-01-01
The concept of reciprocity is widely used in both theoretical and experimental work. In Boundary Element calculations reciprocity is sometimes employed in the solution of computationally expensive scattering problems, which sometimes can be more efficiently dealt with when formulated...... as the reciprocal radiation problem. The present paper concerns the situation of having a point source (which is reciprocal to a point receiver) at or near a discretized boundary element surface. The accuracy of the original and the reciprocal problem is compared in a test case for which an analytical solution...
Parallel solutions of correlation dimension calculation
Institute of Scientific and Technical Information of China (English)
NONE
2005-01-01
The calculation of correlation dimension is a key problem in the study of fractals. The standard algorithm requires O(N^2) computations. Previous improvements reduce redundant computation sequentially, and only when many phase spaces of different dimensions are involved, so their range of application and degree of speedup are limited. This paper presents two fast parallel algorithms: an O(N^2/p + log p) time, p-processor PRAM algorithm and an O(N^2/p) time, p-processor LARPBS algorithm. Analysis and numerical results indicate that the speedup of the parallel algorithms over the sequential ones is significant. Compared with the PRAM algorithm, the LARPBS algorithm is practical, optimally scalable and cost optimal.
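As a point of reference for the O(N^2) standard algorithm the abstract mentions, here is a minimal Python sketch of the sequential Grassberger-Procaccia correlation sum (function names illustrative; this is the baseline the parallel PRAM/LARPBS algorithms speed up, not those algorithms themselves):

```python
import numpy as np

def correlation_sum(points, r):
    """O(N^2) pair count: fraction of point pairs closer than r."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# Correlation dimension D2 ~ slope of log C(r) vs log r
rng = np.random.default_rng(0)
pts = rng.random((200, 2))           # points filling a 2-D region
radii = np.array([0.05, 0.1, 0.2])
logC = np.log([correlation_sum(pts, r) for r in radii])
slope = np.polyfit(np.log(radii), logC, 1)[0]   # close to 2 for a filled plane
```

For a genuine attractor one would embed a time series in phase space first; this sketch only shows where the quadratic cost comes from.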
Calculations in fundamental physics mechanics and heat
Heddle, T
2013-01-01
Calculations in Fundamental Physics, Volume I: Mechanics and Heat focuses on the mechanisms of heat. The manuscript first discusses motion, including parabolic, angular, and rectilinear motions, relative velocity, acceleration of gravity, and non-uniform acceleration. The book then discusses combinations of forces, such as polygons and resolution, friction, center of gravity, shearing force, and bending moment. The text looks at force and acceleration, energy and power, and machines. Considerations include momentum, horizontal or vertical motion, work and energy, pulley systems, gears and chai
Calculational Investigation for Mine-Clearance Experiments
1981-08-31
[Garbled OCR of scanned report figures; recoverable captions: "FIGURE 17. SAP Problem 5.0013 Initial Mesh Configuration" (charge in ambient air, 3' zones of air) and a pressure-history plot for Problem 5.0008 showing a reflected shock and a brief negative phase.]
Rooftop Unit Comparison Calculator User Manual
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-04-30
This document serves as a user manual for the Packaged rooftop air conditioners and heat pump units comparison calculator (RTUCC) and is an aggregation of the calculator’s website documentation. Content ranges from new-user guide material like the “Quick Start” to the more technical/algorithmic descriptions of the “Methods Pages.” There is also a section listing all the context-help topics that support the features on the “Controls” page. The appendix has a discussion of the EnergyPlus runs that supported the development of the building-response models.
Speed mathematics secrets skills for quick calculation
Handley, Bill
2011-01-01
Using this book will improve your understanding of math and have you performing like a genius! People who excel at mathematics use better strategies than the rest of us; they are not necessarily more intelligent. Speed Mathematics teaches simple methods that will enable you to make lightning calculations in your head, including multiplication, division, addition, and subtraction, as well as working with fractions, squaring numbers, and extracting square and cube roots. Here's just one example of this revolutionary approach to basic mathematics: 96 x 97 = ? Subtract each number from 100: 96 x 97 gives 4 and 3. Subtract...
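The subtract-from-100 shortcut in the excerpt can be sketched in a few lines (hypothetical helper name; works for factors near 100):

```python
def near_100_multiply(a, b):
    """Multiply two numbers near 100 with the subtract-from-100 shortcut."""
    da, db = 100 - a, 100 - b        # 96 -> 4, 97 -> 3
    # Cross-subtract one difference, append the product of the differences:
    return (a - db) * 100 + da * db  # (96 - 3) * 100 + 4 * 3

print(near_100_multiply(96, 97))  # -> 9312
```

The identity behind it is (100 - da)(100 - db) = (100 - da - db)·100 + da·db.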
What Factors Affect Intraocular Lens Power Calculation?
Fayette, Rose M; Cakiner-Egilmez, Tulay
2015-01-01
Obtaining precise postoperative target refraction is of utmost importance in modern cataract and refractive surgery. Emerging literature has linked postoperative surprises to corneal curvature, axial length, and estimation of the effective IOL position. As demonstrated in this case presentation, an inaccuracy in the axial length measurement can lead to a myopic surprise. A review of the literature has demonstrated that prevention of postoperative refractive surprises requires highly experienced nurses, technicians, and/or biometrists to take meticulous measurements using biometry devices, and surgeons to re-evaluate these calculations prior to the surgery.
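To illustrate how an axial length error propagates into a refractive surprise, here is a sketch using the classic SRK regression formula (P = A - 2.5L - 0.9K); constants and values are illustrative only, not clinical guidance, and modern practice uses newer-generation formulas:

```python
def srk_iol_power(a_constant, axial_len_mm, avg_k_diopters):
    """Classic SRK regression formula for IOL power (illustrative only)."""
    return a_constant - 2.5 * axial_len_mm - 0.9 * avg_k_diopters

true_p = srk_iol_power(118.4, 23.5, 43.0)
err_p  = srk_iol_power(118.4, 23.0, 43.0)  # axial length under-read by 0.5 mm
# The 0.5 mm error inflates the calculated power by ~1.25 D: a myopic surprise.
```

The 2.5 D-per-mm coefficient is why meticulous axial length measurement matters so much.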
A Lattice Calculation of Parton Distributions
Alexandrou, Constantia; Hadjiyiannakou, Kyriakos; Jansen, Karl; Steffens, Fernanda; Wiese, Christian
2016-01-01
We present results for the $x$ dependence of the unpolarized, helicity, and transversity isovector quark distributions in the proton using lattice QCD, employing the method of quasi-distributions proposed by Ji in 2013. Compared to a previous calculation by us, the errors are reduced by a factor of about 2.5. Moreover, we present our first results for the polarized sector of the proton, which indicate an asymmetry in the proton sea in favor of the $u$ antiquarks for the case of helicity distributions, and an asymmetry in favor of the $d$ antiquarks for the case of transversity distributions.
Motor Torque Calculations For Electric Vehicle
Directory of Open Access Journals (Sweden)
Saurabh Chauhan
2015-08-01
Full Text Available. It is estimated that 25% of cars worldwide will run on electricity by 2025. An important component that is an integral part of all electric vehicles is the motor. The amount of torque that the driving motor delivers plays a decisive role in determining the speed, acceleration, and performance of an electric vehicle. The following work aims at simplifying the calculations required to decide the capacity of the motor that should be used to drive a vehicle of particular specifications.
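The torque sizing the abstract describes typically sums inertial, rolling-resistance, and aerodynamic loads at the wheel, then reflects them through the gear ratio. A minimal sketch with assumed coefficients (not the paper's actual values):

```python
def required_motor_torque(mass_kg, accel_ms2, v_ms, wheel_radius_m,
                          gear_ratio, c_rr=0.015, c_d=0.3, area_m2=2.0,
                          rho=1.225, g=9.81):
    """Tractive force = inertia + rolling resistance + aero drag (flat road)."""
    f_inertia = mass_kg * accel_ms2
    f_roll    = c_rr * mass_kg * g
    f_aero    = 0.5 * rho * c_d * area_m2 * v_ms ** 2
    wheel_torque = (f_inertia + f_roll + f_aero) * wheel_radius_m
    return wheel_torque / gear_ratio   # torque at the motor shaft

# 1200 kg car accelerating at 2 m/s^2 at 15 m/s, 0.3 m wheels, 8:1 reduction
t_nm = required_motor_torque(1200, 2.0, 15.0, 0.3, gear_ratio=8.0)
```

Grade resistance (mass * g * sin(slope)) would be added for hill-climb sizing; it is omitted here for brevity.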
Configurational space continuity and free energy calculations
Tian, Pu
2016-01-01
Free energy is arguably the most important function(al) for understanding molecular systems. A number of rigorous and approximate free energy calculation/estimation methods have been developed over many decades. One important issue, however, the continuity of a macrostate (or path) of interest in configurational space, has not been well articulated, even though some important special cases have been discussed intensively. In this perspective, I discuss the relevance of configurational space continuity to the development of more efficient and reliable next-generation free energy methodologies.
Calculations in bridge aeroelasticity via CFD
Energy Technology Data Exchange (ETDEWEB)
Brar, P.S.; Raul, R.; Scanlan, R.H. [Johns Hopkins Univ., Baltimore, MD (United States)
1996-12-31
The central focus of the present study is the numerical calculation of flutter derivatives. These aeroelastic coefficients play an important role in determining the stability or instability of long, flexible structures under ambient wind loading. A class of Civil Engineering structures most susceptible to such an instability are long-span bridges of the cable-stayed or suspended-span variety. The disastrous collapse of the Tacoma Narrows suspension bridge in the recent past, due to a flutter instability, has been a big impetus in motivating studies in flutter of bridge decks.
Molecular transport calculations with Wannier Functions
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2005-01-01
We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane...... is applied to a hydrogen molecule in an infinite Pt wire and a benzene-dithiol (BDT) molecule between Au(111) surfaces. We show that the transmission function of BDT in a wide energy window around the Fermi level can be completely accounted for by only two molecular orbitals. (c) 2005 Elsevier B.V. All...
Electrical Conductivity Calculations from the Purgatorio Code
Energy Technology Data Exchange (ETDEWEB)
Hansen, S B; Isaacs, W A; Sterne, P A; Wilson, B G; Sonnad, V; Young, D A
2006-01-09
The Purgatorio code [Wilson et al., JQSRT 99, 658-679 (2006)] is a new implementation of the Inferno model describing a spherically symmetric average atom embedded in a uniform plasma. Bound and continuum electrons are treated using a fully relativistic quantum mechanical description, giving the electron-thermal contribution to the equation of state (EOS). The free-electron density of states can also be used to calculate scattering cross sections for electron transport. Using the extended Ziman formulation, electrical conductivities are then obtained by convolving these transport cross sections with externally-imposed ion-ion structure factors.
On the Calculation of Formal Concept Stability
Directory of Open Access Journals (Sweden)
Hui-lai Zhi
2014-01-01
Full Text Available The idea of stability has been used in many applications. However, computing stability is still a challenge, and the best algorithms known so far have algorithmic complexity quadratic in the size of the lattice. To improve efficiency, this paper introduces a key notion, the minimal generator: the minimal subset of the extent that keeps a concept stable when objects are deleted. Moreover, minimal generators are derived from irreducible elements. Finally, formulas for the calculation of concept stability are proposed based on the inclusion-exclusion principle and minimal generators.
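The quantity in question, the stability of a formal concept, is the fraction of subsets of the extent whose derived intent equals the concept's intent. A brute-force Python sketch makes the definition concrete (context and names are hypothetical; the exponential cost in the extent size is exactly why minimal-generator and inclusion-exclusion shortcuts matter):

```python
from itertools import chain, combinations

def intent(objs, context, all_attrs):
    """Attributes shared by every object in objs (all attributes for the empty set)."""
    result = set(all_attrs)
    for o in objs:
        result &= context[o]
    return result

def stability(extent, context, all_attrs):
    """Brute-force stability: fraction of extent subsets with unchanged intent."""
    target = intent(extent, context, all_attrs)
    subsets = chain.from_iterable(
        combinations(extent, k) for k in range(len(extent) + 1))
    hits = sum(1 for s in subsets if intent(s, context, all_attrs) == target)
    return hits / 2 ** len(extent)

# Toy context: objects 'a','b' share attributes {1,2}; 'c' breaks the pattern.
ctx = {'a': {1, 2}, 'b': {1, 2}, 'c': {1, 3}}
s = stability(['a', 'b'], ctx, {1, 2, 3})  # subsets {a},{b},{a,b} keep intent {1,2}
```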
Calculated Bulk Properties of the Actinide Metals
DEFF Research Database (Denmark)
Skriver, Hans Lomholt; Andersen, O. K.; Johansson, B.
1978-01-01
Self-consistent relativistic calculations of the electronic properties for seven actinides (Ac-Am) have been performed using the linear muffin-tin orbitals method within the atomic-sphere approximation. Exchange and correlation were included in the local spin-density scheme. The theory explains...... the variation of the atomic volume and the bulk modulus through the 5f series in terms of an increasing 5f binding up to plutonium followed by a sudden localisation (through complete spin polarisation) in americium...
Quantum Statistical Calculation of Exchange Bias
Institute of Scientific and Technical Information of China (English)
WANG Huai-Yu; DAI Zhen-Hong
2004-01-01
The phenomenon of exchange bias in ferromagnetic (FM) films coupled to an antiferromagnetic (AFM) film is studied with the Heisenberg model, using the many-body Green's function method of quantum statistical theory, for the uncompensated case. Exchange bias HE and coercivity Hc are calculated as functions of the FM film thickness L, temperature, the strength of the exchange interaction across the FM/AFM interface, and the anisotropy of the FM. Hc decreases with increasing L once the FM film exceeds a certain thickness. The dependence of the exchange bias HE on the FM film thickness and on temperature is also in qualitative agreement with experiments.
Calculation of thermal noise in grating reflectors
Heinert, Daniel; Friedrich, Daniel; Hild, Stefan; Kley, Ernst-Bernhard; Leavey, Sean; Martin, Iain W; Nawrodt, Ronny; Tünnermann, Andreas; Vyatchanin, Sergey P; Yamamoto, Kazuhiro
2013-01-01
Grating reflectors have been repeatedly discussed to improve the noise performance of metrological applications due to the reduction or absence of any coating material. So far, however, no quantitative estimate on the thermal noise of these reflective structures exists. In this work we present a theoretical calculation of a grating reflector's noise. We further apply it to a proposed 3rd generation gravitational wave detector. Depending on the grating geometry, the grating material and the temperature we obtain a thermal noise decrease by up to a factor of ten compared to conventional dielectric mirrors. Thus the use of grating reflectors can substantially improve the noise performance in metrological applications.
A Lattice Calculation of Parton Distributions
Energy Technology Data Exchange (ETDEWEB)
Alexandrou, Constantia [Cyprus Univ. Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus); Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Drach, Vincent [Univ. of Southern Denmark, Odense (Denmark). CP3-Origins; Univ. of Southern Denmark, Odense (Denmark). Danish IAS; Garcia-Ramos, Elena [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Hadjiyiannakou, Kyriakos [Cyprus Univ. Nicosia (Cyprus). Dept. of Physics; Jansen, Karl; Steffens, Fernanda; Wiese, Christian [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC
2015-04-15
We report on our exploratory study for the direct evaluation of the parton distribution functions from lattice QCD, based on a recently proposed new approach. We present encouraging results using N{sub f}=2+1+1 twisted mass fermions with a pion mass of about 370 MeV. The focus of this work is a detailed description of the computation, including the lattice calculation, the matching to an infinite momentum and the nucleon mass correction. In addition, we test the effect of gauge link smearing in the operator to estimate the influence of the Wilson line renormalization, which is yet to be done.
Photoionization of zinc by TDLDA calculations
Stener, M.; Decleva, P.
1997-10-01
Absolute photoionization cross section profiles of Zn have been calculated at TDLDA and LDA level, employing a very accurate B-spline basis set and the modified Sternheimer approach. The van Leeuwen - Baerends exchange correlation potential has been used, since its correct asymptotic behaviour is able to support virtual states and describe core-excited resonances. A comparison with available theoretical and experimental data has been performed when possible. The present method has been proven to be robust to analyse wide photon energy regions (from threshold up to 200 eV) and discuss the various shapes of one-electron resonances.
Energy Technology Data Exchange (ETDEWEB)
NONE
1963-07-01
This note constitutes the first edition of a Handbook for the calculation of reactor shielding. This handbook makes it possible to calculate simply the different neutron and gamma fluxes and, consequently, to fix the minimum quantities of materials necessary to meet general safety conditions, both for the personnel and for the installations. It contains a certain amount of nuclear data, calculation methods, and constants corresponding to the present state of our knowledge. (authors)
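The kind of shield sizing such a handbook supports can be illustrated with simple exponential attenuation plus a buildup factor; the sketch below uses assumed values and a hypothetical function name, not data from the handbook:

```python
import math

def shield_thickness_cm(flux_in, flux_limit, mu_cm, buildup=1.0):
    """Minimum slab thickness x such that buildup * flux_in * exp(-mu * x)
    does not exceed flux_limit (narrow-beam attenuation, constant buildup)."""
    return math.log(buildup * flux_in / flux_limit) / mu_cm

# Attenuate 1e8 gammas/cm^2/s down to 1e2, mu = 0.5 /cm, buildup factor 2
x = shield_thickness_cm(flux_in=1e8, flux_limit=1e2, mu_cm=0.5, buildup=2.0)
```

In practice the buildup factor itself depends on thickness and energy, so real handbook calculations iterate or tabulate it rather than treating it as a constant.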
The spacing calculator software—A Visual Basic program to calculate spatial properties of lineaments
Ekneligoda, Thushan C.; Henkel, Herbert
2006-05-01
A software tool is presented which calculates the spatial properties azimuth, length, spacing, and frequency of lineaments that are defined by their starting and ending co-ordinates in a two-dimensional (2-D) planar co-ordinate system. A simple graphical interface with five display windows creates a user-friendly interactive environment. All lineaments are considered in the calculations, and no secondary sampling grid is needed for the elaboration of the spatial properties. Several rule-based decisions are made to determine the nearest lineament in the spacing calculation. As a default procedure, the programme defines a window that depends on the mode value of the length distribution of the lineaments in a study area. This makes the results more consistent, compared to the manual method of spacing calculation. Histograms are provided to illustrate and elaborate the distribution of the azimuth, length and spacing. The core of the tool is the spacing calculation between neighbouring parallel lineaments, which gives direct information about the variation of block sizes in a given category of structures. The 2-D lineament frequency is calculated for the actual area that is occupied by the lineaments.
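The per-lineament azimuth and length calculations described above can be sketched from endpoint co-ordinates (function name assumed; the tool's rule-based spacing window is omitted):

```python
import math

def lineament_properties(x1, y1, x2, y2):
    """Length and azimuth (degrees clockwise from north, folded to 0-180,
    since a lineament has no direction) from 2-D endpoint co-ordinates."""
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dx, dy)) % 180.0
    return length, azimuth

length, az = lineament_properties(0, 0, 100, 100)  # NE-trending lineament
```

Spacing would then be the perpendicular distance between neighbouring lineaments of similar azimuth, which is where the tool's windowing rules come in.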
Student nurses need more than maths to improve their drug calculating skills.
Wright, Kerri
2007-05-01
Nurses need to be able to perform accurate drug calculations in order to administer drugs safely to their patients (NMC, 2002). Studies have shown, however, that nurses do not always have the necessary skills to calculate accurate drug dosages and are potentially administering incorrect doses to their patients (Hutton, M. 1998. Nursing mathematics: the importance of application. Nursing Standard 13(11), 35-38; Kapborg, I. 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4), 389-395; O'Shea, E. 1999. Factors contributing to medication errors: a literature review. Journal of Advanced Nursing 8, 496-504; Wilson, A. 2003. Nurses' maths: researching a practical approach. Nursing Standard 17(47), 33-36). The literature indicates that, in order to improve drug calculations, strategies need to focus on both the mathematical and the conceptual skills of student nurses, so that they can translate clinical data into drug calculations to be solved. A study was undertaken to investigate the effectiveness of implementing several strategies that focused on developing the mathematical and conceptual skills of student nurses to improve their drug calculation skills. The study found that implementing a range of strategies addressing these two developmental areas significantly improved the drug calculation skills of nurses. The study also indicates that a range of strategies has the potential to ensure that the skills taught are retained by the student nurses. Although the strategies significantly improved the drug calculation skills of student nurses, the fact that only two students were able to achieve 100% in their drug calculation test indicates a need for further research in this area.
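The conceptual step the abstract emphasises, translating clinical data into a calculation, often reduces to the familiar "what you want, over what you've got, times what it's in" formula. A minimal sketch with illustrative values:

```python
def volume_to_administer(prescribed_mg, stock_mg, stock_volume_ml):
    """Dose volume = (prescribed amount / stock amount) * stock volume."""
    return prescribed_mg / stock_mg * stock_volume_ml

# Prescription: 250 mg; stock ampoule: 500 mg in 10 ml
v_ml = volume_to_administer(prescribed_mg=250, stock_mg=500, stock_volume_ml=10)
# -> 5.0 ml
```

The arithmetic is trivial; the studies cited above show the errors arise mainly in setting the calculation up from the clinical data, which no formula alone prevents.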
Clinical toxicology: clinical science to public health.
Bateman, D N
2005-11-01
1. The aims of the present paper are to: (i) review progress in clinical toxicology over the past 40 years and to place it in the context of modern health care by describing its development; and (ii) illustrate the use of clinical toxicology data from Scotland, in particular, as a tool for informing clinical care and public health policy with respect to drugs. 2. A historical literature review was conducted with amalgamation and comparison of a series of published and unpublished clinical toxicology datasets from NPIS Edinburgh and other sources. 3. Clinical databases within poisons treatment centres offer an important method of collecting data on the clinical effects of drugs in overdose. These data can be used to increase knowledge on drug toxicity mechanisms that inform licensing decisions, contribute to evidence-based care and clinical management. Combination of this material with national morbidity datasets provides another valuable approach that can inform public health prevention strategies. 4. In conclusion, clinical toxicology datasets offer clinical pharmacologists a new study area. Clinical toxicology treatment units and poisons information services offer an important health resource.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Calculating lunar retreat rates using tidal rhythmites
Kvale, E.P.; Johnson, H.W.; Sonett, C.P.; Archer, A.W.; Zawistoski, A.N.N.
1999-01-01
Tidal rhythmites are small-scale sedimentary structures that can preserve a hierarchy of astronomically induced tidal periods. They can also preserve a record of periodic nontidal sedimentation. If properly interpreted and understood, tidal rhythmites can be an important component of paleoastronomy and can be used to extract information on ancient lunar orbital dynamics, including changes in Earth-Moon distance through geologic time. Herein we present techniques that can be used to calculate ancient Earth-Moon distances. Each of these techniques, when used on a modern high-tide data set, results in calculated estimates of lunar orbital periods and an Earth-Moon distance that fall well within 1 percent of the actual values. Comparisons to results from modern tidal data indicate that ancient tidal rhythmite data as short as 4 months can provide suitable estimates of lunar orbital periods if these tidal records are complete. An understanding of basic tidal theory allows for the evaluation of completeness of the ancient tidal record as derived from an analysis of tidal rhythmites. Utilizing the techniques presented herein, it appears from the rock record that lunar orbital retreat slowed sometime during the mid-Paleozoic. Copyright ©1999, SEPM (Society for Sedimentary Geology).
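The step from a recovered lunar orbital period to an Earth-Moon distance rests on Kepler's third law. A sketch with present-day constants (illustrative of the physics, not the authors' exact procedure; an ancient analysis would insert the period counted from the rhythmites):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # kg
M_MOON  = 7.342e22       # kg

def earth_moon_distance_m(sidereal_period_s):
    """Kepler's third law: a^3 = G (M1 + M2) T^2 / (4 pi^2)."""
    gm = G * (M_EARTH + M_MOON)
    return (gm * sidereal_period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# Present-day sidereal month of 27.32166 days recovers ~3.84e8 m
a = earth_moon_distance_m(27.32166 * 86400)
```

A shorter sidereal month read from Paleozoic rhythmites would yield a smaller semi-major axis, which is how the slowing of lunar retreat is inferred.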
Fastlim. A fast LHC limit calculator
Energy Technology Data Exchange (ETDEWEB)
Papucci, Michele [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Sakurai, Kazuki [King' s College London (United Kingdom). Physics Dept.; Weiler, Andreas [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Zeune, Lisa [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2014-02-15
Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the exclusion p-value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straight-forward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide, and indicates the conventions and approximations used.
Fastlim: a fast LHC limit calculator
Energy Technology Data Exchange (ETDEWEB)
Papucci, Michele [University of Michigan, Michigan Center for Theoretical Physics, Ann Arbor, MI (United States); Sakurai, Kazuki [King' s College London, Physics Department, London (United Kingdom); Weiler, Andreas [CERN TH-PH Division, Meyrin (Switzerland); Zeune, Lisa [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)
2014-11-15
Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections (cross sections after event selection cuts) from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the CL{sub s} value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide and indicates the conventions and approximations used. (orig.)
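The core bookkeeping Fastlim automates, reconstructing a visible event yield from pre-calculated efficiencies and comparing it with the reported upper limit, can be sketched generically (all names and numbers below are illustrative, not Fastlim's API):

```python
def exclusion_ratio(cross_section_pb, efficiency, lumi_invpb,
                    upper_limit_events):
    """Visible events / observed upper limit; >= 1 means the model point
    is excluded by that signal region (conservative, single-region)."""
    visible_events = cross_section_pb * efficiency * lumi_invpb
    return visible_events / upper_limit_events

# Toy model point: 0.05 pb topology, 10% selection efficiency,
# 20 fb^-1 of data, 30-event observed upper limit in the signal region
r = exclusion_ratio(cross_section_pb=0.05, efficiency=0.10,
                    lumi_invpb=20000, upper_limit_events=30)
excluded = r >= 1.0
```

A full tool sums such contributions over all simplified topologies a model populates and takes the most constraining signal region, which is what makes the limit conservative.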
Calculation of fractional electron capture probabilities
Schoenfeld, E
1998-01-01
A 'Table of Radionuclides' is being prepared which will supersede the 'Table de Radionucleides' formerly issued by the LMRI/LPRI (France). In this effort it is desirable to have a uniform basis for calculating theoretical values of fractional electron capture probabilities. A table has been compiled which allows one to calculate conveniently and quickly the fractional probabilities P_K, P_L, P_M, P_N and P_O, their ratios and the assigned uncertainties for allowed and non-unique first forbidden electron capture transitions of known transition energy for radionuclides with atomic numbers from Z=3 to 102. These results have been applied to a total of 28 transitions of 14 radionuclides (7Be, 22Na, 51Cr, 54Mn, 55Fe, 68Ge, 68Ga, 75Se, 109Cd, 125I, 139Ce, 169Yb, 197Hg, 202Tl). The values are in reasonable agreement with measure...
Criticality Calculations with MCNP6 - Practical Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
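The "develop an input model" objective can be illustrated by a minimal KCODE input deck in the three-block MCNP style (title card, cell cards, surface cards, data cards, with blank-line delimiters). The geometry, material composition, and cycle parameters below are a rough Godiva-like sketch for illustration, not a validated benchmark specification.

```
Godiva-like bare HEU sphere, k-eff sketch (illustrative, not validated)
1 1 -18.74  -1   imp:n=1   $ HEU sphere, density in g/cm3
2 0          1   imp:n=0   $ outside world, particles killed

1 so 8.741                 $ sphere at origin, radius in cm

m1 92235 -0.94 92238 -0.06 $ U-235/U-238 weight fractions (approximate)
kcode 5000 1.0 50 250      $ 5000 hist/cycle, 50 inactive, 250 total cycles
ksrc 0 0 0                 $ initial fission source point
```

The kcode card's inactive cycles let the fission source converge before k-eff tallying begins, one of the Monte Carlo pitfalls the course covers.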
Fastlim: a fast LHC limit calculator.
Papucci, Michele; Sakurai, Kazuki; Weiler, Andreas; Zeune, Lisa
Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections (cross sections after event selection cuts) from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the CL{sub s} value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide and indicates the conventions and approximations used.
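The bookkeeping described above can be sketched in a few lines: the visible cross section in a signal region is a sum over simplified topologies of production cross section times branching ratio times tabulated efficiency, and the point is excluded when this exceeds the reported upper limit. The topology names, cross sections, efficiencies, and limit below are invented for illustration, not Fastlim tables or real ATLAS numbers.

```python
# Sketch of visible-cross-section reconstruction from pre-computed
# tables (simplified). All numbers are INVENTED for illustration.

def visible_xsec(topologies, efficiencies):
    """topologies: dict name -> sigma * BR (fb);
    efficiencies: dict name -> selection efficiency in this signal region."""
    return sum(sigma * efficiencies.get(name, 0.0)
               for name, sigma in topologies.items())

def exclusion_ratio(topologies, efficiencies, sigma_UL):
    """Visible cross section normalised to the reported 95% CL upper
    limit; a model point is excluded when the ratio exceeds 1."""
    return visible_xsec(topologies, efficiencies) / sigma_UL

topo = {"gluino_pair_qqN1": 12.0, "stop_pair_tN1": 3.5}   # fb, hypothetical
eff  = {"gluino_pair_qqN1": 0.08, "stop_pair_tN1": 0.02}  # hypothetical
ratio = exclusion_ratio(topo, eff, sigma_UL=0.9)          # fb, hypothetical
print(f"sigma_vis/sigma_UL = {ratio:.2f}")                # > 1 -> excluded
```

Because no event generation is run, scanning a parameter space reduces to repeated table lookups and this arithmetic, which is what makes the approach fast.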
Nonlinear calculating method of pile settlement
Institute of Scientific and Technical Information of China (English)
贺炜; 王桂尧; 王泓华
2008-01-01
To study methods of calculating the settlement at the top of extra-long, large-diameter piles, the relevant research results were summarized. The hyperbola model, a nonlinear load transfer function, was introduced to establish the basic differential equation of the load transfer method. Assuming that the displacement of the pile shaft is a high-order power series in buried depth, and by merging like terms and arranging the relevant coefficients, a power-series solution was obtained that accounts for nonlinear pile-soil interaction and the stratum properties of the soil. On the basis of this solution, with the load transfer depth determined by a settlement criterion at the pile tip, a method of matching the boundary conditions was proposed to obtain the load-settlement curve of the pile; the corresponding flow chart and the mathematical expressions of the boundary conditions are also given. Finally, load transfer methods based on both the two-broken-line model and the hyperbola model were applied to a real project. The correlation coefficients of the hyperbolic fitting curves were not less than 0.96, which shows that the hyperbola model represents the data faithfully and helps avoid subjective error. The calculated load-settlement curve agrees well with the measured one, which indicates that the method can be applied in engineering practice and makes feasible the design approach of limiting the bearing capacity by the settlement at the pile top.
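The numerical core of a load-transfer analysis with a hyperbolic transfer function, tau(s) = s/(a + b·s), can be sketched as a marching scheme: assume a tip displacement, take the tip load from a hyperbolic tip model, then integrate axial force and elastic compression segment by segment up the shaft. This is a generic discretized alternative to the paper's power-series solution, and all parameter values below are illustrative, not the paper's fitted values.

```python
# Sketch: load-transfer method with hyperbolic transfer functions.
# March from the pile tip to the head, accumulating skin friction and
# elastic shortening. All parameters are ILLUSTRATIVE placeholders.
import math

def head_response(s_tip, L=30.0, D=1.0, EA=2.0e7,   # length m, dia m, kN
                  a=0.002, b=0.01,                  # shaft hyperbola params
                  a_t=0.001, b_t=0.005, n=100):     # tip hyperbola params
    A_tip = math.pi * D**2 / 4
    dz = L / n
    P = s_tip / (a_t + b_t * s_tip) * A_tip   # tip load (kPa * m^2 = kN)
    s = s_tip
    for _ in range(n):                        # bottom segment -> top
        tau = s / (a + b * s)                 # mobilised shaft stress (kPa)
        P_new = P + tau * math.pi * D * dz    # add skin friction on segment
        s += 0.5 * (P + P_new) * dz / EA      # elastic compression of segment
        P = P_new
    return P, s                               # head load (kN), settlement (m)

# A few points of the pile-head load-settlement curve
for s_tip in (0.002, 0.01, 0.05):
    P0, s0 = head_response(s_tip)
    print(f"tip {s_tip*1000:.0f} mm -> head load {P0:.0f} kN, "
          f"head settlement {s0*1000:.2f} mm")
```

Sweeping the assumed tip displacement traces the full load-settlement curve, on which a pile-top settlement criterion can then be applied to read off the design bearing capacity.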