WorldWideScience

Sample records for stenotic area calculation

  1. Usefulness of radionuclide angiocardiography in predicting stenotic mitral orifice area

    International Nuclear Information System (INIS)

    Burns, R.J.; Armitage, D.L.; Fountas, P.N.; Tremblay, P.C.; Druck, M.N.

    1986-01-01

    Fifteen patients with pure mitral stenosis (MS) underwent high-temporal-resolution radionuclide angiocardiography for calculation of the ratio of peak left ventricular (LV) filling rate to mean LV filling rate (filling ratio). Whereas LV filling normally occurs in three phases, in MS it is more uniform. Thus, in 13 patients the filling ratio was below the normal range of 2.21 to 2.88 (p < 0.001). In 11 patients in atrial fibrillation, the filling ratio divided by mean cardiac cycle length and by LV ejection fraction correlated well (r = 0.85) with the mitral area derived from the modified Gorlin formula and correlated excellently with the echocardiographic mitral area (r = 0.95). Significant MS can therefore be detected by using radionuclide angiocardiography to calculate the filling ratio. In the absence of the confounding influence of atrial systole, the expression 0.14 (filling ratio divided by cardiac cycle length divided by LV ejection fraction) + 0.40 cm² enables accurate prediction of mitral area (±4%). Our data support the contention that the modified Gorlin formula, based on steady-state hemodynamics, provides less certain estimates of mitral area for patients with MS and atrial fibrillation, in whom echocardiography and radionuclide angiocardiography may be more accurate.
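
    Written out as an equation, the predictive relation quoted above reads as follows (a transcription of the stated expression with its regression constants; the units of the cycle-length term are not given in the record):

    $$ \mathrm{MVA}\ (\mathrm{cm}^2) \approx 0.14\,\frac{\text{filling ratio}}{\text{cycle length}\times\text{LVEF}} + 0.40 $$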

  2. Transfer Area Mechanical Handling Calculation

    International Nuclear Information System (INIS)

    Dianda, B.

    2004-01-01

    This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was supplemented by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use of these components or their

  3. Hysteroscopic management of a stenotic cervix.

    Science.gov (United States)

    Suen, Michael W H; Bougie, Olga; Singh, Sukhbir S

    2017-06-01

    To demonstrate an approach to the hysteroscopic management of a stenotic cervix. Step-by-step explanation of the techniques using video and animation (educational video). Academic tertiary-level referral center. Patients with cervical stenosis, including both reproductive-age and postmenopausal women. Gynecologists require intrauterine access for many procedures, but a stenotic cervix can obstruct surgery. Blind dilation of a stenotic cervix can lead to a cervical laceration or uterine perforation, with concomitant complications. The hysteroscopic management of a stenotic cervix includes optimizing the surgical environment, performing vaginoscopy and "no-touch" hysteroscopy, and revising the cervical canal. Revision can be performed using microscissors, micrograspers, or a cutting loop electrode. Partial cervical canal excision to aid hysteroscopic access should be reserved for women who are not interested in future pregnancy or those who are postmenopausal. Outpatient hysteroscopy uses smaller instruments and shows operative success with patient satisfaction. Although these techniques are demonstrated in an outpatient hysteroscopy setting, they can be adapted for use in an operating theater. The individual steps and approach are emphasized. Intrauterine access can be achieved with various techniques. The "see-and-treat" approach demonstrated in this video can allow access into the uterine cavity despite a stenotic cervix. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  4. Preliminary Study of Hemodynamic Distribution in Patient-Specific Stenotic Carotid Bifurcation by Image-Based Computational Fluid Dynamics

    International Nuclear Information System (INIS)

    Xue, Y.J.; Gao, P.Y.; Duan, Q.; Lin, Y.; Dai, C.B.

    2008-01-01

    Background: Regions prone to atherosclerosis, such as bends and bifurcations, tend to exhibit a certain degree of non-planarity or curvature, and these geometric features are known to strongly influence local flow patterns. Recently, computational fluid dynamics (CFD) has been used as a means of enhancing understanding of the mechanisms involved in atherosclerotic plaque formation and development. Purpose: To analyze flow patterns and hemodynamic distribution in the stenotic carotid bifurcation in vivo by combining CFD with magnetic resonance angiography (MRA). Material and Methods: Twenty-one patients with carotid atherosclerosis proved by digital subtraction angiography (DSA) and/or Doppler ultrasound underwent contrast-enhanced MR angiography of the carotid bifurcation on a 3.0T MR scanner. Hemodynamic variables and flow patterns of the carotid bifurcation were calculated and visualized by combining vascular imaging postprocessing with CFD. Results: In mildly stenotic cases, flow in the bulbs was much more streamlined, and the areas of weakly turbulent flow were reduced or absent. The corresponding areas of low wall shear stress (WSS) were likewise reduced or absent. As the extent of stenosis increased, stronger blood jets formed at the narrowing, and more prominent eddy flows and slow back flows were noted in the lee of the stenosis. Regions of elevated WSS were predicted at the stenosis and in the path of the downstream jet. Areas of low WSS were predicted on the leeward side of the stenosis, corresponding with the location of slowly turbulent flows. Conclusion: CFD combined with MRA can simulate flow patterns and calculate hemodynamic variables in stenotic carotid bifurcations as well as normal ones. It provides a new method to investigate the relationship of vascular geometry and flow conditions with atherosclerotic pathological changes.

  5. Clinical value of MSCTA in the interventional treatment of the initial origin stenotic segment of the internal carotid artery

    International Nuclear Information System (INIS)

    Qi Yueyong; Zou Liguang; Chen Lin; Sun Qingrong; Shuai Jie; Zhou Zheng; Huang Lan

    2007-01-01

    Objective: To assess the clinical value of MSCTA in the interventional treatment of stenosis of the initial (origin) segment of the internal carotid artery. Methods: Forty-two patients with stenosis of the initial segment of the internal carotid artery who underwent interventional treatment and MSCTA were analyzed retrospectively. Results: All 42 patients were diagnosed correctly by MSCTA. The percentages of stenotic area were measured from the multiplanar reconstruction (MPR) images of MSCTA, including mild stenosis (70%) in 30, obstruction in 4 (>100%), and normal in 18. Plaques and endoscopic views of the stenosis were delineated on MSCTA and CTVE. Conclusion: MSCTA is an accurate method for the assessment of the stenosis and plaques of the stenotic origin segment of the internal carotid artery. MSCTA can be used as a convenient follow-up modality for in-stent restenosis. (authors)
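
    For reference, a percentage of stenotic area measured on cross-sectional MPR images is conventionally defined against a reference lumen; the record does not state the exact reference segment used by the authors, so the following is the generic definition rather than their specific protocol:

    $$ \text{area stenosis (\%)} = \left(1 - \frac{A_{\text{stenotic lumen}}}{A_{\text{reference lumen}}}\right)\times 100 $$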

  6. Noninvasive Diagnostic Technique in Stenotic Coronary Atherosclerosis

    Directory of Open Access Journals (Sweden)

    A. Yu. Vasilyev

    2005-01-01

    Objective: to determine the sensitivity and specificity of combined stress echocardiography (EchoCG) using dipyridamole and dobutamine in diagnosing and defining the extent of stenotic coronary lesions in coronary heart disease (CHD) in a group of critically ill patients who are unable to perform physical exercise. Materials and methods: the study included 57 male patients with suspected acute coronary syndrome who underwent stress EchoCG using dipyridamole in high doses in combination with dobutamine, as well as coronary angiography. Results: stress EchoCG reached diagnostic criteria in all patients; 9 patients were found at coronary angiography to have no coronary lesion, while 34 and 14 patients had one-vessel and multivessel lesions, respectively. The sensitivity and specificity of combined stress EchoCG were significantly higher than those of EchoCG used in the diagnosis of CHD. Conclusion: stress EchoCG using dipyridamole in combination with dobutamine is a highly informative, safe, noninvasive technique for diagnosing CHD; it helps to identify patients with atypical acute coronary syndrome and to form a group of patients to be subjected to urgent coronarography and angiosurgical intervention. The pattern of segmental contractile disorders at the height of exercise during combined stress EchoCG makes it possible to define the site of stenotic coronary atherosclerosis with 97.3% sensitivity and to diagnose multivessel lesions with 100% sensitivity and 100% specificity.

  7. Evaluation of Facet Joint Arthrosis in Stenotic and Normal Lumbar Spines with MRI

    Directory of Open Access Journals (Sweden)

    Ebru Ozan

    2013-10-01

    Aim: To reveal the prevalence of lumbar facet joint arthrosis in normal and stenotic lumbar spines with magnetic resonance imaging. Material and Method: The study group consisted of 30 patients with complaints and findings of lower back pain, neurogenic claudication, and lumbar spinal stenosis detected at L3-4, L4-5 and/or L5-S1 with magnetic resonance imaging (cross-sectional area of the dural sac

  8. Color Doppler flow mapping of stenotic and regurgitant natural heart valves

    International Nuclear Information System (INIS)

    Nanda, N.C.

    1986-01-01

    Color Doppler echocardiography has found its widest application in the reliable detection and assessment of the severity of both atrioventricular and semilunar valve incompetence. The authors believe both the sensitivity and specificity of color Doppler for the detection of mitral and aortic regurgitation are very high in patients with adequate acoustic windows. In the 82 patients with proven mitral regurgitation studied, the best correlations with angiography were noted when the maximum or average regurgitant jet area, obtained by color Doppler from three standard 2-D echo planes (parasternal long- and short-axis and apical four-chamber views) and expressed as a percentage of the left atrial area, was considered. The criteria the authors used for assessment of tricuspid and pulmonary valve incompetence are similar to those used for mitral and aortic valve incompetence, but the lack of a good ''gold'' standard has hampered validation. The color Doppler technique also supplements conventional Doppler in the assessment of the severity of stenotic lesions by facilitating parallel alignment of the continuous-wave Doppler cursor line with the stenotic jet for accurate recording of maximal velocities and pressure gradients. The authors have found this method especially useful in the assessment of aortic stenosis. In conclusion, color Doppler flow mapping combined with conventional echocardiography provides, for the first time, a comprehensive noninvasive assessment of the severity of regurgitant and stenotic lesions.

  9. Cardiac Computed Tomography versus Echocardiography in the Assessment of Stenotic Rheumatic Mitral Valve.

    Science.gov (United States)

    Unal Aksu, Hale; Gorgulu, Sevket; Diker, Mustafa; Celik, Omer; Aksu, Huseyin; Ozturk, Derya; Kırıs, Adem; Kalkan, Ali Kemal; Erturk, Mehmet; Bakır, İhsan

    2016-03-01

    There are different clinical cardiac applications of dual-source computed tomography (DSCT). Here, we aimed to compare DSCT with transthoracic echocardiography (TTE) for evaluating the Wilkins score and planimetric mitral valve area (MVA) of a rheumatic stenotic mitral valve. We prospectively evaluated mitral valvular structure and function in 31 patients with known mitral stenosis undergoing electrocardiogram-gated, second-generation DSCT in our heart center for different indications. The mitral valve was evaluated using the Wilkins score, and the planimetric MVA was also assessed. We found a significant difference between MVAs determined by DSCT (average 1.42 ± 0.44 cm²) and MVAs determined by TTE (average 1.35 ± 0.43 cm²; difference 0.07 ± 0.16 cm²; P = 0.018). Linear regression analysis revealed a good correlation between the two techniques (r = 0.934; P < 0.0001). The limits of agreement for DSCT and TTE in the Bland-Altman analysis were ±0.31 cm². DSCT, using TTE as the reference, enabled good discrimination between mild and moderate-to-severe stenosis and had an area under the ROC curve of 0.967 (CI 0.912-1.023; P < 0.0001). Wilkins scores obtained by DSCT (7.51 ± 1.17, range 5-10) and TTE (8.16 ± 1.27, range 6-10) had a moderate correlation (r = 0.686; P < 0.0001). We found that planimetric MVA measurements assessed by DSCT were closely correlated with MVA calculations by TTE. A moderate correlation was observed for the Wilkins score. © 2015, Wiley Periodicals, Inc.
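
    As an illustration of the agreement statistics quoted above (mean difference 0.07 ± 0.16 cm², limits of agreement ±0.31 cm²), a minimal sketch of a Bland-Altman computation is shown below; the arrays are placeholder values, not the study data:

    ```python
    import numpy as np

    # Placeholder paired MVA measurements (cm^2); not the study data.
    mva_dsct = np.array([1.10, 1.45, 1.80, 1.25, 1.60])
    mva_tte  = np.array([1.05, 1.38, 1.70, 1.20, 1.52])

    diff = mva_dsct - mva_tte                     # per-patient difference
    bias = diff.mean()                            # mean difference (bias)
    sd   = diff.std(ddof=1)                       # SD of differences
    loa  = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

    print(f"bias = {bias:.3f} cm^2, limits of agreement = {loa[0]:.3f} to {loa[1]:.3f} cm^2")
    ```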

  10. Sequential stenotic strictures of the small bowel leading to obstruction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Small bowel obstructions (SBOs) are primarily caused by adhesions, hernias, neoplasms, or inflammatory strictures. Intraluminal strictures are an uncommon cause of SBO. This report describes our findings in a unique case of sequential, stenotic intraluminal strictures of the small intestine, discusses the differential diagnosis of intraluminal intestinal strictures, and reviews the literature regarding intraluminal pathology.

  11. A novel diagnostic parameter, foraminal stenotic ratio using three-dimensional magnetic resonance imaging, as a discriminator for surgery in symptomatic lumbar foraminal stenosis.

    Science.gov (United States)

    Yamada, Kentaro; Abe, Yuichiro; Satoh, Shigenobu; Yanagibashi, Yasushi; Hyakumachi, Takahiko; Masuda, Takeshi

    2017-08-01

    No previous studies have reported the radiological features of patients requiring surgery in symptomatic lumbar foraminal stenosis (LFS). This study aims to investigate the diagnostic accuracy of a novel technique, foraminal stenotic ratio (FSR), using three-dimensional magnetic resonance imaging for LFS at L5-S by comparing patients requiring surgery, patients with successful conservative treatment, and asymptomatic patients. This is a retrospective radiological comparative study. We assessed the magnetic resonance imaging (MRI) results of 84 patients (168 L5-S foramina) aged ≥40 years without L4-L5 lumbar spinal stenosis. The foramina were divided into three groups following standardized treatment: stenosis requiring surgery (20 foramina), stenosis with successful conservative treatment (26 foramina), and asymptomatic stenotic foramen (122 foramina). Foraminal stenotic ratio was defined as the ratio of the length of the stenosis to the length of the foramen on the reconstructed oblique coronal image, referring to perineural fat obliterations in whole oblique sagittal images. We also evaluated the foraminal nerve angle and the minimum nerve diameter on reconstructed images, and the Lee classification on conventional T1 images. The differences in each MRI parameter between the groups were investigated. To predict which patients require surgery, receiver operating characteristic (ROC) curves were plotted after calculating the area under the ROC curve. The FSR showed a stepwise increase when comparing asymptomatic, conservative, and surgical groups (mean, 8.6%, 38.5%, 54.9%, respectively). Only FSR was significantly different between the surgical and conservative groups (p=.002), whereas all parameters were significantly different comparing the symptomatic and asymptomatic groups. The ROC curve showed that the area under the curve for FSR was 0.742, and the optimal cutoff value for FSR for predicting a surgical requirement in symptomatic patients was 50
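
    As defined in the abstract, the foraminal stenotic ratio is simply the stenotic length expressed as a fraction of the foraminal length on the reconstructed oblique coronal image:

    $$ \mathrm{FSR} = \frac{L_{\text{stenosis}}}{L_{\text{foramen}}} \times 100\% $$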

  12. Fluid-structure interaction analysis of the flow through a stenotic aortic valve

    Science.gov (United States)

    Maleki, Hoda; Labrosse, Michel R.; Durand, Louis-Gilles; Kadem, Lyes

    2009-11-01

    In Europe and North America, aortic stenosis (AS) is the most frequent valvular heart disease and, after systemic hypertension and coronary artery disease, the most frequent cardiovascular disease. Understanding blood flow through an aortic stenosis and developing new, accurate, non-invasive diagnostic parameters is therefore of primary importance. However, simulating such flows is highly challenging. In this study, we considered the interaction between blood flow and the valve leaflets and compared the results obtained in healthy valves with stenotic ones. One effective method to model the interaction between the fluid and the structure is the Arbitrary Lagrangian-Eulerian (ALE) approach. Our two-dimensional model includes appropriate nonlinear and anisotropic materials. It is loaded during the systolic phase by applying pressure curves to the fluid domain at the inflow. To model the calcified stenotic valve, calcium is added on the aortic side of the valve leaflets. Such simulations allow us to determine the effective orifice area of the valve, one of the main parameters used clinically to evaluate the severity of an AS, and to correlate it with changes in the structure of the leaflets.

  13. New Products and Technologies, Based on Calculations Developed Areas

    Directory of Open Access Journals (Sweden)

    Gheorghe Vertan

    2013-09-01

    According to statistics, only countries that possess and intensively exploit large natural resources and/or mass-produce and export products based on patented inventions are currently prosperous and have a high GDP per capita. Without great natural wealth, and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from leading domestic experience, some of it patented, new and competitive technologies and patentable, exportable products can be developed based on exact calculations of developed surface areas, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.

  14. East Area Irradiation Test Facility: Preliminary FLUKA calculations

    CERN Document Server

    Lebbos, E; Calviani, M; Gatignon, L; Glaser, M; Moll, M; CERN. Geneva. ATS Department

    2011-01-01

    In the framework of the Radiation to Electronics (R2E) mitigation project, the testing of electronic equipment in a radiation field similar to the one occurring in the LHC tunnel and shielded areas, in order to study its sensitivity to single event upsets (SEU), is one of the main topics. Adequate irradiation test facilities are therefore required, and one installation is under consideration in the framework of the PS East Area renovation activity. FLUKA Monte Carlo calculations were performed in order to estimate the radiation field which could be obtained in a mixed-field facility using the slowly extracted 24 GeV/c proton beam from the PS. The prompt ambient dose equivalent as well as the residual dose equivalent rate after operation were also studied, and the results of the simulations are presented in this report.

  15. In vitro model of platelet aggregation in stenotic arteries

    International Nuclear Information System (INIS)

    Morley, D.; Santamore, W.P.

    1988-01-01

    Clinical and experimental evidence suggests a strong relationship between arterial stenosis, platelet aggregation, and subsequent thrombus formation. To facilitate the study of platelet accumulation in stenotic arteries, we developed an in vitro preparation. Arterial segments were perfused with whole citrated blood. A stenosis was created by applying an external plastic constrictor to the artery. Platelet accumulation within the stenosis was assessed by scanning electron microscopy and by radioactive counts from indium-111-labeled platelets. Utilizing this preparation, 30 carotid arterial segments from 10 mongrel dogs were perfused at 100 mmHg for 15 min. In 10 arteries without a stenosis, scanning electron microscopy and radioactive counts demonstrated little platelet accumulation. In contrast, extensive platelet aggregation was observed in 10 arteries with stenoses. Moreover, in 10 stenotic arteries exposed to the thromboxane mimetic U46619 (Upjohn Diagnostic Group), scanning electron microscopy and radioactive counts demonstrated a significant increase in platelet deposition. Conversely, we demonstrated a diminution of platelet accumulation in stenosed arterial segments exposed to the prostacyclin analogue platelet inhibitor Iloprost. The in vitro preparation allows precise control of hemodynamic variables and makes it possible to perform multiple tests on segments of the same vessel from the same animal.

  16. Similar degree of intimal hyperplasia in surgically detected stenotic and nonstenotic arteriovenous fistula segments: a preliminary report.

    Science.gov (United States)

    Duque, Juan C; Tabbara, Marwan; Martinez, Laisel; Paez, Angela; Selman, Guillermo; Salman, Loay H; Velazquez, Omaida C; Vazquez-Padron, Roberto I

    2018-04-01

    Intimal hyperplasia has been historically associated with improper venous remodeling and stenosis after creation of an arteriovenous fistula. Recently, however, we showed that intimal hyperplasia by itself does not explain the failure of maturation of 2-stage arteriovenous fistulas. We seek to evaluate whether intimal hyperplasia plays a role in the development of focal stenosis of an arteriovenous fistula. This study compares intimal hyperplasia lesions in stenotic and nearby nonstenotic segments collected from the same arteriovenous fistula. Focal areas of stenosis were detected in the operating room in patients (n = 14) undergoing the second-stage vein transposition procedure. The entire vein was inspected, and areas of stenosis were visually located with the aid of manual palpation and hemodynamic changes in the vein peripheral and central to the narrowing. Stenotic and nonstenotic segments were documented by photography before tissue collection (14 tissue pairs). Intimal area and thickness, intima-media thickness, and intima-to-media area ratio were measured in hematoxylin and eosin-stained cross-sections, followed by pairwise statistical comparisons. The intimal area in stenotic and nonstenotic segments ranged from 1.25 to 11.61 mm² and 1.29 to 5.81 mm², respectively. There was no significant difference between these 2 groups (P = .26). Maximal intimal thickness (P = .22), maximal intima-media thickness (P = .13), and intima-to-media area ratio (P = .73) were also similar between both types of segments. This preliminary study indicates that postoperative intimal hyperplasia by itself is not associated with the development of focal venous stenosis in 2-stage fistulas. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Surgery for postintubation tracheal and tracheosubglottic stenotic lesions

    International Nuclear Information System (INIS)

    Ashour, M.; Al-Kattan, K.; Rafay, M.A.; El-Bakry, A.K.; El-Dawlatly, A.; Naguib, M.; Seraj, M.; Joharjy, I.; Al-Serhani, A.

    1996-01-01

    Postintubation tracheal stenosis is a recognized problem. Although its incidence has recently decreased, it is still a difficult complication to treat. We have reviewed our experience with 10 patients with tracheal stenosis over the last five years, between 1990 and 1995. There were seven male and three female patients with an average age of 14.2 ± 4 years (range 6 to 48 years). Resection and reconstruction with primary anastomosis was performed in seven patients, while conservative treatment with dilation was performed in two patients. One patient refused surgery. Operations performed included resection of the tracheocricoid segment with tracheothyroid anastomosis (N = 3) and tracheal resection with end-to-end anastomosis (N = 4). The resected airway ranged from 3 cm to 6 cm. In view of the intense inflammatory and fibrotic process in and around the stenotic segment, the practice of tracheostomy for the relief of postintubation acute tracheal obstruction should not be taken lightly, as it not only adds to the severity of the inflammatory process but also increases the length of the tracheal segment to be resected. Postoperatively, all patients were extubated; this was accomplished by the end of surgery in six patients, while the seventh patient was extubated three weeks later. There was no mortality in this series. When normal functional activity and airway patency were taken as the two parameters to judge the outcome of surgery, results were good in six (86%) patients and satisfactory in one. These results support the validity of the one-stage reconstruction approach as one alternative for the treatment of postintubation tracheal and tracheosubglottic stenotic lesions. (author)

  18. Non-stenotic intracranial arteries have atherosclerotic changes in acute ischemic stroke patients: a 3T MRI study

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woo Jin; Choi, Hyun Seok; Jang, Jinhee; Sung, Jinkyeong; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo [The Catholic University of Korea, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Seoul (Korea, Republic of); Kim, Tae-Won; Koo, Jaseong [The Catholic University of Korea, Department of Neurology, College of Medicine, Seoul (Korea, Republic of); Shin, Yong Sam [The Catholic University of Korea, Department of Neurosurgery, College of Medicine, Seoul (Korea, Republic of)]

    2015-10-15

    The aim of this study is to evaluate the degree of atherosclerotic change in intracranial arteries by assessing arterial wall thickness using T1-weighted 3D turbo spin echo (3D-TSE) and time-of-flight MR angiography (TOF-MRA) in patients with acute ischemic stroke as compared with unaffected controls. Thirty-three patients with acute ischemic stroke and 36 control patients were analyzed. Acute ischemic stroke patients were divided according to the TOAST classification. At both distal internal carotid arteries and the basilar artery without stenosis, TOF-MRA was used to select a non-stenotic portion of the assessed arteries. 3D-TSE was used to measure the area including the lumen and wall (Area_Outer) and the luminal area (Area_Inner). The area of the vessel wall (Area_VW) of the assessed intracranial arteries and the ratio index (RI) of each patient were determined. Area_Inner, Area_Outer, Area_VW, and RI showed good inter-observer reliability and excellent intra-observer reliability. Area_Inner did not significantly differ between stroke patients and controls (P = 0.619). However, Area_Outer, Area_VW, and RI were significantly larger in stroke patients (P < 0.001). The correlation coefficient between Area_Inner and Area_Outer was higher in the controls (r = 0.918) than in large-vessel-disease patients (r = 0.778). The RI of large-vessel-disease patients was significantly higher than that of the normal control, small-vessel-disease, and cardioembolic groups. In patients with acute ischemic stroke, wall thickening and positive remodeling are evident in non-stenotic intracranial arteries. This change is more definite in the stroke subtype related to atherosclerosis than in other subtypes that are not. (orig.)
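
    On the natural reading of the abstract, the vessel-wall area follows from the two measured areas (the record does not spell out how the ratio index is normalized, so only the wall-area relation is shown):

    $$ \mathrm{Area}_{\mathrm{VW}} = \mathrm{Area}_{\mathrm{Outer}} - \mathrm{Area}_{\mathrm{Inner}} $$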

  19. Medicare Data to Calculate Your Primary Service Areas

    Data.gov (United States)

    U.S. Department of Health & Human Services — The following data is being made available to applicants to the Medicare Shared Savings Program (Shared Savings Program), in order to allow them to calculate their...

  20. Preoperative 3D FSE T1-Weighted MR Plaque Imaging for Severely Stenotic Cervical ICA: Accuracy of Predicting Emboli during Carotid Endarterectomy

    Directory of Open Access Journals (Sweden)

    Yasushi Ogasawara

    2016-10-01

    The aim of the present study was to determine whether preoperative three-dimensional (3D) fast spin-echo (FSE) T1-weighted magnetic resonance (MR) plaque imaging for severely stenotic cervical carotid arteries could accurately predict the development of artery-to-artery emboli during exposure of the carotid arteries in carotid endarterectomy (CEA). Seventy-five patients underwent preoperative MR plaque imaging and CEA under transcranial Doppler ultrasonography of the ipsilateral middle cerebral artery. On reformatted axial MR image slices showing the maximum plaque occupation rate (POR) and maximum plaque intensity for each patient, the contrast ratio (CR) was calculated by dividing the internal carotid artery plaque signal intensity by the sternocleidomastoid muscle signal intensity. For all patients, the area under the receiver operating characteristic curve (AUC), used to discriminate between the presence and absence of microembolic signals, was significantly greater for the CR on the axial image with maximum plaque intensity (CRmax intensity) (0.941) than for that with the maximum POR (0.885) (p < 0.05). For the 32 patients in whom both the maximum POR and the maximum plaque density were identified, the AUCs for the CR were 1.000. Preoperative 3D FSE T1-weighted MR plaque imaging accurately predicts the development of artery-to-artery emboli during exposure of the carotid arteries in CEA.
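
    As defined in the abstract, the contrast ratio on each reformatted axial slice is simply:

    $$ \mathrm{CR} = \frac{SI_{\text{ICA plaque}}}{SI_{\text{sternocleidomastoid muscle}}} $$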

  1. 4D spiral imaging of flows in stenotic phantoms and subjects with aortic stenosis.

    Science.gov (United States)

    Negahdar, M J; Kadbi, Mo; Kendrick, Michael; Stoddard, Marcus F; Amini, Amir A

    2016-03-01

    The utility of four-dimensional (4D) spiral flow in imaging of stenotic flows in both phantoms and human subjects with aortic stenosis is investigated. The method performs 4D flow acquisitions through a stack of interleaved spiral k-space readouts. Relative to conventional 4D flow, which performs Cartesian readout, the method has reduced echo time. Thus, reduced flow artifacts are observed when imaging high-speed stenotic flows. Four-dimensional spiral flow also provides significant savings in scan times relative to conventional 4D flow. In vitro experiments were performed under both steady and pulsatile flows in a phantom model of severe stenosis (one inch diameter at the inlet, with 87% area reduction at the throat of the stenosis) while imaging a 6-cm axial extent of the phantom, which included the Gaussian-shaped stenotic narrowing. In all cases, gradient strength and slew rate for standard clinical acquisitions, and identical field of view and resolution were used. For low steady flow rates, quantitative and qualitative results showed a similar level of accuracy between 4D spiral flow (echo time [TE] = 2 ms, scan time = 40 s) and conventional 4D flow (TE = 3.6 ms, scan time = 1:01 min). However, in the case of high steady flow rates, 4D spiral flow (TE = 1.57 ms, scan time = 38 s) showed better visualization and accuracy as compared to conventional 4D flow (TE = 3.2 ms, scan time = 51 s). At low pulsatile flow rates, a good agreement was observed between 4D spiral flow (TE = 2 ms, scan time = 10:26 min) and conventional 4D flow (TE = 3.6 ms, scan time = 14:20 min). However, in the case of high flow-rate pulsatile flows, 4D spiral flow (TE = 1.57 ms, scan time = 10:26 min) demonstrated better visualization as compared to conventional 4D flow (TE = 3.2 ms, scan time = 14:20 min). The feasibility of 4D spiral flow was also investigated in five normal volunteers and four subjects with mild-to-moderate aortic stenosis. The approach achieved TE = 1.68 ms and scan
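
    For a circular cross-section, the quoted 87% area reduction fixes the throat-to-inlet diameter ratio; this is a simple geometric consequence rather than a value stated in the record:

    $$ 1 - \left(\frac{d_{\text{throat}}}{d_{\text{inlet}}}\right)^{2} = 0.87 \;\Rightarrow\; \frac{d_{\text{throat}}}{d_{\text{inlet}}} = \sqrt{0.13} \approx 0.36, $$

    so the throat diameter is roughly 0.36 inch for the one-inch inlet.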

  2. Calculating the Areas of Polygons with a Smartphone Light Sensor

    Science.gov (United States)

    Kapucu, Serkan; Simsek, Mertkan; Öçal, Mehmet Fatih

    2017-01-01

    This study explores finding the areas of polygons with a smartphone light sensor. A square and an irregular pentagon were chosen as our polygons. During the activity, the LED light was placed at the vertices of our polygons, and the illuminance values of this LED light were detected by the smartphone light sensor. The smartphone was placed on a…

  3. Atmospheric dispersion calculations in a low mountain area

    International Nuclear Information System (INIS)

    Schmid, S.

    1987-01-01

    The applicability of the Gaussian model for assessing the short-range environmental exposure from an emission source in topographically inhomogeneous terrain is tested. An atmospheric dispersion model of general applicability is used, which is based on results of hydrodynamic flow models. Approaches for turbulence and radiation parameterization are tested by means of a vertically one-dimensional flow model. In order to introduce the effects of the topography into the boundary-layer simulations, the three-dimensional mesoscale model (Ulrich) is applied. The two models are verified by way of episode simulation using wind profile measurements. The differences in the models' results are intended to show the topographic influence. The calculated flow fields serve as input to a random-walk model applied for calculating ground-level concentration fields in the vicinity of an emission source. The Gaussian model underestimates the pollution under stable conditions. Convective conditions may change the effective source height through vertical effects caused by orography which, depending on the direction of the free flow, leads to an increase or decrease of the pollutant concentration at ground level. With the more complex dispersion model, the concentration maxima under stable conditions are closer to the source by a factor of five, and under unstable conditions about one and a half times more remote. (orig./HP)
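
    For orientation, the Gaussian model referred to above is usually written, for a continuous point source with ground reflection, in the standard textbook form below; the exact parameterization used in the study is not given in the record:

    $$ C(x,y,z) = \frac{Q}{2\pi u\,\sigma_y \sigma_z}\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)+\exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right] $$

    with Q the source strength, u the mean wind speed, H the effective source height, and σ_y, σ_z the crosswind and vertical dispersion parameters.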

  4. Value of coronary stenotic flow velocity acceleration on the prediction of long-term improvement in functional status after angioplasty

    NARCIS (Netherlands)

    Albertal, M.; Regar, E.; Piek, J. J.; van Langenhove, G.; Carlier, S. G.; Thury, A.; Sianos, G.; Boersma, E.; de Bruyne, B.; di Mario, C.; Serruys, P. W.

    2001-01-01

    The coronary flow velocity acceleration at the stenotic site (SVA), defined as a ≥50% increase in resting stenotic velocity when compared with the reference segment, has been shown to be highly sensitive and specific for the diagnosis of a hemodynamically significant stenosis. In this study,
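
    Restating the definition above, a site is classified as showing stenotic flow velocity acceleration when the resting velocity at the stenosis exceeds that of the reference segment by at least 50%:

    $$ \frac{v_{\text{stenosis}} - v_{\text{reference}}}{v_{\text{reference}}} \ge 0.5 $$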

  5. Usefulness of BMIPP SPECT to evaluate myocardial viability, contractile reserve and coronary stenotic progression after reperfusion in acute myocardial infarction

    International Nuclear Information System (INIS)

    Katsunuma, Eita; Kurokawa, Shingo; Takahashi, Motoi; Fukuda, Naoto; Kurosawa, Toshiro; Izumi, Tohru

    2001-01-01

    Using combined ¹²³I-BMIPP (BMIPP), ²⁰¹Tl (Tl), and ⁹⁹ᵐTc-PYP (PYP) myocardial SPECT imaging, risk areas of acute myocardial infarction were documented in the acute stage, and these images were then evaluated for how well they reflected muscle viability, contractile reserve, and coronary stenotic progression subsequent to reperfusion therapy. Patients who had experienced only a first attack of myocardial infarction were enrolled. In total, 36 cases in whom the occluded artery had been successfully reperfused were examined during the past year. They had no significant vessel disease except for the single culprit artery. The patients comprised 32 men and 4 women. The mean age was 59.5 years. All patients underwent coronary angiography and left ventricular (LV) angiography in the emergency room. BMIPP/Tl and PYP myocardial SPECT were conducted in the acute stage and the chronic stage. In the chronic stage, LV angiography was repeated to assess the improvement of LV wall motion. The response to postextrasystolic potentiation (PESP) testing was used to estimate myocardial contractile reserve. The risk area of acute myocardial infarction (AMI) was documented by reduced BMIPP accumulation. The area of reduced BMIPP accumulation was larger than that of PYP accumulation. A BMIPP/Tl discrepancy and PYP accumulation were documented to assess myocardial viability. Both improvement in LV wall motion and augmentation of the PESP response were more closely related to a BMIPP/Tl discrepancy in the presence or absence of PYP accumulation. Therefore, it would be possible to evaluate myocardial viability and contractile reserve by the BMIPP/Tl discrepancy. In patients with good viability, it is important to predict whether there is coronary stenotic progression or not. In this study, we demonstrated that most patients with improved BMIPP images had no significant progression at the site of intervention. Serial observation of BMIPP images from the acute stage to the chronic stage might

  6. Accuracy of detecting stenotic changes on coronary cineangiograms using computer image processing

    International Nuclear Information System (INIS)

    Sugahara, Tetsuo; Kimura, Koji; Maeda, Hirofumi.

    1990-01-01

    To accurately interpret stenotic changes on coronary cineangiograms, an automatic method of detecting stenotic lesions using computer image processing was developed. First, tracing of the artery was performed. The vessel edges were then determined by unilateral Gaussian fitting. The stenotic change was detected on the basis of the reference diameter estimated by Hough transformation. This method was evaluated in 132 segments of 27 arteries in 18 patients. Three observers carried out visual interpretation and computer-aided interpretation. The rate of detection by visual interpretation was 6.1, 28.8, and 20.5%, and by computer-aided interpretation, 39.4, 39.4, and 45.5%. With computer-aided interpretation, the agreement between any two observers on lesions and non-lesions was 40.2% and 59.8%, respectively. Therefore, visual interpretation tended to underestimate the stenotic changes on coronary cineangiograms. We think that computer-aided interpretation increases the reliability of diagnosis on coronary cineangiograms. (author)
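
    Once a reference diameter is available, stenosis severity is conventionally quantified as a percent diameter reduction. The sketch below illustrates that step only; the function name, the placeholder profile, and the externally supplied reference diameter are illustrative assumptions, and the Hough-based reference estimation described above is not reproduced:

    ```python
    import numpy as np

    def percent_diameter_stenosis(diameter_profile_mm, reference_diameter_mm):
        """Percent diameter stenosis of a vessel segment.

        diameter_profile_mm: measured lumen diameters along the segment (mm).
        reference_diameter_mm: reference ("normal") diameter, supplied directly here;
        the paper estimates it with a Hough transformation.
        """
        minimal_lumen = float(np.min(diameter_profile_mm))
        return (1.0 - minimal_lumen / reference_diameter_mm) * 100.0

    # Hypothetical diameter profile (mm), not data from the study.
    profile = np.array([3.1, 3.0, 2.8, 1.6, 1.9, 2.9, 3.0])
    print(f"{percent_diameter_stenosis(profile, reference_diameter_mm=3.0):.1f}% diameter stenosis")
    ```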

  7. A biomedical solicitation examination of nanoparticles as drug agents to minimize the hemodynamics of a stenotic channel

    Science.gov (United States)

    Ijaz, S.; Nadeem, S.

    2017-11-01

    A theoretical examination is presented in this analysis to study the flow of a bio-nanofluid through a curved stenotic channel. The curved channel is considered with an overlapping stenotic region. The effect of convective conditions is incorporated to discuss the heat transfer characteristics. The mathematical problem of the curved stenotic channel is formulated and then solved using an exact technique. To discuss the hemodynamics of the curved stenotic channel, the expression for resistance to blood flow is evaluated by dividing the channel into pre-stenotic, stenotic, and post-stenotic regions. In this investigation, gold, silver, and copper nanoparticles are used as drug carriers. The graphical results reveal that, with an increase in nanoparticle concentration, the hemodynamic effects of the stenosed curved channel are reduced, and that Au nanoparticles used as a drug carrier are more effective at minimizing hemodynamics than Ag and Cu nanoparticles. This analysis provides valuable theoretical information for nanoparticles used as drug agents in the field of bio-inspired applications.

  8. Increase in stenotic resistance following a brief coronary occlusion in the anesthetized open-chest dog.

    OpenAIRE

    Saito, Daiji; Yasuhara, Koichiro; Takeda, Hikaru; Hyodo, Tatuo; Yamada, Nobuyuki; Uchida, Toshiaki; Haraoka, Shoichi; Nagashima, Hideo

    1982-01-01

    Changes in the stenotic resistance of a coronary artery following brief coronary occlusion were studied in the anesthetized open-chest dog. A critical coronary stenosis was constructed by tying a thick string around the circumflex coronary artery (LCx) near its origin. The LCx was occluded for 5, 10, 15, 20, and 30 seconds, with and without coronary stenosis, and the reactive hyperemia was then observed. In the absence of the stenosis, resistance of the segment of the large coronary artery remained ...

  9. Diffuse stenotic change in large intracranial arteries following irradiation therapy for medulloblastoma

    International Nuclear Information System (INIS)

    Yamakami, Iwao; Sugaya, Yuichi; Sato, Masanori; Osato, Katunobu; Yamaura, Akira; Makino, Hiroyasu.

    1990-01-01

    We report a case of a patient who developed a diffuse stenotic change in the large intracranial arteries and repeated episodes of cerebral infarction after irradiation therapy for medulloblastoma. A three-year-old girl underwent subtotal removal of a cerebellar medulloblastoma and subsequent irradiation therapy to the whole brain and spine (30 Gy to the whole brain, 20 Gy to the local brain, and 25 Gy to the whole spine). Two years later, she again underwent surgery and irradiation therapy because a recurrence of medulloblastoma had manifested itself in the frontal lobe (40 Gy to the whole brain, 20 Gy to the local brain, and 25 Gy to the whole spine). One and a half years after the second irradiation, she started suffering from frequent and refractory cerebral ischemic attacks. Cerebral angiography revealed diffuse narrowing and multifocal stenoses in the bilateral anterior and middle cerebral arteries. Computerized tomography demonstrated multiple cerebral infarctions. Her neurological condition deteriorated because of recurring strokes, and she died at ten years of age. Most of the reported patients who developed stenotic arteriopathy were children in the first decade of life who had been irradiated for parasellar brain tumors of low malignancy. Stenotic arteriopathy after irradiation has rarely been recognized in patients with malignant brain tumors. However, life expectancy is increasing even for those with malignant brain tumors, and this may make post-irradiation stenotic arteriopathy more commonly recognized in such patients. Careful irradiation and subsequent angiographic examination should be required even in patients with malignant brain tumors. (author)

  10. Mathematical Modeling of Bingham Plastic Model of Blood Flow Through Stenotic Vessel

    OpenAIRE

    S.R. Verma

    2014-01-01

    The aim of the present paper is to study the axially symmetric, laminar, steady, one-dimensional flow of blood through a narrow stenotic vessel. Blood is considered as a Bingham plastic fluid. Analytical results such as the pressure drop, resistance to flow, and wall shear stress have been obtained. The effects of yield stress and the shape of the stenosis on resistance to flow and wall shear stress have been discussed through tables and graphs. It has been shown that resistance to flow and th...
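
    For reference, the Bingham plastic model named above relates shear stress and shear rate through a yield stress; in its standard one-dimensional form (the paper's own notation is not given in the record):

    $$ \tau = \tau_y + \mu_B\,\dot{\gamma} \quad \text{for } \tau \ge \tau_y, \qquad \dot{\gamma} = 0 \quad \text{for } \tau < \tau_y, $$

    where τ_y is the yield stress and μ_B the plastic viscosity; the plug-flow core in the stenotic vessel corresponds to the region where the shear stress stays below τ_y.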

  11. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry, E-mail: zzhong@anl.gov, E-mail: alby@anl.gov, E-mail: gohar@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, IL (United States)

    2011-07-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced from the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have
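
    The area-ratio (Sjöstrand) relation referred to above is commonly written as follows, with A_p and A_d the time-integrated prompt and delayed responses over the pulse period (sign conventions vary between references):

    $$ \frac{\rho}{\beta_{\mathrm{eff}}} = -\frac{A_p}{A_d} $$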

  12. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    International Nuclear Information System (INIS)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry

    2011-01-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced from the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have been

  13. Early superoxide scavenging accelerates renal microvascular rarefaction and damage in the stenotic kidney.

    Science.gov (United States)

    Kelsen, Silvia; He, Xiaochen; Chade, Alejandro R

    2012-08-15

    Renal artery stenosis (RAS), the main cause of chronic renovascular disease (RVD), is associated with significant oxidative stress. Chronic RVD induces renal injury partly by promoting renal microvascular (MV) damage and blunting MV repair in the stenotic kidney. We tested the hypothesis that superoxide anion plays a pivotal role in MV dysfunction, reduction of MV density, and progression of renal injury in the stenotic kidney. RAS was induced in 14 domestic pigs and observed for 6 wk. Seven RAS pigs were chronically treated with the superoxide dismutase mimetic tempol (RAS+T) to reduce oxidative stress. Single-kidney hemodynamics and function were quantified in vivo using multidetector computed tomography (CT), and renal MV density was quantified ex vivo using micro-CT. Expression of angiogenic, inflammatory, and apoptotic factors was measured in renal tissue, and renal apoptosis and fibrosis were quantified in tissue sections. The degree of RAS and blood pressure were similarly increased in RAS and RAS+T. Renal blood flow (RBF) and glomerular filtration rate (GFR) were reduced in the stenotic kidney (280.1 ± 36.8 and 34.2 ± 3.1 ml/min, P < 0.05 vs. control). RAS+T kidneys showed preserved GFR (58.5 ± 6.3 ml/min, P = not significant vs. control) but a similar decrease in RBF (293.6 ± 85.2 ml/min) and a further decrease in MV density compared with RAS. These changes were accompanied by blunted angiogenic signaling and increased apoptosis and fibrosis in the stenotic kidney of RAS+T compared with RAS. The current study shows that tempol administration provided limited protection to the stenotic kidney. Despite preserved GFR, renal perfusion was not improved by tempol, and MV density was further reduced compared with untreated RAS, associated with increased renal apoptosis and fibrosis. These results suggest that a tight balance of the renal redox status is necessary for a normal MV repair response to injury, at least at the early stage of RVD, and raise caution

  14. Optimization Strategies for Bruch's Membrane Opening Minimum Rim Area Calculation: Sequential versus Simultaneous Minimization.

    Science.gov (United States)

    Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M

    2017-10-24

    To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral-domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim areas were calculated independently. Outcome parameters included the BMO globally optimized minimum rim area (BMO-gMRA) and the sector-wise optimized BMO minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); the mean difference was 0.04 mm² (P < 0.001). In all sectors, the parameters differed by 3.0-4.2%. In receiver operating characteristic analysis, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate the BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within the calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.
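
    To make the sequential-versus-global distinction concrete, below is a deliberately simplified, per-point reading of the "sequential" strategy: each BMO boundary point is paired independently with its nearest ILM point, and the rim area is accumulated from those minimum widths. The global variant described above additionally optimizes across neighboring locations, which is not reproduced here; the point arrays, toy geometry, and area approximation are illustrative assumptions, not the authors' published algorithm:

    ```python
    import numpy as np

    def sequential_min_rim_area(bmo_points, ilm_points):
        """Per-point ("sequential") minimum rim area approximation.

        bmo_points: (N, 3) array of BMO boundary points, ordered around the opening.
        ilm_points: (M, 3) array of points sampled on the internal limiting membrane.
        Each BMO point is paired independently with its nearest ILM point; the rim
        area is approximated as the sum of (minimum width x local BMO arc length).
        Illustrative simplification only, not the published algorithm.
        """
        dists = np.linalg.norm(bmo_points[:, None, :] - ilm_points[None, :, :], axis=2)
        min_width = dists.min(axis=1)

        nxt = np.roll(bmo_points, -1, axis=0)
        prv = np.roll(bmo_points, 1, axis=0)
        arc = 0.5 * (np.linalg.norm(nxt - bmo_points, axis=1) +
                     np.linalg.norm(bmo_points - prv, axis=1))
        return float(np.sum(min_width * arc))

    # Toy usage: a circular BMO contour (radius 0.9 mm) and a flat ILM patch 0.3 mm above it.
    theta = np.linspace(0.0, 2.0 * np.pi, 48, endpoint=False)
    bmo = np.c_[0.9 * np.cos(theta), 0.9 * np.sin(theta), np.zeros_like(theta)]
    gx, gy = np.meshgrid(np.linspace(-2.0, 2.0, 40), np.linspace(-2.0, 2.0, 40))
    ilm = np.c_[gx.ravel(), gy.ravel(), np.full(gx.size, 0.3)]
    print(f"approximate rim area: {sequential_min_rim_area(bmo, ilm):.3f} mm^2")
    ```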

  15. A system for 3D representation of burns and calculation of burnt skin area.

    Science.gov (United States)

    Prieto, María Felicidad; Acha, Begoña; Gómez-Cía, Tomás; Fondón, Irene; Serrano, Carmen

    2011-11-01

    In this paper, a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of a patient, adapted to age, weight, gender, and constitution, is created. On this 3D model, physicians represent both the burns and the burn depth, allowing the burnt surface area to be calculated automatically by the system. Each patient's model, as well as photographs and burn area estimations, can be stored. Therefore, these data can be included in the patient's clinical records for further review. Validation of this system was performed. In a first experiment, artificial paper patches of known size were attached to different parts of the body in 37 volunteers. A panel of 5 experts estimated the extent of the patches using the Rule of Nines. In addition, our system estimated the area of the "artificial burn". To test the null hypothesis, Student's t-test was applied to the collected data. In addition, the intraclass correlation coefficient (ICC) was calculated, and a value of 0.9918 was obtained, demonstrating that the reliability of the program in calculating the area is 99%. In a second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines. A comparison between these two measuring methods was performed via Student's t-test and the ICC. The hypothesis of null difference between both measures holds only for deep dermal burns, and the ICC is significantly different, indicating that area estimation using classical techniques can result in a wrong diagnosis of the burnt surface. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
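
    For context, the Rule of Nines used as the comparison method assigns fixed body-surface percentages to body regions. A minimal sketch with the standard adult values is shown below; pediatric charts differ, the region names and example are assumptions for illustration, and this is not the BAI system's calculation:

    ```python
    # Standard adult Rule of Nines percentages of total body surface area (TBSA).
    RULE_OF_NINES = {
        "head_and_neck": 9.0,
        "anterior_trunk": 18.0,
        "posterior_trunk": 18.0,
        "right_arm": 9.0,
        "left_arm": 9.0,
        "right_leg": 18.0,
        "left_leg": 18.0,
        "perineum": 1.0,
    }

    def tbsa_burned(fraction_burned_by_region):
        """Estimate %TBSA burned from the fraction of each region involved."""
        return sum(RULE_OF_NINES[r] * f for r, f in fraction_burned_by_region.items())

    # Hypothetical example: half of one arm and a quarter of the anterior trunk.
    print(f"{tbsa_burned({'right_arm': 0.5, 'anterior_trunk': 0.25}):.1f}% TBSA")
    ```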

  16. Analysis of the analytic formulae application area for free oscillation frequency calculation in isochronous cyclotrons

    International Nuclear Information System (INIS)

    Kiyan, I.N.; Taraszkiewicz, R.

    2005-01-01

    Selection of optimal analytic formulae for calculation of the free oscillation frequencies of the particles in isochronous cyclotrons, ν_r(r) and ν_z(r), and their application area are described. The selected formulae are used in the program BORP SR (Betatron Oscillation Research Program, Second Release), written in C++ with the help of MS Visual C++ .NET. The free oscillation frequencies calculated by the program are used for the evaluation of the modeled operating regimes of the AIC144 isochronous cyclotron. The analytic formulae were selected by comparing the results of calculations performed using the formulae adduced by T. Stammbach, Y. Jongen - S. Zaremba, and V.V. Kolga with the results of calculations performed using the CYCLOPS iterative program developed by M.M. Gordon. The least difference in the calculation results was obtained for the analytic formulae adduced by V.V. Kolga. The ν_r(r) calculation difference ranged from -0.5 to 1.5% and the ν_z(r) calculation difference ranged from -5 to 4% over the working radii of the isochronous cyclotron. As the beam was obtained, the selected analytic formulae can be successfully used in the program BORP SR for free oscillation frequency calculation during the evaluation of the modeled operating regimes of different isochronous cyclotrons

  17. Stormwater Management: Calculation of Traffic Area Runoff Loads and Traffic Related Emissions

    Directory of Open Access Journals (Sweden)

    Maximilian Huber

    2016-07-01

    Metals such as antimony, cadmium, chromium, copper, lead, nickel, and zinc can be highly relevant pollutants in stormwater runoff from traffic areas because of their occurrence, toxicity, and non-degradability. Long-term measurements of their concentrations, the corresponding water volumes, the catchment areas, and the traffic volumes can be used to calculate specific emission loads and annual runoff loads that are necessary for mass balances. In the literature, the annual runoff loads are often given per unit catchment area (e.g., g/ha). These loads are summarized and discussed in this paper for all seven metals and three types of traffic areas (highways, parking lots, and roads; 45 sites). For example, the calculated median annual runoff loads over all sites are 355 g/ha for copper, 110 g/ha for lead (using only data from the 21st century), and 1960 g/ha for zinc. In addition, historical trends, annual variations, and site-specific factors were evaluated for the runoff loads. For Germany, mass balances of traffic-related emissions and annual heavy metal runoff loads from highways and total traffic areas were calculated. The influences on the mass fluxes of the heavy metal emissions and the runoff pollution are discussed. However, a statistical analysis of the annual traffic-related metal fluxes, in particular for different traffic area categories and land uses, is currently not possible because of a lack of monitoring data.
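
    A minimal sketch of the underlying arithmetic, converting a measured mean concentration and annual runoff volume into an area-specific annual load, is shown below; the numbers are placeholders, not values from the study:

    ```python
    def annual_runoff_load_g_per_ha(concentration_mg_per_l, runoff_volume_m3, catchment_area_ha):
        """Area-specific annual load in g/ha.

        1 mg/L x 1 m^3 = 1 g, so concentration [mg/L] times volume [m^3] gives grams.
        """
        load_g = concentration_mg_per_l * runoff_volume_m3
        return load_g / catchment_area_ha

    # Placeholder example: 0.1 mg/L zinc, 5000 m^3 annual runoff, 1.2 ha catchment.
    print(f"{annual_runoff_load_g_per_ha(0.1, 5000, 1.2):.0f} g/ha per year")
    ```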

  18. Improving the reproducibility in capillary electrophoresis by incorporating current drift in mobility and peak area calculations

    DEFF Research Database (Denmark)

    Petersen, Nickolaj J.; Hansen, Steen H

    2012-01-01

    The traditional way of calculating mobility and peak areas in capillary electrophoresis does not take into account the changes in the buffer viscosity at different thermostatic control and that the analytes may accelerate during the individual runs due to Joule heating effects. We present a method...

  19. Correlative assessment of cerebral blood flow obtained with perfusion CT and positron emission tomography in symptomatic stenotic carotid disease

    Energy Technology Data Exchange (ETDEWEB)

    Bisdas, Sotirios [JWG University Hospital, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Nemitz, Ole; Becker, Hartmut; Donnerstag, Frank [Hannover Medical School, Department of Neuroradiology, Hannover (Germany); Berding, Georg [Hannover Medical School, Department of Nuclear Medicine, Hannover (Germany); Weissenborn, Karin; Ahl, Bjoern [Hannover Medical School, Department of Neurology, Hannover (Germany)

    2006-10-15

    Twelve patients with ICA stenosis underwent dynamic perfusion computed tomography (CT) and positron emission tomography (PET) studies at rest and after acetazolamide challenge. Cerebral blood flow (CBF) maps on perfusion CT resulted from a deconvolution of parenchymal time-concentration curves by an arterial input function (AIF) in the anterior cerebral artery as well as in both anterior choroidal arteries. CBF was measured by [¹⁵O]H₂O PET using multilinear least-squares minimization procedure based on the one-compartment model. In corresponding transaxial PET scans, CBF values were extracted using standardized ROIs. The baseline perfusion CT-CBF values were lower in perfusion CT than in PET (P>0.05). CBF values obtained by perfusion CT were significantly correlated with those measured by PET before (P<0.05) and after (P<0.01) acetazolamide challenge. Nevertheless, the cerebrovascular reserve capacity was overestimated (P=0.05) using perfusion CT measurements. The AIF selection relative to the side of carotid stenosis did not significantly affect calculated perfusion CT-CBF values. In conclusion, the perfusion CT-CBF measurements correlate significantly with the PET-CBF measurements in chronic carotid stenotic disease and contribute useful information to the evaluation of the altered cerebral hemodynamics. (orig.)

  20. Correlative assessment of cerebral blood flow obtained with perfusion CT and positron emission tomography in symptomatic stenotic carotid disease

    International Nuclear Information System (INIS)

    Bisdas, Sotirios; Nemitz, Ole; Becker, Hartmut; Donnerstag, Frank; Berding, Georg; Weissenborn, Karin; Ahl, Bjoern

    2006-01-01

    Twelve patients with ICA stenosis underwent dynamic perfusion computed tomography (CT) and positron emission tomography (PET) studies at rest and after acetazolamide challenge. Cerebral blood flow (CBF) maps on perfusion CT resulted from a deconvolution of parenchymal time-concentration curves by an arterial input function (AIF) in the anterior cerebral artery as well as in both anterior choroidal arteries. CBF was measured by [¹⁵O]H₂O PET using multilinear least-squares minimization procedure based on the one-compartment model. In corresponding transaxial PET scans, CBF values were extracted using standardized ROIs. The baseline perfusion CT-CBF values were lower in perfusion CT than in PET (P>0.05). CBF values obtained by perfusion CT were significantly correlated with those measured by PET before (P<0.05) and after (P<0.01) acetazolamide challenge. Nevertheless, the cerebrovascular reserve capacity was overestimated (P=0.05) using perfusion CT measurements. The AIF selection relative to the side of carotid stenosis did not significantly affect calculated perfusion CT-CBF values. In conclusion, the perfusion CT-CBF measurements correlate significantly with the PET-CBF measurements in chronic carotid stenotic disease and contribute useful information to the evaluation of the altered cerebral hemodynamics. (orig.)

  1. Soil map, area and volume calculations in Orrmyrberget catchment basin at Gideaa, Northern Sweden

    International Nuclear Information System (INIS)

    Ittner, T.; Tammela, P.T.; Gustafsson, E.

    1991-06-01

    Fallout studies in the Gideaa study site after the Chernobyl fallout in 1986 have come to the point where a more exact surface mapping of the studied catchment basin is needed. This surface mapping is mainly made for area calculations of different soil types within the study site. The mapping focuses on the surface, as the study concerns fallout redistribution, and is extended to also include materials down to a depth of 0.5 meter. Volume calculations are made for the various soil materials within the top 0.5 m. These volume and area calculations will then be used in the modelling of the migration and redistribution of the fallout radionuclides within the studied catchment basin. (au)

  2. Particle-in-cell vs straight line Gaussian calculations for an area of complex topography

    International Nuclear Information System (INIS)

    Lange, R.; Sherman, C.

    1977-01-01

    Two numerical models for the calculation of time-integrated air concentration and ground deposition of airborne radioactive effluent releases are compared. The time dependent Particle-in-Cell (PIC) model and the steady state Gaussian plume model were used for the simulation. The area selected for the comparison was the Hudson River Valley, New York. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model

  3. FreeSASA: An open source C library for solvent accessible surface area calculations.

    Science.gov (United States)

    Mitternacht, Simon

    2016-01-01

    Calculating solvent accessible surface areas (SASA) is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
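    The abstract mentions command-line and Python interfaces. The sketch below shows how the Python binding is typically driven; the function names follow the library's documented interface but should be checked against the installed version, and "example.pdb" is a placeholder input.

```python
# Sketch of the FreeSASA Python interface described in the abstract.
# Function names follow the library's documented binding; verify against the
# installed version.  "example.pdb" is a placeholder input file.
import freesasa

structure = freesasa.Structure("example.pdb")   # parse atoms from a PDB file
result = freesasa.calc(structure)               # Lee & Richards by default

print("Total SASA: %.2f A^2" % result.totalArea())

# Per-class breakdown (e.g., polar / apolar), as offered by the library
classes = freesasa.classifyResults(result, structure)
for name, area in classes.items():
    print("%s: %.2f A^2" % (name, area))
```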

  4. Calculation of climate factors as an additional criteria to determine agriculturally less favoured areas

    Directory of Open Access Journals (Sweden)

    Tjaša POGAČAR

    2016-04-01

    Full Text Available Climate factors that are proposed to determine agriculturally less favoured areas (LFA) in Slovenia were analyzed for the period 1981–2010. Following the instructions of the European Commission prepared by the Joint Research Centre (JRC), 30-year averages of the low air temperature criteria (the vegetation period duration and sums of effective air temperatures) and the aridity criterion (aridity index AI) have to be calculated. Calculations were additionally done using the Slovenian Environment Agency (ARSO) method, which is slightly different when determining temperature thresholds. Only hilly areas are below the LFA low air temperature threshold, with the lowest located meteorological station in Rateče. According to the aridity criterion, no area in Slovenia is below the threshold, so the meteorological water balance was also examined. The average water balance in the period 1981–2010 was in most locations lower than in the period 1971–2000. Climate change impacts are already expressed as trends in the time series of the studied variables, so it is recommended to calculate trends and take them into account, or to repeat the calculations at regular intervals.
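    The aridity criterion referenced above is commonly expressed as the ratio of precipitation to potential evapotranspiration averaged over the 30-year period. The sketch below is a minimal illustration with synthetic data; the 0.5 threshold and the input series are assumptions, not the JRC or ARSO values.

```python
# Minimal sketch of the aridity criterion: 30-year mean of annual
# precipitation / potential evapotranspiration (PET).  The 0.5 threshold and
# the synthetic series are illustrative assumptions, not values for a
# specific Slovenian site.
import numpy as np

rng = np.random.default_rng(0)
annual_precip_mm = rng.normal(1100.0, 150.0, 30)   # synthetic 1981-2010 series
annual_pet_mm    = rng.normal(700.0, 60.0, 30)

aridity_index = np.mean(annual_precip_mm / annual_pet_mm)   # 30-year mean P/PET
below_lfa_threshold = aridity_index <= 0.5                   # assumed criterion

print(f"AI = {aridity_index:.2f}, below LFA aridity threshold: {below_lfa_threshold}")
```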

  5. The Guayas Estuary and sea level corrections to calculate flooding areas for climate change scenarios

    Science.gov (United States)

    Moreano, H. R.; Paredes, N.

    2011-12-01

    The Guayas estuary is the inner area of the Gulf of Guayaquil; it holds a water body of around 5000 km², and Puná Island divides the water flow into two main streams: the El Morro and Estero Salado Channel (length: 90 km) and the Jambeli and Rio Guayas Channel (length: 125 km). The geometry of the estuarine system, together with the behavior of the semidiurnal tidal wave, makes the tidal amplitude higher at the head than at the mouth, and the wave crest at the head is delayed by one and a half to two hours relative to that at the mouth. Sea levels recorded by gauges along the estuary all differ because of the wave propagation, and the mean sea level (msl) calculated for each gauge shows differences with that of La Libertad, which is the baseline for all altitudes on land (zero level). Leveling and calculations were made to correct these differences so that all gauge (msl) records were linked to La Libertad; this in turn allowed better estimates of flooding areas and their drawing on topographic maps where the zero level corresponds to the mean sea level at La Libertad. The procedure and mathematical formulation could be applied to any estuary or coastal area and are a useful tool to calculate such areas, especially when impacts fall on people or capital goods and are related to climate change scenarios.

  6. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from the facial skin model together with color patches. The quantitative values of wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). Despite correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of wound area measured by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
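    The relative error reported above compares the automatically calculated area with the known true area. A one-line illustration with invented numbers:

```python
# Relative error (RE) of an automatically calculated wound area versus the
# known true area, as used in the abstract.  Numbers are invented.
def relative_error(measured_cm2: float, true_cm2: float) -> float:
    return abs(measured_cm2 - true_cm2) / true_cm2 * 100.0

true_area = 3.14          # cm^2, e.g. a 2.0 cm diameter circular wound
measured_area = 3.26      # cm^2, value returned by the photographic method
print(f"RE = {relative_error(measured_area, true_area):.2f} %")
```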

  7. Pathologic implications of severely stenotic carotid artery in disparity to the contralateral asymptomatic artery

    International Nuclear Information System (INIS)

    Cacayorin, E.D.; Schwartz, R.A.; Park, S.H.

    1989-01-01

    In 15 patients (eight women, seven men; age range 56-67 years), arteriography showed severely stenotic internal carotid artery in contrast to the contralateral asymptomatic carotid artery. The patients with recent neurologic manifestations of transient ischemic attack and amaurosis fugax underwent carotid endarterectomy and were subsequently proved to have hemorrhagic atheromatous plaques on gross and histologic examinations. The disparity was unusually significant: 80%-95% stenosis for the symptomatic side, and 0%-20% stenosis for the asymptomatic side. The authors conclude that this arteriographic finding suggests high likelihood of focal subintimal hemorrhage occurring locally; such pathologic change might actually precipitate a cerebroembolic event

  8. Improved method for calculation of population doses from nuclear complexes over large geographical areas

    International Nuclear Information System (INIS)

    Corley, J.P.; Baker, D.A.; Hill, E.R.; Wendell, L.L.

    1977-09-01

    To simplify the calculation of potential long-distance environmental impacts, an overall average population exposure coefficient (P.E.C.) for the entire contiguous United States was calculated for releases to the atmosphere from Hanford facilities. The method, requiring machine computation, combines Bureau of Census population data by census enumeration district and an annual average atmospheric dilution factor (χ̄/Q′) derived from 12-hourly gridded wind analyses provided by NOAA's National Meteorological Center. A variable-trajectory puff-advection model was used to calculate an hourly χ̄/Q′ for each grid square, assuming uniform hourly releases; seasonal and annual averages were then calculated. For Hanford, using 1970 census data, a P.E.C. of 2 × 10⁻³ man-seconds per cubic meter was calculated. The P.E.C. is useful for both radioactive and nonradioactive releases. To calculate population doses for the entire contiguous United States, the P.E.C. is multiplied by the annual average release rate and then by the dose factor (rem/yr per Ci/m³) for each radionuclide, and the dose contribution in man-rem is summed for all radionuclides. For multiple pathways, the P.E.C. is still useful, provided that doses from a unit release can be obtained from a set of atmospheric dose factors. The methodology is applicable to any point source, any set of population data by map grid coordinates, and any geographical area covered by equivalent meteorological data
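    The final summation described above is straightforward to reproduce. The sketch below uses the quoted P.E.C. but placeholder release rates and dose factors (not Hanford data), and assumes the release rate is expressed in Ci/s so that the units close to man-rem per year.

```python
# Population-dose estimate from the population exposure coefficient (PEC),
# following the summation described above:
#   dose (man-rem/yr) = PEC * release rate * dose factor, summed over nuclides.
# Release rates and dose factors below are placeholders, not Hanford data;
# the Ci/s unit for the release rate is an assumption made to close the units.
PEC = 2.0e-3  # man-seconds per cubic metre (value quoted in the abstract)

# nuclide: (average release rate [Ci/s], dose factor [rem/yr per Ci/m^3])
source_term = {
    "H-3":   (1.0e-6, 3.0e-3),
    "Kr-85": (5.0e-5, 1.5e-3),
}

population_dose = sum(PEC * q * df for q, df in source_term.values())
print(f"Collective dose: {population_dose:.3e} man-rem/yr")
```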

  9. Rainfall threshold calculation for debris flow early warning in areas with scarcity of data

    Science.gov (United States)

    Pan, Hua-Li; Jiang, Yuan-Jun; Wang, Jun; Ou, Guo-Qiang

    2018-05-01

    Debris flows are natural disasters that frequently occur in mountainous areas, usually accompanied by serious loss of lives and properties. One of the most commonly used approaches to mitigate the risk associated with debris flows is the implementation of early warning systems based on well-calibrated rainfall thresholds. However, many mountainous areas have little data regarding rainfall and hazards, especially in debris-flow-forming regions. Therefore, the traditional statistical analysis method that determines the empirical relationship between rainstorms and debris flow events cannot be effectively used to calculate reliable rainfall thresholds in these areas. After the severe Wenchuan earthquake, large amounts of loose material were deposited in the gullies, which resulted in several debris flow events. The triggering rainfall threshold has decreased markedly. To obtain a reliable and accurate rainfall threshold and improve the accuracy of debris flow early warning, this paper developed a quantitative method, suited to debris flow triggering mechanisms in meizoseismal areas, to identify rainfall thresholds for debris flow early warning in areas with a scarcity of data, based on the initiation mechanism of hydraulically driven debris flow. First, we studied the characteristics of the study area, including meteorology, hydrology, topography and physical characteristics of the loose solid materials. Then, the rainfall threshold was calculated from the initiation mechanism of hydraulic debris flow. The comparison with other models and with alternate configurations demonstrates that the proposed rainfall threshold curve is a function of the antecedent precipitation index (API) and 1 h rainfall. To test the proposed method, we selected the Guojuanyan gully, a typical debris flow valley located in the meizoseismal area of the Wenchuan earthquake that experienced several debris flow events during the 2008-2013 period, as a case study. The comparison with other
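    The antecedent precipitation index (API) named above is conventionally computed as a decayed sum of preceding daily rainfall. The sketch below uses the usual recursive form and an illustrative linear threshold in the (API, 1 h rainfall) plane; the decay constant and threshold coefficients are assumptions, not the values derived in the paper.

```python
# Sketch of an early-warning check in the (API, 1-h rainfall) plane.
# The decay constant k and the threshold line are illustrative assumptions;
# the paper derives its curve from the hydraulic initiation mechanism.
def antecedent_precipitation_index(daily_rain_mm, k=0.84):
    """Recursive API: API_t = k * API_{t-1} + P_t."""
    api = 0.0
    for p in daily_rain_mm:
        api = k * api + p
    return api

def exceeds_threshold(api_mm, rain_1h_mm, a=25.0, b=0.3):
    """Illustrative linear threshold: warn if rain_1h > a - b * API."""
    return rain_1h_mm > max(a - b * api_mm, 0.0)

recent_daily_rain = [0.0, 5.0, 12.0, 0.0, 3.0, 20.0, 8.0]   # last 7 days, mm
api = antecedent_precipitation_index(recent_daily_rain)
print(f"API = {api:.1f} mm, warning: {exceeds_threshold(api, rain_1h_mm=18.0)}")
```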

  10. Application of RAD-BCG calculator to Hanford's 300 area shoreline characterization dataset

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Ernest J.; Poston, Ted M.; Tiller, Brett L.; Patton, Gene W.

    2003-07-01

    In 2001, a multi-agency study was conducted to characterize potential environmental effects from radiological and chemical contaminants on the near-shore environment of the Columbia River at the 300 Area of the U.S. Department of Energy’s Hanford Site. Historically, the 300 Area was the location of nuclear fuel fabrication and was the main location for research and development activities from the 1940s until the late 1980s. During past waste handling practices, uranium, copper, and other heavy metals were routed to liquid waste streams and ponds near the Columbia River shoreline. The Washington State Department of Health and the Pacific Northwest National Laboratory’s Surface Environmental Surveillance Project sampled various environmental components including river water, riverbank spring water, sediment, fishes, crustaceans, bivalve mollusks, aquatic insects, riparian vegetation, small mammals, and terrestrial invertebrates for analyses of radiological and chemical constituents. The radiological analysis results for water and sediment were used as initial input into the RAD-BCG Calculator. The RAD-BCG Calculator, a computer program that uses an Excel® spreadsheet and Visual Basic® software, showed that maximum radionuclide concentrations measured in water and sediment were lower than the initial screening criteria for concentrations to produce dose rates at existing or proposed limits. Radionuclide concentrations measured in biota samples were used to calculate site-specific bioaccumulation coefficients (Biv) to test the utility of the RAD-BCG-Calculator’s site-specific screening phase. To further evaluate site-specific effects, the default Relative Biological Effect (RBE) for internal alpha particle emissions was reduced by half and the program’s kinetic/allometric calculation approach was initiated. The subsequent calculations showed the initial RAD-BCG Calculator results to be conservative, which is appropriate for screening purposes.
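    The site-specific bioaccumulation coefficient mentioned above is the ratio of the radionuclide concentration in the organism to that in the surrounding medium. A minimal illustration with placeholder concentrations:

```python
# Site-specific bioaccumulation coefficient Biv = C_biota / C_medium, as used
# to replace default values in the screening tool.  Concentrations are
# placeholders, not measurements from the 300 Area study.
def biv(conc_biota_pci_per_kg: float, conc_water_pci_per_l: float) -> float:
    return conc_biota_pci_per_kg / conc_water_pci_per_l   # units of L/kg

print(biv(conc_biota_pci_per_kg=120.0, conc_water_pci_per_l=0.8))
```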

  11. Pancreatic duct drainage using EUS-guided rendezvous technique for stenotic pancreaticojejunostomy.

    Science.gov (United States)

    Takikawa, Tetsuya; Kanno, Atsushi; Masamune, Atsushi; Hamada, Shin; Nakano, Eriko; Miura, Shin; Ariga, Hiroyuki; Unno, Jun; Kume, Kiyoshi; Kikuta, Kazuhiro; Hirota, Morihisa; Yoshida, Hiroshi; Katayose, Yu; Unno, Michiaki; Shimosegawa, Tooru

    2013-08-21

    The patient was a 30-year-old female who had undergone excision of the extrahepatic bile duct and Roux-en-Y hepaticojejunostomy for congenital biliary dilatation at the age of 7. Thereafter, she suffered from recurrent acute pancreatitis due to pancreaticobiliary maljunction and received subtotal stomach-preserving pancreaticoduodenectomy. She developed a pancreatic fistula and an intra-abdominal abscess after the operation. These complications were improved by percutaneous abscess drainage and antibiotic therapy. However, upper abdominal discomfort and the elevation of serum pancreatic enzymes persisted due to stenosis of the pancreaticojejunostomy. Because we could not accomplish dilation of the stenosis by endoscopic retrograde cholangiopancreatography, we tried an endoscopic ultrasonography (EUS)-guided rendezvous technique for pancreatic duct drainage. After transgastric puncture of the pancreatic duct using an EUS-fine needle aspiration needle, the guidewire was inserted into the pancreatic duct and finally reached the jejunum through the stenotic anastomosis. We changed the echoendoscope to an oblique-viewing endoscope, then grasped the guidewire and withdrew it through the scope. The stenosis of the pancreaticojejunostomy was dilated up to 4 mm, and a pancreatic stent was put in place. Though the pancreatic stent was removed after three months, the patient remained symptom-free. Pancreatic duct drainage using an EUS-guided rendezvous technique was useful for the treatment of a stenotic pancreaticojejunostomy after pancreaticoduodenectomy.

  12. IB-LBM simulation of the haemocyte dynamics in a stenotic capillary.

    Science.gov (United States)

    Yuan-Qing, Xu; Xiao-Ying, Tang; Fang-Bao, Tian; Yu-Hua, Peng; Yong, Xu; Yan-Jun, Zeng

    2014-01-01

    To study the behaviour of a haemocyte when crossing a stenotic capillary, the immersed boundary-lattice Boltzmann method was used to establish a quantitative analysis model. The haemocyte was assumed to be spherical and to have an elastic cell membrane, which can be driven by blood flow to adopt a highly deformable character. In the stenotic capillary, the spherical blood cell was stressed both by the flow and by the wall geometry, and the cell was forced to stretch to cross the stenosis. Our simulation investigated the haemocyte crossing process in detail. The velocity and pressure fields were analysed to obtain information on how blood flows through the capillary and to estimate the degree of cell damage caused by excessive pressure. Quantitative velocity analysis results demonstrated that a large haemocyte crossing a small stenosis would have a noticeable effect on blood flow, while quantitative pressure distribution analysis results indicated that the crossing process would produce a distinctive pressure distribution in the cell interior and, to some extent, a sharp change between the cell interior and the surrounding plasma.

  13. Revascularization of the internal carotid artery for isolated, stenotic, and symptomatic kinking.

    Science.gov (United States)

    Illuminati, Giulio; Calió, Francesco G; Papaspyropoulos, Vassilios; Montesano, Giuseppe; D'Urso, Antonio

    2003-02-01

    The operation for isolated, stenotic, and symptomatic kinking of the internal carotid artery is safe and effective in preventing stroke and relieving the symptoms of cerebral ischemia. A consecutive sample clinical study with a mean follow-up of 44 months. The surgical department of an academic tertiary care center and an affiliated secondary care center. Fifty-four patients with a mean age of 67 years underwent 55 revascularizations of the internal carotid artery. The surgical procedures consisted of the following: shortening and reimplantation in the common carotid artery in 36 cases, bypass grafting in 15 cases, and transposition into the external carotid artery in 4 cases. Cumulative survival, primary patency, and stroke-free and neurologic symptom-free rates expressed by standard life-table analysis. No patients died in the postoperative period. The postoperative stroke rate was 1.8%. The cumulative rates (SEs) at 5 years were as follows: survival, 70% (10.2%); primary patency, 89% (7.8%); overall stroke free, 92% (6.8%); ipsilateral stroke free, 96% (5.3%); neurologic symptom free, 90% (7.5%); and ipsilateral symptom free, 93% (6.5%). Revascularization of the internal carotid artery for the treatment of isolated, stenotic, and symptomatic kinking is safe and effective in preventing stroke and relieving symptoms of cerebrovascular insufficiency.

  14. Stenotic ligature: a simple technique for managing distal hypoperfusion ischemic syndrome following arteriovenous fistulas

    Directory of Open Access Journals (Sweden)

    Ene Cristian Roata

    2018-05-01

    Full Text Available Introduction. Distal Hypoperfusion Ischemic Syndrome (DHIS) is a multifactorial debilitating condition causing peripheral ischemia and potentially tissue necrosis. In an effort to further refine its surgical treatment we aim to describe a modified, simple and reliable technique for managing DHIS in patients with arteriovenous fistulas. Materials and Methods. Twenty-nine consecutive patients with DHIS operated on by a single surgical team over a period of 7 years were included in the study. All patients underwent the same surgical technique: stenotic ligature. Outcomes were analyzed clinically and the effectiveness of the procedure was proven using the McNemar test. Clinical variables were statistically analyzed in SPSS 17.0 for Windows. Results. The technique we used consists of performing a stenosing ligature on the vein using a 0-silk suture and adjusting the suture to achieve either a radial or a capillary pulse, while maintaining a good thrill on palpation of the vein. The procedure was successful in 83% of patients, as proved by immediate symptomatic relief. Paired data analysis showed a significant decrease of all symptoms: cold extremity (p=0.021), paraesthesia (p<0.001), pain (p<0.001). A history of coronary artery disease, arteriopathy or the absence of a radial pulse is statistically correlated with an increased risk of developing DHIS. Conclusions. Stenotic ligature is a simple, cheap and reliable technique for managing DHIS with lower septic risk which can easily be performed under local anesthesia.

  15. Potential Indoor Worker Exposure From Handling Area Leakage: Dose Calculation Methodology and Example Consequence Analysis

    International Nuclear Information System (INIS)

    Nes, Razvan; Benke, Roland R.

    2008-01-01

    The U.S. Department of Energy (DOE) is currently considering design options for preclosure facilities in a license application for a geologic repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. The Center for Nuclear Waste Regulatory Analyses (CNWRA) developed the PCSA Tool Version 3.0.0 software for the U.S. Nuclear Regulatory Commission (NRC) to aid in the regulatory review of a potential DOE license application. The objective of this paper is to demonstrate PCSA Tool modeling capabilities (i.e., a generic two-compartment, mass-balance model) for estimating radionuclide concentrations in air and radiological dose consequences to indoor workers in a control room from potential leakage of radioactively contaminated air from an adjacent handling area. The presented model computes internal and external worker doses from inhalation and submersion in a finite cloud of contaminated air in the control room and augments previous capabilities for assessing indoor worker dose. As a complement to the example event sequence frequency analysis in the companion paper, example consequence calculations are presented in this paper for the postulated event sequence. In conclusion: this paper presents a model for estimating radiological doses to indoor workers for the leakage of airborne radioactive material from handling areas. Sensitivity of model results to changes in various input parameters was investigated via illustrative example calculations. Indoor worker dose estimates were strongly dependent on the duration of worker exposure and the handling-area leakage flow rate. In contrast, doses were not very sensitive to handling-area exhaust ventilation flow rates. For the presented example, inhalation was the dominant radiological dose pathway. The two companion papers demonstrate independent analysis capabilities of the regulator for performing confirmatory calculations of frequency and consequence, which assist the assessment of worker
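    A generic two-compartment mass-balance model of the kind named above can be written as two coupled differential equations for the airborne concentrations in the handling area and the control room. The sketch below uses invented volumes, flow rates and source term; it is not the PCSA Tool model.

```python
# Generic two-compartment mass-balance sketch: airborne activity leaks from a
# handling area (compartment 1) into a control room (compartment 2).  All
# rates and the source term are invented; this is not the PCSA Tool model.
from scipy.integrate import solve_ivp

V1, V2 = 5000.0, 300.0     # compartment volumes, m^3
Q_exh = 2.0                # handling-area exhaust ventilation, m^3/s
Q_leak = 0.05              # leakage flow into the control room, m^3/s
Q_room = 0.15              # control-room ventilation, m^3/s
S = 1.0e3                  # release rate into the handling area, Bq/s

def mass_balance(t, c):
    c1, c2 = c             # concentrations, Bq/m^3
    dc1 = (S - (Q_exh + Q_leak) * c1) / V1
    dc2 = (Q_leak * c1 - Q_room * c2) / V2
    return [dc1, dc2]

sol = solve_ivp(mass_balance, (0.0, 3600.0), [0.0, 0.0], max_step=10.0)
print(f"control-room concentration after 1 h: {sol.y[1, -1]:.2f} Bq/m^3")
```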

  16. Non-contact ulcer area calculation system for neuropathic foot ulcer.

    Science.gov (United States)

    Shah, Parth; Mahajan, Siddaram; Nageswaran, Sharmila; Paul, Sathish Kumar; Ebenzer, Mannam

    2017-08-11

    Around 125,785 new cases of leprosy were detected in India in 2013-14 as per the WHO report on leprosy of September 2015, which accounts for approximately 62% of the total new cases. Anaesthetic foot caused by leprosy leads to uneven loading of the foot, resulting in ulcers in approximately 20% of cases. Much effort has gone into identifying newer techniques to efficiently monitor the progress of ulcer healing. Current techniques for measuring the size of ulcers have not been found to be very accurate but are still followed by clinicians across the globe. Quantification of the prognosis of the condition would be required to understand the efficacy of current treatment methods and plan further treatment. This study aims at developing a non-contact technique to precisely measure the size of ulcers in patients affected by leprosy. Using MATLAB software, a GUI was designed to process the acquired ulcer image by segmenting it and calculating the pixel area of the image. The image was further converted to a standard measurement using a reference object. The developed technique was tested on 16 ulcer images acquired from 10 leprosy patients with plantar ulcers. Statistical analysis was done using MedCalc analysis software to find the reliability of the system. The analysis showed a very high correlation coefficient (r=0.9882) between the ulcer area measurements done using the traditional technique and the newly developed technique. The reliability of the newly developed technique was significant, with a significance level of 99.9%. The non-contact ulcer area calculation system designed in MATLAB is found to be a reliable system for calculating the size of ulcers. The technique would give clinicians a reliable tool to monitor the progress of ulcer healing and help modify the treatment protocol if needed. Copyright © 2017 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
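    The conversion from segmented pixel count to physical area relies on a reference object of known size in the same photograph. The study used a MATLAB GUI; the sketch below re-expresses the same arithmetic in Python with invented pixel counts.

```python
# Pixel-count to physical-area conversion using a reference object of known
# size in the same photograph (same idea as the MATLAB GUI described above).
# Pixel counts are invented for illustration.
def ulcer_area_cm2(ulcer_pixels: int, reference_pixels: int,
                   reference_area_cm2: float) -> float:
    cm2_per_pixel = reference_area_cm2 / reference_pixels
    return ulcer_pixels * cm2_per_pixel

# e.g. a 2 cm x 2 cm reference sticker segmented to 18,500 pixels
print(ulcer_area_cm2(ulcer_pixels=6200, reference_pixels=18500,
                     reference_area_cm2=4.0))
```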

  17. Environmental remediation cost in Fukushima area. Trial calculation using the unit cost factor method

    International Nuclear Information System (INIS)

    Ishikura, Takeshi; Fujita, Reiko

    2013-01-01

    In order to perform environmental remediation of the Fukushima area in a swift and adequate way, it is necessary to obtain a perspective of the total cost and to allocate resources adequately. At present it has not been decided which decontamination methods should be applied to the contaminated places in the Fukushima area, or which disposal and processing processes should be applied to the radioactive soils and wastes produced by decontamination, so it is difficult to assess the cost exactly. It is nevertheless better to calculate a rough cost on a trial basis and then upgrade its accuracy gradually based on the latest knowledge. The Cleanup Subcommittee of AESJ used published process flows and unit costs, based on an originally proposed scenario in which soils produced by decontamination are classified into an intermediate storage facility or a controllable processing place according to their contamination concentration, with limited reuse; the rough estimated cost was 6 - 9 trillion yen for the basic case. (T. Tanaka)
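    In its simplest form, the unit cost factor method multiplies the quantity of each work item by a published unit cost and sums the results. The work items, quantities and unit costs below are placeholders, not the AESJ figures.

```python
# Unit cost factor method in its simplest form:
#   total cost = sum over work items of (quantity x unit cost).
# Work items, quantities and unit costs are placeholders.
work_items = {
    # item: (quantity, unit cost in yen per unit)
    "farmland decontamination (m^2)":    (4.0e8, 2_000),
    "forest-edge decontamination (m^2)": (1.5e8, 1_500),
    "soil interim storage (m^3)":        (2.0e7, 50_000),
}

total_yen = sum(q * c for q, c in work_items.values())
print(f"rough total: {total_yen / 1e12:.1f} trillion yen")
```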

  18. Dose calculations for the concrete water tunnels at 190-C Area, Hanford Site

    International Nuclear Information System (INIS)

    Kamboj, S.; Yu, C.

    1997-01-01

    The RESRAD-BUILD code was used to calculate the radiological dose from the contaminated concrete water tunnels at the 190-C Area at the Hanford Site. Two exposure scenarios, recreationist and maintenance worker, were considered. A residential scenario was not considered because the material was assumed to be left intact (i.e., the concrete would not be rubbleized because the location would not be suitable for construction of a house). The recreationist was assumed to use the tunnel for 8 hours per day for 1 week as an overnight shelter. The maintenance worker was assumed to spend 20 hours per year working in the tunnel. Six exposure pathways were considered in calculating the dose. Three external exposure pathways involved penetrating radiation emitted directly from the contaminated tunnel floor, emitted from radioactive particulates deposited on the tunnel floor, and resulting from submersion in airborne radioactive particulates. Three internal exposure pathways involved inhalation of airborne radioactive particulates; inadvertent direct ingestion of removable, contaminated material on the tunnel floor; and inadvertent indirect ingestion of airborne particulates deposited on the tunnel floor. The gradual removal of surface contamination over time and the ingrowth of decay products were considered in calculating the dose at different times. The maximum doses were estimated to be 1.5 mrem/yr for the recreationist and 0.34 mrem/yr for the maintenance worker

  19. Outcomes of the Endovascular Treatment of Stenotic Lesions versus Chronic Total Occlusions in the Iliac Sector.

    Science.gov (United States)

    Revuelta Suero, Sergio; Martínez López, Isaac; Hernández Mateo, Manuela; Marqués de Marino, Pablo; Cernuda Artero, Iñaki; Cabrero Fernández, Maday; Serrano Hernando, Francisco Javier

    2016-07-01

    This study compares outcomes of the endovascular treatment (EVT) of iliac artery occlusive disease according to whether the treated lesion is a stenosis or a chronic total occlusion (CTO). Patients undergoing EVT from 2003 to 2013 for iliac artery occlusive disease were identified and the lesions treated stratified into stenotic (Group 1, n = 375) or CTO (Group 2, n = 87). Patients were followed clinically and hemodynamically (thigh-brachial index, TBI). Comorbidities, procedural factors, and outcomes were compared between the 2 groups using Kaplan-Meier, Breslow, and Cox models. Four hundred sixty-two iliac endovascular procedures in 378 patients were included in a retrospective study. The 2 groups only differed in preprocedural TBI [0.77 (Group 1) vs. 0.67 (Group 2), P P2) patency rates [P1 93.0% and 85.8% vs. 83.1% and 74.7%, hazard ratio (HR) 1.90 (1.15-3.14), P = 0.018; P2 97.8% and 96.8% vs. 93.0% and 87.4%, HR 2.86 (1.39-5.90), P = 0.007] and freedom from reintervention (FFR) rates [91.6% and 83.5% vs. 84.1% and 78.9%, HR 1.51 (0.90-2.53), P = 0.132]. In a multivariate analysis, CTO showed a worse P2 than stenotic lesions [HR 2.81 (1.17-6.76), P = 0.021], yet no differences emerged in P1 [HR 1.41 (0.76-2.63), P = 0.277] or FFR [HR 1.43 (0.79-2.57), P = 0.237]. A lower preprocedural TBI was correlated with a greater risk of EVT failure in terms of patency and FFR (P 40 mm were related to a worse stent patency. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. AUTOMATIC CALCULATION OF OIL SLICK AREA FROM MULTIPLE SAR ACQUISITIONS FOR DEEPWATER HORIZON OIL SPILL

    Directory of Open Access Journals (Sweden)

    B. Osmanoğlu

    2012-07-01

    Full Text Available The Deepwater Horizon oil spill occurred in the Gulf of Mexico in April 2010 and became the largest accidental marine oil spill in history. Oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m³ of crude oil into the Gulf of Mexico. The oil spill caused extensive economic and ecological damage to the areas it reached, affecting the marine and wildlife habitats along with the fishing and tourism industries. For oil spill mitigation efforts, it is important to determine the areal extent and most recent position of the contaminated area. Satellite-based oil pollution monitoring systems are being used for monitoring and in hazard response efforts. Due to their high accuracy, frequent acquisitions, large area coverage and day-and-night operation, Synthetic Aperture Radar (SAR) satellites are a major contributor to the monitoring of marine environments for oil spill detection. We developed a new algorithm for determining the extent of the oil spill from multiple SAR images acquired at short temporal intervals using different sensors. Combining the multi-polarization data from Radarsat-2 (C-band), Envisat ASAR (C-band) and ALOS-PALSAR (L-band) sensors, we calculate the extent of the oil spill with higher accuracy than is possible from only one image. The short temporal interval between acquisitions (hours to days) allows us to eliminate artifacts and increase accuracy. Our algorithm works automatically without any human intervention to deliver products in a timely manner in time-critical operations. Acquisitions using different SAR sensors are radiometrically calibrated and processed individually to obtain the oil spill area extent. Furthermore, the algorithm provides probability maps of the areas that are classified as oil slick. This probability information is then combined with other acquisitions to estimate the combined probability map for the spill.

  1. Calculation of entrance exposed area from recorded images in cardiac diagnostic and interventional procedures

    International Nuclear Information System (INIS)

    Bibbo, G.; Balman, D.

    2000-01-01

    With the increasing number of interventional radiological procedures performed on patients of all ages, it is important to determine the entrance skin dose of patients to limit the side effects of radiation. In most cases the skin dose is measured using thermoluminescent detectors (TLD). However, these detectors need to be placed in the radiation field on the skin of the patient, which may interfere with the procedure. Also, not all radiological practices are equipped with TLD readers, which are expensive, or have staff with the appropriate knowledge and expertise to be able to make use of TLD. The alternative to TLD is to use the dose area product (DAP) measured with a diamentor fitted to the angiography x-ray equipment. The difficulty in using DAP to calculate skin dose is that the irradiated area of the skin is not known. The area could change in size and location during the procedure as the radiologist/medical specialist varies the collimation and region of interest. For angiography equipment the distance between the anode and image intensifier is variable, as is the height of the examination table. The only point of reference is the isocentre. With recorded images it is possible to determine the irradiated area of the patient at the isocentre plane using the stenosis algorithm. The recorded image is calibrated such that it corresponds to the physical size in the plane of the isocentre. For non-recorded images, it may be necessary to assume that collimation has not changed and that the irradiated area is the same as that shown on the recorded images. The Women's and Children's Hospital has a Toshiba DFP2000 Biplane Digital Imaging system used for all cardiac and general angiography and interventional procedures. With this system the exposure factors (kVp, mA, field sizes) are recorded with the images. The source to image distance (SID), magnification factor (calibration factor of the recorded images) and angle of rotation are displayed on the Display Panel of the
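    Once the exposed area is known at the isocentre plane from the calibrated recorded image, an entrance skin dose can be estimated by projecting that area to the skin plane and dividing the dose-area product by it. The inverse-square projection and all numbers below are illustrative assumptions, not values from the system described above.

```python
# Sketch: estimate the entrance field area and skin dose from a recorded image
# calibrated at the isocentre plane.  The inverse-square projection from the
# isocentre plane to the skin plane and all numbers are illustrative
# assumptions, not values from the system described above.
def field_area_at_skin(area_isocentre_cm2: float,
                       source_isocentre_cm: float,
                       source_skin_cm: float) -> float:
    # A field of given area at the isocentre scales with the square of distance
    return area_isocentre_cm2 * (source_skin_cm / source_isocentre_cm) ** 2

def entrance_skin_dose(dap_gy_cm2: float, area_skin_cm2: float) -> float:
    return dap_gy_cm2 / area_skin_cm2   # Gy, ignoring backscatter

area_skin = field_area_at_skin(area_isocentre_cm2=100.0,
                               source_isocentre_cm=75.0,
                               source_skin_cm=60.0)
print(f"{entrance_skin_dose(2.5, area_skin):.3f} Gy over {area_skin:.0f} cm^2")
```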

  2. Numerical calculation of hydrodynamic characteristics of tidal currents for submarine excavation engineering in coastal area

    Directory of Open Access Journals (Sweden)

    Jian-hua Li

    2016-04-01

    Full Text Available In coastal areas with complicated flow movement, deposition and scour readily occur in submarine excavation projects. In this study, a small-scale model, with a high resolution in the vertical direction, was used to simulate the tidal current around a submarine excavation project. The finite volume method was used to solve Navier-Stokes equations and the Reynolds stress transport equation, and the entire process of the tidal current was simulated with unstructured meshes, generated in the irregular shape area, and structured meshes, generated in other water areas. The meshes near the bottom and free surface were densified with a minimum layer thickness of 0.05 m. The volume of fluid method was used to track the free surface, the volume fraction of cells on the upstream boundary was obtained from the volume fraction of adjacent cells, and that on the downstream boundary was determined by the water level process. The numerical results agree with the observed data, and some conclusions can be drawn: after the foundation trench excavation, the flow velocity decreases quite a bit through the foundation trench, with reverse flow occurring on the lee slope in the foundation trench; the swirling flow impedes inflow, leading to the occurrence of dammed water above the foundation trench; the turbulent motion is stronger during ebbing than in other tidal stages, the range with the maximum value of turbulent viscosity, occurring on the south side of the foundation trench at maximum ebbing, is greater than those in other tidal stages in a tidal cycle, and the maximum value of Reynolds shear stress occurs on the south side of the foundation trench at maximum ebbing in a tidal cycle. The numerical calculation method shows a strong performance in simulation of the hydrodynamic characteristics of tidal currents in the foundation trench, providing a basis for submarine engineering construction in coastal areas.

  3. An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.

    Science.gov (United States)

    Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard

    2016-09-01

    Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially, that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClogd and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClogd and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting. © 2016 Society for the Experimental Analysis of Behavior.
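    The conventional AUC sums trapezoid areas over normalized delay-indifference point pairs; the proposed variants replace the delay axis with log10(delay) or the ordinal position of each delay before normalizing. The sketch below uses common normalization conventions (and a +1 offset inside the logarithm to handle short delays), which should be checked against the paper.

```python
# AUC for delay discounting data via the trapezoid rule, plus the log10 and
# ordinal transformations of the delay axis.  Normalization conventions and
# the +1 offset in the logarithm are assumptions; data are invented.
import numpy as np

def auc(delays, indifference_points, max_value):
    """Normalized Area-Under-the-Curve for delay discounting data."""
    x = np.asarray(delays, dtype=float)
    y = np.asarray(indifference_points, dtype=float) / max_value
    x = x / x.max()                      # normalize delays to [0, 1]
    return float(np.trapz(y, x))         # sum of trapezoid areas

delays = np.array([1, 7, 30, 90, 180, 365])      # days
indiff = np.array([95, 85, 70, 55, 40, 30])      # of a $100 delayed reward

auc_conventional = auc(delays, indiff, max_value=100)
auc_logd = auc(np.log10(delays + 1), indiff, max_value=100)          # log10 delays
auc_ord = auc(np.arange(1, len(delays) + 1), indiff, max_value=100)  # ordinal delays
print(auc_conventional, auc_logd, auc_ord)
```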

  4. Influence of Iliac Stenotic Lesions on Blood Flow Patterns Near a Covered Endovascular Reconstruction of the Aortic Bifurcation (CERAB) Stent Configuration

    NARCIS (Netherlands)

    Jebbink, Erik Groot; Engelhard, Stefan; Lajoinie, Guillaume; de Vries, Jean-Paul P.M.; Versluis, Michel; Reijnen, Michel M.P.J.

    2017-01-01

    Purpose: To investigate the effect of distal stenotic lesions on flow patterns near a covered endovascular reconstruction of the aortic bifurcation (CERAB) configuration used in the treatment of aortoiliac occlusive disease. Method: Laser particle image velocimetry measurements were performed using

  5. A new approach for calculation of volume confined by ECR surface and its area in ECR ion source

    International Nuclear Information System (INIS)

    Filippov, A.V.

    2007-01-01

    The volume confined by the resonance surface and its area are important parameters of the balance equations model for calculation of ion charge-state distribution (CSD) in the electron-cyclotron resonance (ECR) ion source. A new approach for calculation of these parameters is given. This approach allows one to reduce the number of parameters in the balance equations model

  6. Pretest parametric calculations for the heated pillar experiment in the WIPP In-Situ Experimental Area

    International Nuclear Information System (INIS)

    Branstetter, L.J.

    1983-03-01

    Results are presented for a pretest parametric study of several configurations and heat loads for the heated pillar experiment (Room H) in the Waste Isolation Pilot Plant (WIPP) In Situ Experimental Area. The purpose of this study is to serve as a basis for selection of a final experiment geometry and heat load. The experiment consists of a pillar of undisturbed rock salt surrounded by an excavated annular room. The pillar surface is covered by a blanket heat source which is externally insulated. A total of five thermal and ten structural calculations are described in a four to five year experimental time frame. Results are presented which include relevant temperature-time histories, deformations, rock salt stress component and effective stress profiles, and maximum stresses in anhydrite layers which are in close proximity to the room. Also included are predicted contours of a conservative post-processed measure of potential salt failure. Observed displacement histories are seen to be highly dependent on pillar and room height, but insensitive to other geometrical variations. The use of a tensile cutoff across slidelines is seen to produce more accurate predictions of anhydrite maximum stress, but to have little effect on rock salt stresses. The potential for salt failure is seen to be small in each case for the time frame of interest, and is only seen at longer times in the center of the room floor

  7. Measurement of stenotic rate and blood flow of carotid artery of the dogs with digital subtraction angiography

    International Nuclear Information System (INIS)

    Kobayashi, Keisuke; Kagawa, Masaaki; Asai, Masaaki; Yasue, Hiroshi; Kawabata, Kazuhiro; Yue, Shuzengmr

    1987-01-01

    Hemodynamic analysis of the stenotic rate and local mean blood flow of the common carotid artery of dogs with an electromagnetic flowmeter and DSA was evaluated. Measurement of the stenotic rate using the local profile curve was very accurate and was thought to be useful for evaluating local blood flow of the cervical carotid artery in patients with carotid stenosis pre- and postoperatively. Although the measurement of absolute blood flow is reliable when the diameter of the vessel is known exactly, the measured flow is less reliable in clinical application because of the difficulty of measuring the diameter accurately. Nevertheless, this study shows that hemodynamic analysis of the relative blood flow can be performed in the clinical setting. The theory and practical measurement are discussed. (author)

  8. Carotenoids co-localize with hydroxyapatite, cholesterol, and other lipids in calcified stenotic aortic valves. Ex vivo Raman maps compared to histological patterns.

    OpenAIRE

    Bonetti, A.; Bonifacio, A.; Della Mora, A.; Livi, U.; Marchini, M.; Ortolani, F.

    2015-01-01

    Unlike its application for atherosclerotic plaque analysis, Raman microspectroscopy was sporadically used to check the sole nature of bioapatite deposits in stenotic aortic valves, neglecting the involvement of accumulated lipids/lipoproteins in the calcific process. Here, Raman microspectroscopy was employed for examination of stenotic aortic valve leaflets to add information on nature and distribution of accumulated lipids and their correlation with mineralization in the light of its potent...

  9. Three-year results after directional atherectomy of calcified stenotic lesions of the superficial femoral artery.

    Science.gov (United States)

    Minko, P; Buecker, A; Jaeger, S; Katoh, M

    2014-10-01

    To investigate the 3-year outcome of patients with peripheral arterial disease (PAD) and heavily calcified stenotic lesions of the superficial femoral artery after directional atherectomy. Fifty-three patients (mean age 67 ± 10 years; 18 females, 35 males, TASC B and C, mean lesion length 7.9 ± 3.5 cm) with PAD (Rutherford 2-6) were enrolled into this prospective monocentric study. In total, 59 calcified lesions of the superficial femoral artery were treated with the Silverhawk atherectomy device (Covidien, Plymouth, MN, USA). Patients were followed-up for 36 months with a 6-month interval to perform clinical re-evaluation, including measurement of maximum walking distance and ankle-brachial index (ABI) as well as duplex-sonography. The primary success rate of the procedure was 92 %. In five cases (8 %), additional balloon-PTA and/or stent-PTA was necessary. Procedure-related embolization occurred in seven cases (12 %), which were all successfully treated by aspiration. The primary patency rate after 3 years was 55 %. Median Rutherford score decreased significantly from 5 to 0 after 36 months (p atherectomy was successfully applied to decrease the plaque burden. Results after 3 years showed a significant decrease of Rutherford score with persistent improvement of ABI and reasonable patency rate.

  10. Inflammatory aortic arch syndrome: contrast-enhanced, three-dimensional MR - angiography in stenotic lesions

    International Nuclear Information System (INIS)

    Both, M.; Mueller-Huelsbeck, S.; Biederer, J.; Heller, M.; Reuter, M.

    2004-01-01

    Purpose: To determine the value of contrast-enhanced, three-dimensional MR angiography for the evaluation of stenotic and occlusive vascular lesions in inflammatory aortic arch syndrome. Materials and Methods: 14 patients with inflammatory aortic arch syndrome (giant cell arteritis: n = 8, Takayasu arteritis: n = 4, ankylosing spondylitis: n = 1, sarcoidosis: n = 1) underwent MR angiography of the aortic arch and the supra-aortic vessels (n = 15; 2 patients were examined twice) and of the abdominal aorta (n = 2). MRA was performed using a 3D-FLASH sequence (TR/TE 4.6/1.8 ms, flip angle 30°) on a 1.5 T system. MRA imaging was compared with the findings of DSA, which served as the gold standard. Results: In a total of 467 examined vascular territories, DSA revealed 50 stenoses and 35 occlusions. All lesions were detected by MRA. In 23 segments, the degree of stenosis was overestimated by MRA. Sensitivity and specificity of MRA were 100% and 94.3%, positive and negative predictive values were 73.6% and 100%, and the accuracy was 95.1%. Conclusions: Despite a tendency to overestimate stenoses, contrast-enhanced three-dimensional MR angiography is a valid, non-invasive technique in the assessment of inflammatory aortic arch syndrome. (orig.)

  11. Efficient Calculation of Dewatered and Entrapped Areas Using Hydrodynamic Modeling and GIS

    International Nuclear Information System (INIS)

    Richmond, Marshall C.; Perkins, William A.

    2009-01-01

    River waters downstream of a hydroelectric project are often subject to rapidly changing discharge. Abrupt decreases in discharge can quickly dewater and expose some areas and isolate other areas from the main river channel, potentially stranding or entrapping fish, which often results in mortality. A methodology is described to estimate the areas dewatered or entrapped by a specific reduction in upstream discharge. A one-dimensional hydrodynamic model was used to simulate steady flows. Using flow simulation results from the model and a geographic information system (GIS), estimates of dewatered and entrapped areas were made for a wide discharge range. The methodology was applied to the Hanford Reach of the Columbia River in central Washington State. Results showed that a 280 m³/s discharge reduction affected the most area at discharges less than 3400 m³/s. At flows above 3400 m³/s, the affected area by a 280 m³/s discharge reduction (about 25 ha) was relatively constant. A 280 m³/s discharge reduction at lower flows affected about twice as much area. The methodology and resulting area estimates were, at the time of writing, being used to identify discharge regimes, and associated water surface elevations, that might be expected to minimize adverse impacts on juvenile fall chinook salmon (Oncorhynchus tshawytscha) that rear in the shallow near-shore areas in the Hanford Reach
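    With a wetted-area versus discharge table produced by the hydrodynamic model, the GIS step for a given discharge reduction reduces to an interpolation and a subtraction. The table below is invented, not Hanford Reach output.

```python
# Dewatered area for a 280 m^3/s discharge reduction, interpolated from a
# wetted-area vs. discharge table.  Table values are invented, not Hanford
# Reach model output.
import numpy as np

discharge_m3s = np.array([1500, 2000, 2500, 3000, 3400, 4000, 5000])
wetted_area_ha = np.array([980, 1030, 1075, 1115, 1145, 1170, 1205])

def dewatered_area(q_initial, reduction=280.0):
    a_hi = np.interp(q_initial, discharge_m3s, wetted_area_ha)
    a_lo = np.interp(q_initial - reduction, discharge_m3s, wetted_area_ha)
    return a_hi - a_lo

for q in (2000, 3000, 4500):
    print(q, round(dewatered_area(q), 1), "ha")
```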

  12. Contact area calculation between elastic solids bounded by mound rough surfaces

    NARCIS (Netherlands)

    Palasantzas, G

    In this work, we investigate the influence of mound roughness on the contact area between elastic bodies. The mound roughness is described by the r.m.s. roughness amplitude w, the average mound separation Λ, and the system correlation length ξ. In general, the real contact area has a complex

  13. A computer-based method for precise detection and calculation of affected skin areas

    DEFF Research Database (Denmark)

    Henriksen, Sille Mølvig; Nybing, Janus Damm; Bouert, Rasmus

    2016-01-01

    BACKGROUND: The aim of this study was to describe and validate a method to obtain reproducible and comparable results concerning extension of a specific skin area, unaffected by individual differences in body surface area. METHODS: A phantom simulating the human torso was equipped with three irre...

  14. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
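    The upscaling idea above replaces evaluation of a non-linear function at the grid-mean wetness by integration of that function over a binned within-grid wetness pdf. The stress function and bin fractions below are generic placeholders.

```python
# Sketch of the binning approach: instead of evaluating a non-linear stress
# function at the grid-mean soil wetness, integrate it over a binned pdf of
# within-grid wetness.  The stress function and pdf are generic placeholders.
import numpy as np

def stress(w):
    """Placeholder non-linear wetness-stress function in [0, 1]."""
    return np.clip((w - 0.2) / 0.4, 0.0, 1.0) ** 2

bin_centres = np.linspace(0.05, 0.95, 10)          # 10 wetness bins
bin_fraction = np.array([0.02, 0.05, 0.10, 0.18, 0.22,
                         0.18, 0.12, 0.08, 0.04, 0.01])  # area fractions, sum to 1

grid_mean_w = np.sum(bin_fraction * bin_centres)
naive = stress(grid_mean_w)                             # single grid-mean value
upscaled = np.sum(bin_fraction * stress(bin_centres))   # pdf-weighted integral
print(f"naive: {naive:.3f}  pdf-integrated: {upscaled:.3f}")
```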

  15. Calculation method of rate and area of sedimentation, by non-conventional mathematical process of data treatment

    International Nuclear Information System (INIS)

    Cota, P.L.

    1987-01-01

    The methods used for calculating the rate and area of sedimentation are based on graphical resolution techniques. Solving the problem by mathematical means, using computational methods, is faster and more accurate. The comparison between the results from these methods and the conventional method is shown. (E.G.)

  16. Calculation of the correlation coefficients between the numbers of counts (peak areas and backgrounds) obtained from gamma-ray spectra

    International Nuclear Information System (INIS)

    Korun, M.; Vodenik, B.; Zorko, B.

    2016-01-01

    Two simple methods for calculating the correlations between peaks appearing in gamma-ray spectra are described. We show how the areas are correlated when the peaks do not overlap, but the spectral regions used for the calculation of the background below the peaks do. When the peaks overlap, the correlation can be stronger than in the case of the non-overlapping peaks. The methods presented are simplified to the extent of allowing their implementation with manual calculations. They are intended for practitioners as additional tools to be used when the correlations between the areas of the peaks in the gamma-ray spectra are to be calculated. Also, the correlation coefficient between the number of counts in the peak and the number of counts in the continuous background below the peak is derived. - Highlights: • The correlation coefficients between areas of closely spaced peaks are assessed. • For isolated peaks the correlation arises from the common continuous background. • If peaks overlap the correlation coefficient depends on how much they overlap. • If peaks overlap also the background height affects the correlation coefficient. • The correlation coefficient between the peak area and its background is −1.

  17. Hanford 300 Area Treated Effluent Disposal Facility inventory at risk calculations and safety analysis

    International Nuclear Information System (INIS)

    Olander, A.R.

    1995-11-01

    The 300 Area Treated Effluent Disposal Facility (TEDF) is a wastewater treatment plant being constructed to treat the 300 Area Process Sewer and Retention Process Sewer. This document analyzes the TEDF for safety consequences. It includes radionuclide and hazardous chemical inventories, compares these inventories to appropriate regulatory limits, documents the compliance status with respect to these limits, and identifies administrative controls necessary to maintain this status

  18. FreeSASA: An open source C library for solvent accessible surface area calculations [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Simon Mitternacht

    2016-02-01

    Full Text Available Calculating solvent accessible surface areas (SASA is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards’ and Shrake and Rupley’s approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
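    A minimal usage sketch of the Python interface mentioned in the abstract, assuming the freesasa package is installed; the PDB file name is a placeholder, and the calls shown (Structure, calc, totalArea, classifyResults) are the basic entry points as I understand them, so consult the library documentation before relying on them.

        import freesasa

        structure = freesasa.Structure("protein.pdb")   # placeholder input file
        result = freesasa.calc(structure)               # Lee & Richards algorithm by default

        print("Total SASA: %.1f A^2" % result.totalArea())

        # Polar/apolar breakdown with the default classifier
        for name, area in freesasa.classifyResults(result, structure).items():
            print(name, round(area, 1))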

  19. Dose reconstruction in radioactively contaminated areas based on radiation transport calculations and measurements

    International Nuclear Information System (INIS)

    Hiller, Mauritius Michael

    2015-01-01

    The external radiation exposure at the former village of Metlino, Russia, was reconstructed. The Techa river in Metlino was contaminated by water from the Majak plant. The village was evacuated in 1956 and a reservoir lake created. Absorbed doses in bricks were measured and a model of the present-day and the historic Metlino was created for Monte Carlo calculations. By combining both, the air kerma at shoreline could be reconstructed to evaluate the Techa River Dosimetry System.

  20. A Mathematical Method to Calculate Tumor Contact Surface Area: An Effective Parameter to Predict Renal Function after Partial Nephrectomy.

    Science.gov (United States)

    Hsieh, Po-Fan; Wang, Yu-De; Huang, Chi-Ping; Wu, Hsi-Chin; Yang, Che-Rei; Chen, Guang-Heng; Chang, Chao-Hsiang

    2016-07-01

    We proposed a mathematical formula to calculate the contact surface area between a tumor and the renal parenchyma, and examined the applicability of contact surface area for predicting renal function after partial nephrectomy. We performed this retrospective study in patients who underwent partial nephrectomy between January 2012 and December 2014. Based on abdominopelvic computerized tomography or magnetic resonance imaging, we calculated the contact surface area using the formula (2*π*radius*depth) developed by integral calculus. We then evaluated the correlation between contact surface area and perioperative parameters, and compared contact surface area and R.E.N.A.L. (Radius/Exophytic/endophytic/Nearness to collecting system/Anterior/Location) score in predicting a reduction in renal function. Overall 35, 26 and 45 patients underwent partial nephrectomy with open, laparoscopic and robotic approaches, respectively. Mean ± SD contact surface area was 30.7±26.1 cm(2) and median (IQR) R.E.N.A.L. score was 7 (2.25). Spearman correlation analysis showed that contact surface area was significantly associated with estimated blood loss (p=0.04), operative time (p=0.04) and percent change in estimated glomerular filtration rate. On multivariate analysis, contact surface area and R.E.N.A.L. score each independently affected the percent change in estimated glomerular filtration rate, and contact surface area was a better independent predictor of a greater than 10% change in estimated glomerular filtration rate than R.E.N.A.L. score (AUC 0.86 vs 0.69). Using this simple mathematical method, contact surface area was associated with surgical outcomes. Compared to R.E.N.A.L. score, contact surface area was a better predictor of functional change after partial nephrectomy. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
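    The quoted formula lends itself to a one-line check. The sketch below simply evaluates CSA = 2*pi*radius*depth; the radius and intraparenchymal depth are hypothetical values chosen only to show the arithmetic (inputs in cm, output in cm2).

        import math

        def contact_surface_area(radius_cm, depth_cm):
            """Contact surface area between tumor and renal parenchyma,
            CSA = 2 * pi * radius * depth, as quoted in the abstract."""
            return 2.0 * math.pi * radius_cm * depth_cm

        # Hypothetical 4 cm diameter tumor embedded 2.5 cm into the parenchyma
        print(round(contact_surface_area(2.0, 2.5), 1), "cm^2")   # about 31.4 cm^2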

  1. Impact of flow routing on catchment area calculations, slope estimates, and numerical simulations of landscape development

    Science.gov (United States)

    Shelef, Eitan; Hilley, George E.

    2013-12-01

    Flow routing across real or modeled topography determines the modeled discharge and wetness index and thus plays a central role in predicting surface lowering rate, runoff generation, likelihood of slope failure, and the transition from hillslope to channel-forming processes. In this contribution, we compare commonly used flow-routing rules, as well as a new routing rule, against commonly used benchmarks. We also compare results for different routing rules using Airborne Laser Swath Mapping (ALSM) topography to explore the impact of different flow-routing schemes on inferring the generation and location of saturation overland flow and the transition from hillslope to channel-forming processes. Finally, we examined the impact of flow-routing and slope-calculation rules on modeled topography produced by Geomorphic Transport Law (GTL)-based simulations. We found that different rules produce substantive differences in the structure of the modeled topography and flow patterns over ALSM data. Our results highlight the impact of flow-routing and slope-calculation rules on modeled topography, as well as on calculated geomorphic metrics across real landscapes. As such, studies that use a variety of routing rules to analyze and simulate topography are necessary to determine those aspects that most strongly depend on a chosen routing rule.
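    For readers unfamiliar with routing rules, the sketch below implements the simplest single-direction (D8, steepest-descent) rule for catchment area accumulation on a small pit-free DEM with unit cell area. It is a generic illustration of one commonly used rule, not the new rule or the specific schemes compared in the paper.

        import numpy as np

        def d8_accumulation(dem, cell_area=1.0):
            """D8 flow accumulation: each cell passes its accumulated area
            to its steepest downslope neighbour (pit-free DEM assumed)."""
            ny, nx = dem.shape
            acc = np.full(dem.shape, cell_area)
            for flat in np.argsort(dem, axis=None)[::-1]:   # highest cell first
                i, j = np.unravel_index(flat, dem.shape)
                best_drop, target = 0.0, None
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                            drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                            if drop > best_drop:
                                best_drop, target = drop, (ni, nj)
                if target is not None:
                    acc[target] += acc[i, j]
            return acc

        dem = np.array([[5.0, 4.0, 3.0],
                        [4.0, 3.0, 2.0],
                        [3.0, 2.0, 1.0]])
        print(d8_accumulation(dem))   # accumulation increases toward the lower-right corner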

  2. Non Machinable Volume Calculation Method for 5-Axis Roughing Based on Faceted Models through Closed Bounded Area Evaluation

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2017-01-01

    Full Text Available The increase in the volume of rough machining in the CBV area is one of the indicators of increased efficiency of the machining process. Normally, this area is not subject to rough machining, so the volume of leftover material remains large. With the addition of CC points and tool orientation in the CBV area of a complex surface, finishing becomes faster because the volume of excess material in this process is reduced. This paper presents a method for calculating the volume of the parts on which no further machining can take place, particularly for rough machining of a complex object. By comparing the total volume of raw material with the volume of the machining area, the volume of residual material, on which machining cannot be done, can be determined. The volume of the total machining area accounts for machining of both the CBV and non-CBV areas, using Delaunay triangulation of the triangles covering the machining and CBV areas. The volume is calculated using the divergence (Gaussian) theorem, focusing on the direction of the normal vector of each triangle. This method can be used as an alternative for selecting the rough machining method with the minimum non-machinable volume, so that effectiveness can be achieved in the machining process.
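    The divergence (Gauss) theorem step described above reduces, for a closed triangulated surface, to summing signed tetrahedron volumes built from each outward-oriented facet. The sketch below is a generic illustration of that computation on a unit cube standing in for a faceted machining model, not the authors' CBV-specific implementation.

        import numpy as np

        def mesh_volume(vertices, triangles):
            """Volume of a closed triangle mesh via the divergence theorem:
            V = (1/6) * sum of v0 . (v1 x v2) over consistently outward-oriented facets."""
            v = np.asarray(vertices, dtype=float)
            total = 0.0
            for a, b, c in triangles:
                total += np.dot(v[a], np.cross(v[b], v[c]))
            return abs(total) / 6.0

        # Unit cube, triangulated with outward-facing normals
        verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                 (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
        tris = [(0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7), (0, 1, 5), (0, 5, 4),
                (1, 2, 6), (1, 6, 5), (2, 3, 7), (2, 7, 6), (3, 0, 4), (3, 4, 7)]
        print(mesh_volume(verts, tris))   # 1.0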

  3. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    International Nuclear Information System (INIS)

    Piepsz, Amy; Tondeur, Marianne; Ham, Hamphrey

    2008-01-01

    51Cr ethylene diamine tetraacetic acid (51Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present the normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right 99mTc MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for 51Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow, for a single patient, accurate estimation of the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)

  4. On the calculation of atmospheric thermal pollution resulted from a flat area source

    International Nuclear Information System (INIS)

    Perkauskas, D.Ch.; Senuta, K.A.

    1984-01-01

    A spatial distribution of thermal atmospheric pollution from a flat area source - a great city or a lake-cooler of an NPP - was investigated. The numerical solution obtained allows evaluation of the horizontal and vertical spreading of thermal atmospheric pollution at different wind velocities, depending on inhomogeneities in the humidity of the earth's surface

  5. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    Energy Technology Data Exchange (ETDEWEB)

    Piepsz, Amy; Tondeur, Marianne [CHU St. Pierre, Department of Radioisotopes, Brussels (Belgium); Ham, Hamphrey [University Hospital Ghent, Department of Nuclear Medicine, Ghent (Belgium)

    2008-09-15

    {sup 51}Cr ethylene diamine tetraacetic acid ({sup 51}Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present the normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right {sup 99m}Tc MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for {sup 51}Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow, for a single patient, accurate estimation of the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)

  6. Calculating the consequences of recovery, a European model for inhabited areas

    DEFF Research Database (Denmark)

    Charnock, T.W.; Jones, J.A.; Singer, L.N.

    2009-01-01

    The European Model for Inhabited Areas (ERMIN) was developed to allow a user to explore different recovery options following the contamination of an urban environment with radioactive material and to refine an appropriate strategy for the whole region affected. The input data include a description......, the contamination on urban surfaces, the activity concentration in air from resuspension, the doses to workers undertaking the recovery work, the quantity and activity of waste generated and the cost and work required to implement the countermeasure. ERMIN has been designed to be implemented as a tool that supports...... the approach of decision-makers and allows the area to be broken down into smaller regions where different conditions prevail and different countermeasure packages are enacted....

  7. Olson method for locating and calculating the extent of transmural ischemic areas at risk of infarction.

    Science.gov (United States)

    Olson, Charles W; Wagner, Galen S; Terkelsen, Christian Juhl; Stickney, Ronald; Lim, Tobin; Pahlm, Olle; Estes, E Harvey

    2014-01-01

    The purpose of this study is to present a new and improved method for translating the electrocardiographic changes of acute myocardial ischemia into a display which reflects the location and extent of the ischemic area and the associated culprit coronary artery. This method could be automated to present a graphic image of the ischemic area in a manner understandable by all levels of caregivers, from emergency transport personnel to the consulting cardiologist. Current methods for the ECG diagnosis of ST-elevation myocardial infarction (STEMI) are criteria-driven, complex, and beyond the interpretive capability of many caregivers. New methods are needed to accurately diagnose the presence of acute transmural myocardial ischemia in order to shorten a patient's clinical "door to balloon time." The proposed new method could potentially provide the information needed to accomplish this objective. The new method improves the precision of diagnosis and quantification of ischemia by normalizing the ST segment inputs from the standard 12 lead ECG and transforming these into a three-dimensional vector representation of the ischemia at the electrical center of the heart. The myocardial areas likely to be involved in this ischemia are separately analyzed to assess the probability that they contributed to this event. The source of the ischemia is revealed as a specific region of the heart, and the likely location of the associated culprit coronary artery. Seventy 12 lead ECGs from subjects with known single artery occlusion in one of the three main coronary arteries were selected to test this new method. Graphic plots of the distribution of ischemia as indicated by the method are consistent with the known occlusion. The analysis of the distribution of ischemic areas in the myocardium reveals that the relationships between leads with either ST elevation or ST depression provide critical information improving the current method. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. A simple algorithm for calculating the area of an arbitrary polygon

    Directory of Open Access Journals (Sweden)

    K.R. Wijeweera

    2017-06-01

    Full Text Available Computing the area of an arbitrary polygon is a popular problem in pure mathematics. The two methods used are the Shoelace Method (SM) and the Orthogonal Trapezoids Method (OTM). In OTM, the polygon is partitioned into trapezoids by drawing either horizontal or vertical lines through its vertices. The area of each trapezoid is computed and the resulting areas are added up. In SM, a formula which is a generalization of Green's Theorem for the discrete case is used. Most of the available systems are based on SM. Since an algorithm for OTM is not available in the literature, this paper proposes an algorithm for OTM along with an efficient implementation. Converting a pure mathematical method into an efficient computer program is not straightforward. In order to reduce the run time, minimal computation needs to be achieved. Handling indeterminate forms and special cases separately can support this. On the other hand, precision error should also be avoided. The salient feature of the proposed algorithm is that it successfully handles these situations while achieving minimum run time. Experimental results of the proposed method are compared against those of the existing algorithm. However, the proposed algorithm suggests a way to partition a polygon into orthogonal trapezoids, which is not an easy task. Additionally, the proposed algorithm uses only basic mathematical concepts, while Green's theorem uses more advanced ones. The proposed algorithm can be used when simplicity is more important than speed.
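    For reference, the Shoelace Method mentioned above fits in a few lines; this is the standard discrete Green's-theorem formula, not the trapezoid-based algorithm proposed in the paper.

        def shoelace_area(points):
            """Area of a simple polygon given its (x, y) vertices in order."""
            n = len(points)
            twice_area = 0.0
            for i in range(n):
                x1, y1 = points[i]
                x2, y2 = points[(i + 1) % n]
                twice_area += x1 * y2 - x2 * y1
            return abs(twice_area) / 2.0

        print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))   # 12.0 for a 4 x 3 rectangle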

  9. Evaluation of dose calculation models for inhabited areas applicable in nuclear accident consequence assessment codes

    International Nuclear Information System (INIS)

    Katalin Eged; Zoltan Kis; Natalia Semioschkina; Gabriele Voigt

    2004-01-01

    One of the objectives of the EC project EVANET-TERRA is to provide suitable inputs to the RODOS system. This study gives an overview of urban dose calculation models with special emphasis on the RECLAIM-EDEM2M and TEMAS-urban codes. The TEMAS-urban code is more complex than the RECLAIM-EDEM2M code, although both models use similar and sometimes even the same model parameters. The database and the way its data were collected, as used in RECLAIM-EDEM2M, is recommended as the preferred option because it contains many data from local and regional measurements. However, in a decision situation the outputs of the TEMAS-urban model may better help stakeholders by providing a ranking of the surfaces to be decontaminated. (author)

  10. Modelling and simulation of temperature and concentration dispersion in a couple stress nanofluid flow through stenotic tapered arteries

    Science.gov (United States)

    Ramana Reddy, J. V.; Srikanth, D.; Das, Samir K.

    2017-08-01

    A couple stress fluid model with the suspension of silver nanoparticles is proposed in order to investigate theoretically the natural convection of temperature and concentration. In particular, the flow is considered in an artery with an obstruction wherein the rheology of blood is taken as a couple stress fluid. The effects of the permeability of the stenosis and the treatment procedure involving a catheter are also considered in the model. The obtained non-linear momentum, temperature and concentration equations are solved using the homotopy perturbation method. Nanoparticles and the two viscosities of the couple stress fluid seem to play a significant role in the flow regime. The pressure drop, flow rate, resistance to the fluid flow and shear stress are computed and their effects are analyzed with respect to various fluids and geometric parameters. Convergence of the temperature and its dependency on the degree of deformation is effectively depicted. It is observed that the Nusselt number increases as the volume fraction increases. Hence magnification of molecular thermal dispersion can be achieved by increasing the nanoparticle concentration. It is also observed that concentration dispersion is greater for severe stenosis and it is maximum at the first extrema. The secondary flow of the axial velocity in the stenotic region is observed and is asymmetric in the tapered artery. The obtained results can be utilized in understanding the increase in heat transfer and enhancement of mass dispersion, which could be used for drug delivery in the treatment of stenotic conditions.

  11. 3D flow study in a mildly stenotic coronary artery phantom using a whole volume PIV method.

    Science.gov (United States)

    Brunette, J; Mongrain, R; Laurier, J; Galaz, R; Tardif, J C

    2008-11-01

    Blood flow dynamics has an important role in atherosclerosis initiation, progression, plaque rupture and thrombosis eventually causing myocardial infarction. In particular, shear stress is involved in platelet activation, endothelium function and secondary flows have been proposed as possible variables in plaque erosion. In order to investigate these three-dimensional flow characteristics in the context of a mild stenotic coronary artery, a whole volume PIV method has been developed and applied to a scaled-up transparent phantom. Experimental three-dimensional velocity data was processed to estimate the 3D shear stress distributions and secondary flows within the flow volume. The results show that shear stress reaches values out of the normal and atheroprotective range at an early stage of the obstructive pathology and that important secondary flows are also initiated at an early stage of the disease. The results also support the concept of a vena contracta associated with the jet in the context of a coronary artery stenosis with the consequence of higher shear stresses in the post-stenotic region in the blood domain than at the vascular wall.

  12. Artificial Neural Network Application for Power Transfer Capability and Voltage Calculations in Multi-Area Power System

    Directory of Open Access Journals (Sweden)

    Palukuru NAGENDRA

    2010-12-01

    Full Text Available In this study, the use of an artificial neural network (ANN) based model, the multi-layer perceptron (MLP) network, to compute the transfer capabilities in a multi-area power system was explored. The input for the ANN is load status and the outputs are the transfer capability among the system areas, voltage magnitudes and voltage angles at concerned buses of the areas under consideration. The repeated power flow (RPF) method is used in this paper for calculating the power transfer capability, voltage magnitudes and voltage angles necessary for the generation of input-output patterns for training the proposed MLP neural network. Preliminary investigations on a three-area 30-bus system reveal that the proposed model is computationally faster than the conventional method.

  13. Maximum skin dose assessment in interventional cardiology: large area detectors and calculation methods

    International Nuclear Information System (INIS)

    Quail, E.; Petersol, A.

    2002-01-01

    Advances in imaging technology have facilitated the development of increasingly complex radiological procedures for interventional radiology. Such interventional procedures can involve significant patient exposure, although they often represent alternatives to more hazardous surgery or are the sole method of treatment. Interventional radiology is already an established part of mainstream medicine and is likely to expand further with the continuing development and adoption of new procedures. Among all medical exposures, interventional radiology tops the list of radiological practices in terms of effective dose per examination, with a mean value of 20 mSv. Currently, interventional radiology contributes 4% of the annual collective dose despite accounting for only 0.3% of the total annual frequency, and given the prospects of this method a large increase in this value can be expected. In IR procedures the potential for deterministic effects on the skin is a risk to be taken into account together with the stochastic long-term risk. Indeed, the International Commission on Radiological Protection (ICRP), in its publication No 85, affirms that the patient dose of priority concern is the absorbed dose in the area of skin that receives the maximum dose during an interventional procedure. For these reasons, in IR it is important to give practitioners information on the dose received by the patient's skin during the procedure. In this paper, maximum local skin dose (MSD) is called the absorbed dose in the area of skin receiving the maximum dose during an interventional procedure

  14. Nuclear spectroscopy - maximum attainable accuracy in the calculation of peak area

    International Nuclear Information System (INIS)

    Supian Samat; Evans, C.J.

    1989-01-01

    The general principles are discussed for the analysis of a peak of arbitrary shape (including the case of multiple peaks) superimposed on a background of arbitrary shape. Application of these principles to the case of a small Gaussian peak on a flat background gives a rule for determining how many channels should be included in the analysis so that accuracy should not be lost, and how many channels in the background should be included in estimating the standard error in the peak area. It is shown that the use of an approximate method of analysis may lead to a significant loss of accuracy, and to a significant over-estimation of the standard error. (author)
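    The kind of accuracy limit discussed here is usually expressed through the counting-statistics error of a net peak area on a flat background. The sketch below uses the standard textbook estimator (net area and its standard error from peak-region and adjacent background-region counts); it illustrates the general principle rather than the paper's specific derivation, and the counts are hypothetical.

        import math

        def net_peak_area(peak_counts, bkg_counts, n_peak_ch, n_bkg_ch):
            """Net area and standard error for a peak on a flat background,
            with the background estimated from n_bkg_ch adjacent channels."""
            ratio = n_peak_ch / n_bkg_ch
            area = peak_counts - ratio * bkg_counts
            sigma = math.sqrt(peak_counts + ratio ** 2 * bkg_counts)
            return area, sigma

        area, sigma = net_peak_area(peak_counts=1500, bkg_counts=2000, n_peak_ch=10, n_bkg_ch=20)
        print(area, round(sigma, 1))   # net area 500.0 counts, standard error about 44.7 counts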

  15. The calculation of absorbed dose rate in freshwater fish from high background natural radioactivity areas

    International Nuclear Information System (INIS)

    Pereira, W.S.; Moraes, S.R.; Cavalcante, J.J.V.; Pinto, C.E.C.; Kelecom, A.

    2017-01-01

    Areas of increased radiation may expose biota to radiation doses greater than the world averages and, depending on the magnitude of the exposure, cause damage to biota. The region of the municipality of Caldas, MG, Brazil is considered a region of increased natural radioactivity. The present work aims to evaluate the exposure of biota to natural radionuclides in the region of Caldas, MG. To this end, the concentrations of the natural radionuclides U-nat, 226Ra, 210Pb, 232Th and 228Ra were determined in two species of fish: lambari (Astymax spp.) and traíra (Hoplias spp.). The dose rates for the analyzed fish were 0.08 μGy·d-1 for Astymax spp. and 0.12 μGy·d-1 for Hoplias spp. At these dose rate values no measurable deleterious effects are expected in the species studied

  16. On the calculation of brain area shifts due to cerebral tumors

    International Nuclear Information System (INIS)

    Labudde, D.; Hartmann, S.; Synowitz, M.

    2002-01-01

    A precise knowledge of the localization of an intracerebral mass is a basic requirement for the planning of neurosurgical operations. Stereotactic atlases offer the possibility to adapt pre-operative imaging data onto normal anatomical conditions in the CNS. These atlases, however, reflect the standard variants of the CNS and do not allow conclusions to be drawn on local and secondary changes of the anatomy caused by the presence of pathological processes. The physical model proposed in this paper provides an estimate of the displacement of brain areas by an intracerebral mass. The modeling of brain parenchyma deformation is based on the mechanics of deformable media. The implementation of the model is successful in the group of primary brain tumors and meningiomas, and uses empirically obtained data from a prospectively selected patient population. The aim, as a further step, is to integrate and adapt the proposed model into suitable software solutions for stereotactic orientation in the CNS. (orig.) [de

  17. [Water environmental capacity calculation model for the rivers in drinking water source conservation area].

    Science.gov (United States)

    Chen, Ding-jiang; Lü, Jun; Shen, Ye-na; Jin, Shu-quan; Shi, Yi-ming

    2008-09-01

    Based on the one-dimensional model for water environmental capacity (WEC) in rivers, a new model for WEC estimation in a river-reservoir system within a drinking water source conservation area (DWSCA) was developed. In the new model, the concept was introduced that the water quality target of the rivers in the DWSCA is determined by the water quality demand of the reservoir serving as the drinking water source. This implies that the WEC of the reservoir can be used as the water quality control target at the reach-end of the upstream rivers in the DWSCA, so that problems in WEC estimation can be avoided that arise from the different standards applied to a water quality control target in rivers and in reservoirs, such as the different criteria for total phosphorus (TP) and total nitrogen (TN) in reservoirs and rivers under the National Surface Water Quality Standard of China (GB 3838-2002), and from the difference in designed hydrology conditions for WEC estimation between reservoirs and rivers. The new model describes the quantitative relationship between the WEC of the drinking water source and that of the river, and it expresses the continuity and interplay of these two water bodies. As a case study, the WEC for the rivers in the DWSCA of Laohutan reservoir, located in southeast China, was estimated using the new model. Results indicated that the WEC for TN and TP was 65.05 t x a(-1) and 5.05 t x a(-1), respectively, in the rivers of the DWSCA. According to the WEC of Laohutan reservoir and the current TN and TP loads entering the rivers, about 33.86 t x a(-1) of the current TN load should be reduced in the DWSCA, while there was 2.23 t x a(-1) of residual WEC for TP in the rivers. The modeling method is also widely applicable to continuous water bodies with different water quality targets, especially when the water quality control target is stricter in the downstream water body than in the upstream one.

  18. Calculation of the magnitude of long term contaminated area with COSYMA and MACCS

    International Nuclear Information System (INIS)

    Grupa, J.

    1996-09-01

    A severe nuclear accident will contaminate large areas of land. This paper discusses the output that can be obtained with COSYMA and MACCS to evaluate this contamination. Both codes associate contamination with deposition of given nuclides, and the severity of contamination is expressed in terms of the ground concentration (Bq/m2). However, for this analysis we decided to judge the severity of the land contamination by the dose rate (Sv/year) to the local inhabitants. To explain the differences between the COSYMA and MACCS results, some details of the results were compared. This revealed that the results depend strongly on the choice of the grid if severe contamination occurs beyond about 50 to 100 km from the source. Another important factor to take into account when judging the severity of land contamination is the duration of the contamination, i.e. the time it takes until the contamination has decreased below a given level. Since we judge the contamination by the dose to the local public, the 'averted dose' concept has been used to evaluate the duration of the contamination. (orig.)

  19. Reversal of renal dysfunction by targeted administration of VEGF into the stenotic kidney: a novel potential therapeutic approach.

    Science.gov (United States)

    Chade, Alejandro R; Kelsen, Silvia

    2012-05-15

    Renal microvascular (MV) damage and loss contribute to the progression of renal injury in renovascular disease (RVD). Whether a targeted intervention in renal microcirculation could reverse renal damage is unknown. We hypothesized that intrarenal vascular endothelial growth factor (VEGF) therapy will reverse renal dysfunction and decrease renal injury in experimental RVD. Unilateral renal artery stenosis (RAS) was induced in 14 pigs, as a surrogate of chronic RVD. Six weeks later, renal blood flow (RBF) and glomerular filtration rate (GFR) were quantified in vivo in the stenotic kidney using multidetector computed tomography (CT). Then, intrarenal rhVEGF-165 or vehicle was randomly administered into the stenotic kidneys (n = 7/group), they were observed for 4 additional wk, in vivo studies were repeated, and then renal MV density was quantified by 3D micro-CT, and expression of angiogenic factors and fibrosis was determined. RBF and GFR, MV density, and renal expression of VEGF and downstream mediators such as p-ERK 1/2, Akt, and eNOS were significantly reduced after 6 and at 10 wk of untreated RAS compared with normal controls. Remarkably, administration of VEGF at 6 wk normalized RBF (from 393.6 ± 50.3 to 607.0 ± 45.33 ml/min, P < 0.05 vs. RAS) and GFR (from 43.4 ± 3.4 to 66.6 ± 10.3 ml/min, P < 0.05 vs. RAS) at 10 wk, accompanied by increased angiogenic signaling, augmented renal MV density, and attenuated renal scarring. This study shows promising therapeutic effects of a targeted renal intervention, using an established clinically relevant large-animal model of chronic RAS. It also implies that disruption of renal MV integrity and function plays a pivotal role in the progression of renal injury in the stenotic kidney. Furthermore, it shows a high level of plasticity of renal microvessels to a single-dose VEGF-targeted intervention after established renal injury, supporting promising renoprotective effects of a novel potential therapeutic intervention to

  20. Peri-stent aneurysm formation following a stent implant for stenotic intracranial vertebral artery dissection: a technical report of two cases successfully treated with coil embolization.

    Science.gov (United States)

    Ishimaru, Hideki; Nakashima, Kazuaki; Takahata, Hideaki; Matsuoka, Yohjiro

    2013-02-01

    Although stenting for stenotic vertebral artery dissection (VAD) improves compromised blood flow, subsequent peri-stent aneurysm (PSA) formation is not well-known. We report two cases with PSA successfully treated with coil embolization. Three patients with stenotic intracranial VAD underwent endovascular angioplasty at our institution because they had acute infarction in posterior circulation territory and clinical evidence of hemodynamic insufficiency. In two of three patients balloon angioplasty at first session failed to relieve the stenosis, and a coronary stent was implanted. Angiography immediately after stenting showed no abnormality in case 1 and minimal slit-like projection at proximal portion of the stent in case 2. Angiography obtained 16 months after the stenting revealed PSA in case 1. In case 2, angiography performed 3 months later showed that the projection at proximal portion enlarged and formed an aneurysm outside the stent. Because follow-up angiographies showed growth of the aneurysm in both cases, endovascular aneurysmal embolization was performed. We advanced a microcatheter into the aneurysm through the strut of existing stent and delivered detachable coils into the aneurysm lumen successfully in both cases. The post-procedural course was uneventful, and complete obliteration of aneurysm was confirmed on angiography in both cases. Stenting for stenotic intracranial VAD may result in delayed PSA; therefore, follow-up angiographies would be necessary after stenting for stenotic intracranial arterial dissection. Coil embolization through the stent strut would be a solution for enlarging PSA.

  1. Calculating ellipse area by the Monte Carlo method and analysing dice poker with Excel at high school

    Science.gov (United States)

    Benacka, Jan

    2016-08-01

    This paper reports on lessons in which 18-19 year old high school students modelled random processes with Excel. In the first lesson, 26 students formulated a hypothesis on the area of an ellipse by using the analogy between the areas of a circle, a square and a rectangle. They verified the hypothesis by the Monte Carlo method with a spreadsheet model developed in the lesson. In the second lesson, 27 students analysed the dice poker game. First, they calculated the probability of the hands by combinatorial formulae. Then, they verified the result with a spreadsheet model developed in the lesson. The students were given a questionnaire to find out if they found the lessons interesting and contributing to their mathematical and technological knowledge.
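    The spreadsheet experiment translates directly into a few lines of code. The sketch below reproduces the generic hit-or-miss estimate of the ellipse area (which should approach pi*a*b); it is not the students' Excel model, and the semi-axes are arbitrary example values.

        import random

        def mc_ellipse_area(a, b, n=100_000):
            """Monte Carlo estimate of the area of an ellipse with semi-axes a and b:
            sample uniformly in the bounding rectangle and count the hits."""
            hits = 0
            for _ in range(n):
                x = random.uniform(-a, a)
                y = random.uniform(-b, b)
                if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
                    hits += 1
            return 4.0 * a * b * hits / n

        print(mc_ellipse_area(3.0, 2.0))   # should be close to pi * 3 * 2 = 18.85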

  2. Results of Surgical Treatment of Patients with Critical Limb Ischemia and Stenotic Lesions of the Brachiocephalic Arteries

    Directory of Open Access Journals (Sweden)

    Alexei L. Charyshkin

    2017-06-01

    Full Text Available The aim of our study was to evaluate the results of surgical treatment for patients with critical limb ischemia (CLI) and stenotic lesions of the brachiocephalic arteries. Methods and Results: We examined 72 patients (68/87.2% men and 4/7.3% women) aged from 46 to 78 years (mean age, 62.2±4.3 years) with CLI and stenotic lesions of the brachiocephalic arteries. Conservative treatment was performed in 17 (23.6%) patients and surgical treatment in 55 (76.4%). A total of 73 surgical operations were carried out: femoral-popliteal bypass (5/6.8%), lumbar sympathectomy (4/5.5%), thrombectomy of occluded aortofemoral graft (2/2.7%), limb amputation (4/5.5%), iliofemoral bypass (4/5.5%), aortofemoral bifurcation bypass (10/13.1%), endovascular surgery (1/1.6%), limb amputation at thigh level (4/5.5%), thrombectomy of occluded distal arteries (4/5.5%), femoro-femoral cross-over bypass (1/1.6%), resection of popliteal artery aneurysm and prosthesis of the popliteal artery (1/1.6%), semi-closed loop endarterectomy of occluded arteries of the lower limbs (8/10.9%), carotid endarterectomy (23/31.5%), and carotid-subclavian bypass (2/2.7%). After the surgical intervention, we observed the disappearance or reduction of pain, restoration of sensitivity and motor activity, and healing of trophic ulcers in 75% of patients. In the late postoperative period, we detected the progression of limb ischemia in 4 (5.5%) patients; in connection with that, we performed limb amputation at thigh level. Ischemic stroke with a lethal outcome developed in one patient (1.4%). Conclusion: In patients with multifocal atherosclerosis, multilevel reconstructive surgical interventions must be performed in stages, due to the high operational risk and the risk of complications, secondary amputations and lethality in the postoperative period.

  3. A strategy to calculate cyclosporin A area under the time-concentration curve in pediatric renal transplantation.

    Science.gov (United States)

    David-Neto, Elias; Araujo, Lilian Pereira; Feres Alves, Cristiane; Sumita, Nairo; Romano, Pascoalina; Yagyu, Elisa Midori; Nahas, William Carlos; Ianhez, Luiz Estevam

    2002-08-01

    The complete area under the time-concentration curve (AUC) is considered the gold standard for cyclosporin A (CsA) monitoring, particularly in pediatric kidney graft recipients, who have great variability in absorption and drug clearance. However, complete AUC is time-consuming and expensive. For this reason, we retrospectively reviewed 131 complete 4-h AUC (AUC0-4) determinations performed in 34 children (mean age 10.6 +/- 2 yr) in order to construct an equation to calculate AUC0-4. The median time after transplantation was 540 (range: 247-1,358) days. Multiple regression analysis was performed either with a single variable or with a combination of two variables. CsA blood concentration at the second hour after the oral morning dose (C2) was the best predictor of AUC0-4, with AUC0-4 = 424 + (2.65 x C2), R2 = 0.81. In both of the time-periods examined, C2 was the best parameter to use to calculate AUC0-4. The equations obtained during these two time-periods were very close to the one for the whole population. Our data show that C2 can be safely used to estimate AUC0-4. However, for values above 4,000 ng/h/mL, the formula overestimates the trapezoidal AUC0-4. The C2 equation simplifies CsA monitoring as a result of its high predictive value and clinical feasibility.
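    A trivial helper encoding the reported regression; the input C2 value below is hypothetical, and the abstract's caveat about overestimation above 4,000 ng/h/mL applies.

        def auc_0_4_from_c2(c2_ng_per_ml):
            """Estimated cyclosporin A AUC(0-4 h) in ng*h/mL from the regression
            reported in the abstract: AUC0-4 = 424 + 2.65 * C2."""
            return 424.0 + 2.65 * c2_ng_per_ml

        print(auc_0_4_from_c2(900.0))   # hypothetical C2 of 900 ng/mL -> about 2809 ng*h/mL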

  4. Brain areas associated with numbers and calculations in children: Meta-analyses of fMRI studies

    Directory of Open Access Journals (Sweden)

    Marie Arsalidou

    2018-04-01

    Full Text Available Children use numbers every day and typically receive formal mathematical training from an early age, as it is a main subject in school curricula. Despite an increase in children neuroimaging studies, a comprehensive neuropsychological model of mathematical functions in children is lacking. Using quantitative meta-analyses of functional magnetic resonance imaging (fMRI) studies, we identify concordant brain areas across articles that adhere to a set of selection criteria (e.g., whole-brain analysis, coordinate reports) and report brain activity for tasks that involve processing symbolic and non-symbolic numbers with and without formal mathematical operations, which we called respectively number tasks and calculation tasks. We present data on children 14 years and younger, who solved these tasks. Results show activity in parietal (e.g., inferior parietal lobule and precuneus) and frontal (e.g., superior and medial frontal gyri) cortices, core areas related to mental arithmetic, as well as brain regions such as the insula and claustrum, which are not typically discussed as part of mathematical problem solving models. We propose a topographical atlas of mathematical processes in children, discuss findings within a developmental constructivist theoretical model, and suggest practical methodological considerations for future studies. Keywords: Mathematical cognition, Meta-analyses, fMRI, Children, Development, Insula

  5. Influence of model boundary conditions on blood flow patterns in a patient specific stenotic right coronary artery.

    Science.gov (United States)

    Liu, Biyue; Zheng, Jie; Bach, Richard; Tang, Dalin

    2015-01-01

    In the literature, the effect of the inflow boundary condition has been investigated by examining the impact of the waveform and the shape of the spatial profile of the inlet velocity on cardiac hemodynamics. However, not much work has been reported comparing the effect of different combinations of inlet/outlet boundary conditions on the quantification of the pressure field and flow distribution patterns in stenotic right coronary arteries. Non-Newtonian models were used to simulate blood flow in a patient-specific stenotic right coronary artery and investigate the influence of different boundary conditions on the phasic variation and the spatial distribution patterns of blood flow. The 3D geometry of a diseased artery segment was reconstructed from a series of IVUS slices. Five different combinations of inlet and outlet boundary conditions were tested and compared. The temporal distribution patterns and the magnitudes of the velocity, the wall shear stress (WSS), the pressure, the pressure drop (PD), and the spatial gradient of wall pressure (WPG) were different when boundary conditions were imposed using different pressure/velocity combinations at the inlet/outlet. The maximum velocity magnitude in a cardiac cycle at the center of the inlet from models with imposed inlet pressure conditions was about 29% lower than that from models using fully developed inlet velocity data. Because models with imposed pressure conditions led to a blunt velocity profile, the maximum wall shear stress at the inlet in a cardiac cycle from models with imposed inlet pressure conditions was about 29% higher than that from models with imposed inlet velocity boundary conditions. When the inlet boundary was imposed by a velocity waveform, the models with different outlet boundary conditions resulted in different temporal distribution patterns and magnitudes of the phasic variation of pressure. On the other hand, the type of different boundary conditions imposed at the

  6. Carotenoids co-localize with hydroxyapatite, cholesterol, and other lipids in calcified stenotic aortic valves. Ex vivo Raman maps compared to histological patterns

    Directory of Open Access Journals (Sweden)

    A. Bonetti

    2015-04-01

    Full Text Available Unlike its application for atherosclerotic plaque analysis, Raman microspectroscopy was sporadically used to check the sole nature of bioapatite deposits in stenotic aortic valves, neglecting the involvement of accumulated lipids/lipoproteins in the calcific process. Here, Raman microspectroscopy was employed for examination of stenotic aortic valve leaflets to add information on nature and distribution of accumulated lipids and their correlation with mineralization in the light of its potential precocious diagnostic use. Cryosections from surgically explanted stenotic aortic valves (n=4) were studied matching Raman maps against specific histological patterns. Raman maps revealed the presence of phospholipids/triglycerides and cholesterol, which showed spatial overlapping with one another and Raman-identified hydroxyapatite. Moreover, the Raman patterns correlated with those displayed by both von-Kossa-calcium- and Nile-blue-stained serial cryosections. Raman analysis also provided the first identification of carotenoids, which co-localized with the identified lipid moieties. Additional fit concerned the distribution of collagen and elastin. The good correlation of Raman maps with high-affinity staining patterns proved that Raman microspectroscopy is a reliable tool in evaluating calcification degree, alteration/displacement of extracellular matrix components, and accumulation rate of different lipid forms in calcified heart valves. In addition, the novel identification of carotenoids supports the concept that valve stenosis is an atherosclerosis-like valve lesion, consistently with their previous Raman microspectroscopical identification inside atherosclerotic plaques.

  7. Carotenoids co-localize with hydroxyapatite, cholesterol, and other lipids in calcified stenotic aortic valves. Ex vivo Raman maps compared to histological patterns.

    Science.gov (United States)

    Bonetti, A; Bonifacio, A; Della Mora, A; Livi, U; Marchini, M; Ortolani, F

    2015-04-20

    Unlike its application for atherosclerotic plaque analysis, Raman microspectroscopy was sporadically used to check the sole nature of bioapatite deposits in stenotic aortic valves, neglecting the involvement of accumulated lipids/lipoproteins in the calcific process. Here, Raman microspectroscopy was employed for examination of stenotic aortic valve leaflets to add information on nature and distribution of accumulated lipids and their correlation with mineralization in the light of its potential precocious diagnostic use. Cryosections from surgically explanted stenotic aortic valves (n=4) were studied matching Raman maps against specific histological patterns. Raman maps revealed the presence of phospholipids/triglycerides and cholesterol, which showed spatial overlapping with one another and Raman-identified hydroxyapatite. Moreover, the Raman patterns correlated with those displayed by both von-Kossa-calcium- and Nile-blue-stained serial cryosections. Raman analysis also provided the first identification of carotenoids, which co-localized with the identified lipid moieties. Additional fit concerned the distribution of collagen and elastin. The good correlation of Raman maps with high-affinity staining patterns proved that Raman microspectroscopy is a reliable tool in evaluating calcification degree, alteration/displacement of extracellular matrix components, and accumulation rate of different lipid forms in calcified heart valves. In addition, the novel identification of carotenoids supports the concept that valve stenosis is an atherosclerosis-like valve lesion, consistently with their previous Raman microspectroscopical identification inside atherosclerotic plaques.

  8. A patient-specific virtual stenotic model of the coronary artery to analyze the relationship between fractional flow reserve and wall shear stress.

    Science.gov (United States)

    Lee, Kyung Eun; Kim, Gook Tae; Lee, Jeong Sang; Chung, Ju-Hyun; Shin, Eun-Seok; Shim, Eun Bo

    2016-11-01

    As the stenotic severity of a patient increases, fractional flow reserve (FFR) decreases, whereas the maximum wall shear stress (WSSmax) increases. However, the way in which these values change according to stenotic severity has not previously been investigated. The aim of this study is to devise a virtual stenosis model to investigate variations in the coronary hemodynamic parameters of patients according to stenotic severity. To simulate coronary hemodynamics, a three-dimensional (3D) computational fluid dynamics model of the coronary artery is coupled with a lumped parameter model of the coronary micro-vasculature and venous system. To validate the present method, we first simulated 13 patient-specific models of the coronary arteries and compared the results with those obtained clinically. Then, virtually narrowed coronary arterial models derived from the patient-specific cases were simulated to obtain the WSSmax and FFR values. The variations in FFR and WSSmax against the percentage of diameter stenosis in the clinical cases were reproducible by the virtual stenosis models. We also found that the simulated FFR values were linearly correlated with the WSSmax values, but the linear slope varied by patient. We implemented 130 additional virtual models of stenosed coronary arteries based on data from 13 patients and obtained statistically meaningful results identical to those of large-scale clinical studies. The slope of the correlation line between FFR and WSSmax may help clinicians to design treatment plans for patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
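    As a generic illustration of how confidence, contaminated-area fraction, and FNR interact (not the report's specific hotspot or CJR formulas), a simple binomial argument gives the number of random samples needed so that all-negative results support a clearance statement at a chosen confidence level. The threshold fraction and FNR below are arbitrary example values.

        import math

        def n_samples(confidence, frac_contaminated, fnr=0.0):
            """Random samples needed so that, if every result is negative, one can state
            with the given confidence that less than frac_contaminated of the decision
            area is contaminated (simplified binomial argument, independent samples)."""
            p_negative = 1.0 - frac_contaminated * (1.0 - fnr)  # P(a single sample reads negative)
            return math.ceil(math.log(1.0 - confidence) / math.log(p_negative))

        print(n_samples(0.95, 0.01))             # 299 samples with a perfect assay (FNR = 0)
        print(n_samples(0.95, 0.01, fnr=0.10))   # 332 samples when the FNR is 10%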

  10. Area balance method for calculation of air interchange in fire-resesistance testing laboratory for building products and constructions

    Directory of Open Access Journals (Sweden)

    Sargsyan Samvel Volodyaevich

    2014-09-01

    Full Text Available A fire-resistance testing laboratory for building products and constructions is a production room with substantial excess heat (over 23 W/m³). Significant sources of heat inside the laboratory are the firing furnaces, designed to simulate high-temperature effects on structures and products of various types in case of fire development; the excess heat production during the tests is due to these furnaces. The laboratory room is considered as an object consisting of two control volumes (CV), in each of which there may be air intake and air removal, and pollutant absorption or emission. In modeling the air exchange conditions, the following processes connected with air movement in the laboratory room are considered: the jet stream in a confined space, the distribution of air parameters, and air motion and impurity diffusion in the ventilated room. General upward ventilation appears to be the most rational option because local exhaust ventilation cannot be used, owing to the peculiarities of the technological processes in the laboratory. Air jets supplied through a large perforated surface mounted at a height of 2 m above floor level "flood" the lower control volume, are entrained upward by natural convective currents from the heat sources, and are removed from the upper zone. To apply the proposed method of calculating the required air exchange, additional conditions must be introduced that account for the sanitary-hygienic characteristics of the air flow at the entrance to the service (work) area. Exhaust air containing pollutants (combustion products) is expelled into the atmosphere by vertical jet discharge. Dividing ventilated rooms into two control volumes allows the processes in a ventilated room to be described more accurately and the air exchange in the laboratory during tests to be determined on a more justified basis, helping to provide safe working conditions for the staff without

  11. Results in a consecutive series of 83 surgical corrections of symptomatic stenotic kinking of the internal carotid artery.

    Science.gov (United States)

    Illuminati, Giulio; Ricco, Jean-Baptiste; Caliò, Francesco G; D'Urso, Antonio; Ceccanei, Gianluca; Vietri, Francesco

    2008-01-01

    Although there is a growing body of evidence to document the safety and efficacy of operative treatment of carotid stenosis, surgical indications for elongation and kinking of the internal carotid artery remain controversial. The goal of this study was to evaluate the efficacy of surgical correction of internal carotid artery kinking in patients with persistent hemispheric symptoms despite antiplatelet therapy. A consecutive series of 81 patients (mean age, 64 years) underwent 83 surgical procedures to correct kinking of the internal carotid artery either by shortening and reimplanting the vessel on the common carotid artery, inserting a bypass graft, or transposing the vessel onto the external carotid artery. Mean follow-up was 56 months (range, 15-135 months). Study endpoints were 30-day mortality and any stroke occurring during follow-up. No postoperative death was observed. The postoperative stroke rate was 1%. Primary patency, freedom from neurologic symptoms, and late survival at 5 years (x +/- standard deviation) were 89 +/- 4.1%, 92 +/- 4%, and 71 +/- 6%, respectively. The findings of this study indicate that surgical correction for symptomatic stenotic kinking of the internal carotid artery is safe and effective in relieving symptoms and preventing stroke. Operative correction should be considered as the standard treatment for patients with symptomatic carotid kinking that does not respond to antiplatelet therapy.

  12. Hyperbaric area index calculated from ABPM elucidates the condition of CKD patients: the CKD-JAC study.

    Science.gov (United States)

    Iimuro, Satoshi; Imai, Enyu; Watanabe, Tsuyoshi; Nitta, Kosaku; Akizawa, Tadao; Matsuo, Seiichi; Makino, Hirofumi; Ohashi, Yasuo; Hishida, Akira

    2015-02-01

    High prevalence of masked hypertension as well as persistent hypertension was observed in the Chronic Kidney Disease Japan Cohort (CKD-JAC) study. We proposed a novel indicator of blood pressure (BP) load, the hyperbaric area index (HBI), calculated from ambulatory blood pressure monitoring (ABPM) data. The characteristics of this index and its relationship with kidney function were also evaluated. The CKD-JAC study, which enrolled 2,977 patients, is a prospective observational study started in September 2007. ABPM was conducted in a sub-group from September 2007 to April 2010, and baseline ABPM data of 1,075 subjects (63.4% male, 60.7 years old) were analyzed. Mean systolic HBI was 242.3 mmHg×h in male and 176.5 mmHg×h in female patients. HBI sensitively reflected sex (54.7 mmHg×h higher in males than in females), seasonal effects (51.6 mmHg×h higher in winter than in summer), and advancing CKD stage (16.5 mmHg×h higher per 10 mL/min/1.73 m2 decrease in eGFR). HBI was significantly associated with reduced kidney function after adjusting for nocturnal BP change (NBPC), sex, and other variables (p value <0.001). Our findings suggest that HBI might be a novel sensitive indicator of reduced kidney function, independent of patterns of NBPC.
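    A minimal sketch of a hyperbaric-area-type index: the area between the ABPM profile and a threshold, counted only where readings exceed the threshold, in mmHg×h. The 135 mmHg threshold and the half-hourly readings are hypothetical placeholders; the abstract does not state the exact threshold definition used in CKD-JAC.

        def hyperbaric_area_index(times_h, sbp_mmhg, threshold=135.0):
            """Area (mmHg x h) of the systolic BP profile above a threshold,
            integrated with the trapezoidal rule over the monitoring period."""
            hbi = 0.0
            for i in range(1, len(times_h)):
                dt = times_h[i] - times_h[i - 1]
                excess_prev = max(sbp_mmhg[i - 1] - threshold, 0.0)
                excess_curr = max(sbp_mmhg[i] - threshold, 0.0)
                hbi += 0.5 * (excess_prev + excess_curr) * dt
            return hbi

        # Hypothetical half-hourly readings over two hours
        print(hyperbaric_area_index([0.0, 0.5, 1.0, 1.5, 2.0],
                                    [150, 142, 128, 138, 145]))   # 11.25 mmHg*h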

  13. Measure of activities and calculation of effective dose of indoor 222Rn in some dwellings and enclosed areas in Morrocco

    International Nuclear Information System (INIS)

    Choukri, A.; Hakam, O.-K.

    2010-01-01

    The calculated effective dose in the studied houses varies between 0.55 and 2.39 mSv/year, with an average value of about 1.41 mSv/year; in enclosed areas it varies between 0.38 and 11.9 mSv/year. Conclusions: The measurements performed in 9 dwellings and 7 enclosed work areas in different regions of Morocco show the following. The obtained volumic activities of radon in dwellings and in enclosed work areas, and the calculated effective doses, are comparable to those obtained in other regions of the world and are below the action level recommended by the ICRP (3 to 10 mSv/year, corresponding to volumic activities of 200-600 Bq/m3 for houses and 500-1500 Bq/m3 for workplaces). The relatively higher volumic activities of 222Rn in the Youssoufia and Khouribga towns are obtained because Youssoufia and Khouribga are situated in regions rich in phosphate deposits. The volumic activity of radon increases with depth, most probably due to decreased ventilation. This is the case at the geophysical observatory of Berchid, where the high value of above 1884 Bq/m3 does not present any risk to the workers' health because they spend only a few minutes per day in the cave to control and register data. A maximal value of radon volumic activity was measured in winter and a minimal value in summer. This difference results especially from more extensive airing in summer. The use of air conditioners in summer and the possible natural ventilation in winter help to keep concentration levels of indoor radon low. The measured volumic activities of radon depend on parameters such as the type of construction, the height of the building and the depth of the underground space. The radon concentration levels found in this study are below the action level recommended by the ICRP. To protect human health, efforts are always necessary to reach a low effective dose for the public, as recommended by the ICRP and WHO

  14. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of flow through axisymmetric and eccentric arterial models with a 75% stenosis, with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. The evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source Field Operation And Manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/), and the results are in line with those obtained with WenoHemo.
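
    For readers unfamiliar with the first of the three SGS closures compared above, the sketch below evaluates the classical Smagorinsky eddy viscosity nu_t = (C_s Δ)² |S| from resolved velocity gradients, shown here in 2D for brevity. The Smagorinsky constant of 0.17 and the 2D strain-rate form are illustrative assumptions; the value and filtering used in WenoHemo are not stated in the abstract.

        import numpy as np

        def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, c_s=0.17):
            """Classical Smagorinsky subgrid-scale eddy viscosity (2D illustration).

            nu_t = (C_s * Delta)**2 * |S|,  with |S| = sqrt(2 * S_ij * S_ij)

            dudx ... dvdy : resolved velocity-gradient components on the LES grid
            delta         : filter width (e.g. the grid spacing)
            c_s           : Smagorinsky constant (0.1-0.2 is typical)
            """
            s11, s22 = dudx, dvdy
            s12 = 0.5 * (dudy + dvdx)
            s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
            return (c_s * delta)**2 * s_mag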

  15. Modeling the irradiation facility in the Deir Al-Hajar area to calculate the spatial gamma dose distribution using the MCNP code

    International Nuclear Information System (INIS)

    Khattab, K.; Bush, M; Kassery, H.

    2009-03-01

    A 3-D model of the irradiation plant belonging to the Atomic Energy Commission, Department of Radiation Technology, in the Deir Al-Hajar area near Damascus is presented in this work using the MCNP-4C code. This model is used to calculate the spatial gamma-ray dose in (x, y, z) coordinates. Good agreement is observed between the measured and calculated results. (author)

  16. How do migratory species add ecosystem service value to wilderness? Calculating the spatial subsidies provided by protected areas

    Science.gov (United States)

    Lopez-Hoffman, Laura; Semmens, Darius J.; Diffendorfer, Jay

    2013-01-01

    Species that migrate through protected and wilderness areas and utilize their resources deliver ecosystem services to people in faraway locations. The mismatch between the areas that most support a species and the areas where the species provides most benefits to society can lead to underestimation of the true value of protected areas such as wilderness. We present a method to communicate the “off-site” value of wilderness and protected areas in providing habitat to migratory species that, in turn, provide benefits to people in distant locations. Using northern pintail ducks (Anas acuta) as an example, the article provides a method to estimate the amount of subsidy – the value of the ecosystem services provided by a migratory species in one area versus the cost to support the species and its habitat elsewhere.

  17. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient HYPR methods in canine with stenotic coronary arteries.

    Science.gov (United States)

    Ge, Lan; Kino, Aya; Lee, Daniel; Dharmakumar, Rohan; Carr, James C; Li, Debiao

    2010-01-01

    First-pass perfusion magnetic resonance imaging (MRI) is a promising technique for detecting ischemic heart disease. However, the diagnostic value of the method is limited by low spatial coverage, resolution, and signal-to-noise ratio (SNR), and by cardiac motion-related image artifacts. A combination of sliding-window and conjugate-gradient HighlY constrained back-PRojection reconstruction (SW-CG-HYPR) has been proposed in healthy volunteer studies to reduce the acquisition window for each slice while maintaining a temporal resolution of 1 frame per heartbeat in myocardial perfusion MRI. This method allows for improved spatial coverage, resolution, and SNR. In this study, we use a controlled animal model to test whether the myocardial territory supplied by a stenotic coronary artery can be detected accurately by the SW-CG-HYPR perfusion method under pharmacological stress. Results from studies in 6 mongrel dogs (15-25 kg) demonstrate the feasibility of SW-CG-HYPR to detect regional perfusion defects. Using this method, the acquisition time per cardiac cycle was reduced by a factor of 4, and the spatial coverage was increased from 2-3 slices to 6 slices as compared with conventional techniques, including both turbo fast low-angle shot (FLASH) and echo-planar imaging (EPI). The SNR of the healthy myocardium at peak enhancement with SW-CG-HYPR (12.68 ± 2.46) is significantly higher (P < 0.01) than with turbo-FLASH (8.65 ± 1.93) and EPI (5.48 ± 1.24). The spatial resolution of SW-CG-HYPR images is 1.2 × 1.2 × 8.0 mm, which is better than that of turbo-FLASH (1.8 × 1.8 × 8.0 mm) and EPI (2.0 × 1.8 × 8.0 mm). Sliding-window CG-HYPR is a promising technique for myocardial perfusion MRI. It provides higher image quality, with significantly improved SNR and spatial resolution of the myocardial perfusion images, which might improve myocardial perfusion imaging in a clinical setting.

  18. Changes in the mechanical environment of stenotic arteries during interaction with stents: computational assessment of parametric stent designs.

    Science.gov (United States)

    Holzapfel, Gerhard A; Stadler, Michael; Gasser, Thomas C

    2005-02-01

    Clinical studies have identified factors such as the stent design and the deployment technique that are one cause of the success or failure of angioplasty treatments. In addition, the success rate may also depend on the stenosis type. Hence, for a particular stenotic artery, the optimal intervention can only be identified by studying the influence of factors such as stent type, strut thickness, geometry of the stent cell, and stent-artery radial mismatch with the wall. We propose a methodology that allows a set of stent parameters to be varied, with the aim of evaluating the difference in the mechanical environment within the wall before and after angioplasty with stenting. Novel scalar quantities attempt to characterize the wall changes in the form of the contact pressure caused by the stent struts and the stresses within the individual components of the wall caused by the stent. These quantities are derived numerically and serve as indicators, which allow the determination of the correct size and type of stent for each individual stenosis. In addition, the luminal change due to angioplasty may be computed as well. The methodology is demonstrated using a full three-dimensional geometrical model of a postmortem specimen of a human iliac artery with a stenosis, reconstructed from imaging data. To describe the material behavior of the artery, we considered mechanical data of eight different vascular tissues, which formed the stenosis. The constitutive models for the tissue components capture the typical anisotropic, nonlinear and dissipative characteristics under supra-physiological loading conditions. Three-dimensional stent models were parametrized in such a way as to enable new designs to be generated simply with regard to variations in their geometric structure. For the three-dimensional stent-artery interaction we use a contact algorithm based on smooth contact surfaces of at least C-continuity, which prevents numerical problems known from standard facet-based contact

  19. A kind of approximate theoretical formula for calculating the heat-exchange area of the vertical U-bend tube natural-circulation steam generator

    International Nuclear Information System (INIS)

    Luo Mingkun; Wang Fei; Huang Wei; Zhang Wenqi; Zhao Shan; Lu Lianghong

    2001-01-01

    A kind of approximate theoretical formula for the heat-exchange area of the vertical U-bend tube natural-circulation steam generator is derived using an approximate method. The results of this formula are compared with the heat-exchange areas of real vertical U-bend tube natural-circulation steam generators; the errors between them are below 8%

  20. Calculation of Wind Speeds for Return Period Using Weibull Parameter: A Case Study of Hanbit NPP Area

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jongk Uk; Lee, Kwan Hee; Kim, Sung Il; Yook, Dae Sik; Ahn, Sang Myeon [KINS, Daejeon (Korea, Republic of)

    2016-05-15

    Evaluation of the meteorological characteristics at a nuclear power plant and in the surrounding area should be performed when determining the site suitability for safe operation of the nuclear power plant. Under unexpected emergency conditions, knowledge of meteorological information on the site area is important to provide the basis for estimating the environmental impacts of radioactive materials released in gaseous effluents during accident conditions. Among the meteorological information, wind speed and direction are the important factors for the safety analysis of the nuclear power plant area. Wind characteristics were analyzed for the Hanbit NPP area. It was found that the Weibull parameters k and c vary from 2.56 to 4.77 and from 4.53 to 6.79, respectively, for the directional wind speed distributions. The maximum wind frequency was from the NE and the minimum from the NNW.
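
    One common way to turn fitted Weibull parameters into a return-period wind speed is to equate the Weibull exceedance probability to one event per return period: exp(-(v/c)^k) = 1/(N·T), giving v_T = c·(ln(N·T))^(1/k). The sketch below applies this relation; the number of independent observations per year (8760 hourly values) and the example parameter values are assumptions, since the abstract does not state the paper's exact procedure.

        import math

        def return_period_wind_speed(k, c, return_period_years, obs_per_year=8760):
            """Wind speed exceeded on average once per return period, assuming the
            underlying (e.g. hourly) wind speeds follow a Weibull(k, c) distribution.

            Solves exp(-(v/c)**k) = 1 / (N * T)  =>  v = c * (ln(N * T))**(1/k)
            """
            n_t = obs_per_year * return_period_years
            return c * math.log(n_t) ** (1.0 / k)

        # illustrative values taken from the reported parameter ranges
        print(return_period_wind_speed(k=2.56, c=6.79, return_period_years=50))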

  1. Testing the assumption of normality in body sway area calculations during unipedal stance tests with an inertial sensor.

    Science.gov (United States)

    Kyoung Jae Kim; Lucarevic, Jennifer; Bennett, Christopher; Gaunaurd, Ignacio; Gailey, Robert; Agrawal, Vibhor

    2016-08-01

    The quantification of postural sway during the unipedal stance test is one of the essentials of posturography. A shift of the center of pressure (CoP) is an indirect measure of postural sway and also a measure of a person's ability to maintain balance. A widely used method in laboratory settings to calculate the sway of the body center of mass (CoM) is through an ellipse that encloses 95% of the CoP trajectory. The 95% ellipse can be computed under the assumption that the spatial distribution of the CoP points recorded from force platforms is normal. However, to date, this assumption of normality has not been demonstrated for sway measurements recorded from a sacral inertial measurement unit (IMU). This work provides evidence for the non-normality of sway trajectories calculated from a sacral IMU in injured as well as healthy subjects.
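
    The 95% ellipse referred to above is usually computed from the covariance of the two sway coordinates under a bivariate-normal assumption. The sketch below evaluates that ellipse area and runs a simple per-axis Shapiro-Wilk check of the normality assumption; the marginal (per-axis) test and the synthetic data are simplifying assumptions, not the paper's protocol.

        import numpy as np
        from scipy import stats

        def sway_ellipse_area_95(x, y):
            """Area of the ellipse enclosing ~95% of the sway points, assuming the
            (x, y) points are bivariate normal: area = pi * chi2_95(2) * sqrt(det(cov))."""
            cov = np.cov(np.vstack([x, y]))
            chi2_95 = stats.chi2.ppf(0.95, df=2)          # ~5.991
            return np.pi * chi2_95 * np.sqrt(np.linalg.det(cov))

        def looks_normal(x, y, alpha=0.05):
            """Crude check: Shapiro-Wilk test on each axis separately."""
            return all(stats.shapiro(v)[1] > alpha for v in (x, y))

        rng = np.random.default_rng(0)
        x, y = rng.normal(size=500), 0.5 * rng.normal(size=500)
        print(sway_ellipse_area_95(x, y), looks_normal(x, y))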

  2. Guidelines for planning interventions against external exposure in industrial area after a nuclear accident. Pt. 2. Calculation of doses using Monte Carlo method

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Mueller, H.

    2003-01-01

    Countermeasures that differ from the usual urban ones and are largely applicable in industrial areas are collected and evaluated in a separate report. The industrial area is defined here as an area where productive and/or commercial activity is carried out; a good example is a supermarket or a factory. Based on the history of calculation models, it is clear that Monte Carlo based simulation is the prospective approach for dose assessment from external exposures in such a complex environment. A method for the calculation of doses from external exposures in an urban-industrial environment is presented. Moreover, this report gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. Using a hypothetical scenario (a supermarket area contaminated by 137 Cs), the details of an exemplary calculation are given, directly addressing the dose and averted-dose blocks of the templates of industrial countermeasures. In addition, a sensitivity analysis of the results is presented. (orig.)

  3. FLUST-2D - A computer code for the calculation of the two-dimensional flow of a compressible medium in coupled rectangular areas

    International Nuclear Information System (INIS)

    Enderle, G.

    1979-01-01

    The computer code FLUST-2D is able to calculate the two-dimensional flow of a compressible fluid in arbitrarily coupled rectangular areas. In a finite-difference scheme the program computes pressure, density, internal energy and velocity. Starting from a basic set of equations, the difference equations on a rectangular grid are developed. The computational cycle for coupled fluid areas is described. Results of test calculations are compared to analytical solutions, and the influence of time step and mesh size is investigated. The program was used to precalculate the blowdown experiments of the HDR experimental program. The downcomer, plena, internal vessel region, blowdown pipe and a containment area have been modelled two-dimensionally. The major results of the precalculations are presented. This report also contains a description of the code structure and user information. (orig.) [de

  4. Mixing effects on geothermometric calculations of the Newdale geothermal area in the Eastern Snake River Plain, Idaho

    Energy Technology Data Exchange (ETDEWEB)

    Ghanashayam Neupane; Earl D. Mattson; Travis L. McLing; Cody J. Cannon; Thomas R. Wood; Trevor A. Atkinson; Patrick F. Dobson; Mark E. Conrad

    2016-02-01

    The Newdale geothermal area in Madison and Fremont Counties, Idaho, is a known geothermal resource area whose thermal anomaly is expressed by high thermal gradients and numerous wells producing warm water (up to 51 °C). Geologically, the Newdale geothermal area is located within the Eastern Snake River Plain (ESRP), which has a time-transgressive history of sustained volcanic activity associated with the passage of the Yellowstone Hotspot from the southwestern part of Idaho to its current position underneath Yellowstone National Park in Wyoming. Locally, the Newdale geothermal area lies within an area that was subjected to several overlapping and nested caldera complexes. The Tertiary caldera-forming volcanic deposits and associated rocks have been buried underneath Quaternary flood basalts and felsic volcanic rocks. Two southeast-dipping young faults (the Teton Dam fault and an unnamed fault) in the area provide the structural control for this localized thermal anomaly zone. Geochemically, water samples from numerous wells in the area can be divided into two broad groups – Na-HCO3 and Ca-(Mg)-HCO3 type waters – considered to be the product of water-rhyolite and water-basalt interactions, respectively. Each type of water can further be subdivided into two groups depending on its degree of mixing with other water types or interaction with other rocks. For example, some bivariate plots indicate that some Ca-(Mg)-HCO3 water samples have interacted only with basalts, whereas some samples of this water type also show limited interaction with rhyolite or mixing with Na-HCO3 type water. Traditional geothermometers [e.g., silica variants, Na-K-Ca (Mg-corrected)] indicate lower temperatures for this area; however, a traditional silica-enthalpy mixing model results in higher reservoir temperatures. We applied a new multicomponent equilibrium geothermometry tool (e.g., Reservoir Temperature Estimator, RTEst) that is based on inverse geochemical modeling which

  5. A Method for Calculating the Area of Zostera marina Leaves from Digital Images with Noise Induced by Humidity Content

    Directory of Open Access Journals (Sweden)

    Cecilia Leal-Ramirez

    2014-01-01

    Despite the ecological importance of eelgrass, anthropogenic influences have nowadays produced deleterious effects in many meadows worldwide. Transplantation plots are commonly used as a feasible remediation scheme. The characterization of eelgrass biomass and its dynamics is an important input for the assessment of the overall status of both natural and transplanted populations. Particularly in restoration plots, it is desirable to obtain nondestructive assessments of these variables. Allometric models allow the expression of above-ground biomass and productivity of eelgrass in terms of leaf area, which provides cost-effective and nondestructive assessments. Leaf area in eelgrass can be conveniently obtained as the product of the associated length and width. Although these variables can be directly measured on most sampled leaves, digital image methods can be adapted in order to simplify measurements. Nonetheless, since width-to-length ratios in eelgrass leaves can be very small, noise induced by leaf humidity content can produce misidentification of pixels along the peripheral contour of leaf images. In this paper, we present a procedure aimed at producing consistent estimates of eelgrass leaf area in the presence of the aforementioned noise effects. Our results show that digital image procedures can provide reliable, nondestructive estimations of eelgrass leaf area.
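
    The length-times-width estimate mentioned above can be computed directly from a thresholded (binary) leaf image. The sketch below shows that basic step on a synthetic mask, using the median per-row pixel count as the width so that a few misclassified edge pixels matter less; it does not reproduce the paper's humidity-noise correction, and the leaf-aligned-with-rows assumption and image scale are illustrative.

        import numpy as np

        def leaf_area_from_mask(mask, mm_per_pixel):
            """Estimate eelgrass leaf area as length x width from a binary image mask.

            mask         : 2D boolean array, True for leaf pixels, with the leaf
                           roughly aligned with the image rows (an assumption)
            mm_per_pixel : image scale
            """
            rows, _ = np.nonzero(mask)
            length_mm = (rows.max() - rows.min() + 1) * mm_per_pixel
            # median run of leaf pixels per row is less sensitive to noisy edges
            per_row = np.bincount(rows)[rows.min():rows.max() + 1]
            width_mm = np.median(per_row) * mm_per_pixel
            return length_mm * width_mm

        mask = np.zeros((200, 50), dtype=bool)
        mask[10:190, 20:26] = True                           # synthetic 180 x 6 pixel leaf
        print(leaf_area_from_mask(mask, mm_per_pixel=0.5))   # ~270 mm^2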

  6. Measurements of SNAC2 area dosimeters placed in different configurations around the PROSPERO reactor and comparison with TRIPOLI-4 calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rousseau, G.; Chambru, L.; Authier, N. [CEA, Centre de Valduc, 21120 Is-sur-Tille, (France)

    2015-07-01

    In the context of criticality accident alarm system tests, several experiments were carried out in 2013 on the PROSPERO reactor to study the neutron and gamma response of different devices and dosimeters, particularly the SNAC2 dosimeter. This article presents the results for this criticality dosimeter in different configurations, and compares the experimental measurements with the results of calculations performed with the TRIPOLI-4 Monte Carlo neutral-particle transport code. PROSPERO is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Research Center of Valduc. The core, surrounded by a reflector of depleted uranium, is composed of 2 horizontal cylindrical blocks made of a highly enriched uranium alloy which can be placed in contact, and of 4 depleted uranium control rods which allow the reactor to be driven. This reactor, placed in a cell 10 m x 8 m x 6 m high, with 1.4-meter-thick concrete walls, is used as a fast-neutron spectrum source and is operated at a stable power level in the delayed critical state, which can vary from 3 mW to 3 kW. PROSPERO is extensively used for electronic hardening or to study the effect of neutrons on various materials. The SNAC2 criticality dosimeter is a zone dosimeter allowing the off-line measurement of criticality accident neutron doses. This dosimeter consists of a stack of seven activation foils embedded in a 23 mm diameter x 21 mm height cadmium container. The activation measurement of each foil, using a gamma spectroscopy technique, gives information about the neutron reaction rates. The SNAC2 software unfolds the spectrum from these values, under the hypothesis of a particular spectrum shape with three components: a Maxwell spectrum component for the thermal range, a 1/E component for the epithermal range, and a Watt spectrum component for the high-energy range. Moreover, from the neutron spectrum, the SNAC

  7. A comparison of semiconductor gamma spectrometric analysis using the peak net area calculations and the whole spectrum processing

    International Nuclear Information System (INIS)

    Krnac, S.; Koskelo, M.; Venkatamaran, R.

    1998-01-01

    This study was conducted to compare the results of gamma spectrometric analysis using the Scaling Confirmatory Factor Analysis (SCFA) method with those of Genie2K, which uses a more traditional method. Gamma-ray spectra had been acquired for several gamma standard sources, all of which, except Co-57 and Eu-152, were single gamma-ray-emitting nuclides. These standard sources spanned the energy range from 60 keV (Am-241) to 1116 keV (Zn-65). The standard sources were counted at 3 different geometries, with source-detector distances of 0, 5, and 15 cm. Using the single gamma-ray spectra collected at a given counting geometry, and the certificate file, an efficiency calibration was created for that geometry. Three different test spectra, one for each counting geometry, were created by combining several of the standard source spectra. The efficiency calibrations created for the 3 geometries were loaded into the respective spectrum files. Each test spectrum was analyzed using the standard Genie2K engines: peak locate, peak search, interactive peak fit, background subtraction, efficiency correction, and nuclide identification with interference analysis. The results of the various calculation steps were reported. In all 3 test cases, the SCFA method identified all the nuclides correctly. The K-40 activities calculated by the SCFA method were reasonably close to those from the Genie2K analysis. In general, the quantitative results of the SCFA method were impressive in all 3 cases. On a positive note, the SCFA method did identify low-yield gamma lines in Eu-152 which were not identified by the Genie2K analysis. This substantiates the claim that the SCFA is more sensitive than the traditional method of spectrum analysis. (authors)

  8. SECTIONAL AREA CALCULATION OF MATERIAL REMOVED FROM BLANK WHILE FORMING SPACE BETWEEN TWO TEETH OF SATELLITE GEAR OF PLANETARY PIN TOOTH REDUCER

    Directory of Open Access Journals (Sweden)

    N. G. Yankevich

    2009-01-01

    One of the most important values in forming gear wheels is the material section area Sc which is to be removed by a tool in the process of forming a space between two teeth in one pass. The cutting resistance, which is proportional to the section area of the layer being cut, and, correspondingly, the thermodynamic intensity in the polishing zone depend on the value of Sc. The paper proposes relations for calculating the material section area Sc which is to be removed from a blank while forming a space between two teeth of a satellite gear of a planetary pin tooth reducer. Measurements made in the AutoCAD package have shown that corrections of the profile do not have a significant influence on the section area Sc.

  9. Risk assessment calculations using MEPAS, an accepted screening methodology, and an uncertainty analysis for the reranking of Waste Area Groupings at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Shevenell, L.; Hoffman, F.O.; MacIntosh, D.

    1992-03-01

    The Waste Area Groupings (WAGs) at the Oak Ridge National Laboratory (ORNL) were reranked with respect to on- and off-site human health risks using two different methods. Risks associated with selected contaminants from each WAG for occupants of WAG 2 or an off-site area were calculated using a modified formulation of the Multimedia Environmental Pollutant Assessment System (MEPAS) and a method suitable for screening, referred to in this report as the ORNL/ESD method (the method developed by the Environmental Sciences Division at ORNL). Each method resulted in a different ranking of the WAGs. The rankings from the two methods are compared in this report. All risk assessment calculations, except the original MEPAS calculations, indicated that WAGs 1; 2, 6, and 7 (treated as one combined WAG); and 4 pose the greatest potential threat to human health. However, the overall rankings of the WAGs using constant parameter values in the different methods were inconclusive, because uncertainty in parameter values can change the calculated risk associated with particular pathways and hence the final rankings. Uncertainty analyses incorporating the uncertainties in all model parameters were used to reduce biases associated with parameter selection and to more reliably rank waste sites according to the potential risks associated with site contaminants. The uncertainty analysis indicates that the WAGs should be considered for further investigation, or remediation, in the following order: (1) WAG 1; (2) WAGs 2, 6, and 7 (combined), and 4; (3) WAGs 3, 5, and 9; and (4) WAG 8

  10. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    Science.gov (United States)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

    A method based on linearized and slender-body theories, which is easily adapted to electronic computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag are evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method requires only up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution, and less time for a larger number of bodies.
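
    The underlying slender-body relation is the von Kármán wave-drag integral, D = -(ρU²/4π) ∫∫ S''(x) S''(ξ) ln|x-ξ| dx dξ, evaluated from the second derivative of the equivalent-body area distribution. The sketch below approximates that double integral by simple product quadrature on a uniform grid (skipping the logarithmic singularity at x = ξ); it illustrates the integral itself, not the report's harmonic-analysis or machine scheme, and the Sears-Haack-like test body is an assumption.

        import numpy as np

        def wave_drag(x, s2, rho_u2=1.0):
            """Zero-lift wave drag of an equivalent body of revolution from S''(x):

                D = -(rho*U^2 / 4*pi) * sum_i sum_j S''(x_i) S''(x_j) ln|x_i - x_j| dx^2

            x  : uniformly spaced axial stations
            s2 : second derivative of the cross-sectional area distribution
            """
            dx = x[1] - x[0]
            xi, xj = np.meshgrid(x, x)
            kernel = np.log(np.abs(xi - xj), out=np.zeros_like(xi), where=(xi != xj))
            return -(rho_u2 / (4.0 * np.pi)) * np.einsum('i,ij,j', s2, kernel, s2) * dx * dx

        # Sears-Haack-like body as a smoke test: S(x) proportional to [4x(1-x)]**1.5
        x = np.linspace(0.0, 1.0, 401)
        s = (4.0 * x * (1.0 - x))**1.5
        s2 = np.gradient(np.gradient(s, x), x)
        print(wave_drag(x, s2))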

  11. Preoperative volume calculation of the hepatic venous draining areas with multi-detector row CT in adult living donor liver transplantation: impact on surgical procedure

    International Nuclear Information System (INIS)

    Frericks, Bernd B.J.; Kirchhoff, Timm D.; Shin, Hoen-Oh; Stamm, Georg; Merkesdal, Sonja; Abe, Takehiko; Galanski, Michael; Schenk, Andrea; Peitgen, Heinz-Otto; Klempnauer, Juergen; Nashan, Bjoern

    2006-01-01

    The purpose was to assess the volumes of the different hepatic territories and especially the drainage of the right paramedian sector in adult living donor liver transplantation (ALDLT). CT was performed in 40 potential donors, of whom 28 underwent partial living donation. Data sets of all potential donors were postprocessed using dedicated software for segmentation, volumetric analysis and visualization of liver territories. During an initial period, volumes and shapes of liver parts were calculated based on the individual portal venous perfusion areas. After partial hepatic congestion occurred in three grafts, drainage territories were additionally calculated, with special regard to MHV tributaries from the right paramedian sector and the IRHV. Results were visualized three-dimensionally and compared to the intraoperative findings. Calculated graft volumes based on hepatic venous drainage and graft weights correlated significantly (r=0.86, P<0.001). The mean virtual graft volume was 930 ml and drained as follows: RHV: 680 ml; IRHV: 170 ml (n=11); segment 5 MHV tributaries: 100 ml (n=16); segment 8 MHV tributaries: 110 ml (n=20). When present, the mean aberrant venous drainage fraction of the right liver lobe was 28%. The evaluated protocol allowed a reliable calculation of the hepatic venous draining areas and led to a change in the hepatic venous reconstruction strategy at our institution. (orig.)

  12. Debris-flows scale predictions based on basin spatial parameters calculated from Remote Sensing images in Wenchuan earthquake area

    International Nuclear Information System (INIS)

    Zhang, Huaizhen; Chi, Tianhe; Liu, Tianyue; Wang, Wei; Yang, Lina; Zhao, Yuan; Shao, Jing; Yao, Xiaojing; Fan, Jianrong

    2014-01-01

    Debris flow is a common hazard in the Wenchuan earthquake area. Collapse and Landslide Regions (CLR) caused by earthquakes can be located from remote sensing images. CLR are the direct material source regions for debris flows. The Spatial Distribution of Collapse and Landslide Regions (SDCLR) strongly impacts debris-flow formation. In order to depict SDCLR quantitatively, we referred to Strahler's hypsometric analysis method and developed 3 functional models. These models mainly depict SDCLR relative to altitude, the basin mouth and the main gullies of the debris flow. We used the integrals of the functions as the spatial parameters of SDCLR, and these parameters were employed in the process of debris-flow scale prediction. Group-occurring debris flows triggered by the rainstorm of September 24th, 2008 in Beichuan County, Sichuan Province, China, were selected to build the empirical equations for debris-flow scale prediction. Given the existing data, only debris-flow runout-zone parameters (maximum runout distance L and lateral width B) were estimated in this paper. The results indicate that the predictions were more accurate when the spatial parameters were used. Accordingly, we suggest that the spatial parameters of SDCLR be considered in debris-flow scale prediction, and we propose several strategies to prevent debris flows in the future

  13. A calculation and uncertainty evaluation method for the effective area of a piston rod used in quasi-static pressure calibration

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2018-04-01

    This paper describes the merits and demerits of different sensors for measuring propellant gas pressure, the applicable range of the frequently used dynamic pressure calibration methods, and the working principle of absolute quasi-static pressure calibration based on the drop-weight device. The main factors affecting the accuracy of pressure calibration are analyzed from the two aspects of the force sensor and the piston area. To calculate the effective area of the piston rod and to evaluate the uncertainty relating the force-sensor reading to the corresponding peak pressure in the absolute quasi-static pressure calibration process, a method based on the least squares principle is proposed. From the relevant quasi-static pressure calibration experimental data, the least squares fitting model between the peak force and the peak pressure, and the effective area of the piston rod and its measurement uncertainty, are obtained. The fitting model is tested with an additional group of experiments, and the peak pressure obtained by the existing high-precision comparison calibration method is taken as the reference value. The test results show that the peak pressure obtained by the least squares fitting model is closer to the reference value than the one calculated directly from the cross-sectional area of the piston rod. When the peak pressure is higher than 150 MPa, the percentage difference is less than 0.71%, which can meet the requirements of practical application.
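
    The core of the proposed approach is an ordinary least squares fit between peak force and peak pressure, whose slope is the effective area. The sketch below performs that fit and propagates the residual scatter into a standard uncertainty of the slope; the inclusion of an intercept term and the illustrative pressure/force values are assumptions, since the paper's exact model is not reproduced in the abstract.

        import numpy as np

        def effective_area_least_squares(peak_pressure_mpa, peak_force_kn):
            """Fit F = A_eff * p + b by least squares; return A_eff, u(A_eff), b.

            With pressure in MPa and force in kN, A_eff comes out in kN/MPa,
            i.e. 1e-3 m^2 (= 1000 mm^2).
            """
            p = np.asarray(peak_pressure_mpa, float)
            f = np.asarray(peak_force_kn, float)
            X = np.column_stack([p, np.ones_like(p)])
            coef, res, *_ = np.linalg.lstsq(X, f, rcond=None)
            a_eff, b = coef
            s2 = res[0] / (len(p) - 2) if res.size else 0.0   # residual variance
            cov = s2 * np.linalg.inv(X.T @ X)
            return a_eff, float(np.sqrt(cov[0, 0])), b

        p = np.array([50.0, 100.0, 150.0, 200.0, 250.0])                   # MPa (illustrative)
        f = 0.485 * p + np.random.default_rng(1).normal(0.0, 0.3, p.size)  # kN
        print(effective_area_least_squares(p, f))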

  14. Radiocarbon Analysis to Calculate New End-Member Values for Biomass Burning Source Samples Specific to the Bay Area

    Science.gov (United States)

    Yoon, S.; Kirchstetter, T.; Fairley, D.; Sheesley, R. J.; Tang, X.

    2017-12-01

    Elemental carbon (EC), also known as black carbon or soot, is an important particulate air pollutant that contributes to climate forcing through absorption of solar radiation and to adverse human health impacts through inhalation. Both fossil fuel combustion and biomass burning, via residential firewood burning, agricultural burning, wildfires, and controlled burns, are significant sources of EC. Our ability to successfully control ambient EC concentrations requires understanding the contributions of these different emission sources. Radiocarbon (14C) analysis has been increasingly used as an apportionment tool to distinguish between EC from fossil fuel and biomass combustion sources. However, there are uncertainties associated with this method, including: 1) uncertainty associated with the isolation of the EC to be used for radiocarbon analysis (e.g., inclusion of organic carbon, blank contamination, recovery of EC); and 2) uncertainty associated with the radiocarbon signature of the end member. The objective of this research project is to utilize laboratory experiments to evaluate some of these uncertainties, particularly for EC sources that significantly impact the San Francisco Bay Area. Source samples of EC only and of a mix of EC and organic carbon (OC) were produced for this study to represent known emission sources and to approximate the mixing of EC and OC that would be present in the atmosphere. These samples include a combination of methane flame soot, various wood smoke samples (e.g., cedar, oak, sugar pine, pine at various ages), meat cooking, and smoldering cellulose smoke. EC fractions were isolated using a Sunset Laboratory thermal-optical transmittance carbon analyzer. For 14C analysis, samples were sent to the Woods Hole Oceanographic Institution for isotope analysis using accelerator mass spectrometry. End-member values and uncertainties for the EC isolation utilizing this method will be reported.
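
    Once end-member values are fixed, the apportionment itself is a two-end-member mixing calculation on the measured radiocarbon content. The sketch below applies that standard relation; the contemporary biomass end member of 1.08 (fraction modern) is a commonly assumed placeholder, and refining such Bay-Area-specific end members is precisely what the study above aims to do.

        def biomass_fraction_from_14c(f14c_sample, f14c_biomass=1.08, f14c_fossil=0.0):
            """Two-end-member radiocarbon apportionment of elemental carbon.

            fraction_biomass = (F14C_sample - F14C_fossil) / (F14C_biomass - F14C_fossil)

            f14c_sample  : measured fraction modern of the isolated EC
            f14c_biomass : biomass-burning end member (assumed contemporary value)
            f14c_fossil  : fossil end member (radiocarbon-dead, 0.0)
            """
            return (f14c_sample - f14c_fossil) / (f14c_biomass - f14c_fossil)

        print(biomass_fraction_from_14c(0.54))  # -> 0.5, i.e. ~50% biomass-derived EC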

  15. The software program Peridose to calculate the fetal dose or dose to other critical structures outside the target area in radiation therapy

    International Nuclear Information System (INIS)

    Giessen, P.H. van der

    2001-01-01

    An accurate estimate of the dose outside the target area is of utmost importance when pregnant patients have to undergo radiotherapy, something that occurs in every radiotherapy department once in a while. Such peripheral doses (PD) are also of interest for late effects risk estimations for doses to specific organs as well as estimations of dose to pacemakers. A software program, Peridose, is described to allow easy calculation of this peripheral dose. The calculation is based on data from many publications on peripheral dose measurements, including those by the author. Clinical measurements have shown that by using data averaged over many measurements and different machine types PDs can be estimated with an accuracy of ± 60% (2 standard deviations). The program allows easy and fairly accurate estimates of peripheral doses in patients. Further development to overcome some of the constraints and limitations is desirable. The use of average data is to be preferred if general applicability is to be maintained. (author)

  16. Calculation of the Carbon Footprint to Determine Sustainability Status: A Comparative Analysis of Some Selected Planned and Unplanned Areas of Dhaka Megacity

    Science.gov (United States)

    Iqbal, S. M. S.

    2015-12-01

    Resource scarcity is considered to be one of the most serious issues plaguing Dhaka city. Because of the massive pressure of its increasing population (15.931 million), a very unsustainable situation awaits this city in the near future, so it is essential to know how far this city is from being sustainable. This paper embodies a comparative analysis of the carbon footprint of four different areas in Dhaka city. The carbon footprint is considered one of the most important key indicators of sustainability; it expresses the amount of biologically productive land required to produce all the resources consumed by an individual or a particular community. This research was conducted in both planned and unplanned areas of the city. Among the compound, component and direct methods, the component method was used to calculate the carbon footprint. Primary data were collected through a door-to-door questionnaire survey. A total of 371 samples were drawn from all the study areas at a 95% confidence level and a 5% confidence interval. The analysis showed that the per capita carbon footprint of the selected study areas exceeds the per capita biocapacity of Dhaka city, and that there is a large variation between the planned and unplanned areas of Old Dhaka and New Dhaka. The per capita carbon footprint of Gulshan and Jhigatola (part of New Dhaka) is higher than that of Gandaria and Wari (part of Old Dhaka), which means that resource stress is higher in Gulshan and Jhigatola than in Gandaria and Wari because of differences in daily consumption patterns. One of the most important findings of this study is that the per capita carbon footprint is highest in Gulshan (1.2407 gha) among all the study areas and is 85.56 times greater than the per capita biocapacity of Dhaka city (0.0145 gha), which means that a single resident of this area needs 1.2407 gha of land to support his/her demand on nature, but only 0.0145 gha of land (on average) is available for

  17. MosquitoMap and the Mal-area calculator: new web tools to relate mosquito species distribution with vector borne disease.

    Science.gov (United States)

    Foley, Desmond H; Wilkerson, Richard C; Birney, Ian; Harrison, Stanley; Christensen, Jamie; Rueda, Leopoldo M

    2010-02-18

    Mosquitoes are important vectors of diseases but, in spite of various mosquito faunistic surveys globally, there is a need for a spatial online database of mosquito collection data and distribution summaries. Such a resource could provide entomologists with the results of previous mosquito surveys, and vector disease control workers, preventative medicine practitioners, and health planners with information relating mosquito distribution to vector-borne disease risk. A web application called MosquitoMap was constructed comprising mosquito collection point data stored in an ArcGIS 9.3 Server/SQL geodatabase that includes administrative area and vector species x country lookup tables. In addition to the layer containing mosquito collection points, other map layers were made available including environmental, and vector and pathogen/disease distribution layers. An application within MosquitoMap called the Mal-area calculator (MAC) was constructed to quantify the area of overlap, for any area of interest, of vector, human, and disease distribution models. Data standards for mosquito records were developed for MosquitoMap. MosquitoMap is a public domain web resource that maps and compares georeferenced mosquito collection points to other spatial information, in a geographical information system setting. The MAC quantifies the Mal-area, i.e. the area where it is theoretically possible for vector-borne disease transmission to occur, thus providing a useful decision tool where other disease information is limited. The Mal-area approach emphasizes the independent but cumulative contribution to disease risk of the vector species predicted present. MosquitoMap adds value to, and makes accessible, the results of past collecting efforts, as well as providing a template for other arthropod spatial databases.

  18. Monte-Carlo calculation of the calibration factors for the interfacial area concentration and the velocity of the bubbles for double sensor conductivity probe

    International Nuclear Information System (INIS)

    Munoz-Cobo, J.L.; Pena, J.; Chiva, S.; Mendez, S.

    2007-01-01

    This paper presents a study of the estimation of the correction factors for the interfacial area concentration and the bubble velocity in two-phase flow measurements using the double sensor conductivity probe. Monte Carlo calculations of these correction factors have been performed for different values of the relative distance (ΔS/D) between the tips of the conductivity probe and different values of the relative bubble velocity fluctuation parameter. This paper also presents the Monte Carlo calculation of the expected values of the calibration factors for bubbly flow assuming a log-normal distribution of the bubble sizes. We have computed the variation of the expected values of the calibration factors with the relative distance (ΔS/D) between the tips and with the velocity fluctuation parameter. Finally, we have performed a sensitivity study of the variation of the average values of the calibration factors for bubbly flow with the geometrical standard deviation of the log-normal distribution of bubble sizes. The results of these calculations show that the total interfacial area correction factor is very close to 2 and depends very weakly on the velocity fluctuation and on the relative distance between tips. For the velocity calibration factor, the Monte Carlo results show that for moderate values of the relative bubble velocity fluctuation parameter (H max ≤ 0.3) and values of the relative distance between tips that are not too small (ΔS/D ≥ 0.2), the velocity correction factor for the double sensor conductivity probe is close to unity, ranging from 0.96 to 1
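
    A small Monte Carlo exercise makes the quoted value of 2 for the interfacial-area factor plausible. For a spherical bubble moving along the probe axis and pierced at a random offset uniform over its projected circle, the surface normal at the piercing point makes an angle θ with the axis, with cos θ = √(1 − (r/R)²), and E[1/cos θ] = 2. The sketch below samples this directly; it is an idealized geometric illustration only (spheres, purely axial motion, no velocity fluctuation or tip-spacing effects), not the paper's full model.

        import numpy as np

        def interfacial_area_factor_mc(n_samples=1_000_000, seed=0):
            """Monte Carlo estimate of E[1/cos(theta)] for random piercing of a sphere.

            With u = (r/R)**2 uniform on [0, 1] for an area-uniform offset on the
            projected disk, cos(theta) = sqrt(1 - u) and the expectation equals 2.
            """
            rng = np.random.default_rng(seed)
            u = rng.random(n_samples)
            cos_theta = np.sqrt(1.0 - u)
            return np.mean(1.0 / cos_theta)

        print(interfacial_area_factor_mc())   # close to 2.0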

  19. Mass change calculations of hydrothermal alterations within the volcanogenic metasediments hosted Cu-Pb (-Zn) mineralization at Halilar area, NW Turkey

    Science.gov (United States)

    Kiran Yildirim, Demet; Abdelnasser, Amr; Doner, Zeynep; Kumral, Mustafa

    2016-04-01

    The Halilar Cu-Pb (-Zn) mineralization, hosted in the volcanogenic metasediments of the Bagcagiz Formation in Balikesir Province, NW Turkey, is a locally vein-type deposit restricted to a NE-SW-trending fault gouge zone along the lower boundary between the Bagcagiz Formation and the Duztarla granitic intrusion in the study area. Furthermore, this granite is traversed by numerous mineralized sheeted vein systems, which locally transgress into the surrounding metasediments. The mineralization is therefore closely associated with intense hydrothermal alteration, brecciation, and quartz stockwork veining. The ore mineral assemblage includes chalcopyrite, galena, and some sphalerite with covellite and goethite, formed during three phases of mineralization (pre-ore, main ore, and supergene) within an abundant gangue of quartz and calcite. The geologic and field relationships and the petrographic and mineralogical studies reveal two alteration zones associated with the Cu-Pb (-Zn) mineralization along the contact between the Bagcagiz Formation and the Duztarla granite: pervasive phyllic alteration (quartz, sericite, and pyrite) and selective propylitic alteration (albite, calcite, epidote, sericite and/or chlorite). Using mass balance calculations, this work reports the mass/volume changes (gains and losses) of the chemical components of the hydrothermal alteration zones associated with the Halilar Cu-Pb (-Zn) mineralization in the Balikesir area (Turkey). It revealed that the phyllic alteration shows enrichment in Si, Fe, K, Ba, and LOI and depletion in Mg, Ca, and Na, reflecting sericitization of alkali feldspar and destruction of ferromagnesian minerals. This zone has high Cu and Pb (with Zn) contents and represents the main mineralized zone. On the other hand, the propylitic zone is characterized by the addition of Ca, Na, K, Ti, P, and Ba, together with LOI and Cu (in lower content), reflecting the replacement of plagioclase and ferromagnesian minerals by albite, calcite, epidote, and sericite
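
    The abstract does not name the specific mass-balance formulation; a common choice for such gain/loss calculations is Grant's isocon simplification of Gresens' equations, which rescales the altered-rock composition by the ratio of an element assumed immobile. The sketch below applies that relation; the choice of TiO2 as the immobile component and the whole-rock compositions are illustrative assumptions, not data from the paper.

        def mass_change_isocon(parent, altered, immobile="TiO2"):
            """Gains/losses of components in an altered rock relative to its parent,
            using the isocon form of Gresens' mass-balance equations:

                delta_C_i = (C_imm_parent / C_imm_altered) * C_i_altered - C_i_parent

            parent, altered : dicts of component concentrations (e.g. wt.% oxides)
            immobile        : component assumed immobile during alteration
            """
            scale = parent[immobile] / altered[immobile]
            return {k: scale * altered[k] - parent[k] for k in parent}

        # illustrative (made-up) compositions, wt.%
        parent  = {"SiO2": 60.0, "K2O": 1.5, "Na2O": 3.0, "TiO2": 0.80}
        altered = {"SiO2": 66.0, "K2O": 3.2, "Na2O": 0.9, "TiO2": 0.85}
        print(mass_change_isocon(parent, altered))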

  20. Strontium-90 contamination in vegetation from radioactive waste seepage areas at ORNL, and theoretical calculations of 90Sr accumulation by deer

    Energy Technology Data Exchange (ETDEWEB)

    Garten, C.T. Jr.; Lomax, R.D.

    1987-06-01

    This report describes data obtained during a preliminary characterization of 90Sr levels in browse vegetation from the vicinity of seeps adjacent to ORNL solid waste storage areas (SWSA) where deer (Odocoileus virginianus) were suspected to accumulate 90Sr through the food chain. The highest strontium concentrations in plant samples were found at seeps associated with SWSA-5. Strontium-90 concentrations in honeysuckle and/or blackberry shoots from two seeps in SWSA-5 averaged 39 and 19 nCi/g dry weight (DW), respectively. The maximum concentration observed was 90 nCi/g DW. Strontium-90 concentrations in honeysuckle and blackberry shoots averaged 7.4 nCi/g DW in a study area south of SWSA-4, and averaged 1.0 nCi/g DW in fescue grass from a seepage area located on SWSA-4. A simple model (based on metabolic data for mule deer) has been used to describe the theoretical accumulation of 90Sr in bone of whitetail deer following ingestion of contaminated vegetation. These model calculations suggest that if 30 pCi 90Sr/g deer bone is to be the accepted screening level for retaining deer killed on the reservation, then 5 pCi 90Sr/g DW vegetation should be considered as a possible action level in making decisions about the need for remedial measures, because unrestricted access and full utilization of vegetation contaminated with <5 pCi/g DW results in calculated steady-state (maximum) 90Sr bone concentrations of <30 pCi/g in a 45-kg buck.

  1. COMPARATIVE EVALUATION OF THE ANTIHYPERTENSIVE EFFECT OF PERINDOPRIL AND LOSARTAN POTASSIUM IN PATIENTS WITH ARTERIAL HYPERTENSION AND STENOTIC CORONARY ATHEROSCLEROSIS BEFORE REVASCULARIZATION: AN OPEN RANDOMIZED COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    O. A. Osipova

    2011-01-01

    Aim. To compare the effects of perindopril and losartan potassium on the parameters of ambulatory blood pressure monitoring (ABPM) and the circadian BP profile in patients with arterial hypertension (HT) and stenotic coronary atherosclerosis before myocardial revascularization. Material and methods. 59 patients with HT of degree 2-3, aged 35-69 years, were examined. ABPM was performed in all patients. The daily profile was assessed by the degree of nocturnal BP reduction. Patients were randomized to receive perindopril or losartan potassium. Perindopril was administered at a dose of 4 mg/day, subsequently increased to 8 mg/day over the next 7 days. The initial dose of losartan potassium was 25 mg, subsequently increased to 50 mg twice a day. The duration of observation was 8 weeks. Results. Perindopril reduced 24-hour and daytime systolic BP (SBP) by 17.2% (p<0.0001), nighttime SBP by 22.5% (p<0.0001), 24-hour and daytime diastolic BP (DBP) by 18.3% and 17.6% (p<0.0001), respectively, and nighttime DBP by 27.2% (p<0.0001). Losartan potassium reduced 24-hour SBP by 25.7% (p<0.0001), daytime SBP by 23.6% (p<0.0001), nighttime SBP by 25.5% (p<0.0001), 24-hour DBP by 27.4%, daytime DBP by 26.3%, and nighttime DBP by 18.5% (p=0.003). Perindopril decreased the number of non-dippers by 24.3% and night-peakers by 5.4%, and increased the number of dippers by 27% and over-dippers by 2.7%. The number of patients with a systolic BP profile corresponding to the non-dipper type was 45.5% higher with losartan than with perindopril (p=0.027). Conclusion. In patients with HT and stenotic coronary atherosclerosis, perindopril therapy increases the number of patients with a normal BP profile before myocardial revascularization.

  2. Hand calculation of safe separation distances between natural gas pipelines and boilers and nuclear facilities in the Hanford site 300 Area

    International Nuclear Information System (INIS)

    Daling, P.M.; Graham, T.M.

    1999-01-01

    The US Department of Energy has undertaken a project to reduce energy expenditures and improve energy system reliability in the 300 Area of the Hanford Site near Richland, Washington. This project replaced the centralized heating system with heating units for individual buildings or groups of buildings, constructed a new natural-gas distribution system to provide a fuel source for many of these units, and constructed a central control building to operate and maintain the system. The individual heating units include steam boilers that are housed in individual annex buildings located in the vicinity of a number of nuclear facilities operated by the Pacific Northwest National Laboratory (PNNL). The described analysis develops the basis for siting the package boilers and natural-gas distribution system used to supply steam to PNNL's 300 Area nuclear facilities. Minimum separation distances that would eliminate or reduce the risks of accidental dispersal of radioactive and hazardous materials in nearby nuclear facilities were calculated based on the effects of four potential fire and explosion (detonation) scenarios involving the boiler and natural-gas distribution system. These minimum separation distances were used to support siting decisions for the boilers and natural-gas pipelines
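
    The abstract states that minimum separation distances were calculated from fire and explosion (detonation) scenarios, but does not give the hand-calculation method. One standard screening approach for such estimates is TNT equivalency with Hopkinson-Cranz scaling; the sketch below combines a TNT-equivalent mass with Mills' side-on overpressure correlation and an assumed 1-psi (6.9 kPa) damage threshold. Every numerical value here (explosion yield, heat of combustion, TNT energy, threshold, gas inventory) is an illustrative assumption, not a figure from the Hanford analysis.

        import math

        def tnt_equivalent_mass(gas_mass_kg, heat_of_combustion_mj_per_kg=50.0,
                                explosion_yield=0.03, e_tnt_mj_per_kg=4.68):
            """TNT-equivalent mass of a vapour-cloud explosion (TNT-equivalency method)."""
            return explosion_yield * gas_mass_kg * heat_of_combustion_mj_per_kg / e_tnt_mj_per_kg

        def overpressure_kpa(distance_m, w_tnt_kg):
            """Side-on overpressure from Mills' correlation, scaled distance Z = R / W**(1/3)."""
            z = distance_m / w_tnt_kg ** (1.0 / 3.0)
            return 1772.0 / z**3 - 114.0 / z**2 + 108.0 / z

        def safe_separation(w_tnt_kg, limit_kpa=6.9):
            """Smallest distance at which the overpressure drops below the damage
            threshold, found by bisection over a wide bracket."""
            lo, hi = 1.0, 10_000.0
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if overpressure_kpa(mid, w_tnt_kg) > limit_kpa:
                    lo = mid
                else:
                    hi = mid
            return hi

        w = tnt_equivalent_mass(gas_mass_kg=100.0)   # hypothetical released inventory
        print(w, safe_separation(w))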

  3. Strontium-90 contamination in vegetation from radioactive waste seepage areas at ORNL, and theoretical calculations of 90Sr accumulation by deer

    International Nuclear Information System (INIS)

    Garten, C.T. Jr.; Lomax, R.D.

    1987-06-01

    This report describes data obtained during a preliminary characterization of 90Sr levels in browse vegetation from the vicinity of seeps adjacent to ORNL solid waste storage areas (SWSA) where deer (Odocoileus virginianus) were suspected to accumulate 90Sr through the food chain. The highest strontium concentrations in plant samples were found at seeps associated with SWSA-5. Strontium-90 concentrations in honeysuckle and/or blackberry shoots from two seeps in SWSA-5 averaged 39 and 19 nCi/g dry weight (DW), respectively. The maximum concentration observed was 90 nCi/g DW. Strontium-90 concentrations in honeysuckle and blackberry shoots averaged 7.4 nCi/g DW in a study area south of SWSA-4, and averaged 1.0 nCi/g DW in fescue grass from a seepage area located on SWSA-4. A simple model (based on metabolic data for mule deer) has been used to describe the theoretical accumulation of 90Sr in bone of whitetail deer following ingestion of contaminated vegetation. These model calculations suggest that if 30 pCi 90Sr/g deer bone is to be the accepted screening level for retaining deer killed on the reservation, then 5 pCi 90Sr/g DW vegetation should be considered as a possible action level in making decisions about the need for remedial measures, because unrestricted access and full utilization of vegetation contaminated with <5 pCi/g DW results in calculated steady-state (maximum) 90Sr bone concentrations of <30 pCi/g in a 45-kg buck

  4. Hemodynamics in stenotic vessels of small diameter under steady state conditions: Effect of viscoelasticity and migration of red blood cells.

    Science.gov (United States)

    Dimakopoulos, Yannis; Kelesidis, George; Tsouka, Sophia; Georgiou, Georgios C; Tsamopoulos, John

    2015-01-01

    In microcirculation, the non-Newtonian behavior of blood and the complexity of the microvessel network are responsible for the high flow resistance and the large reduction of the blood pressure. Red blood cell aggregation along with inward radial migration are two significant mechanisms determining the former. Yet, their impact on hemodynamics in non-straight vessels is not well understood. In this study, the steady-state blood flow in stenotic rigid vessels is examined, employing a sophisticated non-homogeneous constitutive law. The effect of red blood cell migration on the hydrodynamics is quantified and the constitutive model's accuracy is evaluated. A numerical algorithm based on the two-dimensional mixed finite element method and the EVSS/SUPG technique for a stable discretization of the mass and momentum conservation equations, in addition to the constitutive model, is employed. The numerical simulations show that a cell-depleted layer develops along the vessel wall with an almost constant thickness for slow flow conditions. This causes the reduction of the drag force and the increase of the pressure gradient as the constriction ratio decreases. Viscoelastic effects in blood flow were found to be responsible for steeper decreases of the tube and discharge hematocrits as the constriction ratio decreases.

  5. Vortex dynamics in Patient-Specific Stenotic Tricuspid and Bicuspid Aortic Valves pre- and post- Trans-catheter Aortic Valve Replacement

    Science.gov (United States)

    Hatoum, Hoda; Dasi, Lakshmi Prasad

    2017-11-01

    Understanding blood flow related adverse complications such as leaflet thrombosis post-transcatheter aortic valve implantation (TAVI) requires a deeper understanding of how patient-specific anatomic and hemodynamic factors, and relative valve positioning dictate sinus vortex flow and stasis regions. High resolution time-resolved particle image velocimetry measurements were conducted in compliant and transparent 3D printed patient-specific models of stenotic bicuspid and tricuspid aortic valve roots from patients who underwent TAVI. Using Lagrangian particle tracking analysis of sinus vortex flows and probability distributions of residence time and blood damage indices we show that (a) patient specific modeling provides a more realistic assessment of TAVI flows, (b) TAVI deployment alters sinus flow patterns by significantly decreasing sinus velocity and vorticity, and (c) relative valve positioning can control critical vortex structures that may explain preferential leaflet thrombosis corresponding to separated flow recirculation, secondary to valve jet vectoring relative to the aorta axis. This work provides new methods and understanding of the spatio-temporal aortic sinus vortex dynamics in post TAVI pathology. This study was supported by the Ohio State University DHLRI Trifit Challenge award.

  6. Using PIV to determine relative pressures in a stenotic phantom under steady flow based on the pressure-poisson equation.

    Science.gov (United States)

    Khodarahmi, Iman; Shakeri, Mostafa; Sharp, M; Amini, Amir A

    2010-01-01

    The pressure gradient across a Gaussian-shaped 87% area stenosis phantom was estimated by solving the pressure Poisson equation (PPE) for a steady flow mimicking the blood flow through the human iliac artery. The velocity field needed to solve the pressure equation was obtained using particle image velocimetry (PIV). A steady flow rate of 46.9 ml/s was used, which corresponds to Reynolds numbers of 188 and 595 at the inlet and stenosis throat, respectively (in the range of mean Reynolds numbers encountered in vivo). In addition, a computational fluid dynamics (CFD) simulation of the same flow was performed. Pressure drops across the stenosis predicted by PPE/PIV and CFD were compared with those measured by a pressure catheter transducer. RMS errors relative to the measurements were 17% and 10% for PPE/PIV and CFD, respectively.
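
    For readers who have not met the PPE approach, the idea is to take the divergence of the steady incompressible momentum equation, which leaves a Poisson equation for pressure whose source term is built entirely from the measured velocity gradients. The sketch below solves the 2D form by Jacobi iteration with homogeneous Neumann boundaries; real PIV processing would instead impose momentum-based boundary conditions, and the density and iteration count are illustrative assumptions.

        import numpy as np

        def pressure_from_piv(u, v, dx, dy, rho=1060.0, n_iter=5000):
            """Relative pressure field from a 2D steady PIV velocity field via the PPE:

                laplacian(p) = -rho * ( u_x**2 + 2*u_y*v_x + v_y**2 )

            u, v indexed as [i (x), j (y)] on a uniform grid; returns pressure in Pa,
            relative to its spatial mean.
            """
            ux, uy = np.gradient(u, dx, dy)
            vx, vy = np.gradient(v, dx, dy)
            rhs = -rho * (ux**2 + 2.0 * uy * vx + vy**2)

            p = np.zeros_like(u)
            dx2, dy2 = dx * dx, dy * dy
            for _ in range(n_iter):
                p[1:-1, 1:-1] = ((p[2:, 1:-1] + p[:-2, 1:-1]) * dy2 +
                                 (p[1:-1, 2:] + p[1:-1, :-2]) * dx2 -
                                 rhs[1:-1, 1:-1] * dx2 * dy2) / (2.0 * (dx2 + dy2))
                p[0, :], p[-1, :] = p[1, :], p[-2, :]      # zero-gradient boundaries
                p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
            return p - p.mean()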

  7. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems.

  8. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)

  9. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  10. Ambient Levels of Primary and Secondary Pollutants in a Residential Area: Population Risk and Hazard Index Calculation over a Three Years Study Period

    OpenAIRE

    S. Al-Salem; A. Al-Fadhlee

    2007-01-01

    This paper aims at presenting data collected over the period of three years (2004-2006) in a residential area in the state of Kuwait. The data collected include ambient levels of primary and secondary pollutants with a number of metrological parameters. A series of unfiltered and filtered concentration roses were plotted to determine the predominant sources as well as the prevailing winds affecting the area under investigation. Local and international air quality regulations were cross refere...

  11. CLASSICAL AREAS OF PHENOMENOLOGY: First-principles calculations for the elastic properties of Ni-base model superalloys: Ni/Ni3Al multilayers

    Science.gov (United States)

    Wang, Yun-Jiang; Wang, Chong-Yu

    2009-10-01

    A model system consisting of Ni[001](100)/Ni3Al[001](100) multilayers is studied using density functional theory in order to explore the elastic properties of single-crystal Ni-based superalloys. Simulation results are consistent with the experimental observation that rafted Ni-base superalloys virtually possess cubic symmetry. The convergence of the elastic properties with respect to the thickness of the multilayers is tested on a series of multilayers from 2γ'+2γ to 10γ'+10γ atomic layers. The elastic properties are found to vary little with increasing multilayer thickness. A Ni/Ni3Al multilayer with 10γ'+10γ atomic layers (3.54 nm) can be used to simulate the mechanical properties of Ni-base model superalloys. Our calculated elastic constants, bulk modulus, orientation-dependent shear modulus and Young's modulus, as well as the Zener anisotropy factor, are all compatible with the measured results of the Ni-base model superalloy R1 and the advanced commercial superalloys TMS-26 and CMSX-4 at low temperature. The mechanical properties as a function of the γ' phase volume fraction are calculated by varying the proportion of the γ and γ' phases in the multilayers. In addition, the mechanical properties of the two-phase Ni/Ni3Al multilayer can be well predicted by the Voigt-Reuss-Hill rule of mixtures.

  12. Calculation of the factor of the time's relativity in quantum area for different atoms based on the `Substantial motion' theory of Mulla Sadra

    Science.gov (United States)

    Gholibeigian, Hassan

    2015-03-01

    The Iranian philosopher Mulla Sadra (1571-1640), in his theory of "substantial motion", emphasized that "the universe moves in its entity" and that "time is the fourth dimension of the universe". This definition of space-time was proposed by him three hundred years before Einstein. He argued that time is the magnitude of the motion (momentum) of matter in its entity. In other words, the time for each atom (body) is the sum of the momenta of its constituent fundamental particles, and the momentum of each atom differs from that of other atoms. In this methodology, by proposing some formulas, we can calculate the involved particles' momentum (time) for each atom during one second of the Eastern Time Zone (ETZ). Owing to the differences between these momenta during a second in ETZ, the time for each atom will differ from that of other atoms. This is relativity in quantum physics. On the other hand, God communicates with elementary particles via sub-particles (see my next paper) and transfers packages (bits) of information and laws to them for processing and selection of their next step. Differences between packages, such as the complexity and speed of processing over time, constitute the second variable in the relativity of time for each atom, which may affect the factor.

  13. Polycyclic aromatic hydrocarbons (PAH), nickel and vanadium in air dust from Bahrein (Persian Gulf): Measurements and Puff model calculations for this area during the burning of the oil wells in Kuwait

    International Nuclear Information System (INIS)

    Vaessen, H.A.M.G.; Wilbers, A.A.M.M.; Jekel, A.A.; Van Pul, W.A.J.; Van der Meulen, A.; Bloemen, H.J.Th.; De Boer, J.L.M.

    1993-01-01

    When Kuwait's oil wells were on fire in 1991, air particulate matter (inhalable fraction) was sampled in Bahrain (soot clouds were over that region at the time) and analysed for PAHs, nickel (Ni) and vanadium (V). During that period, Puff model calculations were also carried out to forecast the dispersion of the combustion products and their impact on the environment in the Persian Gulf region. Based on the outcome of the model calculations and the analytical findings, the major conclusions are that: (a) the PAH contamination level of the air particulate matter is equal to or below that found for rural areas in the Netherlands and on average one order of magnitude below the findings of the model calculations; (b) there is no link between the air particulate matter content and the PAH contamination measured; the benzo(a)pyrene fraction of the PAH contamination is 10-14%, which is surprisingly constant; (c) the strongly significant correlation between the Ni and V contents, both mutually and with respect to the air particulate matter content, strongly suggests a common origin, i.e. the burning oil wells in Kuwait; (d) the air particulate matter content measured is one to two orders of magnitude above the findings of the model calculations; (e) the emission factors applied in the Puff model calculations most probably match the combustion conditions of burning oil wells insufficiently. 6 figs., 3 tabs.

  14. CONTAIN calculations

    International Nuclear Information System (INIS)

    Scholtyssek, W.

    1995-01-01

    In the first phase of a benchmark comparison, the CONTAIN code was used to calculate an assumed EPR accident 'medium-sized leak in the cold leg', especially for the first two days after initiation of the accident. The results for global characteristics compare well with those of FIPLOC, MELCOR and WAVCO calculations, if the same materials data are used as input. However, significant differences show up for local quantities such as flows through leakages. (orig.)

  15. Burnout calculation

    International Nuclear Information System (INIS)

    Li, D.

    1980-01-01

    The effect of different system parameters on the critical heat flux density is reviewed in order to give an initial view of the values of several parameters. A thorough analysis of different equations is carried out to calculate burnout in steam-water flows in uniformly heated tubes, annular and rectangular channels, and rod bundles. The effects of heat flux density distribution and flux twisting on burnout, and reserve determination with respect to burnout, are discussed [ru

  16. Calculator calculus

    CERN Document Server

    McCarty, George

    1982-01-01

    HOW THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...

  17. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined 18F-FDG PET/MR imaging

    International Nuclear Information System (INIS)

    Hyafil, Fabien; Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias; Sepp, Dominik; Hoehn, Sabine; Poppert, Holger; Bayer-Karpinska, Anna; Boeckh-Behrens, Tobias; Hacker, Marcus; Nekolla, Stephan G.; Rominger, Axel; Dichgans, Martin; Schwaiger, Markus

    2016-01-01

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral to the stroke, suggesting a causal

  18. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined {sup 18}F-FDG PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Hyafil, Fabien [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Bichat University Hospital, Department of Nuclear Medicine, Paris (France); Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias [Ludwig Maximilians University Hospital Munich, Institute for Clinical Radiology, Munich (Germany); Sepp, Dominik; Hoehn, Sabine; Poppert, Holger [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Bayer-Karpinska, Anna [Ludwig Maximilians University Hospital Munich, Institute for Stroke and Dementia Research, Munich (Germany); Boeckh-Behrens, Tobias [Technische Universitaet Muenchen, Department of Neuroradiology, Klinikum Rechts der Isar, Munich (Germany); Hacker, Marcus [Medical University of Vienna, Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Nekolla, Stephan G. [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Partner Site Munich Heart Alliance, German Centre for Cardiovascular Research (DZHK), Munich (Germany); Rominger, Axel [Ludwig Maximilians University Hospital Munich, Department of Nuclear Medicine, Munich (Germany); Dichgans, Martin [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Munich Cluster of Systems Neurology (SyNergy), Munich (Germany); Schwaiger, Markus [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany)

    2016-02-15

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and {sup 18}F-fluoro-deoxyglucose positron emission tomography ({sup 18}F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of {sup 18}F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. {sup 18}F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher {sup 18}F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher {sup 18}F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with {sup 18}F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral

  19. CALCULATION AND ANALYSIS OF THE EXPECTED IMPACT OF THE HYDROTECHNICAL CONSTRUCTION ON THE ENVIRONMENTAL CONDITION IN THE WATER AREA AND BOTTOM TOPOGRAPHY IN CASE OF THE CONSTRUCTION OF THE APPROACH CHANNEL TO THE SABETTA PORT

    Directory of Open Access Journals (Sweden)

    Vvedensky Alexei Rostislavovich

    2017-05-01

    Full Text Available The estimation of changes in the ecological situation and bottom topography in the Ob Bay, which may be caused by the construction of the approach channel to the port of Sabetta, was carried out with the help of the developed complex of numerical models. Particular attention was paid to the study of the displacement of the boundaries of the saline water expansion and the sediment accumulation in the channel area. It is established that the possible influence of the approach channel on the hydrologic-hydrochemical characteristics is less than their natural interannual and seasonal variability in the investigated water area. The change in the bottom topography after the construction of the approach channel does not entail a significant change in the regime of hydrologic-hydrochemical parameters of the Ob Bay and, consequently, should not affect the biocenosis. The calculated changes in the bottom topography in the approach channel area will not exceed 2 % of the depth per year and will not become a significant obstacle to the reliable and uninterrupted operation of the Sabetta port.

  20. Unusual Development of Iatrogenic Complex, Mixed Biliary and Duodenal Fistulas Complicating Roux-en-Y Antrectomy for Stenotic Peptic Disease of the Supraampullary Duodenum Requiring Whipple Procedure: An Uncommon Clinical Dilemma.

    Science.gov (United States)

    Polistina, Francesco A; Costantin, Giorgio; Settin, Alessandro; Lumachi, Franco; Ambrosino, Giovanni

    2010-10-23

    Complex fistulas of the duodenum and biliary tree are severe complications of gastric surgery. The association of duodenal and major biliary fistulas occurs rarely and is a major challenge for treatment. They may occur during virtually any kind of operation, but they are more frequent in cases complicated by the presence of difficult duodenal ulcers or cancer, with a mortality rate of up to 35%. Options for treatment are many and range from simple drainage to extended resections and difficult reconstructions. Conservative treatment is the choice for well-drained fistulas, but some cases require reoperation. Very little is known about reoperation techniques and technical selection of the right patients. We present the case of a complex iatrogenic duodenal and biliary fistula. A 42-year-old Caucasian man with a diagnosis of postoperative peritonitis had been operated on 3 days earlier; an antrectomy with a Roux-en-Y reconstruction for stenotic peptic disease was performed. Conservative treatment was attempted with mixed results. Two more operations were required to achieve a definitive resolution of the fistula and related local complications. The decision was made to perform a pancreatoduodenectomy with subsequent reconstruction on a double jejunal loop. The patient did well and was discharged on postoperative day 17. In our experience pancreaticoduodenectomy may be an effective treatment of refractory and complex iatrogenic fistulas involving both the duodenum and the biliary tree.

  1. Unusual Development of Iatrogenic Complex, Mixed Biliary and Duodenal Fistulas Complicating Roux-en-Y Antrectomy for Stenotic Peptic Disease of the Supraampullary Duodenum Requiring Whipple Procedure: An Uncommon Clinical Dilemma

    Directory of Open Access Journals (Sweden)

    Francesco A. Polistina

    2010-10-01

    Full Text Available Complex fistulas of the duodenum and biliary tree are severe complications of gastric surgery. The association of duodenal and major biliary fistulas occurs rarely and is a major challenge for treatment. They may occur during virtually any kind of operation, but they are more frequent in cases complicated by the presence of difficult duodenal ulcers or cancer, with a mortality rate of up to 35%. Options for treatment are many and range from simple drainage to extended resections and difficult reconstructions. Conservative treatment is the choice for well-drained fistulas, but some cases require reoperation. Very little is known about reoperation techniques and technical selection of the right patients. We present the case of a complex iatrogenic duodenal and biliary fistula. A 42-year-old Caucasian man with a diagnosis of postoperative peritonitis had been operated on 3 days earlier; an antrectomy with a Roux-en-Y reconstruction for stenotic peptic disease was performed. Conservative treatment was attempted with mixed results. Two more operations were required to achieve a definitive resolution of the fistula and related local complications. The decision was made to perform a pancreatoduodenectomy with subsequent reconstruction on a double jejunal loop. The patient did well and was discharged on postoperative day 17. In our experience pancreaticoduodenectomy may be an effective treatment of refractory and complex iatrogenic fistulas involving both the duodenum and the biliary tree.

  2. Lattice cell burnup calculation

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1977-01-01

    Accurate burnup prediction is a key item for the design and operation of a power reactor. It should supply information on isotopic changes at each point in the reactor core and the consequences of these changes on the reactivity, power distribution, kinetic characteristics, control rod patterns, fuel cycles and operating strategy. A basic stage in burnup prediction is the lattice cell burnup calculation. This series of lectures attempts to give a review of the general principles and calculational methods developed and applied in this area of burnup physics

  3. Calculation of cut-off values based on the Autoimmune Bullous Skin Disorder Intensity Score (ABSIS) and Pemphigus Disease Area Index (PDAI) pemphigus scoring systems for defining moderate, significant and extensive types of pemphigus.

    Science.gov (United States)

    Boulard, C; Duvert Lehembre, S; Picard-Dahan, C; Kern, J S; Zambruno, G; Feliciani, C; Marinovic, B; Vabres, P; Borradori, L; Prost-Squarcioni, C; Labeille, B; Richard, M A; Ingen-Housz-Oro, S; Houivet, E; Werth, V P; Murrell, D F; Hertl, M; Benichou, J; Joly, P

    2016-07-01

    Two pemphigus severity scores, Autoimmune Bullous Skin Disorder Intensity Score (ABSIS) and Pemphigus Disease Area Index (PDAI), have been proposed to provide an objective measure of disease activity. However, the use of these scores in clinical practice is limited by the absence of cut-off values that allow differentiation between moderate, significant and extensive types of pemphigus. To calculate cut-off values defining moderate, significant and extensive pemphigus based on the ABSIS and PDAI scores. In 31 dermatology departments in six countries, consecutive patients with newly diagnosed pemphigus were assessed for pemphigus severity, using ABSIS, PDAI, Physician's Global Assessment (PGA) and Dermatology Life Quality Index (DLQI) scores. Cut-off values defining moderate, significant and extensive subgroups were calculated based on the 25th and 75th percentiles of the ABSIS and PDAI scores. The median ABSIS, PDAI, PGA and DLQI scores of the three severity subgroups were compared in order to validate these subgroups. Ninety-six patients with pemphigus vulgaris (n = 77) or pemphigus foliaceus (n = 19) were included. The median PDAI activity and ABSIS total scores were 27·5 (range 3-84) and 34·8 points (range 0·5-90·5), respectively. The respective cut-off values corresponding to the first and third quartiles of the scores were 15 and 45 for the PDAI, and 17 and 53 for ABSIS. The moderate, significant and extensive subgroups were thus defined and had distinguishing median ABSIS, PDAI, PGA and DLQI scores. Cut-off values of 15 and 45 for PDAI and 17 and 53 for ABSIS are therefore proposed to distinguish moderate, significant and extensive pemphigus forms. Identifying these pemphigus activity subgroups should help physicians to classify and manage patients with pemphigus. © 2016 British Association of Dermatologists.
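
    A small sketch of the quartile-based grouping described above, using hypothetical score values; the study itself derived the cut-offs from the 25th and 75th percentiles of the observed PDAI and ABSIS distributions.

    import numpy as np

    def severity_cutoffs(scores):
        """Return (q25, q75), used as cut-offs between moderate / significant / extensive."""
        return np.percentile(scores, [25, 75])

    def classify(score, q25, q75):
        if score <= q25:
            return "moderate"
        if score <= q75:
            return "significant"
        return "extensive"

    # Hypothetical PDAI activity scores for a cohort
    pdai = np.array([5, 9, 12, 15, 20, 27, 30, 38, 45, 52, 60, 84], dtype=float)
    q25, q75 = severity_cutoffs(pdai)
    groups = [classify(s, q25, q75) for s in pdai]
    print(f"cut-offs: {q25:.0f} / {q75:.0f}")
    print(list(zip(pdai.tolist(), groups)))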

  4. Calculation of Mitral Valve Area in Mitral Stenosis: Comparison of Continuity Equation and Pressure Half Time With Two-Dimensional Planimetry in Patients With and Without Associated Aortic or Mitral Regurgitation or Atrial Fibrillation

    Directory of Open Access Journals (Sweden)

    Roya Sattarzadeh

    2018-01-01

    Full Text Available Accurate measurement of mitral valve area (MVA) is essential for determining mitral stenosis (MS) severity and for choosing the best management strategy for this disease. The goal of the present study is to compare MVA measurement by the continuity equation (CE) and pressure half-time (PHT) methods with that of 2D planimetry (PL) in patients with moderate to severe mitral stenosis (MS). This comparison was also performed in subgroups of patients with significant aortic insufficiency (AI), mitral regurgitation (MR) and atrial fibrillation (AF). We studied 70 patients with moderate to severe MS who were referred to the echocardiography clinic. MVA was determined by the PL, CE and PHT methods. The agreement and correlations between the MVAs obtained from the various methods were determined by the kappa index, Bland-Altman analysis, and linear regression analysis. The mean MVA calculated by CE was 0.81 cm² (±0.27) and showed good correlation with that calculated by PL (0.95 cm², ±0.26) in the whole population (r=0.771, P<0.001), the MR subgroup (r=0.763, P<0.001), and the normal sinus rhythm and normal valve subgroups (r=0.858, P<0.001 and r=0.867, P<0.001, respectively). However, the CE method did not show any correlation in the AF and AI subgroups. MVA measured by PHT had a good correlation with that measured by PL in the whole population (r=0.770, P<0.001) and also in the NSR (r=0.814, P<0.001) and normal valve (r=0.781, P<0.001) subgroups. The subgroups with significant AI and with significant MR showed moderate correlations (r=0.625, P=0.017 and r=0.595, P=0.041, respectively). Bland-Altman analysis showed that CE estimates MVA smaller than PL in the whole population and all subgroups, and that PHT estimates MVA larger than PL in the whole population and all subgroups; the mean biases for CE and PHT are 0.14 cm² and -0.06 cm², respectively. In patients with moderate to severe mitral stenosis, in the absence of concomitant AF, AI or MR, the accuracy
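
    For reference, the standard echocardiographic formulas behind the two methods compared above can be sketched as follows. The input values are hypothetical, and the constants (the 220 ms empirical factor for pressure half-time, the circular LVOT assumption for the continuity equation) are the textbook conventions rather than anything specific to this study.

    import math

    def mva_continuity(lvot_diameter_cm, vti_lvot_cm, vti_mv_cm):
        """Continuity equation: MVA = (CSA_LVOT * VTI_LVOT) / VTI_MV  [cm^2]."""
        csa_lvot = math.pi * (lvot_diameter_cm / 2.0) ** 2   # circular LVOT assumption
        return csa_lvot * vti_lvot_cm / vti_mv_cm

    def mva_pht(pht_ms):
        """Pressure half-time method: MVA = 220 / PHT  [cm^2], PHT in ms."""
        return 220.0 / pht_ms

    # Hypothetical measurements for a patient with moderate-to-severe MS
    print("CE  MVA: %.2f cm^2" % mva_continuity(2.0, 20.0, 60.0))   # ~1.05 cm^2
    print("PHT MVA: %.2f cm^2" % mva_pht(220.0))                    # 1.00 cm^2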

  5. Defining the stenotic post-laryngectomy tracheostoma and its impact on the quality of life in laryngectomees: development and validation of a stoma function questionnaire.

    Science.gov (United States)

    Paleri, V; Wight, R G; Owen, S; Hurren, A; Stafford, F W

    2006-10-01

    The aims of this study were to identify whether: (i) size of stoma contributes to quality of life (QoL) in laryngectomees; (ii) stoma size has an impact on routine stoma care and function; and (iii) an optimal stoma size exists below which patients experience stoma problems. Cross-sectional study of laryngectomees. Two tertiary care centres. Fifty-seven patients who had undergone total laryngectomy one to five years earlier and used tracheo-oesophageal speech as their primary means of communication. Three main measures were studied: (1) a new study-specific questionnaire designed to assess problems with function and care of the end tracheostoma; (2) QoL as assessed by the head and neck QoL instrument; (3) a precision custom-designed sizer to measure the minimum stoma diameter. The final study-specific questionnaire contained four items assessing different aspects of stomal function. From the raw total scores an overall stomal score was generated. The stomal score was moderately correlated with the emotion and speech domains of the head and neck QoL questionnaire, indicating that different concepts were being measured. The mean minimum stoma diameter was 15.9 ± 2.9 mm. There was a significant increase in the area under the receiver operating characteristic curve beyond a threshold value of ≥ 15 mm; smaller sizes were associated with a poorer stomal score (Mann-Whitney test). No patient found use of the stoma sizer distressing. Size of stoma significantly contributes to QoL in laryngectomees, and stomas with minimum diameters of 14 mm or less are associated with adverse effects on routine stoma function. The study-specific stoma function questionnaire appears to be a useful instrument.

  6. Iodine intake by adult residents of a farming area in Iwate Prefecture, Japan, and the accuracy of estimated iodine intake calculated using the Standard Tables of Food Composition in Japan.

    Science.gov (United States)

    Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako

    2016-11-01

    Iodine intake by adults in farming districts in northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between these two values is examined and a regression model for the calibration of calculated values is presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the value for iodine content was not available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between calculated and measured values, and the correlation coefficient was 0.646. The two sets of values, 336 μg/day (GM, calculated) and 279 μg/day (GM, measured), correlated so well that a regression model (Y = 130.8 + 1.9479X, where X and Y are measured and calculated values, respectively) could be used to calibrate calculated values.
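
    A minimal sketch of applying the reported regression model to calibrate an FCT-based (calculated) intake: inverting Y = 130.8 + 1.9479X gives an estimate of the measured-equivalent intake X from a calculated value Y, assuming the model is used in that direction. The example inputs are hypothetical.

    def calibrated_intake(calculated_ug_per_day):
        """Invert Y = 130.8 + 1.9479*X (Y = calculated, X = measured) to estimate
        a measured-equivalent iodine intake from an FCT-based calculated value."""
        return (calculated_ug_per_day - 130.8) / 1.9479

    for y in (400.0, 600.0, 800.0):   # hypothetical calculated intakes [ug/day]
        print(f"calculated {y:5.0f} ug/day  ->  calibrated estimate {calibrated_intake(y):5.0f} ug/day")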

  7. Closure and Sealing Design Calculation

    International Nuclear Information System (INIS)

    T. Lahnalampi; J. Case

    2005-01-01

    The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for sealing shafts, ramps, and identify boreholes that require sealing in order to limit the potential of water infiltration. In addition, this calculation will provide a description of the magma that can reduce the consequences of an igneous event intersecting the repository. This calculation will also include a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes that are found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures which show the location and geometry for the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. The closure and sealing calculation will also provide input for Post Closure Activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository and detailed design of the magma bulkhead will be the subject of separate analyses or calculations. Post-closure monitoring will not

  8. Development of an EDV-supported decision instrument for site-pre-selection of nuclear power plants. EDV-supported instrument for calculation of the space distribution of the collective dose rate and area contamination. Vol. 3. Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bruessermann, K; Eschhaus, M; Kreymborg, A; Muenster, M; Schommer, N

    1980-01-01

    Three FORTRAN-IV program systems have been developed and applied for calculating the radiation exposure due to the release of radioactive products through exhaust air and waste water. The documentation contains the materials from the regional data base, from the methods data base, as well as ecological background data.

  9. Calculation of the inventory and near-field release rates of radioactivity from neutron-activated metal parts discharged from the high flux isotope reactor and emplaced in solid waste storage area 6 at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Kelmers, A.D.; Hightower, J.R.

    1987-05-01

    Emplacement of contaminated reactor components involves disposal in lined and unlined auger holes in soil above the water table. The radionuclide inventory of the disposed components was calculated. Information on the composition and weight of the components, as well as reasonable assumptions for the neutron flux during use, the time of neutron exposure, and radioactive decay after discharge, were employed in the inventory calculation. Near-field release rates of /sup 152/Eu, /sup 154/Eu, and /sup 155/Eu from control plates and cylinders were calculated for 50 years after emplacement. Release rates of the europium isotopes were uncertain. Two release-rate-limiting models were considered and a range of reasonable values was assumed for the time-to-failure of the auger-hole liner and aluminum cladding and for europium solubility in SWSA-6 groundwater. The bounding europium radionuclide near-field release rates peaked at about 1.3 Ci/year total for /sup 152,154,155/Eu in 1987 for the lower bound, and at about 420 Ci/year in 1992 for the upper bound. The near-field release rates of /sup 55/Fe, /sup 59/Ni, /sup 60/Co, and /sup 63/Ni from stainless steel and cobalt alloy components, as well as of /sup 10/Be, /sup 41/Ca, and /sup 55/Fe from beryllium reflectors, were calculated for the next 100 years, assuming bulk waste corrosion was the release-rate-limiting step. Under the most conservative assumptions for the reflectors, the current (1986) total radionuclide release rate was calculated to be about 1.2 x 10/sup -4/ Ci/year, decreasing by 1992 to a steady release of about 1.5 x 10/sup -5/ Ci/year, due primarily to /sup 41/Ca. 50 refs., 13 figs., 8 tabs.

  10. Magnetic Field Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...

  11. CO2 flowrate calculator

    International Nuclear Information System (INIS)

    Carossi, Jean-Claude

    1969-02-01

    A CO2 flowrate calculator has been designed for measuring and recording the gas flow in the loops of the Pegase reactor. The analog calculator applies, at every moment, Bernoulli's formula to the values that characterize the carbon dioxide flow through a nozzle. The calculator electronics are described (including a sampling calculator and a two-variable function generator), with amplifiers, triggers, interpolator, multiplier, etc. Calculator operation and setting are presented
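
    The record gives only a functional description, so the following is just an illustrative sketch of the kind of relation such a nozzle flowmeter implements: Bernoulli's equation combined with continuity gives the volumetric flow from the measured differential pressure. The diameters, gas density and discharge coefficient below are hypothetical, not the Pegase instrumentation data.

    import math

    def nozzle_flowrate(dp_pa, d_throat_m, d_pipe_m, rho_kg_m3, cd=0.98):
        """Volumetric flow from a differential-pressure nozzle (Bernoulli + continuity):
           Q = Cd * A2 * sqrt( 2*dp / (rho * (1 - beta**4)) ),  beta = d_throat / d_pipe.
        """
        beta = d_throat_m / d_pipe_m
        a2 = math.pi * (d_throat_m / 2.0) ** 2
        return cd * a2 * math.sqrt(2.0 * dp_pa / (rho_kg_m3 * (1.0 - beta ** 4)))

    # Hypothetical CO2 loop conditions
    q = nozzle_flowrate(dp_pa=2.5e4, d_throat_m=0.05, d_pipe_m=0.10, rho_kg_m3=60.0)
    print("volumetric flow: %.3f m^3/s  (mass flow: %.2f kg/s)" % (q, q * 60.0))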

  12. Heterogeneous Calculation of {epsilon}

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Alf

    1961-02-15

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of {epsilon}. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer.

  13. Heterogeneous Calculation of ε

    International Nuclear Information System (INIS)

    Jonsson, Alf

    1961-02-01

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer

  14. PHEBUS-FPTO Benchmark calculations

    International Nuclear Information System (INIS)

    Shepherd, I.; Ball, A.; Trambauer, K.; Barbero, F.; Olivar Dominguez, F.; Herranz, L.; Biasi, L.; Fermandjian, J.; Hocke, K.

    1991-01-01

    This report summarizes a set of pre-test predictions made for the first Phebus-FP test, FPT-O. There were many different calculations, performed by various organizations, and they represent the first attempt to calculate the whole experimental sequence, from bundle to containment. Quantitative agreement between the various calculations was not good, but the particular models in the codes responsible for the disagreements were mostly identified. A consensus view was formed as to how the test would proceed. It was found that a successful execution of the test will require a different operating procedure than had been assumed here. Critical areas which require close attention are the need to devise a strategy for the power and flow in the bundle that takes account of uncertainties in the modelling and the shroud conductivity, and the necessity to develop a reliable method to achieve the desired thermal-hydraulic conditions in the containment

  15. Quantifying disbond area

    Science.gov (United States)

    Lowden, D. W.

    1992-10-01

    Disbonds simulated in a composite helicopter rotor blade were profiled using eddy currents. The method is inherently accurate and reproducible. An algorithm is described for calculating disbond margin. Disbond area is estimated assuming in-service disbondments exhibit circular geometry.
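
    A trivial sketch of the circular-geometry assumption mentioned above: the eddy-current profile yields an equivalent disbond diameter, from which the area and the remaining margin against an allowable disbond size follow. The allowable value used here is hypothetical.

    import math

    def disbond_area_cm2(equivalent_diameter_cm):
        """Disbond area assuming in-service disbondments are circular."""
        return math.pi * (equivalent_diameter_cm / 2.0) ** 2

    def disbond_margin_cm2(equivalent_diameter_cm, allowable_area_cm2):
        """Remaining margin before the allowable disbond area is reached."""
        return allowable_area_cm2 - disbond_area_cm2(equivalent_diameter_cm)

    d = 3.2                              # equivalent diameter from the eddy-current profile [cm]
    print("area   : %.1f cm^2" % disbond_area_cm2(d))
    print("margin : %.1f cm^2" % disbond_margin_cm2(d, allowable_area_cm2=20.0))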

  16. Core calculations of JMTR

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    1998-03-01

    In material testing reactors like the 50 MW JMTR (Japan Material Testing Reactor) of the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux and spectra has been investigated. In this study, in order to reduce the calculation time and variance, the results of calculations using the K-code and a fixed source, and the use of the Weight Window technique, were compared. As to the calculation method, the modeling of the total JMTR core, the conditions for calculation and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference in the calculated neutron flux was observed with different modeling of the fuel region, in either the K-code or the fixed-source calculations. The method of assessing the results of the neutron flux calculation is described. (K.I.)

  17. The calculation of absorbed dose rate in freshwater fish from high background natural radioactivity areas; Cálculo de taxa de dose absorvida em peixes de água doce de áreas de radioatividade natural aumentada

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, W.S.; Moraes, S.R.; Cavalcante, J.J.V.; Pinto, C.E.C. [Universidade Veiga de Almeida (UVA), Rio de Janeiro, RJ (Brazil); Kelecom, A. [Universidade Federal Fluminense (UFF), Niterói, RJ (Brazil)

    2017-07-01

    Areas of increased radiation may expose biota to radiation doses greater than the world averages and, depending on the magnitude of the exposure, may cause damage to biota. The region of the municipality of Caldas, Minas Gerais, Brazil, is considered a region of increased natural radioactivity. The present work aims to evaluate the exposure of biota to natural radionuclides in the region of Caldas, MG. To do so, the concentrations of the natural radionuclides U{sub nat}, {sup 226}Ra, {sup 210}Pb, {sup 232}Th and {sup 228}Ra were determined in two species of fish: lambari (Astyanax spp.) and traíra (Hoplias spp.). The calculated dose rates were 0.08 μGy d{sup -1} for Astyanax spp. and 0.12 μGy d{sup -1} for Hoplias spp. At these dose rates, no measurable deleterious effects are expected in the species studied.

  18. Electronics Environmental Benefits Calculator

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...

  19. Electrical installation calculations basic

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo

  20. Electrical installation calculations advanced

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio

  1. Radar Signature Calculation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...

  2. Waste Package Lifting Calculation

    International Nuclear Information System (INIS)

    H. Marr

    2000-01-01

    The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, naval waste package, 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)--short waste package, and 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, calculations, is used to develop and document this calculation

  3. Calculation of the Moments of Polygons.

    Science.gov (United States)

    1987-06-01

    [The abstract of this record is an OCR-garbled source-code excerpt. The recoverable content is a FORTRAN fragment that loops over polygon edges and accumulates, from the cross products of successive vertex coordinates (XK, YK and XKP1, YKP1), the polygon AREA, its centroid (ACENT) and its second moments (SECMOM).]
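
    A clean version of those edge-summation formulas (standard results, not a reconstruction of the report's exact code) is sketched below.

    def polygon_moments(xs, ys):
        """Area, centroid and second moments of area (about the origin axes)
        of a simple polygon given by vertices (xs[i], ys[i]) in order."""
        n = len(xs)
        A = cx = cy = ixx = iyy = 0.0
        for i in range(n):
            j = (i + 1) % n
            cross = xs[i] * ys[j] - xs[j] * ys[i]          # edge cross product
            A   += cross
            cx  += (xs[i] + xs[j]) * cross
            cy  += (ys[i] + ys[j]) * cross
            ixx += (ys[i]**2 + ys[i]*ys[j] + ys[j]**2) * cross
            iyy += (xs[i]**2 + xs[i]*xs[j] + xs[j]**2) * cross
        A /= 2.0
        return A, cx / (6.0 * A), cy / (6.0 * A), ixx / 12.0, iyy / 12.0

    # Unit square: area 1, centroid (0.5, 0.5), Ixx = Iyy = 1/3 about the origin axes
    print(polygon_moments([0, 1, 1, 0], [0, 0, 1, 1]))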

  4. PWR core design calculations

    International Nuclear Information System (INIS)

    Trkov, A.; Ravnik, M.; Zeleznik, N.

    1992-01-01

    A functional description of the programme package Cord-2 for PWR core design calculations is presented. The programme package is briefly described. Use of the package and calculational procedures for typical core design problems are treated. A comparison of the main results with experimental values is presented as part of the verification process. (author) [sl

  5. Uneconomical top calculation method

    International Nuclear Information System (INIS)

    De Noord, M.; Vanm Sambeek, E.J.W.

    2003-08-01

    The methodology used to calculate the financial gap of renewable electricity sources and technologies is described. This methodology is used for calculating the production subsidy levels (MEP subsidies) for new renewable electricity projects in 2004 and 2005 in the Netherlands [nl

  6. Control Areas

    Data.gov (United States)

    Department of Homeland Security — This feature class represents electric power Control Areas. Control Areas, also known as Balancing Authority Areas, are controlled by Balancing Authorities, who are...

  7. Dose calculation for electrons

    International Nuclear Information System (INIS)

    Hirayama, Hideo

    1995-01-01

    The joint ICRP/ICRU working group is advancing the work of reviewing ICRP Publication 51 by investigating the data related to radiation protection. In order to introduce the 1990 recommendation, calculations for neutrons, photons and electrons have been requested. As for electrons, EURADOS WG4 (Numerical Dosimetry) rearranged the data to be calculated at the meeting held at PTB Braunschweig in June 1992, and the questions and requests were presented by Dr. J.L. Chartier, the responsible person, to the researchers likely to undertake electron transport Monte Carlo calculations. The author also carried out the requested calculation, as it was a good opportunity for a mutual comparison among various computation codes for electron transport. The quantity the WG requested was the absorbed dose at depth d mm when a parallel electron beam enters at angle α into flat-plate phantoms of PMMA, water and ICRU four-element tissue placed in vacuum. The calculation was carried out with the versatile electron-photon shower Monte Carlo code EGS4. As results, depth-dose curves and the dependence of absorbed dose on electron energy, incident angle and material are reported. The subjects to be investigated further are pointed out. (K.I.)

  8. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL

    2015-01-01

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons

  9. Radioactive cloud dose calculations

    International Nuclear Information System (INIS)

    Healy, J.W.

    1984-01-01

    Radiological dosage principles, as well as methods for calculating external and internal dose rates following dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, the methods for calculating dose rates from ingestion are discussed. A brief description of several computer programs is included for information on radionuclides. There has been no attempt to be comprehensive, and only a sampling of programs has been selected to illustrate the variety available

  10. Handout on shielding calculation

    International Nuclear Information System (INIS)

    Heilbron Filho, P.F.L.

    1991-01-01

    In order to ease the difficulties faced by radioprotection supervisors in tasks related to shielding calculations, the basic concepts of shielding theory are presented in this paper. Exercises and examples are also included. (author)
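
    As a complement to the basic concepts the handout covers, here is a minimal sketch of the simplest shielding relation, exponential attenuation with a buildup factor, used to size a slab shield. The attenuation coefficient, buildup factor and dose-rate values are hypothetical illustrative numbers, not data from the handout.

    import math

    def transmitted_dose_rate(d0, mu_cm_inv, thickness_cm, buildup=1.0):
        """Attenuation with buildup:  D = D0 * B * exp(-mu * t)."""
        return d0 * buildup * math.exp(-mu_cm_inv * thickness_cm)

    def required_thickness(d0, d_target, mu_cm_inv, buildup=1.0):
        """Slab thickness needed to bring D0 down to D_target (same units)."""
        return math.log(d0 * buildup / d_target) / mu_cm_inv

    mu = 0.58      # hypothetical linear attenuation coefficient for the photon energy of interest [1/cm]
    d0 = 500.0     # unshielded dose rate [uSv/h], illustrative
    print("behind 5 cm of shield  : %.1f uSv/h" % transmitted_dose_rate(d0, mu, 5.0, buildup=2.0))
    print("thickness for 10 uSv/h : %.1f cm"    % required_thickness(d0, 10.0, mu, buildup=2.0))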

  11. Unit Cost Compendium Calculations

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

  12. PHYSICOCHEMICAL PROPERTY CALCULATIONS

    Science.gov (United States)

    Computer models have been developed to estimate a wide range of physical-chemical properties from molecular structure. The SPARC modeling system approaches calculations as site specific reactions (pKa, hydrolysis, hydration) and `whole molecule' properties (vapor pressure, boilin...

  13. Magnetic Field Grid Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator will compute the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...

  14. Intercavitary implants dosage calculation

    International Nuclear Information System (INIS)

    Rehder, B.P.

    The use of spatial geometry peculiar to each treatment for the attainment of intercavitary and interstitial implant dosage calculations is presented. The study is made in patients with intercavitary implants by applying a modified Manchester technique [pt

  15. Casio Graphical Calculator Project.

    Science.gov (United States)

    Stott, Nick

    2001-01-01

    Shares experiences of a project aimed at developing and refining programs written on a Casio FX9750G graphing calculator. Describes in detail some programs used to develop mental strategies and problem solving skills. (MM)

  16. Small portable speed calculator

    Science.gov (United States)

    Burch, J. L.; Billions, J. C.

    1973-01-01

    Calculator is adapted stopwatch calibrated for fast accurate measurement of speeds. Single assembled unit is rugged, self-contained, and relatively inexpensive to manufacture. Potential market includes automobile-speed enforcement, railroads, and field-test facilities.

  17. Calculativeness and trust

    DEFF Research Database (Denmark)

    Frederiksen, Morten

    2014-01-01

    Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust.... Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs...... to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...

  18. Activities for Calculators.

    Science.gov (United States)

    Hiatt, Arthur A.

    1987-01-01

    Ten activities that give learners in grades 5-8 a chance to explore mathematics with calculators are provided. The activity cards involve such topics as odd addends, magic squares, strange projects, and conjecturing rules. (MNS)

  19. IRIS core criticality calculations

    International Nuclear Information System (INIS)

    Jecmenica, R.; Trontl, K.; Pevec, D.; Grgic, D.

    2003-01-01

    The three-dimensional Monte Carlo computer code KENO-VI of the CSAS26 sequence of the SCALE-4.4 code system was applied to pin-by-pin calculations of the effective multiplication factor for the first-cycle IRIS reactor core. The effective multiplication factors obtained by the above-mentioned Monte Carlo calculations, using the 27-group ENDF/B-IV library and the 238-group ENDF/B-V library, have been compared with the effective multiplication factors obtained by HELIOS/NESTLE, CASMO/SIMULATE, and modified CORD-2 nodal calculations. The results of the Monte Carlo calculations are found to be in good agreement with the results obtained by the nodal codes. The discrepancies in the effective multiplication factor are typically within 1%. (author)

  20. Editorial: Challenges and solutions in GW calculations for complex systems

    Science.gov (United States)

    Giustino, F.; Umari, P.; Rubio, A.

    2012-09-01

    We report key advances in the area of GW calculations, review the available software implementations and define standardization criteria to render the comparison between GW calculations from different codes meaningful, and identify future major challenges in the area of quasiparticle calculations. This Topical Issue should be a reference point for further developments in the field.

  1. Current interruption transients calculation

    CERN Document Server

    Peelo, David F

    2014-01-01

    Provides an original, detailed and practical description of current interruption transients, origins, and the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,

  2. Source and replica calculations

    International Nuclear Information System (INIS)

    Whalen, P.P.

    1994-01-01

    The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem

  3. Shielding calculations using FLUKA

    International Nuclear Information System (INIS)

    Yamaguchi, Chiri; Tesch, K.; Dinter, H.

    1988-06-01

    The dose equivalent on the surface of concrete shielding has been calculated using the Monte Carlo code FLUKA86 for incident proton energies from 10 to 800 GeV. The results have been compared with some simple equations. The value of the angular dependent parameter in Moyer's equation has been calculated from the locations where the values of the maximum dose equivalent occur. (author)

  4. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross-section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
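
    For small perturbations, the sensitivity/uncertainty machinery referred to above reduces to the first-order "sandwich rule" var(R) = S^T C S, where S holds the sensitivities of the response to each cross-section parameter and C is their covariance matrix. The numbers below are hypothetical.

    import numpy as np

    def response_uncertainty(sensitivities, covariance):
        """First-order (sandwich rule) propagation of nuclear-data covariances:
           relative std. dev. of R = sqrt(S^T C S), with S holding relative
           sensitivities (dR/R per dsigma/sigma) and C relative covariances."""
        s = np.asarray(sensitivities, dtype=float)
        c = np.asarray(covariance, dtype=float)
        return float(np.sqrt(s @ c @ s))

    # Hypothetical relative sensitivities of an integrated flux to three cross sections
    S = [0.8, -0.3, 0.1]
    # Hypothetical relative covariance matrix (variances on the diagonal)
    C = [[0.04, 0.01, 0.00],
         [0.01, 0.09, 0.00],
         [0.00, 0.00, 0.01]]
    print("relative uncertainty of the response: %.1f %%" % (100 * response_uncertainty(S, C)))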

  5. Online plasma calculator

    Science.gov (United States)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a web server where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by JAVA-based plugins. FastCGI also speeds up calculations over PHP-based systems. APOLLO is built upon the WT library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to using SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. This system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
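
    A small stand-alone sketch of the kind of quantities such a calculator returns, using SI units with temperatures in electron-volts as described above; the input values are arbitrary examples and this is not APOLLO's actual code.

    import math

    E    = 1.602176634e-19     # elementary charge [C]
    ME   = 9.1093837015e-31    # electron mass [kg]
    EPS0 = 8.8541878128e-12    # vacuum permittivity [F/m]

    def plasma_frequency(n_e_m3):
        """Electron plasma (angular) frequency [rad/s]."""
        return math.sqrt(n_e_m3 * E**2 / (EPS0 * ME))

    def debye_length(n_e_m3, t_e_ev):
        """Electron Debye length [m], with T_e given in eV."""
        return math.sqrt(EPS0 * t_e_ev * E / (n_e_m3 * E**2))

    def electron_gyrofrequency(b_tesla):
        """Electron cyclotron (angular) frequency [rad/s]."""
        return E * b_tesla / ME

    # Example: laboratory plasma with n_e = 1e19 m^-3, T_e = 10 eV, B = 0.5 T
    n_e, t_e, b = 1e19, 10.0, 0.5
    print("omega_pe = %.3e rad/s" % plasma_frequency(n_e))
    print("lambda_D = %.3e m"     % debye_length(n_e, t_e))
    print("omega_ce = %.3e rad/s" % electron_gyrofrequency(b))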

  6. Daylight calculations in practice

    DEFF Research Database (Denmark)

    Iversen, Anne; Roy, Nicolas; Hvass, Mette

    The aim of the project was to obtain a better understanding of what daylight calculations show and also to gain knowledge of how the different daylight simulation programs perform compared with each other. Experience has shown that results for the same room, obtained from two daylight simulation programs, can give different results. This can be due to restrictions in the program itself and/or be due to the skills of the persons setting up the models. This is crucial, as daylight calculations are used to document that the demands and recommendations for daylight levels outlined by building authorities are fulfilled. Furthermore the aim was to provide knowledge of how to build up the 3D models that were...

  7. Calculating Quenching Weights

    CERN Document Server

    Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim

    2003-01-01

    We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus collisions.

  8. Calculation of polarization effects

    International Nuclear Information System (INIS)

    Chao, A.W.

    1983-09-01

    Basically there are two areas of accelerator applications that involve beam polarization. One is the acceleration of a polarized beam (most likely a proton beam) in a synchrotron. Another concerns polarized beams in an electron storage ring. In both areas, numerical techniques have been very useful

  9. Three recent TDHF calculations

    International Nuclear Information System (INIS)

    Weiss, M.S.

    1981-05-01

    Three applications of TDHF are discussed. First, the vibrational spectrum of a post-grazing-collision 40Ca nucleus is examined and found to contain many high energy components, qualitatively consistent with recent Orsay experiments. Second, the fusion cross sections in energy and angular momentum are calculated for 16O + 24Mg to exhibit the parameters of the low-l window for this system. A sensitivity of the fusion cross section to the effective two-body potential is discussed. Last, a preliminary analysis of 86Kr + 139La at E_lab = 505 MeV calculated in the frozen approximation is displayed, compared to experiment and discussed

  10. Fission neutron multiplicity calculations

    International Nuclear Information System (INIS)

    Maerten, H.; Ruben, A.; Seeliger, D.

    1991-01-01

    A model for calculating neutron multiplicities in nuclear fission is presented. It is based on the solution of the energy partition problem as a function of mass asymmetry within a phenomenological approach including temperature-dependent microscopic energies. Nuclear structure effects on fragment de-excitation, which influence neutron multiplicities, are discussed. Temperature effects on microscopic energy play an important role in induced fission reactions. Calculated results are presented for various fission reactions induced by neutrons. Data cover the incident energy range 0-20 MeV, i.e. multiple-chance fission is considered. (author). 28 refs, 13 figs

  11. PWR core design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Trkov, A; Ravnik, M; Zeleznik, N [Inst. Jozef Stefan, Ljubljana (Slovenia)

    1992-07-01

    Functional description of the programme package CORD-2 for PWR core design calculations is presented. The programme package is briefly described, and the use of the package and the calculational procedures for typical core design problems are treated. Comparison of the main results with experimental values is presented as part of the verification process. (author) [Slovenian abstract, translated: The CORD-2 programme package, used in core design calculations for the operation of a pressurised-water reactor, is described. The use of the package and of the calculational procedures for typical core design problems is shown, and a comparison of the main results with experimental values is presented as part of the verification process. (author)]

  12. Calculating Student Grades.

    Science.gov (United States)

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
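
    A minimal sketch of the two grading schemes the article describes (the original programs were written in BASIC; this Python rendering, its break points and its letter boundaries are illustrative assumptions):

    from statistics import mean, stdev

    def grade_by_breakpoints(score, breakpoints=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
        """Assign a letter grade from teacher-defined break points."""
        for cutoff, letter in breakpoints:
            if score >= cutoff:
                return letter
        return "F"

    def grade_on_curve(score, all_scores):
        """Assign a letter grade from the score's z-score within the class distribution."""
        z = (score - mean(all_scores)) / stdev(all_scores)
        if z >= 1.5:
            return "A"
        if z >= 0.5:
            return "B"
        if z >= -0.5:
            return "C"
        if z >= -1.5:
            return "D"
        return "F"

    scores = [55, 62, 71, 78, 84, 88, 93]
    print([grade_by_breakpoints(s) for s in scores])
    print([grade_on_curve(s, scores) for s in scores])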

  13. Cardiovascular risk calculation

    African Journals Online (AJOL)

    James A. Ker

    2014-08-20

    smoking and elevated blood sugar levels (diabetes mellitus). These risk ... These are risk charts, e.g. FRS, a non-laboratory-based risk calculation, and ... for hard cardiovascular end-points, such as coronary death, myocardial ...

  14. Cooling tower calculations

    International Nuclear Information System (INIS)

    Simonkova, J.

    1988-01-01

    The problems of the dynamic calculation of cooling towers with forced and natural air draft are summarized. The quantities and relations characterizing the simultaneous exchange of momentum, heat and mass in evaporative water cooling by atmospheric air in the packings of cooling towers are given. The method of solution is explained for the calculation of evaporation criteria and thermal characteristics of countercurrent and cross-current cooling systems. The procedure for the calculation of cooling towers and of their correction curves is demonstrated, and the effect of the operating mode at constant air number or constant outlet air volume flow on the course of these curves in ventilator cooling towers is assessed. In cooling towers with natural air draft, the flow unevenness of water and air is assessed with respect to its effect on the resulting cooling efficiency of the towers. The calculation of thermal and resistance response curves and of cooling curves is demonstrated for hydraulically unevenly loaded towers, with the water flow rate parameter graded radially by 20% across the cross-section of the packing. Flow unevenness of air due to wind impact on the outlet air flow from the tower significantly affects the temperatures of cooled water in natural-draft cooling towers of designs with lower aerodynamic demands, already at a wind velocity of 2 m·s⁻¹, as was demonstrated on a specific example. (author). 11 figs., 10 refs
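
    For orientation, a minimal numerical sketch of a Merkel-type evaporation criterion for a counterflow fill (a generic textbook-style calculation, not the method of the report; the saturated-air enthalpy correlation and the input figures are illustrative assumptions):

    def merkel_number(t_cold, t_hot, h_air_in, lg_ratio, h_sat, c_pw=4.186, steps=200):
        """Merkel number Me = integral of c_pw dT / (h_sat(T) - h_air) over the cooling range."""
        dT = (t_hot - t_cold) / steps
        me, h_air = 0.0, h_air_in
        for i in range(steps):
            t_mid = t_cold + (i + 0.5) * dT
            h_air_mid = h_air + 0.5 * lg_ratio * c_pw * dT   # air enthalpy at mid-step
            me += c_pw * dT / (h_sat(t_mid) - h_air_mid)
            h_air += lg_ratio * c_pw * dT                    # counterflow energy balance
        return me

    # crude cubic fit for saturated-air enthalpy in kJ/kg dry air, usable roughly for 20-50 deg C
    h_sat_demo = lambda t: 4.7926 + 2.568 * t - 0.029834 * t**2 + 0.0016657 * t**3

    # water cooled from 38 to 28 deg C, inlet air enthalpy 60 kJ/kg, water/air mass flow ratio 1.2
    print(f"Me = {merkel_number(28.0, 38.0, 60.0, 1.2, h_sat_demo):.2f}")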

  15. Hypervelocity impact cratering calculations

    Science.gov (United States)

    Maxwell, D. E.; Moises, H.

    1971-01-01

    A summary is presented of prediction calculations on the mechanisms involved in hypervelocity impact cratering and the response of earth media. Considered are: (1) a one-gram lithium-magnesium alloy impacting basalt normally at 6.4 km/sec, and (2) a large terrestrial impact corresponding to that of Sierra Madera.

  16. Languages for structural calculations

    International Nuclear Information System (INIS)

    Thomas, J.B.; Chambon, M.R.

    1988-01-01

    The differences between human and computing languages are recalled. It is argued that they are to some extent structured in antagonistic ways. Languages in structural calculation, in the past, present, and future, are considered. The contribution of artificial intelligence is stressed [fr

  17. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
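
    A minimal sketch of the "direct" approach described above (an illustration on synthetic data, not the paper's algorithm): follow the neutron population in time and take the logarithmic derivative, here via a least-squares fit of ln N(t) against t.

    import math, random

    def direct_alpha(times, populations):
        """Slope of ln N versus t, i.e. alpha in N(t) ~ N0 * exp(alpha * t)."""
        logs = [math.log(n) for n in populations]
        t_bar = sum(times) / len(times)
        l_bar = sum(logs) / len(logs)
        num = sum((t - t_bar) * (l - l_bar) for t, l in zip(times, logs))
        den = sum((t - t_bar) ** 2 for t in times)
        return num / den

    random.seed(0)
    alpha_true, n0 = 50.0, 1.0e4                     # 1/s and initial population, illustrative
    times = [i * 1.0e-3 for i in range(100)]
    pops = [n0 * math.exp(alpha_true * t) * random.gauss(1.0, 0.05) for t in times]
    print(f"estimated alpha = {direct_alpha(times, pops):.1f} 1/s")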

  18. Reactor dynamics calculations

    International Nuclear Information System (INIS)

    Devooght, J.; Lefvert, T.; Stankiewiez, J.

    1981-01-01

    This chapter deals with the work done in reactor dynamics within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations by three groups in Belgium, Poland, Sweden and Italy. Discretization methods in diffusion theory, collision probability methods in time-dependent neutron transport and singular perturbation method are represented in this paper

  19. Equilibrium fission model calculations

    International Nuclear Information System (INIS)

    Beckerman, M.; Blann, M.

    1976-01-01

    In order to aid in understanding the systematics of heavy ion fission and fission-like reactions in terms of the target-projectile system, bombarding energy and angular momentum, fission widths are calculated using an angular momentum dependent extension of the Bohr-Wheeler theory and particle emission widths using angular momentum coupling

  20. Course on hybrid calculation

    International Nuclear Information System (INIS)

    Weill, J.; Tellier; Bonnemay; Craigne; Chareton; Di Falco

    1969-02-01

    After a definition of hybrid calculation (combination of analogue and digital calculation) with a distinction between series and parallel hybrid computing, and a description of a hybrid computer structure and of task sharing between computers, this course proposes a description of hybrid hardware used in Saclay and Cadarache computing centres, and of operations performed by these systems. The next part addresses issues related to programming languages and software. The fourth part describes how a problem is organised for its processing on these computers. Methods of hybrid analysis are then addressed: resolution of optimisation problems, of partial differential equations, and of integral equations by means of different methods (gradient, maximum principle, characteristics, functional approximation, time slicing, Monte Carlo, Neumann iteration, Fischer iteration)

  1. Calculation of projected ranges

    International Nuclear Information System (INIS)

    Biersack, J.P.

    1980-09-01

    The concept of multiple scattering is reconsidered for obtaining the directional spreading of ion motion as a function of energy loss. From this the mean projection of each pathlength element of the ion trajectory is derived which - upon summation or integration - leads to the desired mean projected range. In special cases, the calculation can be carried out analytically; otherwise a simple general algorithm is derived which is suitable even for the smallest programmable calculators. Necessary input for the present treatment consists only of generally accessible stopping power and straggling formulas. The procedure does not rely on scattering cross sections, e.g. power potential or f(t^(1/2)) approximations. The present approach lends itself easily to including electronic straggling, to treating composite target materials, or even to accounting for the so-called time integral. (orig.)
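
    A minimal sketch of the idea described above (illustrative assumptions, not Biersack's algorithm): each pathlength element dE/S(E) contributes its mean projection onto the initial direction, so the projected range is approximated by summing <cos θ>(E)·dE/S(E). Both the stopping power S(E) and the directional-spreading factor are user-supplied placeholder functions here.

    def projected_range(e0, stopping_power, cos_mean, steps=1000):
        """Approximate mean projected range for an ion of initial energy e0."""
        dE = e0 / steps
        rp = 0.0
        for i in range(steps):
            e_mid = e0 - (i + 0.5) * dE          # slow down from e0 toward zero
            rp += cos_mean(e_mid) * dE / stopping_power(e_mid)
        return rp

    # purely illustrative inputs: power-law stopping and a spreading factor that
    # decreases as the ion slows down
    s_demo   = lambda e: 25.0 * e ** 0.45            # eV per nm (made up)
    cos_demo = lambda e: 0.5 + 0.5 * (e / 1.0e5)     # 1 at e0 = 100 keV, 0.5 near rest
    print(f"R_p = {projected_range(1.0e5, s_demo, cos_demo):.1f} nm (illustrative)")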

  2. Spallation reactions: calculations

    International Nuclear Information System (INIS)

    Bertini, H.W.

    1975-01-01

    Current methods for calculating spallation reactions over various energy ranges are described and evaluated. Recent semiempirical fits to existing data will probably yield the most accurate predictions for these reactions in general. However, if the products in question have binding energies appreciably different from their isotopic neighbors and if the cross section is approximately 30 mb or larger, then the intranuclear-cascade-evaporation approach is probably better suited. (6 tables, 12 figures, 34 references) (U.S.)

  3. Performance assessment calculational exercises

    International Nuclear Information System (INIS)

    Barnard, R.W.; Dockery, H.A.

    1990-01-01

    The Performance Assessment Calculational Exercises (PACE) are an ongoing effort coordinated by the Yucca Mountain Project Office. The objectives of fiscal year 1990 work, termed PACE-90, as outlined in the Department of Energy Performance Assessment (PA) Implementation Plan, were to develop PA capabilities among Yucca Mountain Project (YMP) participants by calculating performance of a Yucca Mountain (YM) repository under "expected" and also "disturbed" conditions, to identify critical elements and processes necessary to assess the performance of YM, and to perform sensitivity studies on key parameters. It was expected that the PACE problems would aid in development of conceptual models and eventual evaluation of site data. The PACE-90 participants calculated transport of a selected set of radionuclides through a portion of Yucca Mountain for a period of 100,000 years. Results include analyses of fluid-flow profiles, development of a source term for radionuclide release, and simulations of contaminant transport in the fluid-flow field. Later work included development of a problem definition for perturbations to the originally modeled conditions and for some parametric sensitivity studies. 3 refs

  4. Anchorage Areas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — An anchorage area is a place where boats and ships can safely drop anchor. These areas are created in navigable waterways when ships and vessels require them for...

  5. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. Calculating Puddle Size

    Science.gov (United States)

    Burton, Megan; Mims, Patricia

    2012-01-01

    Learning through meaningful problem solving is integral in any successful mathematics program (Carpenter et al. 1999). The National Council of Teachers of Mathematics (NCTM) promotes the use of problem solving as a means to deepen understanding of all content areas within mathematics (NCTM 2000). This article describes a first-grade lesson that…

  7. Zero Temperature Hope Calculations

    International Nuclear Information System (INIS)

    Rozsnyai, B. F.

    2002-01-01

    The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since the experimental data at the high temperature region are scarce, comparisons of predictions with the ample zero temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an "average atom" (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the "aufbau" principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an "ion-sphere" model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the "detailed configuration accounting" (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature in the HOPE code which should be noted; any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, where electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations both in the initial and final states connecting photoexcitations, an enormous computational task. The Amaldi correction is an attempt to address this problem by distorting the

  8. Calculation of the inventory

    International Nuclear Information System (INIS)

    Heilbron Filho, P.F.L.; Oliveira Brandao, R. de.

    1988-04-01

    The theory of the point kernel applied to a source uniformly distributed in a cylindrical geometry was utilized to estimate the Cs-137 content of each package of radioactive waste collected. The Taylor equation was employed to calculate the build-up factor, and the Green function G was adjusted by means of a least-squares method. The theory also takes into account factors such as additional shielding, heterogeneity and humidity of the medium, as well as the associated uncertainties of the parameters involved. (author) [pt
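
    A minimal numerical sketch of the point-kernel idea (an illustration under assumptions, not the report's code; the Taylor coefficients, attenuation coefficient and source data below are placeholders rather than fitted Cs-137 values): the cylinder is discretized into volume elements, each treated as a point source attenuated exponentially and corrected with a Taylor-form build-up factor.

    import math

    def taylor_buildup(mu_r, A=11.0, a1=-0.104, a2=0.030):
        """Taylor-form build-up factor B = A*exp(-a1*mu_r) + (1-A)*exp(-a2*mu_r)."""
        return A * math.exp(-a1 * mu_r) + (1.0 - A) * math.exp(-a2 * mu_r)

    def cylinder_flux(radius, height, mu, s_v, detector, n_r=20, n_z=20, n_phi=24):
        """Flux at `detector` (x, y, z in m) from a uniform cylindrical source of
        strength s_v (photons/m3/s), summing attenuated point kernels with build-up."""
        flux = 0.0
        d_vol = (radius / n_r) * (height / n_z) * (2.0 * math.pi / n_phi)  # dr*dz*dphi
        for i in range(n_r):
            r = (i + 0.5) * radius / n_r
            for j in range(n_z):
                z = (j + 0.5) * height / n_z
                for k in range(n_phi):
                    phi = (k + 0.5) * 2.0 * math.pi / n_phi
                    x, y = r * math.cos(phi), r * math.sin(phi)
                    d = math.dist((x, y, z), detector)
                    # deliberate simplification: attenuate over the whole source-to-
                    # detector distance instead of only the path inside the medium
                    mu_d = mu * d
                    flux += (s_v * r * d_vol * taylor_buildup(mu_d)
                             * math.exp(-mu_d) / (4.0 * math.pi * d * d))
        return flux

    # drum-sized source, illustrative attenuation coefficient (1/m), detector 1 m off-axis
    print(cylinder_flux(radius=0.3, height=0.9, mu=5.0, s_v=1.0e6, detector=(1.0, 0.0, 0.45)))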

  9. Calculations in furnace technology

    CERN Document Server

    Davies, Clive; Hopkins, DW; Owen, WS

    2013-01-01

    Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi

  10. Deep penetration calculations

    International Nuclear Information System (INIS)

    Thompson, W.L.; Deutsch, O.L.; Booth, T.E.

    1980-04-01

    Several Monte Carlo techniques are compared in the transport of neutrons of different source energies through two different deep-penetration problems, each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG

  11. Weldon Spring dose calculations

    International Nuclear Information System (INIS)

    Dickson, H.W.; Hill, G.S.; Perdue, P.T.

    1978-09-01

    In response to a request by the Oak Ridge Operations (ORO) Office of the Department of Energy (DOE) for assistance to the Department of the Army (DA) on the decommissioning of the Weldon Spring Chemical Plant, the Health and Safety Research Division of the Oak Ridge National Laboratory (ORNL) performed limited dose assessment calculations for that site. Based upon radiological measurements from a number of soil samples analyzed by ORNL and from previously acquired radiological data for the Weldon Spring site, source terms were derived to calculate radiation doses for three specific site scenarios. These three hypothetical scenarios are: a wildlife refuge for hunting, fishing, and general outdoor recreation; a school with 40 hr per week occupancy by students and a custodian; and a truck farm producing fruits, vegetables, meat, and dairy products which may be consumed on site. Radiation doses are reported for each of these scenarios both for measured uranium daughter equilibrium ratios and for assumed secular equilibrium. Doses are lower for the nonequilibrium case

  12. Configuration space Faddeev calculations

    International Nuclear Information System (INIS)

    Payne, G.L.; Klink, W.H.; Polyzou, W.N.

    1989-01-01

    The detailed study of few-body systems provides one of the most effective means for studying nuclear physics at subnucleon distance scales. For few-body systems the model equations can be solved numerically with errors less than the experimental uncertainties. We have used such systems to investigate the size of relativistic effects, the role of meson-exchange currents, and the importance of quark degrees of freedom in the nucleus. Complete calculations for momentum-dependent potentials have been performed, and the properties of the three-body bound state for these potentials have been studied. Few-body calculations of the electromagnetic form factors of the deuteron and pion have been carried out using a front-form formulation of relativistic quantum mechanics. The decomposition of the operators transforming covariantly under the Poincare group into kinematical and dynamical parts has been studied. New ways for constructing interactions between particles, as well as interactions which lead to the production of particles, have been constructed in the context of a relativistic quantum mechanics. To compute scattering amplitudes in a nonperturbative way, classes of operators have been generated out of which the phase operator may be constructed. Finally, we have worked out procedures for computing Clebsch-Gordan and Racah coefficients on a computer, as well as giving procedures for dealing with the multiplicity problem

  13. Buoyant plume calculations

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C.; Edwards, L.L.

    1985-01-01

    Smoke from raging fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in surface temperatures. However, the extent of the decrease and even the sign of the temperature change, depend on how the smoke is distributed with altitude. We present a model capable of evaluating the initial distribution of lofted smoke above a massive fire. Calculations are shown for a two-dimensional slab version of the model and a full three-dimensional version. The model has been evaluated by simulating smoke heights for the Hamburg firestorm of 1943 and a smaller scale oil fire which occurred in Long Beach in 1958. Our plume heights for these fires are compared to those predicted by the classical Morton-Taylor-Turner theory for weakly buoyant plumes. We consider the effect of the added buoyancy caused by condensation of water-laden ground level air being carried to high altitude with the convection column as well as the effects of background wind on the calculated smoke plume heights for several fire intensities. We find that the rise height of the plume depends on the assumed background atmospheric conditions as well as the fire intensity. Little smoke is injected into the stratosphere unless the fire is unusually intense, or atmospheric conditions are more unstable than we have assumed. For intense fires significant amounts of water vapor are condensed raising the possibility of early scavenging of smoke particles by precipitation. 26 references, 11 figures

  14. Shielding calculations for NET

    International Nuclear Information System (INIS)

    Verschuur, K.A.; Hogenbirk, A.

    1991-05-01

    In the European Fusion Technology Programme there is only a small activity on research and development for fusion neutronics. Nevertheless, looking further than blanket design now, as ECN is getting involved in the design of radiation shields for the coils and biological shields, it becomes apparent that fusion neutronics as a whole still needs substantial development. Existing exact codes for the calculation of complex geometries like MCNP and DORT/TORT are pushed beyond the limits of their numerical capabilities, whilst approximate codes for complex geometries like FURNACE and MERCURE4 are pushed beyond the limits of their modelling capabilities. The main objective of this study is to find out how far we can get with existing codes in obtaining reliable values for the radiation levels inside and outside the cryostat/shield during operation and after shut-down. Starting with a 1D torus model for preliminary parametric studies, more-dimensional approximations of the torus or parts of it, including the main heterogeneities, should follow. Regular contacts with the NET team are kept, to stay aware of main changes in the NET design that might affect our calculation models. Work on the contract started 1 July 1990. The technical description of the contract is given. (author). 14 refs.; 4 figs.; 1 tab

  15. SR 97 - Radionuclide transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Lindgren, Maria [Kemakta Konsult AB, Stockholm (Sweden); Lindstroem, Fredrik [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    1999-12-01

    An essential component of a safety assessment is to calculate radionuclide release and dose consequences for different scenarios and cases. The SKB tools for such a quantitative assessment are used to calculate the maximum releases and doses for the hypothetical repository sites Aberg, Beberg and Ceberg for the initial canister defect scenario and also for the glacial melting case for Aberg. The reasonable cases, i.e. all parameters take reasonable values, result in maximum biosphere doses of 5×10⁻⁸ Sv/yr for Aberg, 3×10⁻⁸ Sv/yr for Beberg and 1×10⁻⁸ Sv/yr for Ceberg for the peat area. These doses lie significantly below 0.15 mSv/yr. (A dose of 0.15 mSv/yr for unit probability corresponds to the risk limit of 10⁻⁵ per year for the most exposed individuals recommended in regulations.) The conclusion that the maximum risk would lie well below 10⁻⁵ per year is also demonstrated by results from the probabilistic calculations, which directly assess the resulting risk by combining dose and probability estimates. The analyses indicate that the risk is 2×10⁻⁵ Sv/yr for Aberg, 8×10⁻⁷ Sv/yr for Beberg and 3×10⁻⁸ Sv/yr for Ceberg. The analysis shows that the most important parameters in the near field are the number of defective canisters and the instant release fraction. The influence from varying one parameter never changes the doses by as much as an order of magnitude. In the far field the most important uncertainties affecting release and retention are associated with permeability and connectivity of the fractures in the rock. These properties affect several parameters. Highly permeable and well connected fractures imply high groundwater fluxes and short groundwater travel times. Sparsely connected or highly variable fracture properties imply low flow-wetted surface along migration paths. It should, however, be remembered that the far-field parameters have little importance if the near-field parameters take their reasonable

  16. SR 97 - Radionuclide transport calculations

    International Nuclear Information System (INIS)

    Lindgren, Maria; Lindstroem, Fredrik

    1999-12-01

    An essential component of a safety assessment is to calculate radionuclide release and dose consequences for different scenarios and cases. The SKB tools for such a quantitative assessment are used to calculate the maximum releases and doses for the hypothetical repository sites Aberg, Beberg and Ceberg for the initial canister defect scenario and also for the glacial melting case for Aberg. The reasonable cases, i.e. all parameters take reasonable values, result in maximum biosphere doses of 5×10⁻⁸ Sv/yr for Aberg, 3×10⁻⁸ Sv/yr for Beberg and 1×10⁻⁸ Sv/yr for Ceberg for the peat area. These doses lie significantly below 0.15 mSv/yr. (A dose of 0.15 mSv/yr for unit probability corresponds to the risk limit of 10⁻⁵ per year for the most exposed individuals recommended in regulations.) The conclusion that the maximum risk would lie well below 10⁻⁵ per year is also demonstrated by results from the probabilistic calculations, which directly assess the resulting risk by combining dose and probability estimates. The analyses indicate that the risk is 2×10⁻⁵ Sv/yr for Aberg, 8×10⁻⁷ Sv/yr for Beberg and 3×10⁻⁸ Sv/yr for Ceberg. The analysis shows that the most important parameters in the near field are the number of defective canisters and the instant release fraction. The influence from varying one parameter never changes the doses by as much as an order of magnitude. In the far field the most important uncertainties affecting release and retention are associated with permeability and connectivity of the fractures in the rock. These properties affect several parameters. Highly permeable and well connected fractures imply high groundwater fluxes and short groundwater travel times. Sparsely connected or highly variable fracture properties imply low flow-wetted surface along migration paths. It should, however, be remembered that the far-field parameters have little importance if the near-field parameters take their reasonable values. In that case almost all

  17. Calculating graduation rates.

    Science.gov (United States)

    Starck, Patricia L; Love, Karen; McPherson, Robert

    2008-01-01

    In recent years, the focus has been on increasing the number of registered nurse (RN) graduates. Numerous states have initiated programs to increase the number and quality of students entering nursing programs, and to expand the capacity of their programs to enroll additional qualified students. However, little attention has been focused on an equally, if not more, effective method for increasing the number of RNs produced-increasing the graduation rate of students enrolling. This article describes a project that undertook the task of compiling graduation data for 15 entry-level programs, standardizing terms and calculations for compiling the data, and producing a regional report on graduation rates of RN students overall and by type of program. Methodology is outlined in this article. This effort produced results that were surprising to program deans and directors and is expected to produce greater collaborative efforts to improve these rates both locally and statewide.
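
    A minimal sketch of the arithmetic once terms are standardized (an assumption about the bookkeeping, not the project's actual method; the program labels and figures are invented): graduation rate per program and overall, defined as completers divided by students who enrolled.

    def graduation_rates(programs):
        """programs: dict name -> (graduates, enrolled). Returns per-program % and overall %."""
        per_program = {name: 100.0 * g / e for name, (g, e) in programs.items()}
        total_g = sum(g for g, _ in programs.values())
        total_e = sum(e for _, e in programs.values())
        return per_program, 100.0 * total_g / total_e

    rates, overall = graduation_rates({"ADN": (160, 220), "BSN": (240, 290), "Diploma": (35, 60)})
    print(rates, f"overall = {overall:.1f}%")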

  18. Mice take calculated risks.

    Science.gov (United States)

    Kheifets, Aaron; Gallistel, C R

    2012-05-29

    Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain.

  19. Smile esthetics: calculated beauty?

    Science.gov (United States)

    Lecocq, Guillaume; Truong Tan Trung, Lisa

    2014-06-01

    Esthetic demand from patients continues to increase. Consequently, the treatments we offer are moving towards more discreet or invisible techniques using lingual brackets in order to achieve harmonious, balanced results in line with our treatment goals. As orthodontists, we act upon relationships between teeth and bone. And the equilibrium they create impacts the entire face via the smile. A balanced smile is essential to an esthetic outcome and is governed by rules, which guide both the practitioner and patient. A smile can be described in terms of mathematical ratios and proportions but beauty cannot be calculated. For the smile to sit harmoniously within the face, we need to take into account facial proportions and the possibility of their being modified by our orthopedic appliances or by surgery. Copyright © 2014 CEO. Published by Elsevier Masson SAS. All rights reserved.

  20. Parallel computational in nuclear group constant calculation

    International Nuclear Information System (INIS)

    Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal

    2002-01-01

    In this paper a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is on the calculation of the collision matrix, which needs a large amount of computational time. The geometry treated here is a concentric cylinder. The calculation of the collision probability matrix is carried out semi-analytically using Bickley-Naylor functions. To accelerate the computation, several computers are used in parallel to solve the problem. Under Linux, the parallelization uses PVM software with C or Fortran; under Windows, socket programming with Delphi or C++ Builder is used. The calculation results show the importance of an optimal weight for each processor when processors of many different speeds are present
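
    A minimal sketch of the parallelisation idea (an assumption for illustration, not the authors' PVM or socket implementation; the row kernel is a placeholder standing in for the semi-analytic Bickley-Naylor integrals): rows of a collision-probability-like matrix are independent, so they can be farmed out to worker processes and gathered.

    from multiprocessing import Pool
    import math

    N_REGIONS = 200

    def row(i):
        # placeholder kernel: each matrix element depends only on the region pair (i, j)
        return [math.exp(-abs(i - j) / 10.0) / (1.0 + i + j) for j in range(N_REGIONS)]

    if __name__ == "__main__":
        with Pool() as pool:                      # one worker per available core
            matrix = pool.map(row, range(N_REGIONS))
        print(len(matrix), len(matrix[0]))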

  1. Dose rate calculations for a reconnaissance vehicle

    International Nuclear Information System (INIS)

    Grindrod, L.; Mackey, J.; Salmon, M.; Smith, C.; Wall, S.

    2005-01-01

    A Chemical Nuclear Reconnaissance System (CNRS) has been developed by the British Ministry of Defence to make chemical and radiation measurements on contaminated terrain using appropriate sensors and recording equipment installed in a Land Rover. A research programme is under way to develop and validate a predictive capability to calculate the build-up of contamination on the vehicle, radiation detector performance and dose rates to the occupants of the vehicle. This paper describes the geometric model of the vehicle and the methodology used for calculations of detector response. Calculated dose rates obtained using the MCBEND Monte Carlo radiation transport computer code in adjoint mode are presented. These address the transient response of the detectors as the vehicle passes through a contaminated area. Calculated dose rates were found to agree with the measured data to within the experimental uncertainties, thus giving confidence in the shielding model of the vehicle and its application to other scenarios. (authors)

  2. Relative Hazard Calculation Methodology

    International Nuclear Information System (INIS)

    DL Strenge; MK White; RD Stenner; WB Andrews

    1999-01-01

    The methodology presented in this document was developed to provide a means of calculating the RH ratios to use in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation)

  3. Experimental Young's modulus calculations

    International Nuclear Information System (INIS)

    Chen, Y.; Jayakumar, R.; Yu, K.

    1994-01-01

    The coil is a very important magnet component. The turn location and the coil size impact both the mechanical and magnetic behavior of the magnet. The Young's modulus plays a significant role in determining the coil location and size. Therefore, a Young's modulus study is essential in predicting both the analytical and practical magnet behavior. To determine the coil Young's modulus, an experiment has been conducted to measure azimuthal sizes of a half quadrant QSE101 inner coil under different loading. All measurements are made at four different positions along an 8-inch long inner coil. Each measurement is repeated three times to determine the reproducibility of the experiment. To ensure the reliability of this experiment, the same measurement is performed twice with a "dummy coil," which is made of G10 and has the same dimension and similar azimuthal Young's modulus as the inner coil. The difference between the G10 azimuthal Young's modulus calculated from the experiments and its known value from the manufacturer will be compared. Much effort has been expended in analyzing the experimental data to obtain a more reliable Young's modulus. Analysis methods include the error analysis method and the least square method

  4. Relativistic few body calculations

    International Nuclear Information System (INIS)

    Gross, F.

    1988-01-01

    A modern treatment of the nuclear few-body problem must take into account both the quark structure of baryons and mesons, which should be important at short range, and the relativistic exchange of mesons, which describes the long range, peripheral interactions. A way to model both of these aspects is described. The long range, peripheral interactions are calculated using the spectator model, a general approach in which the spectators to nucleon interactions are put on their mass-shell. Recent numerical results for a relativistic OBE model of the NN interaction, obtained by solving a relativistic equation with one particle on mass-shell, will be presented and discussed. Two meson-exchange models, one with only four mesons (π, σ, ρ, ω) but with a 25% admixture of γ5 coupling for the pion, and a second with six mesons (π, σ, ρ, ω, δ, η) but pure γ5γ^μ pion coupling, are shown to give very good quantitative fits to the NN scattering phase shifts below 400 MeV, and also a good description of the polarized p + 40Ca elastic scattering observables. Applications of this model to electromagnetic interactions of the two-body system, with emphasis on the determination of relativistic current operators consistent with the dynamics and the exact treatment of current conservation in the presence of phenomenological form factors, will be described. 18 refs., 8 figs

  5. Equilibrium calculations, ch. 6

    International Nuclear Information System (INIS)

    Deursen, A.P.J. van

    1976-01-01

    A calculation is presented of dimer intensities obtained in supersonic expansions. There are two possible limiting considerations; the dimers observed are already present in the source, in thermodynamic equilibrium, and are accelerated in the expansion. Destruction during acceleration is neglected, as are processes leading to newly formed dimers. On the other hand one can apply a kinetic approach, where formation and destruction processes are followed throughout the expansion. The difficulty of this approach stems from the fact that the density, temperature and rate constants have to be known at all distances from the nozzle. The simple point of view has been adopted and the measured dimer intensities are compared with the equilibrium concentration in the source. The comparison is performed under the assumption that the detection efficiency for dimers is twice the detection efficiency for monomers. The experimental evidence against the simple point of view that the dimers of the onset region are formed in the source already, under equilibrium conditions, is discussed. (Auth.)

  6. Configuration space Faddeev calculations

    International Nuclear Information System (INIS)

    Payne, G.L.; Klink, W.H.; Ployzou, W.N.

    1991-01-01

    The detailed study of few-body systems provides one of the most precise tools for studying the dynamics of nuclei. Our research program consists of a careful theoretical study of the nuclear few-body systems. During the past year we have completed several aspects of this program. We have continued our program of using the trinucleon system to investigate the validity of various realistic nucleon-nucleon potentials. Also, the effects of meson-exchange currents in nuclear systems have been studied. Initial calculations using the configuration-space Faddeev equations for nucleon-deuteron scattering have been completed. With modifications to treat relativistic systems, few-body methods can be applied to phenomena that are sensitive to the structure of the individual hadrons. We have completed a review of Relativistic Hamiltonian Dynamics in Nuclear and Particle Physics for Advances in Nuclear Physics. Although it is called a review, it is a large document that contains a significant amount of new research

  7. Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers

    Energy Technology Data Exchange (ETDEWEB)

    Gough, Sean T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-26

    The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.

  8. Mycoplasma pneumoniae and Chlamydia pneumoniae in calcified nodules of aortic stenotic valves Mycoplasma pneumoniae e Chlamydia pneumoniae nos focos de calcificação de valva aórtica estenótica

    Directory of Open Access Journals (Sweden)

    Maria de Lourdes HIGUCHI

    2002-07-01

    Full Text Available. Aortic valve stenosis (AVS) has been explained as an atherosclerotic process of the valve, as stenotic valves often exhibit inflammatory changes with infiltration of macrophages and T lymphocytes and with lipid infiltration. The present study investigated whether the bacteria Chlamydia pneumoniae (CP) and Mycoplasma pneumoniae (MP), detected previously in atherosclerotic plaques, are also present in AVS. Ten valves surgically removed from patients with AVS were analyzed by immunohistochemistry, in situ hybridization, and electron microscopy. The mean and standard deviation of the percentage areas occupied by CP antigens and MP DNA were, respectively, 6.21 +/- 5.41 and 2.27 +/- 2.06 in calcified foci; 2.8 +/- 3.33 and 1.78 +/- 3.63 in surrounding fibrotic areas; and 0.21 +/- 0.17 and 0.12 +/- 0.13 in less injured parts of the valve. There were higher amounts of CP and MP in the calcified foci and in the surrounding fibrosis than in more preserved valvular regions. In conclusion, the fact that there were greater amounts of CP and MP in calcification foci of AVS favors the hypothesis that AVS is not an inevitable degenerative process due to aging, but rather that it may be a response to the presence of these bacteria, similar to the morphology detected in atherosclerotic damage. [Portuguese abstract, translated: Aortic valve stenosis (AVS) has been considered an atherosclerotic process of the valves, since they frequently exhibit inflammatory changes with accumulation of macrophages and T lymphocytes, as well as lipid infiltration. The present study investigated whether the bacteria Chlamydia pneumoniae (CP) and Mycoplasma pneumoniae (MP), previously detected in atherosclerotic plaques, were present in AVS. Ten valves surgically removed from patients with AVS were analyzed by immunohistochemistry, in situ hybridization and electron microscopy. The mean and standard deviation of the percentage areas occupied by CP antigens and MP DNA were, respectively, 6.21 +/- 5.41 and 2...]

  9. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
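
    A minimal sketch of the Spearman-Brown prophecy step mentioned in the abstract, together with simple per-subject summaries (illustrative Python, not the PHP utility itself; the example figures are arbitrary):

    from statistics import mean, stdev

    def spearman_brown(r_single, k):
        """Reliability of the mean of k ratings, given the single-rating reliability r."""
        return k * r_single / (1.0 + (k - 1.0) * r_single)

    def summarize(ratings):
        """Mean, standard deviation and count of the ratings given to one subject."""
        return mean(ratings), stdev(ratings) if len(ratings) > 1 else 0.0, len(ratings)

    print(spearman_brown(0.45, 4))        # e.g. averaging four judges instead of one
    print(summarize([6, 7, 5, 6]))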

  10. Revitalization Areas

    Data.gov (United States)

    Department of Housing and Urban Development — Revitalization areas are HUD-designated neighborhoods in need of economic and community development and where there is already a strong commitment by the local...

  11. 700 Area

    Data.gov (United States)

    Federal Laboratory Consortium — The 700 Area of the Hanford Site is located in downtown Richland. Called the Federal Office Building, the Richland Operations Site Manager and the Richland Operations...

  12. Calculation of shielding parameters

    International Nuclear Information System (INIS)

    Montoya Z, J.

    1994-01-01

    To reduce the radiation hazard there are three basic factors: (a) time, since for a worker inside an area with a given dose rate the dose received is proportional to the time of permanence; (b) distance, since the dose decreases with the inverse square of the distance to the exposure point; and (c) shielding, which consists of interposing material between the source and the exposure point. The main part of the work develops the analysis of the distance and shielding parameters. The analysis consists of developing the mathematics implicit in the model of the radioactive source, starting from the source geometry, the distance to the exposure point, and the shielding configuration. In the final part a comparative study of the shielding-parameter calculations is carried out, employing two codes, CPBGAM and MICROSHIELD, the first of which was written as part of this thesis work. The point source is a good approximation to any real source, but in most of the proposed analyses the spatial distribution of the source must be treated explicitly. Shielding calculations for volumetric sources can be approximated starting from plane-source adaptations. It is important to keep in mind that shielding is not the only protection against radiation exposure; the parameters time and distance also play an important role. (Author)
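
    A minimal sketch of the three protection factors discussed above (illustrative assumptions only: the attenuation coefficient and build-up value are placeholders, not values from the thesis or from CPBGAM/MICROSHIELD): the accumulated dose falls linearly with the time spent, with the inverse square of the distance, and exponentially with the shield thickness.

    import math

    def dose(rate_at_1m, hours, distance_m, mu_per_cm=0.0, thickness_cm=0.0, buildup=1.0):
        """Accumulated dose for a point source, point-kernel style."""
        return (rate_at_1m * hours / distance_m**2) * buildup * math.exp(-mu_per_cm * thickness_cm)

    # 2 mSv/h at 1 m: effect of halving the time, doubling the distance, adding 5 cm of shield
    print(dose(2.0, 1.0, 1.0))                                   # 2.0 mSv
    print(dose(2.0, 0.5, 2.0))                                   # 0.25 mSv
    print(dose(2.0, 0.5, 2.0, mu_per_cm=0.5, thickness_cm=5.0))  # ~0.02 mSv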

  13. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), and documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  14. Fast reactor calculational route for Pu burning core design

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-01-01

    This document provides a description of a calculational route, used in the Reactor Physics Research Section for sensitivity studies and initial design optimization calculations for fast reactor cores. The main purpose in producing this document was to provide a description of and user guides to the calculational methods, in English, as an aid to any future user of the calculational route who is (like the author) handicapped by a lack of literacy in Japanese. The document also provides for all users a compilation of information on the various parts of the calculational route, all in a single reference. In using the calculational route (to model Pu burning reactors) the author identified a number of areas where an improvement in the modelling of the standard calculational route was warranted. The document includes comments on and explanations of the modelling assumptions in the various calculations. Practical information on the use of the calculational route and the computer systems is also given. (J.P.N.)

  15. ENRAF gauge reference level calculations

    Energy Technology Data Exchange (ETDEWEB)

    Huber, J.H., Fluor Daniel Hanford

    1997-02-06

    This document describes the method for calculating reference levels for Enraf Series 854 Level Detectors as installed in the tank farms. The reference level calculation for each installed level gauge is contained herein.

  16. HEU benchmark calculations and LEU preliminary calculations for IRR-1

    International Nuclear Information System (INIS)

    Caner, M.; Shapira, M.; Bettan, M.; Nagler, A.; Gilat, J.

    2004-01-01

    We performed neutronics calculations for the Soreq Research Reactor, IRR-1. The calculations were done for the purpose of upgrading and benchmarking our codes and methods. The codes used were mainly WIMS-D/4 for cell calculations and the three dimensional diffusion code CITATION for full core calculations. The experimental flux was obtained by gold wire activation methods and compared with our calculated flux profile. The IRR-1 is loaded with highly enriched uranium fuel assemblies, of the plate type. In the framework of preparation for conversion to low enrichment fuel, additional calculations were done assuming the presence of LEU fresh fuel. In these preliminary calculations we investigated the effect on the criticality and flux distributions of the increase of U-238 loading, and the corresponding uranium density.(author)

  17. MCNP and OMEGA criticality calculations

    International Nuclear Information System (INIS)

    Seifert, E.

    1998-04-01

    The reliability of OMEGA criticality calculations is shown by a comparison with calculations by the validated and widely used Monte Carlo code MCNP. The criticality of 16 assemblies with uranium as fissionable is calculated with the codes MCNP (Version 4A, ENDF/B-V cross sections), MCNP (Version 4B, ENDF/B-VI cross sections), and OMEGA. Identical calculation models are used for the three codes. The results are compared mutually and with the experimental criticality of the assemblies. (orig.)

  18. CALCULATION OF LASER CUTTING COSTS

    OpenAIRE

    Bogdan Nedic; Milan Eric; Marijana Aleksijevic

    2016-01-01

    The paper presents a description of methods of metal cutting and a calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, compar...

  19. Transient anisotropic magnetic field calculation

    International Nuclear Information System (INIS)

    Jesenik, Marko; Gorican, Viktor; Trlep, Mladen; Hamler, Anton; Stumberger, Bojan

    2006-01-01

    For anisotropic magnetic material, the nonlinear magnetic characteristics of the material are described by magnetization curves for different magnetization directions. The paper presents a transient finite element calculation of the magnetic field in anisotropic magnetic material based on the measured magnetization curves for different magnetization directions. For verification of the calculation method, some results of the calculation are compared with measurements.

  20. Transluminal angioplasty of a stenotic surgical splenorenal shunt

    International Nuclear Information System (INIS)

    Beers, B. van; Roche, A.; Cauquil, P.

    1988-01-01

    A stenosis of a side-to-side splenorenal shunt was treated by percutaneous angioplasty two years after the creation of the shunt. After dilatation, the splenorenal pressure gradient fell from 28 to 17 cm H₂O and good transanastomotic flow was re-established. As in other arterial and venous territories, angioplasty may be an interesting alternative to surgery. (orig.)

  1. Quiet areas

    DEFF Research Database (Denmark)

    Petersen, Rikke Munck

    2016-01-01

    This paper argues that drone filming can substantiate our understanding of multisensorial experiences of quiet areas and urban landscapes. Contrary to the distanced gaze often associated with the drone, this paper discusses drone filming as an intimate performativity apparatus that can affect … perception as a result of its interrelationships between motion, gaze, and sound. This paper uses four films, one of which is a drone flyover, to launch a discussion concerning a smooth and alluring gaze, a sliding gaze that penetrates landscapes, and site appearance. Films hold the capacity to project both … and transcendence can facilitate a deeper understanding of intimate sensations, substantiating their role in the future design and planning of urban landscapes. Hence, it addresses the ethics of an intimacy perspective (of drone filming) in the qualification of quiet areas.

  2. Invert Effective Thermal Conductivity Calculation

    International Nuclear Information System (INIS)

    M.J. Anderson; H.M. Wade; T.L. Mitchell

    2000-01-01

    The objective of this calculation is to evaluate the temperature-dependent effective thermal conductivities of a repository-emplaced invert steel set and surrounding ballast material. The scope of this calculation analyzes a ballast-material thermal conductivity range of 0.10 to 0.70 W/m·K, a transverse beam spacing range of 0.75 to 1.50 meters, and beam compositions of A516 carbon steel and plain carbon steel. Results from this calculation are intended to support calculations that identify waste package and repository thermal characteristics for Site Recommendation (SR). This calculation was developed by the Waste Package Department (WPD) under Office of Civilian Radioactive Waste Management (OCRWM) procedure AP-3.12Q, Revision 1, ICN 0, Calculations
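    The report evaluates the effective conductivity for a detailed invert geometry; as a much cruder point of reference, the classical series/parallel (Wiener) bounds for a two-component mixture can be computed as sketched below. The volume fractions and conductivities used are illustrative assumptions only.

        # Series/parallel (Wiener) bounds for the effective conductivity of a
        # two-component mixture. Illustrative values only; the report's invert
        # geometry is far more detailed than this simple bound.

        def wiener_bounds(k1, f1, k2, f2):
            """Return (lower, upper) bounds on effective conductivity [W/m-K]."""
            assert abs(f1 + f2 - 1.0) < 1e-9
            lower = 1.0 / (f1 / k1 + f2 / k2)   # layers in series with the heat flow
            upper = f1 * k1 + f2 * k2           # layers in parallel with the heat flow
            return lower, upper

        # Example: ballast at 0.3 W/m-K filling 95% of the invert volume and
        # carbon steel at ~50 W/m-K filling 5% (both numbers are assumptions).
        low, high = wiener_bounds(k1=0.3, f1=0.95, k2=50.0, f2=0.05)
        print(f"Effective conductivity between {low:.3f} and {high:.3f} W/m-K")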

  3. Global nuclear-structure calculations

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1990-01-01

    The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980's was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the ε₂ and ε₄ used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential

  4. Three dimensions transport calculations for PWR core

    International Nuclear Information System (INIS)

    Richebois, E.

    2000-01-01

    The objective of this work is to define improved 3-D core calculation methods based on transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interfaces and boundary conditions and in regions of high heterogeneity (bundles with absorber rods). In order to apply transport theory, a new method for calculating reflector constants has been developed, since traditional methods were only suited for 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis work, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's power reactor Saint-Laurent B1 with MOX loading. The advantages of a 3-D core transport calculation scheme have been highlighted as opposed to diffusion methods; there are a considerable number of significant effects and potential advantages to be gained in rod worth calculations, for instance. These preliminary results, obtained with one particular cycle, will have to be confirmed by more systematic analysis. Accidents like MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated; they constitute challenging situations where anisotropy is high and/or flux gradients are steep. This method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)

  5. CALCULATION OF LASER CUTTING COSTS

    Directory of Open Access Journals (Sweden)

    Bogdan Nedic

    2016-09-01

    The paper presents a description of metal cutting methods and a calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of cost-calculation models for unconventional cutting methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, comparison of the costs incurred by other unconventional methods, and provides documentation consisting of reports on the estimated costs.
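    A minimal sketch of the kind of per-part cost estimate such software automates is given below; the cost structure and every numerical value are assumptions made for illustration and do not reproduce the Kragujevac model.

        # Minimal sketch of a per-part laser-cutting cost estimate. The cost model
        # in the paper is more detailed; the structure and all numbers here are
        # assumed placeholders.

        def laser_cutting_cost(cut_length_m, feed_rate_m_min, machine_rate_eur_h,
                               labor_rate_eur_h, gas_cost_eur_h, power_kw,
                               electricity_eur_kwh):
            """Return the estimated cost (EUR) of cutting one part."""
            cutting_time_h = cut_length_m / feed_rate_m_min / 60.0
            hourly_cost = (machine_rate_eur_h + labor_rate_eur_h + gas_cost_eur_h
                           + power_kw * electricity_eur_kwh)
            return cutting_time_h * hourly_cost

        # Example with placeholder values: 6 m of cut at 2 m/min.
        cost = laser_cutting_cost(cut_length_m=6.0, feed_rate_m_min=2.0,
                                  machine_rate_eur_h=35.0, labor_rate_eur_h=20.0,
                                  gas_cost_eur_h=8.0, power_kw=6.0,
                                  electricity_eur_kwh=0.15)
        print(f"Estimated cost per part: {cost:.2f} EUR")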

  6. Calculation of Rydberg interaction potentials

    DEFF Research Database (Denmark)

    Weber, Sebastian; Tresp, Christoph; Menke, Henri

    2017-01-01

    … numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source.

  7. Helical tomotherapy shielding calculation for an existing LINAC treatment room: sample calculation and cautions

    International Nuclear Information System (INIS)

    Wu Chuan; Guo Fanqing; Purdy, James A

    2006-01-01

    This paper reports a step-by-step shielding calculation recipe for a helical tomotherapy unit (TomoTherapy Inc., Madison, WI, USA) recently installed in an existing Varian 600C treatment room. Both primary and secondary radiations (leakage and scatter) are explicitly considered. A typical patient load is assumed. The use factor is calculated from an analytical formula derived from the tomotherapy rotational beam delivery geometry. Leakage and scatter are included in the calculation based on corresponding measurement data documented by TomoTherapy Inc. Our calculation shows that, except for a small area by the therapists' console, most of the existing Varian 600C shielding is sufficient for the new tomotherapy unit. This work cautions other institutions facing a similar situation, where an HT unit is being considered for an existing LINAC treatment room, that more secondary shielding might be needed at some locations, owing to the significantly increased secondary shielding requirement of HT. (note)
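    For context, a generic NCRP-style barrier attenuation estimate of the type underlying such a recipe is sketched below; it is not the paper's tomotherapy-specific procedure, and the design goal, distance, workload, use factor and TVL values are placeholder assumptions.

        import math

        # Generic NCRP-151-style barrier attenuation estimate, not the specific
        # tomotherapy recipe from the paper. All numerical inputs are placeholders.

        def required_barrier_thickness(P_sv_wk, d_m, W_gy_wk, U, T, tvl_cm):
            """Return (transmission factor B, barrier thickness in cm)."""
            B = P_sv_wk * d_m**2 / (W_gy_wk * U * T)   # allowed transmission
            n_tvl = max(0.0, -math.log10(B))           # number of tenth-value layers
            return B, n_tvl * tvl_cm

        # Example: 0.1 mSv/week design goal behind a wall 6 m from isocentre,
        # 500 Gy/week workload, use factor 0.25, full occupancy, TVL 37 cm concrete.
        B, t = required_barrier_thickness(P_sv_wk=1e-4, d_m=6.0, W_gy_wk=500.0,
                                          U=0.25, T=1.0, tvl_cm=37.0)
        print(f"Transmission B = {B:.2e}, required thickness ≈ {t:.0f} cm concrete")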

  8. Educational audit on drug dose calculation learning in a Tanzanian ...

    African Journals Online (AJOL)

    Background: Patient safety is a key concern for nurses; the ability to calculate drug … Specific objectives were to assess learning from targeted teaching and to identify problem areas in perfor… … this could result in reduced risk of drug dose error in …

  9. Calculation of dose distribution above contaminated soil

    Science.gov (United States)

    Kuroda, Junya; Tenzou, Hideki; Manabe, Seiya; Iwakura, Yukiko

    2017-07-01

    The purpose of this study was to assess the relationship between altitude and the distribution of the ambient dose rate in the air over a soil decontamination area by using the PHITS simulation code. The geometry configuration was a 1000 m × 1000 m area, 1 m in soil depth, and 100 m in altitude from the ground, simulating the area of residences or a school ground. The contaminated region is assumed to be uniformly contaminated with Cs-137 γ radiation sources. The air dose distribution and spatial resolution were evaluated from the gamma-ray flux at each altitude: 1, 5, 10, and 20 m. The effect of decontamination was quantified by defining a sharpness S, the ratio of the average flux to the flux at the centre of the decontamination area at each altitude. The suitable flight altitude of the drone is found to be less than 15 m above a residence and 31 m above a school ground to confirm the decontamination effect. The calculation results can help determine a flight plan for a drone that minimizes the crash risk.
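    A toy version of the sharpness figure of merit described above is sketched below; the flux map is synthetic and the exact definition of S in the paper may differ in detail.

        import numpy as np

        # Illustrative sketch of the "sharpness" figure of merit described above,
        # taken here as the ratio of the area-averaged gamma flux to the flux above
        # the centre of the decontaminated region. The flux map below is synthetic,
        # not a PHITS result.

        def sharpness(flux_map, centre_index):
            """flux_map: 2-D array of flux values at one altitude."""
            return flux_map.mean() / flux_map[centre_index]

        # Synthetic example: a decontaminated (low-flux) patch in the middle of a
        # uniformly contaminated field, evaluated on a 100 x 100 grid.
        flux = np.ones((100, 100))
        flux[40:60, 40:60] = 0.2          # decontaminated area emits less
        S = sharpness(flux, centre_index=(50, 50))
        print(f"Sharpness S = {S:.2f}")   # S -> 1 as altitude grows and detail blurs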

  10. Calculation of transportation energy for biomass collection

    Energy Technology Data Exchange (ETDEWEB)

    Kanai, G.; Takekura, K.; Kato, H.; Kobayashi, Y.; Yakushido, K. [National Agricultural Research Center, Tsukuba, Ibaraki (Japan)

    2010-07-01

    This paper reported on a study at a rice straw facility in Japan that produces bioethanol. Simulation modeling and calculation methods were used to examine the characteristics of field-to-facility transportation. Fuel consumption was found to be influenced by the conversion rate from straw to ethanol, the quantity of straw collected, and the ratio of field area to the area around the facility. Standard conditions were assumed based on reported data and actual observations: 15 ML/yr ethanol production, 0.3 kL of ethanol output from 1 t of dry straw, 53.6 working days per year, 2.7 t truck load capacity, and 0.128 as the ratio of field area to the area around the facility. According to the calculations, collecting 50 kt of dry straw requires 2.78 L of fuel to transport 1 t of dry straw, 109.5 trucks, and a 19.1 km collection area radius. The fuel consumption for transportation was found to be proportional to the quantity of straw to the 0.5 power, and inversely proportional to the field-area ratio to the 0.5 power. The rate of increase in the number of trucks needed to collect the straw grows as the ratio of field area to the area around the facility decreases.
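    The reported scaling behaviour can be turned into a small estimator anchored to the quoted baseline case, as sketched below; treating the 0.5-power proportionalities as exact over a wide range is an assumption made purely for illustration.

        import math

        # Sketch of the scaling relations reported above, anchored to the quoted
        # baseline (2.78 L of fuel per tonne of dry straw for 50 kt collected at a
        # field-area ratio of 0.128). Treating the proportionalities as exact is an
        # assumption made only for illustration.

        BASE_FUEL_L_PER_T = 2.78
        BASE_QUANTITY_T = 50_000
        BASE_FIELD_RATIO = 0.128

        def fuel_per_tonne(quantity_t, field_ratio):
            """Fuel use [L/t] scaled from the baseline case."""
            return (BASE_FUEL_L_PER_T
                    * math.sqrt(quantity_t / BASE_QUANTITY_T)
                    * math.sqrt(BASE_FIELD_RATIO / field_ratio))

        # Example: doubling the collected quantity in a region with half the field ratio.
        print(f"{fuel_per_tonne(100_000, 0.064):.2f} L/t")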

  11. Calculators

    Science.gov (United States)

    ... ounces of regular beer, 5 ounces of table wine, or 1.5 ounces of 80-proof distilled spirits. Distilled spirits include vodka, whiskey, gin, rum, and ... is 5% alcohol by volume (alc/vol), table wine is about 12% alc/vol, and straight 80-proof distilled spirits is 40% alc/vol. The percent alcohol by ...

  12. Investigating scintillometer source areas

    Science.gov (United States)

    Perelet, A. O.; Ward, H. C.; Pardyjak, E.

    2017-12-01

    Scintillometry is an indirect ground-based method for measuring line-averaged surface heat and moisture fluxes on length scales of 0.5 - 10 km. These length scales are relevant to urban and other complex areas where setting up traditional instrumentation like eddy covariance is logistically difficult. In order to take full advantage of scintillometry, a better understanding of the flux source area is needed. The source area for a scintillometer is typically calculated as a convolution of point sources along the path. A weighting function is then applied along the path to compensate for a total signal contribution that is biased towards the center of the beam path, and decreasing near the beam ends. While this method of calculating the source area provides an estimate of the contribution of the total flux along the beam, there are still questions regarding the physical meaning of the weighted source area. These questions are addressed using data from an idealized experiment near the Salt Lake City International Airport in northern Utah, U.S.A. The site is a flat agricultural area consisting of two different land uses. This simple heterogeneity in the land use facilitates hypothesis testing related to source areas. Measurements were made with a two wavelength scintillometer system spanning 740 m along with three standard open-path infrared gas analyzer-based eddy-covariance stations along the beam path. This configuration allows for direct observations of fluxes along the beam and comparisons to the scintillometer average. The scintillometer system employed measures the refractive index structure parameter of air for two wavelengths of electromagnetic radiation, 880 μm and 1.86 cm to simultaneously estimate path-averaged heat and moisture fluxes, respectively. Meteorological structure parameters (CT2, Cq2, and CTq) as well as surface fluxes are compared for various amounts of source area overlap between eddy covariance and scintillometry. Additionally, surface

  13. Methods for tornado frequency calculation of nuclear power plant

    International Nuclear Information System (INIS)

    Liu Haibin; Li Lin

    2012-01-01

    In order to carry out a probabilistic safety assessment of tornado attack events at a nuclear power plant, a method for calculating the tornado frequency at a nuclear power plant is introduced, based on the HAD 101/10 and NUREG/CR-4839 references. The method can take into account the historical tornado frequency of the plant area, the construction dimensions, the variation of intensity along the tornado path, the area distribution, and so on, and it calculates the frequency of tornadoes of different scales. (authors)
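    A hedged sketch of a simple point-strike frequency estimate of the kind such methods refine is shown below; the region area, occurrence rates and damage areas are placeholders, not data from HAD 101/10 or NUREG/CR-4839.

        # Hedged sketch of a point-strike tornado frequency estimate: the regional
        # occurrence rate of each intensity class is scaled by the fraction of the
        # region swept by a typical damage path. All numbers are placeholders.

        REGION_AREA_KM2 = 10_000.0

        # intensity class: (tornadoes per year in the region, mean damage area in km2)
        tornado_classes = {
            "F0-F1": (2.0, 0.5),
            "F2-F3": (0.4, 3.0),
            "F4-F5": (0.02, 10.0),
        }

        total = 0.0
        for cls, (rate_per_yr, damage_area_km2) in tornado_classes.items():
            strike = rate_per_yr * damage_area_km2 / REGION_AREA_KM2
            total += strike
            print(f"{cls}: {strike:.2e} strikes per year at the site")

        print(f"Total point-strike frequency: {total:.2e} per year")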

  14. Core calculational techniques and procedures

    International Nuclear Information System (INIS)

    Romano, J.J.

    1977-10-01

    Described are the procedures and techniques employed by B and W in core design analyses of power peaking, control rod worths, and reactivity coefficients. Major emphasis has been placed on current calculational tools and the most frequently performed calculations over the operating power range

  15. Economic calculation in socialist countries

    NARCIS (Netherlands)

    Ellman, M.; Durlauf, S.N.; Blume, L.E.

    2008-01-01

    In the 1930s, when the classical socialist system emerged, economic decisions were based not on detailed and precise economic methods of calculation but on rough and ready political methods. An important method of economic calculation - particularly in the post-Stalin period - was that of

  16. Calculation of Spectra of Solids:

    DEFF Research Database (Denmark)

    Lindgård, Per-Anker

    1975-01-01

    The Gilat-Raubenheimer method simplified to tetrahedron division is used to calculate the real and imaginary part of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test for numerical accuracy...

  17. Calculator. Owning a Small Business.

    Science.gov (United States)

    Parma City School District, OH.

    Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…

  18. Shielding calculational system for plutonium

    International Nuclear Information System (INIS)

    Zimmerman, M.G.; Thomsen, D.H.

    1975-08-01

    A computer calculational system has been developed and assembled specifically for calculating dose rates in AEC plutonium fabrication facilities. The system consists of two computer codes and all nuclear data necessary for calculation of neutron and gamma dose rates from plutonium. The codes include the multigroup version of the Battelle Monte Carlo code for solution of general neutron and gamma shielding problems and the PUSHLD code for solution of shielding problems where low energy gamma and x-rays are important. The nuclear data consists of built in neutron and gamma yields and spectra for various plutonium compounds, an automatic calculation of age effects and all cross-sections commonly used. Experimental correlations have been performed to verify portions of the calculational system. (23 tables, 7 figs, 16 refs) (U.S.)

  19. DRY TRANSFER FACILITY CRITICALITY SAFETY CALCULATIONS

    International Nuclear Information System (INIS)

    C.E. Sanders

    2005-01-01

    This design calculation updates the previous criticality evaluation for the fuel handling, transfer, and staging operations to be performed in the Dry Transfer Facility (DTF) including the remediation area. The purpose of the calculation is to demonstrate that operations performed in the DTF and RF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Dry Transfer Facility Description Document'' (BSC 2005 [DIRS 173737], p. 3-8). A description of the changes is as follows: (1) Update the supporting calculations for the various Category 1 and 2 event sequences as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2005 [DIRS 171429], Section 7). (2) Update the criticality safety calculations for the DTF staging racks and the remediation pool to reflect the current design. This design calculation focuses on commercial spent nuclear fuel (SNF) assemblies, i.e., pressurized water reactor (PWR) and boiling water reactor (BWR) SNF. U.S. Department of Energy (DOE) Environmental Management (EM) owned SNF is evaluated in depth in the ''Canister Handling Facility Criticality Safety Calculations'' (BSC 2005 [DIRS 173284]) and is also applicable to DTF operations. Further, the design and safety analyses of the naval SNF canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. Also, note that the results for the Monitored Geologic Repository (MGR) Site specific Cask (MSC) calculations are limited to the

  20. Agriculture-related radiation dose calculations

    International Nuclear Information System (INIS)

    Furr, J.M.; Mayberry, J.J.; Waite, D.A.

    1987-10-01

    Estimates of radiation dose to the public must be made at each stage in the identification and qualification process leading to siting a high-level nuclear waste repository. Specifically considering the ingestion pathway, this paper examines questions of the reliability and adequacy of dose calculations in relation to five stages of data availability (geologic province, region, area, location, and mass balance) and three methods of calculation (population, population/food production, and food production driven). Calculations were done using the model PABLM with data for the Permian and Palo Duro Basins and the Deaf Smith County area. The findings are that the extra effort expended in gathering agricultural data at succeeding environmental characterization levels does not appear justified, since the dose estimates do not differ greatly; that the effort would be better spent determining the usage of the food types that contribute most to the total dose; and that the consumption rate and the air dispersion factor are critical to the assessment of radiation dose via the ingestion pathway. 17 refs., 9 figs., 32 tabs

  1. Vestibule and Cask Preparation Mechanical Handling Calculation

    International Nuclear Information System (INIS)

    Ambre, N.

    2004-01-01

    The scope of this document is to develop the size, operational envelopes, and major requirements of the equipment to be used in the vestibule, cask preparation area, and the crane maintenance area of the Fuel Handling Facility. This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC--28-01R W12101'' (Ref. 167124). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC--28-01R W12101; TDL No. 04-024'' (Ref. 168751). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process

  2. Reactor core performance calculating device

    International Nuclear Information System (INIS)

    Tominaga, Kenji; Bando, Masaru; Sano, Hiroki; Maruyama, Hiromi.

    1995-01-01

    The device of the present invention can calculate a power distribution efficiently and at high speed using a plurality of calculation means, while taking the reactor state quantities into consideration. Namely, an input device takes data from devices measuring the reactor core state quantities, such as the large number of neutron detectors disposed in the reactor core for monitoring the reactor state during operation. An input data distribution device comprises a state recognition section and a data distribution section. The state recognition section recognizes the kind and amount of the input data and information on the calculation means. The data distribution section analyzes the characteristics of the input data, divides the data into several groups, and allocates them to each of the calculation means for the purpose of calculating the reactor core performance efficiently and at high speed, based on the information from the state recognition section. The plurality of calculation means calculate the power distribution of each region to determine the power distribution of the entire reactor core. As a result, the reactor core can be evaluated with high accuracy and at high speed, whether for the whole reactor core or for a partial region. (I.S.)

  3. The calculation of CSF spaces in CT

    International Nuclear Information System (INIS)

    Hacker, H.; Artmann, H.

    1978-01-01

    Objective digital determination of CSF spaces is discussed, with ventricular and subarachnoid spaces handled separately. This method avoids the difficulty of visual definition of ventricular borders in planimetric measurements. The principle is to count automatically all pixels corresponding to CSF in a given region, identified by their Hounsfield unit values, and to multiply this number by the pixel size. This gives the total surface area of the CSF spaces in square millimeters. The calculation of pixel values for CSF spaces and brain tissue is experimentally formulated by taking the intersection of the Gaussian curves for ventricular content and brain tissue. In practice, the determination of CSF spaces is done by first calculating a histogram of the total brain in a given slice, defining all CSF spaces. Next, a histogram of a region including the ventricles with adjoining tissue is calculated and the ventricular size is determined. By subtracting the ventricle value from the total CSF space value, the subarachnoid space size is obtained. The advantages of this method will be discussed. (orig.)
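    The pixel-counting principle lends itself to a very short implementation; the sketch below (Python with NumPy) uses an assumed Hounsfield-unit window, an assumed pixel size and synthetic image data, whereas the paper derives its cut-off from the intersection of the CSF and brain-tissue Gaussian distributions.

        import numpy as np

        # Sketch of the pixel-counting idea described above: count all pixels in a
        # region whose Hounsfield value falls in a CSF window and multiply by the
        # pixel area. The HU window and pixel size are assumptions for illustration.

        def csf_area_mm2(slice_hu, region_mask, hu_low=0, hu_high=15,
                         pixel_area_mm2=0.25):
            """Return the CSF surface area (mm^2) within a masked region of one slice."""
            csf_pixels = (slice_hu >= hu_low) & (slice_hu <= hu_high) & region_mask
            return csf_pixels.sum() * pixel_area_mm2

        # Synthetic example: ventricular ROI vs. whole-brain mask.
        slice_hu = np.random.normal(30, 8, size=(256, 256))   # fake brain tissue
        slice_hu[100:130, 110:150] = 8                        # fake ventricle (CSF)
        whole_brain = np.ones_like(slice_hu, dtype=bool)
        ventricle_roi = np.zeros_like(whole_brain)
        ventricle_roi[90:140, 100:160] = True

        total_csf = csf_area_mm2(slice_hu, whole_brain)
        ventricular = csf_area_mm2(slice_hu, ventricle_roi)
        subarachnoid = total_csf - ventricular   # the subtraction step described above
        print(total_csf, ventricular, subarachnoid)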

  4. Calculation of CSF spaces in CT

    Energy Technology Data Exchange (ETDEWEB)

    Hacker, H; Artmann, H [Frankfurt Univ. (Germany, F.R.). Abt. fuer Neuroradiologie

    1978-01-01

    Objective digital determination of CSF spaces is discussed, with ventricular and subarachnoid spaces handled separately. This method avoids the difficulty of visual definition of ventricular borders in planimetric measurements. The principle is to count automatically all pixels corresponding to CSF in a given region, identified by their Hounsfield unit values, and to multiply this number by the pixel size. This gives the total surface area of the CSF spaces in square millimeters. The calculation of pixel values for CSF spaces and brain tissue is experimentally formulated by taking the intersection of the Gaussian curves for ventricular content and brain tissue. In practice, the determination of CSF spaces is done by first calculating a histogram of the total brain in a given slice, defining all CSF spaces. Next, a histogram of a region including the ventricles with adjoining tissue is calculated and the ventricular size is determined. By subtracting the ventricle value from the total CSF space value, the subarachnoid space size is obtained. The advantages of this method will be discussed.

  5. Alaska Village Electric Load Calculator

    Energy Technology Data Exchange (ETDEWEB)

    Devine, M.; Baring-Gould, E. I.

    2004-10-01

    As part of designing a village electric power system, the present and future electric loads must be defined, including both seasonal and daily usage patterns. However, in many cases, detailed electric load information is not readily available. NREL developed the Alaska Village Electric Load Calculator to help estimate the electricity requirements in a village given basic information about the types of facilities located within the community. The purpose of this report is to explain how the load calculator was developed and to provide instructions on its use so that organizations can then use this model to calculate expected electrical energy usage.
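    A toy version of such a facility-based estimate is sketched below; the facility categories and per-facility kW figures are invented placeholders, not values from the NREL calculator.

        # Toy version of a facility-based load estimate: each facility type gets an
        # assumed average demand, and the village total is the sum over facilities.
        # Categories and kW figures are placeholders, not values from the NREL tool.

        facility_loads_kw = {"residence": 1.2, "school": 25.0, "water_plant": 15.0,
                             "clinic": 8.0, "store": 6.0}

        village = {"residence": 120, "school": 1, "water_plant": 1,
                   "clinic": 1, "store": 2}

        average_load_kw = sum(count * facility_loads_kw[kind]
                              for kind, count in village.items())
        annual_energy_kwh = average_load_kw * 8760   # assumes the average holds year-round
        print(f"Average load ≈ {average_load_kw:.0f} kW, "
              f"annual energy ≈ {annual_energy_kwh:,.0f} kWh")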

  6. Reactor calculations and nuclear information

    International Nuclear Information System (INIS)

    Lang, D.W.

    1977-12-01

    The relationship of sets of nuclear parameters and the macroscopic reactor quantities that can be calculated from them is examined. The framework of the study is similar to that of Usachev and Bobkov. The analysis is generalised and some properties required by common sense are demonstrated. The form of calculation permits revision of the parameter set. It is argued that any discrepancy between a calculation and measurement of a macroscopic quantity is more useful when applied directly to prediction of other macroscopic quantities than to revision of the parameter set. The mathematical technique outlined is seen to describe common engineering practice. (Author)

  7. Practical astronomy with your calculator

    CERN Document Server

    Duffett-Smith, Peter

    1989-01-01

    Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr

  8. A new calculation of LAMOST optical vignetting

    Science.gov (United States)

    Li, Shuang; Luo, Ali; Chen, Jianjun; Liu, Genrong; Comte, Georges

    2012-09-01

    A new method to calculate the optical vignetting of LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) is presented. With the pilot survey of LAMOST under way, it is necessary to have a thorough and quantitative estimation and analysis of the observing efficiency, which is affected by various factors: the vignetting of the optical system of the telescope and the spectrograph, the focal instrument, and the site conditions. The wide field and large pupil of LAMOST, fed by a Schmidt reflecting mirror with a fixed optical axis coinciding with the local polar axis, lead to significant telescope vignetting, caused by the effective light-collecting area of the corrector, the light obstruction of the focal plate, and the size of the primary mirror. A calculation of the vignetting was presented by Xue et al. (2007), which considered a 4-meter circle limitation and was based on ray-tracing. In fact, the 4-meter circle limitation has no effect, so we compute the vignetting again by obtaining the ratio of the effective projected area of the corrector. All the results are derived with AUTOCAD. Moreover, the vignetting functions and the vignetting variations with the declination at which the telescope is pointed and with the position considered in the focal surface are presented and analysed. Finally, compared with the earlier ray-tracing method, the validity and availability of the proposed method are illustrated.

  9. Calculation of gas migration in fractured rock

    International Nuclear Information System (INIS)

    Thunvik, R.; Braester, C.

    1987-09-01

    Calculations are presented for rock properties characteristic of the Forsmark area. The rock permeability was determined by flow tests in vertical boreholes. It is assumed that the permeability distribution obtained from these boreholes is representative also of the permeability distribution along the repository cavern. Calculations were worked out for two different types of boundary conditions: one in which a constant gas flow rate equivalent to a gas production of 33000 kg/year was assumed, and the other in which a constant gas cushion of 0.5 metres was assumed. For the permeability distribution considered, breakthrough at the sea bottom occurred within one hour. The gas-water displacement took place mainly through the fractures of high permeability, and practically no flow took place in the fractures of low permeability. (orig./DG)

  10. Calculation of pion form factor

    International Nuclear Information System (INIS)

    Vahedi, N.; Amirarjomand, S.

    1975-09-01

    The pion form factor is calculated using the structure function Wsub(2), which incorporates kinematical constraints, threshold behaviour and scaling. The Bloom-Gilman sum rule is used and only the two leading Regge trajectories are taken into account

  11. Landfill Gas Energy Benefits Calculator

    Science.gov (United States)

    This page contains the LFG Energy Benefits Calculator to estimate direct, avoided, and total greenhouse gas reductions, as well as environmental and energy benefits, for a landfill gas energy project.

  12. Calculate Your Body Mass Index

    Science.gov (United States)

    Body mass index (BMI) is a measure of body fat based on height and weight. …
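    The underlying arithmetic is simply weight in kilograms divided by the square of height in metres; a minimal sketch, with the commonly used adult category cut-offs, is given below.

        # BMI is weight (kg) divided by the square of height (m); the category
        # boundaries below follow the commonly used adult cut-offs.

        def bmi(weight_kg, height_m):
            return weight_kg / height_m ** 2

        value = bmi(weight_kg=70.0, height_m=1.75)
        category = ("underweight" if value < 18.5 else
                    "normal" if value < 25 else
                    "overweight" if value < 30 else "obese")
        print(f"BMI = {value:.1f} ({category})")   # BMI = 22.9 (normal)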

  13. CONTAIN calculations; CONTAIN-Rechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Scholtyssek, W.

    1995-08-01

    In the first phase of a benchmark comparison, the CONTAIN code was used to calculate an assumed EPR accident 'medium-sized leak in the cold leg', especially for the first two days after initiation of the accident. The results for global characteristics compare well with those of FIPLOC, MELCOR and WAVCO calculations, if the same materials data are used as input. However, significant differences show up for local quantities such as flows through leakages. (orig.)

  14. Numerical calculations near spatial infinity

    International Nuclear Information System (INIS)

    Zenginoglu, Anil

    2007-01-01

    After describing in short some problems and methods regarding the smoothness of null infinity for isolated systems, I present numerical calculations in which both spatial and null infinity can be studied. The reduced conformal field equations based on the conformal Gauss gauge allow us in spherical symmetry to calculate numerically the entire Schwarzschild-Kruskal spacetime in a smooth way including spacelike, null and timelike infinity and the domain close to the singularity

  15. Calculating radiation exposure and dose

    International Nuclear Information System (INIS)

    Hondros, J.

    1987-01-01

    This paper discusses the methods and procedures used to calculate the radiation exposures and radiation doses to designated employees of the Olympic Dam Project. Each of the three major exposure pathways is examined. These are: gamma irradiation, radon daughter inhalation and radioactive dust inhalation. A further section presents the ICRP methodology for combining individual pathway exposures to give a total dose figure. The computer programs used for calculations and data storage are also presented briefly

  16. Accident consequence calculations for project W-058 safety analysis

    International Nuclear Information System (INIS)

    Van Keuren, J.C.

    1997-01-01

    This document describes the calculations performed to determine the accident consequences for the W-058 safety analysis. Project W-058 is the replacement cross-site transfer system (RCSTS), which is designed to transport liquid waste between the 200 W and 200 E areas. Calculations for the RCSTS safety analyses used the same methods as the calculations for the Tank Waste Remediation System (TWRS) Basis for Interim Operation (BIO) and its supporting calculation notes. Revised analyses were performed for the spray and pool leak accidents, since the RCSTS flows and pressures differ from those assumed in the TWRS BIO. Revision 1 of the document incorporates review comments

  17. Rain Scattering and Co-ordinate Distance Calculation

    Directory of Open Access Journals (Sweden)

    M. Hajny

    1998-12-01

    Calculations of the field scattered by rain objects are based on the Multiple MultiPole (MMP) numerical method. Both the bi-static scattering function and the bi-static scattering cross section are calculated in the plane parallel to the Earth's surface. The coordination area was determined using a simple model of the scattering volume [1]. Calculations for a frequency of 9.595 GHz and an antenna elevation of 25° were performed. The obtained results are compared with calculations made in accordance with the ITU-R recommendation.

  18. Radionuclide inventory calculation in VVER and BWR reactor

    International Nuclear Information System (INIS)

    Bouhaddane, A.; Farkas, F.; Slugen, V.; Ackermann, L.; Schienbein, M.

    2014-01-01

    The paper shows different aspects of radionuclide inventory determination. Precise determination of the neutron flux distribution, presented for a BWR reactor, is vital for the activation calculations. The precision can be improved by utilizing variance reduction methods such as importance treatment, weight windows, etc. Direct calculation of the radionuclide inventory via a Monte Carlo code is presented for a VVER reactor. The burn-up option utilized in this calculation appears to be suitable for reactor internal components. However, it will probably not be effective outside the reactor core. Further calculations in this area are required to support the findings set forth here. (authors)

  19. Calculation of Rydberg interaction potentials

    International Nuclear Information System (INIS)

    Weber, Sebastian; Büchler, Hans Peter; Tresp, Christoph; Urvoy, Alban; Hofferberth, Sebastian; Menke, Henri; Firstenberg, Ofer

    2017-01-01

    The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole–dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source. (tutorial)
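    As a toy illustration of the "build the pair Hamiltonian and diagonalize it" step, the sketch below reduces the problem to a two-channel Förster-type model with an assumed C₃ coefficient and energy defect; the full calculation described in the tutorial couples many pair states and higher multipole orders.

        import numpy as np

        # Toy two-channel Foerster-type model: one pair state coupled to a nearly
        # resonant pair state by a dipole-dipole matrix element C3/R^3. The C3 and
        # defect values are placeholders, not data from the tutorial.

        C3_GHZ_UM3 = 3.0       # dipole-dipole coefficient, GHz * um^3 (assumed)
        DEFECT_GHZ = -0.5      # Foerster (energy) defect of the coupled pair state

        def pair_eigenenergies(r_um):
            v = C3_GHZ_UM3 / r_um**3
            h = np.array([[0.0, v],
                          [v, DEFECT_GHZ]])
            return np.linalg.eigvalsh(h)

        for r in (2.0, 5.0, 10.0):
            lo, hi = pair_eigenenergies(r)
            print(f"R = {r:4.1f} um: eigenenergies {lo:+.4f}, {hi:+.4f} GHz")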

  20. Indexing aortic valve area by body surface area increases the prevalence of severe aortic stenosis

    DEFF Research Database (Denmark)

    Jander, Nikolaus; Gohlke-Bärwolf, Christa; Bahlmann, Edda

    2014-01-01

    To account for differences in body size in patients with aortic stenosis, aortic valve area (AVA) is divided by body surface area (BSA) to calculate indexed AVA (AVAindex). Cut-off values for severe stenosis are …
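    The indexing step itself is a one-line calculation once BSA is known; the sketch below uses the Du Bois BSA formula and quotes the conventional 0.6 cm²/m² indexed cut-off only as context (the study's point is precisely how such cut-offs behave). The patient values are invented.

        # Sketch of the indexing step discussed above: indexed valve area is the
        # measured aortic valve area divided by body surface area. BSA here uses the
        # Du Bois formula; the 0.6 cm^2/m^2 severity cut-off is the conventional
        # guideline value, quoted only as context. Patient values are placeholders.

        def bsa_du_bois(height_cm, weight_kg):
            """Body surface area in m^2 (Du Bois & Du Bois)."""
            return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

        def ava_index(ava_cm2, height_cm, weight_kg):
            return ava_cm2 / bsa_du_bois(height_cm, weight_kg)

        idx = ava_index(ava_cm2=1.1, height_cm=180, weight_kg=95)
        print(f"AVAindex = {idx:.2f} cm^2/m^2, "
              f"{'severe' if idx < 0.6 else 'not severe'} by the indexed criterion")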

  1. Mordred: a molecular descriptor calculator.

    Science.gov (United States)

    Moriwaki, Hirotomo; Tian, Yu-Shi; Kawashita, Norihito; Takagi, Tatsuya

    2018-02-06

    Molecular descriptors are widely employed to present molecular characteristics in cheminformatics. Various molecular-descriptor-calculation software programs have been developed. However, users of those programs must contend with several issues, including software bugs, insufficient update frequencies, and software licensing constraints. To address these issues, we propose Mordred, a newly developed descriptor-calculation software application that can calculate more than 1800 two- and three-dimensional descriptors. It is freely available via GitHub. Mordred can be easily installed and used in the command line interface, as a web application, or as a high-flexibility Python package on all major platforms (Windows, Linux, and macOS). Performance benchmark results show that Mordred is at least twice as fast as the well-known PaDEL-Descriptor, and it can calculate descriptors for large molecules, which cannot be accomplished by other software. Owing to its good performance, convenience, number of descriptors, and lax licensing constraints, Mordred is a promising choice of molecular descriptor calculation software that can be utilized for cheminformatics studies, such as those on quantitative structure-property relationships.
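    A minimal usage sketch, assuming the documented Mordred API together with RDKit for molecule parsing (both installed separately), is given below.

        # Minimal usage sketch, assuming the documented Mordred API and RDKit.
        from rdkit import Chem
        from mordred import Calculator, descriptors

        calc = Calculator(descriptors, ignore_3D=True)   # register all 2-D descriptors
        mol = Chem.MolFromSmiles("c1ccccc1O")            # phenol, as an example input

        result = calc(mol)                               # descriptor values for one molecule
        print(len(calc.descriptors), "descriptors registered")
        print(result[:3])                                # first few descriptor values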

  2. Propagation calculation for reactor cases

    Energy Technology Data Exchange (ETDEWEB)

    Yang Yanhua [School of Power and Energy Engineering, Shanghai Jiao Tong Univ., Shanghai (China); Moriyama, K.; Maruyama, Y.; Nakamura, H.; Hashimoto, K. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-11-01

    The propagation of a steam explosion for real reactor geometry and conditions is investigated using the computer code JASMINE-pro. The ex-vessel steam explosion is considered, which is described as follows: during a reactor core meltdown accident, the molten core melts a hole at the bottom of the reactor vessel, and the high-temperature core melt leaks into the water pool below the reactor vessel. During the melt-water mixing process, the high-temperature melt evaporates the cool water at an extremely high rate and might induce a steam explosion. A steam explosion first goes through a premixing phase and then a propagation (explosion) phase. For a propagation calculation, information is needed about the initial fragmentation time, the total melt mass, the premixing region size, the initial void fraction, the distribution of the melt volume fraction, and so on. All the initial conditions used in this calculation are based on analyses from some simple assumptions and on observations from the experiments. The results show that the most important parameters for the initial condition of this phase are the total mass and its initial distribution. This sets the requirement for a premixing calculation. On the other hand, for the higher melt volume fraction case, the fragmentation is so strong that the local pressure can exceed the maximum pressure of the equation of state (EOS) used in the code, which leads to incorrect results or divergence of the calculation. (Suetake, M.)

  3. The application of the finite element method for the low-cycle fatigue calculation of the elements of the pipelines' fixed support construction for the areas of above-ground routing of the oil pipeline «Zapolyarye — NPS „Pur-Pe“»

    Directory of Open Access Journals (Sweden)

    Surikov Vitaliy Ivanovich

    2014-02-01

    The present article describes the procedure for performing the low-cycle fatigue strength calculation of the elements of the full-scale specimen of the fixed support DN 1000 of the above-ground oil pipeline "Zapolyarye — Purpe" during rig testing. The calculation is performed with the aim of optimizing the number of tests and, accordingly, cutting the cost of expensive experiments. The procedure consists of two stages. At the first stage, the stressed-deformed state of the full-scale specimen is calculated by the finite element method in the ANSYS calculation complex. The article describes the main stages of creating the finite element model of the full-scale specimen in ANSYS. The calculation model is developed in accordance with a three-dimensional model of the full-scale specimen adapted for rig testing under cyclic loads. The article also describes the construction of the full-scale support specimen and the loading modes used in rig testing. The design loads are the cyclic loads acting on the support over the 50 years of oil pipeline operation; they simulate the combined effect of the loads associated with changes in pumping pressure and the operational bending moment, and they also simulate preloading in the case of sagging of a neighbouring free support. To determine the effect of defects undetectable by diagnostic devices on the reliability of the fixed support and of the welded joints between the fixed support and the oil pipeline, artificial defects were embedded in the calculation model by analogy with the full-scale specimen. The defects were made in the form of cuts of a definite shape located in a specific way in the spool and the welded joints. At the second stage of the low-cycle fatigue strength calculation, the evaluation of the cyclic strength of the elements of the full-scale specimen construction …

  4. Wide area Hyperspectral Motion Imaging

    Science.gov (United States)

    2017-02-03

    … detection at a manageable false alarm rate using the adaptive coherence estimator algorithm. A radiance spectrum was calculated with MODTRAN at 5 km … 1 mHz. In order to meet the SNR and update rate requirements, the area coverage is reduced to less than the size of a football stadium. An interferometer has …

  5. Calculation of magnetic hyperfine constants

    International Nuclear Information System (INIS)

    Bufaical, R.F.; Maffeo, B.; Brandi, H.S.

    1975-01-01

    The magnetic hyperfine constants of the V_K center in CaF₂, SrF₂ and BaF₂ have been calculated assuming a phenomenological model, based on the F₂⁻ 'central molecule', to describe the wavefunction of the defect. The calculations have shown that the introduction of a small degree of covalence between this central molecule and the neighboring ions is necessary to improve the description of the electronic structure of the defect. It was also shown that the results for the hyperfine constants are strongly dependent on the relaxations of the ions neighboring the central molecule; these relaxations have been determined by fitting the experimental data. The present results are compared with other previous calculations where similar and different theoretical methods have been used

  6. Parameters calculation of shielding experiment

    International Nuclear Information System (INIS)

    Gavazza, S.

    1986-02-01

    The radiation transport methodology is evaluated by comparing calculated reaction rates and dose rates for neutrons and gamma-rays with experimental measurements obtained on an iron shield irradiated in the YAYOI reactor. The ENDF/B-IV and VITAMIN-C libraries and the AMPX-II modular system were used for the generation of cross sections, which were collapsed by the ANISN code. The transport calculations were made using the DOT 3.5 code, adjusting the source spectrum at the iron shield boundary to the reaction rates and dose rates measured at the front of the shield. The neutron and gamma-ray distributions calculated in the iron shield showed reasonable agreement with the experimental measurements. An experimental arrangement using the IEA-R1 reactor to establish a shielding benchmark is proposed. (Author)

  7. Insertion device calculations with mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)

    1995-02-01

    The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.

  8. Automatic calculations of electroweak processes

    International Nuclear Information System (INIS)

    Ishikawa, T.; Kawabata, S.; Kurihara, Y.; Shimizu, Y.; Kaneko, T.; Kato, K.; Tanaka, H.

    1996-01-01

    The GRACE system is an excellent tool for calculating cross sections and for generating events of elementary processes automatically. However, it is not always easy for beginners to use. An interactive version of GRACE is being developed so as to be a user-friendly system. Since it works in exactly the same environment as PAW, all functions of PAW are available for handling any histogram information produced by GRACE. As an application, the cross sections of all elementary processes with up to 5-body final states induced by e⁺e⁻ interactions are going to be calculated and summarized as a catalogue. (author)

  9. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, ⁴He, deuterons, tritons and ³He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it can give correct results for optical model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and has given reasonable results.

  10. Friction and wear calculation methods

    CERN Document Server

    Kragelsky, I V; Kombalov, V S

    1981-01-01

    Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a

  11. Selfconsistent calculations at finite temperatures

    International Nuclear Information System (INIS)

    Brack, M.; Quentin, P.

    1975-01-01

    Calculations have been done for the spherical nuclei ⁴⁰Ca, ²⁰⁸Pb and the hypothetical superheavy nucleus with Z=114, A=298, as well as for the deformed nucleus ¹⁶⁸Yb. The temperature T was varied from zero up to 5 MeV. For T>3 MeV, some numerical problems arise in connection with the optimization of the basis when calculating deformed nuclei. However, at these high temperatures the occupation numbers in the continuum are sufficiently large that the nucleus starts evaporating particles and no equilibrium state can be described. Results are obtained for excitation energies and entropies. (Auth.)

  12. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes

  13. Molecular calculations with B functions

    International Nuclear Information System (INIS)

    Steinborn, E.O.; Homeier, H.H.H.; Ema, I.; Lopez, R.; Ramirez, G.

    2000-01-01

    A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules

  14. Lattice calculations in gauge theory

    International Nuclear Information System (INIS)

    Rebbi, C.

    1985-01-01

    The lattice formulation of quantum gauge theories is discussed as a viable technique for quantitative studies of nonperturbative effects in QCD. Evidence is presented to ascertain that whole classes of lattice actions produce a universal continuum limit. Discrepancies between numerical results from Monte Carlo simulations for the pure gauge system and for the system with gauge and quark fields are discussed. Numerical calculations for QCD require very substantial computational resources. The use of powerful vector processors or special-purpose machines to extend the scope and magnitude of the calculations is considered, and one may reasonably expect that in the near future good quantitative predictions will be obtained for QCD

  15. Statistical analysis of uncertainties of gamma-peak identification and area calculation in particulate air-filter environment radionuclide measurements using the results of a Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) organized intercomparison, Part I: Assessment of reliability and uncertainties of isotope detection and energy precision using artificial spiked test spectra, Part II: Assessment of the true type I error rate and the quality of peak area estimators in relation to type II errors using large numbers of natural spectra

    International Nuclear Information System (INIS)

    Zhang, W.; Zaehringer, M.; Ungar, K.; Hoffman, I.

    2008-01-01

    In this paper, the uncertainties of gamma-ray small peak analysis have been examined. As the intensity of a gamma-ray peak approaches its detection decision limit, derived parameters such as centroid channel energy, peak area, peak area uncertainty, baseline determination, and peak significance are statistically sensitive. The intercomparison exercise organized by the CTBTO provided an excellent opportunity for this to be studied. Near background levels, the false-positive and false-negative peak identification frequencies in artificial test spectra have been compared to statistically predictable limiting values. In addition, naturally occurring radon progeny were used to compare observed variance against nominal uncertainties. The results indicate that the applied fit algorithms do not always represent the best estimator. Understanding the statistically predicted peak-finding limit is important for data evaluation and analysis assessment. Furthermore, these results are useful for optimizing analytical procedures to achieve the best results

  16. On calculating intensity from XPS spectra

    International Nuclear Information System (INIS)

    Vegh, Janos

    2006-01-01

    The intensity calculation is the basis for all quantitative applications of electron spectroscopy. Unfortunately, some misinterpreted terms are used and correctly interpreted terms are misused in the overwhelming majority of publications in XPS, including most textbooks as well as accepted and proposed standards. Due to this mistake the number of the detected electrons is given as having dimension of energy (?) and also the formulas for calculating the peak area and its standard deviation are wrong. Since in all other spectroscopic fields the number of the detected particles is dimensionless, continuing this practice leads to isolating XPS from both other measurement sciences and theory, because the measured total intensity in XPS is simply not comparable to the ones derived with other spectroscopic methods or theoretically. Therefore, the basic measuring processes and terms are critically reviewed and their physically correct interpretation is given. This interpretation reveals that the error is hidden in the incorrect interpretation of both the measurement process and the measured quantity. It is shown that through using the correct interpretation both the dimensions of the intensity calculated from electron spectroscopic measurements as well as the formulas related to the intensity and its standard deviation will agree with all other spectroscopic fields

  17. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
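    As an illustration of the underlying idea (the plain Widom estimator, not the cavity-biased improvement developed in the thesis), the excess chemical potential can be sketched in Python for a toy Lennard-Jones liquid as follows; all parameter values are placeholders:

        import numpy as np

        def widom_mu_excess(positions, box, n_insert=2000, kT=1.0,
                            eps=1.0, sigma=1.0, seed=0):
            """Plain Widom particle insertion: mu_ex = -kT * ln< exp(-dU/kT) >,
            where dU is the energy of inserting a test Lennard-Jones particle
            at a random point. `positions` is an (N, 3) array in a cubic box."""
            rng = np.random.default_rng(seed)
            boltzmann_factors = []
            for _ in range(n_insert):
                trial = rng.random(3) * box
                d = positions - trial
                d -= box * np.round(d / box)                 # minimum-image convention
                r2 = np.sum(d * d, axis=1)
                inv6 = (sigma ** 2 / r2) ** 3
                du = np.sum(4.0 * eps * (inv6 ** 2 - inv6))  # test-particle energy
                boltzmann_factors.append(np.exp(-du / kT))
            return -kT * np.log(np.mean(boltzmann_factors))

    In practice the efficiency problem addressed in the thesis is that, at liquid densities, almost all random insertions overlap with existing particles, so the exponential average is dominated by rare cavity insertions; hence the motivation for biased cavity sampling.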

  18. Development of My Footprint Calculator

    Science.gov (United States)

    Mummidisetti, Karthik

    The Environmental footprint is a very powerful tool that helps an individual to understand how their everyday activities are impacting their environmental surroundings. Data show that global climate change, which is a growing concern for nations all over the world, is already affecting humankind, plants and animals through rising ocean levels, droughts & desertification and changing weather patterns. In addition to a wide range of policy measures implemented by national and state governments, it is necessary for individuals to understand the impact that their lifestyle may have on their personal environmental footprint, and thus on global climate change. "My Footprint Calculator" (myfootprintcalculator.com) has been designed to be one of the simplest, yet most comprehensive, web tools to help individuals calculate and understand their personal environmental impact. "My Footprint Calculator" is a website that queries users about their everyday habits and activities and calculates their personal impact on the environment. This website was re-designed to help users determine their environmental impact in various aspects of their lives ranging from transportation and recycling habits to water and energy usage, with the addition of new features that allow users to share their experiences and best practices with other users interested in reducing their personal environmental footprint. The collected data are stored in a database, and a future goal of this work is to analyze the collected data from all users (anonymously) to develop relevant trends and statistics.

  19. Calculations of nucleon structure functions

    International Nuclear Information System (INIS)

    Signal, A.I.

    1990-01-01

    We present a method of calculating deep inelastic nucleon structure functions using bag model wavefunctions. Our method uses the Peierls-Yoccoz projection to form translation invariant bag states. We obtain the correct support for the structure functions and satisfy the positivity requirements for quark and anti-quark distribution functions. (orig.)

  20. Data Acquisition and Flux Calculations

    DEFF Research Database (Denmark)

    Rebmann, C.; Kolle, O; Heinesch, B

    2012-01-01

    In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....
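    The core of the flux calculation described here is a covariance of turbulent fluctuations. A minimal sketch (Reynolds decomposition over one averaging block, omitting coordinate rotation, detrending, spectral and WPL corrections) might look as follows; variable names are illustrative only:

        import numpy as np

        def eddy_flux(w, c, rho_air=1.2):
            """Eddy-covariance flux of a scalar c (e.g. a gas mixing ratio) from
            high-frequency vertical wind speed w (m/s) over one averaging block:
            rho_air * mean(w' * c')."""
            w_prime = w - np.mean(w)      # Reynolds decomposition: fluctuations
            c_prime = c - np.mean(c)
            return rho_air * np.mean(w_prime * c_prime)

        # A 30-minute block sampled at 10 Hz corresponds to arrays of length 18000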

  1. Ab Initio Calculations of Oxosulfatovanadates

    DEFF Research Database (Denmark)

    Frøberg, Torben; Johansen, Helge

    1996-01-01

    Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with secondorder perturbation theory have been used to study the geometry, the electron density, and the electronicspectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stabl...

  2. Coil protection calculator for TFTR

    International Nuclear Information System (INIS)

    Marsala, R.J.; Lawson, J.E.; Persing, R.G.; Senko, T.R.; Woolley, R.D.

    1989-01-01

    A new coil protection system (CPS) is being developed to replace the existing TFTR magnetic coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPS, when installed in October of 1988, will permit operation up to the actual coil stress limits by evaluating the limiting parameters in real-time. The computation will be done in a microprocessor based Coil Protection Calculator (CPC) currently under construction at PPL. The new CPC will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates. The CPC will provide real-time estimates of critical coil and bus temperatures and stresses based on real-time redundant measurements of coil currents, coil cooling water inlet temperature, and plasma current. The critical parameter calculations are compared to prespecified limits. If these limits are reached or exceeded, protective action will be initiated through a hard wired control system (HCS), which will shut down the power supplies. The CPC consists of a redundant VME based microprocessor system which will sample all input data and compute all stress quantities every ten milliseconds. Thermal calculations will be approximated every 10 ms with an exact solution occurring every second. The CPC features continuous cross-checking of redundant input signals, automatic detection of internal failure modes, monitoring and recording of calculated results, and a quick, functional verification of performance via an internal test system. (author)

  3. Ab-initio ZORA calculations

    NARCIS (Netherlands)

    Faas, S.; Snijders, Jaap; van Lenthe, J.H.; HernandezLaguna, A; Maruani, J; McWeeny, R; Wilson, S

    2000-01-01

    In this paper we present the first application of the ZORA (Zeroth Order Regular Approximation of the Dirac Fock equation) formalism in Ab Initio electronic structure calculations. The ZORA method, which has been tested previously in the context of Density Functional Theory, has been implemented in

  4. Introduction to calculations of recuperators

    International Nuclear Information System (INIS)

    Dollar, M.

    1977-01-01

    Physical principles of heat transfer between fluid under turbulent flow conditions and a wall of a duct are described. The methods of calculations of heat transfer coefficient and the theory of recuperative heat exchangers are presented. Numerical examples are given to illustrate the theory. (author)
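    As a numerical illustration of the kind of heat transfer coefficient calculation referred to (not necessarily the correlation used in these lectures), the widely used Dittus-Boelter correlation for turbulent duct flow can be coded as follows:

        def h_dittus_boelter(re, pr, k_fluid, d_hydraulic, heating=True):
            """Heat transfer coefficient from Nu = 0.023 Re^0.8 Pr^n
            (n = 0.4 for heating, 0.3 for cooling), valid roughly for
            Re > 1e4, 0.6 < Pr < 160 and L/D > 10."""
            n = 0.4 if heating else 0.3
            nu = 0.023 * re ** 0.8 * pr ** n       # Nusselt number
            return nu * k_fluid / d_hydraulic      # h in W/(m^2 K)

        # Water at Re = 5e4, Pr = 4, k = 0.6 W/(m K) in a 20 mm tube -> about 6.9 kW/(m^2 K)
        print(h_dittus_boelter(5e4, 4.0, 0.6, 0.02))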

  5. Photoproduction data for heating calculations

    International Nuclear Information System (INIS)

    Van der Marck, Steven C.; Koning, Arjan J.; Rochman, Dimitri

    2008-01-01

    For irradiations in a materials test reactor, the prediction of the amount of gamma heating in the reactor is important. Only a good predictive calculation will lead to an irradiation in which the specified temperatures are reached. The photons produced by fission product decay are often missing in spectrum calculations for a reactor, but the contribution of these photons can be computed effectively using engineering correlations for the amount of fission product decay and the ensuing photon spectrum. The prompt photons are usually calculated by a spectrum code based on the underlying nuclear data libraries. For most of the important nuclides, the nuclear data libraries contain data for the photon production rates. However, there are still many nuclides for which the photon production data are missing, and some of these nuclides contribute to gamma heating. In this paper it is estimated what the contributions to heating are from photon production on nuclides such as 236 U, 238 Pu, 135 I, 135 Xe, 147 Pm, 148 Pm, 148m Pm, and 149 Sm. Also, simple arguments are given to judge the effect of photon production on all other (lumped) fission products, and of 28 Al decay. For all these calculations the High Flux Reactor is used as an example. (authors)

  6. Methods for magnetostatic field calculation

    International Nuclear Information System (INIS)

    Vorozhtsov, S.B.

    1984-01-01

    Two methods for magnetostatic field calculation, differential and integral, are considered. Both approaches are shown to have certain merits and drawbacks, and the choice of method depends on the type of problem to be solved. The possibility of combining these two methods in one algorithm (hybrid method) is considered.

  7. Prenatal radiation exposure. Dose calculation

    International Nuclear Information System (INIS)

    Scharwaechter, C.; Schwartz, C.A.; Haage, P.; Roeser, A.

    2015-01-01

    The unborn child requires special protection. In this context, the indication for an X-ray examination is to be checked critically. If radiation of the lower abdomen including the uterus cannot be avoided, the examination should be postponed until the end of pregnancy or alternative examination techniques should be considered. Under certain circumstances, either accidental or in unavoidable cases after a thorough risk assessment, radiation exposure of the unborn may take place. In some of these cases an expert radiation hygiene consultation may be required. This consultation should cover the expected risks for the unborn while not unduly worrying the mother or the involved medical staff. For the risk assessment in case of an in-utero X-ray exposure, deterministic damages with a defined threshold dose are distinguished from stochastic damages without a definable threshold dose. The occurrence of deterministic damages depends on the dose and the developmental stage of the unborn at the time of radiation. To calculate the risks of an in-utero radiation exposure a three-stage concept is commonly applied. Depending on the amount of radiation, the radiation dose is either estimated, roughly calculated using standard tables or, in critical cases, accurately calculated based on the individual event. The complexity of the calculation thereby increases from stage to stage. An estimation based on stage one is easily feasible, whereas calculations based on stages two and especially three are more complex and often necessitate execution by specialists. This article demonstrates in detail the risks for the unborn child pertaining to its developmental phase and explains the three-stage concept as an evaluation scheme. It should be noted that all risk estimations are subject to considerable uncertainties.

  8. Calculating zeros: Non-equilibrium free energy calculations

    International Nuclear Information System (INIS)

    Oostenbrink, Chris; Gunsteren, Wilfred F. van

    2006-01-01

    Free energy calculations on three model processes with theoretically known free energy changes have been performed using short simulation times. A comparison between equilibrium (thermodynamic integration) and non-equilibrium (fast growth) methods has been made in order to assess the accuracy and precision of these methods. The three processes have been chosen to represent processes often observed in biomolecular free energy calculations. They involve a redistribution of charges, the creation and annihilation of neutral particles and conformational changes. At very short overall simulation times, the thermodynamic integration approach using discrete steps is most accurate. More importantly, reasonable accuracy can be obtained using this method which seems independent of the overall simulation time. In cases where slow conformational changes play a role, fast growth simulations might have an advantage over discrete thermodynamic integration where sufficient sampling needs to be obtained at every λ-point, but only if the initial conformations do properly represent an equilibrium ensemble. From these three test cases practical lessons can be learned that will be applicable to biomolecular free energy calculations
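    The two estimators compared in the paper can be summarized compactly. The sketch below shows a generic discrete thermodynamic integration quadrature and the Jarzynski (fast growth) exponential work average applied to synthetic data; it illustrates the estimators, not the model processes studied:

        import numpy as np

        kT = 2.494  # kJ/mol at about 300 K (assumed value for the example)

        def ti_estimate(lambdas, mean_dudl):
            """Discrete thermodynamic integration: trapezoidal quadrature of
            <dU/dlambda> sampled at a set of lambda points."""
            lambdas = np.asarray(lambdas)
            mean_dudl = np.asarray(mean_dudl)
            return float(np.sum(0.5 * (mean_dudl[1:] + mean_dudl[:-1]) * np.diff(lambdas)))

        def fast_growth_estimate(work_values):
            """Jarzynski (fast growth) estimator: dF = -kT * ln< exp(-W/kT) >
            over many non-equilibrium work values W (same units as kT)."""
            w = np.asarray(work_values)
            return -kT * np.log(np.mean(np.exp(-w / kT)))

    The fast-growth average is dominated by rare low-work trajectories, which is one reason why a properly equilibrated ensemble of starting conformations matters, as the abstract notes.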

  9. Validation of calculational methods for nuclear criticality safety - approved 1975

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, N16.1-1975, states in 4.2.5: In the absence of directly applicable experimental measurements, the limits may be derived from calculations made by a method shown to be valid by comparison with experimental data, provided sufficient allowances are made for uncertainties in the data and in the calculations. There are many methods of calculation which vary widely in basis and form. Each has its place in the broad spectrum of problems encountered in the nuclear criticality safety field; however, the general procedure to be followed in establishing validity is common to all. The standard states the requirements for establishing the validity and area(s) of applicability of any calculational method used in assessing nuclear criticality safety

  10. Blow.MOD2: a program for blowdown transient calculations

    International Nuclear Information System (INIS)

    Doval, A.

    1990-01-01

    The BLOW.MOD2 program has been developed to calculate the blowdown phase in a pressurized vessel after a break or valve is opened. It is a one-volume model in which the break height and flow area are specified. The Moody critical flow model was adopted under saturation conditions for the flow calculation through the break. Heat transfer from structures and internals has been taken into account. Long-term depressurization results are compared satisfactorily with those of a more complex model. (Author)

  11. Runoff Calculation by Neural Networks Using Radar Rainfall Data

    OpenAIRE

    岡田, 晋作; 四俵, 正俊

    1997-01-01

    Neural networks are used to calculate runoff from weather radar data and ground rain gauge data. Compared to usual runoff models, it is easier to use radar data in neural network runoff calculation. Basically you can use the radar data directly, or without transforming them into rainfall, as the input of the neural network. A situation with the difficulty of ground measurement is supposed. To cover the area lacking ground rain gauges, radar data are used. In case that the distribution of grou...

  12. Mechanical calculation of heat exchangers

    International Nuclear Information System (INIS)

    Osweiller, Francis.

    1977-01-01

    Many heat exchangers are still being dimensioned at the present time by means of the American TEMA code (Tubular Exchanger Manufacturers Association). The basic formula of this code often gives rise to significant tubular plate thicknesses which, apart from the cost of materials, involve significant machining. Some constructors have brought into use calculation methods that are more analytic so as to take into better consideration the mechanical phenomena which come into play in a heat exchanger. After a brief analysis of these methods it is shown, how the original TEMA formulations have changed to reach the present version and how this code has incorporated Gardner's results for treating exchangers with two fixed heads. A formal and numerical comparison is then made of the analytical and TEMA methods by attempting to highlight a code based on these methods or a computer calculation programme in relation to the TEMA code [fr

  13. CONTRIBUTION FOR MINING ATMOSPHERE CALCULATION

    Directory of Open Access Journals (Sweden)

    Franica Trojanović

    1989-12-01

    Humid air is an unavoidable feature of the mining atmosphere, and it plays a significant role in defining the climate conditions as well as the permitted circumstances for normal mining work. Saturated humid air prevents heat loss from the human body by evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity by either direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures including tables, diagrams or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure must also be taken from the steam table. A new method without the use of diagrams or tables, based on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
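    A minimal sketch of the classical route mentioned (Sprung's psychrometric formula), here combined with a Magnus-type approximation of the saturation pressure in place of steam tables; the constant A = 0.000662 K^-1 is the usual Assmann psychrometer value and is an assumption, not a value taken from the paper:

        import math

        def saturation_pressure_hpa(t_celsius):
            """Magnus approximation of the saturation water vapour pressure (hPa)."""
            return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

        def relative_humidity_sprung(t_dry, t_wet, pressure_hpa=1013.25, a=0.000662):
            """Relative humidity (%) from dry- and wet-bulb temperatures using
            Sprung's formula e = e_s(t_wet) - a * p * (t_dry - t_wet)."""
            e = saturation_pressure_hpa(t_wet) - a * pressure_hpa * (t_dry - t_wet)
            return 100.0 * e / saturation_pressure_hpa(t_dry)

        print(relative_humidity_sprung(25.0, 18.0))   # roughly 50 % relative humidity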

  14. The Collective Practice of Calculation

    DEFF Research Database (Denmark)

    Schrøder, Ida

    The calculation of costs plays an increasingly large role in the decision-making processes of public sector human service organizations. This has brought scholars of management accounting to investigate the relationship between caring professions and demands to make economic entities of the service...... productions as either a process of hybridization or conflict. With these approaches, though, they fail to interrogate the possibility that professional action might not be either the one or the other, but entail a broad variety of relationships between calculations and judgements. This paper elaborates...... and judgement to reach decisions to invest in social services. The line is not drawn between the two, but between the material arrangements that make decisions possible. This implies that the insisting on qualitatively based decisions gives the professionals agency to collectively engage in practical...

  15. Computer programs for lattice calculations

    International Nuclear Information System (INIS)

    Keil, E.; Reich, K.H.

    1984-01-01

    The aim of the workshop was to find out whether some standardisation could be achieved for future work in this field. A certain amount of useful information was unearthed, and desirable features of a ''standard'' program emerged. Progress is not expected to be breathtaking, although participants (practically from all interested US, Canadian and European accelerator laboratories) agreed that the mathematics of the existing programs is more or less the same. Apart from the NIH (not invented here) effect, there is a - to quite some extent understandable - tendency to stay with a program one knows and to add to it if unavoidable rather than to start using a new one. Users of the well supported program TRANSPORT (designed for beam line calculations) would prefer to have it fully extended for lattice calculations (to some extent already possible now), while SYNCH users wish to see that program provided with a user-friendly input, rather than spending time and effort for mastering a new program

  16. Coil protection calculator for TFTR

    International Nuclear Information System (INIS)

    Marsala, R.J.; Woolley, R.D.

    1987-01-01

    A new coil protection calculator (CPC) is presented in this paper. It is now being developed for TFTR's magnetic field coils and will replace the existing coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPC will permit operation up to the actual coil limits by accurately and continuously computing coil parameters in real-time. The improvement will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates.

  17. Rotor calculations for neutron spectroscopy

    International Nuclear Information System (INIS)

    Gobert, G.

    1968-01-01

    The determination of stress in a rotating disk with a plane of symmetry normal to the axis of rotation has been studied by a number of investigators. In a recent paper Reich gives a procedure for an analytical solution for an asymmetric rotating disk. In this report we give finite-difference stress calculations applicable to the two rotating disks. The equations were then programmed in Fortran for the 360.75 computer for application to chopper rotors. (author) [fr

  18. Neutron area monitor with TLD pairs

    International Nuclear Information System (INIS)

    Guzman G, K. A.; Borja H, C. G.; Valero L, C.; Hernandez D, V. M.; Vega C, H. R.

    2011-11-01

    The response of a passive neutron area monitor with pairs of thermoluminescent dosimeters has been calculated using the Monte Carlo code MCNP5. The response was calculated for one TLD 600 located at the center of a polyethylene cylinder acting as moderator. Neutrons colliding with the moderator lose their energy and reach the TLD at thermal energies, where the ambient dose equivalent is calculated. The response was calculated for 47 monoenergetic neutron sources ranging from 1E-9 to 20 MeV. The response was calculated using two irradiation geometries, one with an upper source and another with a lateral source. For both irradiation schemes the response was calculated with the TLDs in two positions, one parallel to the source and another perpendicular to the source. The advantage of this passive neutron area monitor is that it can be used in locations with intense, pulsed and mixed radiation fields. (Author)

  19. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  20. Microcomputer generated pipe support calculations

    International Nuclear Information System (INIS)

    Hankinson, R.F.; Czarnowski, P.; Roemer, R.E.

    1991-01-01

    The cost and complexity of pipe support design has been a continuing challenge to the construction and modification of commercial nuclear facilities. Typically, pipe support design or qualification projects have required large numbers of engineers centrally located with access to mainframe computer facilities. Much engineering time has been spent repetitively performing a sequence of tasks to address complex design criteria and consolidating the results of calculations into documentation packages in accordance with strict quality requirements. The continuing challenges of cost and quality, the need for support engineering services at operating plant sites, and the substantial recent advances in microcomputer systems suggested that a stand-alone microcomputer pipe support calculation generator was feasible and had become a necessity for providing cost-effective and high quality pipe support engineering services to the industry. This paper outlines the preparation for, and the development of, an integrated pipe support design/evaluation software system which maintains all computer programs in the same environment, minimizes manual performance of standard or repetitive tasks, and generates a high quality calculation which is consistent and easily followed

  1. Calculational methods for lattice cells

    International Nuclear Information System (INIS)

    Askew, J.R.

    1980-01-01

    At the current stage of development, direct simulation of all the processes involved in the reactor to the degree of accuracy required is not an economic proposition, and this is achieved by progressive synthesis of models for parts of the full space/angle/energy neutron behaviour. The split between reactor and lattice calculations is one such simplification. Most reactors are constructed of repetitions of similar geometric units, the fuel elements, having broadly similar properties. Thus the provision of detailed predictions of their behaviour is an important step towards overall modelling. We shall be dealing with these lattice methods in this series of lectures, but will refer back from time to time to their relationship with overall reactor calculations. The lattice cell is itself composed of somewhat similar sub-units, the fuel pins, and will itself often rely upon a further break down of modelling. Construction of a good model depends upon the identification, on physical and mathematical grounds, of the most helpful division of the calculation at this level.

  2. Calculation of groundwater travel time

    International Nuclear Information System (INIS)

    Arnett, R.C.; Sagar, B.; Baca, R.G.

    1984-12-01

    Pre-waste-emplacement groundwater travel time is one indicator of the isolation capability of the geologic system surrounding a repository. Two distinct modeling approaches exist for prediction of groundwater flow paths and travel times from the repository location to the designated accessible environment boundary. These two approaches are: (1) the deterministic approach which calculates a single value prediction of groundwater travel time based on average values for input parameters and (2) the stochastic approach which yields a distribution of possible groundwater travel times as a function of the nature and magnitude of uncertainties in the model inputs. The purposes of this report are to (1) document the theoretical (i.e., mathematical) basis used to calculate groundwater pathlines and travel times in a basalt system, (2) outline limitations and ranges of applicability of the deterministic modeling approach, and (3) explain the motivation for the use of the stochastic modeling approach currently being used to predict groundwater pathlines and travel times for the Hanford Site. Example calculations of groundwater travel times are presented to highlight and compare the differences between the deterministic and stochastic modeling approaches. 28 refs
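    The difference between the two approaches can be illustrated with a deliberately simplified one-dimensional advective travel time t = L * n_e / (K * i); all parameter values below are hypothetical and are not taken from the Hanford analyses:

        import numpy as np

        def travel_time_years(path_length_m, k_m_per_yr, gradient, porosity):
            """Advective travel time t = L * n_e / (K * i) along a single path,
            with hydraulic conductivity K in m/yr, hydraulic gradient i and
            effective porosity n_e dimensionless."""
            seepage_velocity = k_m_per_yr * gradient / porosity
            return path_length_m / seepage_velocity

        # Deterministic estimate from single 'average' parameter values
        print(travel_time_years(5000.0, 30.0, 1e-3, 0.10))

        # Stochastic estimate: sample uncertain K and n_e, obtain a distribution of times
        rng = np.random.default_rng(1)
        k = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=10_000)
        n = rng.uniform(0.05, 0.15, size=10_000)
        print(np.percentile(travel_time_years(5000.0, k, 1e-3, n), [5, 50, 95]))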

  3. Empirical Formulae for The Calculation of Austenite Supercooled Transformation Temperatures

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2015-04-01

    The paper presents empirical formulae for the calculation of austenite supercooled transformation temperatures, based on the chemical composition, austenitising temperature and cooling rate. The multiple regression method was used. Four equations were established, allowing calculation of the start temperatures of the ferrite, pearlite, bainite and martensite regions at a given cooling rate. The calculation results obtained do not allow determination of the cooling rate ranges of the ferritic, pearlitic, bainitic and martensitic transformations. Classifiers based on logistic regression or a neural network were established to solve this problem.
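    The paper's fitted coefficients are not reproduced here; the sketch below only shows the generic multiple-regression form of such a transformation start temperature formula, with placeholder coefficients that are purely illustrative:

        import numpy as np

        def transformation_start_temperature(composition, cooling_rate, coeffs):
            """Generic regression form T = b0 + sum(b_i * x_i) + b_v * log10(v),
            with element contents x_i in wt.% and cooling rate v in K/s.
            Coefficients must be fitted to experimental CCT data; the values
            used below are placeholders, not those of the paper."""
            x = np.array([composition[el] for el in ("C", "Mn", "Si", "Cr", "Ni", "Mo")])
            b0, b, b_v = coeffs
            return b0 + float(np.dot(b, x)) + b_v * np.log10(cooling_rate)

        # Hypothetical coefficients and steel composition, for illustration only
        coeffs = (550.0, np.array([-350.0, -40.0, -5.0, -20.0, -17.0, -10.0]), -15.0)
        steel = {"C": 0.4, "Mn": 0.7, "Si": 0.3, "Cr": 1.0, "Ni": 0.2, "Mo": 0.1}
        print(transformation_start_temperature(steel, 10.0, coeffs))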

  4. Development of an EDV-supported decision instrument for site pre-selection of nuclear power plants. EDV-supported instrument for calculation of the space distribution of the collective dose rate and area contamination. Vol. 1. Radiation exposure through air- and water paths under authorized operating conditions and during incidents

    Energy Technology Data Exchange (ETDEWEB)

    Bruessermann, K; Eschhaus, M; Kreymborg, A; Muenster, M; Schommer, N

    1980-01-01

    The collective dose rate and the area contamination form a basis for site criteria going beyond the individual considerations of population distribution, hydrology, meteorology etc. The capabilities of radio-ecological models of the radiation exposure through air and water paths during operation and incidents are described using the examples of Biblis, Muelheim-Kaerlich and Esensham. Comparative evaluations were carried out for Fessenheim.

  5. Comparison of molecular mechanics-Poisson-Boltzmann surface area (MM-PBSA) and molecular mechanics-three-dimensional reference interaction site model (MM-3D-RISM) method to calculate the binding free energy of protein-ligand complexes: Effect of metal ion and advance statistical test

    Science.gov (United States)

    Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta

    2018-03-01

    The relative performance of the MM-PBSA and MM-3D-RISM methods to estimate the binding free energy of protein-ligand complexes is investigated by applying them to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. None of the computational methods could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
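    The bookkeeping behind an MM-PBSA estimate is straightforward once per-frame energy components are available from an external MM/PB(SA) engine. The sketch below assumes a single-trajectory protocol and neglects the solute entropy term; the data layout is hypothetical:

        import numpy as np

        def mmpbsa_binding_energy(frames):
            """frames: list of dicts with per-frame energy components (kcal/mol)
            for 'complex', 'receptor' and 'ligand', each holding E_MM, G_polar
            and G_nonpolar. Returns the mean binding free energy estimate
            dG = <G_complex - G_receptor - G_ligand> and its standard error."""
            def g(species, frame):
                e = frame[species]
                return e["E_MM"] + e["G_polar"] + e["G_nonpolar"]
            dg = [g("complex", f) - g("receptor", f) - g("ligand", f) for f in frames]
            return np.mean(dg), np.std(dg) / np.sqrt(len(dg))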

  6. Glass dissolution rate measurement and calculation revisited

    Energy Technology Data Exchange (ETDEWEB)

    Fournier, Maxime, E-mail: maxime.fournier@cea.fr [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France); Ull, Aurélien; Nicoleau, Elodie [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France); Inagaki, Yaohiro [Department of Applied Quantum Physics & Nuclear Engineering, Kyushu University, Fukuoka, 819-0395 (Japan); Odorico, Michaël [ICSM-UMR5257 CEA/CNRS/UM2/ENSCM, Site de Marcoule, BP17171, F-30207, Bagnols sur Cèze (France); Frugier, Pierre; Gin, Stéphane [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France)

    2016-08-01

    Aqueous dissolution rate measurements of nuclear glasses are a key step in the long-term behavior study of such waste forms. These rates are routinely normalized to the glass surface area in contact with solution, and experiments are very often carried out using crushed materials. Various methods have been implemented to determine the surface area of such glass powders, leading to differing values, with the notion of the reactive surface area of crushed glass remaining vague. In this study, around forty initial dissolution rate measurements were conducted following static and flow rate (SPFT, MCFT) measurement protocols at 90 °C, pH 10. The international reference glass (ISG), in the forms of powders with different particle sizes and polished monoliths, and soda-lime glass beads were examined. Although crushed glass grains clearly cannot be assimilated with spheres, it is when using the samples' geometric surface (S_geo) that the rates measured on powders are closest to those found for monoliths. Overestimation of the reactive surface when using the BET model (S_BET) may be due to small physical features at the atomic scale, which contribute to the BET surface area but not to the AFM surface area. Such features are very small compared with the thickness of water ingress in glass (a few hundred nanometers) and should not be considered in rate calculations. With a S_BET/S_geo ratio of 2.5 ± 0.2 for ISG powders, it is shown here that rates measured on powders and normalized to S_geo should be divided by 1.3 and rates normalized to S_BET should be multiplied by 1.9 in order to be compared with rates measured on a monolith. The use of glass beads indicates that the geometric surface gives a good estimation of the glass reactive surface if the sample geometry can be precisely described. Although the data clearly show the repeatability of measurements, results must be given with a high uncertainty of approximately ±25%.
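    The normalization at issue can be written compactly. A minimal sketch of a flow-through (SPFT-type) rate normalized either to the geometric or to the BET surface area is given below; the sample values are hypothetical, and only the S_BET/S_geo ratio of 2.5 is taken from the abstract:

        def dissolution_rate(c_out, flow_rate, mass_fraction, surface_area):
            """Normalized dissolution rate r = C * Q / (x_i * S) in g m^-2 d^-1:
            C is the steady-state concentration of a tracer element (g/m^3),
            Q the flow rate (m^3/d), x_i the element's mass fraction in the
            glass and S the surface area used for normalization (m^2)."""
            return c_out * flow_rate / (mass_fraction * surface_area)

        s_geo = 0.01                  # m^2, hypothetical powder sample
        s_bet = 2.5 * s_geo           # using the S_BET/S_geo ratio quoted in the abstract
        r_geo = dissolution_rate(0.5, 1e-4, 0.25, s_geo)
        r_bet = dissolution_rate(0.5, 1e-4, 0.25, s_bet)
        print(r_geo, r_bet, r_geo / r_bet)   # the two rates differ by that surface-area ratio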

  7. Nuclear data library in design calculation

    International Nuclear Information System (INIS)

    Hirano, Go; Kosaka, Shinya

    2006-01-01

    In core design calculations, nuclear data enters as the multigroup cross section library used in the assembly calculation, which is the first stage of a core design calculation. This report summarizes the multigroup cross section libraries used in assembly calculations and also presents the methods adopted for resonance and assembly calculations. (author)

  8. Calculation of gas turbine characteristic

    Science.gov (United States)

    Mamaev, B. I.; Murashko, V. L.

    2016-04-01

    The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.

  9. Lab Tracker and Copper Calculator

    Science.gov (United States)

    ... have to do with factors of asymmetric neurologic development, such as being right or left-handed. The copper is often seen most prominently in the basal ganglia, the area deep within the brain that coordinates movements. The face of the giant ...

  10. Calculation of Rydberg interaction potentials

    DEFF Research Database (Denmark)

    Weber, Sebastian; Tresp, Christoph; Menke, Henri

    2017-01-01

    The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence...... for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up...

  11. FIPRED Project - Experiments and calculations

    International Nuclear Information System (INIS)

    Ohai, D.; Dumitrescu, I.; Doca, C.; Meleg, T.; Benga, D.

    2009-01-01

    Full text: The FIPRED (Fission Products Release from Debris Bed) Project was developed by INR in the framework of EC FP6 SARNET (2004-2008) and will be continued in EC FP6 SARNET2 (2009-2013). The project objective is the evaluation of fission product release from the debris bed resulting from a severe reactor accident, studied through the self-disintegration of natural sintered UO 2 pellets by oxidation. A large experimental program was performed covering the main parameters influencing the granulometric distribution of the powders (fragments) resulting from the self-disintegration of sintered UO 2 pellets by air oxidation. The paper presents the experimental results obtained and the material equation derived by mathematical calculations. (authors)

  12. The Dental Trauma Internet Calculator

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg

    2012-01-01

    Background/Aim Prediction tools are increasingly used to inform patients about the future dental health outcome. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods The Internet risk calculator at the Dental Trauma Guide...... provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org The database includes 2191 traumatized permanent teeth from 1282 patients that were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark...

  13. Perturbation calculations with Wilson loop

    International Nuclear Information System (INIS)

    Peixoto Junior, L.B.

    1984-01-01

    We present perturbative calculations with the Wilson loop (WL). The dimensional regularization method is used, with special attention to the problem of divergences in the WL expansion in second and fourth orders, in three and four dimensions. We show that the residue at the pole, in 4d, of the sum of the fourth order graph contributions is important for the charge renormalization. We compute up to second order the exact expression of the WL in three-dimensional gauge theories with topological mass, as well as its asymptotic behaviour for small and large distances. the author [pt

  14. Symmetries applied to reactor calculations

    International Nuclear Information System (INIS)

    Makai, M.

    1982-03-01

    Three problems of a reactor-calculational model are discussed with the help of symmetry considerations. 1/ A coarse mesh method applicable to any geometry is derived. It is shown that the coarse mesh solution can be constructed from a few standard boundary value problems. 2/ A second stage homogenization method is given based on the Bloch theorem. This ensures the continuity of the current and the flux at the boundary. 3/ The validity of the micro-macro separation is shown for heterogeneous lattices. A formula for the neutron density is derived for cell homogenization. (author)

  15. Criticality calculations for safety analysis

    International Nuclear Information System (INIS)

    Vellozo, S.O.

    1981-01-01

    Criticality studies of uranium nitrate and plutonium nitrate aqueous solutions were performed. For the uranium compound three basic computer codes were used: GAMTEC-II, DTF-IV and KENO-IV. Water was used as reflector, and the results obtained with the different computer codes were analyzed and compared with the 'Handbuck zur Kriticalitat'. The cross sections and the cylindrical geometry were generated by the GAMTEC-II computer code. For the second compound, the thickness of the container holding plutonium nitrate was studied with rectangular geometry and a concrete reflector. The effective multiplication constant was calculated with the GAMTEC-II and KENO-IV libraries. The results show many differences. (E.G) [pt

  16. Calculable resistors of coaxial design

    International Nuclear Information System (INIS)

    Kucera, J; Vollmer, E; Schurr, J; Bohacek, J

    2009-01-01

    1000 Ω and 1290.64 Ω coaxial resistors with calculable frequency dependence have been realized at PTB to be used in quantum Hall effect-based impedance measurements. In contradistinction to common designs of coaxial resistors, the design described in this paper makes it possible to remove the resistive element from the shield and to handle it without cutting the outer cylindrical shield of the resistor. Emphasis has been given to manufacturing technology and to suppressing unwanted sources of frequency dependence. The adjustment accuracy is better than 10 µΩ/Ω.

  17. Radiation shielding calculation using MCNP

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro

    2001-01-01

    To verify the Monte Carlo code MCNP4A as a tool to generate reference data for shielding designs and safety evaluations, various shielding benchmark experiments were analyzed using this code. These experiments were categorized into three types of shielding subjects: bulk shielding, streaming, and skyshine. For the variance reduction techniques, which are indispensable to obtain meaningful results with Monte Carlo shielding calculations, we mainly used the weight window, energy dependent Russian roulette and splitting. As a whole, our analyses achieved sufficiently small statistical errors and showed good agreement with these experiments. (author)

  18. Techniques of nuclear structure calculations

    International Nuclear Information System (INIS)

    Dyson, R.D.

    1967-04-01

    The quasiparticle method for identical particles interacting through pairing forces has been extended by others for use with systems of neutrons and protons. The method is to project isospin from separately considered neutron and proton quasiparticle wavefunctions. This is discussed in detail, and it seems that the projection may not be important. Therefore unprojected quasiparticle wavefunctions are tried with some success as a basis of states in which to diagonalize a realistic nuclear Hamiltonian. Brief unrelated calculations on nuclei of mass 19 and the SU(3) classification of states in the p-f shell are also presented. (author)

  19. Neutronic calculation of reactor cells

    International Nuclear Information System (INIS)

    Jaliff, J.O.

    1981-01-01

    Multigroup calculations of cylindrical pin cells were programmed, in Fortran IV, upon the basis of collision probabilities in each energy group. A rational approximation to the fuel-to-fuel collision probability in resonance groups was used. Together with the intermediate resonance theory, cross sections corrected for heterogeneity and absorber interactions were found. For the optimization of the program, the cell of a BWR reactor was taken as reference. Data for such a cell and the reactor's operating conditions are presented. PINCEL is a fast and flexible program, with checked results, around a 69-group library. (M.E.L.) [es

  20. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  1. Digital calculations of engine cycles

    CERN Document Server

    Starkman, E S; Taylor, C Fayette

    1964-01-01

    Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var

  2. Methods for calculating nonconcave entropies

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2010-01-01

    Five different methods which can be used to analytically calculate entropies that are nonconcave as functions of the energy in the thermodynamic limit are discussed and compared. The five methods are based on the following ideas and techniques: (i) microcanonical contraction, (ii) metastable branches of the free energy, (iii) generalized canonical ensembles with specific illustrations involving the so-called Gaussian and Betrag ensembles, (iv) the restricted canonical ensemble, and (v) the inverse Laplace transform. A simple long-range spin model having a nonconcave entropy is used to illustrate each method

  3. FRELIB, Failure Reliability Index Calculation

    International Nuclear Information System (INIS)

    Parkinson, D.B.; Oestergaard, C.

    1984-01-01

    1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary for a stationary reliability index (min) and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information included (first-order-second-moment-method). 20 basic variables (could be extended)
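    The iteration sketched in the abstract (a design point search on a failure boundary defined for normally distributed basic variables) corresponds to the classical Hasofer-Lind/Rackwitz-Fiessler scheme. The Python sketch below illustrates that scheme, not the FRELIB code itself; the user-supplied g and gradient play the role of subroutine FSUR:

        import numpy as np

        def hasofer_lind_beta(g, grad_g, n_vars, tol=1e-8, max_iter=100):
            """First-order reliability index: iterate toward the design point u*
            in standard normal space for a failure boundary g(u) = 0 supplied
            together with its gradient. Returns beta = ||u*|| and u*."""
            u = np.zeros(n_vars)
            for _ in range(max_iter):
                grad = grad_g(u)
                u_new = (np.dot(grad, u) - g(u)) * grad / np.dot(grad, grad)  # HL-RF step
                if np.linalg.norm(u_new - u) < tol:
                    u = u_new
                    break
                u = u_new
            return np.linalg.norm(u), u

        # Linear limit state g(u) = 3 - u1 - u2 in standard normal space
        beta, u_star = hasofer_lind_beta(lambda u: 3.0 - u[0] - u[1],
                                         lambda u: np.array([-1.0, -1.0]), 2)
        print(beta)   # 3 / sqrt(2), about 2.12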

  4. NAC-1 cask dose rate calculations for LWR spent fuel

    International Nuclear Information System (INIS)

    CARLSON, A.B.

    1999-01-01

    A Nuclear Assurance Corporation nuclear fuel transport cask, NAC-1, is being considered as a transport and storage option for spent nuclear fuel located in the B-Cell of the 324 Building. The loaded casks will be shipped to the 200 East Area Interim Storage Area for dry interim storage. Several calculations were performed to assess the photon and neutron dose rates. This report describes the analytical methods, models, and results of this investigation

  5. Calculational Tool for Skin Contamination Dose Assessment

    CERN Document Server

    Hill, R L

    2002-01-01

    A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.

  6. Dissecting Reactor Antineutrino Flux Calculations

    Science.gov (United States)

    Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.

    2017-09-01

    Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6 % MeV-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.

  7. NATIONAL STORMWATER CALCULATOR USER'S GUIDE ...

    Science.gov (United States)

    The National Stormwater Calculator is a simple to use tool for computing small site hydrology for any location within the US. It estimates the amount of stormwater runoff generated from a site under different development and control scenarios over a long term period of historical rainfall. The analysis takes into account local soil conditions, slope, land cover and meteorology. Different types of low impact development (LID) practices (also known as green infrastructure) can be employed to help capture and retain rainfall on-site. Future climate change scenarios taken from internationally recognized climate change projections can also be considered. The calculator provides planning level estimates of capital and maintenance costs which will allow planners and managers to evaluate and compare effectiveness and costs of LID controls.The calculator’s primary focus is informing site developers and property owners on how well they can meet a desired stormwater retention target. It can be used to answer such questions as:• What is the largest daily rainfall amount that can be captured by a site in either its pre-development, current, or post-development condition?• To what degree will storms of different magnitudes be captured on site?• What mix of LID controls can be deployed to meet a given stormwater retention target?• How well will LID controls perform under future meteorological projections made by global climate change models?• What are the relativ

  8. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  9. Dispersion relations in loop calculations

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1996-01-01

    These lecture notes give a pedagogical introduction to the use of dispersion relations in loop calculations. We first derive dispersion relations which allow us to recover the real part of a physical amplitude from the knowledge of its absorptive part along the branch cut. In perturbative calculations, the latter may be constructed by means of Cutkosky's rule, which is briefly discussed. For illustration, we apply this procedure at one loop to the photon vacuum-polarization function induced by leptons as well as to the γf anti-f vertex form factor generated by the exchange of a massive vector boson between the two fermion legs. We also show how the hadronic contribution to the photon vacuum polarization may be extracted from the total cross section of hadron production in e+e- annihilation measured as a function of energy. Finally, we outline the application of dispersive techniques at the two-loop level, considering as an example the bosonic decay width of a high-mass Higgs boson. (author)
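    For reference, a subtracted dispersion relation of the kind used in such calculations has the standard form below (notation assumed; s_0 is the subtraction point and s_th the branch-cut threshold):

        \operatorname{Re}\Pi(s) = \operatorname{Re}\Pi(s_0)
          + \frac{s - s_0}{\pi}\,\mathrm{P}\!\int_{s_{\mathrm{th}}}^{\infty}
            \frac{\operatorname{Im}\Pi(s')}{(s' - s_0)(s' - s)}\,\mathrm{d}s'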

  10. Neutronics calculation of RTP core

    Science.gov (United States)

    Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.

    2017-01-01

    Reactor calculation and simulation are significantly important to ensure safety and better utilization of a research reactor. The Malaysian PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes. Since the early 90s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used in RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes current progress and gives an overview of neutronics modelling development in RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up for the latest core configuration. The developed core neutronics model was validated by means of comparison with experimental and measurement data. Along with the RTP core model, the calculation procedure was also developed to establish better prediction capability of RTP's behaviour.

  11. Calculation of sound propagation in fibrous materials

    DEFF Research Database (Denmark)

    Tarnow, Viggo

    1996-01-01

    Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of fibres and the mass density of glass wools as parameters. The calculations are compared with measurements.

  12. FHFA Underserved Areas

    Data.gov (United States)

    Department of Housing and Urban Development — Federal Housing Finance Agency's (FHFA) Underserved Areas establishes underserved area designations for census tracts in Metropolitan Areas (MSAs), nonmetropolitan...

  13. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: - variability: uncertainty due to heterogeneity, - lack of knowledge: uncertainty due to ignorance. It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty by the probability theory and the lack of knowledge uncertainty by the fuzzy theory. He cautioned, however, against the systematic use of probability theory which may lead to unjustifiable and illegitimate precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increased as the problem was getting more and more complex in terms of number of parameters or time steps, and that it was necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory
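    The distinction can be made concrete with a deliberately simple two-level scheme: propagate variability probabilistically (Monte Carlo) while sweeping the lack-of-knowledge parameter over its plausible interval. This is a generic illustration of the separation, not the specific probability/fuzzy combination advocated in the lecture; model and values are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)

        def model(x, theta):
            """Toy model output depending on a variable input x and an
            imprecisely known parameter theta."""
            return theta * x ** 2

        # Variability: x is heterogeneous, so describe it by a probability distribution
        x_samples = rng.normal(loc=1.0, scale=0.2, size=10_000)

        # Lack of knowledge: theta is only known to lie in [0.8, 1.2], so sweep the interval
        for theta in (0.8, 1.0, 1.2):
            y = model(x_samples, theta)
            print(theta, np.percentile(y, [5, 95]))   # one probabilistic band per theta value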

  14. Thermodynamic Calculations for Systems Biocatalysis

    DEFF Research Database (Denmark)

    Abu, Rohana; Gundersen, Maria T.; Woodley, John M.

    2015-01-01

    ... on the basis of kinetics. However, many of the most interesting non-natural chemical reactions which could potentially be catalysed by enzymes are thermodynamically unfavourable and are thus limited by the equilibrium position of the reaction. A good example is the enzyme ω-transaminase, which catalyses the transamination of a pro-chiral ketone into a chiral amine (interesting in many pharmaceutical applications). Here, the products are often less energetically stable than the reactants, meaning that the reaction may be thermodynamically unfavourable. As in nature, such thermodynamically-challenged reactions can be altered by coupling with other reactions. For instance, in the case of ω-transaminase, such a coupling could be with alanine dehydrogenase. Herein, the aim of this work is to identify thermodynamic bottlenecks within a multi-enzyme process, using the group contribution method to calculate the Gibbs free...
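
    A minimal sketch of the kind of bookkeeping described above: standard Gibbs energies of formation (for example from a group contribution method) are combined into a reaction Gibbs energy and an equilibrium constant, and a positive value flags a thermodynamic bottleneck. All numbers below are placeholders, not values from the paper.

```python
import math

R = 8.314      # J/(mol*K)
T = 298.15     # K

# Hypothetical standard Gibbs energies of formation (kJ/mol), e.g. from group contributions.
dGf = {"ketone": -250.0, "alanine": -370.0, "amine": -230.0, "pyruvate": -350.0}

def reaction_dG(products, reactants):
    """Standard reaction Gibbs energy (kJ/mol) from formation energies."""
    return sum(dGf[s] for s in products) - sum(dGf[s] for s in reactants)

# Transaminase-like step: ketone + alanine -> amine + pyruvate
dG = reaction_dG(["amine", "pyruvate"], ["ketone", "alanine"])
K_eq = math.exp(-dG * 1000.0 / (R * T))
print(f"dG_rxn = {dG:+.1f} kJ/mol, K_eq = {K_eq:.3g}")
if dG > 0:
    print("Unfavourable step: a candidate bottleneck to relieve by coupling to another reaction.")
```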

  15. Calculating utility prudency issue costs

    International Nuclear Information System (INIS)

    Nielsen, K.R.

    1985-01-01

    The nuclear industry, particularly utilities and their construction, engineering and vendor agents, is faced with a surging increase in prudency management audits. What started as primarily a nuclear project-oriented requirement has spread to encompass most significant utility capital construction projects. Such audits are often a precedent condition to commencement of rate hearings. The cost engineer, a primary major capital construction project participant, is required to develop or critique "prudency issue" costs as part of such audits. Although utility costs in the broadest sense are potentially at issue, this paper concentrates on the typical project/construction management costs. The costs of design, procurement and construction are all subject to the calculation process

  16. Shielding calculations. Optimization vs. Paradigms

    International Nuclear Information System (INIS)

    Cornejo Diaz, Nestor; Hernandez Saiz, Alejandro; Martinez Gonzalez, Alina

    2005-01-01

    Many radiation shielding barriers in Cuba have been designed according to the criterion of maximum projected dose rates. This fact has created the paradigm of low dose rates: because of it, dose rate levels greater than a few Sv.h-1 would be considered unacceptable by many specialists, regardless of the real exposure times. Nowadays many shielding barriers are being designed using dose constraints over real exposure times. Behind the new barriers, dose rates can be notably greater than those behind the traditional ones, and this does not imply inadequate designs or construction errors. In this work, significant differences in dose rate levels and shielding thicknesses calculated by the two methods were obtained for some typical installations. The work concludes that the real-exposure-time approach is more adequate for optimising radiation protection, although this method should be carefully applied

  17. Dyscalculia and the Calculating Brain.

    Science.gov (United States)

    Rapin, Isabelle

    2016-08-01

    Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual and spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single digit sets as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers and school agers to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual and spatial and visual and verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Calculation of Configurational Entropy in Complex Landscapes

    Directory of Open Access Journals (Sweden)

    Samuel A Cushman

    2018-04-01

    Full Text Available Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal to the logarithm of the number of ways a lattice with a given dimensionality and number of classes can be arranged to produce the same total amount of edge between cells of different classes. However, that work seemed to also suggest that the feasibility of applying this method to real landscapes was limited due to intractably large numbers of possible arrangements of raster cells in large landscapes. Here I extend that work by showing that: (1) the proportion of arrangements rather than the number with a given amount of edge length provides a means to calculate unbiased relative configurational entropy, obviating the need to compute all possible configurations of a landscape lattice; (2) the edge lengths of randomized landscape mosaics are normally distributed, following the central limit theorem; and (3) given this normal distribution it is possible to fit parametric probability density functions to estimate the expected proportion of randomized configurations that have any given edge length, enabling the calculation of configurational entropy on any landscape regardless of size or number of classes. I evaluate the boundary limits (4) for this normal approximation for small landscapes with a small proportion of a minority class and show it holds under all realistic landscape conditions. I further (5) demonstrate that this relationship holds for a sample of real landscapes that vary in size, patch richness, and evenness of area in each cover type, and (6) I show that the mean and standard deviation of the normally distributed edge lengths can be predicted nearly perfectly as a function of the size, patch richness and diversity of a landscape. Finally, (7) I show that the
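
    The core of the approach described above can be sketched in a few lines: permute the cells of a lattice many times, record the total edge length between unlike classes, fit a normal distribution to those edge lengths, and take the logarithm of the fitted density at the observed edge length as a relative configurational entropy. The lattice size, class proportions and number of permutations below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from math import exp, log, pi, sqrt

rng = np.random.default_rng(1)

def edge_length(grid):
    """Total edge between cells of different classes (rook adjacency)."""
    return int(np.sum(grid[:, 1:] != grid[:, :-1]) + np.sum(grid[1:, :] != grid[:-1, :]))

# Observed landscape: a 20 x 20 lattice with two cover classes (~30% minority class).
observed = (rng.random((20, 20)) < 0.3).astype(int)
e_obs = edge_length(observed)

# Randomize the same cells many times and fit a normal distribution to the edge lengths.
cells = observed.ravel()
edges = np.array([edge_length(rng.permutation(cells).reshape(20, 20)) for _ in range(2000)])
mu, sigma = edges.mean(), edges.std(ddof=1)

# Normal density at the observed edge length approximates the proportion of configurations
# with that edge length; its logarithm is the relative configurational entropy.
density = exp(-0.5 * ((e_obs - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))
print(f"observed edge = {e_obs}, randomized mean = {mu:.1f}, sd = {sigma:.1f}")
print(f"relative configurational entropy ~ ln(density) = {log(density):.2f}")
```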

  19. Calculation the kinetics of the baking biscuit process

    Directory of Open Access Journals (Sweden)

    S. T. Antipov

    2013-01-01

    Full Text Available Based on the input values of the equivalent thermophysical properties and the heat transfer coefficient, the following were calculated: values that reflect the kinetics of the baking process; values that allow the relationship between baking duration and the temperature in the baking chamber to be determined; and the voltage of the active area of the hearth.

  20. Calculation of the inverse data space via sparse inversion

    KAUST Repository

    Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Dirk Jacob Eric

    2011-01-01

    The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from

  1. Class 1 Areas

    Data.gov (United States)

    U.S. Environmental Protection Agency — A "Class 1" area is a geographic area recognized by the EPA as being of the highest environmental quality and requiring maximum protection. Class I areas are areas...

  2. Optimal Height Calculation and Modelling of Noise Barrier

    Directory of Open Access Journals (Sweden)

    Raimondas Grubliauskas

    2011-04-01

    Full Text Available Transport is one of the main sources of noise and has a particularly strong negative impact on the environment. In the city, one of the best methods to reduce the spread of noise into residential areas is a noise barrier. The article presents the adaptation of a noise reduction barrier, with noise distribution both calculated by empirical formulas and modelled. The simulation of noise dispersion was performed with the CadnaA program, which allows modelling the noise levels of various developments under changing conditions. Calculation and simulation results are compared by assessing the level of noise reduction using the same variables. The investigation results are presented as noise distribution isolines. Barriers of different heights were considered, with results calculated at heights of 1, 4 and 15 meters. At the maximum overlap of the calculated and simulated data, the level of noise reduction reached about 10%. Article in Lithuanian

  3. Engineering task plan for steam line ramp calculations

    International Nuclear Information System (INIS)

    DeSantis, G.N.; Freeman, R.D.

    1994-01-01

    The purpose of this document is to provide an approved work plan to perform calculations that verify the load limits of a proposed ramp over a steam line at the back side (East side) of SY Farm in support of work package 2W-94-00812/K. The objective of this supporting document is to provide Operations with a set of checked calculations that verify that the ramp over the steam line at SY Farm will support a fully loaded concrete mixer truck without affecting the steam line. The calculations will be performed by an engineer from Facility Systems and independently checked and reviewed by another engineer. The calculations may then be added to the work package. If Operations decides to make any configuration changes to the steam line or surrounding area, Operations shall have these changes documented by an Engineering Change Notice (ECN). This ECN can be done by Facility Systems or any other engineering organization at the direction of Operations

  4. Calculation of transformers leakage reactance using electromagnetic energy technique

    International Nuclear Information System (INIS)

    Feiz, J.; Mohseni, H.; Sabet Marzooghi, S.; Naderian Jahromi, A.

    2004-01-01

    Determination of the leakage reactance of transformers with magnetic cores has long been an area of interest to engineers involved in the design of power and distribution transformers. It is required for predicting the performance of transformers before their actual assembly. In this paper a closed-form solution technique applicable to leakage reactance calculations for transformers is presented. The emphasis is on the development of a simple method to calculate the leakage reactance of distribution transformers and smaller transformers. An energy-technique procedure for computing the leakage reactances in distribution transformers is presented. This method is very efficient compared with the use of flux-element and image techniques and is also remarkably accurate. Examples of calculated leakage inductances and short-circuit impedances are given for illustration. For validation, the results are compared with those obtained from tests. This paper thus presents a novel technique for calculating the leakage inductance in different parts of the transformer using the electromagnetic stored energy
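
    The stored-energy route can be sketched as follows: integrate B^2/(2*mu0) over the leakage-field volume and set L = 2W/I^2. The sketch below uses the classical one-dimensional leakage-field approximation for a two-winding concentric arrangement (field rising linearly across the inner winding, constant in the duct, falling across the outer winding); the turns and dimensions are hypothetical, and this illustrates the energy method in general rather than the paper's own formulation.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def leakage_inductance(N, I, h_w, mlt, a1, g, a2, n_pts=2001):
    """Leakage inductance (H), referred to the excited winding, from stored magnetic energy.

    1-D leakage-field assumption: the axial field H rises linearly across the inner
    winding (radial build a1), is constant over the duct g, and falls linearly across
    the outer winding (a2).  W = integral of B^2/(2*mu0) dV and L = 2*W / I^2.
    """
    x = np.linspace(0.0, a1 + g + a2, n_pts)
    H_peak = N * I / h_w
    H = np.piecewise(
        x,
        [x < a1, (x >= a1) & (x < a1 + g), x >= a1 + g],
        [lambda t: H_peak * t / a1,
         H_peak,
         lambda t: H_peak * (a1 + g + a2 - t) / a2],
    )
    energy = np.trapz(0.5 * MU0 * H**2, x) * h_w * mlt   # J, volume element = h_w * mlt * dx
    return 2.0 * energy / I**2

# Hypothetical distribution-transformer geometry (metres) and winding data.
L = leakage_inductance(N=300, I=20.0, h_w=0.40, mlt=0.90, a1=0.025, g=0.012, a2=0.030)
print(f"leakage inductance ~ {L*1e3:.2f} mH, reactance at 50 Hz ~ {2*np.pi*50*L:.2f} ohm")
```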

  5. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. The lack of computer-generated cylindrical holograms is because the spatial resolution of output devices is not great enough; therefore, we have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new calculation method for fast calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  6. Time improvement of photoelectric effect calculation for absorbed dose estimation

    International Nuclear Information System (INIS)

    Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E

    2007-01-01

    Ionizing radiation therapy is a very useful tool in cancer treatment. It is very important to determine the absorbed dose in human tissue to accomplish an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are computationally expensive. Absorbed-dose calculation programs using different strategies have to choose between estimation quality and calculation time. This paper describes an optimized method for the photoelectron polar angle calculation in the photoelectric effect, which is significant for estimating the energy deposited in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed to do the calculation is approximately 1/7th of that of the non-optimized approach. This has been done keeping precision invariant

  7. Shielding calculation for treatment rooms of high energy linear accelerator

    International Nuclear Information System (INIS)

    Elleithy, M.A.

    2006-01-01

    A review of the German Institute for Standardization (DIN) scheme of shielding calculation and the essential data required has been done for X-rays and electron beams in the energy range from 1 MeV to 50 MeV. Shielding calculation was done for primary and secondary radiations generated during X-ray operation of the linac. In addition, shielding was done against X-rays (bremsstrahlung) generated by useful electron beams. The calculations also covered the neutrons generated from the interactions of useful X-rays (at energies above 8 MeV) with the surroundings. The present application involved the computation of shielding against the double-scattered components of X-rays and neutrons in the maze area and the thickness of the paraffin wax of the room door. A newly developed computer program was designed to assist shielding thickness calculations for a new linac installation or for replacing an existing machine. The program uses a combination of published tables and figures in computing the shielding thickness at different locations for all possible radiation situations. The DIN published data for a 40 MeV accelerator room were compared with the program calculations, and good agreement was found between the two. The developed program improved the accuracy and speed of calculation

  8. Stability calculations for MHD magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Wang, S.T.; Harrang, J.

    1978-01-01

    When a cryostable composite conductor carrying current experiences a heat input from a mechanical perturbation, a normal region develops which initially propagates and then either collapses or continues to propagate. A computer model has been devised to study this phenomenon. The model incorporates initial or continuing heat input from mechanical perturbations, heat conducted to the neighboring elements of the conductor and, if appropriate, heat conducted through insulation to neighboring turns. Heat is transferred to the helium coolant according to a specified heat transfer coefficient. If the element of conductor is in a normal or current-sharing state, resistive heating also occurs. The (unstable) equilibrium state of heat generation and conduction has been studied; results agree with those of a static calculation. The model has been validated against experimental measurements of response to heat pulses. The model suffers from uncertainties in transient heat transfer to the helium, but even more from uncertainties in the perturbing heat pulse which the magnet might be expected to suffer

  9. Selfconsistent calculations for hyperdeformed nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Molique, H.; Dobaczewski, J.; Dudek, J.; Luo, W.D. [Universite Louis Pasteur, Strasbourg (France)

    1996-12-31

    Properties of the hyperdeformed nuclei in the A ~ 170 mass range are re-examined using the self-consistent Hartree-Fock method with the SOP parametrization. A comparison with the previous predictions that were based on a non-selfconsistent approach is made. The existence of the "hyperdeformed shell closures" at the proton and neutron numbers Z=70 and N=100 and their very weak dependence on the rotational frequency is suggested; the corresponding single-particle energy gaps are predicted to play a role similar to that of the Z=66 and N=86 gaps in the superdeformed nuclei of the A ~ 150 mass range. Selfconsistent calculations also suggest that the A ~ 170 hyperdeformed structures have negligible mass asymmetry in their shapes. Very importantly for the experimental studies, both the fission barriers and the "inner" barriers (that separate the hyperdeformed structures from those with smaller deformations) are predicted to be relatively high, up to a factor of ~2 higher than the corresponding ones in the 152Dy superdeformed nucleus used as a reference.

  10. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
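
    SRFYDO itself is a Los Alamos tool, but the basic Bayesian bookkeeping for a series system can be sketched briefly: give each component a Beta posterior from its pass/fail test data, sample those posteriors, and multiply the draws to obtain the posterior of the system reliability. The component names, prior and test counts below are invented for the illustration and are not taken from SRFYDO or any real system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pass/fail test data: (successes, trials) for a three-component series system.
component_tests = {"igniter": (48, 50), "valve": (29, 30), "controller": (97, 100)}

# Beta(1, 1) prior updated with binomial test data gives a Beta posterior for each component.
draws = [rng.beta(1 + s, 1 + (n - s), size=20_000) for s, n in component_tests.values()]

# Series system: every component must work, so system reliability is the product.
system = np.prod(draws, axis=0)
lo, hi = np.percentile(system, [5, 95])
print(f"posterior mean system reliability = {system.mean():.3f}")
print(f"90% credible interval = ({lo:.3f}, {hi:.3f})")
```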

  11. Software testing in roughness calculation

    International Nuclear Information System (INIS)

    Chen, Y L; Hsieh, P F; Fu, W E

    2005-01-01

    A test method to determine the quality of the functions provided by software for roughness measurement is presented in this study. The quality of the software functions should be assessed through the entire life cycle of the software package. The specific function, or output accuracy, is crucial for the analysis of the experimental data. For scientific applications, however, commercial software is usually embedded in a specific instrument, which is used for measurement or analysis during the manufacturing process. In general, the error ratio caused by the software becomes more apparent when dealing with relatively small quantities, like measurements in the nanometre-scale range. The model of 'using a data generator' proposed by NPL (UK) was applied in this study. An example of roughness software is tested and analyzed by the above-mentioned process. After selecting the 'reference results', the 'reference data' were generated by a programmable 'data generator'. The filter function with a 0.8 mm cutoff value, defined in ISO 11562, was tested with 66 sinusoid data sets at different wavelengths. Test results from the commercial software and a CMS-written program were compared to the theoretical data calculated from the ISO standards. For the filter function in this software, the results showed a significant disagreement between the reference and test results. The short-cutoff feature for filtering at high frequencies does not function properly, while the long-cutoff feature has a maximum difference in the filtering ratio of more than 70% between the wavelengths of 300 μm and 500 μm. In conclusion, the commercial software needs to be tested more extensively for specific applications by appropriate design of reference datasets to ensure its function quality
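
    The 'data generator' idea is easy to reproduce: synthesise sinusoids of known wavelength, apply the Gaussian profile filter whose weighting function is defined in ISO 11562 with a 0.8 mm cutoff, and compare the measured transmission of the mean line against the analytic value exp(-pi*(alpha*lambda_c/lambda)^2). The sketch below is a simplified reference implementation for such a comparison, not the code under test in the paper; the sampling interval, evaluation length and wavelengths are arbitrary choices.

```python
import numpy as np

ALPHA = np.sqrt(np.log(2.0) / np.pi)          # ISO 11562 constant, about 0.4697

def gaussian_mean_line(z, dx, cutoff):
    """Mean line of profile z (sampled every dx) using the ISO 11562 Gaussian weighting."""
    half = int(np.ceil(cutoff / dx))          # truncate the kernel at +/- one cutoff length
    x = np.arange(-half, half + 1) * dx
    s = np.exp(-np.pi * (x / (ALPHA * cutoff)) ** 2)
    s /= s.sum()                              # discrete weights summing to one
    return np.convolve(z, s, mode="same")

dx = 0.5e-3      # sampling interval in mm (0.5 um)
cutoff = 0.8     # cutoff wavelength in mm
for wavelength in (0.1, 0.3, 0.8, 2.5):       # sinusoid wavelengths in mm
    x = np.arange(0.0, 8.0, dx)
    z = np.sin(2.0 * np.pi * x / wavelength)  # unit-amplitude reference profile
    mean_line = gaussian_mean_line(z, dx, cutoff)
    core = mean_line[len(z) // 4 : -len(z) // 4]          # drop edge effects
    measured = (core.max() - core.min()) / 2.0
    theory = np.exp(-np.pi * (ALPHA * cutoff / wavelength) ** 2)
    print(f"lambda = {wavelength:4.1f} mm: transmission {measured:.3f} (theory {theory:.3f})")
```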

  12. Nonlinear calculations for bump Cepheids

    International Nuclear Information System (INIS)

    Hodson, S.W.; Cox, A.N.

    1979-01-01

    Hydrodynamic calculations to find strictly periodic solutions for the fundamental mode pulsations of 7 M☉ models were made using the von Sengbusch-Stellingwerf relaxation method. The models have a helium enrichment in the surface convection zones to Y = 0.78, which from the linear theory period ratio π2/π0 and the Simon and Schmidt resonance hypothesis, should give the observed Hertzsprung progression of light and velocity curve bump phase with period. These surface helium enhanced models show the proper nonlinear bump phase behavior without resort to any mass loss before or during the blue loop phases of yellow giant evolution. At 6000 K and the evolution theory luminosity of 4744 L☉ for 7 M☉, that is, at a fundamental mode period of 8.5 days, the velocity curve bump is well after the maximum expansion velocity. At 5400 K and at the same luminosity (period of 12.5 days), there is a bump on the velocity curve well before the time of maximum expansion velocity. The latter case seems to exhibit the Christy echoes but not the former. The echo interpretation may not be appropriate for these masses, which are larger than the anomalous masses used by Christy, Stobie, and Adams. Resonance of the fundamental and second overtone modes should not necessarily show echoes of surface disturbances from the center. The conclusion is that helium enrichment in the surface convection zones can adequately explain observations of bump Cepheids at evolution theory masses. 12 references

  13. Relativistic Few-Body Hadronic Physics Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Polyzou, Wayne [Univ. of Iowa, Iowa City, IA (United States)

    2016-06-20

    The goal of this research proposal was to use "few-body" methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete accurate experimental measurements on these systems. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are "simple", both the experiments and computations push the limits of technology. The important property of "few-body" systems is that the "cluster property" implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero energy scattering to few GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which when compared with calculations of other groups provided an essential check on these complicated calculations. In

  14. Calculation of the Capture Edge in the OGMS Superconducting Separator

    International Nuclear Information System (INIS)

    Kozak, S.

    1998-01-01

    Many ferromagnetic particles that should be deflected are captured on the wall of an OGMS (Open Gradient Magnetic Separation) separator. This ferromagnetic material influences the magnetic and hydrodynamic conditions in the separator working area. The problem of how to calculate the capture edge can be defined as the search for the geometry of a nonlinear system at known boundary conditions. The boundary conditions on the capture edge are a function of the capture edge geometry. The experimental results of the separation recovery are given. The capture edge calculation has been performed with FLUX 2D and the results are presented. (author)

  15. SIMCRI: a simple computer code for calculating nuclear criticality parameters

    International Nuclear Information System (INIS)

    Nakamaru, Shou-ichi; Sugawara, Nobuhiko; Naito, Yoshitaka; Katakura, Jun-ichi; Okuno, Hiroshi.

    1986-03-01

    This is a user's manual for the simple criticality calculation code SIMCRI. The code has been developed to facilitate criticality calculations on a single unit of nuclear fuel. SIMCRI makes an extensive survey with little computing time. The cross-section library MGCL for SIMCRI is the same as that for the Monte Carlo criticality code KENOIV; it is, therefore, easy to compare the results of the two codes. SIMCRI solves eigenvalue problems and fixed-source problems based on the one-space-point B1 equation. The results include the infinite and effective multiplication factors, critical buckling, migration area, diffusion coefficient and so on. SIMCRI is included in the criticality safety evaluation code system JACS. (author)

  16. Calculation of the resonance cross section functions

    International Nuclear Information System (INIS)

    Slipicevic, K.F.

    1967-11-01

    This paper includes the procedure for calculating the Doppler broadened line shape functions ψ and χ which are needed for calculation of resonance cross section functions. The obtained values are given in tables

  17. Calculation of the resonance cross section functions

    Energy Technology Data Exchange (ETDEWEB)

    Slipicevic, K F [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1967-11-15

    This paper includes the procedure for calculating the Doppler broadened line shape functions ψ and χ which are needed for calculation of resonance cross section functions. The obtained values are given in tables.
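
    For orientation, the two records above concern the classical Doppler broadening functions. Under one common convention (which reduces to the natural line shape 1/(1+x^2) and 2x/(1+x^2) as the Doppler width vanishes), they can be evaluated by direct quadrature as sketched below; this is an independent illustration, not the tabulation procedure of the report, and the integration range and grid are assumptions.

```python
import numpy as np

def psi_chi(zeta, x, y_half_width=60.0, n=200_001):
    """Doppler-broadened line-shape functions psi and chi by direct quadrature.

    Convention used here:
      psi = zeta/(2*sqrt(pi)) * Int exp(-zeta^2 (x - y)^2 / 4) / (1 + y^2) dy
      chi = zeta/sqrt(pi)     * Int y * exp(-zeta^2 (x - y)^2 / 4) / (1 + y^2) dy
    so that psi -> 1/(1+x^2) and chi -> 2x/(1+x^2) as zeta -> infinity.
    """
    y = np.linspace(-y_half_width, y_half_width, n)
    gauss = np.exp(-0.25 * zeta**2 * (x - y) ** 2)
    lorentz = 1.0 / (1.0 + y**2)
    psi = zeta / (2.0 * np.sqrt(np.pi)) * np.trapz(gauss * lorentz, y)
    chi = zeta / np.sqrt(np.pi) * np.trapz(y * gauss * lorentz, y)
    return psi, chi

for zeta in (0.1, 1.0, 10.0):
    for x in (0.0, 1.0, 5.0):
        p, c = psi_chi(zeta, x)
        print(f"zeta = {zeta:5.1f}, x = {x:3.1f}:  psi = {p:.4f}  chi = {c:.4f}")
```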

  18. Results of recent calculations using realistic potentials

    International Nuclear Information System (INIS)

    Friar, J.L.

    1987-01-01

    Results of recent calculations for the triton using realistic potentials with strong tensor forces are reviewed, with an emphasis on progress made using the many different calculational schemes. Several test problems are suggested. 49 refs., 5 figs

  19. The cellular approach to band structure calculations

    International Nuclear Information System (INIS)

    Verwoerd, W.S.

    1982-01-01

    A short introduction to the cellular approach in band structure calculations is given. The linear cellular approach and its potential applicability in surface structure calculations is given particular consideration

  20. Calculation of protein-ligand binding affinities.

    Science.gov (United States)

    Gilson, Michael K; Zhou, Huan-Xiang

    2007-01-01

    Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
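
    To make the MM-PBSA-style bookkeeping mentioned above concrete, the sketch below averages per-snapshot molecular-mechanics and solvation energy terms and adds a configurational-entropy penalty: dG_bind ~ <dE_MM> + <dG_solv> - T*dS. All energies here are synthetic placeholder numbers, and no Poisson-Boltzmann or surface-area solver is actually run; the point is only the structure of the estimate, not the method of the review.

```python
import numpy as np

rng = np.random.default_rng(7)
n_snapshots = 50

# Hypothetical per-snapshot energies (kcal/mol) for complex, receptor and ligand.
E_MM_complex   = rng.normal(-5210.0, 8.0, n_snapshots)
E_MM_receptor  = rng.normal(-4980.0, 8.0, n_snapshots)
E_MM_ligand    = rng.normal(-190.0, 2.0, n_snapshots)
G_solv_complex  = rng.normal(-830.0, 5.0, n_snapshots)
G_solv_receptor = rng.normal(-810.0, 5.0, n_snapshots)
G_solv_ligand   = rng.normal(-42.0, 2.0, n_snapshots)
T_dS = -10.0   # T*dS on binding (kcal/mol); negative because configurational entropy is lost

# Single-trajectory MM-PBSA-style estimate: average the per-snapshot differences.
dE_MM   = E_MM_complex - E_MM_receptor - E_MM_ligand
dG_solv = G_solv_complex - G_solv_receptor - G_solv_ligand
dG_bind = dE_MM.mean() + dG_solv.mean() - T_dS
print(f"<dE_MM> = {dE_MM.mean():6.1f}   <dG_solv> = {dG_solv.mean():6.1f}   -T*dS = {-T_dS:5.1f}")
print(f"estimated dG_bind ~ {dG_bind:.1f} kcal/mol")
```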

  1. Model for fission-product calculations

    International Nuclear Information System (INIS)

    Smith, A.B.

    1984-01-01

    Many fission-product cross sections remain unmeasurable thus considerable reliance must be placed upon calculational interpolation and extrapolation from the few available measured cross sections. The vehicle, particularly for the lighter fission products, is the conventional optical-statistical model. The applied goals generally are: capture cross sections to 7 to 10% accuracies and inelastic-scattering cross sections to 25 to 50%. Comparisons of recent evaluations and experimental results indicate that these goals too often are far from being met, particularly in the area of inelastic scattering, and some of the evaluated fission-product cross sections are simply physically unreasonable. It is difficult to avoid the conclusion that the models employed in many of the evaluations are inappropriate and/or inappropriately used. In order to alleviate the above unfortunate situations, a regional optical-statistical (OM) model was sought with the goal of quantitative prediction of the cross sections of the lighter-mass (Z = 30-51) fission products. The first step toward that goal was the establishment of a reliable experimental data base consisting of energy-averaged neutron total and differential-scattering cross sections. The second step was the deduction of a regional model from the experimental data. It was assumed that a spherical OM is appropriate: a reasonable and practical assumption. The resulting OM then was verified against the measured data base. Finally, the physical character of the regional model is examined

  2. Power calculation of grading device in desintegrator

    Science.gov (United States)

    Bogdanov, V. S.; Semikopenko, I. A.; Vavilov, D. V.

    2018-03-01

    This article describes the analytical method of measuring the secondary power consumption, necessitated by the installation of a grading device in the peripheral part of the grinding chamber in the desintegrator. There is a calculation model for defining the power input of the disintegrator increased by the extra power demand, required to rotate the grading device and to grind the material in the area between the external row of hammers and the grading device. The work has determined the inertia moments of a cylindrical section of the grading device with armour plates. The processing capacity of the grading device is adjusted to the conveying capacity of the auger feeder. The grading device enables one to increase the concentration of particles in the peripheral part of the grinding chamber and the amount of interaction between particles and armour plates as well as the number of colliding particles. The perforated sections provide the output of the ground material with the proper size granules, which together with the effects of armour plates, improves the efficiency of grinding. The power demand to rotate the grading device does not exceed the admissible value.

  3. Final disposal room structural response calculations

    International Nuclear Information System (INIS)

    Stone, C.M.

    1997-08-01

    Finite element calculations have been performed to determine the structural response of waste-filled disposal rooms at the WIPP for a period of 10,000 years after emplacement of the waste. The calculations were performed to generate the porosity surface data for the final set of compliance calculations. The most recent reference data for the stratigraphy, waste characterization, gas generation potential, and nonlinear material response have been brought together for this final set of calculations

  4. Buckling feedback of the spectral calculations

    International Nuclear Information System (INIS)

    Jing Xingqing; Shan Wenzhi; Luo Jingyu

    1992-01-01

    This paper studies the problem of buckling feedback in the spectral calculations used for reactor physics calculations and presents a useful method by which the buckling feedback of the spectral calculations is implemented. The effect of the buckling feedback on the spectra and the broad-group cross sections, the convergence of the buckling feedback iteration, and the effect of the division of the spectral zones are discussed in the calculations. This method has been used for the physical design of the HTR-10 MW Test Module

  5. Health Service Areas (HSAs) - Small Area Estimates

    Science.gov (United States)

    Health Service Areas (HSAs) are a compromise between the 3000 counties and the 50 states. An HSA may be thought of as an area that is relatively self-contained with respect to hospital care and may cross over state boundaries.

  6. 47 CFR 1.1623 - Probability calculation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Probability calculation. 1.1623 Section 1.1623 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be...

  7. Calculation of correlation between spins in a magnetic substance; Calcul des correlations entre spins dans une substance magnetique

    Energy Technology Data Exchange (ETDEWEB)

    Gennes, P.G. de [Commissariat a l' Energie Atomique, Saclay (France)

    1959-07-01

    The report presents an elementary calculation of the correlations between spins in a magnetic substance, and in particular of their asymptotic form for relatively widely spaced spins. This permits the determination of the phenomenological parameters introduced by Van Hove to describe the magnetic scattering of neutrons in the region of critical opalescence. (author)

  8. Operational source receptor calculations for large agglomerations

    Science.gov (United States)

    Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael

    2016-04-01

    For Air quality policy an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much of it is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, the time of the year, but also - very strongly - on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real-time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g. one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service) the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data, comprehensive calculations of chemistry and microphysics, driven by high quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g. ozone and PM) to assess the effectiveness of fictive emission reductions in air pollutant emissions that are implemented immediately, either within the agglomeration or outside. The effects are visualized as bar charts, showing resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission

  9. Determination of retinal surface area.

    Science.gov (United States)

    Nagra, Manbir; Gilmartin, Bernard; Thai, Ngoc Jade; Logan, Nicola S

    2017-09-01

    Previous attempts at determining retinal surface area and surface area of the whole eye have been based upon mathematical calculations derived from retinal photographs, schematic eyes and retinal biopsies of donor eyes. Three-dimensional (3-D) ocular magnetic resonance imaging (MRI) allows a more direct measurement; it can be used to image the eye in vivo, and there is no risk of tissue shrinkage. The primary purpose of this study is to compare, using T2-weighted 3-D MRI, retinal surface areas for the superior-temporal (ST), inferior-temporal (IT), superior-nasal (SN) and inferior-nasal (IN) retinal quadrants. An ancillary aim is to examine whether inter-quadrant variations in area are concordant with reported inter-quadrant patterns of susceptibility to retinal breaks associated with posterior vitreous detachment (PVD). Seventy-three adult participants presenting without retinal pathology (mean age 26.25 ± 6.06 years) were scanned using a Siemens 3-Tesla MRI scanner to provide T2-weighted MR images that demarcate fluid-filled internal structures for the whole eye and provide high-contrast delineation of the vitreous-retina interface. Integrated MRI software generated the total internal ocular surface area (TSA). The second nodal point was used to demarcate the origin of the peripheral retina in order to calculate the total retinal surface area (RSA) and quadrant retinal surface areas (QRSA) for the ST, IT, SN, and IN quadrants. Mean spherical error (MSE) was -2.50 ± 4.03 D and mean axial length (AL) 24.51 ± 1.57 mm. Mean TSA and RSA for the RE were 2058 ± 189 and 1363 ± 160 mm², respectively. Repeated measures ANOVA for the QRSA data indicated a significant difference within quadrants (P ... area/mm increase in AL. Although the differences between QRSAs are relatively small, there was evidence of concordance with reported inter-quadrant patterns of susceptibility to retinal breaks associated with PVD. The data allow AL to be converted to QRSAs, which will assist further

  10. 105-KW Sandfilter Backwash Pit sludge volume calculation

    International Nuclear Information System (INIS)

    Dodd, E.N. Jr.

    1995-01-01

    The volume of sludge contained in the 100-KW Sandfilter Backwash Pit (SFBWP) was calculated from depth measurements of the sludge, pit dimension measurements and analysis of video tape recordings taken by an underwater camera. The term sludge as used in this report is any combination of sand, sediment, or corrosion products visible in the SFBWP area. This work was performed to determine a baseline volume for use in determination of quantities of uranium and plutonium deposited in the pit from sandfilter backwashes. The SFBWP has three areas where sludge is deposited: (1) the main pit floor, (2) the transfer channel floor, and (3) the surfaces and structures in the SFBWP. The depths of sludge and the uniformity of deposition vary significantly between these three areas. As a result, each of the areas was evaluated separately. The total volume of sludge determined was 3.75 m³ (132.2 ft³)

  11. MILDOS-AREA: An enhanced version of MILDOS for large-area sources

    International Nuclear Information System (INIS)

    Yuan, Y.C.; Wang, J.H.C.; Zielen, A.

    1989-06-01

    The MILDOS-AREA computer code is a modified version of the MILDOS code, which estimates the radiological impacts of airborne emissions from uranium mining and milling facilities or any other large-area source involving emissions of radioisotopes of the uranium-238 series. MILDOS-AREA is designed for execution on personal computers. The modifications incorporated in the MILDOS-AREA code provide enhanced capabilities for calculating doses from large-area sources and update dosimetry calculations. The major revision from the original MILDOS code is the treatment of atmospheric dispersion from area sources: MILDOS-AREA substitutes a finite element integration approach for the virtual-point method (the algorithm used in the original MILDOS code) when specified by the user. Other revisions include the option of using Martin-Tickvart dispersion coefficients in place of Briggs coefficients for a given source, consideration of plume reflection, and updated internal dosimetry calculations based on the most recent recommendations of the International Commission on Radiological Protection and the age-specific dose calculation methodology developed by Oak Ridge National Laboratory. This report also discusses changes in computer code structure incorporated into MILDOS-AREA, summarizes data input requirements, and provides instructions for installing and using the program on personal computers. 15 refs., 9 figs., 26 tabs
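
    The difference between the two dispersion treatments mentioned above can be sketched numerically: approximate the area source by a grid of small point sources and sum their Gaussian-plume contributions at the receptor, then compare with collapsing the whole release into a single point. The plume model and the power-law dispersion coefficients below are generic illustrations (not the Briggs or Martin-Tickvart sets used by MILDOS-AREA), and the source size, wind speed and receptor location are invented.

```python
import numpy as np

def chi_over_q(dx, dy, u=3.0, a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85):
    """Ground-level chi/Q (s/m^3) for a ground-level point source.

    dx: downwind distance (m), dy: crosswind offset (m).  Simple Gaussian plume with
    power-law sigma_y, sigma_z; the coefficients are illustrative stand-ins only.
    """
    dx = np.asarray(dx, float)
    dy = np.asarray(dy, float)
    chi = np.zeros(np.broadcast(dx, dy).shape)
    m = dx > 1.0                                   # only downwind cells contribute
    sy = a_y * dx[m] ** b_y
    sz = a_z * dx[m] ** b_z
    chi[m] = np.exp(-0.5 * (dy[m] / sy) ** 2) / (np.pi * sy * sz * u)
    return chi

# 200 m x 200 m tailings area releasing 1 g/s in total; receptor 500 m downwind of its centre.
n = 40
xs = np.linspace(-100.0, 100.0, n)                 # source cell centres, wind along +x
X, Y = np.meshgrid(xs, xs)
q_cell = 1.0 / n**2                                # equal share of the release per cell

chi_area = np.sum(q_cell * chi_over_q(500.0 - X, 0.0 - Y))
chi_point = chi_over_q(np.array([500.0]), np.array([0.0]))[0]
print(f"area-source sum:        chi = {chi_area:.3e} g/m^3")
print(f"single point at centre: chi = {chi_point:.3e} g/m^3")
```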

  12. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of accurate head models to the integration of the models in the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized ... the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh ...

  13. Beam dynamics calculations for the linac booster beam line

    International Nuclear Information System (INIS)

    Lu, J.Q.; Cramer, J.G.; Storm, D.W.

    1987-01-01

    Beam optics focusing characteristics, in both the transverse and longitudinal directions, of the superconducting linac booster beam line are calculated for different particles. Three computer programs, TRANSPORT, LYRA and ENTIME, are used to simulate the particle motions. The first is used to simulate the radial particle motions. The effects of the energy increase on the transverse phase-space area are considered by including the accelerating matrices of each resonator. The second program is used to simulate the longitudinal particle motions. Beam longitudinal motions are also calculated with the program ENTIME, with which visual pictures in the energy-time phase space can be displayed on the terminal screen. In addition, the stability of the periodic particle motions in the radial directions is considered and calculated

  14. Calculations of a wideband metamaterial absorber using equivalent medium theory

    Science.gov (United States)

    Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling

    2016-08-01

    Metamaterial absorbers (MMAs) have drawn increasing attention in many areas due to the fact that they can absorb electromagnetic (EM) waves with unity absorptivity. We demonstrate the design, simulation, experiment and calculation of a wideband MMA based on a loaded double-square-loop (DSL) array of chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption band is about 9.1 GHz, and the relative bandwidth is 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is utilized to calculate the absorption of the DSL MMA, and the calculated absorption based on EMT agrees with the simulated and measured results. The method based on EMT provides a new way to analyse the mechanism of MMAs.

  15. Review of Axial Burnup Distribution Considerations for Burnup Credit Calculations

    International Nuclear Information System (INIS)

    Wagner, J.C.; DeHart, M.D.

    2000-01-01

    This report attempts to summarize and consolidate the existing knowledge on axial burnup distribution issues that are important to burnup credit criticality safety calculations. Recently released Nuclear Regulatory Commission (NRC) staff guidance permits limited burnup credit, and thus, has prompted resolution of the axial burnup distribution issue. The reactivity difference between the neutron multiplication factor (keff) calculated with explicit representation of the axial burnup distribution and keff calculated assuming a uniform axial burnup is referred to as the ''end effect.'' This end effect is shown to be dependent on many factors, including the axial-burnup profile, total accumulated burnup, cooling time, initial enrichment, assembly design, and the isotopics considered (i.e., actinide-only or actinides plus fission products). Axial modeling studies, efforts related to the development of axial-profile databases, and the determination of bounding axial profiles are also discussed. Finally, areas that could benefit from further efforts are identified

  16. Comparison between ASHRAE and ISO thermal transmittance calculation methods

    DEFF Research Database (Denmark)

    Blanusa, Petar; Goss, William P.; Roth, Hartwig

    2007-01-01

    ... is proportional to the glazing/frame sightline distance, which is also proportional to the total glazing spacer length. An example calculation of the overall heat transfer and thermal transmittance (U-value or U-factor) using the two methods for a thermally broken, aluminum framed slider window is presented. ... The fenestration thermal transmittance calculation analyses presented in this paper show that small differences exist between the calculated thermal transmittance values produced by the ISO and ASHRAE methods. The results also show that the overall thermal transmittance difference between the two methodologies decreases as the total window area (glazing plus frame) increases. Thus, the resulting difference in thermal transmittance values for the two methods is negligible for larger windows. This paper also shows algebraically that the differences between the ISO and ASHRAE methods turn out to be due to the way ...

  17. Two-dimensional sensitivity calculation code: SENSETWO

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Nakayama, Mitsuo; Minami, Kazuyoshi; Seki, Yasushi; Iida, Hiromasa.

    1979-05-01

    A SENSETWO code for the calculation of cross-section sensitivities with a two-dimensional model has been developed on the basis of first-order perturbation theory. It uses forward neutron and/or gamma-ray fluxes and adjoint fluxes obtained by the two-dimensional discrete ordinates code TWOTRAN-II. The data and information on cross sections, geometry, nuclide densities, response functions, etc. are transmitted to SENSETWO by the dump magnetic tape made in the TWOTRAN calculations. The required input for SENSETWO calculations is thus very simple. SENSETWO yields as printed output the cross-section sensitivities for each coarse-mesh zone and for each energy group, as well as plotted output of sensitivity profiles specified by the input. A special feature of the code is that it also calculates the reaction rate using the response function employed as the adjoint source in the TWOTRAN adjoint calculation together with the forward flux from the TWOTRAN forward calculation. (author)

  18. Perspective on the audit calculation for SFR using TRACE code

    Energy Technology Data Exchange (ETDEWEB)

    Shin, An Dong; Choi, Yong Won; Bang, Young Suk; Bae, Moo Hoon; Huh, Byung Gil; Seol, Kwang One [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    The Korean Sodium Cooled Fast Reactor (SFR) is being developed by KAERI. The prototype SFR will be the first SFR submitted for licensing. KINS recently started research programs to prepare for licensing of this new design concept. Safety analysis for a given reactor is based on computational estimation with conservatism and/or uncertainty of modeling. For the audit calculation for the sodium cooled fast reactor (SFR), the TRACE code is considered as one of the analytical tools for SFR, since the TRACE code already has sodium-related properties and models in it and there is experience with it in the liquid metal coolant system area abroad. The applicability of the TRACE code to SFR is prechecked before the real audit calculation. In this study, Demonstration Fast Reactor (DFR) 600 steady-state conditions are simulated to identify areas where modeling improvements of the TRACE code are needed.

  19. Calculation of crack stress density of cement base materials

    Directory of Open Access Journals (Sweden)

    Chun-e Sui

    2018-01-01

    Full Text Available In this paper, the fracture load of cement paste with different water-cement ratios and different mineral admixtures, including fly ash, silica fume and slag, is obtained through experiments. The three-dimensional fracture surface is reconstructed and the three-dimensional effective area of the fracture surface is calculated. The effective fracture stress density of the different cement pastes is obtained. The results show that a polynomial function can accurately describe the relationship between the three-dimensional total area and the tensile strength

  20. [Calculation of workers' health care costs].

    Science.gov (United States)

    Rydlewska-Liszkowska, Izabela

    2006-01-01

    In different health care systems, there are different schemes of organization and principles of financing activities aimed at ensuring the health and safety of the working population. Regardless of the scheme and the range of health care provided, economists strive for rationalization of costs (including their reduction). This applies both to employers, who include workers' health care costs in the indirect costs of manufacturing their market product, and to health care institutions, which provide health care services. In practice, new methods of setting the costs of workers' health care facilitate regular cost control, acquisition of detailed information about costs, and better adjustment of information to planning and control needs in individual health care institutions. For economic institutions and institutions specialized in workers' health care, a traditional cost-effect calculation focused on setting the costs of individual products (services) is useful only if costs are relatively low and the output of simple products is not very high. But when products form aggregates of numerous actions, like those involved in occupational medicine services, the method of activity-based costing (ABC), representing the process approach, is much more useful. According to this approach, costs are attributed to the product according to the resources used during the different activities involved in its production. The calculation of costs proceeds through allocation of all direct costs to specific processes in a given institution. Indirect costs are settled on the basis of resources used during the implementation of the individual tasks involved in the process of making a new product. In this method, the so-called map of processes/actions that make up the manufactured product, and their interrelations, is of particular importance. Advancements in cost-effect calculation for the management of health care institutions depend on their managerial needs. Current trends in this regard primarily depend on treating all cost reference

  1. Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.

    Science.gov (United States)

    Easlon, Hsien Ming; Bloom, Arnold J

    2014-07-01

    Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.

  2. Easy Leaf Area: Automated Digital Image Analysis for Rapid and Accurate Measurement of Leaf Area

    Directory of Open Access Journals (Sweden)

    Hsien Ming Easlon

    2014-07-01

    Full Text Available Premise of the study: Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. Methods and Results: Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. Conclusions: Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
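
    The color-ratio idea described in the two records above is simple enough to sketch directly: count pixels whose green channel dominates (leaf) and pixels whose red channel dominates (the calibration square of known area), and scale one count by the other. The sketch below runs on a synthetic image so it is self-contained; the 1.2 ratio threshold, image size and colors are arbitrary choices and this is not the Easy Leaf Area source code.

```python
import numpy as np

def leaf_area_from_rgb(img, calib_area_cm2=4.0, ratio=1.2):
    """Leaf area (cm^2) from an RGB image that contains a red calibration square.

    A pixel counts as 'leaf' when green exceeds both red and blue by `ratio`, and as
    'calibration' when red similarly dominates; the known calibration area converts
    pixel counts to cm^2 (a simplified version of the Easy Leaf Area idea).
    """
    r, g, b = (img[..., i].astype(float) for i in range(3))
    leaf = (g > ratio * r) & (g > ratio * b)
    calib = (r > ratio * g) & (r > ratio * b)
    if calib.sum() == 0:
        raise ValueError("no red calibration area found in the image")
    return leaf.sum() / calib.sum() * calib_area_cm2

# Synthetic test image: grey background, a green 'rosette' disc and a 40 x 40 px red square.
img = np.full((300, 300, 3), 120, dtype=np.uint8)
yy, xx = np.mgrid[:300, :300]
img[(yy - 150) ** 2 + (xx - 110) ** 2 < 60**2] = (30, 160, 40)   # leaf disc, radius 60 px
img[20:60, 220:260] = (200, 30, 30)                              # calibration square = 4 cm^2
estimate = leaf_area_from_rgb(img)
geometric = np.pi * 60**2 / (40 * 40) * 4.0
print(f"estimated leaf area = {estimate:.1f} cm^2 (geometric value {geometric:.1f} cm^2)")
```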

  3. The calculation of dose rates from rectangular sources

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1998-01-01

    A common problem in radiation protection is the calculation of dose rates from extended sources and irregular shapes. Dose rates are proportional to the solid angle subtended by the source at the point of measurement. Simple methods of calculating solid angles would assist in estimating dose rates from large area sources and therefore improve predictive dose estimates when planning work near such sources. The estimation of dose rates is of particular interest to producers of radioactive ores, but other users of bulk radioactive materials may have a similar interest. The use of spherical trigonometry can assist in the determination of solid angles, and a simple equation is derived here for the determination of the dose at any distance from a rectangular surface. The solid angle subtended by complex shapes can be determined by modelling the area as a patchwork of rectangular areas and summing the solid angles from each rectangle. The dose rates from bags of thorium-bearing ores are of particular interest in Western Australia, and measured dose rates from bags and containers of monazite are compared with theoretical estimates based on calculations of solid angle. The agreement is fair but more detailed measurements would be needed to confirm the agreement with theory. (author)
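
    As a sketch of the solid-angle approach, the standard closed-form expression for the solid angle subtended by an a x b rectangle at a point a distance d above one of its corners is Omega = arctan(ab / (d * sqrt(a^2 + b^2 + d^2))); a point above an arbitrary location over the rectangle is handled by splitting the rectangle into four corner sub-rectangles, and the dose rate is then taken as proportional to the total solid angle. The proportionality constant below is a placeholder, not a value from the paper.

```python
import math


def solid_angle_corner(a, b, d):
    """Solid angle (sr) of an a x b rectangle seen from a point at height d
    directly above one of its corners (standard closed-form result)."""
    return math.atan(a * b / (d * math.sqrt(a * a + b * b + d * d)))


def solid_angle_rectangle(a, b, d, x, y):
    """Solid angle of an a x b rectangle (corners at (0,0) and (a,b)) seen
    from a point at (x, y, d) above its plane, with 0 <= x <= a, 0 <= y <= b.
    The rectangle is split into four corner sub-rectangles."""
    return (solid_angle_corner(x, y, d)
            + solid_angle_corner(a - x, y, d)
            + solid_angle_corner(x, b - y, d)
            + solid_angle_corner(a - x, b - y, d))


# Hypothetical usage: dose rate taken as proportional to solid angle.
omega = solid_angle_rectangle(a=1.0, b=0.8, d=0.5, x=0.5, y=0.4)  # metres
k_source = 2.0   # placeholder "dose rate per steradian" constant
print(f"solid angle = {omega:.3f} sr, estimated dose rate = {k_source * omega:.3f}")
```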

  4. RHEIN, Modular System for Reactor Design Calculation

    International Nuclear Information System (INIS)

    Reiche, Christian; Barz, Hansulrich; Kunzmann, Bernd; Seifert, Eberhard; Wand, Hartmut

    1990-01-01

    1 - Description of program or function: RHEIN is a modular reactor code system for neutron physics calculations. It consists of a small number of system codes for execution control, data management, and handling support, as well as of the physical calculation routines. The execution is controlled by input data containing mathematical and physical parameters and simple commands for routine calls and data manipulations. The calculation routines are in tune with one another and the system takes care of the data transfer between them. Cross-section libraries with self shielding parameters are added to the system. 2 - Method of solution: The calculation routines can be used for solving the following physics problems: - Calculation of cross-section sets for infinite mediums, taking into account chord length. - Zero-dimensional spectrum calculation in diffusion, P1, or B1 approximation. - One-dimensional calculation in diffusion, P1, or collision probability approximation. - Two-dimensional diffusion calculation. - Cell calculation by THERMOS. - Zone-wise homogenized group collapsing within zero, one, or two-dimensional models. - Normalization, summarizing, etc. - Output of cross-section sets to off systems Sn and Monte-Carlo calculations

  5. Area distribution of an elastic Brownian motion

    International Nuclear Information System (INIS)

    Rajabpour, M A

    2009-01-01

    We calculate the excursion and meander area distributions of the elastic Brownian motion by using the self-adjoint extension of the Hamiltonian of the free quantum particle on the half line. We also give some comments on the area of the Brownian motion bridge on the real line with the origin removed. We will focus on the power of self-adjoint extension to investigate different possible boundary conditions for the stochastic processes. We also discuss some possible physical applications.

  6. Development of calculation system for decontamination effect, CDE

    International Nuclear Information System (INIS)

    Satoh, Daiki; Kojima, Kensuke; Oizumi, Akito; Matsuda, Norihiro; Kugo, Teruhiko; Sakamoto, Yukio; Endo, Akira; Okajima, Shigeaki

    2012-08-01

    A large amount of radionuclides was discharged to the environment in the accident at the Tokyo Electric Power Company Fukushima Daiichi Nuclear Power Plant caused by the 2011 off the Pacific coast of Tohoku Earthquake. The radionuclides deposited on the ground elevate dose rates in a large area around the Fukushima site. For the reduction of the dose rate and the recovery of the environment, decontamination based on a rational plan is an important and urgent subject. A computer software package, named CDE (Calculation system for Decontamination Effect), has been developed to support planning of the decontamination. CDE calculates the dose rates before decontamination by using a database of dose contributions from radioactive cesium. The decontamination factor is utilized in the prediction of the dose rates after decontamination, and a dose rate reduction factor is evaluated to express the decontamination effect. The results are visualized on the image of a target zone with a color map. In this paper, the overview of the software and the dose calculation method are reported. A comparison with calculation results from the three-dimensional radiation transport code PHITS is also presented. In addition, the source code of the dose calculation program and the user's manual of CDE are attached as appendices. (author)
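
    A hedged sketch of the bookkeeping the abstract describes: a predicted post-decontamination dose rate is obtained by dividing each dose contribution by its decontamination factor (DF), and a dose rate reduction factor summarizes the effect. The contributions, DF values, and the reduction-factor convention used below are illustrative assumptions, not CDE data.

```python
# Hypothetical dose-rate contributions (uSv/h) at one evaluation point,
# split by surface, together with assumed decontamination factors (DF).
contributions = {            # uSv/h before decontamination
    "roofs": 0.8,
    "walls": 0.3,
    "soil":  1.9,
}
decontamination_factor = {   # assumed DF for each surface
    "roofs": 2.0,
    "walls": 1.5,
    "soil":  4.0,
}

before = sum(contributions.values())
after = sum(v / decontamination_factor[k] for k, v in contributions.items())
dose_rate_reduction_factor = before / after   # one common convention

print(f"dose rate before: {before:.2f} uSv/h")
print(f"dose rate after:  {after:.2f} uSv/h")
print(f"dose rate reduction factor: {dose_rate_reduction_factor:.2f}")
```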

  7. Primer for criticality calculations with DANTSYS

    International Nuclear Information System (INIS)

    Busch, R.D.

    1996-01-01

    With the closure of many experimental facilities, the nuclear criticality safety analyst is increasingly required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his or her facility. Typically, two types of codes are available: deterministic codes such as ANISN or DANTSYS that solve an approximate model exactly and Monte Carlo Codes such as KENO or MCNP that solve an exact model approximately. Often, the analyst feels that the deterministic codes are too simple and will not provide the necessary information, so most modeling uses Monte Carlo methods. This sometimes means that hours of effort are expended to produce results available in minutes from deterministic codes. A substantial amount of reliable information on nuclear systems can be obtained using deterministic methods if the user understands their limitations. To guide criticality specialists in this area, the Nuclear Criticality Safety Group at the University of New Mexico in cooperation with the Radiation Transport Group at Los Alamos National Laboratory has designed a primer to help the analyst understand and use the DANTSYS deterministic transport code for nuclear criticality safety analyses. (DANTSYS is the name of a suite of codes that users more commonly know as ONEDANT, TWODANT, TWOHEX, and THREEDANT.) It assumes a college education in a technical field, but there is no assumption of familiarity with neutronics codes in general or with DANTSYS in particular. The primer is designed to teach by example, with each example illustrating two or three DANTSYS features useful in criticality analyses

  8. Argosy 4 - A programme for lattice calculations

    International Nuclear Information System (INIS)

    MacDougall, J.D.

    1965-07-01

    This report contains a detailed description of the methods of calculation used in the Argosy 4 computer programme, and of the input requirements and printed results produced by the programme. An outline of the physics of the Argosy method is given. Section 2 describes the lattice calculation, including the burn up calculation, section 3 describes the control rod calculation and section 4 the reflector calculation. In these sections the detailed equations solved by the programme are given. In section 5 input requirements are given, and in section 6 the printed output obtained from an Argosy calculation is described. In section 7 are noted the principal differences between Argosy 4 and earlier versions of the Argosy programme

  9. Ensuring the validity of calculated subcritical limits

    International Nuclear Information System (INIS)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, ''Validation of Calculational Methods for Nuclear Criticality Safety.'' The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin

  10. Second reference calculation for the WIPP

    International Nuclear Information System (INIS)

    Branstetter, L.J.

    1985-03-01

    Results of the second reference calculation for the Waste Isolation Pilot Plant (WIPP) project using the dynamic relaxation finite element code SANCHO are presented. This reference calculation is intended to predict the response of a typical panel of excavated rooms designed for storage of nonheat-producing nuclear waste. Results are presented that include relevant deformations, relative clay seam displacements, and stress and strain profiles. This calculation is a particular solution obtained by a computer code, which has proven analytic capabilities when compared with other structural finite element codes. It is hoped that the results presented here will be useful in providing scoping values for defining experiments and for developing instrumentation. It is also hoped that the calculation will be useful as part of an exercise in developing a methodology for performing important design calculations by more than one analyst using more than one computer code, and for defining internal Quality Assurance (QA) procedures for such calculations. 27 refs., 15 figs

  11. Methodology of shielding calculation for nuclear reactors

    International Nuclear Information System (INIS)

    Maiorino, J.R.; Mendonca, A.G.; Otto, A.C.; Yamaguchi, Mitsuo

    1982-01-01

    A calculation methodology is described that couples a series of computer codes into a network, making it possible to calculate neutron and gamma radiation transport for the deep penetration problems typical of nuclear reactor shielding. The calculational chain begins with the generation of multigroup constants for neutrons and gamma rays by the AMPX system, coupled to the ENDF/B-IV data library, continues with the transport calculation of these radiations by the ANISN, DOT 3.5 and Morse computer codes, and ends with the calculation of absorbed and/or equivalent doses by the SPACETRAN code. As an example of the calculation method, results from benchmark No. 6 of Shielding Benchmark Problems - ORNL - RSIC - 25, namely Neutron and Secondary Gamma Ray fluence transmitted through a Slab of Borated Polyethylene, are presented. (Author) [pt

  12. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  13. Radiation damage calculations for compound materials

    International Nuclear Information System (INIS)

    Greenwood, L.R.

    1990-01-01

    This paper reports on the SPECOMP computer code, developed to calculate neutron-induced displacement damage cross sections for compound materials such as alloys, insulators, and ceramic tritium breeders for fusion reactors. These new calculations rely on recoil atom energy distributions previously computed with the DISCS computer code, the results of which are stored in SPECTER computer code master libraries. All reaction channels were considered in the DISCS calculations and the neutron cross sections were taken from ENDF/B-V. Compound damage calculations with SPECOMP thus do not need to perform any recoil atom calculations and consequently need no access to ENDF or other neutron data bases. The calculations proceed by determining secondary displacements for each combination of recoil atom and matrix atom using the Lindhard partition of the recoil energy to establish the damage energy

  14. Crack-opening area calculations for circumferential through-wall pipe cracks

    Energy Technology Data Exchange (ETDEWEB)

    Kishida, K.; Zahoor, A.

    1988-08-01

    This report describes the estimation schemes for crack opening displacement (COD) of a circumferential through-wall crack, then compares the COD predictions with pipe experimental data. Accurate predictions for COD are required to reliably predict the leak rate through a crack in leak-before-break applications.

  15. Crack-opening area calculations for circumferential through-wall pipe cracks

    International Nuclear Information System (INIS)

    Kishida, K.; Zahoor, A.

    1988-08-01

    This report describes the estimation schemes for crack opening displacement (COD) of a circumferential through-wall crack, then compares the COD predictions with pipe experimental data. Accurate predictions for COD are required to reliably predict the leak rate through a crack in leak-before-break applications

  16. Lack of Precision of Burn Surface Area Calculation by UK Armed Forces Medical Personnel

    Science.gov (United States)

    2014-03-01

    computer screen or tablet, and therefore the variability in perception and representation inherent in having a human assess and draw the burn remains... Potential solutions to this source of error include 3D MRI and TeraHertz scanning technologies [40], but at the time of writing, these are not yet

  17. Numerical calculation of the Fresnel transform.

    Science.gov (United States)

    Kelly, Damien P

    2014-04-01

    In this paper, we address the problem of calculating Fresnel diffraction integrals using a finite number of uniformly spaced samples. General and simple sampling rules of thumb are derived that allow the user to calculate the distribution for any propagation distance. It is shown how these rules can be extended to fast-Fourier-transform-based algorithms to increase calculation efficiency. A comparison with other theoretical approaches is made.
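
    A minimal FFT-based Fresnel propagation sketch in the spirit of the abstract, using the standard transfer-function form; the sampling rules derived in the paper (which decide when this form is valid) are not reproduced, and all parameter values below are illustrative.

```python
import numpy as np


def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a sampled complex field u0 (N x N, pixel pitch dx) over a
    distance z using the Fresnel transfer-function method.

    A standard textbook implementation; the paper's sampling criteria are
    not enforced here.
    """
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    # Fresnel transfer function (constant phase factor exp(ikz) omitted).
    h = np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(u0) * h)


# Illustrative usage: a 1 mm square aperture illuminated by a unit plane wave.
n, dx = 512, 10e-6                          # samples, 10 um pitch
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x)
aperture = ((np.abs(xx) < 0.5e-3) & (np.abs(yy) < 0.5e-3)).astype(complex)
field = fresnel_propagate(aperture, wavelength=633e-9, z=0.05, dx=dx)
intensity = np.abs(field)**2                # diffracted intensity pattern
```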

  18. Nuclear data preparation and discrete ordinates calculation

    International Nuclear Information System (INIS)

    Carmignani, B.

    1980-01-01

    These lectures deal with the use of the GAM-GATHER and GAM-THERMOS chains for the calculation of lattice cross sections, and with the use of the one-dimensional discrete ordinates ANISN code for the calculation of criticality and of the flux distribution in the cell and in the whole reactor. As an example, the codes are applied to the calculation of a PWR. Results of different approximations are compared. (author)

  19. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed
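
    The ICRP-26 bookkeeping implied above reduces to a weighted sum of organ dose equivalents, H_E = sum(w_T * H_T); a minimal sketch follows. The tissue weighting factors are the ICRP-26 values, while the organ dose equivalents are placeholders, not results from the report.

```python
# ICRP-26 tissue weighting factors (sum to 1.0).
w_T = {
    "gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12, "lung": 0.12,
    "thyroid": 0.03, "bone surfaces": 0.03, "remainder": 0.30,
}

# Hypothetical organ dose equivalents H_T in mSv (e.g. from fluence-to-dose
# conversion of calculated neutron spectra); not values from the report.
H_T = {
    "gonads": 0.40, "breast": 0.35, "red bone marrow": 0.30, "lung": 0.32,
    "thyroid": 0.28, "bone surfaces": 0.30, "remainder": 0.33,
}

effective_dose_equivalent = sum(w_T[t] * H_T[t] for t in w_T)  # mSv
print(f"H_E = {effective_dose_equivalent:.3f} mSv")
```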

  20. Feasibility study on embedded transport core calculations

    International Nuclear Information System (INIS)

    Ivanov, B.; Zikatanov, L.; Ivanov, K.

    2007-01-01

    The main objective of this study is to develop an advanced core calculation methodology based on embedded diffusion and transport calculations. The scheme proposed in this work is based on embedded diffusion or SP3 pin-by-pin local fuel assembly calculation within the framework of the Nodal Expansion Method (NEM) diffusion core calculation. The SP3 method has gained popularity in the last 10 years as an advanced method for neutronics calculation. NEM is a multi-group nodal diffusion code developed, maintained and continuously improved at the Pennsylvania State University. The developed calculation scheme is a non-linear iteration process, which involves cross-section homogenization, on-line discontinuity factors generation, and boundary conditions evaluation by the global solution passed to the local calculation. In order to accomplish the local calculation, a new code has been developed based on the Finite Elements Method (FEM), which is capable of performing both diffusion and SP3 calculations. The new code will be used in the framework of the NEM code in order to perform embedded pin-by-pin diffusion and SP3 calculations on a fuel assembly basis. The development of the diffusion and SP3 FEM code is presented first, followed by its application to several problems. A description of the proposed embedded scheme is provided next, as well as the preliminary results obtained for the C3 MOX benchmark. The results from the embedded calculations are compared with direct pin-by-pin whole core calculations in terms of accuracy and efficiency, followed by conclusions made about the feasibility of the proposed embedded approach. (authors)

  1. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  2. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  3. Large scale calculations for hadron spectroscopy

    International Nuclear Information System (INIS)

    Rebbi, C.

    1985-01-01

    The talk reviews some recent Monte Carlo calculations for Quantum Chromodynamics, performed on Euclidean lattices of rather large extent. The purpose of the calculations is to provide accurate determinations of quantities, such as interquark potentials or mass eigenvalues, which are relevant for hadronic spectroscopy. Results obtained in quenched QCD on 16³ x 32 lattices are illustrated, and a discussion of computational resources and techniques required for the calculations is presented. 18 refs., 3 figs., 2 tabs

  4. Calculation of saturated hydraulic conductivity of bentonite

    International Nuclear Information System (INIS)

    He Jun

    2006-01-01

    Hydraulic conductivity tests have some defects, such as weak repeatability and being time-consuming. Taking bentonite as a dual-porosity medium, a calculation formula for the distance, d2, between montmorillonite in intraparticle pores is deduced. An improved method for calculating hydraulic conductivity is obtained using d2 and Poiseuille's law. The method is validated through comparison with test results and with other methods. The method is very convenient for calculating the hydraulic conductivity of bentonite of a given montmorillonite content and void ratio. (authors)

  5. Temperature Calculations in the Coastal Modeling System

    Science.gov (United States)

    2017-04-01

    ...tide) and river discharge at model boundaries, wave radiation stress, and wind forcing over a model computational domain. Physical processes calculated... calculated in the CMS using the following meteorological parameters: solar radiation, cloud cover, air temperature, wind speed, and surface water temperature

  6. Calculation of the BREN house shielding experiments

    International Nuclear Information System (INIS)

    Woolson, William A.; Gritzner, Michael L.

    1987-01-01

    The BREN house transmission experiments provide an excellent set of measurements to validate the calculational procedures that will be used to derive house shielding estimates for the revised dosimetry of the survivors of the Hiroshima and Nagasaki A-bombs. The BREN experiments were performed in realistic full-scale models of Japanese residences. Although the radiation spectra and relative intensities of neutrons and gamma rays incident on the houses from the HPRR and the 60Co source are not appropriate for direct application to the A-bomb survivors, they cover the full energy range of importance. The codes and calculations required to compare with BREN experiments are the same as those needed for the A-bomb dosimetry. They consist of a two-dimensional discrete-ordinates calculation of the free field coupled to an adjoint Monte Carlo calculation in detailed house geometry. The agreement obtained between calculations and the experiments is excellent for neutrons and 60Co gamma rays. Every house transmission calculation spanning simple to complex configurations and detector locations for the 60Co and HPRR was within an acceptable margin of error. The gamma-ray TF calculations for the reactor source did not agree well with the experiments. Analysis of this discrepancy, however, strongly indicates that the problem probably does not reside in the calculational procedure but in the measurements themselves. In conclusion, it is believed that the excellent agreement of our calculations with the BREN experiments validates the calculational procedure which is planned to be applied to estimating the house shielding for survivors of the Hiroshima and Nagasaki A-bombs. Certainly, the calculations for Hiroshima and Nagasaki will involve modifications to the code used for the computations reported here, but to the extent that these modifications involve increased calculational complexity to treat more realistic materials and configurations, the benchmark established by these

  7. Comparison of RESRAD with hand calculations

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1995-09-01

    This report is a continuation of an earlier comparison done with two other computer programs, GENII and PATHRAE. The dose calculations by the two programs were compared with each other and with hand calculations. These hand calculations have now been compared with RESRAD Version 5.41 to examine the use of standard models and parameters in this computer program. The hand calculations disclosed a significant computational error in RESRAD. The Pu-241 ingestion doses are five orders of magnitude too small. In addition, the external doses from some nuclides differ greatly from expected values. Both of these deficiencies have been corrected in later versions of RESRAD

  8. Calculation of Critical Temperatures by Empirical Formulae

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-06-01

    Full Text Available The paper presents formulas used to calculate critical temperatures of structural steels. Equations that allow calculating temperatures Ac1, Ac3, Ms and Bs were elaborated based on the chemical composition of steel. To elaborate the equations the multiple regression method was used. Particular attention was paid to the collection of experimental data which was required to calculate regression coefficients, including preparation of data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and has been prepared based on information available in literature on the subject.
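
    The paper's own regression coefficients are not reproduced in the abstract, so the sketch below uses the classical Andrews-type composition formulas for Ac1, Ac3 and Ms as stand-ins for the same kind of empirical relation (Bs is omitted for the same reason); treat the coefficients and the example composition as illustrative only.

```python
import math


def critical_temperatures(C, Mn=0.0, Si=0.0, Ni=0.0, Cr=0.0, Mo=0.0,
                          V=0.0, W=0.0, As=0.0):
    """Estimate Ac1, Ac3 and Ms (deg C) from composition in wt.%.

    Classical Andrews-type empirical relations are used as a stand-in for
    the paper's own multiple-regression formulas, whose coefficients are
    not given in the abstract.
    """
    Ac1 = 723 - 10.7 * Mn - 16.9 * Ni + 29.1 * Si + 16.9 * Cr + 290 * As + 6.38 * W
    Ac3 = 910 - 203 * math.sqrt(C) - 15.2 * Ni + 44.7 * Si + 104 * V + 31.5 * Mo + 13.1 * W
    Ms = 539 - 423 * C - 30.4 * Mn - 17.7 * Ni - 12.1 * Cr - 7.5 * Mo
    return {"Ac1": Ac1, "Ac3": Ac3, "Ms": Ms}


# Illustrative composition of a low-alloy structural steel (wt.%).
print(critical_temperatures(C=0.40, Mn=0.70, Si=0.25, Cr=1.0, Mo=0.2))
```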

  9. Range calculations using multigroup transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1979-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of particle range distributions. These techniques are illustrated by analysis of Au-196 atoms recoiling from (n,2n) reactions with gold. The results of these calculations agree very well with range calculations performed with the atomistic code MARLOWE. Although some detail of the atomistic model is lost in the multigroup transport calculations, the improved computational speed should prove useful in the solution of fusion material design problems

  10. Pile Load Capacity – Calculation Methods

    Directory of Open Access Journals (Sweden)

    Wrana Bogumił

    2015-12-01

    Full Text Available The article is a review of the current problems of the foundation pile capacity calculations. The article considers the main principles of pile capacity calculations presented in Eurocode 7 and other methods with adequate explanations. Two main methods are presented: α – method used to calculate the short-term load capacity of piles in cohesive soils and β – method used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on cone CPTu result are presented as well as the pile capacity problem based on static tests.
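
    A hedged sketch of the two shaft-resistance approaches named above: the alpha-method scales undrained shear strength, the beta-method scales vertical effective stress, each multiplied by the shaft area. Layer data and coefficient values are invented for illustration, and no Eurocode 7 partial factors are applied.

```python
import math


def shaft_capacity_alpha(layers, perimeter):
    """alpha-method: Qs = sum(alpha_i * su_i * As_i) for cohesive soils (kN).
    layers: list of dicts with 'thickness' (m), 'su' (kPa), 'alpha' (-)."""
    return sum(l["alpha"] * l["su"] * perimeter * l["thickness"] for l in layers)


def shaft_capacity_beta(layers, perimeter):
    """beta-method: Qs = sum(beta_i * sigma_v_i * As_i) (kN), with sigma_v the
    average vertical effective stress in each layer (kPa)."""
    qs, sigma_top = 0.0, 0.0
    for l in layers:
        sigma_bot = sigma_top + l["gamma_eff"] * l["thickness"]   # kPa
        sigma_avg = 0.5 * (sigma_top + sigma_bot)
        qs += l["beta"] * sigma_avg * perimeter * l["thickness"]
        sigma_top = sigma_bot
    return qs


# Illustrative 0.6 m diameter pile in hypothetical soil profiles.
perimeter = math.pi * 0.6
clay = [{"thickness": 8.0, "su": 60.0, "alpha": 0.6}]
sand = [{"thickness": 8.0, "gamma_eff": 10.0, "beta": 0.35},
        {"thickness": 6.0, "gamma_eff": 11.0, "beta": 0.30}]
print(f"alpha-method shaft capacity: {shaft_capacity_alpha(clay, perimeter):.0f} kN")
print(f"beta-method shaft capacity:  {shaft_capacity_beta(sand, perimeter):.0f} kN")
```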

  11. Some calculator programs for particle physics

    International Nuclear Information System (INIS)

    Wohl, C.G.

    1982-01-01

    Seven calculator programs that do simple chores that arise in elementary particle physics are given. LEGENDRE evaluates the Legendre polynomial series Σ a_n P_n(x) at a series of values of x. ASSOCIATED LEGENDRE evaluates the first-associated Legendre polynomial series Σ b_n P_n^1(x) at a series of values of x. CONFIDENCE calculates confidence levels for χ², Gaussian, or Poisson probability distributions. TWO BODY calculates the c.m. energy, the initial- and final-state c.m. momenta, and the extreme values of t and u for a 2-body reaction. ELLIPSE calculates coordinates of points for drawing an ellipse plot showing the kinematics of a 2-body reaction or decay. DALITZ RECTANGULAR calculates coordinates of points on the boundary of a rectangular Dalitz plot. DALITZ TRIANGULAR calculates coordinates of points on the boundary of a triangular Dalitz plot. There are short versions of CONFIDENCE (EVEN N and POISSON) that calculate confidence levels for the even-degree-of-freedom χ² and the Poisson cases, and there is a short version of TWO BODY (CM) that calculates just the c.m. energy and initial-state momentum. The programs are written for the HP-97 calculator
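
    Two of the listed chores translate directly into a few lines of modern code; a hedged sketch follows (the coefficients, masses and beam energy in the example are arbitrary). numpy's legval evaluates Σ a_n P_n(x), and the two-body c.m. quantities follow from the invariant s = m1² + m2² + 2 E_lab m2 in natural units.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# LEGENDRE: evaluate the series sum_n a_n P_n(x) at several x values.
coeffs = [1.0, 0.5, 0.2]                 # arbitrary a_0, a_1, a_2
x = np.linspace(-1.0, 1.0, 5)
print(legval(x, coeffs))


# TWO BODY (CM): c.m. energy and initial-state c.m. momentum for a beam of
# lab kinetic energy T on a fixed target (masses and energies in GeV).
def two_body_cm(m_beam, m_target, T_lab):
    e_lab = T_lab + m_beam                                # total lab energy
    s = m_beam**2 + m_target**2 + 2.0 * e_lab * m_target  # invariant mass squared
    w = np.sqrt(s)                                        # c.m. energy
    p_cm = np.sqrt((s - (m_beam + m_target)**2) *
                   (s - (m_beam - m_target)**2)) / (2.0 * w)
    return w, p_cm


w, p_cm = two_body_cm(m_beam=0.13957, m_target=0.93827, T_lab=1.0)  # pi+ on p
print(f"c.m. energy = {w:.3f} GeV, c.m. momentum = {p_cm:.3f} GeV/c")
```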

  12. Direct calculation of wind turbine tip loss

    DEFF Research Database (Denmark)

    Wood, D.H.; Okulov, Valery; Bhattacharjee, D.

    2016-01-01

    The usual method to account for a finite number of blades in blade element calculations of wind turbine performance is through a tip loss factor. Most analyses use the tip loss approximation due to Prandtl which is easily and cheaply calculated but is known to be inaccurate at low tip speed ratio... We develop three methods for the direct calculation of the tip loss. The first is the computationally expensive calculation of the velocities induced by the helicoidal wake which requires the evaluation of infinite sums of products of Bessel functions. The second uses the asymptotic evaluation...

  13. Reaction rate calculations via transmission coefficients

    International Nuclear Information System (INIS)

    Feit, M.D.; Alder, B.J.

    1985-01-01

    The transmission coefficient of a wavepacket traversing a potential barrier can be determined by steady state calculations carried out in imaginary time instead of by real time dynamical calculations. The general argument is verified for the Eckart barrier potential by a comparison of transmission coefficients calculated from real and imaginary time solutions of the Schroedinger equation. The correspondence demonstrated here allows a formulation for the reaction rate that avoids difficulties due to both rare events and explicitly time dependent calculations. 5 refs., 2 figs

  14. Core physics calculation and analysis for SNRE

    International Nuclear Information System (INIS)

    Xie Jiachun; Zhao Shouzhi; Jia Baoshan

    2010-01-01

    Five different precise calculation models have been set up for Small Nuclear Rocket Engine (SNRE) core based on MCNP code, and then the effective multiplication constant, drum control worth and power distribution were calculated. The results from different models indicate that the model in which elements are homogeneous could be used in the reactivity calculation, but a detailed description of elements have to be used in the element internal power distribution calculation. The results of physics parameters show that the basic characteristics of SNRE are reasonable. The drum control worth is sufficient. The power distribution is symmetrical and reasonable. All of the parameters can satisfy the design requirement. (authors)

  15. Should Broca's area include Brodmann area 47?

    Science.gov (United States)

    Ardila, Alfredo; Bernal, Byron; Rosselli, Monica

    2017-02-01

    Understanding the brain organization of speech production has been a principal goal of neuroscience. Historically, speech production has been associated with the so-called Broca's area (Brodmann areas -BA- 44 and 45); however, modern neuroimaging developments suggest speech production is associated with networks rather than with areas. The purpose of this paper was to analyze the connectivity of BA47 (pars orbitalis) in relation to language. A meta-analysis was conducted to assess the language network in which BA47 is involved. The BrainMap database was used. Twenty papers corresponding to 29 experimental conditions with a total of 373 subjects were included. Our results suggest that BA47 participates in a "frontal language production system" (or extended Broca's system). The BA47 connectivity found is also concordant with a minor role in language semantics. BA47 plays a central role in the language production system.

  16. Dose calculation at distance of irradiation beams: case of women treated for the Hodgkin disease

    International Nuclear Information System (INIS)

    Poupon, E.; Alziar, I.; Vathaire, F. de; Diallo, I.; Bridier, A.; Bonniaud, G.; Lefkopoulos, D.; Ruaud, J.B.; Rousseau, V.; Kafrouni, H.

    2007-01-01

    The interest of precise calculation of radiation dose distributions in areas remote from the irradiation beams is to open new prospects in the knowledge of the contribution of radiotherapy to the occurrence of early and delayed iatrogenic effects. (N.C.)

  17. Summary of the meeting status of static reactor calculations in Nordic countries

    International Nuclear Information System (INIS)

    Lindahl, S.-Oe.

    1983-02-01

    Some impressions of the material presented at the meeting are given. The areas covered were as follows: in-core fuel management, cross section generation, burnable absorbers, nodal models, pin power calculations and benchmarking. (Author)

  18. Regulatory guides for qualifying the calculation methodology of Furnas by CNEN

    International Nuclear Information System (INIS)

    1987-10-01

    Regulatory guides are presented which will be used for qualifying the calculation methodology of FURNAS by CNEN, in the areas of Neutronics, Thermohydraulics, Accident Analysis and Fuel Rod Performance, as applied to Angra 1 NPP. (Author) [pt

  19. Infrastructure Area Simplification Plan

    CERN Document Server

    Field, L.

    2011-01-01

    The infrastructure area simplification plan was presented at the 3rd EMI All Hands Meeting in Padova. This plan only affects the information and accounting systems as the other areas are new in EMI and hence do not require simplification.

  20. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  1. Vermont Designated Natural Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — Under Natural Areas Law (10 Vermont Statutes Annotated, Chapter 83 § 2607) the FPR commissioner, with the approval of the governor, may designate and set aside areas...

  2. Hydrologic Areas of Concern

    Data.gov (United States)

    University of New Hampshire — A Hydrologic Area of Concern (HAC) is a land area surrounding a water source, which is intended to include the portion of the watershed in which land uses are likely...

  3. Three dimensions transport calculations for PWR core; Calcul de coeur R.E.P. en transport 3D

    Energy Technology Data Exchange (ETDEWEB)

    Richebois, E

    2000-07-01

    The objective of this work is to define improved 3-D core calculation methods based on the transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interface and boundary conditions and in regions of high heterogeneity (bundle with absorbent rods). In order to apply the transport theory a new method for calculating reflector constants has been developed, since traditional methods were only suited for 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis work, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's Saint Laurent B1 power reactor with MOX loading. The advantages of a 3-D core transport calculation scheme have been highlighted as opposed to diffusion methods; there are a considerable number of significant effects and potential advantages to be gained in rod worth calculations for instance. These preliminary results, obtained with one particular cycle, will have to be confirmed by more systematic analysis. Accidents like MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated and constitute challenging situations where anisotropy is high and/or flux gradients are steep. This method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)

  5. Development of the code for filter calculation

    International Nuclear Information System (INIS)

    Gritzay, O.O.; Vakulenko, M.M.

    2012-01-01

    This paper describes a calculation method, which commonly used in the Neutron Physics Department to develop a new neutron filter or to improve the existing neutron filter. This calculation is the first step of the traditional filter development procedure. It allows easy selection of the qualitative and quantitative contents of a composite filter in order to receive the filtered neutron beam with given parameters

  6. The Monte Carlo applied for calculation dose

    International Nuclear Information System (INIS)

    Peixoto, J.E.

    1988-01-01

    The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of the photon is traced by simulating successive interactions between the photon and the material that constitutes the human body simulator. The energy deposited per photon in each interaction with an organ or tissue of the simulator is also calculated. (C.G.C.) [pt
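
    A toy Monte Carlo sketch in the spirit of the abstract: photon histories are traced through a homogeneous slab phantom and the energy deposited per photon is tallied. The cross-section numbers and the crude "absorb or scatter isotropically" physics are illustrative simplifications, not the report's model.

```python
import math
import random


def mc_slab_dose(n_photons=100_000, e0=1.0, mu=0.07, absorb_frac=0.35,
                 thickness=30.0, seed=1):
    """Toy Monte Carlo estimate of mean energy (MeV) deposited per photon in a
    homogeneous slab of given thickness (cm).

    mu is a total attenuation coefficient (1/cm) and absorb_frac the assumed
    probability that an interaction is an absorption; both are illustrative,
    and scattering is treated as isotropic with half the energy deposited
    locally.
    """
    rng = random.Random(seed)
    deposited = 0.0
    for _ in range(n_photons):
        x, u, e = 0.0, 1.0, e0            # depth, direction cosine, energy
        while 0.0 <= x <= thickness and e > 0.01:
            x += -math.log(1.0 - rng.random()) / mu * u   # distance to next interaction
            if not 0.0 <= x <= thickness:
                break                                      # photon escaped the slab
            if rng.random() < absorb_frac:
                deposited += e                             # photoelectric-like absorption
                e = 0.0
            else:
                deposited += 0.5 * e                       # crude local deposit on scatter
                e *= 0.5
                u = 2.0 * rng.random() - 1.0               # new isotropic direction cosine
    return deposited / n_photons


print(f"mean energy deposited per photon: {mc_slab_dose():.3f} MeV")
```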

  7. Thermohydraulic calculations of PWR primary circuits

    International Nuclear Information System (INIS)

    Botelho, D.A.

    1984-01-01

    Some mathematical and numerical models from the Retran computer codes, aimed at simulating reactor transients, are presented. The equations used for calculating one-dimensional flow are integrated using mathematical methods from the Flash code, with a steam code to correlate the thermodynamic state variables. The resulting algorithm was used for calculating a PWR reactor. (E.G.) [pt

  8. Calculation of resonance integral for fuel cluster

    International Nuclear Information System (INIS)

    Remsak, S.

    1969-01-01

    The procedure for calculating the shielding correction, formulated in the previous paper [6], was broadened and applied to a cluster of cylindrical rods. The same analytical method as in the previous paper was applied. A combination of the Gauss method with the method of Almgren and Porn, used for solving the same type of integral, was applied to calculate the geometry functions. The CLUSTER code was written for the ZUSE-Z-23 computer to calculate the shielding corrections for pairs of fuel rods in the cluster. Computing time for one pair of fuel rods depends on the number of closely placed rods, and for two closely placed rods it is about 3 hours. Calculations were done for clusters containing 7 and 19 UO2 rods. Results show that calculated values of resonance integrals are somewhat higher than the values obtained by the Helstrand empirical formula. Taking into account the results for two rods from the previous paper, it can be noted that the calculated and empirical values for clusters with 2 and 7 rods are in agreement, since the deviations do not exceed the limits of experimental error (±2%). In the case of the larger cluster with 19 rods, deviations are higher than the experimental error. Most probably, the fact that the calculated values exceed the experimental ones results from the shielding correction being calculated in this paper only in the region up to 1 keV [sr

  9. Calculation and definition of safety indicators

    International Nuclear Information System (INIS)

    Cristian, I.; Branzeu, N.; Vidican, D.; Vladescu, G.

    1997-01-01

    Based on the Cernavoda safety indicators proposal, this paper presents the purpose, definition and calculation formulas for each of the selected safety indicators. Five categories of safety indicators for Cernavoda Unit 1 were identified, namely: overall plant safety performance; initiating events; safety system availability; physical barrier integrity; and indirect indicators. The definition, calculation and use of some safety indicators are shown in tabular form. (authors)

  10. Molecular mechanics calculations on cobalt phthalocyanine dimers

    NARCIS (Netherlands)

    Heuts, J.P.A.; Schipper, E.T.W.M.; Piet, P.; German, A.L.

    1995-01-01

    In order to obtain insight into the structure of cobalt phthalocyanine dimers, molecular mechanics calculations were performed on dimeric cobalt phthalocyanine species. Molecular mechanics calculations are first presented on monomeric cobalt(II) phthalocyanine. Using the Tripos force field for the

  11. Radiation protection calculations for diagnostic medical equipment

    International Nuclear Information System (INIS)

    Klueter, R.

    1992-01-01

    The standards DIN 6812 and DIN 6844 define the radiation protection requirements to be met by biomedical radiography equipment or systems for nuclear medicine. The paper explains the use of a specific computer program for radiation protection calculations. The program offers menu-controlled calculation, with free choice of the relevant nuclides. (DG) [de

  12. 40 CFR 1065.650 - Emission calculations.

    Science.gov (United States)

    2010-07-01

    ... field testing, you may calculate the ratio of total mass to total work, where these individual values... negative work rate values in the integration to calculate total work from that work path. Some work paths may result in a negative total work. Include negative total work values from any work path in the...
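
    A hedged sketch of the ratio described in the excerpt: brake-specific emissions as total emitted mass over total work, with negative total work from individual work paths still included in the sum. The excerpt abbreviates how 40 CFR 1065.650 words the details, so this is only an illustration with hypothetical numbers.

```python
def brake_specific_emission(mass_g, work_kwh_by_path):
    """Total-mass / total-work ratio (g/kWh).

    work_kwh_by_path: integrated work per work path; negative path totals are
    kept in the sum, as the excerpt indicates, rather than being clipped.
    """
    total_work = sum(work_kwh_by_path)      # may include negative path totals
    if total_work <= 0:
        raise ValueError("total work is not positive; the ratio is undefined here")
    return mass_g / total_work


# Hypothetical field-test totals: one work path did net-negative work.
print(brake_specific_emission(mass_g=123.0, work_kwh_by_path=[40.0, -2.5, 12.0]))
```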

  13. CO2 calculator

    DEFF Research Database (Denmark)

    Nielsen, Claus Werner; Nielsen, Ole-Kenneth

    2009-01-01

    Many countries are in the process of mapping their national CO2 emissions, but only few have managed to produce an overall report at municipal level yet. Denmark, however, has succeeded in such a project. Using a new national IT-based calculation model, municipalities can calculate the extent...

  14. Local expressions for one-loop calculations

    International Nuclear Information System (INIS)

    Wasson, D.A.; Koonin, S.E.

    1991-01-01

    We develop local expressions for the contributions of the short-wavelength vacuum modes to the one-loop vacuum energy. These expressions significantly improve the convergence properties of various ''brute-force'' calculational methods. They also provide a continuous series of approximations that interpolate between the brute-force calculations and the derivative expansion

  15. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU) accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated about 10 times. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that on the CPU.

  16. Calculated Atomic Volumes of the Actinide Metals

    DEFF Research Database (Denmark)

    Skriver, H.; Andersen, O. K.; Johansson, B.

    1979-01-01

    The equilibrium atomic volume is calculated for the actinide metals. It is possible to account for the localization of the 5f electrons taking place in americium.

  17. Reactor physics calculations on HTR type configurations

    Energy Technology Data Exchange (ETDEWEB)

    Klippel, H.T.; Hogenbirk, A.; Stad, R.C.L. van der; Janssen, A.J.; Kuijper, J.C.; Levin, P.

    1995-04-01

    In this paper a short description of the ECN nuclear analysis code system is given with respect to application in HTR reactor physics calculations. First results of calculations performed on the PROTEUS benchmark are shown. Also first results of a HTGR benchmark are given. (orig.).

  18. Reactor physics calculations on HTR type configurations

    International Nuclear Information System (INIS)

    Klippel, H.T.; Hogenbirk, A.; Stad, R.C.L. van der; Janssen, A.J.; Kuijper, J.C.; Levin, P.

    1995-04-01

    In this paper a short description of the ECN nuclear analysis code system is given with respect to application in HTR reactor physics calculations. First results of calculations performed on the PROTEUS benchmark are shown. Also first results of a HTGR benchmark are given. (orig.)

  19. Calculated LET-Spectrum of Antiprotons

    DEFF Research Database (Denmark)

    Bassler, Niels

    -LET components resulting from the annihilation. Though, the calculations of dose-averaged LET in the entry region may suggest that the RBE of antiprotons in the plateau region could significantly differ from unity. Materials and Methods Monte Carlo simulations using FLUKA were performed for calculating...

  20. First principles calculations for analysis martensitic transformations

    International Nuclear Information System (INIS)

    Harmon, B.N.; Zhao, G.L.; Ho, K.M.; Chan, C.T.; Ye, Y.Y.; Ding, Y.; Zhang, B.L.

    1993-01-01

    The change in crystal energy is calculated for atomic displacements corresponding to phonons, elastic shears, and lattice transformations. Anomalies in the phonon dispersion curves of NiAl and NiTi are analyzed and recent calculations for TiPd alloys are presented

  1. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed

  2. Calculation of neutron kerma in tissues

    International Nuclear Information System (INIS)

    Vega C, H.R.; Manzanares A, E.

    2004-01-01

    The neutron kerma of normal and tumor tissues has been calculated using the tissues' elemental concentrations. A program developed in Mathcad contains the kerma factors of C, H, O, N, Na, Mg, P, S, Cl, K, etc. that are present in normal and tumor human tissues. Given the elemental composition of any human tissue, the neutron kerma can be calculated. The program was tested using the elemental composition of tumor tissues such as sarcoma, melanoma, carcinoma and adenoid cystic; the neutron kerma for adipose and muscle tissue of a normal adult was also calculated. The results are in agreement with those published in the literature. The neutron kerma for water was also calculated because in some dosimetric calculations water is used to describe normal and tumor tissues. From this comparison it was found that at higher energies the kerma factors are approximately the same, but at energies less than 100 eV the differences are large. (Author)
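
    A sketch of the weighted-sum idea: the tissue kerma factor at a given neutron energy is the mass-fraction-weighted sum of elemental kerma factors, k_tissue(E) = Σ w_i k_i(E), and kerma follows by multiplying with the fluence. The numbers below are placeholders standing in for tabulated elemental kerma factors and tissue compositions.

```python
def tissue_kerma_factor(mass_fractions, elemental_kerma):
    """Mass-fraction-weighted elemental kerma factors at one neutron energy.

    mass_fractions: {element: mass fraction} for the tissue (should sum to ~1).
    elemental_kerma: {element: kerma factor} in e.g. Gy cm^2 at that energy.
    Both dictionaries in the example are placeholders, not tabulated data.
    """
    return sum(w * elemental_kerma.get(el, 0.0) for el, w in mass_fractions.items())


# Hypothetical soft-tissue-like composition and kerma factors at one energy.
composition = {"H": 0.10, "C": 0.20, "N": 0.03, "O": 0.66, "other": 0.01}
kerma_factors = {"H": 4.0e-11, "C": 2.0e-12, "N": 3.0e-12, "O": 1.5e-12}  # Gy cm^2

k_tissue = tissue_kerma_factor(composition, kerma_factors)
fluence = 1.0e7                      # n/cm^2, illustrative
print(f"kerma = {k_tissue * fluence:.3e} Gy")
```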

  3. Dose calculation system for remotely supporting radiotherapy

    International Nuclear Information System (INIS)

    Saito, K.; Kunieda, E.; Narita, Y.; Kimura, H.; Hirai, M.; Deloar, H. M.; Kaneko, K.; Ozaki, M.; Fujisaki, T.; Myojoyama, A.; Saitoh, H.

    2005-01-01

    The dose calculation system IMAGINE is being developed with the remote support of external photon-beam radiation therapy in mind. The system is expected to provide an accurate picture of the dose distribution in a patient body, using a Monte Carlo calculation that employs precise models of the patient body and irradiation head. The dose calculation will be performed utilising super-parallel computing at the dose calculation centre, which is equipped with the ITBL computer, and the calculated results will be transferred through a network. The system is intended to support the quality assurance of current, widely carried out radiotherapy and, further, to promote the prevalence of advanced radiotherapy. Prototypes of the modules constituting the system have already been constructed and used to obtain basic data that are necessary in order to decide on the concrete design of the system. The final system will be completed in 2007. (authors)

  4. Non-perturbative background field calculations

    International Nuclear Information System (INIS)

    Stephens, C.R.; Department of Physics, University of Utah, Salt Lake City, Utah 84112)

    1988-01-01

    New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation: perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation. copyright 1988 Academic Press, Inc

  5. The FLUKA atmospheric neutrino flux calculation

    CERN Document Server

    Battistoni, G.; Montaruli, T.; Sala, P.R.

    2003-01-01

    The 3-dimensional (3-D) calculation of the atmospheric neutrino flux by means of the FLUKA Monte Carlo model is here described in all details, starting from the latest data on primary cosmic ray spectra. The importance of a 3-D calculation and of its consequences have been already debated in a previous paper. Here instead the focus is on the absolute flux. We stress the relevant aspects of the hadronic interaction model of FLUKA in the atmospheric neutrino flux calculation. This model is constructed and maintained so to provide a high degree of accuracy in the description of particle production. The accuracy achieved in the comparison with data from accelerators and cross checked with data on particle production in atmosphere certifies the reliability of shower calculation in atmosphere. The results presented here can be already used for analysis by current experiments on atmospheric neutrinos. However they represent an intermediate step towards a final release, since this calculation does not yet include the...

  6. Text book of dose calculation for operators

    International Nuclear Information System (INIS)

    Aoyagi, Haruki; Gonda, Kozo

    1979-07-01

    This is a text book of dose calculation for the operators of the reprocessing factory of Power Reactor and Nuclear Fuel Development Corporation. The radiations considered are beta-ray and gamma-ray. The method used is a point attenuation nuclear integral method. Radiation sources are considered as the assemblies of point sources. Dose from each point source is calculated, then, total dose is obtained by the integration for all sources. Attenuation is calculated by considering the attenuation owing to distance and the absorption by absorbers. The build-up factor is introduced for the correction for scattered gamma-ray. The build-up factor is given in a table for various scatterers. The operators are able to calculate dose by themselves. The results of integral calculation expressed with formulas are given in graphs. (Kato, T.)
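
    A minimal point-kernel sketch matching the description: each point source contributes a flux S·B(μr)·exp(−μr)/(4πr²), summed over all source points and converted to dose rate. The Berger-type buildup form and all numerical constants are illustrative assumptions, not the textbook's tabulated data or graphs.

```python
import math


def point_kernel_dose_rate(sources, detector, mu, kappa,
                           buildup=lambda mur: 1.0 + 0.8 * mur * math.exp(0.02 * mur)):
    """Sum point-kernel contributions at 'detector' from a list of point sources.

    sources:  list of (x, y, z, S) with S the photon emission rate (1/s).
    mu:       linear attenuation coefficient of the intervening medium (1/cm).
    kappa:    flux-to-dose-rate conversion factor (dose rate per unit flux).
    buildup:  buildup factor as a function of mu*r; a Berger-type form with
              illustrative coefficients is used as the default.
    """
    dose_rate = 0.0
    for (x, y, z, s) in sources:
        r = math.dist((x, y, z), detector)                 # cm
        mur = mu * r
        flux = s * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r * r)
        dose_rate += kappa * flux
    return dose_rate


# Illustrative: a line source approximated by three point sources (cm, 1/s).
sources = [(0.0, 0.0, z, 1.0e9) for z in (-10.0, 0.0, 10.0)]
print(point_kernel_dose_rate(sources, detector=(50.0, 0.0, 0.0),
                             mu=0.06, kappa=5.0e-10))
```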

  7. Reexamining the Dissolution of Spent Fuel: A Comparison of Different Methods for Calculating Rates

    International Nuclear Information System (INIS)

    Hanson, Brady D.; Stout, Ray B.

    2004-01-01

    Dissolution rates for spent fuel have typically been reported in terms of a rate normalized to the surface area of the specimen. Recent evidence has shown that neither the geometric surface area nor that measured with BET accurately predicts the effective surface area of spent fuel. Dissolution rates calculated from results obtained by flowthrough tests were reexamined comparing the cumulative releases and surface area normalized rates. While initial surface area is important for comparison of different rates, it appears that normalizing to the surface area introduces unnecessary uncertainty compared to using cumulative or fractional release rates. Discrepancies in past data analyses are mitigated using this alternative method

  8. Recharge and discharge calculations to characterize the groundwater hydrologic balance

    International Nuclear Information System (INIS)

    Liddle, R.G.

    1998-01-01

    Several methods are presented to quantify the ground water component of the hydrologic balance; including (1) hydrograph separation techniques, (2) water budget calculations, (3) spoil discharge techniques, and (4) underground mine inflow studies. Stream hydrograph analysis was used to calculate natural groundwater recharge and discharge rates. Yearly continuous discharge hydrographs were obtained for 16 watersheds in the Cumberland Plateau area of Tennessee. Baseflow was separated from storm runoff using computerized hydrograph analysis techniques developed by the USGS. The programs RECESS, RORA, and PART were used to develop master recession curves, calculate ground water recharge, and ground water discharge respectively. Station records ranged from 1 year of data to 60 years of data with areas of 0.67 to 402 square miles. Calculated recharge ranged from 7 to 28 inches of precipitation while ground water discharge ranged from 6 to 25 inches. Baseflow ranged from 36 to 69% of total flow. For sites with more than 4 years of data the median recharge was 20 inches/year and the 95% confidence interval for the median was 16.4 to 23.8 inches of recharge. Water budget calculations were also developed independently by a mining company in southern Tennessee. Results showed about 19 inches of recharge is available on a yearly basis. A third method used spoil water discharge measurements to calculate average recharge rate to the mine. Results showed 21.5 inches of recharge for this relatively flat area strip mine. In a further analysis it was shown that premining soil recharge rates of 19 inches consisted of about 17 inches of interflow and 2 inches of deep aquifer recharge while postmining recharge to the spoils had almost no interflow component. OSM also evaluated underground mine inflow data from northeast Tennessee and southeast Kentucky. This empirical data showed from 0.38 to 1.26 gallons per minute discharge per unit acreage of underground workings. This is the
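
    The USGS programs named above (RECESS, RORA, PART) are not reproduced here; as a hedged illustration of the general idea of hydrograph separation, the sketch below applies a single forward pass of the Lyne-Hollick digital filter to split daily streamflow into quickflow and baseflow. The filter parameter and the synthetic flow series are illustrative assumptions.

```python
def lyne_hollick_baseflow(flow, alpha=0.925):
    """One forward pass of the Lyne-Hollick digital filter.

    flow:  sequence of streamflow values (e.g. daily mean discharge).
    alpha: filter parameter; 0.925 is a commonly quoted default, used here
           purely for illustration. Returns the baseflow series.
    """
    quick = 0.0
    baseflow = []
    prev = flow[0]
    for q in flow:
        quick = max(0.0, alpha * quick + 0.5 * (1.0 + alpha) * (q - prev))
        quick = min(quick, q)                    # keep baseflow non-negative
        baseflow.append(q - quick)
        prev = q
    return baseflow


# Synthetic hydrograph: a storm peak on a slowly receding baseflow.
flow = [5, 5, 6, 20, 45, 30, 18, 12, 9, 7, 6, 5.5, 5.2, 5.0]
base = lyne_hollick_baseflow(flow)
bfi = sum(base) / sum(flow)                      # baseflow index
print(f"baseflow index = {bfi:.2f}")
```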

  9. Calculation of the actual cost in the chemical fertilizer industry

    Directory of Open Access Journals (Sweden)

    Ion Ionescu

    2017-12-01

    Full Text Available The main goal of the research is to present a way of organising the managerial accounting of finished and semi-finished products obtained in chemical fertilizer industry entities. For this study, we analyzed the current approach to managerial accounting at an entity in the studied area, in order to emphasize the need to organize and implement modern accounting management to control costs and increase the performance of the entities in this area, starting from the premise that there are sufficient similarities between entities in the field. The research has highlighted the fact that, nowadays, cost calculation is organized using traditional methods, which focus on the monthly determination of the actual unit cost per product (semi-finished product), and that it is necessary to organize and implement managerial accounting based on the use of a modern method, namely the standard cost method combined with the cost centre method. The major implications of the proposed system for the researched field are the monthly calculation of actual costs per cost centre, with the calculation of the actual cost per product, as the final cost carrier, to be performed over longer periods of time, usually quarterly.

  10. HP-67 calculator programs for thermodynamic data and phase diagram calculations

    International Nuclear Information System (INIS)

    Brewer, L.

    1978-01-01

    This report is a supplement to a tabulation of the thermodynamic and phase data for the 100 binary systems of Mo with the elements from H to Lr. The calculations of thermodynamic data and phase equilibria were carried out from 5000 K down to low temperatures. This report presents the methods of calculation used. The thermodynamics involved is rather straightforward and the reader is referred to any advanced thermodynamics text. The calculations were largely carried out using an HP-65 programmable calculator. In this report, those programs are reformulated for use with the HP-67 calculator, resulting in a great reduction in the number of programs required to carry out the calculations.
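
    As a hedged illustration of the kind of arithmetic such calculator programs automate, the sketch below compares the Gibbs energies of two competing phases over a temperature range and reports which is stable. The enthalpy and entropy values are hypothetical placeholders, not data from the Mo binary-system tabulation.

        # Compare Gibbs energies G = H - T*S of two hypothetical phases and report
        # the stable one at each temperature. Values are illustrative only.

        def gibbs(delta_h, delta_s, temperature):
            """G = H - T*S, with H in J/mol and S in J/(mol*K)."""
            return delta_h - temperature * delta_s

        phase_a = {"dH": -50_000.0, "dS": -10.0}   # hypothetical low-T phase
        phase_b = {"dH": -30_000.0, "dS":  15.0}   # hypothetical high-T phase

        for T in (300, 1000, 2000, 4000):
            ga = gibbs(phase_a["dH"], phase_a["dS"], T)
            gb = gibbs(phase_b["dH"], phase_b["dS"], T)
            stable = "A" if ga < gb else "B"
            print(f"T = {T:5d} K: G_A = {ga:10.0f}, G_B = {gb:10.0f} -> phase {stable}")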

  11. A finite element calculation of flux pumping

    Science.gov (United States)

    Campbell, A. M.

    2017-12-01

    A flux pump is not only a fascinating example of the power of Faraday’s concept of flux lines, but also an attractive way of powering superconducting magnets without large electronic power supplies. However, it is not possible to do this in HTS by driving a part of the superconductor normal; it must be done by exceeding the local critical current density. The picture of a magnet pulling flux lines through the material is attractive, but as there is no direct contact between flux lines in the magnet and vortices unless the gap between them is comparable to the coherence length, the process must be explicable in terms of classical electromagnetism and a nonlinear V-I characteristic. In this paper a simple 2D model of a flux pump is used to determine the pumping behaviour from first principles and the geometry. It is analysed with finite element software using the A formulation and FlexPDE. A thin magnet is passed across one or more superconductors connected to a load, which is a large rectangular loop. This means that the self and mutual inductances can be calculated explicitly. A wide strip, a narrow strip and two conductors are considered, as well as an analytic circuit model. In all cases the critical state model is used, so the flux flow resistivity and dynamic resistivity are not directly involved, although an effective resistivity appears when Jc is exceeded. In most of the cases considered there is a large gap between the theory and the experiments. In particular, the maximum flux transferred to the load area is always less than the flux of the magnet. Also, once the threshold needed for pumping is exceeded, the flux in the load saturates within a few cycles. However, the analytic circuit model allows a simple modification to account for the large reduction in Ic when the magnet is over a conductor. This not only changes the direction of the pumped flux but leads to much more effective pumping.
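
    The toy lumped-circuit iteration below reproduces, qualitatively, the two saturation features noted in the abstract: the load flux approaches a limit below the magnet flux, and it does so within a few cycles. It is not the paper's FlexPDE/A-formulation model or its analytic circuit model; the coupling fraction, inductance and critical current are hypothetical values chosen only to show the behaviour.

        # Toy flux-pump iteration: each magnet pass couples part of the remaining
        # flux difference into the load loop; pumping stops when the bridge
        # critical current is reached. Qualitative sketch only.

        phi_magnet = 1.0e-3      # flux of the travelling magnet (Wb)
        L_load = 5.0e-6          # load loop inductance (H)
        coupling = 0.6           # fraction of the flux difference linked per pass
        I_c = 180.0              # bridge critical current (A); caps further pumping

        I_load = 0.0
        for cycle in range(1, 11):
            # Net driving flux this cycle is reduced by the flux already in the load.
            delta_phi = coupling * (phi_magnet - L_load * I_load)
            if delta_phi <= 0 or I_load >= I_c:
                break
            I_load += delta_phi / L_load
            print(f"cycle {cycle}: I_load = {I_load:7.1f} A, "
                  f"flux = {L_load * I_load * 1e3:.3f} mWb")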

  12. Nuclear calculation of the thorium reactor

    International Nuclear Information System (INIS)

    Hirakawa, Naohiro

    1998-01-01

    Even for a reactor using thorium (and 233-U), the nuclear design calculation procedure is similar to that for conventional fuel containing 235-U, 238-U and plutonium. Because the nuclide composition changes with time during reactor operation, the mean cross sections must be calculated in detail. For this, one-group cross sections obtained by integration over the whole energy range are used for a small number of groups. Since the underlying nuclear data are already available in JENDL3.2 and the data libraries derived from it, the nuclear calculation of a reactor using thorium presents no particular problem. From this viewpoint, the IAEA has organized a coordinated research program, 'Potential of Th-based Fuel Cycles to Constrain Pu and to reduce Long-term Waste Toxicities', since 1996. Each participating nation selects the thorium-based fuel cycle it considers most promising, and the results are compared to examine which cycle is to be preferred. To ensure a neutral comparison, benchmark calculations for a PWR were carried out to guard against differences in the results arising from the calculation methods and cross sections adopted by each nation. The program proceeded with the participation of China, Germany, India, Israel, Japan, Korea, Russia and the USA. The SWAT system developed by Tohoku University was used as the calculation code, and results for the first- and second-stage benchmark calculations and for the nuclear reactor were reported. (G.K.)
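
    The one-group collapse mentioned above is simply a flux-weighted average of the multigroup cross section. The sketch below shows that step; the group structure, fluxes and cross sections are hypothetical placeholders, not JENDL3.2 data.

        # Flux-weighted collapse of a multigroup cross section to one group:
        # sigma_1g = sum(sigma_g * phi_g) / sum(phi_g). Hypothetical values.

        flux  = [1.0e14, 3.0e14, 8.0e13, 2.0e13]   # group fluxes (n/cm^2/s)
        sigma = [1.5,    8.0,    45.0,   300.0]    # group cross sections (barn)

        one_group = sum(s * p for s, p in zip(sigma, flux)) / sum(flux)
        print(f"one-group cross section = {one_group:.1f} barn")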

  13. Effective connectivity reveals strategy differences in an expert calculator.

    Directory of Open Access Journals (Sweden)

    Ludovico Minati

    Mathematical reasoning is a core component of cognition, and the study of experts defines the upper limits of human cognitive abilities, which is why we are fascinated by peak performers such as chess masters and mental calculators. Here, we investigated the neural bases of calendrical skills, i.e. the ability to rapidly identify the weekday of a particular date, in a gifted mental calculator who does not fall in the autistic spectrum, using functional MRI. Graph-based mapping of effective connectivity, but not univariate analysis, revealed distinct anatomical locations of "cortical hubs" supporting the processing of well-practiced close dates and less-practiced remote dates: the former engaged predominantly occipital and medial temporal areas, whereas the latter were associated mainly with prefrontal, orbitofrontal and anterior cingulate connectivity. These results point to the effect of extensive practice on the development of expertise and long-term working memory, and demonstrate the role of frontal networks in supporting performance on less practiced calculations, which incur additional processing demands. Through the example of calendrical skills, our results demonstrate that the ability to perform complex calculations is initially supported by extensive attentional and strategic resources, which, as expertise develops, are gradually replaced by access to long-term working memory for familiar material.
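
    For readers unfamiliar with the task, the date-to-weekday mapping studied here can be made concrete with one explicit algorithm, Zeller's congruence. The sketch below is only an illustration of the arithmetic involved; it says nothing about the mental strategy used by the subject of the study.

        # Zeller's congruence: an explicit date-to-weekday algorithm, shown only
        # to make the calendrical task concrete.

        def weekday(year, month, day):
            """Return the weekday name for a Gregorian calendar date."""
            if month < 3:              # January and February count as months
                month += 12            # 13 and 14 of the previous year
                year -= 1
            k, j = year % 100, year // 100
            h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
            return ["Saturday", "Sunday", "Monday", "Tuesday",
                    "Wednesday", "Thursday", "Friday"][h]

        print(weekday(2017, 12, 1))   # -> Friday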

  14. Calculations with ANSYS/FLOTRAN to a core catcher benchmark

    International Nuclear Information System (INIS)

    Willschuetz, H.G.

    1999-01-01

    There are numerous experiments exploring corium spreading behaviour, but comparable data have not been available up to now on the long-term behaviour of corium spread out in a core catcher. For the calculations, a pure liquid oxidic melt with a homogeneous internal heat source was assumed. The melt was distributed uniformly over the spreading area of the EPR core catcher. All codes applied the well-known k-ε turbulence model to simulate the turbulent flow regime of this melt configuration. While the FVM-code calculations were performed with three-dimensional models using a simple symmetry, the problem was modelled two-dimensionally with ANSYS due to limited CPU performance. In addition, the 2D results of ANSYS should allow a comparison for the planned second stage of the calculations, in which the behaviour of a segregated metal-oxide melt is to be examined. However, first estimates and pre-calculations showed that a 3D simulation of the problem is not possible with any of the codes due to insufficient computer performance. (orig.)

  15. Electron and bremsstrahlung penetration and dose calculation

    Science.gov (United States)

    Watts, J. W., Jr.; Burrell, M. O.

    1972-01-01

    Various techniques for the calculation of electron and bremsstrahlung dose deposition are described. Energy deposition, transmission, and reflection coefficients for electrons incident on plane slabs are presented, and methods for their use in electron dose calculations were developed. A method using the straight-ahead approximation was also developed, and the various methods were compared and found to be in good agreement. Both accurate and approximate methods of calculating bremsstrahlung dose were derived and compared. The approximation is found to give a good estimate of dose where the electron spectrum falls off exponentially with energy.
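
    The sketch below folds an exponential electron spectrum with a simple slab transmission model, in the spirit of the straight-ahead treatment discussed above. The transmission model, range relation and all constants are assumptions made for illustration; the report's actual coefficients are tabulated and are not reproduced by this formula.

        # Fold an exponential electron spectrum with an assumed slab transmission
        # curve (straight-ahead spirit). All constants are illustrative only.

        import math

        E0 = 0.5          # e-folding energy of the spectrum (MeV)
        thickness = 0.3   # slab thickness (g/cm^2)

        def spectrum(E):                 # relative differential electron flux
            return math.exp(-E / E0)

        def transmission(E, t):          # assumed transmission through thickness t
            practical_range = 0.526 * E - 0.094    # rough range relation (g/cm^2)
            if practical_range <= t:
                return 0.0
            return 1.0 - t / practical_range

        # Riemann-sum integration of transmitted vs. incident spectrum, 0.2-10 MeV.
        energies = [0.2 + i * 0.01 for i in range(981)]
        incident = sum(spectrum(E) for E in energies)
        transmitted = sum(spectrum(E) * transmission(E, thickness) for E in energies)
        print(f"transmitted fraction ~ {transmitted / incident:.2f}")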

  16. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
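
    A minimal sketch of how point values can drive source biasing: sample source cells in proportion to (source strength x importance) and carry a statistical weight so the estimate stays unbiased. This is illustrative only, not the MORSE-SGC/S implementation, and all numbers are hypothetical.

        # Importance-based source biasing with weight correction (illustrative).

        import random

        source_strength = [0.50, 0.30, 0.15, 0.05]    # physical source per cell
        point_value     = [0.02, 0.10, 0.60, 2.00]    # expected detector response

        biased = [s * v for s, v in zip(source_strength, point_value)]
        norm = sum(biased)
        biased = [b / norm for b in biased]

        def sample_source():
            """Pick a source cell from the biased distribution; return (cell, weight)."""
            r, acc = random.random(), 0.0
            for cell, p in enumerate(biased):
                acc += p
                if r <= acc:
                    # Weight corrects for sampling from the biased distribution.
                    return cell, source_strength[cell] / p
            return len(biased) - 1, source_strength[-1] / biased[-1]

        random.seed(0)
        for cell, weight in (sample_source() for _ in range(5)):
            print(f"cell {cell}, statistical weight {weight:.3f}")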

  17. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
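
    The void reactivity effect is typically quoted as the reactivity difference between the voided and reference states computed from k_eff. The sketch below shows that bookkeeping with hypothetical k_eff values; it does not reproduce any benchmark result.

        # Reactivity difference between voided and reference states from k_eff.
        # The k_eff values are hypothetical placeholders, not benchmark results.

        k_ref, k_void = 1.00000, 1.00650

        rho_ref  = (k_ref  - 1.0) / k_ref
        rho_void = (k_void - 1.0) / k_void
        delta_rho_pcm = (rho_void - rho_ref) * 1.0e5
        print(f"sodium void reactivity ~ {delta_rho_pcm:.0f} pcm")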

  18. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and there is still much work to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and suggestions for critical search are presented in this paper. For non-fuel burnup, a mixed burnup mode improves both the accuracy and the efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
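
    The abstract does not describe the ABN method in detail, so the sketch below only illustrates the generic idea of a critical search: adjust the control rod position until k_eff = 1, here by secant iteration on a mock analytic k_eff(z) standing in for a Monte Carlo criticality calculation.

        # Generic critical search on rod position by secant iteration (not ABN/ABA).

        def k_eff(z):
            """Mock k_eff vs. rod insertion z in cm (0 = fully withdrawn)."""
            return 1.045 - 0.0006 * z

        z0, z1 = 0.0, 100.0
        k0, k1 = k_eff(z0), k_eff(z1)
        for iteration in range(20):
            if abs(k1 - 1.0) < 1.0e-5:
                break
            z0, z1 = z1, z1 + (1.0 - k1) * (z1 - z0) / (k1 - k0)
            k0, k1 = k1, k_eff(z1)
            print(f"iter {iteration}: z = {z1:7.2f} cm, k_eff = {k1:.5f}")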

  19. RA-0 reactor. New neutronic calculations

    International Nuclear Information System (INIS)

    Rumis, D.; Leszczynski, F.

    1990-01-01

    An update of the neutronic calculations performed for the RA-0 reactor, located at the Natural, Physical and Exact Sciences Faculty of Cordoba National University, is herein described. The techniques used for the calculation of a reactor like the RA-0 allow detailed prediction of the flux behaviour in the core interior and in the reflector, which will be helpful for the design of experiments. In particular, the use of the WIMSD4 code for calculations on this reactor represents a novelty among the possible applications of this code to the problems that arise in practice. (Author) [es

  20. Calculating lattice thermal conductivity: a synopsis

    Science.gov (United States)

    Fugallo, Giorgia; Colombo, Luciano

    2018-04-01

    We provide a tutorial introduction to the modern theoretical and computational schemes available to calculate the lattice thermal conductivity in a crystalline dielectric material. While some important topics in thermal transport will not be covered (including thermal boundary resistance, electronic thermal conduction, and thermal rectification), we aim at: (i) framing the calculation of thermal conductivity within the general non-equilibrium thermodynamics theory of transport coefficients, (ii) presenting the microscopic theory of thermal conduction based on the phonon picture and the Boltzmann transport equation, and (iii) outlining the molecular dynamics schemes to calculate heat transport. A comparative and critical addressing of the merits and drawbacks of each approach will be discussed as well.
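
    In the single-mode relaxation-time approximation, the BTE-based approaches outlined above reduce to a sum over phonon modes of C v^2 tau / 3 per unit volume. The sketch below evaluates that sum for three hypothetical modes standing in for a full Brillouin-zone summation; the values are illustrative, not material data.

        # kappa = (1/3) * sum over modes of C * v^2 * tau (per unit volume),
        # the single-mode relaxation-time result. Mode data are hypothetical.

        modes = [
            # heat capacity per volume (J/m^3/K), group velocity (m/s), lifetime (s)
            {"C": 4.0e5, "v": 6000.0, "tau": 2.0e-11},
            {"C": 3.0e5, "v": 4000.0, "tau": 1.0e-11},
            {"C": 1.5e5, "v": 1500.0, "tau": 5.0e-12},
        ]

        kappa = sum(m["C"] * m["v"] ** 2 * m["tau"] / 3.0 for m in modes)
        print(f"lattice thermal conductivity ~ {kappa:.0f} W/(m K)")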