WorldWideScience

Sample records for experimentally validated monte

  1. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

    The problem of validating estimates of absorbed dose due to photon energy deposition is examined, together with the computational approaches used to estimate that energy deposition. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained.

  2. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    …and identify the likely size of these effects on De distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo...

  3. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    International Nuclear Information System (INIS)

    Testa, M.; Schümann, J.; Lu, H.-M.; Paganetti, H.; Shin, J.; Faddegon, B.; Perl, J.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program at our institution as well as with experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the

  4. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons

    International Nuclear Information System (INIS)

    Peterson, S W; Polf, J; Archambault, L; Beddar, S; Bues, M; Ciangaru, G; Smith, A

    2009-01-01

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. The 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis, and magnetic steering of a proton beam during beam scanning proton therapy.
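
    The distal fall-off and FWHM comparisons quoted above reduce to simple operations on one-dimensional measured and simulated curves. The following Python sketch shows how such metrics might be extracted by linear interpolation; the curve shapes, array names and numbers are illustrative assumptions, not data from this record.

      import numpy as np

      def distal_falloff_depth(depth_cm, dose, level=0.8):
          """Depth beyond the peak at which the dose falls to `level` of its maximum."""
          dose = np.asarray(dose, dtype=float) / np.max(dose)
          depth_cm = np.asarray(depth_cm, dtype=float)
          i_peak = int(np.argmax(dose))
          distal_depth, distal_dose = depth_cm[i_peak:], dose[i_peak:]
          # the distal edge decreases with depth, so reverse both arrays for np.interp
          return float(np.interp(level, distal_dose[::-1], distal_depth[::-1]))

      def fwhm(x, y):
          """Full width at half maximum of a single-peaked lateral profile."""
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float) / np.max(y)
          above = np.where(y >= 0.5)[0]
          left = np.interp(0.5, [y[above[0] - 1], y[above[0]]], [x[above[0] - 1], x[above[0]]])
          right = np.interp(0.5, [y[above[-1] + 1], y[above[-1]]], [x[above[-1] + 1], x[above[-1]]])
          return float(right - left)

      # hypothetical measured vs. simulated curves, standing in for real scan data
      depth = np.linspace(0.0, 20.0, 401)                   # cm
      measured = np.exp(-((depth - 15.00) / 1.2) ** 2)
      simulated = np.exp(-((depth - 15.05) / 1.2) ** 2)
      lateral = np.linspace(-5.0, 5.0, 201)                 # cm
      spot = np.exp(-lateral ** 2 / (2 * 0.8 ** 2))

      shift_mm = 10.0 * abs(distal_falloff_depth(depth, measured)
                            - distal_falloff_depth(depth, simulated))
      print("distal 80% fall-off difference: %.2f mm" % shift_mm)
      print("lateral FWHM: %.2f cm" % fwhm(lateral, spot))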

  5. Experimental validation of a rapid Monte Carlo based micro-CT simulator

    International Nuclear Information System (INIS)

    Colijn, A P; Zbijewski, W; Sasov, A; Beekman, F J

    2004-01-01

    We describe a newly developed, accelerated Monte Carlo simulator of a small animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of the simulated projections, as compared to the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of percent in specific areas of the body and therefore scatter can significantly affect quantitative accuracy in small animal CT imaging.

  6. Development and experimental validation of a Monte Carlo modeling of the neutron emission from a D-T generator

    Energy Technology Data Exchange (ETDEWEB)

    Remetti, Romolo; Lepore, Luigi [Sapienza University of Rome, Dept. SBAI, Via Antonio Scarpa 14, 00161 Rome (Italy); Cherubini, Nadia [ENEA CRE Casaccia, Nuclear Material Characterization Laboratory and Nuclear Waste Management, Via Anguillarese 301, 00123 Rome (Italy)

    2017-01-11

    An extensive use of Monte Carlo simulations led to the identification of a Thermo Scientific MP320 neutron generator MCNPX input deck. This input deck is currently used at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular for explosives and drugs detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the capabilities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  7. Development and experimental validation of a Monte Carlo modeling of the neutron emission from a D-T generator

    Science.gov (United States)

    Remetti, Romolo; Lepore, Luigi; Cherubini, Nadia

    2017-01-01

    An extensive use of Monte Carlo simulations led to the identification of a Thermo Scientific MP320 neutron generator MCNPX input deck. This input deck is currently used at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular for explosives and drugs detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the capabilities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  8. Experimental validation of the DPM Monte Carlo code using minimally scattered electron beams in heterogeneous media

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Moran, Jean M.; Nurushev, Teamor S.; McShan, Daniel L.; Fraass, Benedick A.; Wilderman, Scott J.; Bielajew, Alex F.

    2002-01-01

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for electron beam dose calculations in heterogeneous media. Measurements were made using 10 MeV and 50 MeV minimally scattered, uncollimated electron beams from a racetrack microtron. Source distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber scans and then benchmarked against measurements in a homogeneous water phantom. The in-air spatial distributions were found to have FWHMs of 4.7 cm and 1.3 cm, at 100 cm from the source, for the 10 MeV and 50 MeV beams respectively. Energy spectra for the electron beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. Profile measurements were made using an ion chamber in a water phantom with slabs of lung or bone-equivalent materials submerged at various depths. DPM calculations are, on average, within 2% agreement with measurement for all geometries except for the 50 MeV beam incident on a 6 cm lung-equivalent slab. Measurements using approximately monoenergetic, 50 MeV, 'pencil-beam'-type electrons in heterogeneous media provide conditions of maximum electronic disequilibrium and hence present a stringent test of the code's electron transport physics; the agreement noted between calculation and measurement illustrates that the DPM code is capable of accurate dose calculation even under such conditions. (author)

  9. Concurrent Reflectance Confocal Microscopy and Laser Doppler Flowmetry to Improve Skin Cancer Imaging: A Monte Carlo Model and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Alireza Mowla

    2016-09-01

    Optical interrogation of suspicious skin lesions is standard care in the management of skin cancer worldwide. Morphological and functional markers of malignancy are often combined to improve expert human diagnostic power. We propose the evaluation of the combination of two independent optical biomarkers of skin tumours concurrently. The morphological modality of reflectance confocal microscopy (RCM) is combined with the functional modality of laser Doppler flowmetry, which is capable of quantifying tissue perfusion. To realize the idea, we propose laser feedback interferometry as an implementation of RCM, which is able to detect the Doppler signal in addition to the confocal reflectance signal. Based on the proposed technique, we study numerical models of skin tissue incorporating two optical biomarkers of malignancy: (i) abnormal red blood cell velocities and concentrations and (ii) anomalous optical properties manifested through tissue confocal reflectance, using Monte Carlo simulation. We also conduct a laboratory experiment on a microfluidic channel containing a dynamic turbid medium, to validate the efficacy of the technique. We quantify the performance of the technique by examining a signal to background ratio (SBR) in both the numerical and experimental models, and it is shown that both simulated and experimental SBRs improve consistently using this technique. This work indicates the feasibility of an optical instrument, which may have a role in enhanced imaging of skin malignancies.

  10. New Monte Carlo model of cylindrical diffusing fibers illustrates axially heterogeneous fluorescence detection: simulation and experimental validation.

    Science.gov (United States)

    Baran, Timothy M; Foster, Thomas H

    2011-08-01

    We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy.

  11. Hysteresis of liquid adsorption in porous media by coarse-grained Monte Carlo with direct experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Zeidman, Benjamin D. [Department of Chemistry, Colorado School of Mines, Golden, Colorado 80401 (United States); Lu, Ning [Department of Civil and Environmental Engineering, Colorado School of Mines, Golden, Colorado 80401 (United States); Wu, David T., E-mail: dwu@mines.edu [Department of Chemistry, Colorado School of Mines, Golden, Colorado 80401 (United States); Department of Chemical and Biological Engineering, Colorado School of Mines, Golden, Colorado 80401 (United States)

    2016-05-07

    The effects of path-dependent wetting and drying manifest themselves in many types of physical systems, including nanomaterials, biological systems, and porous media such as soil. It is desirable to better understand how these hysteretic macroscopic properties result from a complex interplay between gases, liquids, and solids at the pore scale. Coarse-Grained Monte Carlo (CGMC) is an appealing approach to model these phenomena in complex pore spaces, including ones determined experimentally. We present two-dimensional CGMC simulations of wetting and drying in two systems with pore spaces determined by sections from micro X-ray computed tomography: a system of randomly distributed spheres and a system of Ottawa sand. Results for the phase distribution, water uptake, and matric suction, when corrected for extension to three dimensions, show excellent agreement with experimental measurements on the same systems. This supports the hypothesis that CGMC can generate metastable configurations representative of experimental hysteresis and can also be used to predict hysteretic constitutive properties of particular experimental systems, given pore space images.
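
    The coarse-grained Monte Carlo idea behind this record can be illustrated with a far simpler toy than the tomography-based pore spaces it describes: a two-dimensional lattice-gas slit pore that is filled and drained by sweeping the chemical potential up and down under grand-canonical Metropolis sampling, which reproduces the same kind of wetting/drying hysteresis. The geometry, interaction strengths and sweep schedule below are illustrative assumptions, not parameters from the study.

      import numpy as np

      rng = np.random.default_rng(0)

      # toy slit pore: solid walls on the top and bottom rows, fluid sites in between
      NX, NY = 48, 10
      solid = np.zeros((NY, NX), dtype=bool)
      solid[0, :] = solid[-1, :] = True
      occ = np.zeros((NY, NX), dtype=int)    # lattice-gas occupancy (0 = vapour, 1 = liquid)

      EPS_FF = 3.0   # fluid-fluid attraction (illustrative, in units of kT)
      EPS_SF = 4.0   # solid-fluid attraction
      BETA = 1.0

      def sweep(mu, n_steps=10_000):
          """Grand-canonical Metropolis sampling at chemical potential mu."""
          for _ in range(n_steps):
              i = rng.integers(1, NY - 1)             # fluid-accessible rows only
              j = rng.integers(0, NX)
              nb = [(i - 1, j), (i + 1, j), (i, (j - 1) % NX), (i, (j + 1) % NX)]
              nn_fluid = sum(occ[a, b] for a, b in nb if not solid[a, b])
              nn_solid = sum(1 for a, b in nb if solid[a, b])
              new = 1 - occ[i, j]
              dE = (new - occ[i, j]) * (-EPS_FF * nn_fluid - EPS_SF * nn_solid - mu)
              if dE <= 0 or rng.random() < np.exp(-BETA * dE):
                  occ[i, j] = new

      def filling_fraction():
          return occ[~solid].mean()

      # sweep mu up (adsorption) and back down (desorption) to expose the hysteresis loop
      mus = np.linspace(-7.5, -4.5, 31)
      adsorption = []
      for mu in mus:
          sweep(mu)
          adsorption.append(filling_fraction())
      desorption = []
      for mu in mus[::-1]:
          sweep(mu)
          desorption.append(filling_fraction())

      for mu, up, down in zip(mus, adsorption, desorption[::-1]):
          print(f"mu = {mu:6.2f}   filling up = {up:.2f}   filling down = {down:.2f}")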

  12. SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields

    International Nuclear Information System (INIS)

    Varadhan; Way, S; Arentsen, L; Gerbi, B

    2016-01-01

    Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small field size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity Linac and all available (6, 10, 14, 20 and 25 cone) applicator sizes. Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80–20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm². Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom² scanning system and compared against eMC calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and the percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb approximating the limiting cutout size for an electron field by the lateral scatter equilibrium condition (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.
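
    The "within 2.5% or 2 mm" agreement quoted in this record is closely related to the standard gamma-index test used for such profile comparisons. The sketch below is a minimal one-dimensional global gamma implementation applied to synthetic profiles; it is not the clinical analysis software used in the study, and the example curves are invented.

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.025, dist_tol_mm=2.0):
          """Global 1D gamma: for each reference point, the minimum combined
          dose-difference / distance-to-agreement metric over all evaluated points."""
          d_ref = np.asarray(d_ref, dtype=float)
          d_eval = np.asarray(d_eval, dtype=float)
          x_eval = np.asarray(x_eval, dtype=float)
          dose_norm = dose_tol * d_ref.max()           # global dose normalisation
          gammas = np.empty_like(d_ref)
          for k, (x0, d0) in enumerate(zip(x_ref, d_ref)):
              dd = (d_eval - d0) / dose_norm
              dx = (x_eval - x0) / dist_tol_mm
              gammas[k] = np.sqrt(dd ** 2 + dx ** 2).min()
          return gammas

      # hypothetical measured vs. calculated electron depth-dose curves (depth in mm)
      depth = np.arange(0.0, 60.0, 0.5)
      measured = np.exp(-((depth - 22.0) / 14.0) ** 2)
      calculated = 1.01 * np.exp(-((depth - 22.4) / 14.0) ** 2)

      g = gamma_1d(depth, measured, depth, calculated)
      print(f"gamma pass rate (2.5%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")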

  13. SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields

    Energy Technology Data Exchange (ETDEWEB)

    Varadhan [Minneapolis Radiation Oncology, Fridley, MN (United States); Way, S [Minneapolis Radiation Oncology, Robbinsdale, MN (United States); Arentsen, L; Gerbi, B [University of Minnesota, Minneapolis, MN (United States)

    2016-06-15

    Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small field size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity Linac and all available (6, 10, 14, 20 and 25 cone) applicator sizes. Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80–20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm². Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom² scanning system and compared against eMC calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and the percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb approximating the limiting cutout size for an electron field by the lateral scatter equilibrium condition (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.

  14. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    Science.gov (United States)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS calculated and MC simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS was proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  15. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    International Nuclear Information System (INIS)

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Valvo, F; Fossati, P; Ciocca, M; Ferrari, A

    2015-01-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5–30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS calculated and MC simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS was proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification. (paper)

  16. SU-F-T-152: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil BeamScanning Proton Therapy Treatment Planning System in Heterogeneous Media

    Energy Technology Data Exchange (ETDEWEB)

    Lin, L; Huang, S; Kang, M; Ainsley, C; Simone, C; McDonough, J; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Eclipse AcurosPT 13.7, the first commercial Monte Carlo pencil beam scanning (PBS) proton therapy treatment planning system (TPS), was experimentally validated for an IBA dedicated PBS nozzle in the CIRS 002LFC thoracic phantom. Methods: A two-stage procedure involving the use of TOPAS 1.3 simulations was performed. First, Geant4-based TOPAS simulations in this phantom were experimentally validated for single and multi-spot profiles at several depths for 100, 115, 150, 180, 210 and 225 MeV proton beams, using the combination of a Lynx scintillation detector and a MatriXXPT ionization chamber array. Second, benchmark calculations were performed with both AcurosPT and TOPAS in a phantom identical to the CIRS 002LFC, with the exception that the CIRS bone/mediastinum/lung tissues were replaced with similar tissues that are predefined in AcurosPT (a limitation of this system which necessitates the two-stage procedure). Results: Spot sigmas measured in tissue agreed within 0.2 mm with the TOPAS simulations for all six energies, while AcurosPT was consistently found to have larger spot sigmas (by <0.7 mm) than TOPAS. Using absolute dose calibration by the MatriXXPT, the agreement between the profile measurements and the TOPAS simulations, and between the calculation benchmarks, was over 97% (2 mm/2% gamma criteria) except near the end of range. Overdosing and underdosing were observed at the low and high density sides of tissue interfaces, respectively, and these increased with increasing depth and decreasing energy. Near the mediastinum/lung interface the magnitude can exceed 5 mm/10%. Furthermore, we observed a >5% quenching effect in the conversion of Lynx measurements to dose. Conclusion: We recommend the use of an ionization chamber array in combination with the scintillation detector to measure absolute dose and relative PBS spot characteristics. We also recommend the use of an independent Monte Carlo calculation benchmark for the commissioning of a commercial TPS. Partially

  17. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    CERN Document Server

    Magro, G; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-01-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5–30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS calculated and MC simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size r...

  18. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    Science.gov (United States)

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contributions of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity to predict out-of-field doses at positions not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference of less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical directions. Testing the analytical model in clinical configurations proved the need to separate the contributions of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable, while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements, with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
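
    As a rough illustration of the kind of facility-specific parameterisation described above, the following sketch fits a two-term analytical function to hypothetical MC-style H*(10) values as a function of distance. The functional form, parameter names and data are assumptions chosen for illustration only and are not taken from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def h10_model(d_cm, a_cascade, k_cascade, a_evap, k_evap):
          """Assumed two-component form: a fast (cascade-like) term and a softer
          (evaporation-like) term, each decaying with distance from the field."""
          return a_cascade * np.exp(-k_cascade * d_cm) + a_evap * np.exp(-k_evap * d_cm)

      # hypothetical MC-computed ambient dose equivalent values (arbitrary units) vs. distance
      distance = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0, 250.0])
      h10_mc = np.array([310.0, 140.0, 75.0, 45.0, 20.0, 11.0, 6.5])

      popt, _ = curve_fit(h10_model, distance, h10_mc,
                          p0=[300.0, 0.02, 50.0, 0.005], maxfev=10000)
      rel_diff = 100.0 * (h10_model(distance, *popt) - h10_mc) / h10_mc
      print("fitted parameters:", np.round(popt, 4))
      print("relative difference to the MC values (%):", np.round(rel_diff, 1))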

  19. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  20. Exploration of parameters influencing the self-absorption losses in luminescent solar concentrators with an experimentally validated combined ray-tracing/Monte-Carlo model

    Science.gov (United States)

    Krumer, Zachar; van Sark, Wilfried G. J. H. M.; de Mello Donegá, Celso; Schropp, Ruud E. I.

    2013-09-01

    Luminescent solar concentrators (LSCs) are low cost photovoltaic devices, which reduce the amount of semiconductor material needed per unit area of a photovoltaic solar energy converter by means of concentration. The device consists of a thin plastic plate in which luminescent species (fluorophores) have been incorporated. The fluorophores absorb the solar light and radiatively re-emit a part of the energy. Total internal reflection traps most of the emitted light inside the plate and wave-guides it to a narrow side facet with a solar cell attached, where conversion into electricity occurs. The efficiency of such devices is as yet rather low, due to several loss mechanisms, of which self-absorption is of high importance. Combined ray-tracing and Monte Carlo simulation is a widely used tool for efficiency estimations of LSC devices prior to manufacturing. We have applied this method to a model experiment, in which we analysed the impact of self-absorption on the LSC efficiency of fluorophores with different absorption/emission spectral overlap (Stokes shift): several organic dyes and semiconductor quantum dots (single compound and core/shell of type-II). These results are compared with the ones obtained experimentally, demonstrating good agreement. The validated model is used to investigate systematically the influence of spectral separation and luminescence quantum efficiency on the intensity loss as a consequence of increased self-absorption. The results are used to adopt a quantity called the self-absorption cross-section and establish it as a reliable criterion for the self-absorption properties of materials, one that can be obtained from fundamental data and has a more universal scope of application than the currently used Stokes shift.
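
    The self-absorption loss mechanism discussed in this record lends itself to a very small Monte Carlo illustration: photons emitted inside a one-dimensional waveguide are propagated towards the collecting edge, re-absorbed with a probability set by an assumed self-absorption coefficient, and re-emitted with the luminescence quantum yield. The 1D geometry and all parameter values below are illustrative assumptions, not quantities from the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def edge_collection_probability(alpha_sa, quantum_yield, plate_length=5.0,
                                      trapping_efficiency=0.75, n_photons=50_000):
          """Fraction of absorbed solar photons reaching the edge cell in a 1D toy LSC;
          alpha_sa is the self-absorption coefficient (per cm) seen by the emitted light."""
          collected = 0
          for _ in range(n_photons):
              # first emission: must be re-emitted (QY) and trapped by total internal reflection
              if rng.random() > quantum_yield or rng.random() > trapping_efficiency:
                  continue
              position = rng.random() * plate_length   # distance to the collecting edge at x = 0
              while True:
                  p_survive = np.exp(-alpha_sa * position)
                  if rng.random() < p_survive:
                      collected += 1                   # reached the edge without re-absorption
                      break
                  # re-absorbed en route: sample the absorption point from the truncated exponential
                  u = rng.random()
                  travelled = -np.log(1.0 - u * (1.0 - p_survive)) / alpha_sa
                  position -= travelled
                  # the re-emitted photon must again survive the quantum yield and the escape cone
                  # (for simplicity it is assumed to keep heading towards the collecting edge)
                  if rng.random() > quantum_yield or rng.random() > trapping_efficiency:
                      break
          return collected / n_photons

      for alpha in (0.05, 0.2, 0.8):                   # larger alpha = larger spectral overlap
          p = edge_collection_probability(alpha, quantum_yield=0.9)
          print(f"alpha_sa = {alpha:4.2f} /cm  ->  edge collection probability = {p:.3f}")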

  1. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo Experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo Experiment; it also encourages the proper design of Monte Carlo Experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo Experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical modes are generated by the routines of the language.

  2. Treatment and Combination of Data Quality Monitoring Histograms to Perform Data vs. Monte Carlo Validation

    CERN Document Server

    Colin, Nolan

    2013-01-01

    In CMS's automated data quality validation infrastructure, it is not currently possible to assess how well Monte Carlo simulations describe data from collisions, if at all. In order to guarantee high quality data, a novel work flow was devised to perform `data vs. Monte Carlo' validation. Support for this comparison was added by allowing distributions from several Monte Carlo samples to be combined, matched to the data and then displayed in a histogram stack, overlaid with the experimental data.

  3. Optimization of Monte Carlo particle transport parameters and validation of a novel high throughput experimental setup to measure the biological effects of particle beams.

    Science.gov (United States)

    Patel, Darshana; Bronk, Lawrence; Guan, Fada; Peeler, Christopher R; Brons, Stephan; Dokic, Ivana; Abdollahi, Amir; Rittmüller, Claudia; Jäkel, Oliver; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2017-11-01

    Accurate modeling of the relative biological effectiveness (RBE) of particle beams requires increased systematic in vitro studies with human cell lines, with care towards minimizing uncertainties in biologic assays as well as in physical parameters. In this study, we describe a novel high-throughput experimental setup and an optimized parameterization of the Monte Carlo (MC) simulation technique that is universally applicable for accurate determination of the RBE of clinical ion beams. Clonogenic cell-survival measurements on a human lung cancer cell line (H460) are presented using proton irradiation. Experiments were performed at the Heidelberg Ion Therapy Center (HIT), with support from the Deutsches Krebsforschungszentrum (DKFZ) in Heidelberg, Germany, using a mono-energetic horizontal proton beam. A custom-made variable range selector was designed for the horizontal beam line using the Geant4 MC toolkit. This unique setup enabled a high-throughput clonogenic assay investigation of multiple, well-defined doses and linear energy transfers (LETs) per irradiation for human lung cancer cells (H460) cultured in a 96-well plate. Sensitivity studies based on the application of different physics lists, in conjunction with different electromagnetic constructors and production threshold values, to the MC simulations were undertaken for accurate assessment of the calculated dose and the dose-averaged LET (LETd). These studies were extended to helium and carbon ion beams. Sensitivity analysis of the MC parameterization revealed substantial dependence of the dose and LETd values on both the choice of physics list and the production threshold values. While the dose and LETd calculations using the FTFP_BERT_LIV, FTFP_BERT_EMZ, FTFP_BERT_PEN and QGSP_BIC_EMY physics lists agree well with each other for all three ions, they show large differences when compared to the FTFP_BERT physics list with the default electromagnetic constructor. For carbon ions, the dose corresponding to the largest LETd
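
    Since the record compares dose-averaged LET values obtained with different physics settings, a brief reminder of how LETd is commonly scored may help: it is the dose-weighted mean of the step LET in a voxel. The sketch below uses made-up per-step values and is not the scoring implementation used in the study.

      import numpy as np

      # hypothetical per-step quantities scored in one voxel of a MC simulation
      step_dose = np.array([0.12, 0.40, 0.25, 0.08, 0.15])   # arbitrary dose units
      step_let = np.array([1.8, 2.4, 3.1, 9.5, 5.2])         # keV/um in water

      # dose-averaged LET: LET_d = sum(d_i * LET_i) / sum(d_i)
      let_d = np.sum(step_dose * step_let) / np.sum(step_dose)
      # track-averaged LET shown for contrast (equal weight per step here)
      let_t = np.mean(step_let)

      print(f"LET_d = {let_d:.2f} keV/um,  LET_t = {let_t:.2f} keV/um")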

  4. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

    This study reports on conducted experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations of the detection of surface-breaking cracks are compared with experiments in pulse-echo mode on surface-breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate with a thickness of 36 mm, and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies of 1, 2 and 4 MHz and angles of 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total, 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able to, within certain limits, perform simulations that are close to experiments. It may be concluded that: For corner echoes the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more for some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ too much. The tip diffraction echoes also agree well in general. For some cases, where the defect is close to the probe, the amplitudes differ more than 10-15 dB, but for all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  5. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    Inhomogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development work on experimental as well as computational methods for quantifying this streaming radiation is still continuing. The Monte Carlo based radiation transport code MCNP is the usual tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated will be analysed by simulating the experimental set-up with the MCNP code, and optimized input parameters for the code, for use in solving similar radiation streaming problems, will be formulated. Comparison of experimental data obtained from radiation streaming experiments through ducts will give a set of thumb rules and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through the gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study throws light on the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made; studies on spectral comparison of the streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, the results of this kind of radiation streaming simulation and experiment will be very useful in shield structure optimization without compromising radiation safety.

  6. Monte Carlo validation of self shielding and void effect calculations

    International Nuclear Information System (INIS)

    Tellier, H.; Coste, M.; Raepsaet, C.; Soldevila, M.; Van der Gucht, C.

    1995-01-01

    The self-shielding validation and the void effect are studied with the Monte Carlo method. The satisfactory agreement obtained between the APOLLO 2 results for the self-shielding effect and the TRIPOLI and MCNP results allows us to be confident in the multigroup transport code. (K.A.)

  7. Fitting experimental data by using weighted Monte Carlo events

    International Nuclear Information System (INIS)

    Stojnev, S.

    2003-01-01

    A method for fitting experimental data using a modified Monte Carlo (MC) sample is developed. It is intended to help when a single finite MC source has to be used to fit experimental data in a search for the parameters of an underlying theory. The extraction of the searched-for parameters, the error estimation and the goodness-of-fit testing are based on the binned maximum likelihood method.
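
    The binned maximum likelihood approach mentioned above can be illustrated with a minimal example in which a weighted MC template is scaled to fit a data histogram by minimising the negative log Poisson likelihood. The variable names and toy data are hypothetical, and the sketch ignores the per-bin MC statistical uncertainty that the full method is designed to handle.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      bins = np.linspace(0.0, 10.0, 21)

      # weighted MC sample (events carrying generator weights) and pseudo-data
      mc_values = rng.exponential(scale=3.0, size=50_000)
      mc_weights = rng.uniform(0.5, 1.5, size=mc_values.size)
      data_values = rng.exponential(scale=3.0, size=2_000)

      mc_template, _ = np.histogram(mc_values, bins=bins, weights=mc_weights)
      data_counts, _ = np.histogram(data_values, bins=bins)

      def neg_log_likelihood(norm):
          """Binned Poisson negative log-likelihood of the data given the scaled MC template."""
          expected = np.clip(norm * mc_template, 1e-9, None)   # guard against empty bins
          return np.sum(expected - data_counts * np.log(expected))

      result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1.0), method="bounded")
      print(f"fitted normalisation: {result.x:.5f} "
            f"(naive expectation ~ {data_counts.sum() / mc_template.sum():.5f})")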

  8. First validation of the new continuous energy version of the MORET5 Monte Carlo code

    International Nuclear Information System (INIS)

    Miss, Joachim; Bernard, Franck; Forestier, Benoit; Haeck, Wim; Richet, Yann; Jacquet, Olivier

    2008-01-01

    The 5.A.1 version is the next release of the MORET Monte Carlo code dedicated to criticality and reactor calculations. This new version combines all the capabilities that are already available in the multigroup version with many new and enhanced features. The main capabilities of the previous version are the powerful association of a deterministic and a Monte Carlo approach (as in, for instance, APOLLO-MORET), the modular geometry, five source sampling techniques and two simulation strategies. The major advance in MORET5 is the ability to perform calculations with either a multigroup or a continuous-energy simulation. Thanks to these new developments, we now have better control over the whole process of criticality calculations, from reading the basic nuclear data to the Monte Carlo simulation itself. Moreover, this new capability enables us to better validate the deterministic-Monte Carlo multigroup calculations by performing continuous-energy calculations with the same code, using the same geometry and tracking algorithms. The aim of this paper is to describe the main options available in this new release and to present the first results. Comparisons of the MORET5 continuous-energy results with experimental measurements and with another continuous-energy Monte Carlo code are provided in terms of validation and time performance. Finally, an analysis of the interest of using a unified energy grid for continuous-energy Monte Carlo calculations is presented. (authors)

  9. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures, than are required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.

  10. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures, than are required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
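
    The record contrasts Multilevel Monte Carlo with the direct double-loop Monte Carlo estimator of the expected information gain. The sketch below shows that double-loop baseline for a toy linear-Gaussian model with a single design variable; the model, prior and noise level are illustrative assumptions, and the multilevel variance-reduction layer is not included.

      import numpy as np

      rng = np.random.default_rng(3)
      SIGMA = 0.5   # observation noise: y = design * theta + N(0, SIGMA^2), theta ~ N(0, 1)

      def log_likelihood(y, theta, design):
          return -0.5 * ((y - design * theta) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2 * np.pi))

      def expected_information_gain(design, n_outer=2_000, n_inner=2_000):
          """Double-loop MC estimate of EIG = E_y[ log p(y|theta) - log E_theta'[ p(y|theta') ] ]."""
          theta_outer = rng.standard_normal(n_outer)
          y = design * theta_outer + SIGMA * rng.standard_normal(n_outer)
          theta_inner = rng.standard_normal(n_inner)
          eig_samples = np.empty(n_outer)
          for k in range(n_outer):
              log_like = log_likelihood(y[k], theta_outer[k], design)
              # inner loop: MC estimate of the log evidence log p(y) under the prior
              inner = log_likelihood(y[k], theta_inner, design)
              log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
              eig_samples[k] = log_like - log_evidence
          return eig_samples.mean()

      for design in (0.5, 1.0, 2.0):
          eig = expected_information_gain(design)
          exact = 0.5 * np.log(1.0 + design ** 2 / SIGMA ** 2)   # analytic value for this toy model
          print(f"design = {design:3.1f}   EIG (double-loop MC) ~ {eig:.3f}   analytic = {exact:.3f}")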

  11. Monte Carlo Modelling of Mammograms: Development and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Spyrou, G; Panayiotakis, G [University of Patras, School of Medicine, Medical Physics Department, 265 00 Patras (Greece); Bakas, A [Technological Educational Institution of Athens, Department of Radiography, 122 10 Athens (Greece); Tzanakos, G [University of Athens, Department of Physics, Division of Nuclear and Particle Physics, 157 71 Athens (Greece)

    1999-12-31

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors) 16 refs, 4 figs

  12. Monte Carlo Modelling of Mammograms: Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  13. Study of TXRF experimental system by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Costa, Ana Cristina M.; Leitao, Roberta G.; Lopes, Ricardo T.; Anjos, Marcelino J.; Conti, Claudio C.

    2011-01-01

    The Total-Reflection X-ray Fluorescence (TXRF) technique offers unique possibilities for studying the concentrations of a wide range of trace elements in various types of samples. The TXRF technique is also widely used to study trace elements in biological, medical and environmental samples due to its multielemental character as well as the simplicity of the sample preparation and quantification methods used. In general, however, the TXRF experimental setup is not simple and might require substantial experimental effort. On the other hand, in recent years, portable TXRF experimental systems have been developed. This has motivated us to develop our own portable TXRF system. In this work we present a first step towards optimizing a TXRF experimental setup using Monte Carlo simulation with the MCNP code. The results show that the Monte Carlo simulation method can be used to investigate the development of a TXRF experimental system before its assembly. (author)

  14. Validation of cross sections for Monte Carlo simulation of the photoelectric effect

    CERN Document Server

    Han, Min Cheol; Pia, Maria Grazia; Basaglia, Tullio; Batic, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo

    2016-01-01

    Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as the ones best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surp...

  15. Preliminary validation of a Monte Carlo model for IMRT fields

    International Nuclear Information System (INIS)

    Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad

    2011-01-01

    Full text: A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions of approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. The areas of greatest deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.

  16. MATLAB platform for Monte Carlo planning and dosimetry experimental evaluation

    International Nuclear Information System (INIS)

    Baeza, J. A.; Ureba, A.; Jimenez-Ortega, E.; Pereira-Barbeiro, A. R.; Leal, A.

    2013-01-01

    A new platform has been developed for full Monte Carlo planning and for an independent experimental evaluation, and it can be integrated into clinical practice. The tool has proved its usefulness and efficiency and now forms part of our research group's workflow, being the tool used for the generation of results, which are suitably reviewed and are being published. This software is an effort to integrate numerous image processing algorithms with planning optimization algorithms, allowing the whole MCTP planning process to be carried out from a single interface. In addition, it becomes a flexible and accurate tool for the evaluation of experimental dosimetric data for the quality control of actual treatments. (Author)

  17. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  18. MATLAB platform for Monte Carlo planning and dosimetry experimental evaluation; Plataforma Matlab para planificacion Monte Carlo y evaluacion dosimetrica experimental

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, J. A.; Ureba, A.; Jimenez-Ortega, E.; Pereira-Barbeiro, A. R.; Leal, A.

    2013-07-01

    A new platform has been developed for full Monte Carlo planning and for an independent experimental evaluation, and it can be integrated into clinical practice. The tool has proved its usefulness and efficiency and now forms part of our research group's workflow, being the tool used for the generation of results, which are suitably reviewed and are being published. This software is an effort to integrate numerous image processing algorithms with planning optimization algorithms, allowing the whole MCTP planning process to be carried out from a single interface. In addition, it becomes a flexible and accurate tool for the evaluation of experimental dosimetric data for the quality control of actual treatments. (Author)

  19. Monte Carlo Simulations Validation Study: Vascular Brachytherapy Beta Sources

    International Nuclear Information System (INIS)

    Orion, I.; Koren, K.

    2004-01-01

    During the last decade many versions of angioplasty irradiation treatments have been proposed. The purpose of this unique brachytherapy is to administer a sufficient radiation dose to the vein walls in order to prevent restenosis, a clinical sequela of balloon angioplasty. The most suitable sources for this vascular brachytherapy are β⁻ emitters such as Re-188, P-32, and Sr-90/Y-90, with maximum energies of up to 2.1 MeV [1,2,3]. The radioactive catheter configurations offered for these treatments can be a simple wire [4], a fluid-filled balloon or a coated stent. Each source is positioned differently inside the blood vessel, and the ranges of the emitted electrons therefore vary. Many types of sources and configurations have been studied either experimentally or with the Monte Carlo calculation technique, with most of the Monte Carlo simulations carried out using EGS4 [5] or MCNP [6]. In this study we compared the beta-source absorbed dose versus radial distance for two treatment configurations using MCNP and EGS4 simulations. This comparison was aimed at discovering the differences between the MCNP and EGS4 simulation code systems in intermediate-energy electron transport.

  20. Characterization of a CLYC detector and validation of the Monte Carlo Simulation by measurement experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Suk; Ye, Sung Joon [Seoul National University, Seoul (Korea, Republic of); Smith, Martin B.; Koslowsky, Martin R. [Bubble Technology Industries Inc., Chalk River (Canada); Kwak, Sung Woo [Korea Institute of Nuclear Nonproliferation And Control (KINAC), Daejeon (Korea, Republic of); Kim, Gee Hyun [Sejong University, Seoul (Korea, Republic of)]

    2017-03-15

    Simultaneous detection of neutrons and gamma rays has become much more practicable by taking advantage of the good gamma-ray discrimination properties obtained with the pulse shape discrimination (PSD) technique. Recently, we introduced a commercial CLYC system in Korea and performed initial characterization and simulation studies of the CLYC detector system to provide references for the future implementation of the dual-mode scintillator system in various studies and applications. We evaluated a CLYC detector with 95% 6Li enrichment using various gamma-ray sources and a 252Cf neutron source, validating our Monte Carlo simulation results against measurement experiments. Absolute full-energy peak efficiency values were calculated for the gamma-ray sources and the neutron source using MCNP6 and compared with measurements of the calibration sources. In addition, the behavioral characteristics of neutrons were validated by comparing simulations and experiments on neutron moderation with various polyethylene (PE) moderator thicknesses. Both results showed good agreement in the overall characteristics of the gamma and neutron detection efficiencies, with a consistent ~20% discrepancy. Furthermore, moderation of neutrons emitted from 252Cf showed similar behavior in the simulation and the experiment, in terms of the relative ratios as a function of PE moderator thickness. A CLYC detector system was thus characterized for its energy resolution and detection efficiency, and the Monte Carlo simulations of the detector system were validated experimentally. Validation of the simulation results for the overall trend of the CLYC detector behavior provides the fundamental basis and validity for follow-up Monte Carlo simulation studies for the development of our dual-particle imager using a rotational modulation collimator.

  1. Development and validation of ALEPH Monte Carlo burn-up code

    International Nuclear Information System (INIS)

    Stankovskiy, A.; Van den Eynde, G.; Vidmar, T.

    2011-01-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. Belonging to the category of shells coupling Monte Carlo transport (MCNP or MCNPX) with 'deterministic' depletion codes (ORIGEN-2.2), ALEPH possesses some unique features that distinguish it from other codes. The most important feature is full data consistency between the steady-state Monte Carlo and the time-dependent depletion calculations. Recent improvements of ALEPH concern the full implementation of general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII, JENDL-3.3). The upgraded version of the code is capable of treating isomeric branching ratios, neutron-induced fission product yields, spontaneous fission yields and the energy release per fission recorded in ENDF-formatted data files. An alternative algorithm for the time evolution of nuclide concentrations has been added. A predictor-corrector mechanism and the calculation of nuclear heating are available as well. The code has been validated against the results of the REBUS experimental programme. The upgraded version of ALEPH shows better agreement with the measured data than other codes, including the previous version of ALEPH. (authors)

  2. Validation of variance reduction techniques in Mediso (SPIRIT DH-V) SPECT system by Monte Carlo

    International Nuclear Information System (INIS)

    Rodriguez Marrero, J. P.; Diaz Garcia, A.; Gomez Facenda, A.

    2015-01-01

    Monte Carlo simulation of nuclear medicine imaging systems is a widely used method for reproducing their operation in a real clinical environment. There are several Single Photon Emission Tomography (SPECT) systems in Cuba, so it is clearly necessary to introduce a reliable and fast simulation platform in order to obtain consistent image data that reproduce the original measurement conditions. To fulfill these requirements the Monte Carlo platform GAMOS (Geant4 Medicine Oriented Architecture for Applications) has been used. Due to the size and complex configuration of the parallel-hole collimators in real clinical SPECT systems, Monte Carlo simulation usually consumes excessive time and computing resources. The main goal of the present work is to optimize the calculation efficiency by means of new GAMOS functionality. Two GAMOS variance reduction techniques were developed and validated to speed up the calculations. These procedures focus and limit the transport of gamma quanta inside the collimator. The results obtained were assessed experimentally on the Mediso (SPIRIT DH-V) SPECT system. The main quality control parameters, such as sensitivity and spatial resolution, were determined. Differences of 4.6% in sensitivity and 8.7% in spatial resolution were found with respect to the manufacturer values. Simulation time was reduced by a factor of up to 650, making it possible to perform several studies in about 8 hours each. (Author)
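
    As a rough illustration of what focusing and limiting the transport of gamma quanta inside the collimator can mean, the sketch below applies an acceptance-cone test with Russian roulette to sampled emission directions. It is a generic, hypothetical example written in Python, not GAMOS code or its actual classes, and the acceptance angle and survival probability are assumed values:

      import numpy as np

      rng = np.random.default_rng(7)

      def track_or_kill(direction, acceptance_deg=3.0, survival_prob=0.01):
          """Keep photons nearly parallel to the collimator axis (z); roulette the rest."""
          cos_theta = abs(direction[2]) / np.linalg.norm(direction)
          if cos_theta >= np.cos(np.radians(acceptance_deg)):
              return True, 1.0                     # tracked normally, statistical weight 1
          if rng.random() < survival_prob:
              return True, 1.0 / survival_prob     # rare survivor carries a larger weight
          return False, 0.0                        # killed early: no further transport cost

      # usage: sample isotropic emission directions and count how many photons get tracked
      n = 50_000
      phi = 2.0 * np.pi * rng.random(n)
      cos_t = 2.0 * rng.random(n) - 1.0
      sin_t = np.sqrt(1.0 - cos_t**2)
      dirs = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
      tracked = sum(track_or_kill(d)[0] for d in dirs)
      print(f"photons fully tracked: {tracked} of {n}")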

  3. Experimental and Monte Carlo study of a prototype optical-fibre hodoscope for high-resolution applications; Estudio experimental y por Monte Carlo de un prototipo de hodoscopio de fibras opticas para aplicaciones de alta resolucion

    Energy Technology Data Exchange (ETDEWEB)

    Granero, D.; Blasco, J. M.; Sanchis, E.; Gonzalez, V.; Martin, J. D.; Ballester, F.; Sanchis, E.

    2013-07-01

    The purpose of this work is to test the response of a system composed of 21 scintillating fibres and its readout electronics, as a proof of validity of the system. For this purpose the test system was irradiated with a Sr-90 verification source. In addition, Monte Carlo simulations of the system were performed and their results compared with those obtained experimentally. Moreover, an approximation to the behaviour of a hodoscope composed of 100 scintillating fibres, arranged transversally to one another, was obtained for proton therapy by carrying out different Monte Carlo simulations. (Author)

  4. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-05-12

    Experimental design can be vital when experiments are resource-exhaustive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures, required for instance by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second one is a chemical Enhanced Oil Recovery (EOR) core flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery, from a horizontal core flooded by water, surfactant and polymer, for different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations and the relative permeability exponents for the three phases: water, oil and
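
    The nested structure of the expected-information-gain estimate, which MLMC is used to accelerate, can be sketched with a plain double-loop Monte Carlo estimator. The one-dimensional forward model, prior and noise level below are toy assumptions standing in for the Darcy-flow and EOR setups of the paper:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(theta, xi):
          """Toy forward model g(theta, xi); xi plays the role of the design variable."""
          return np.sin(xi * theta)

      def eig_nested(xi, n_outer=2000, n_inner=500, sigma=0.1):
          """Double-loop Monte Carlo estimate of E[log p(y|theta) - log p(y)]."""
          theta_out = rng.normal(0.0, 1.0, n_outer)                       # prior draws
          y = simulate(theta_out, xi) + rng.normal(0.0, sigma, n_outer)   # synthetic data
          log_like = -0.5 * ((y - simulate(theta_out, xi)) / sigma) ** 2  # up to a constant
          # inner loop: Monte Carlo estimate of the evidence p(y) for each outer sample
          theta_in = rng.normal(0.0, 1.0, (n_inner, 1))
          like_in = np.exp(-0.5 * ((y - simulate(theta_in, xi)) / sigma) ** 2)
          log_evidence = np.log(like_in.mean(axis=0))                     # same constant omitted
          return np.mean(log_like - log_evidence)

      for xi in (0.5, 1.0, 2.0):
          print(f"design xi = {xi}: estimated information gain ~ {eig_nested(xi):.3f}")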

  5. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
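
    The covariance-matrix construction described above can be illustrated with a short Monte Carlo sketch; the error magnitudes and the shared systematic term are invented for illustration and do not come from the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # comparison errors E_i = (experiment_i - prediction_i) for two quantities of interest,
      # built from a shared systematic contribution and independent random contributions
      b = rng.normal(0.0, 0.5, n)                    # systematic error common to both quantities
      r1 = rng.normal(0.0, 0.3, n)                   # random error, quantity 1
      r2 = rng.normal(0.0, 0.4, n)                   # random error, quantity 2
      E = np.column_stack([b + r1, 0.8 * b + r2])    # Monte Carlo samples of the comparison errors

      cov = np.cov(E, rowvar=False)                  # Monte Carlo estimate of the covariance matrix
      print("covariance matrix:\n", cov)

      # 95% constant-probability ellipse of a bivariate normal: x^T C^-1 x = chi2(2 dof, 0.95)
      chi2_95 = 5.991
      eigval, eigvec = np.linalg.eigh(cov)
      print("95% ellipse semi-axes:", np.sqrt(chi2_95 * eigval))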

  6. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  7. Experimental and Monte Carlo study of a prototype optical-fibre hodoscope for high-resolution applications

    International Nuclear Information System (INIS)

    Granero, D.; Blasco, J. M.; Sanchis, E.; Gonzalez, V.; Martin, J. D.; Ballester, F.; Sanchis, E.

    2013-01-01

    The purpose of this work is to test the response of a system composed of 21 scintillating fibres and its readout electronics, as a proof of validity of the system. For this purpose the test system was irradiated with a Sr-90 verification source. In addition, Monte Carlo simulations of the system were performed and their results compared with those obtained experimentally. Moreover, an approximation to the behaviour of a hodoscope composed of 100 scintillating fibres, arranged transversally to one another, was obtained for proton therapy by carrying out different Monte Carlo simulations. (Author)

  8. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Dereverberation is required in various speech processing applications such as hands-free telephony and voice-controlled systems, especially when the signals to be processed were recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions achieve a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  9. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as charge, baryon and lepton number conservation, is checked for the HIJING Monte Carlo program in $pp$ interactions at $\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved: the deviation from zero is at the level of 1--2 GeV/c and is connected with hard jet production; the deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. The azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.
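
    The checks described above amount to summing the final-state four-momenta and quantum numbers of each generated event and comparing them with the initial pp state. A toy sketch (the particle list is invented, not HIJING output):

      import numpy as np

      # each particle: (E, px, py, pz, charge) in GeV and units of e (kinematics illustrative only)
      particles = np.array([
          [ 70.0,  1.2, -0.4,  30.0, +1],
          [ 60.0, -0.9,  0.6, -10.0,  0],
          [ 70.0, -0.3, -0.2, -20.0, +1],
      ])

      sqrt_s = 200.0                                 # assumed pp centre-of-mass energy, GeV
      initial_p4 = np.array([sqrt_s, 0.0, 0.0, 0.0]) # total initial four-momentum in the CM frame
      initial_charge = 2                             # two protons

      total_p4 = particles[:, :4].sum(axis=0)
      total_charge = int(particles[:, 4].sum())

      print("four-momentum imbalance (E, px, py, pz):", total_p4 - initial_p4)
      print("charge imbalance:", total_charge - initial_charge)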

  10. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-01

    informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high dimensional domain. We use the Multilevel Monte Carlo (MLMC) method

  11. Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation

    International Nuclear Information System (INIS)

    Perell, R. L.

    2002-01-01

    The basic methods for solution of the transport equation that are in practical use today are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method is typically less time-consuming, the Monte Carlo method is often preferred for a detailed and general description of three-dimensional geometries and for calculations using cross sections that are point-wise energy dependent. For the analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from the solution of the direct and the adjoint transport equations. Algorithms to calculate sensitivities to cross sections with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons.

  12. Monte Carlo simulation and experimental verification of radiotherapy electron beams

    International Nuclear Information System (INIS)

    Griffin, J.; Deloar, H. M.

    2007-01-01

    Based on fundamental physics and statistics, the Monte Carlo technique is generally accepted as the accurate method for modelling radiation therapy treatments. A Monte Carlo simulation system has been installed, and models of linear accelerators in the more commonly used electron beam modes have been built and commissioned. A novel technique for radiation dosimetry is also being investigated. Combining the advantages of both water-tank and solid-phantom dosimetry, a hollow, thin-walled shell or mask is filled with water and then raised above the natural water surface to produce a volume of water with the desired irregular shape.

  13. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-03-01

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of the natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  14. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    Science.gov (United States)

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  15. Experimental validation of the HARMONIE code

    International Nuclear Information System (INIS)

    Bernard, A.; Dorsselaere, J.P. van

    1984-01-01

    An experimental program of deformation, in air, of different groups of subassemblies (7 to 41 subassemblies) was performed on a scale-1 mock-up in the SPX1 geometry, in order to achieve a first experimental validation of the HARMONIE code. The agreement between tests and calculations was suitable: qualitatively for all the groups, and quantitatively for regular groups of at most 19 subassemblies. The differences come mainly from friction between pads, and secondly from the foot gaps. (author)

  16. Validation of Monte Carlo event generators in the ATLAS Collaboration for LHC Run 2

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note reviews the main steps followed by the ATLAS Collaboration to validate the properties of particle-level simulated events from Monte Carlo event generators in order to ensure the correctness of all event generator configurations and production samples used in physics analyses. A central validation procedure is adopted which permits the continual validation of the functionality and the performance of the ATLAS event simulation infrastructure. Revisions and updates of the Monte Carlo event generators are also monitored. The methodology behind the validation and tools developed for that purpose, as well as various usage cases, are presented. The strategy has proven to play an essential role in identifying possible problems or unwanted features within a restricted timescale, verifying their origin and pointing to possible bug fixes before full-scale processing is initiated.

  17. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    The choice of appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
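
    The energy-penalty acceptance test at the heart of HRMC can be sketched in a few lines. The chi-square values, energies, temperature and weighting below are placeholder numbers, and the combined form of the test is just one common way of writing it, not necessarily the exact expression used in the paper:

      import numpy as np

      rng = np.random.default_rng(2)
      kT = 1.0          # assumed reduced temperature for the energy penalty
      sigma_chi = 1.0   # assumed weighting of the fit-to-data term

      def accept_move(chi2_old, chi2_new, e_old, e_new):
          """HRMC-style Metropolis test combining the RMC chi-square and an energy penalty."""
          delta = (chi2_new - chi2_old) / (2.0 * sigma_chi**2) + (e_new - e_old) / kT
          return delta <= 0.0 or rng.random() < np.exp(-delta)

      # usage: after proposing an atomic move, recompute chi2 against the experimental data
      # and the configuration energy (Coulomb + Lennard-Jones), then keep or revert the move
      chi2_old, chi2_new = 120.4, 118.9
      e_old, e_new = -350.2, -349.7
      print("move accepted:", accept_move(chi2_old, chi2_new, e_old, e_new))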

  18. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications and, more specifically, has received increasing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase-space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator head is necessary, and the supplier commonly does not provide all the required data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications: it is the most efficient method for particle generation and can provide accuracy similar to that obtained with a phsp. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) and two source-to-surface distances (SSD) were used: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. The current model is easy to build and test. (author)

  19. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat a l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Surete Nucleaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel with an initial enrichment of 4.5 wt% 235U burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability to the validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The applicability of the HTC data to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  20. Monte Carlo simulation - a powerful tool to support experimental activities in structure reliability

    International Nuclear Information System (INIS)

    Yuritzinn, T.; Chapuliot, S.; Eid, M.; Masson, R.; Dahl, A.; Moinereau, D.

    2003-01-01

    Monte-Carlo Simulation (MCS) can have different uses in supporting structure reliability investigations and assessments. In this paper we focus our interest on the use of MCS as a numerical tool to support the fitting of the experimental data related to toughness experiments. (authors)

  1. Monte Carlo validation experiments for the gas Cherenkov detectors at the National Ignition Facility and Omega

    Energy Technology Data Exchange (ETDEWEB)

    Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom); Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)

    2013-07-15

    The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are the γ bang-time (GBT) and the burn width, defined respectively as the time between the initial laser-plasma interaction and the peak of the fusion reaction history, and the FWHM of the reaction history. To accurately calculate such parameters the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.
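
    The two burn parameters named above are simple functionals of the reaction history; the sketch below extracts them from a synthetic, assumed-Gaussian history rather than from NIF or Omega data:

      import numpy as np

      t = np.linspace(0.0, 3.0, 3001)                    # time, ns
      history = np.exp(-0.5 * ((t - 1.6) / 0.07) ** 2)   # assumed Gaussian fusion reaction history

      bang_time = t[np.argmax(history)]                  # time of peak emission

      half = history.max() / 2.0
      above = np.where(history >= half)[0]
      fwhm = t[above[-1]] - t[above[0]]                  # burn width (FWHM)

      print(f"bang time  ~ {bang_time:.3f} ns")
      print(f"burn width ~ {fwhm * 1e3:.0f} ps (FWHM)")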

  2. Experimental validation of prototype high voltage bushing

    Science.gov (United States)

    Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.

    2017-08-01

    The prototype high voltage bushing (PHVB) is a scaled-down configuration of the DNB high voltage bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, namely ceramic and fibre-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary providing 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for a smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and allow a precise stress calculation, a quantitative analysis was performed using scanning electron microscopy (SEM) of a brazed sample, and a similar configuration was modelled in the finite element (FE) analysis. The FE analysis of the PHVB was performed to determine the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. An initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends, in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A voltage withstand test at 60 kV DC (20% above the rated voltage) was also performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV bushing. In this paper, the configuration of the PHVB and the experimental validation data are presented.

  3. Correlation of experimental damage data for the development of the UT-MARLOWE Monte Carlo ion implant simulator

    International Nuclear Information System (INIS)

    Morris, M. F.; Tian, S.; Chen, Y.; Tasch, A.; Baumann, S.; Kirchhoff, J. F.; Hummel, R.; Prussin, S.; Kamenitsa, D.; Jackson, J.

    1999-01-01

    The Monte Carlo ion implant simulator UT-MARLOWE has usually been verified using a large array of Secondary Ion Mass Spectroscopy (SIMS) data (∼200 profiles per ion species) (1). A model has recently been developed (1) to explicitly simulate defect production, diffusion, and their interactions during the picosecond 'defect production stage' of ion implantation. In order to thoroughly validate this model, both SIMS and various damage measurements were obtained (primarily channeling Rutherford Backscattering Spectroscopy, Differential Reflectometry and Tapered Groove Profilometry, supported by SEM and XTEM data). In general, the data from the various experimental techniques were consistent, and the Kinetic Accumulation Damage Model (KADM) was developed and validated using these data. This paper discusses the gathering of damage data in conjunction with SIMS in support of the development of an ion implantation simulator.

  4. Experimental validation of wireless communication with chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian [Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xian University of Technology, Xian 710048 (China); Baptista, Murilo S.; Grebogi, Celso [Institute for Complex System and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen AB24 3UE (United Kingdom)

    2016-08-15

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance and consequently minimise the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by feeding the signals generated by an electronic transmitting circuit into an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.
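
    A generic matched-filter decoder, sketched here with an ordinary damped-sine basis pulse instead of the paper's chaotic waveform and impulsive-control generator, shows the correlate-and-threshold structure referred to above; the three-path channel and the signal-to-noise ratio are assumptions:

      import numpy as np

      rng = np.random.default_rng(3)

      fs, T = 100, 1.0                                    # samples per symbol, symbol duration
      t = np.arange(0.0, T, 1.0 / fs)
      pulse = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)  # assumed transmit basis pulse

      def transmit(bit):
          return pulse if bit == 1 else -pulse

      def channel(x, delays=(0, 7, 15), gains=(1.0, 0.5, 0.3), snr_db=5.0):
          """Three-path channel with additive noise, mimicking the emulated wireless link."""
          y = np.zeros(len(x) + max(delays))
          for d, g in zip(delays, gains):
              y[d:d + len(x)] += g * x
          noise_std = np.std(y) / (10 ** (snr_db / 20))
          return y + rng.normal(0.0, noise_std, len(y))

      def decode(y):
          corr = np.correlate(y, pulse, mode="valid")     # matched-filter output at each lag
          return 1 if corr[np.argmax(np.abs(corr))] > 0 else 0

      bits = rng.integers(0, 2, 20)
      decoded = np.array([decode(channel(transmit(b))) for b in bits])
      print("bit errors:", int(np.sum(bits != decoded)))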

  5. Experimental validation of wireless communication with chaos

    International Nuclear Information System (INIS)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S.; Grebogi, Celso

    2016-01-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance and consequently minimise the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by feeding the signals generated by an electronic transmitting circuit into an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.

  6. Experimental validation of wireless communication with chaos.

    Science.gov (United States)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso

    2016-08-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance and consequently minimise the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by feeding the signals generated by an electronic transmitting circuit into an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.

  7. Evaluation of the shield calculation adequacy of radiotherapy rooms through Monte Carlo Method and experimental measures

    International Nuclear Information System (INIS)

    Meireles, Ramiro Conceicao

    2016-01-01

    The shielding calculation methodology for radiotherapy facilities adopted in Brazil and in several other countries is that described in publication 151 of the National Council on Radiation Protection and Measurements (NCRP 151). This methodology, however, employs several approximations that can impact both the construction cost and the radiological safety of the facility. Although the methodology is well established through its widespread use, some of the parameters employed in it have not undergone a detailed assessment of the impact of the various approximations considered. In this work the MCNP5 Monte Carlo code was used to evaluate the above-mentioned approximations. TVL values were obtained for photons in conventional concrete (2.35 g/cm³) at energies of 6, 10 and 25 MeV, first considering an isotropic radiation source impinging perpendicularly on the barriers, and subsequently a lead head shield emitting a beam shaped as a truncated pyramid. Primary-barrier safety margins were assessed taking into account the head shield emitting a pyramid-shaped photon beam at energies of 6, 10, 15 and 18 MeV. A study was also conducted of the attenuation provided by the patient's body at energies of 6, 10, 15 and 18 MeV, leading to new attenuation factors. Experimental measurements were performed in a real radiotherapy room in order to map the leakage radiation emitted by the accelerator head shielding; the results obtained were employed in the Monte Carlo simulation, as well as to validate the entire study. The study results indicate that the TVL values provided by (NCRP, 2005) show discrepancies in comparison with the values obtained by simulation, and that some barriers may be calculated with insufficient thickness. Furthermore, the simulation results show that the additional safety margins considered when calculating the width of the primary
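
    For context, the NCRP 151 primary-barrier formalism that the thesis re-examines reduces to a transmission factor and a number of tenth-value layers. The sketch below applies that standard formalism with illustrative input values; the TVL figures are typical textbook numbers, not the simulated values obtained in the thesis:

      import math

      P = 0.02e-3     # shielding design goal, Sv/week (assumed controlled area)
      d = 6.0         # distance from the target to the point of interest, m
      W = 1000.0      # workload, Gy/week at 1 m
      U = 0.25        # use factor of this barrier
      T = 1.0         # occupancy factor

      B = P * d**2 / (W * U * T)      # required barrier transmission factor
      n_tvl = -math.log10(B)          # number of tenth-value layers

      TVL1, TVLe = 0.37, 0.33         # first and equilibrium TVL in concrete, m (assumed, ~6 MV)
      thickness = TVL1 + (n_tvl - 1.0) * TVLe

      print(f"B = {B:.2e}, n_TVL = {n_tvl:.2f}, barrier thickness ~ {thickness:.2f} m of concrete")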

  8. Experimental and Monte Carlo simulation studies of open cylindrical radon monitoring device using CR-39 detector

    Energy Technology Data Exchange (ETDEWEB)

    Rehman, Fazal-ur- E-mail: fazalr@kfupm.edu.sa; Jamil, K.; Zakaullah, M.; Abu-Jarad, F.; Mujahid, S.A

    2003-07-01

    There are several methods of measuring radon concentrations, but nuclear track detector cylindrical dosimeters are widely employed. In this investigation, the effect of the effective volume of the dosimeters on the registration of alpha tracks in a CR-39 detector was studied. In a series of experiments an optimum radius for a CR-39-based open cylindrical radon dosimeter was found to be about 3 cm. Monte Carlo simulation techniques have been employed to verify the experimental results. In this context, a computer code, Monte Carlo simulation dosimetry (MOCSID), was developed. The Monte Carlo simulation experiments gave the optimum radius of the dosimeters as 3.0 cm, and the experimental results are in good agreement with those obtained by the Monte Carlo design calculations. In addition, plate-out effects of radon progeny were also studied. It was observed that the contribution of radon progeny (218Po and 214Po) plated out on the walls of the dosimeters increases with increasing dosimeter radius and then decreases to zero at a radius of about 3 cm when a point detector is installed at the center of the dosimeter base. In the MOCSID code different types of random number generators were employed. The results of this research are very useful for designing an optimum size of radon dosimeter.

  9. Experimental and Monte Carlo simulation studies of open cylindrical radon monitoring device using CR-39 detector

    International Nuclear Information System (INIS)

    Rehman, Fazal-ur-; Jamil, K.; Zakaullah, M.; Abu-Jarad, F.; Mujahid, S.A.

    2003-01-01

    There are several methods of measuring radon concentrations, but nuclear track detector cylindrical dosimeters are widely employed. In this investigation, the effect of the effective volume of the dosimeters on the registration of alpha tracks in a CR-39 detector was studied. In a series of experiments an optimum radius for a CR-39-based open cylindrical radon dosimeter was found to be about 3 cm. Monte Carlo simulation techniques have been employed to verify the experimental results. In this context, a computer code, Monte Carlo simulation dosimetry (MOCSID), was developed. The Monte Carlo simulation experiments gave the optimum radius of the dosimeters as 3.0 cm, and the experimental results are in good agreement with those obtained by the Monte Carlo design calculations. In addition, plate-out effects of radon progeny were also studied. It was observed that the contribution of radon progeny (218Po and 214Po) plated out on the walls of the dosimeters increases with increasing dosimeter radius and then decreases to zero at a radius of about 3 cm when a point detector is installed at the center of the dosimeter base. In the MOCSID code different types of random number generators were employed. The results of this research are very useful for designing an optimum size of radon dosimeter.
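
    A purely geometric Monte Carlo sketch of the effective-volume idea (not the MOCSID code): alphas emitted uniformly inside the cylindrical cup can only reach a small detector at the centre of the base if they are emitted within their range in air, so the response stops growing once the radius exceeds roughly that range. The cup height, alpha range and detector-size cap below are assumed values:

      import numpy as np

      rng = np.random.default_rng(4)

      def relative_response(R, H=5.0, alpha_range=4.0, n=200_000):
          """Relative alpha-track response of a point detector at the centre of the cup base."""
          # uniform emission points inside an open cylinder of radius R and height H (cm)
          r = R * np.sqrt(rng.random(n))
          phi = 2.0 * np.pi * rng.random(n)
          x, y = r * np.cos(phi), r * np.sin(phi)
          z = H * rng.random(n)
          dist = np.maximum(np.sqrt(x**2 + y**2 + z**2), 0.25)  # cap for an assumed finite detector size
          # isotropic emission: the chance of hitting a small detector falls off as 1/(4*pi*dist^2),
          # and only alphas emitted within their range in air can reach it at all
          per_decay = np.where(dist <= alpha_range, 1.0 / (4.0 * np.pi * dist**2), 0.0)
          volume = np.pi * R**2 * H          # decays per unit time scale with the sensitive volume
          return volume * per_decay.mean()

      for R in (1.0, 2.0, 3.0, 4.0, 5.0):
          print(f"R = {R:.1f} cm -> relative response {relative_response(R):.3f}")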

  10. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    International Nuclear Information System (INIS)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S.; Vieira, Jose W.; Pacheco, Ludimila M.; Chaves, Rosa M.

    2011-01-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECM) of emission tomography (PET and SPECT) systems. This paper is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) evaluating the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results for steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the current state of the art of the validation. (author)

  11. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S., E-mail: falima@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Vieira, Jose W.; Pacheco, Ludimila M. [Instituto Federal de Educacao, Ciencia e Tecnologia (IFPE), Recife, PE (Brazil); Chaves, Rosa M. [Instituto de Radium e Supervoltagem Ivo Roesler, Recife, PE (Brazil)

    2011-07-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECM) of emission tomography (PET and SPECT) systems. This paper is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) evaluating the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results for steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the current state of the art of the validation. (author)
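
    The NEC figure of merit mentioned in both records has a simple, widely used form; the sketch below evaluates it for a few invented coincidence rates (the k factor and the rates are assumptions, and the NEMA NU 2-2007 procedure itself involves much more than this formula):

      # noise-equivalent count rate in its common form NECR = T^2 / (T + S + k*R), where
      # T, S, R are the true, scattered and random coincidence rates and k depends on how
      # the randoms are estimated (k = 1 assumed here)
      def necr(trues, scatters, randoms, k=1.0):
          total = trues + scatters + k * randoms
          return trues**2 / total if total > 0 else 0.0

      # usage: sweep a few activity-dependent rates (counts per second)
      for T, S, R in [(20e3, 8e3, 5e3), (60e3, 30e3, 40e3), (90e3, 55e3, 120e3)]:
          print(f"T={T:.0f}, S={S:.0f}, R={R:.0f} -> NECR = {necr(T, S, R):.0f} cps")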

  12. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied in order to perform the weighings with smaller uncertainties. In this work the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM to the preparation of radioactive standards were fulfilled. (author)
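
    The pattern being validated, an analytic GUM propagation checked against a Monte Carlo propagation, can be sketched for a generic difference weighing; the measurement model and the numbers below are illustrative and are not the Lourenco-Bobin elimination model itself:

      import numpy as np

      rng = np.random.default_rng(5)

      r1, u_r1 = 10.00213, 0.00005    # g: balance reading before dispensing, standard uncertainty
      r2, u_r2 = 12.34789, 0.00005    # g: balance reading after dispensing, standard uncertainty

      # GUM: for the difference m = r2 - r1 with uncorrelated readings, u(m)^2 = u(r1)^2 + u(r2)^2
      m = r2 - r1
      u_gum = np.hypot(u_r1, u_r2)

      # Monte Carlo: sample the readings and look at the spread of the computed mass
      n = 1_000_000
      m_mc = rng.normal(r2, u_r2, n) - rng.normal(r1, u_r1, n)

      print(f"m = {m:.5f} g, u_GUM = {u_gum * 1e3:.4f} mg, u_MC = {m_mc.std(ddof=1) * 1e3:.4f} mg")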

  13. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    Energy Technology Data Exchange (ETDEWEB)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2009-03-21

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  14. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    International Nuclear Information System (INIS)

    Espana, S; Herraiz, J L; Vicente, E; Udias, J M; Vaquero, J J; Desco, M

    2009-01-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  15. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results of the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. An excellent agreement is found in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the results of the model-estimated backscattered luminance differ from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results since the mean difference between the calculations and experimental measurements is smaller than the experimental uncertainty
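
    A single-scattering backscatter integral of the general kind described above can be sketched as follows. The Koschmieder relation for the extinction coefficient is standard, but the constant backscatter phase-function value, the overlap function and the integration limits are invented placeholders rather than the paper's model:

      import numpy as np

      def backscattered_luminance(V, r_min=1.0, r_max=200.0, p_back=0.05):
          """Relative single-scatter return for meteorological visibility V (m)."""
          k = 3.912 / V                                    # Koschmieder extinction coefficient, 1/m
          r = np.linspace(r_min, r_max, 20_000)            # range along the line of sight, m
          overlap = np.clip((r - r_min) / 5.0, 0.0, 1.0)   # assumed emitter/receiver overlap function
          integrand = k * p_back * np.exp(-2.0 * k * r) / r**2 * overlap
          return np.trapz(integrand, r)

      for V in (20.0, 50.0, 100.0, 400.0):                 # meteorological visibility, m
          print(f"V = {V:5.0f} m -> relative backscattered luminance {backscattered_luminance(V):.3e}")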

  16. A validation study of the BURNUP and associated options of the MONTE CARLO neutronics code MONK5W

    International Nuclear Information System (INIS)

    Howard, E.A.

    1985-11-01

    This is a report on the validation of the burnup option of the Monte Carlo Neutronics Code MONK5W, together with the associated facilities which allow for control rod movements and power changes. The validation uses reference solutions produced by the Deterministic Neutronics Code LWR-WIMS for a 2D model which represents a whole reactor calculation with control rod movements. (author)

  17. Experimental and Monte Carlo simulated spectra of a liquid-metal-jet x-ray source

    International Nuclear Information System (INIS)

    Marziani, M.; Gambaccini, M.; Di Domenico, G.; Taibi, A.; Cardarelli, P.

    2014-01-01

    A prototype x-ray system based on a liquid-metal-jet anode was evaluated within the framework of the LABSYNC project. The generated spectrum was measured using a CZT-based spectrometer and was compared with spectra simulated by three Monte Carlo codes: MCNPX, PENELOPE and EGS5. Notable differences in the simulated spectra were found. These are mainly attributable to differences in the models adopted for the electron-impact ionization cross section. The simulation that most closely reproduces the experimentally measured spectrum was provided by PENELOPE. - Highlights: • The x-ray spectrum of a liquid-jet x-ray anode was measured with a CZT spectrometer. • Results were compared with Monte Carlo simulations using MCNPX, PENELOPE, EGS5. • Notable differences were found among the Monte Carlo simulated spectra. • The key role was played by the electron-impact ionization cross-section model used. • The experimentally measured spectrum was closely reproduced by the PENELOPE code

  18. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-01-01

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of the natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  19. Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

    CERN Document Server

    Watt, G.

    2012-01-01

    We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...
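
    The data-replica procedure can be illustrated with a toy fit: each replica shifts the data within its errors, the fit is repeated, and the spread of the refitted parameters is the uncertainty to be compared with the Hessian Delta(chi^2) = 1 band. The linear model, pseudodata and replica count below are placeholders standing in for the full global PDF analysis:

      import numpy as np

      rng = np.random.default_rng(6)

      x = np.linspace(0.1, 1.0, 25)
      sigma = 0.05 * np.ones_like(x)
      data = 1.5 * x + 0.3 + rng.normal(0.0, sigma)        # pseudo-measurements of a straight line

      def fit(y):
          """Weighted least-squares fit of y = a*x + b (stands in for the global PDF fit)."""
          A = np.column_stack([x, np.ones_like(x)])
          W = np.diag(1.0 / sigma**2)
          return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

      central = fit(data)
      replicas = np.array([fit(data + rng.normal(0.0, sigma)) for _ in range(500)])

      print("central fit (a, b):", central)
      print("replica spread    :", replicas.std(axis=0, ddof=1))
      # for this consistent Gaussian toy the replica spread should agree with the Hessian
      # (Delta chi^2 = 1) errors, sqrt(diag((A^T W A)^-1))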

  20. TU-H-CAMPUS-IeP1-02: Validation of a CT Monte Carlo Software

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, R; Wulff, J; Penchev, P [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); Zink, K [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); University Medical Center Giessen and Marburg, Marburg (Germany)

    2016-06-15

    Purpose: To validate the in-house developed CT Monte Carlo calculation tool GMctdospp against the reference simulation sets provided by the AAPM in the new report 195. Methods: Deposited energy was calculated in four segments (test 1) and two 10 cm long cylinders (test 2) inside a CTDI phantom (following case #4 of the AAPM report 195). The x-ray point source of a given 120 kVp spectrum was collimated to a fan beam with two thicknesses (10 mm, 80 mm) for a static and a rotational setup. In addition, a given chest geometry was used to calculate the deposited energy in several organs for a 0° static and a rotational beam (following case #5 of the AAPM report 195). The results of GMctdospp were compared against the mean value of the four quoted Monte Carlo codes (EGSnrc, Geant4, MCNP and Penelope). Results: Calculated values showed no outliers in any of the cases. Differences between GMctdospp and the quoted mean value were always of similar magnitude to those of the quoted codes. For case #4 (CTDI phantom) the relative differences were within 1.5 %, on average 0.4 %, and for case #5 (chest phantom) within 2.5 %, on average 0.85 %. Conclusion: The results confirm that the overall uncertainty of the Monte Carlo calculation chain in GMctdospp is below 2.5 %, and for most cases even smaller. This can be considered small compared to other sources of uncertainty, e.g. the virtual source and patient models. The photon transport implemented in GMctdospp inside a voxel-based patient geometry was successfully verified.
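
    A minimal sketch of the comparison metric implied above, i.e. the relative difference of a code's result with respect to the mean of the reference Monte Carlo codes. All numerical values are illustrative placeholders, not results from the report.

    ```python
    # Relative difference of one code's result vs. the mean of reference codes.
    reference = {"EGSnrc": 1.002, "Geant4": 0.998, "MCNP": 1.004, "PENELOPE": 0.996}
    gmctdospp = 1.005                       # hypothetical GMctdospp result (same units)

    mean_ref = sum(reference.values()) / len(reference)
    rel_diff = 100.0 * (gmctdospp - mean_ref) / mean_ref
    print(f"relative difference vs. code mean: {rel_diff:.2f} %")
    ```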

  1. Gamma irradiator dose mapping: a Monte Carlo simulation and experimental measurements

    International Nuclear Information System (INIS)

    Rodrigues, Rogerio R.; Ribeiro, Mariana A.; Grynberg, Suely E.; Ferreira, Andrea V.; Meira-Belo, Luiz Claudio; Sousa, Romulo V.; Sebastiao, Rita de C.O.

    2009-01-01

    Gamma irradiator facilities can be used in a wide range of applications such as biological and chemical research, food treatment and sterilization of medical devices and products. Dose mapping must be performed in such facilities in order to establish plant operational parameters, such as dose uniformity, source utilization efficiency and maximum and minimum dose positions. The isodose curves are generally measured using dosimeters distributed throughout the device, and this procedure often consumes a large amount of dosimeters, irradiation time and manpower. However, a detailed determination of the isodose curves of the irradiation facility can be performed using Monte Carlo simulation, which significantly reduces the monitoring with dosimeters. The present work evaluates the absorbed dose in the CDTN/CNEN Gammacell Irradiation Facility using the Monte Carlo N-Particle (MCNP) code. The Gammacell 220, serial number 39, was produced by Atomic Energy of Canada Limited and was loaded with sources of 60 Co. Dose measurements using TLD and Fricke dosimeters were also performed to validate the calculations. The good agreement of the results shows that Monte Carlo simulations can be used as a predictive tool for irradiation planning at the CDTN/CNEN Gamma Cell Irradiator. (author)

  2. Validation and verification of the ORNL Monte Carlo codes for nuclear safety analysis

    International Nuclear Information System (INIS)

    Emmett, M.B.

    1993-01-01

    The process of ensuring the quality of computer codes can be very time consuming and expensive. The Oak Ridge National Laboratory (ORNL) Monte Carlo codes all predate the existence of quality assurance (QA) standards and configuration control. The number of person-years and the amount of money spent on code development make it impossible to adhere strictly to all the current requirements. At ORNL, the Nuclear Engineering Applications Section of the Computing Applications Division is responsible for the development, maintenance, and application of the Monte Carlo codes MORSE and KENO. The KENO code is used for doing criticality analyses; the MORSE code, which has two official versions, CGA and SGC, is used for radiation transport analyses. Because KENO and MORSE were very thoroughly checked out over the many years of extensive use both in the United States and in the international community, the existing codes were "baselined." This means that the versions existing at the time the original configuration plan is written are considered to be validated and verified code systems based on the established experience with them

  3. Development and validation of Monte Carlo dose computations for contrast-enhanced stereotactic synchrotron radiation therapy

    International Nuclear Information System (INIS)

    Vautrin, M.

    2011-01-01

    Contrast-enhanced stereotactic synchrotron radiation therapy (SSRT) is an innovative technique based on localized dose-enhancement effects obtained by reinforced photoelectric absorption in the tumor. Medium-energy monochromatic X-rays (50 - 100 keV) are used for irradiating tumors previously loaded with a high-Z element. Clinical trials of SSRT are being prepared at the European Synchrotron Radiation Facility (ESRF), where an iodinated contrast agent will be used. In order to compute the energy deposited in the patient (dose), a dedicated treatment planning system (TPS) has been developed for the clinical trials, based on the ISOgray TPS. This work focuses on the SSRT-specific modifications of the TPS, especially to the PENELOPE-based Monte Carlo dose engine. The TPS uses a dedicated Monte Carlo simulation of medium-energy polarized photons to compute the deposited energy in the patient. Simulations are performed considering the synchrotron source, the modeled beamline geometry and finally the patient. Specific materials were also implemented in the voxelized geometry of the patient, to account for iodine concentrations in the tumor. The computation process has been optimized and parallelized. Finally, a specific computation of absolute doses and associated irradiation times (instead of monitor units) was implemented. The dedicated TPS was validated with depth dose curves, dose profiles and absolute dose measurements performed at the ESRF in a water tank and solid water phantoms with or without bone slabs. (author) [fr

  4. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    International Nuclear Information System (INIS)

    McKinney, Gregg W.

    2012-01-01

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  5. Validation of the Monte Carlo Criticality Program KENO V. a for highly-enriched uranium systems

    Energy Technology Data Exchange (ETDEWEB)

    Knight, J.R.

    1984-11-01

    A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
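
    As a hedged sketch of how such a validation is commonly summarized (not the Y-12 acceptance criteria themselves): for benchmark experiments that were critical, k_eff = 1 by construction, so the calculated values yield a bias and a spread that feed into a subcritical safety margin. The k_eff values below are invented placeholders, not KENO V.a results.

    ```python
    # Summarizing a criticality-code validation: bias and spread of calculated
    # k_eff for experiments that were critical (true k_eff = 1).
    import statistics

    k_calc = [0.9962, 1.0018, 0.9985, 1.0041, 0.9978, 0.9993, 1.0007]  # hypothetical

    mean_k = statistics.mean(k_calc)
    std_k = statistics.stdev(k_calc)
    bias = mean_k - 1.0                     # negative bias => code under-predicts
    print(f"mean k_eff = {mean_k:.4f}, bias = {bias:+.4f}, 1-sigma = {std_k:.4f}")
    # A safety criterion would then require calculated k_eff of plant systems to
    # stay below an upper subcritical limit accounting for this bias and spread.
    ```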

  6. Monte Carlo calculations and experimental measurements of dosimetric parameters of the IRA-103Pd brachytherapy source

    International Nuclear Information System (INIS)

    Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang

    2008-01-01

    This article presents a brachytherapy source, having 103 Pd adsorbed onto a cylindrical silver rod, that has been developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters, following standard thermoluminescent dosimetry methods, in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model 103 Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA- 103 Pd source in water was found to be 0.678 cGy h -1 U -1 with an approximate uncertainty of ±0.1%. The anisotropy function, F(r,θ), and the radial dose function, g(r), of the IRA- 103 Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms
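
    For orientation, the quantities reported above (dose rate constant Λ, radial dose function g(r), anisotropy function F(r,θ)) enter the AAPM TG-43U1 dose-rate equation. The sketch below evaluates that equation with a point-source geometry factor G(r) = 1/r² for brevity; the tabulated g(r) and F(r,θ) values and the air-kerma strength are placeholder assumptions, not the published data.

    ```python
    # TG-43U1-style dose rate: D(r,theta) = S_K * Lambda * [G(r)/G(r0)] * g(r) * F(r,theta)
    import numpy as np

    S_K = 1.0          # air-kerma strength, U (assumed)
    LAMBDA = 0.678     # dose rate constant, cGy / (h * U), value quoted above

    r_grid = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # cm
    g_tab = np.array([1.28, 1.00, 0.63, 0.39, 0.14])      # placeholder g(r)
    theta_grid = np.array([0.0, 30.0, 60.0, 90.0])        # degrees
    F_tab = np.array([0.60, 0.80, 0.95, 1.00])            # placeholder F(r, theta)

    def dose_rate(r_cm, theta_deg, r0=1.0):
        """Dose rate in cGy/h at (r, theta), point-source approximation."""
        g = np.interp(r_cm, r_grid, g_tab)
        F = np.interp(theta_deg, theta_grid, F_tab)
        geometry_ratio = (r0 / r_cm) ** 2      # G(r)/G(r0) for a point source
        return S_K * LAMBDA * geometry_ratio * g * F

    print(f"dose rate at r = 2 cm, theta = 90 deg: {dose_rate(2.0, 90.0):.4f} cGy/h")
    ```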

  7. DEAR Monte Carlo simulation versus experimental data in measurements with the DEAR NTP setup

    International Nuclear Information System (INIS)

    Bragadireanu, A.M.; Iliescu, M.; Petrascu, C.; Ponta, T.

    1999-01-01

    The DEAR NTP setup was installed in DAΦNE and has been taking background data since February 1999. The goal of this work is to compare the measurements, in terms of charged-particle hits (clusters), with the DEAR Monte Carlo simulation, taking into account the main effects through which particles are lost from the circulating beams: the Touschek effect and beam-gas interaction. It should be noted that, during this period, no collisions between electrons and positrons were achieved at the DEAR Interaction Point (IP); consequently there are no experimental data on the hadronic background coming from φ-decays directly, or as secondary products of hadronic interactions. The NTP setup was shielded with lead and copper, which gives a shielding factor of about 4. In parallel with the NTP setup, the signals from two scintillator slabs (150 x 80 x 2 mm), read out by 4 PMTs positioned below the NTP setup and facing the IP, were digitized and counted using a National Instruments Timer/Counter Card. To compare the experimental data with the results of the Monte Carlo simulation, we selected periods with only one circulating beam (electrons or positrons), in order to have a clean data set, and we selected data files with a CCD occupancy lower than 5%. As for the X-rays, the statistics were too poor to allow any quantitative comparison. The comparison between the Monte Carlo, the CCD data and the kaon monitor data is shown for the two beams. The agreement is fairly good and promising for the validation of the routines that describe the experimental setup and the physical processes occurring in the accelerator environment. (authors)

  8. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S. [Paul-Scherrer-Institute Wuerenlingen and Villigen, Villigen (Switzerland)]; Arbuzov, A. [Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics]; Balossini, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Pavia (Italy)]; and others

    2009-12-15

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low-energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on τ decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained with energy scans as well as with radiative return, to determine luminosities and τ decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  9. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    International Nuclear Information System (INIS)

    Actis, S.; Arbuzov, A.

    2009-12-01

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low-energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on τ decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained with energy scans as well as with radiative return, to determine luminosities and τ decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  10. A new cubic phantom for PET/CT dosimetry: Experimental and Monte Carlo characterization

    International Nuclear Information System (INIS)

    Belinato, Walmir; Silva, Rogerio M.V.; Souza, Divanizia N.; Santos, William S.; Caldas, Linda V.E.; Perini, Ana P.; Neves, Lucio P.

    2015-01-01

    In recent years, positron emission tomography (PET) associated with multidetector computed tomography (MDCT) has become a widely used diagnostic technique for evaluating various malignant tumors and other diseases. However, during PET/CT examinations, the doses of ionizing radiation received by the internal organs of patients may be substantial. To study the doses involved in PET/CT procedures, a new cubic phantom of overlapping acrylic plates was developed and characterized. This phantom has a deposit for the placement of the fluorine-18 fluoro-2-deoxy-D-glucose ( 18 F-FDG) solution. There are also small holes near the faces for the insertion of optically stimulated luminescence dosimeters (OSLD). The holes for the OSLDs are positioned at different distances from the 18 F-FDG deposit. The experimental results were obtained on two PET/CT devices operating with different parameters. Differences in the absorbed doses were observed in the OSLD measurements due to the non-orthogonal positioning of the detectors inside the phantom. This phantom was also evaluated using Monte Carlo simulations with the MCNPX code. The phantom and the geometrical characteristics of the equipment were carefully modeled in the MCNPX code, in order to develop a new methodology for the comparison of experimental and simulated results, as well as to allow the characterization of PET/CT equipment in Monte Carlo simulations. All results showed good agreement, proving that this new phantom may be applied in such experiments. (authors)

  11. A new cubic phantom for PET/CT dosimetry: Experimental and Monte Carlo characterization

    Energy Technology Data Exchange (ETDEWEB)

    Belinato, Walmir [Departamento de Ensino, Instituto Federal de Educacao, Ciencia e Tecnologia da Bahia, Campus Vitoria da Conquista, Zabele, Av. Amazonas 3150, 45030-220 Vitoria da Conquista, BA (Brazil); Silva, Rogerio M.V.; Souza, Divanizia N. [Departamento de Fisica, Universidade Federal de Sergipe-UFS, Sao Cristovao, Sergipe (Brazil); Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP, Av. Prof. Lineu Prestes, 2242, Cidade Universitaria, 05508-000 Sao Paulo SP (Brazil); Perini, Ana P.; Neves, Lucio P. [Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP, Av. Prof. Lineu Prestes, 2242, Cidade Universitaria, 05508-000 Sao Paulo SP (Brazil); Instituto de Fisica, Universidade Federal de Uberlandia, Caixa Postal 593, 38400-902, Uberlandia, MG (Brazil)

    2015-07-01

    In recent years, positron emission tomography (PET) associated with multidetector computed tomography (MDCT) has become a widely used diagnostic technique for evaluating various malignant tumors and other diseases. However, during PET/CT examinations, the doses of ionizing radiation received by the internal organs of patients may be substantial. To study the doses involved in PET/CT procedures, a new cubic phantom of overlapping acrylic plates was developed and characterized. This phantom has a deposit for the placement of the fluorine-18 fluoro-2-deoxy-D-glucose ({sup 18}F-FDG) solution. There are also small holes near the faces for the insertion of optically stimulated luminescence dosimeters (OSLD). The holes for the OSLDs are positioned at different distances from the {sup 18}F-FDG deposit. The experimental results were obtained on two PET/CT devices operating with different parameters. Differences in the absorbed doses were observed in the OSLD measurements due to the non-orthogonal positioning of the detectors inside the phantom. This phantom was also evaluated using Monte Carlo simulations with the MCNPX code. The phantom and the geometrical characteristics of the equipment were carefully modeled in the MCNPX code, in order to develop a new methodology for the comparison of experimental and simulated results, as well as to allow the characterization of PET/CT equipment in Monte Carlo simulations. All results showed good agreement, proving that this new phantom may be applied in such experiments. (authors)

  12. Mechanism of Kinetically Controlled Capillary Condensation in Nanopores: A Combined Experimental and Monte Carlo Approach.

    Science.gov (United States)

    Hiratsuka, Tatsumasa; Tanaka, Hideki; Miyahara, Minoru T

    2017-01-24

    We establish the rule governing capillary condensation from the metastable state in nanoscale pores on the basis of transition state theory. Conventional thermodynamic theories cannot do this because metastable capillary condensation inherently involves an activated process. We therefore compute argon adsorption isotherms on cylindrical pore models and atomistic silica pore models mimicking the MCM-41 materials using the grand canonical Monte Carlo and gauge cell Monte Carlo methods, and evaluate the rate constant for capillary condensation with transition state theory. The results reveal that the rate drastically increases with a small increase in the chemical potential of the system, and that metastable capillary condensation occurs, for any mesopore, when the rate constant reaches a universal critical value. Furthermore, a careful comparison between experimental adsorption isotherms and the simulated ones on the atomistic silica pore models reveals that the rate constant of the real system also has a universal value. With this finding, we can successfully estimate the experimental capillary condensation pressure over a wide range of temperatures and pore sizes by simply applying the critical rate constant.
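
    The rate criterion described above can be sketched schematically: writing the condensation rate as k = ν0·exp(-ΔW/kT), with ΔW a nucleation barrier that shrinks as the chemical potential rises, condensation is predicted once k reaches a critical value. The attempt frequency, barrier model, and critical rate below are illustrative assumptions, not the values derived in the paper.

    ```python
    # Transition-state-theory criterion for metastable capillary condensation.
    import numpy as np

    kB_T = 1.0                      # energies in units of kT
    nu0 = 1.0e12                    # attempt frequency, 1/s (assumed)
    k_crit = 1.0e-2                 # critical rate constant, 1/s (assumed)

    def barrier(mu):
        """Placeholder nucleation barrier (in kT) that shrinks as mu increases."""
        return 60.0 * np.exp(-3.0 * mu)

    mu_values = np.linspace(0.0, 2.0, 2001)
    rates = nu0 * np.exp(-barrier(mu_values) / kB_T)

    idx = np.argmax(rates >= k_crit)        # first mu at which the rate is critical
    print(f"condensation predicted at mu = {mu_values[idx]:.3f} (reduced units)")
    ```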

  13. Validation of the Monte Carlo criticality program KENO IV and the Hansen-Roach sixteen-energy-group-cross sections for high-assay uranium systems

    International Nuclear Information System (INIS)

    Handley, G.R.; Masters, L.C.; Stachowiak, R.V.

    1981-01-01

    Validation of the Monte Carlo criticality code, KENO IV, and the Hansen-Roach sixteen-energy-group cross sections was accomplished by calculating the effective neutron multiplication constant, k_eff, of 29 experimentally critical assemblies which had uranium enrichments of 92.6% or higher in the uranium-235 isotope. The experiments were chosen so that a large variety of geometries and of neutron energy spectra were covered. Difficulties in calculating the k_eff of minimally reflected or unreflected systems containing high-concentration uranyl nitrate solution led to the separate examination of five cases.

  14. Monte Carlo calculations and experimental measurements of dosimetric parameters of the IRA-103Pd source

    International Nuclear Information System (INIS)

    Sadeghi, Mahdi; Hosseini, Hamed; Raisali, Gholamreza

    2008-01-01

    Full text: The use of 103 Pd seed sources for permanent prostate implantation has become a popular brachytherapy application. As recommended by the AAPM, the dosimetric characteristics of a new source must be determined by experiment and by Monte Carlo simulation before its use in clinical applications; the goal of this report is therefore the experimental and theoretical determination of the dosimetric characteristics of this source following the recommendations of the AAPM TG-43U1 protocol. Figure 1 shows the geometry of the IRA- 103 Pd source. The source consists of a cylindrical silver core, 0.3 cm long x 0.05 cm in diameter, onto which a 0.5 nm layer of 103 Pd has been uniformly adsorbed. The effective active length of the source is 0.3 cm, and the silver core is encapsulated inside a hollow titanium tube, 0.45 cm long with inner and outer diameters of 0.07 cm and 0.08 cm, closed by two end caps. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to determine the relevant dosimetric parameters of the source. The geometry of the Monte Carlo simulation performed in this study consisted of a sphere of 30 cm diameter. Dose distributions around this source were measured in two Perspex phantoms using TLD chips. For these measurements, slabs of Perspex material were machined to accommodate the source and the TLD chips. A value of 0.67 cGy h -1 U -1 (±1%) for Λ was calculated as the ratio of d(r 0 ,θ 0 ) and s K , which may be compared with Λ values obtained for other 103 Pd sources. Calculated and measured values of the dosimetric parameters of the source, including the radial dose function, g(r), and the anisotropy function, F(r,θ), are shown in separate figures. The radial dose function, g(r), for the IRA- 103 Pd source and other 103 Pd sources is included in Fig. 2. Comparison between the measured and Monte Carlo simulated radial dose function, g(r), and anisotropy function, F(r,θ), of this source demonstrated that they are in good agreement with each other. The value of Λ is

  15. Monte Carlo characterization of materials for prosthetic implants and dosimetric validation of Pinnacle3 TPS

    International Nuclear Information System (INIS)

    Palleri, Francesca; Baruffaldi, Fabio; Angelini, Anna Lisa; Ferri, Andrea; Spezi, Emiliano

    2008-01-01

    In external beam radiotherapy the calculation of dose distributions for patients with hip prostheses is critical. Metallic implants not only degrade image quality but also perturb the dose distribution. Conventional treatment planning systems do not accurately account for high-Z prosthetic implant heterogeneities, especially at interfaces. The materials studied in this work were chosen on the basis of a statistical survey of the hip prostheses implanted in 70 medical centres. The first aim of this study is a systematic characterization of the materials used for hip prostheses, carried out with the BEAMnrc Monte Carlo code. The second aim is to evaluate the capabilities of a specific treatment planning system, Pinnacle 3 , for dose calculations in the presence of metals, including regions close to high-Z gradients. In both cases an accurate comparison against experimental measurements was carried out for two clinical photon beam energies (6 MV and 18 MV) and for two experimental set-ups: metallic cylinders inserted in a water phantom and in a specifically built PMMA slab. Our results show an agreement within 2% between experiments and MC simulations. TPS calculations agree with experiments within 3%.

  16. Monte Carlo characterization of materials for prosthetic implants and dosimetric validation of Pinnacle 3 TPS

    Science.gov (United States)

    Palleri, Francesca; Baruffaldi, Fabio; Angelini, Anna Lisa; Ferri, Andrea; Spezi, Emiliano

    2008-12-01

    In external beam radiotherapy the calculation of dose distributions for patients with hip prostheses is critical. Metallic implants not only degrade image quality but also perturb the dose distribution. Conventional treatment planning systems do not accurately account for high-Z prosthetic implant heterogeneities, especially at interfaces. The materials studied in this work were chosen on the basis of a statistical survey of the hip prostheses implanted in 70 medical centres. The first aim of this study is a systematic characterization of the materials used for hip prostheses, carried out with the BEAMnrc Monte Carlo code. The second aim is to evaluate the capabilities of a specific treatment planning system, Pinnacle 3, for dose calculations in the presence of metals, including regions close to high-Z gradients. In both cases an accurate comparison against experimental measurements was carried out for two clinical photon beam energies (6 MV and 18 MV) and for two experimental set-ups: metallic cylinders inserted in a water phantom and in a specifically built PMMA slab. Our results show an agreement within 2% between experiments and MC simulations. TPS calculations agree with experiments within 3%.

  17. Monte Carlo and experimental internal radionuclide dosimetry in RANDO head phantom

    International Nuclear Information System (INIS)

    Ghahraman Asl, Ruhollah; Nasseri, Shahrokh; Parach, Ali Asghar; Zakavi, Seyed Rasoul; Momennezhad, Mehdi; Davenport, David

    2015-01-01

    Monte Carlo techniques are widely employed in internal dosimetry to obtain better estimates of absorbed dose distributions from irradiation sources in medicine. Accurate 3D absorbed dosimetry would be useful for assessing the risk of deterministic and stochastic biological effects for both therapeutic and diagnostic radiopharmaceuticals in nuclear medicine. The goal of this study was to experimentally evaluate the use of the Geant4 Application for Tomographic Emission (GATE) Monte Carlo package for 3D internal dosimetry using the head portion of the RANDO phantom. The GATE package (version 6.1) was used to create a voxel model of a human head phantom from computed tomography (CT) images. The matrix dimensions were 319 × 216 × 30 voxels (0.7871 × 0.7871 × 5 mm 3 ). Measurements were made using thermoluminescent dosimeters (TLD-100). A rod-shaped source with 94 MBq of 99m Tc activity was positioned in the brain tissue of the posterior part of the human head phantom in slice number 2. The results of the simulation were compared with the measured mean absorbed dose per unit cumulated activity (S value). The absorbed dose was also calculated for each slice of the digital model of the head phantom, and dose volume histograms (DVHs) were computed to analyze the absolute and relative doses in each slice from the simulation data. The S values calculated by the GATE and TLD methods showed a significant correlation (correlation coefficient, r 2 ≥ 0.99, p < 0.05) with each other. The maximum relative percentage differences were ≤14 % for most cases. DVHs demonstrated a decrease in dose toward the lower slices of the head phantom. Based on the results obtained from the GATE Monte Carlo package, it can be deduced that a complete dosimetry simulation study, from imaging to absorbed dose map calculation, can be executed within a single framework.
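
    The S value compared above follows the MIRD definition, i.e. the mean absorbed dose to a target per unit cumulated activity in the source. A minimal sketch, assuming a Monte Carlo tally of mean energy deposited per decay and known target masses (all numbers below are invented placeholders):

    ```python
    # S value = (mean energy deposited in target per decay) / (target mass).
    MEV_TO_J = 1.602176634e-13

    # Hypothetical Monte Carlo tally: mean energy (MeV) deposited per decay of
    # the source, scored in a few target regions, and the region masses in kg.
    energy_per_decay_MeV = {"brain_slice_2": 2.1e-2, "brain_slice_3": 6.5e-3}
    mass_kg = {"brain_slice_2": 0.12, "brain_slice_3": 0.12}

    for region, e_dep in energy_per_decay_MeV.items():
        s_value = e_dep * MEV_TO_J / mass_kg[region]      # Gy per (Bq * s)
        print(f"S({region} <- source) = {s_value:.3e} Gy/(Bq*s)")
    ```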

  18. Experimental validation of a topology optimized acoustic cavity

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole; Fernandez Grande, Efren

    2015-01-01

    This paper presents the experimental validation of an acoustic cavity designed using topology optimization with the goal of minimizing the sound pressure locally for monochromatic excitation. The presented results show good agreement between simulations and measurements. The effect of damping...

  19. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Experimental validation of optimum resistance moment of concrete slabs reinforced ... other solutions to combat corrosion problems in steel reinforced concrete. ... Eight specimens of two-way spanning slabs reinforced with CFRP bars were ...

  20. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  1. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.

  2. Laser-wakefield accelerators for medical phase contrast imaging: Monte Carlo simulations and experimental studies

    Science.gov (United States)

    Cipiccia, S.; Reboredo, D.; Vittoria, Fabio A.; Welsh, G. H.; Grant, P.; Grant, D. W.; Brunetti, E.; Wiggins, S. M.; Olivo, A.; Jaroszynski, D. A.

    2015-05-01

    X-ray phase contrast imaging (X-PCi) is a very promising method of dramatically enhancing the contrast of X-ray images of microscopic weakly absorbing objects and soft tissue, which may lead to significant advancement in medical imaging with high resolution and low dose. The interest in X-PCi is giving rise to a demand for effective simulation methods. Monte Carlo codes have proved to be a valuable tool for studying X-PCi, including coherent effects. The laser-plasma wakefield accelerator (LWFA) is a very compact particle accelerator that uses plasma as the accelerating medium. Accelerating gradients in excess of 1 GV/cm can be obtained, which makes these accelerators over a thousand times more compact than conventional ones. LWFAs are also sources of brilliant betatron radiation, which is promising for applications including medical imaging. We present a study that explores the potential of LWFA-based betatron sources for medical X-PCi, investigates their resolution limit using numerical simulations based on the FLUKA Monte Carlo code, and presents preliminary experimental results.

  3. A discussion on validity of the diffusion theory by Monte Carlo method

    Science.gov (United States)

    Peng, Dong-qing; Li, Hui; Xie, Shusen

    2008-12-01

    Diffusion theory is widely used as the basis of experiments and methods for determining the optical properties of biological tissues. A simple analytical solution can easily be obtained from the diffusion equation after a series of approximations. This invites a misinterpretation of the analytical solution: namely, that if several semi-infinite bio-tissues have the same effective attenuation coefficient, the distribution of light fluence in those tissues must also be the same. To assess the validity of this assumption, the depth-resolved internal fluence of several semi-infinite biological tissues having the same effective attenuation coefficient was simulated for a wide collimated beam using the Monte Carlo method under different conditions. The influence of the tissue refractive index on the distribution of light fluence was also discussed in detail. Our results showed that when the refractive indices of several bio-tissues with the same effective attenuation coefficient are equal, the depth-resolved internal fluence is the same; otherwise, it is not. A change in the refractive index of the tissue affects the depth distribution of light in the tissue. Therefore, the refractive index is an important optical property of tissue and should be taken into account when using the diffusion approximation.
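
    The point made above can be illustrated numerically with the diffusion-approximation relation mu_eff = sqrt(3*mu_a*(mu_a + mu_s')): two tissues with different absorption and reduced scattering coefficients can share the same mu_eff, even though their internal fluence distributions need not coincide. The optical properties below are illustrative values, not measured data.

    ```python
    # Two different (mu_a, mu_s') pairs with the same effective attenuation.
    import math

    def mu_eff(mu_a, mu_s_prime):
        """Effective attenuation coefficient (1/mm) of the diffusion approximation."""
        return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

    tissue_1 = (0.01, 1.000)   # (mu_a, mu_s') in 1/mm
    tissue_2 = (0.02, 0.485)   # chosen so mu_eff matches tissue_1

    for name, (mu_a, mu_sp) in [("tissue 1", tissue_1), ("tissue 2", tissue_2)]:
        print(f"{name}: mu_a = {mu_a}, mu_s' = {mu_sp}, "
              f"mu_eff = {mu_eff(mu_a, mu_sp):.4f} 1/mm")
    ```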

  4. Application of Monte Carlo cross-validation to identify pathway cross-talk in neonatal sepsis.

    Science.gov (United States)

    Zhang, Yuxia; Liu, Cui; Wang, Jingna; Li, Xingxia

    2018-03-01

    To explore genetic pathway cross-talk in neonates with sepsis, an integrated approach was used in this paper. To explore the potential relationships between pathways and the genes differentially expressed between normal uninfected neonates and neonates with sepsis, genetic profiling and biological signaling pathways were first integrated. For each pathway, a score was obtained from the gene expression data by quantitatively analyzing the pathway cross-talk. The paired pathways with high cross-talk were identified by random forest classification. The purpose of the work was to find the pairs of pathways best able to discriminate sepsis samples from normal samples. The analysis found 10 pairs of pathways that were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways were identified through an analysis of the extensive literature. Impact statement: To find the pairs of pathways best able to discriminate sepsis samples from normal samples, an RF classifier, the DS obtained from the DEGs of significantly associated paired pathways, and Monte Carlo cross-validation were applied in this paper. Ten pairs of pathways were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways ((7) IL-6 Signaling and Phospholipase C Signaling (PLC); (8) Glucocorticoid Receptor (GR) Signaling and Dendritic Cell Maturation) were identified through an analysis of the extensive literature.
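
    Monte Carlo cross-validation as used conceptually above amounts to repeated random train/test splits around a classifier, here a random forest. The sketch below uses synthetic data in place of the pathway cross-talk scores from the study and assumes scikit-learn is available.

    ```python
    # Monte Carlo cross-validation (repeated random splits) of a random forest.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    # Stand-in for pathway cross-talk scores of sepsis vs. control samples.
    X, y = make_classification(n_samples=120, n_features=20, n_informative=5,
                               random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    mccv = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)  # MC cross-validation
    scores = cross_val_score(clf, X, y, cv=mccv)

    print(f"accuracy over {len(scores)} random splits: "
          f"{scores.mean():.3f} +/- {scores.std():.3f}")
    ```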

  5. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    Science.gov (United States)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
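
    The spectrum model described above (a smooth spectral shape multiplied by a Fermi cut-off) can be sketched generically; since the exact parameterization is not given here, the shape function, cut-off energy, and width below are placeholder assumptions rather than the commissioned DPM source model.

    ```python
    # Generic photon spectrum shape regularized by a Fermi-type cut-off.
    import numpy as np

    E = np.linspace(0.1, 6.5, 500)                      # photon energy grid, MeV

    def spectrum(E, E_max=6.0, width=0.15):
        shape = E * np.exp(-E / 1.2)                    # generic bremsstrahlung-like shape
        fermi = 1.0 / (1.0 + np.exp((E - E_max) / width))   # smooth cut-off near E_max
        return shape * fermi

    phi = spectrum(E)
    phi /= phi.sum()                                    # normalize as discrete weights
    print(f"mean photon energy of the model spectrum: {np.sum(E * phi):.2f} MeV")
    ```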

  6. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies

    Energy Technology Data Exchange (ETDEWEB)

    Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Santiago de Compostela, Galicia (Spain); Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Cortés, Julia; Garrido, Miguel [Servicio de Medicina Nuclear, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia, Spain and Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Pombar, Miguel [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain); Ruibal, Álvaro [Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Fundación Tejerina, 28003, Madrid (Spain)

    2014-05-15

    Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake values (SUVs) for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that should be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
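
    The correction implied above can be sketched simply: the SUV is inversely proportional to the activity actually delivered, so an estimated extravasated fraction rescales it. The activities, weight, and fraction below are illustrative placeholders, not values from the study.

    ```python
    # SUV rescaling for an estimated extravasated fraction of the injected dose.
    def suv(c_tissue_kBq_per_mL, injected_kBq, weight_g):
        """Standard uptake value, assuming a tissue density of 1 g/mL."""
        return c_tissue_kBq_per_mL / (injected_kBq / weight_g)

    injected_kBq = 370_000.0        # 370 MBq nominal injection (assumed)
    weight_g = 75_000.0             # 75 kg patient (assumed)
    c_lesion = 25.0                 # lesion concentration, kBq/mL (assumed)
    extravasated_fraction = 0.10    # estimated from the extravasation ROI (assumed)

    suv_raw = suv(c_lesion, injected_kBq, weight_g)
    suv_corrected = suv_raw / (1.0 - extravasated_fraction)
    print(f"SUV raw = {suv_raw:.2f}, corrected for extravasation = {suv_corrected:.2f}")
    ```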

  7. Flow cytometry: design, development and experimental validation

    International Nuclear Information System (INIS)

    Seigneur, Alain

    1987-01-01

    Flow cytometry techniques allow the analysis and sorting of living biological cells at rates of five to ten thousand events per second. After a short review, we present in this report the design and development of a 'high-tech' apparatus intended for research laboratories, together with the experimental results. The first part deals with the physical principles allowing the morphological and functional analysis of cells or cellular components. The measured parameters are as follows: electrical resistance pulse sizing, light scattering and fluorescence. Hydrodynamic centering is used, as well as the division of a water stream into droplets, allowing the electrostatic sorting of particles. The second part deals with the apparatus designed by the 'Commissariat a l'Energie Atomique' (C.E.A.) and industrialised by 'ODAM' (ATC 3000). The last part of this thesis work is devoted to the performance evaluation of this cytometer. The differences between the two size-measurement methods are analyzed: electrical resistance pulse sizing versus small-angle light scattering. Thanks to an original optical design, high sensitivity has been reached in the fluorescence measurement: the equivalent noise corresponds to six hundred fluorescein isothiocyanate (FITC) molecules. The sorting performance has also been analyzed and cell viability demonstrated. (author) [fr

  8. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented design and programming techniques. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photoelectric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was adopted, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with a reduced individual effect on the electron history are grouped together using the continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung energy and angular spectra, and dose calculations are presented.

  9. Performance of the Opalinus Clay under thermal loading: experimental results from Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Gens, A. [Universitat Politècnica de Catalunya, Barcelona (Spain); Wieczorek, K. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) GmbH, Braunschweig (Germany); Gaus, I. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); and others

    2017-04-15

    The paper presents an overview of the behaviour of Opalinus Clay under thermal loading as observed in three in situ heating tests performed in the Mont Terri rock laboratory: HE-B, HE-D and HE-E. The three tests are summarily described; they encompass a broad range of test layouts and experimental conditions. Afterwards, the following topics are examined: determination of thermal conductivity, thermally-induced pore pressure generation and thermally-induced mechanical effects. The mechanisms underlying pore pressure generation and dissipation are discussed in detail and the relationship between rock damage and thermal loading is examined using an additional in situ test: SE-H. The paper concludes with an evaluation of the various thermo-hydro-mechanical (THM) interactions identified in the heating tests. (authors)

  10. Experimental Validation of a Permeability Model for Enrichment Membranes

    International Nuclear Information System (INIS)

    Orellano, Pablo; Brasnarof, Daniel; Florido, Pablo

    2003-01-01

    An experimental loop with a real-scale diffuser, in a single enrichment-stage configuration, was operated with air at different process conditions in order to characterize the membrane permeability. Using these experimental data, an analytical model based on geometry and morphology was validated. It is concluded that a new set of independent measurements, i.e. of enrichment, is necessary in order to fully characterize diffusers, because their internal parameters are not uniquely determined by permeability data alone.

  11. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2016-08-15

    Purpose: A dose calculation tool that combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data

  12. Quantitative comparisons between experimentally measured 2-D carbon radiation and Monte Carlo impurity (MCI) code simulations

    International Nuclear Information System (INIS)

    Evans, T.E.; Leonard, A.W.; West, W.P.; Finkenthal, D.F.; Fenstermacher, M.E.; Porter, G.D.

    1998-08-01

    Experimentally measured carbon line emissions and total radiated power distributions from the DIII-D divertor and Scrape-Off Layer (SOL) are compared to those calculated with the Monte Carlo Impurity (MCI) model. A UEDGE background plasma is used in MCI with the Roth and Garcia-Rosales (RG-R) chemical sputtering model and/or one of six physical sputtering models. While results from these simulations do not reproduce all of the features seen in the experimentally measured radiation patterns, the total radiated power calculated in MCI is in relatively good agreement with that measured by the DIII-D bolometric system when the Smith78 physical sputtering model is coupled to RG-R chemical sputtering in an unaltered UEDGE plasma. Alternatively, MCI simulations done with UEDGE background ion temperatures along the divertor target plates adjusted to better match those measured in the experiment resulted in three physical sputtering models which, when coupled to the RG-R model, gave a total radiated power within 10% of the measured value

  13. Experimental validation of neutron activation simulation of a varian medical linear accelerator.

    Science.gov (United States)

    Morato, S; Juste, B; Miro, R; Verdu, G; Diez, S

    2016-08-01

    This work presents a Monte Carlo simulation, using the latest version of MCNP, v. 6.1.1, of a Varian Clinac emitting a 15 MeV photon beam. The main objective of the work is to estimate the photoneutron production and the activated products inside the medical linear accelerator head. To that end, the Varian linac head was modelled in detail using the manufacturer's information, and the model was generated with CAD software and exported as a mesh to be included in the particle transport simulation. The model includes the transport of photoneutrons generated by primary photons and the (n, γ) reactions which can result in activation products. The validation of this study was performed using experimental measurements. Activation products were identified by in situ gamma spectroscopy at the jaw exit of the linac shortly after termination of a high-energy photon beam irradiation. Comparison between the experimental and simulation results shows good agreement.

  14. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  15. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on studies of cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but pose problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time makes it possible to carry out parametric studies with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entrances (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for condensed matter research, makes it possible to obtain the fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the core, where neutrons are created, to the exit of the neutron guides, at the samples of matter. This complete calculation scheme is tested against ILL4 measurements of the flux in cold neutron guides. (authors)

  16. Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction

    International Nuclear Information System (INIS)

    Aguiar, Pablo; Pino, Francisco; Silva-Rodríguez, Jesús; Pavía, Javier; Ros, Doménec; Ruibal, Álvaro

    2014-01-01

    Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the

  17. Experimental Validation of a Wave Energy Converter Array Hydrodynamics Tool

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Ferri, Francesco; Kofoed, Jens Peter

    2017-01-01

    This paper uses experimental data to validate a wave energy converter (WEC) array hydrodynamics tool developed within the context of linearized potential flow theory. To this end, wave forces and power absorption by an array of five-point absorber WECs in monochromatic and panchromatic waves were...

  18. Experimental validation of the containment codes ASTARTE and SEURBNUK

    International Nuclear Information System (INIS)

    Kendall, K.C.; Arnold, L.A.; Broadhouse, B.J.; Jones, A.; Yerkess, A.; Benuzzi, A.

    1979-10-01

    The fast reactor containment codes ASTARTE and SEURBNUK are being validated against data from the COVA series of small scale experiments being performed jointly by the UKAEA and JRC Ispra. The experimental programme is nearly complete, and data are given. (U.K.)

  19. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
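    The exact metric definitions are given in the paper rather than in this record; the sketch below only illustrates, under assumed definitions, how a cumulative measure and a median-based measure of relative model-experiment deviation could be computed over a cross-section database and over subsets of the model parameter space (Python/NumPy).

        import numpy as np

        def relative_deviation(model, experiment):
            """Point-wise relative deviation between model and experiment."""
            model, experiment = np.asarray(model, float), np.asarray(experiment, float)
            return np.abs(model - experiment) / np.abs(experiment)

        def cumulative_metric(model, experiment):
            """One possible 'cumulative' accuracy measure (definition assumed):
            total absolute deviation normalized by the total experimental signal."""
            model, experiment = np.asarray(model, float), np.asarray(experiment, float)
            return np.sum(np.abs(model - experiment)) / np.sum(np.abs(experiment))

        def median_metric(model, experiment, mask=None):
            """Median relative deviation, optionally restricted to a subset of the
            model parameter space selected by a boolean mask."""
            rel = relative_deviation(model, experiment)
            return np.median(rel if mask is None else rel[mask])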

  20. The validation of organ dose calculations using voxel phantoms and Monte Carlo methods applied to point and water immersion sources.

    Science.gov (United States)

    Hunt, J G; da Silva, F C A; Mauricio, C L P; dos Santos, D S

    2004-01-01

    The Monte Carlo program 'Visual Monte Carlo-dose calculation' (VMC-dc) uses a voxel phantom to simulate the body organs and tissues, transports photons through this phantom and reports the absorbed dose received by each organ and tissue relevant to the calculation of effective dose as defined in ICRP Publication 60. This paper shows the validation of VMC-dc by comparison with EGSnrc and with a physical phantom containing TLDs. The validation of VMC-dc by comparison with EGSnrc was made for a collimated beam of 0.662 MeV photons irradiating a cube of water. For the validation by comparison with the physical phantom, the case considered was a whole body irradiation with a point 137Cs source placed at a distance of 1 m from the thorax of an Alderson-RANDO phantom. The validation results show good agreement for the doses obtained using VMC-dc and EGSnrc calculations, and from VMC-dc and TLD measurements. The program VMC-dc was then applied to the calculation of doses due to immersion in water containing gamma emitters. The dose conversion coefficients for water immersion are compared with their equivalents in the literature.

  1. The validation of organ dose calculations using voxel phantoms and Monte Carlo methods applied to point and water immersion sources

    International Nuclear Information System (INIS)

    Hunt, J. G.; Da Silva, F. C. A.; Mauricio, C. L. P.; Dos Santos, D. S.

    2004-01-01

    The Monte Carlo program 'Visual Monte Carlo-dose calculation' (VMC-dc) uses a voxel phantom to simulate the body organs and tissues, transports photons through this phantom and reports the absorbed dose received by each organ and tissue relevant to the calculation of effective dose as defined in ICRP Publication 60. This paper shows the validation of VMC-dc by comparison with EGSnrc and with a physical phantom containing TLDs. The validation of VMC-dc by comparison with EGSnrc was made for a collimated beam of 0.662 MeV photons irradiating a cube of water. For the validation by comparison with the physical phantom, the case considered was a whole body irradiation with a point 137 Cs source placed at a distance of 1 m from the thorax of an Alderson-RANDO phantom. The validation results show good agreement for the doses obtained using VMC-dc and EGSnrc calculations, and from VMC-dc and TLD measurements. The program VMC-dc was then applied to the calculation of doses due to immersion in water containing gamma emitters. The dose conversion coefficients for water immersion are compared with their equivalents in the literature. (authors)

  2. Experimentally guided Monte Carlo calculations of the atmospheric muon flux for interdisciplinary applications

    International Nuclear Information System (INIS)

    Mitrica, B.; Brancus, I.M.; Toma, G.; Bercuci, A.; Aiftimiei, C.; Wentz, J.; Rebel, H.

    2004-01-01

    Atmospheric muons are produced in the interactions of primary cosmic ray particles with the Earth's atmosphere, mainly through the decay of pions and kaons generated in hadronic interactions; they decay further into electrons, positrons, and electron and muon neutrinos. Being the penetrating component of cosmic rays, muons pass entirely through the atmosphere and can traverse even larger absorbers before interacting with material at the Earth's surface, and, through the cosmogenic production of isotopes by atmospheric muons, information of interest for astrophysics, environmental studies and materials research can be obtained. Up to now, mainly semi-analytical approximations have been used to calculate the muon flux for estimating the cosmogenic isotope production needed for different applications. Our estimation of the atmospheric muon flux is based on the Monte Carlo simulation program CORSIKA, in which we simulate the development of extensive air showers in the atmosphere using different models for the description of the hadronic interactions.

  3. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    Science.gov (United States)

    Jacobus, Headley Stewart

    As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies that frequently lack subsequent validation and experimental hybrid system performance studies.

  4. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold, E-mail: hli@radonc.wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, Missouri 63110 (United States)

    2016-07-15

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen(1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU- accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this
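    The 2%/2 mm criterion quoted above can be illustrated with a simplified one-dimensional global gamma analysis (Python/NumPy). This is not the evaluation tool used by the authors; the search strategy and the normalization of the dose criterion to the reference maximum are assumptions.

        import numpy as np

        def gamma_passing_rate(x_ref, d_ref, x_eval, d_eval,
                               dose_crit=0.02, dist_crit=2.0):
            """Fraction of reference points with gamma <= 1 for a 1D global
            gamma analysis; positions in mm, dose criterion relative to the
            reference maximum (e.g. 2%/2 mm)."""
            d_norm = dose_crit * np.max(d_ref)
            gammas = []
            for xr, dr in zip(x_ref, d_ref):
                dist2 = ((x_eval - xr) / dist_crit) ** 2   # distance-to-agreement term
                dose2 = ((d_eval - dr) / d_norm) ** 2      # dose-difference term
                gammas.append(np.sqrt(np.min(dist2 + dose2)))
            return float(np.mean(np.asarray(gammas) <= 1.0))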

  5. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    International Nuclear Information System (INIS)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen(1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU- accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this

  6. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    Science.gov (United States)

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and

  7. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

    From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution either individually or as mixtures: 103Rh, 133Cs, natNd, 149Sm, 152Sm, and 155Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the potential of the experimental data to improve the ability to perform validation of full burnup credit calculations. The paper describes the three phases of the experimental program, the results of the preliminary evaluation, the calculation, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route in the CRISTAL criticality package for burnup credit applications. (authors)
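    A minimal sketch of the kind of calculated-to-experimental (C/E) summary that underlies such a validation study is given below (Python/NumPy). The actual IRSN methodology includes full sensitivity/uncertainty analysis, so the simple bias and spread statistics here are illustrative assumptions only.

        import numpy as np

        def benchmark_bias(keff_calc, keff_exp=None):
            """Summarize C/E agreement for a set of critical experiments.
            By default the experimental keff is taken as 1.0 for every
            critical configuration."""
            keff_calc = np.asarray(keff_calc, float)
            keff_exp = (np.ones_like(keff_calc) if keff_exp is None
                        else np.asarray(keff_exp, float))
            ce = keff_calc / keff_exp
            return {"mean C/E": ce.mean(),
                    "bias (pcm)": (ce.mean() - 1.0) * 1e5,
                    "spread (pcm, 1 s.d.)": ce.std(ddof=1) * 1e5}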

  8. Experimental validation of a new heterogeneous mechanical test design

    Science.gov (United States)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameters identification strategies generally use an extensive number of classical tests for collecting the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to support this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurements techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure where an indicator capable of evaluating the heterogeneity and the richness of strain information. However, no experimental validation was yet performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. For this aim, DIC technique and a Finite Element Model Up-date inverse strategy are used together for the parameter identification of a DC04 steel, as well as the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is accomplished with the data obtained from the experimental tests and the results are compared to a reference numerical solution.

  9. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria for the validation of experimental and evaluated covariance data are reviewed, in particular: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation is considered and four particular points are examined. First, the preservation of positive definiteness of covariance matrices under an arbitrary transformation of a random vector is considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least-squares method) are studied. Secondly, an algorithm for the comparison of experimental and estimated 'integral' uncertainties is developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of the 'integral' uncertainty of vectors of experimental and estimated values. Thirdly, a set of statistical invariants, i.e. values which are preserved in statistical processing, is presented. Fourthly, an inequality is given that signals correlations between experimental errors leading to unphysical values. An application is given concerning the cross section of the (n,t) reaction on 6Li for incident neutron energies between 1 and 100 keV
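    The first, second and fourth of these criteria lend themselves to direct numerical checks. The sketch below (Python/NumPy) implements elementary versions of them: positive definiteness via eigenvalues, the square root of the determinant as the 'integral' uncertainty measure recommended above, and the basic |rho| <= 1 bound on correlations. The specific inequality on correlations between experimental errors derived in the paper is not reproduced here.

        import numpy as np

        def is_positive_definite(cov):
            """Criterion a): all eigenvalues of the covariance matrix are positive."""
            return bool(np.all(np.linalg.eigvalsh(np.asarray(cov, float)) > 0.0))

        def integral_uncertainty(cov):
            """Square root of the determinant of the covariance matrix, the
            'integral' uncertainty measure recommended in the record."""
            sign, logdet = np.linalg.slogdet(np.asarray(cov, float))
            return np.exp(0.5 * logdet) if sign > 0 else float("nan")

        def correlations_within_bounds(cov):
            """Elementary check that correlation coefficients stay in [-1, 1]
            (a stand-in for the paper's restriction on experimental correlations)."""
            cov = np.asarray(cov, float)
            sigma = np.sqrt(np.diag(cov))
            corr = cov / np.outer(sigma, sigma)
            return bool(np.all(np.abs(corr) <= 1.0 + 1e-12))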

  10. A validation of direct grey Dancoff factors results for cylindrical cells in cluster geometry by the Monte Carlo method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

    The WIMS code is well known and one of the most widely used codes for nuclear core physics calculations. Recently, the PIJM module of the WIMS code was modified to allow the calculation of Grey Dancoff factors for partially absorbing materials, using the alternative definition in terms of escape and collision probabilities. Grey Dancoff factors for the Canadian CANDU-37 and CANFLEX assemblies were calculated with PIJM at five symmetrically distinct fuel pin positions. The results, obtained via the Direct Method, i.e., by direct calculation of escape and collision probabilities, were satisfactory when compared with those in the literature. In parallel, the PIJMC module was developed to calculate escape and collision probabilities using the Monte Carlo method, and modifications to this module were made to determine Black Dancoff factors, considering perfectly absorbing fuel rods. In this work, we proceed further in the task of validating the Direct Method by the Monte Carlo approach. To this end, the PIJMC routine is modified to compute Grey Dancoff factors using the cited alternative definition. Results are reported for the CANDU-37 and CANFLEX assemblies obtained with PIJMC, at the same fuel pin positions as with PIJM. Good agreement is observed between the results from the Monte Carlo and Direct methods

  11. Experimental and computational validation of BDTPS using a heterogeneous boron phantom

    CERN Document Server

    Daquino, G G; Mazzini, M; Moss, R L; Muzi, L

    2004-01-01

    The idea to couple the treatment planning system (TPS) to the information on the real boron distribution in the patient acquired by positron emission tomography (PET) is the main added value of the new methodology set up at DIMNP (Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione) of the University of Pisa, in collaboration with the JRC (Joint Research Centre) at Petten (NL). This methodology has been implemented in a new TPS, called Boron Distribution Treatment Planning System (BDTPS), which takes into account the actual boron distribution in the patient's organ, as opposed to other TPSs used in BNCT that assume an ideal uniform boron distribution. BDTPS is based on the Monte Carlo technique and has been experimentally validated by comparing the computed main parameters (thermal neutron flux, boron dose, etc.) to those measured during the irradiation of an ad hoc designed phantom (HEterogeneous BOron phantoM, HEBOM). The results are also in good agreement with those obtained by the standard TPS SER...

  12. Experimental validation of calculated atomic charges in ionic liquids

    Science.gov (United States)

    Fogarty, Richard M.; Matthews, Richard P.; Ashworth, Claire R.; Brandt-Talbot, Agnieszka; Palgrave, Robert G.; Bourne, Richard A.; Vander Hoogerstraete, Tom; Hunt, Patricia A.; Lovelock, Kevin R. J.

    2018-05-01

    A combination of X-ray photoelectron spectroscopy and near edge X-ray absorption fine structure spectroscopy has been used to provide an experimental measure of nitrogen atomic charges in nine ionic liquids (ILs). These experimental results are used to validate charges calculated with three computational methods: charges from electrostatic potentials using a grid-based method (ChelpG), natural bond orbital population analysis, and the atoms in molecules approach. By combining these results with those from a previous study on sulfur, we find that ChelpG charges provide the best description of the charge distribution in ILs. However, we find that ChelpG charges can lead to significant conformational dependence and therefore advise that small differences in ChelpG charges (<0.3 e) should be interpreted with care. We use these validated charges to provide physical insight into nitrogen atomic charges for the ILs probed.

  13. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  14. Experimental validation of an ultrasonic flowmeter for unsteady flows

    Science.gov (United States)

    Leontidis, V.; Cuvier, C.; Caignaert, G.; Dupont, P.; Roussette, O.; Fammery, S.; Nivet, P.; Dazin, A.

    2018-04-01

    An ultrasonic flowmeter was developed for further applications in cryogenic conditions and for measuring flow rate fluctuations in the range of 0 to 70 Hz. The prototype was installed in a flow test rig, and was validated experimentally both in steady and unsteady water flow conditions. A Coriolis flowmeter was used for the calibration under steady state conditions, whereas in the unsteady case the validation was done simultaneously against two methods: particle image velocimetry (PIV), and with pressure transducers installed flush on the wall of the pipe. The results show that the developed flowmeter and the proposed methodology can accurately measure the frequency and amplitude of unsteady fluctuations in the experimental range of 0-9 l s-1 of the mean main flow rate and 0-70 Hz of the imposed disturbances.
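    A typical post-processing step for such unsteady validation data is to extract the frequency and amplitude of the imposed fluctuation from the sampled flow-rate signal. The FFT-based sketch below (Python/NumPy) is a generic illustration rather than the authors' processing chain; the sampling rate, record length and example values are assumptions.

        import numpy as np

        def dominant_fluctuation(flow, fs):
            """Return (frequency in Hz, amplitude) of the dominant periodic
            fluctuation in a sampled flow-rate signal, after removing the mean."""
            flow = np.asarray(flow, float)
            n = flow.size
            spectrum = np.fft.rfft(flow - flow.mean())
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            amplitudes = 2.0 * np.abs(spectrum) / n      # single-sided amplitude
            k = np.argmax(amplitudes[1:]) + 1            # skip the DC bin
            return freqs[k], amplitudes[k]

        # Illustrative signal: 8 l/s mean flow with a 0.5 l/s fluctuation at 30 Hz
        fs = 1000.0
        t = np.arange(0.0, 2.0, 1.0 / fs)
        q = 8.0 + 0.5 * np.sin(2.0 * np.pi * 30.0 * t)
        print(dominant_fluctuation(q, fs))               # approximately (30.0, 0.5)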

  15. Physics of subcritical multiplying regions and experimental validation

    International Nuclear Information System (INIS)

    Salvatores, M.

    1996-01-01

    The coupling of a particle accelerator with a spallation target and a subcritical multiplying region was proposed in the fifties and is called here a hybrid system. This article gives some ideas about the energy balance of such a system. The possibilities for experimental validation of some properties of a subcritical multiplying region using the MASURCA facility at CEA-Cadarache are examined. The results of a preliminary experiment called MUSE are presented. (A.C.)

  16. Validation of Monte Carlo simulation of neutron production in a spallation experiment

    Czech Academy of Sciences Publication Activity Database

    Zavorka, L.; Adam, Jindřich; Artiushenko, M.; Baldin, A. A.; Brudanin, V. B.; Katovsky, K.; Suchopár, M.; Svoboda, Ondřej; Vrzalová, Jitka; Wagner, Vladimír

    2015-01-01

    Roč. 80, JUN (2015), s. 178-187 ISSN 0306-4549 R&D Projects: GA MŠk LA08002; GA MŠk LG14004 Institutional support: RVO:61389005 Keywords : accelerator-driven systems * uranium spallation target * neutron emission * activation measurement * Monte Carlo simulation Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.174, year: 2015

  17. Monte-Carlo validation of secondary sodium activation in a pool-type LMFBR

    International Nuclear Information System (INIS)

    Palmiotti, G.; Rado, V.; Salvatores, M.

    1980-09-01

    The secondary sodium activation in a pool-type LMFBR is the main parameter to be accurately evaluated in the shield design. In the present work a complete two dimensional description of the system, including core, shielding and sodium up to Heat Exchangers, is coupled to local Heat Exchanger Monte-Carlo calculations. This refined calculation is used to deduce a simplified method to take into account the coupling of radial propagation in the Heat Exchanger and its finite cylindrical structure

  18. Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain); Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d' Oncologia, Barcelona 08036 (Spain); Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain); Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain); and others

    2014-03-15

    Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the

  19. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    Science.gov (United States)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ and $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (the latter breaking down into 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, which add to the $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ results for the ionization chambers suggested field-size-dependent dose underestimations (significant for the 4 mm field), with magnitude depending on the combination of
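    For reference, in the small-field dosimetry formalism of Alfonso et al., which the notation above follows, the two correction factors are defined as ratios of absorbed dose to water to detector reading between the machine-specific reference field (msr), the conventional reference field (ref) and the clinical field (clin):

        $$k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}} =
          \frac{D_{w,Q_\mathrm{msr}}^{f_\mathrm{msr}} / M_{Q_\mathrm{msr}}^{f_\mathrm{msr}}}
               {D_{w,Q_0}^{f_\mathrm{ref}} / M_{Q_0}^{f_\mathrm{ref}}},
        \qquad
        k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}} =
          \frac{D_{w,Q_\mathrm{clin}}^{f_\mathrm{clin}} / M_{Q_\mathrm{clin}}^{f_\mathrm{clin}}}
               {D_{w,Q_\mathrm{msr}}^{f_\mathrm{msr}} / M_{Q_\mathrm{msr}}^{f_\mathrm{msr}}}$$

    where $D_w$ is the absorbed dose to water and $M$ the detector reading in the corresponding field and beam quality.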

  20. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    Science.gov (United States)

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-24

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  1. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    Energy Technology Data Exchange (ETDEWEB)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related

  2. Monte Carlo simulations of adult and pediatric computed tomography exams: Validation studies of organ doses with physical phantoms

    International Nuclear Information System (INIS)

    Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.

    2013-01-01

    Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT

  3. Multiphysics modelling and experimental validation of high concentration photovoltaic modules

    International Nuclear Information System (INIS)

    Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.

    2017-01-01

    Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.
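    The single diode electrical model mentioned above is the standard five-parameter relation I = Iph - I0[exp((V + I Rs)/a) - 1] - (V + I Rs)/Rsh. A minimal sketch of its explicit Lambert-W solution is given below (Python/SciPy); the parameter values are illustrative assumptions, not those of the monomodule studied.

        import numpy as np
        from scipy.special import lambertw

        def single_diode_current(v, i_ph, i_0, r_s, r_sh, a):
            """Explicit Lambert-W solution of the single-diode equation for the
            terminal current at voltage v; a = n*Ns*k*T/q is the modified
            ideality factor (all parameter values assumed)."""
            c = 1.0 + r_s / r_sh
            arg = (r_s * i_0 / (a * c)) * np.exp((r_s * (i_ph + i_0) + v) / (a * c))
            return (i_ph + i_0 - v / r_sh) / c - (a / r_s) * lambertw(arg).real

        # Illustrative values loosely representing a triple-junction cell under
        # concentration (assumed): I_ph = 5 A, I_0 = 1e-17 A, R_s = 0.02 ohm,
        # R_sh = 500 ohm, a = 0.075 V.
        v = np.linspace(0.0, 3.0, 7)
        print(single_diode_current(v, i_ph=5.0, i_0=1e-17, r_s=0.02, r_sh=500.0, a=0.075))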

  4. Experimental Validation of a Dynamic Model for Lightweight Robots

    Directory of Open Access Journals (Sweden)

    Alessandro Gasparetto

    2013-03-01

    Nowadays, one of the main topics in robotics research is the improvement of dynamic performance by lightening the overall system structure. The effective motion and control of these lightweight robotic systems relies on suitable motion planning and control processes. To this end, model-based approaches can be adopted by exploiting accurate dynamic models that take into account the inertial and elastic terms usually neglected in a heavy rigid-link configuration. In this paper, an effective method for modelling spatial lightweight industrial robots based on an Equivalent Rigid Link System approach is considered from an experimental validation perspective. A dynamic simulator implementing the formulation is used and an experimental test bench is set up. Experimental tests are carried out with a benchmark L-shape mechanism.

  5. Topology Optimization for Wave Propagation Problems with Experimental Validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk

    This Thesis treats the development and experimental validation of density-based topology optimization methods for wave propagation problems. Problems in the frequency regime where design dimensions are between approximately one fourth and ten wavelengths are considered. All examples treat problems ... designed using the proposed method is provided. A novel approach for designing metamaterial slabs with selectively tuned negative refractive behavior is outlined. Numerical examples demonstrating the behavior of a slab under different conditions are provided. Results from an experimental study demonstrating agreement with numerical predictions are presented. Finally, an approach for designing acoustic wave shaping devices is treated. Three examples of applications are presented: a directional sound emission device, a wave splitting device and a flat focusing lens. Experimental results for the first two devices ...

  6. Introduction to the Monte Carlo project and the approach to the validation of probabilistic models of dietary exposure to selected food chemicals

    NARCIS (Netherlands)

    Gibney, M.J.; Voet, van der H.

    2003-01-01

    The Monte Carlo project was established to allow an international collaborative effort to define conceptual models for food chemical and nutrient exposure, to define and validate the software code to govern these models, to provide new or reconstructed databases for validation studies, and to use

  7. Monte Carlo validation of the TrueBeam 10XFFF phase–space files for applications in lung SABR

    International Nuclear Information System (INIS)

    Teke, Tony; Duzenli, Cheryl; Bergman, Alanah; Viel, Francis; Atwal, Parmveer; Gete, Ermias

    2015-01-01

    measurements to within 2.8% for 10 × 10 and 3 × 3 cm2 field sizes. This represents a significant improvement over the performance of the ECLIPSE AAA. Conclusions: The 10XFFF phase–space data offered by the Varian Monte Carlo research team have been validated for clinical use using measured, interinstitutional beam data in water and with film dosimetry in inhomogeneous media

  8. Monte Carlo validation of the TrueBeam 10XFFF phase–space files for applications in lung SABR

    Energy Technology Data Exchange (ETDEWEB)

    Teke, Tony, E-mail: tteke2@bccancer.bc.ca [Medical Physics, BC Cancer Agency—Centre for the Southern Interior, Kelowna, British Columbia V1Y 5L3 (Canada); Duzenli, Cheryl; Bergman, Alanah; Viel, Francis; Atwal, Parmveer; Gete, Ermias [Medical Physics, BC Cancer Agency—Vancouver Centre, Vancouver, British Columbia V5Z 4E6 (Canada)

    2015-12-15

    measurements to within 2.8% for 10 × 10 and 3 × 3 cm2 field sizes. This represents a significant improvement over the performance of the ECLIPSE AAA. Conclusions: The 10XFFF phase–space data offered by the Varian Monte Carlo research team have been validated for clinical use using measured, interinstitutional beam data in water and with film dosimetry in inhomogeneous media.

  9. Numerical simulation and experimental validation of coiled adiabatic capillary tubes

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Valladares, O. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico (UNAM), Apdo. Postal 34, 62580 Temixco, Morelos (Mexico)

    2007-04-15

    The objective of this study is to extend and validate the model developed and presented in previous works [O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part I: mathematical formulation and numerical model, Applied Thermal Engineering 22 (2) (2002) 173-182; O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part II: experimental validation and parametric studies, Applied Thermal Engineering 22 (4) (2002) 379-391] to coiled adiabatic capillary tube expansion devices working with pure and mixed refrigerants. The discretized governing equations are coupled using an implicit step by step method. A special treatment has been implemented in order to consider transitions (subcooled liquid region, metastable liquid region, metastable two-phase region and equilibrium two-phase region). All the flow variables (enthalpies, temperatures, pressures, vapor qualities, velocities, heat fluxes, etc.) together with the thermophysical properties are evaluated at each point of the grid in which the domain is discretized. The numerical model allows analysis of aspects such as geometry, type of fluid (pure substances and mixtures), critical or non-critical flow conditions, metastable regions, and transient aspects. Comparison of the numerical simulation with a wide range of experimental data presented in the technical literature will be shown in the present article in order to validate the model developed. (author)
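    A heavily simplified sketch of the step-by-step marching idea is given below (Python), restricted to a straight adiabatic tube and the subcooled, incompressible liquid region only; the coiled-tube geometry, metastable regions, two-phase flow and property updates handled by the full model are omitted, and the Blasius friction correlation and the fluid property values are assumptions.

        import math

        def subcooled_pressure_march(p_in, m_dot, d, length, rho, mu, n_steps=1000):
            """March the pressure along a straight adiabatic capillary for an
            incompressible liquid using the Darcy-Weisbach relation with the
            Blasius friction factor (turbulent flow assumed)."""
            area = math.pi * d ** 2 / 4.0
            g = m_dot / area                      # mass flux [kg m-2 s-1]
            re = g * d / mu                       # Reynolds number
            f = 0.316 * re ** -0.25               # Blasius correlation (assumed)
            dz = length / n_steps
            p = p_in
            # With constant properties this loop is equivalent to a single step;
            # the full model updates properties and checks for flashing at each node.
            for _ in range(n_steps):
                p -= f * (dz / d) * g ** 2 / (2.0 * rho)   # Darcy-Weisbach pressure step
                if p <= 0.0:
                    break
            return p

        # Illustrative values (assumed): 9 bar inlet, 2 g/s, 1.1 mm bore, 1.5 m long
        print(subcooled_pressure_march(9e5, 0.002, 1.1e-3, 1.5, rho=1200.0, mu=2.0e-4))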

  10. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    Vehicle Code System (VCS), the Monte Carlo Adjoint SHielding (MASH), and the Monte Carlo n-Particle (MCNP) code. Of the three, the oldest and still most ... widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral ... particle types, and previous versions of MCNP were repeatedly validated using both simple and complex geometries [12, 13]. Much greater discussion and

  11. Experimental validation of a computer simulation of radiographic film

    International Nuclear Information System (INIS)

    Goncalves, Elicardo A. de S.; Azeredo, Raphaela; Assis, Joaquim T.; Anjos, Marcelino J. dos; Oliveira, Davi F.; Oliveira, Luis F. de

    2015-01-01

    In radiographic films, the behavior of the characteristic curve is very important for image quality. Digitization and visualization are always performed by light transmission, and the characteristic curve describes the behavior of optical density as a function of exposure. In a first approach, a Monte Carlo computer simulation was used to build a Hurter-Driffield curve from a stochastic model; the results showed the same well-known shape, but some behaviors, such as the influence of silver grain size, were not expected. A real H and D curve was built by exposing films, developing them and measuring the optical density. When comparing the model results with the real curve, trying to fit them and estimate some parameters, a difference in the high-exposure region shows a divergence between the models and the experimental data. Since the optical density is a function of the metallic silver generated by chemical development, a direct proportion was assumed, but the results suggest a limitation of this proportionality. In fact, when the optical density was replaced by another way of measuring silver concentration, such as x-ray fluorescence, the new results agree with the models. Therefore, overexposed films can contain areas with different silver concentrations that cannot be distinguished because the optical density measurement is limited. Mapping the silver concentration over the film area can be a solution to reveal these dark images, and x-ray fluorescence has proven to be the best way to perform this new form of film digitization. (author)
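    The saturation effect described above can be illustrated with a toy model (Python/NumPy) in which the measured optical density tracks the developed-silver areal density only up to the densitometer's readable limit; the proportionality constant and the limit are arbitrary assumptions.

        import numpy as np

        def measured_optical_density(silver_density, k=0.9, od_limit=4.0):
            """Toy model: optical density proportional to the developed-silver
            areal density (constant k assumed), clipped at the maximum value
            od_limit that the densitometer can read."""
            return np.minimum(k * np.asarray(silver_density, float), od_limit)

        silver = np.array([1.0, 3.0, 5.0, 8.0])       # arbitrary units, e.g. from XRF
        print(measured_optical_density(silver))       # [0.9 2.7 4.  4. ]: the last two
                                                      # areas differ in silver content
                                                      # but read the same optical density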

  12. Experimental validation of a computer simulation of radiographic film

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil). Laboratorio de Instrumentacao e Simulacao Computacional Cientificas Aplicadas; Azeredo, Raphaela, E-mail: raphaelaazeredo@yahoo.com.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Instituto de Fisica Armando Dias Tavares. Programa de Pos-Graduacao em Fisica; Assis, Joaquim T., E-mail: joaquim@iprj.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico; Anjos, Marcelino J. dos; Oliveira, Davi F.; Oliveira, Luis F. de, E-mail: marcelin@uerj.br, E-mail: davi.oliveira@uerj.br, E-mail: lfolive@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Instituto de Fisica Armando Dias Tavares. Departamento de Fisica Aplicada e Termodinamica

    2015-07-01

    In radiographic films, the behavior of the characteristic curve is very important for image quality. Digitization and visualization are always performed by light transmission, and the characteristic curve describes the behavior of optical density as a function of exposure. In a first approach, a Monte Carlo computer simulation was used to build a Hurter-Driffield curve from a stochastic model; the results showed the same well-known shape, but some behaviors, such as the influence of silver grain size, were not expected. A real H and D curve was built by exposing films, developing them and measuring the optical density. When comparing the model results with the real curve, trying to fit them and estimate some parameters, a difference in the high-exposure region shows a divergence between the models and the experimental data. Since the optical density is a function of the metallic silver generated by chemical development, a direct proportion was assumed, but the results suggest a limitation of this proportionality. In fact, when the optical density was replaced by another way of measuring silver concentration, such as x-ray fluorescence, the new results agree with the models. Therefore, overexposed films can contain areas with different silver concentrations that cannot be distinguished because the optical density measurement is limited. Mapping the silver concentration over the film area can be a solution to reveal these dark images, and x-ray fluorescence has proven to be the best way to perform this new form of film digitization. (author)

  13. Experimental validation of lead cross sections for scale and MCNP

    International Nuclear Information System (INIS)

    Henrikson, D.J.

    1995-01-01

    Moving spent nuclear fuel between facilities often requires the use of lead-shielded casks. Criticality safety that is based upon calculations requires experimental validation of the fuel matrix and lead cross section libraries. A series of critical experiments using a high-enriched uranium-aluminum fuel element with a variety of reflectors, including lead, has been identified. Twenty-one configurations were evaluated in this study. The fuel element was modelled for KENO V.a and MCNP 4a using various cross section sets. The experiments addressed in this report can be used to validate lead-reflected calculations. Factors influencing calculated keff which require further study include diameters of styrofoam inserts and homogenization

  14. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modeling of the adult larynx, but the mechanisms of voice production in the newborn have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent(R) with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  15. Numerical modeling and experimental validation of thermoplastic composites induction welding

    Science.gov (United States)

    Palmieri, Barbara; Nele, Luigi; Galise, Francesco

    2018-05-01

    In this work, a numerical simulation and an experimental test of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMag Designer), the influence of fundamental process parameters such as temperature, current and holding time was investigated. In order to validate the simulation results, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single-lap shear tests.

  16. Spectra and depth-dose deposition in a polymethylmethacrylate breast phantom obtained by experimental and Monte Carlo method; Espectros e deposicao de dose em profundidade em phantom de mama de polimetilmetacrilato: obtencao experimental e por metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    David, Mariano G.; Pires, Evandro J.; Magalhaes, Luis A.; Almeida, Carlos E. de; Alves, Carlos F.E., E-mail: marianogd08@gmail.com [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Lab. Ciencias Radiologicas; Albuquerque, Marcos A. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), RJ (Brazil). Instituto Alberto Luiz Coimbra; Bernal, Mario A. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Instituto de Fisica Gleb Wataghin; Peixoto, Jose G. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2012-08-15

    This paper focuses on obtaining, by experimental measurements and Monte Carlo (MC) simulations, the photon spectra at various depths and the depth-dose deposition curves for x-ray beams used in mammography, in a polymethylmethacrylate (PMMA) breast phantom. Spectra were obtained for 28 and 30 kV beam qualities and the corresponding average energy values (E_med) were calculated. A Si-PIN photodiode spectrometer was used for the experimental acquisition, and the PENELOPE code was employed for the MC simulations. The simulated and experimental spectra show very good agreement, which was corroborated by the small differences found between the E_med values. An increase in the E_med values and a strong attenuation of the beam with depth in the PMMA phantom were also observed. (author)

  17. Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter

    CERN Document Server

    Adloff, C.; Blaising, J.J.; Drancourt, C.; Espargiliere, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N.K.; Mavromanolakis, G.; Thomson, M.A.; Ward, D.R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dotti, A.; Folger, G.; Ivantchenko, V.; Uzhinskiy, V.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J.Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Gottlicher, P.; Gunter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.Ch.; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Dauncey, P.D.; Magnan, A.M.; Bartsch, V.; Wing, M.; Salvatore, F.; Alamillo, E.Calvo; Fouz, M.C.; Puerta-Pelayo, J.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M.S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph.; Dulucq, F.; Fleury, J.; Frisson, T.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch.; Poschl, R.; Raux, L.; Rouene, J.; Seguin-Moreau, N.; Anduze, M.; Boudry, V.; Brient, J-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Gotze, M.; Hartbrich, O.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2013-01-01

    Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

  18. Validation and simulation of a regulated survey system through Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    Asier Lacasta Soto

    2015-07-01

    Channel flow covers long distances and shows variable temporal behaviour. It is usually regulated by hydraulic elements such as lateral gates to provide a correct water supply. The dynamics of this kind of flow is governed by a system of partial differential equations called the shallow water model, which has to be complemented with a simplified formulation for the gates. The complete set of equations forms a non-linear system that can only be solved numerically. Here, an explicit upwind finite-volume numerical scheme able to handle all flow regimes is used. The formulation of the hydraulic structures (lateral gates) introduces parameters with some uncertainty; these parameters are therefore calibrated with a Monte Carlo algorithm, yielding associated coefficients for each gate. They are then checked against real cases provided by the monitoring equipment of the Pina de Ebro channel located in Zaragoza.
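
    A minimal sketch of the Monte Carlo calibration idea in Python (the free-flow gate relation Q = Cd·A·sqrt(2·g·Δh) and the sample data below are stand-ins for the full shallow-water plus gate model and the channel measurements used in the paper):

      import math, random

      def gate_discharge(cd, area_m2, head_m, g=9.81):
          # free-flow gate relation Q = Cd * A * sqrt(2 g dh), used here as a
          # stand-in for the coupled shallow-water + gate model of the paper
          return cd * area_m2 * math.sqrt(2.0 * g * max(head_m, 0.0))

      def calibrate_cd(observations, area_m2, n_samples=5000, cd_range=(0.4, 0.9)):
          # Monte Carlo calibration: sample Cd uniformly and keep the value that
          # minimises the RMS error against observed (head, discharge) pairs
          best_cd, best_err = None, float("inf")
          for _ in range(n_samples):
              cd = random.uniform(*cd_range)
              err = math.sqrt(sum((gate_discharge(cd, area_m2, h) - q) ** 2
                                  for h, q in observations) / len(observations))
              if err < best_err:
                  best_cd, best_err = cd, err
          return best_cd, best_err

      if __name__ == "__main__":
          obs = [(0.50, 1.13), (0.80, 1.43), (1.20, 1.75)]   # hypothetical (m, m3/s)
          print(calibrate_cd(obs, area_m2=0.6))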

  19. Validation of the Monte Carlo Criticality Program KENO V.a for highly-enriched uranium systems

    International Nuclear Information System (INIS)

    Knight, J.R.

    1984-11-01

    A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results

  20. Experimental validation of solid rocket motor damping models

    Science.gov (United States)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2017-12-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe
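
    One common structural-to-viscous damping conversion of this kind can be sketched as follows in Python with NumPy (a generic modal-projection recipe with zeta_j = eta/2 for a toy 2-DOF system; not necessarily the specific equivalent viscous damping methods compared in the paper):

      import numpy as np

      def equivalent_viscous_damping(M, K, eta):
          # solve the undamped eigenproblem, assign each mode the viscous damping
          # ratio zeta_j = eta/2 that dissipates the same energy per cycle as
          # hysteretic (structural) damping at resonance, then project back
          eigvals, Phi = np.linalg.eig(np.linalg.solve(M, K))
          order = np.argsort(eigvals.real)
          omega = np.sqrt(eigvals.real[order])          # natural frequencies [rad/s]
          Phi = Phi[:, order].real
          # mass-normalise the mode shapes so that Phi.T @ M @ Phi = I
          Phi /= np.sqrt(np.diag(Phi.T @ M @ Phi))
          C_modal = np.diag(2.0 * (eta / 2.0) * omega)  # 2*zeta_j*omega_j, unit modal mass
          # back to physical coordinates: C = M Phi C_modal Phi^T M
          return M @ Phi @ C_modal @ Phi.T @ M

      if __name__ == "__main__":
          M = np.diag([2.0, 1.0])                        # toy 2-DOF system (assumed values)
          K = np.array([[30.0, -10.0], [-10.0, 10.0]])
          print(equivalent_viscous_damping(M, K, eta=0.06))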

  2. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams

    International Nuclear Information System (INIS)

    Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo

    2016-01-01

    Purpose: The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Methods: Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the FLUKA code [A. Ferrari et al., “FLUKA: A multi-particle transport code,” in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., “The FLUKA Code: Developments and challenges for high energy and medical applications,” Nucl. Data Sheets 120, 211–214 (2014)], to partial fluence corrections measured experimentally. Results: A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Conclusions: Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary
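
    The core of such a correction can be illustrated with a short Python sketch (the fluence tables and the graphite-to-water depth-scaling factor below are hypothetical; the paper's formalism and the FLUKA scoring are more elaborate):

      import numpy as np

      def fluence_correction(depth_w, phi_w, depth_g, phi_g, g_to_weq):
          # compare the scored fluence in water with that in graphite at
          # water-equivalent depths; g_to_weq is an assumed depth-scaling factor
          weq_depth_g = np.asarray(depth_g) * g_to_weq
          phi_g_on_w_grid = np.interp(depth_w, weq_depth_g, phi_g)
          return np.asarray(phi_w) / phi_g_on_w_grid

      if __name__ == "__main__":
          # hypothetical fluence-vs-depth tables (arbitrary units)
          depth_w = np.array([10.0, 50.0, 100.0, 150.0])   # mm in water
          phi_w   = np.array([1.00, 0.99, 0.975, 0.95])
          depth_g = np.array([5.0, 25.0, 50.0, 75.0])       # mm in graphite
          phi_g   = np.array([1.00, 0.995, 0.985, 0.97])
          print(fluence_correction(depth_w, phi_w, depth_g, phi_g, g_to_weq=2.0))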

  3. Experimental characterization and Monte Carlo simulation of Si(Li) detector efficiency by radioactive sources and PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)

    2008-06-15

    This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. It proved necessary to take a radiograph of the detector inside its cryostat to establish the correct dimensions; with these, agreement within 10% between the simulations and the experimental measurements with several point-like sources and PIXE results was obtained.

  4. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams

    Energy Technology Data Exchange (ETDEWEB)

    Lourenço, Ana, E-mail: am.lourenco@ucl.ac.uk [Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Thomas, Russell; Bouchard, Hugo [Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Kacperek, Andrzej [National Eye Proton Therapy Centre, Clatterbridge Cancer Centre, Wirral CH63 4JY (United Kingdom); Vondracek, Vladimir [Proton Therapy Center, Budinova 1a, Prague 8 CZ-180 00 (Czech Republic); Royle, Gary [Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Palmans, Hugo [Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW, United Kingdom and Medical Physics Group, EBG MedAustron GmbH, A-2700 Wiener Neustadt (Austria)

    2016-07-15

    Purpose: The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Methods: Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the FLUKA code [A. Ferrari et al., “FLUKA: A multi-particle transport code,” in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., “The FLUKA Code: Developments and challenges for high energy and medical applications,” Nucl. Data Sheets 120, 211–214 (2014)], to partial fluence corrections measured experimentally. Results: A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Conclusions: Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary

  5. CFD Modeling and Experimental Validation of a Solar Still

    Directory of Open Access Journals (Sweden)

    Mahmood Tahir

    2017-01-01

    Earth is the densest planet of the solar system, with a total area of 510.072 million square km. Over 71.68% of this area is covered with water, leaving a scant 28.32% for humans to inhabit. Fresh water accounts for only 2.5% of the total volume and the rest is brackish. The world is therefore facing a major shortage of potable water. This issue can be addressed by converting brackish water into potable water through a solar distillation process, the purpose for which the solar still is specifically designed. The efficiency of a solar still depends explicitly on its design parameters, such as the wall material, chamber depth, and width and slope of the condensing surface. This study was aimed at investigating the solar still parameters using CFD modeling and experimental validation. The simulation data from ANSYS-FLUENT were compared with actual experimental data, and close agreement between the simulated and experimental results was observed in the presented work. This reveals that ANSYS-FLUENT is a potent tool for analysing the efficiency of new designs of solar distillation systems.

  6. Method for Determining Volumetric Efficiency and Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ambrozik Andrzej

    2017-12-01

    Modern means of transport are essentially powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise their detrimental impact on the natural environment, which stimulates research on piston internal combustion engines involving experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered an open thermodynamic system in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle on the basis of cycle comparison, the mean constant value of the cylinder pressure throughout this process must be known. Because of the character of the in-cylinder pressure pattern and the difficulty of determining this pressure experimentally, a novel method for the determination of this quantity is presented in this paper, based on an iteration approach. The method developed for determining the volumetric efficiency employs the law of conservation of mass, the first law of thermodynamics for an open system, the dependence of the cylinder volume on the crankshaft rotation angle, and the equation of state. The results of calculations performed with this method were validated by experimental investigations carried out for a selected engine on an engine test bench, and satisfactory agreement between the computational and experimental values of the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in this paper can be used to investigate the processes taking place in the cylinder of an IC engine.
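
    A highly simplified illustration of such an iterative determination in Python (the charge-heating closure and all numerical values are assumptions made for the sketch, not the paper's formulation):

      def volumetric_efficiency(p_cyl_ivc, V_ivc, V_disp, p_amb, T_amb,
                                T_wall=400.0, heat_pickup=0.15, R=287.0,
                                tol=1e-6, max_iter=100):
          # fixed-point iteration on the fresh-charge temperature: the charge is
          # assumed to be heated towards the wall temperature by a fraction
          # heat_pickup of the remaining temperature difference
          T_charge = T_amb
          for _ in range(max_iter):
              T_new = T_amb + heat_pickup * (T_wall - T_charge)
              if abs(T_new - T_charge) < tol:
                  break
              T_charge = T_new
          m_trapped = p_cyl_ivc * V_ivc / (R * T_charge)   # state equation at IVC
          m_ref = p_amb * V_disp / (R * T_amb)             # mass filling V_disp at ambient
          return m_trapped / m_ref

      if __name__ == "__main__":
          print(round(volumetric_efficiency(p_cyl_ivc=95.0e3, V_ivc=0.55e-3,
                                            V_disp=0.50e-3, p_amb=101325.0,
                                            T_amb=293.0), 3))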

  7. Experimental validation of models for Plasma Focus devices

    International Nuclear Information System (INIS)

    Rodriguez Palomino, Luis; Gonzalez, Jose; Clausse, Alejandro

    2003-01-01

    Plasma Focus (PF) devices are thermonuclear pulsators that produce short pulses of radiation (X-rays, charged particles and neutrons). Since the work of Filippov and Mather, such devices have been used to study plasma properties. Nowadays, interest in PF devices is focused on technological applications related to their use as pulsed neutron sources. On the numerical side, the inter-institutional PLADEMA (PLAsmas DEnsos MAgnetizados) network is developing three models, each useful at a different engineering stage of Plasma Focus design. One of the main objectives of this work is a comparative study of the influence of the different parameters involved in each model. To validate these results, several experimental measurements under different geometries and initial conditions were performed. (author)

  8. Experimental validation of additively manufactured optimized shapes for passive cooling

    DEFF Research Database (Denmark)

    Lazarov, Boyan S.; Sigmund, Ole; Meyer, Knud E.

    2018-01-01

    This article confirms the superior performance of topology optimized heat sinks compared to lattice designs and suggests simpler manufacturable pin-fin design interpretations. The development is driven by the wide adoption of light-emitting-diode (LED) lamps for industrial and residential lighting. The presented heat sink solutions are generated by topology optimization, a computational morphogenesis approach with ultimate design freedom, relying on high-performance computing and simulation. Optimized devices exhibit complex and organic-looking topologies which are realized with the help of additive manufacturing. To reduce manufacturing cost, a simplified interpretation of the optimized design is produced and validated as well. Numerical and experimental results agree well and indicate that the obtained designs outperform lattice geometries by more than 21%, resulting in a doubling of life expectancy.

  9. Experimental Validation of the LHC Helium Relief System Flow Modeling

    CERN Document Server

    Fydrych, J; Riddone, G

    2006-01-01

    In case of simultaneous resistive transitions in a whole sector of magnets in the Large Hadron Collider, the helium would be vented from the cold masses to a dedicated recovery system. During the discharge the cold helium will eventually enter a pipe at room temperature. During the first period of the flow the helium will be heated intensely due to the pipe heat capacity. To study the changes of the helium thermodynamic and flow parameters we have simulated numerically the most critical flow cases. To verify and validate numerical results, a dedicated laboratory test rig representing the helium relief system has been designed and commissioned. Both numerical and experimental results allow us to determine the distributions of the helium parameters along the pipes as well as mechanical strains and stresses.

  10. Characterization of an extrapolation chamber for low-energy X-rays: Experimental and Monte Carlo preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Neves, Lucio P., E-mail: lpneves@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Silva, Eric A.B., E-mail: ebrito@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Perini, Ana P., E-mail: aperini@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Maidana, Nora L., E-mail: nmaidana@if.usp.br [Universidade de Sao Paulo, Instituto de Fisica, Travessa R 187, 05508-900 Sao Paulo, SP (Brazil); Caldas, Linda V.E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)

    2012-07-15

    The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. - Highlights: ► A homemade extrapolation chamber was studied experimentally and with Monte Carlo. ► It was characterized as a secondary dosimetry standard for low energy X-rays. ► Several characterization tests were performed and the results were satisfactory. ► Simulation showed that its components may influence the response up to 11.0%. ► This chamber may be used as a secondary standard at our laboratory.

  11. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    International Nuclear Information System (INIS)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-01-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes several in-air and in-water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations through comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. The MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm²: gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For the smaller fields (12×12 and 6×6 mm²) only 92% of the data meet the criteria. Total scatter factors show good agreement except for the smallest field (6×6 mm²), which shows an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm². Special care must be taken for smaller fields.
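
    For reference, the gamma-index comparison mentioned above can be sketched in a few lines of Python (a 1-D global gamma with hypothetical profiles; clinical implementations are 2-D/3-D and interpolate the evaluated distribution):

      import numpy as np

      def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
          # for every reference point, search the evaluated distribution for the
          # minimum combined dose-difference / distance-to-agreement metric;
          # dd is the dose criterion (fraction of the reference maximum),
          # dta the distance criterion in mm
          d_max = np.max(d_ref)
          gammas = []
          for xr, dr in zip(x_ref, d_ref):
              g2 = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / (dd * d_max)) ** 2
              gammas.append(np.sqrt(np.min(g2)))
          return np.array(gammas)

      if __name__ == "__main__":
          x = np.linspace(-20, 20, 81)                       # mm, hypothetical profile
          measured = np.exp(-(x / 8.0) ** 2)
          calculated = np.exp(-((x - 0.3) / 8.1) ** 2)        # slightly shifted/broadened
          gamma = gamma_index_1d(x, measured, x, calculated)
          print("pass rate:", np.mean(gamma <= 1.0))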

  12. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modelling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 solids and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modelled using the Blender program. No significant differences were found between the AFEs for objects described by mesh models and those for objects described using GEANT4 solid volumes. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly affect the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV

  13. Assessment of the production of medical isotopes using the Monte Carlo code FLUKA: Simulations against experimental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Infantino, Angelo, E-mail: angelo.infantino@unibo.it [Department of Industrial Engineering, Montecuccolino Laboratory, University of Bologna, Via dei Colli 16, 40136 Bologna (Italy); Oehlke, Elisabeth [TRIUMF, 4004 Wesbrook Mall, V6T 2A3 Vancouver, BC (Canada); Department of Radiation Science & Technology, Delft University of Technology, Postbus 5, 2600 AA Delft (Netherlands); Mostacci, Domiziano [Department of Industrial Engineering, Montecuccolino Laboratory, University of Bologna, Via dei Colli 16, 40136 Bologna (Italy); Schaffer, Paul; Trinczek, Michael; Hoehr, Cornelia [TRIUMF, 4004 Wesbrook Mall, V6T 2A3 Vancouver, BC (Canada)

    2016-01-01

    The Monte Carlo code FLUKA is used to simulate the production of a number of positron emitting radionuclides, 18F, 13N, 94Tc, 44Sc, 68Ga, 86Y, 89Zr, 52Mn, 61Cu and 55Co, on a small medical cyclotron with a proton beam energy of 13 MeV. Experimental data collected at the TR13 cyclotron at TRIUMF agree within a factor of 0.6 ± 0.4 with the directly simulated data, except for the production of 55Co, where the simulation underestimates the experiment by a factor of 3.4 ± 0.4. The experimental data also agree within a factor of 0.8 ± 0.6 with the convolution of simulated proton fluence and cross sections from literature. Overall, this confirms the applicability of FLUKA to simulate radionuclide production at 13 MeV proton beam energy.
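
    The fluence-cross-section convolution referred to above amounts, per target column density, to integrating dPhi/dE(E)·sigma(E) over energy; a small Python sketch with purely hypothetical tables:

      import numpy as np

      def production_rate(energy_mev, fluence_per_mev, xs_mb, n_target_per_cm2):
          # R = N_t * integral( dPhi/dE * sigma(E) ) dE, trapezoidal rule,
          # cross section converted from millibarn to cm^2
          e = np.asarray(energy_mev, dtype=float)
          integrand = (np.asarray(fluence_per_mev, dtype=float)
                       * np.asarray(xs_mb, dtype=float) * 1e-27)
          integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))
          return n_target_per_cm2 * integral

      if __name__ == "__main__":
          E = [4.0, 6.0, 8.0, 10.0, 12.0, 13.0]                   # MeV
          phi = [0.2e12, 0.6e12, 1.0e12, 1.2e12, 1.1e12, 1.0e12]   # p/(MeV s cm2), hypothetical
          sigma = [5.0, 60.0, 180.0, 260.0, 230.0, 210.0]          # mb, hypothetical excitation function
          print(production_rate(E, phi, sigma, n_target_per_cm2=1.0e22), "nuclei/s")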

  14. Neuroinflammatory targets and treatments for epilepsy validated in experimental models.

    Science.gov (United States)

    Aronica, Eleonora; Bauer, Sebastian; Bozzi, Yuri; Caleo, Matteo; Dingledine, Raymond; Gorter, Jan A; Henshall, David C; Kaufer, Daniela; Koh, Sookyong; Löscher, Wolfgang; Louboutin, Jean-Pierre; Mishto, Michele; Norwood, Braxton A; Palma, Eleonora; Poulter, Michael O; Terrone, Gaetano; Vezzani, Annamaria; Kaminski, Rafal M

    2017-07-01

    A large body of evidence that has accumulated over the past decade strongly supports the role of inflammation in the pathophysiology of human epilepsy. Specific inflammatory molecules and pathways have been identified that influence various pathologic outcomes in different experimental models of epilepsy. Most importantly, the same inflammatory pathways have also been found in surgically resected brain tissue from patients with treatment-resistant epilepsy. New antiseizure therapies may be derived from these novel potential targets. An essential and crucial question is whether targeting these molecules and pathways may result in anti-ictogenesis, antiepileptogenesis, and/or disease-modification effects. Therefore, preclinical testing in models mimicking relevant aspects of epileptogenesis is needed to guide integrated experimental and clinical trial designs. We discuss the most recent preclinical proof-of-concept studies validating a number of therapeutic approaches against inflammatory mechanisms in animal models that could represent novel avenues for drug development in epilepsy. Finally, we suggest future directions to accelerate preclinical to clinical translation of these recent discoveries. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  15. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new Thermal barrier coating (TBC) systems experimentally under Integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  16. Tyre tread-block friction: modelling, simulation and experimental validation

    Science.gov (United States)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motor cycles, cars, busses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models of the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which allows effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls to be taken into account, little is known about the friction between tread-block elements and the road. This is particularly obvious in the case when snow, ice, water or a third-body layer is present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  17. IVIM: modeling, experimental validation and application to animal models

    International Nuclear Information System (INIS)

    Fournet, Gabrielle

    2016-01-01

    This PhD thesis is centered on the study of the IVIM ('Intravoxel Incoherent Motion') MRI sequence. This sequence allows for the study of the blood microvasculature such as the capillaries, arterioles and venules. To be sensitive only to moving groups of spins, diffusion gradients are added before and after the 180 degrees pulse of a spin echo (SE) sequence. The signal component corresponding to spins diffusing in the tissue can be separated from the one related to spins travelling in the blood vessels which is called the IVIM signal. These two components are weighted by f_IVIM which represents the volume fraction of blood inside the tissue. The IVIM signal is usually modelled by a mono-exponential (ME) function and characterized by a pseudo-diffusion coefficient, D*. We propose instead a bi-exponential IVIM model consisting of a slow pool, characterized by F_slow and D*_slow corresponding to the capillaries as in the ME model, and a fast pool, characterized by F_fast and D*_fast, related to larger vessels such as medium-size arterioles and venules. This model was validated experimentally and more information was retrieved by comparing the experimental signals to a dictionary of simulated IVIM signals. The influence of the pulse sequence, the repetition time and the diffusion encoding time was also studied. Finally, the IVIM sequence was applied to the study of an animal model of Alzheimer's disease. (author) [fr]
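
    The bi-exponential model described above can be written compactly as S(b)/S(0) = f_IVIM·[F_slow·exp(-b·D*_slow) + F_fast·exp(-b·D*_fast)] + (1 - f_IVIM)·exp(-b·D). A short Python sketch follows (the parameter values are illustrative only, and the exact parameterisation used in the thesis may differ):

      import numpy as np

      def ivim_signal(b, f_ivim, F_slow, D_star_slow, D_star_fast, D):
          # F_fast is 1 - F_slow; f_ivim weights the vascular signal against
          # tissue diffusion. Units: b in s/mm^2, diffusivities in mm^2/s.
          F_fast = 1.0 - F_slow
          vascular = F_slow * np.exp(-b * D_star_slow) + F_fast * np.exp(-b * D_star_fast)
          return f_ivim * vascular + (1.0 - f_ivim) * np.exp(-b * D)

      if __name__ == "__main__":
          b_values = np.array([0, 10, 20, 50, 100, 200, 500, 800], dtype=float)
          print(ivim_signal(b_values, f_ivim=0.08, F_slow=0.7,
                            D_star_slow=5e-3, D_star_fast=5e-2, D=7e-4))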

  18. Design of JT-60SA magnets and associated experimental validations

    International Nuclear Information System (INIS)

    Zani, L.; Barabaschi, P.; Peyrot, M.; Meunier, L.; Tomarchio, V.; Duglue, D.; Decool, P.; Torre, A.; Marechal, J.L.; Della Corte, A.; Di Zenobio, A.; Muzzi, L.; Cucchiaro, A.; Turtu, S.; Ishida, S.; Yoshida, K.; Tsuchiya, K.; Kizu, K.; Murakami, H.

    2011-01-01

    In the framework of the JT-60SA project, aiming at upgrading the present JT-60U tokamak toward a fully superconducting configuration, the detailed design phase led to the adoption of a brand new design for the three main magnet systems. Europe (EU) is expected to provide to Japan (JA) the totality of the toroidal field (TF) magnet system, while JA will provide both the equilibrium field (EF) and central solenoid (CS) systems. All magnet designs were optimized through the past years and entered in parallel into extensive experimentally based phases of concept validation, which came to maturation in the years 2009 and 2010. For this, all magnet systems were investigated by means of dedicated samples, e.g. conductor and joint samples designed, manufactured and tested at full scale in ad hoc facilities either in EU or in JA. The present paper, after an overall description of the magnet system layouts, presents in a general manner the different experimental campaigns dedicated to qualifying the design and manufacturing processes of the coils, conductors and electrical joints. The main results with the associated analyses are shown and the main conclusions presented, especially regarding their contribution to consolidating the launch of magnet mass production. The status of the respective manufacturing stages in EU and in JA is also outlined. (authors)

  19. Validation of radioactive isotope activity measurement in homogeneous waste drum using Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Tran Thien; Tran, Le Bao; Ton, Thai Van; Chuong, Huynh Dinh; Tao, Chau Van [VNUHCM-Univ. of Science, Ho Chi Minh City (Viet Nam). Dept. of Nuclear Physics; VNUHCM-Univ. of Science, Ho Chi Minh City (Viet Nam). Nuclear Technique Lab.; Tam, Hoang Duc [Ho Chi Minh City Univ. of Pedagogy (Viet Nam). Faculty of Physics; Quang, Ma Thuy [VNUHCM-Univ. of Science, Ho Chi Minh City (Viet Nam). Dept. of Nuclear Physics

    2017-07-15

    In this work, the angle-dependent efficiency recorded by a collimated NaI(Tl) detector is determined for the quantification of the activity of mono- and multi-energy gamma-emitting isotopes positioned in a waste drum. The efficiencies simulated with both MCNP5 and Geant4 are in good agreement with the experimental results. Using these simulated efficiencies, we recalculated the source activity with a maximum deviation of 13%.
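
    The final step of such an activity quantification reduces to a simple relation, sketched below in Python (all numerical values are hypothetical):

      def source_activity(net_counts, live_time_s, efficiency, gamma_intensity):
          # A = N_net / (t_live * eps * I_gamma), where eps is the (simulated,
          # angle-dependent) full-energy-peak efficiency for the measurement
          # geometry and I_gamma the gamma emission probability
          return net_counts / (live_time_s * efficiency * gamma_intensity)

      if __name__ == "__main__":
          # hypothetical numbers for a 662 keV line seen through the collimator
          print(source_activity(net_counts=12500, live_time_s=600.0,
                                efficiency=2.4e-4, gamma_intensity=0.851), "Bq")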

  20. Validation of radioactive isotope activity measurement in homogeneous waste drum using Monte Carlo codes

    International Nuclear Information System (INIS)

    Thanh, Tran Thien; Tran, Le Bao; Ton, Thai Van; Chuong, Huynh Dinh; Tao, Chau Van; VNUHCM-Univ. of Science, Ho Chi Minh City; Tam, Hoang Duc; Quang, Ma Thuy

    2017-01-01

    In this work, the angle-dependent efficiency recorded by a collimated NaI(Tl) detector is determined for the quantification of the activity of mono- and multi-energy gamma-emitting isotopes positioned in a waste drum. The efficiencies simulated with both MCNP5 and Geant4 are in good agreement with the experimental results. Using these simulated efficiencies, we recalculated the source activity with a maximum deviation of 13%.

  1. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    Energy Technology Data Exchange (ETDEWEB)

    Palau, J M [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)

    2005-07-01

    This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)

  2. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    International Nuclear Information System (INIS)

    Palau, J.M.

    2005-01-01

    This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)

  3. Monte Carlo-based validation of the ENDF/MC2-II/SDX cell homogenization path

    International Nuclear Information System (INIS)

    Wade, D.C.

    1978-11-01

    The results of a program of validation of the unit cell homogenization prescriptions and codes used for the analysis of Zero Power Reactor (ZPR) fast breeder reactor critical experiments are summarized. The ZPR drawer loading patterns comprise both plate type and pin-calandria type unit cells. A prescription is used to convert the three-dimensional physical geometry of the drawer loadings into one-dimensional calculational models. The ETOE-II/MC2-II/SDX code sequence is used to transform ENDF/B basic nuclear data into unit cell average broad group cross sections based on the 1D models. Cell average, broad group anisotropic diffusion coefficients are generated using the methods of Benoist or of Gelbard. The resulting broad (approx. 10 to 30) group parameters are used in multigroup diffusion and Sn transport calculations of full core XY or RZ models which employ smeared atom densities to represent the contents of the unit cells
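
    The flux-volume weighting at the heart of such a homogenization and group-collapsing step can be sketched as follows in Python with NumPy (a generic illustration with invented numbers, not the actual MC2-II/SDX algorithms):

      import numpy as np

      def collapse_and_homogenize(sigma, flux, volume, broad_groups):
          # sigma[r, g]  fine-group cross sections per region
          # flux[r, g]   fine-group region fluxes
          # volume[r]    region volumes
          # broad_groups list of slices of fine groups forming each broad group
          sigma = np.asarray(sigma); flux = np.asarray(flux); volume = np.asarray(volume)
          w = flux * volume[:, None]                  # flux-volume weights
          collapsed = []
          for grp in broad_groups:
              num = np.sum(sigma[:, grp] * w[:, grp])
              den = np.sum(w[:, grp])
              collapsed.append(num / den)
          return np.array(collapsed)                  # cell-average broad-group sigma

      if __name__ == "__main__":
          sigma = [[1.2, 1.0, 0.8, 0.7], [2.0, 1.8, 1.5, 1.3]]   # 2 regions, 4 fine groups
          flux = [[1.0, 0.9, 0.7, 0.5], [0.8, 0.7, 0.6, 0.4]]
          volume = [2.0, 1.0]
          print(collapse_and_homogenize(sigma, flux, volume,
                                        broad_groups=[slice(0, 2), slice(2, 4)]))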

  4. First experimental validation on the core equilibrium code: HARMONIE

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.; Cozzani, M.; Gnuffi, M.

    1981-08-01

    The code HARMONIE calculates the mechanical equilibrium of a fast reactor core. An experimental program of deformation, in air, of groups of subassemblies was performed on a mock-up in the Super Phenix 1 geometry. This program included three kinds of tests, all performed without and then with grease: on groups of 2 or 3 rings of subassemblies, subjected to a force acting upon flats or angles; on groups of 35 and 41 subassemblies, subjected to a force acting on the first row, then with 1 or 2 empty cells; and on groups with 1 or 2 bowed subassemblies or 1 enlarged one over flats. A preliminary test of the friction coefficient in air between two pads showed some dependence on the pad surface condition, with a scattering factor of 8. Two basic code hypotheses were validated: the rotation of the subassemblies around their axis was negligible after deformation of the group, and the choice of a mean Maxwell coefficient, between those of the 1st and 2nd slope, led to results very similar to the experimental ones. The agreement between tests and HARMONIE calculations was suitable, qualitatively for all the groups and quantitatively for regular groups of at most 3 rings. However, the difference increased for the larger groups of 35 or 41 subassemblies: friction between pads, neglected by HARMONIE, seems to be the main reason. Other reasons for these differences are the influence of the loading order on the mock-up, and the initial contacts arising from the gap between foot and diagrid insert and from manufacturing bows

  5. Validation of a new continuous Monte Carlo burnup code using a Mox fuel assembly

    International Nuclear Information System (INIS)

    El bakkari, B.; El Bardouni, T.; Merroun, O.; El Younoussi, C.; Boulaich, Y.; Boukhal, H.; Chakir, E.

    2009-01-01

    The reactivity of nuclear fuel decreases with irradiation (or burnup) due to the transformation of heavy nuclides and the formation of fission products. Burnup credit studies aim at accounting for fuel irradiation in criticality studies of the nuclear fuel cycle (transport, storage, etc.). The principal objective of this study is to evaluate the potential capabilities of a newly developed burnup code called 'BUCAL1'. BUCAL1 differs from other burnup codes in that it does not use the calculated neutron flux as input to other computer codes to generate the nuclide inventory for the next time step. Instead, BUCAL1 directly uses the neutron reaction tally information generated by MCNP for each nuclide of interest to determine the new nuclide inventory. This allows the full capabilities of MCNP to be incorporated into the calculation and a more accurate and robust analysis to be performed. Validation of BUCAL1 was carried out by code-to-code comparisons using predictions of several codes from the NEA/OECD. Infinite multiplication factors (k_inf) and important fission product and actinide concentrations were compared for a MOX core benchmark exercise. Results of the calculations are analysed and discussed.
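
    The general idea of driving a depletion step directly from per-nuclide Monte Carlo reaction-rate tallies can be caricatured in a few lines of Python (decay/production chains and flux normalisation are omitted, all rates are hypothetical, and this is not BUCAL1 itself):

      import math

      def depletion_step(inventory, reaction_rate_per_atom, decay_const, dt_s):
          # each nuclide is depleted by its own one-group reaction rate (as a
          # Monte Carlo tally would provide, per atom and per second) and by
          # radioactive decay; production from parent nuclides is ignored here
          new_inventory = {}
          for nuclide, n_atoms in inventory.items():
              loss = (reaction_rate_per_atom.get(nuclide, 0.0)
                      + decay_const.get(nuclide, 0.0))
              new_inventory[nuclide] = n_atoms * math.exp(-loss * dt_s)
          return new_inventory

      if __name__ == "__main__":
          # hypothetical one-group absorption rates (1/s per atom) and decay constants
          inv0 = {"U235": 5.0e24, "Pu239": 1.0e23, "Xe135": 1.0e18}
          rr = {"U235": 1.1e-9, "Pu239": 1.6e-9, "Xe135": 3.0e-5}
          lam = {"Xe135": math.log(2) / (9.14 * 3600.0)}
          print(depletion_step(inv0, rr, lam, dt_s=30 * 24 * 3600.0))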

  6. The value of radiochromic film dosimetry around air cavities: experimental results and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Paelinck, L; Reynaert, N; Thierens, H; Wagter, C de; Neve, W de

    2003-01-01

    In this study we investigate radiochromic film dosimetry around air cavities with particular focus on the perturbation of the dose distribution by the film when the film is parallel to the beam axis. We considered a layered polystyrene phantom containing an air cavity as a model for the air-soft tissue geometry that may occur after surgical resection of a paranasal sinus tumour. A radiochromic film type MD-55 was positioned within the phantom so that it intersected the cavity. Two phantom set-ups were examined. In the first case, the air cavity is at the centre of the phantom, thus the film is lying along the central beam axis. In the second case, the cavity and film are located 2 cm offset from the phantom centre and the central beam axis. In order to examine the influence of the film on the dose distribution and to interpret the film-measured results, Monte Carlo simulations were performed. The film was modelled rigorously to incorporate the composition and structure of the film. Two field configurations, a 1 x 10 cm² field and a 10 x 10 cm² field, were examined. The dose behind the air cavity is reduced by 6 to 7% for both field configurations when a film that intersects the cavity contains the central beam axis. This is due to the attenuation exerted by the film when photons cross the cavity. Offsetting the beam to the cavity and the film by 2 cm removes the dose reduction behind the air cavity completely. Another result was that the rebuild-up behind the cavity for the 10 x 10 cm² field, albeit less significant than for the 1 x 10 cm² field, could only be measured by the film that was placed offset with respect to the central beam axis. Although radiochromic film is approximately soft-tissue equivalent and energy independent as compared to radiographic films, care should be taken in the case of inhomogeneous phantoms when the film intersects air cavities and contains the beam central axis. Errors in dose measurement can be expected distal to the air cavity

  7. Validation of a buffet meal design in an experimental restaurant.

    Science.gov (United States)

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing the foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found a good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI were found sensitive to differences in hunger level while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes. Copyright © 2012 Elsevier Ltd. All rights reserved.
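
    For reference, a consistency-type intraclass correlation coefficient can be computed as in the following Python sketch (an ICC(3,1) form; the abstract does not state which ICC variant was used, and the intake values below are invented):

      import numpy as np

      def icc_consistency(data):
          # data has shape (n_subjects, k_sessions); mean squares come from a
          # two-way (subjects x sessions) decomposition
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          ss_subjects = k * np.sum((data.mean(axis=1) - grand) ** 2)
          ss_sessions = n * np.sum((data.mean(axis=0) - grand) ** 2)
          ss_total = np.sum((data - grand) ** 2)
          ms_subjects = ss_subjects / (n - 1)
          ms_error = (ss_total - ss_subjects - ss_sessions) / ((n - 1) * (k - 1))
          return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

      if __name__ == "__main__":
          # hypothetical total energy intakes (kcal) for 5 subjects at sessions A and B
          intakes = [[950, 1010], [1420, 1380], [780, 820], [1650, 1600], [1200, 1180]]
          print(round(icc_consistency(intakes), 2))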

  8. Integral Reactor Containment Condensation Model and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qiao [Oregon State Univ., Corvallis, OR (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States)

    2016-05-02

    This NEUP funded project, NEUP 12-3630, is for experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). In the three years of investigation duration, following the original proposal, the planned tasks have been completed: (1) Performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations at negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU). (3) Analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accidental conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3 scaled test facility covers a large portion of laminar film flow, leading to a lower average heat transfer coefficient compared to the prototypic value. Although it is conservative in reactor safety analysis, the significant reduction of the heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure

  9. EXPERIMENTAL AND MONTE CARLO INVESTIGATIONS OF BCF-12 SMALL‑AREA PLASTIC SCINTILLATION DETECTORS FOR NEUTRON PINHOLE CAMERA.

    Science.gov (United States)

    Bielecki, J; Drozdowicz, K; Dworak, D; Igielski, A; Janik, W; Kulinska, A; Marciniak, L; Scholz, M; Turzanski, M; Wiacek, U; Woznicka, U; Wójcik-Gargula, A

    2017-12-11

    Plastic organic scintillators such as the blue-emitting BCF-12 are versatile and inexpensive tools. Recently, BCF-12 scintillators have been foreseen for investigation of the spatial distribution of neutrons emitted from dense magnetized plasma. For this purpose, small-area (5 mm × 5 mm) detectors based on BCF-12 scintillation rods and Hamamatsu photomultiplier tubes were designed and constructed at the Institute of Nuclear Physics. They will be located inside the neutron pinhole camera of the PF-24 plasma focus device. Two different geometrical layouts and approaches to the construction of the scintillation element were tested. The aim of this work was to determine the efficiency of the detectors. For this purpose, the experimental investigations using a neutron generator and a Pu-Be source were combined with Monte Carlo computations using the Geant4 code. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Evaluation of the adequacy of shielding calculations for radiotherapy rooms using the Monte Carlo method and experimental measurements; Avaliacao da adequacao do calculo de blindagens de salas de radioterapia atraves do metodo de Monte Carlos e medidas experimentais

    Energy Technology Data Exchange (ETDEWEB)

    Meireles, Ramiro Conceicao

    2016-07-01

    The shielding calculation methodology for radiotherapy services adopted in Brazil and in several other countries is the one described in publication 151 of the National Council on Radiation Protection and Measurements (NCRP 151). This methodology, however, employs several approximations that can impact both the construction cost and the radiological safety of the facility. Although the methodology is well established through widespread use, some parameters employed in the calculation did not undergo a detailed assessment of the impact of the various approximations considered. In this work the MCNP5 Monte Carlo code was used to evaluate the above mentioned approximations. TVL values were obtained for photons in conventional concrete (2.35 g/cm{sup 3}) at energies of 6, 10 and 25 MeV, first considering an isotropic radiation source impinging perpendicularly on the barriers, and subsequently a shielded lead head emitting a beam shaped as a truncated pyramid. Safety margins of primary barriers were assessed, taking into account the head shielding emitting a pyramid-shaped photon beam at energies of 6, 10, 15 and 18 MeV. A study was also conducted of the attenuation provided by the patient's body at energies of 6, 10, 15 and 18 MeV, leading to new attenuation factors. Experimental measurements were performed in a real radiotherapy room in order to map the leakage radiation emitted by the accelerator head shielding, and the results obtained were employed in the Monte Carlo simulation as well as to validate the entire study. The study results indicate that the TVL values provided by (NCRP, 2005) show discrepancies in comparison with the values obtained by simulation and that some barriers may be calculated with insufficient thickness. Furthermore, the simulation results show that the additional safety margins considered when calculating the width of the
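
    For context only, a minimal sketch of the NCRP 151-style primary barrier calculation that the thesis re-examines: the required transmission is B = P d^2 / (W U T) and the thickness follows from the first and equilibrium tenth-value layers. All numerical values below are illustrative placeholders, not TVLs or workloads from the thesis.

        import math

        def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
            """P: shielding design goal [Sv/week]; d: source-to-point distance [m];
            W: workload [Gy/week at 1 m]; U: use factor; T: occupancy factor;
            tvl1, tvle: first and equilibrium tenth-value layers [m]."""
            B = P * d**2 / (W * U * T)      # required barrier transmission factor
            n = -math.log10(B)              # number of tenth-value layers needed
            return tvl1 + (n - 1.0) * tvle

        # Illustrative 6 MV example with textbook-style concrete TVLs (placeholders).
        t = primary_barrier_thickness(P=0.1e-3, d=6.0, W=450.0, U=0.25, T=1.0,
                                      tvl1=0.37, tvle=0.33)
        print(f"required concrete thickness ~ {t:.2f} m")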

  11. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well-established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between four and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly because theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as -2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)
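
    As an illustration of the kind of fit being refined (and not the improved relationship derived in the paper), the classical ICRU Report 35-style water relation links the most probable incident energy to the practical range:

        def e_p0_from_rp(rp_cm):
            """Classical water fit: most probable incident energy E_p,0 [MeV] from the
            practical range R_p [cm of water]. Shown for context; not the paper's new relation."""
            return 0.22 + 1.98 * rp_cm + 0.0025 * rp_cm**2

        for rp in (4.0, 8.0, 20.0):     # representative practical ranges of clinical beams
            print(rp, "cm ->", round(e_p0_from_rp(rp), 1), "MeV")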

  12. On the use of Gafchromic EBT3 films for validating a commercial electron Monte Carlo dose calculation algorithm.

    Science.gov (United States)

    Chan, EuJin; Lydon, Jenny; Kron, Tomas

    2015-03-07

    This study aims to investigate the effects of oblique incidence, small field size and inhomogeneous media on the electron dose distribution, and to compare calculated (Elekta/CMS XiO) and measured results. All comparisons were done in terms of absolute dose. A new measuring method was developed for high resolution, absolute dose measurement of non-standard beams using Gafchromic® EBT3 film. A portable U-shaped holder was designed and constructed to hold EBT3 films vertically in a reproducible setup submerged in a water phantom. The experimental film method was verified with ionisation chamber measurements and agreed to within 2% or 1 mm. Agreement between XiO electron Monte Carlo (eMC) and EBT3 was within 2% or 2 mm for most standard fields and 3% or 3 mm for the non-standard fields. Larger differences were seen in the build-up region where XiO eMC overestimates dose by up to 10% for obliquely incident fields and underestimates the dose for small circular fields by up to 5% when compared to measurement. Calculations with inhomogeneous media mimicking ribs, lung and skull tissue placed at the side of the film in water agreed with measurement to within 3% or 3 mm. Gafchromic film in water proved to be a convenient high spatial resolution method to verify dose distributions from electrons in non-standard conditions including irradiation in inhomogeneous media.
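
    A minimal sketch of the kind of dose/distance comparison quoted above (e.g. 2% or 2 mm): a simplistic global 1D gamma evaluation between a measured film profile and a calculated profile. The toy profiles are placeholders, not EBT3 or XiO data.

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, x_mm, dose_tol=0.02, dist_tol_mm=2.0):
            """Global 1D gamma index on a common axis x_mm; dose_tol is relative to the reference maximum."""
            dmax = dose_ref.max()
            gam = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x_mm, dose_ref)):
                dd = (dose_eval - di) / (dose_tol * dmax)   # dose-difference term
                dr = (x_mm - xi) / dist_tol_mm              # distance-to-agreement term
                gam[i] = np.sqrt(dd**2 + dr**2).min()
            return gam

        x = np.linspace(-30.0, 30.0, 121)                   # mm
        film = np.exp(-(x / 15.0) ** 4)                     # toy "measured" profile
        calc = 1.01 * np.exp(-((x - 0.5) / 15.0) ** 4)      # toy "calculated" profile
        print("gamma pass rate:", (gamma_1d(film, calc, x) <= 1.0).mean())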

  13. Dosimetric study of prostate brachytherapy using techniques of Monte-Carlo simulation, experimental measurements and comparison with a treatment plan

    International Nuclear Information System (INIS)

    Teles, Pedro; Barros, Silvia; Vaz, Pedro; Goncalves, Isabel; Facure, Alessandro; Rosa, Luiz da; Santos, Maira; Pereira Junior, Pedro Paulo; Zankl, Maria

    2013-01-01

    Prostate brachytherapy is a radiotherapy technique which consists in inserting a number of radioactive seeds (usually containing the radionuclides 125 I, 241 Am or 103 Pd) surrounding, or in the vicinity of, prostate tumor tissue. The main objective of this technique is to maximize the radiation dose to the tumor and minimize it in healthy tissues and organs, in order to reduce morbidity. The absorbed dose distribution in the prostate obtained with this technique is usually non-homogeneous and time dependent. Various parameters, such as the type of seed, the attenuation interactions between seeds, their geometrical arrangement within the prostate, the actual geometry of the seeds, and the swelling of the prostate gland after implantation, greatly influence the absorbed dose in the prostate and surrounding areas. Quantification of these parameters is therefore extremely important for dose optimization and for the improvement of conventional treatment plans, which in many cases do not fully take them into account. Monte Carlo techniques allow these parameters to be studied quickly and effectively. In this work, we used the MCNPX program and the generic voxel phantom GOLEM to simulate different geometric arrangements of seeds containing 125 I (Amersham Health model 6711) in prostates of different sizes, in order to quantify some of these parameters. The computational model was validated using a cubic RW3-type prostate phantom made of tissue-equivalent material and thermoluminescent dosimeters. Finally, to provide a comparison with a real treatment plan, a plan used in a hospital in Rio de Janeiro was simulated with exactly the same parameters using our computational model. The results obtained in our study seem to indicate that the parameters described above may be a source of uncertainty in the correct evaluation of the dose delivered in actual treatment plans. The use of Monte Carlo techniques can serve as a complementary
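
    For orientation, a hedged sketch of the AAPM TG-43 point-source dose-rate formalism that underlies seed dose calculations of this kind; the dose-rate constant, radial dose function and anisotropy factor below are rough placeholders rather than the consensus data or the MCNPX results of the study.

        # TG-43 point-source approximation for a single low-energy seed (illustrative only).
        def dose_rate_tg43(r_cm, s_k, dose_rate_const, g, phi_an, r0=1.0):
            """r_cm: radial distance [cm]; s_k: air-kerma strength [U];
            dose_rate_const: dose-rate constant [cGy/(h U)]; g: radial dose function;
            phi_an: 1D anisotropy function."""
            return s_k * dose_rate_const * (r0 / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

        g = lambda r: max(0.0, 1.0 - 0.15 * (r - 1.0))      # placeholder radial dose function
        phi_an = lambda r: 0.94                              # placeholder anisotropy factor
        seeds_r = (0.5, 1.0, 1.5)                            # cm, naive seed distances to one point
        total = sum(dose_rate_tg43(r, s_k=0.5, dose_rate_const=0.97, g=g, phi_an=phi_an)
                    for r in seeds_r)
        print(round(total, 2), "cGy/h at the calculation point")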

  14. Development and experimental validation of a tool to determine out-of-field dose in radiotherapy

    International Nuclear Information System (INIS)

    Bessieres, I.

    2013-01-01

    Over the last two decades, many technical developments have been achieved in intensity modulated radiotherapy (IMRT), allowing a better conformation of the dose to the tumor and consequently increasing the success of cancer treatments. These techniques often reduce the dose to organs at risk close to the target volume; nevertheless, they increase peripheral dose levels. In this situation, the rising survival rate also increases the probability that secondary effects caused by peripheral dose deposition (second cancers, for instance) will be expressed. Nowadays, the peripheral dose is not taken into account during treatment planning and no reliable prediction tool exists. However, it becomes crucial to consider the peripheral dose during planning, especially for pediatric cases. Many steps in the development of an accurate and fast Monte Carlo out-of-field dose prediction tool based on the PENELOPE code were achieved during this PhD work. To this end, we demonstrated the ability of the PENELOPE code to estimate the peripheral dose by comparing its results with reference measurements performed on two experimental configurations (metrological and pre-clinical). During this experimental work, we defined a protocol for low-dose measurement with OSL dosimeters. In parallel, we highlighted the slow convergence of the code for clinical use. Consequently, we accelerated the code by implementing a new variance reduction technique, called pseudo-deterministic transport, aimed specifically at improving calculations in areas far away from the beam. This step improved the efficiency of the peripheral dose estimation in both validation configurations by a factor of 20, in order to reach reasonable computing times for clinical application. Further optimization work must be carried out to improve the convergence of our tool and allow a final clinical use. (author)
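
    The factor-of-20 acceleration quoted above is naturally expressed through the usual Monte Carlo figure of merit, 1/(relative variance x computing time); the sketch below only illustrates that bookkeeping with hypothetical numbers, not the thesis' timings.

        def figure_of_merit(rel_sigma, cpu_time_s):
            """Monte Carlo efficiency: 1 / (relative variance * computing time)."""
            return 1.0 / (rel_sigma ** 2 * cpu_time_s)

        # Hypothetical case: the same 1% statistical uncertainty reached 20x faster
        # once a variance-reduction technique is switched on.
        fom_analog  = figure_of_merit(0.01, 20 * 3600.0)
        fom_reduced = figure_of_merit(0.01, 1 * 3600.0)
        print(fom_reduced / fom_analog)     # -> 20.0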

  15. Validation of Monte Carlo predictions of LWR-PROTEUS safety parameters using an improved whole-reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Plaschy, M. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen, PSI (Switzerland)], E-mail: michael.plaschy@eos.ch; Murphy, M.; Jatuff, F.; Perret, G.; Seiler, R. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen, PSI (Switzerland); Chawla, R. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen, PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, EPFL (Switzerland)

    2009-10-15

    The recent experimental programme conducted in the PROTEUS research reactor at the Paul Scherrer Institute (PSI) has concerned detailed investigations of advanced light water reactor (LWR) fuels. More than fifteen different configurations of the multi-zone critical facility have been studied, each of them requiring accurate estimation of operational safety parameters, in particular the critical driver loadings, shutdown rod worths and the effective delayed neutron fraction {beta}{sub eff}. The current paper presents a full-scale 3D Monte Carlo model for the facility, set up using the MCNPX code, which has been employed for calculation of the operational characteristics for seven different LWR-PROTEUS configurations. Thereby, a variety of nuclear data libraries (viz. ENDF/B6v2, ENDF/B6v8, JEF2.2, JEFF3.0, JEFF3.1, JENDL3.2, and JENDL3.3) have been used, and predictions of k{sub eff} and shutdown rod worths compared with experimental values. Even though certain library-specific trends have been observed, the k{sub eff} predictions are generally very satisfactory, viz. with discrepancies of <0.5% between calculation (C) and experiment (E). The results also confirm the consistent determination of reactivity variations, the C/E values for the shutdown (safety) rod worths being always within 5% of unity. In addition, the MCNP modelling of the multi-zone reactor has yielded interesting results for the delayed neutron fraction ({beta}{sub eff}) in the different configurations, a breakdown being made possible in each case in terms of delayed neutron group, fissioning nuclide, and reactor region.

  16. SU-E-J-145: Validation of An Analytical Model for in Vivo Range Verification Using GATE Monte Carlo Simulation in Proton Therapy

    International Nuclear Information System (INIS)

    Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K

    2015-01-01

    Purpose: Prediction of PET images on the basis of an analytical filtering approach for proton range verification has been successfully developed and validated using the FLUKA Monte Carlo (MC) code and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against the GATE/GEANT4 Monte Carlo simulation code. Methods: In this study, we performed two experiments for validation of the β+-isotope yields predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of predicting β+-yields as a function of irradiated proton energies. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off ranges. Results: The results compare the filtered and MC-simulated β+-yield distributions under different conditions. First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between filtered and MC-simulated β+-yields in the distal fall-off region is within 1.5 mm for all materials used. The findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields using the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the fact that large discrepancies between the MC-simulated and predicted β+-yield distributions were observed, the study proves the effectiveness of the analytical filtering model for proton range verification using

  17. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method; Validacao da incerteza de pesagens no preparo de padroes de radionuclideos por Metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Cacais, F.L.; Delgado, J.U., E-mail: facacais@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Loayza, V.M. [Instituto Nacional de Metrologia (INMETRO), Rio de Janeiro, RJ (Brazil). Qualidade e Tecnologia

    2016-07-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the elimination method is validated by the Monte Carlo method. The results obtained by both uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
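
    A minimal sketch of the comparison described above, reduced to a simple weighing by difference (the elimination method itself involves more steps): the GUM quadrature combination of the two balance uncertainties is checked against a Monte Carlo propagation. The masses and standard uncertainties are placeholders, not the laboratory's data.

        import numpy as np

        rng = np.random.default_rng(1)
        m1, u1 = 5.123456, 2e-5      # g, reading before dispensing and its standard uncertainty
        m2, u2 = 4.987654, 2e-5      # g, reading after dispensing and its standard uncertainty

        u_gum = (u1**2 + u2**2) ** 0.5                      # ISO GUM combined standard uncertainty
        samples = rng.normal(m1, u1, 1_000_000) - rng.normal(m2, u2, 1_000_000)
        print(u_gum, samples.std())                         # the two estimates should agree closely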

  18. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    International Nuclear Information System (INIS)

    Kim, Sangroh; Yoshizumi, Terry T; Yin Fangfang; Chetty, Indrin J

    2013-01-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the

  19. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral
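
    The table-movement bookkeeping described in both records reduces to translating the isocenter along the scan axis in proportion to the gantry angle; the sketch below is a schematic of that geometry, not the DOSXYZnrc source code.

        def isocenter_z(angle_deg, z_start_cm, pitch, beam_width_cm, direction=+1):
            """Longitudinal isocenter position for a spiral scan: the couch advances
            pitch * beam collimation width per full gantry rotation."""
            rotations = angle_deg / 360.0
            return z_start_cm + direction * pitch * beam_width_cm * rotations

        for a in (0, 90, 180, 270, 360):     # deg
            print(a, round(isocenter_z(a, z_start_cm=0.0, pitch=1.0, beam_width_cm=2.0), 2))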

  20. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    Science.gov (United States)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects: square fields of different sizes and an asymmetric rectangular field. Good agreement in terms of the gamma formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions using the distance-to-agreement concept (DTA) also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in the build-up and penumbra regions. In regard to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential
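
    A hedged sketch of one ingredient such a virtual source model needs, namely drawing particle energies from a stored binned spectrum by inverse-CDF sampling; the binning and spectral shape below are toy placeholders, not the reconstruction algorithm or the IAEA data of the paper.

        import numpy as np

        def sample_energies(bin_edges_mev, counts, n, rng=np.random.default_rng(0)):
            """Draw n photon energies from a binned spectrum, uniform within each chosen bin."""
            p = np.asarray(counts, dtype=float)
            p /= p.sum()
            idx = rng.choice(len(p), size=n, p=p)
            return rng.uniform(bin_edges_mev[idx], bin_edges_mev[idx + 1])

        edges  = np.linspace(0.0, 6.0, 61)                  # toy 6 MV binning [MeV]
        counts = np.exp(-np.linspace(0.0, 6.0, 60))         # placeholder spectral shape
        print(sample_energies(edges, counts, 5))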

  1. Experimental verification by means of thermoluminescent dosimetry of the absorbed dose distribution in water for a 137Cs Amersham CDCS-M-3 source simulated with Monte Carlo

    International Nuclear Information System (INIS)

    Fragoso Valdez, F. R.; Alvarez Romero, J. T.

    2001-01-01

    The Monte Carlo simulation results (PENELOPE algorithm) for the water absorbed dose distribution imparted by a 137 Cs Amersham source (model CDCS-M-3) are verified experimentally. The simulated results are expressed in terms of the functions Α(r,z), g(r) and F(r,Θ) according to the recommendations of the AAPM TG-43.

  2. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid to be used for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding ... to compare and validate different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data are compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated...

  3. Decomposition of a laser-Doppler spectrum for estimation of speed distribution of particles moving in an optically turbid medium: Monte Carlo validation study

    International Nuclear Information System (INIS)

    Liebert, A; Zolek, N; Maniewski, R

    2006-01-01

    A method for measurement of the distribution of speed of particles moving in an optically turbid medium is presented. The technique is based on decomposition of the laser-Doppler spectrum. The theoretical background is shown together with the results of Monte Carlo simulations, which were performed to validate the proposed method. The laser-Doppler spectra were obtained by Monte Carlo simulations for assumed uniform and Gaussian speed distributions of particles moving in the turbid medium. The Doppler shift probability distributions were calculated by Monte Carlo simulations for several anisotropy factors of the medium, assuming the Henyey-Greenstein phase function. The results of the spectra decomposition show that the calculated speed distribution of moving particles matches well the distribution assumed for the Monte Carlo simulations. This result was obtained for spectra simulated under optical conditions in which a photon is Doppler-shifted no more than once during its travel between the source and detector. The influence of multiple scattering of the photon is analysed and the prospects for spectrum decomposition under such conditions are considered. Potential applications and limitations of the method are discussed

  4. An experimental and Monte Carlo investigation of the energy dependence of alanine/EPR dosimetry: I. Clinical x-ray beams

    International Nuclear Information System (INIS)

    Zeng, G G; McEwen, M R; Rogers, D W O; Klassen, N V

    2004-01-01

    The energy dependence of alanine/EPR dosimetry, in terms of absorbed dose-to-water for clinical 6, 10 and 25 MV x-rays and 60 Co γ-rays, was investigated by measurements and Monte Carlo (MC) calculations. The dose rates were traceable to the NRC primary standard for absorbed dose, a sealed water calorimeter. The electron paramagnetic resonance (EPR) spectra of irradiated pellets were measured using a Bruker EMX 081 EPR spectrometer. The DOSRZnrc Monte Carlo code of the EGSnrc system was used to simulate the experimental conditions with BEAM-calculated input spectra of x-rays and γ-rays. Within the experimental uncertainty of 0.5%, the alanine EPR response to absorbed dose-to-water for x-rays was not dependent on beam quality from 6 MV to 25 MV, but on average it was about 0.6% lower than the response to 60 Co gamma rays. Combining experimental data with Monte Carlo calculations, it is found that the alanine/EPR response per unit absorbed dose-to-alanine is the same for clinical x-rays and 60 Co gamma rays within the uncertainty of 0.6%. Monte Carlo simulations showed that neither the presence of the PMMA holder nor varying the dosimeter thickness between 1 mm and 5 mm has a significant effect on the energy dependence of alanine/EPR dosimetry within the calculation uncertainty of 0.3%

  5. Validation of MCDS by comparison of predicted with experimental velocity distribution functions in rarefied normal shocks

    Science.gov (United States)

    Pham-Van-diep, Gerald C.; Erwin, Daniel A.

    1989-01-01

    Velocity distribution functions in normal shock waves in argon and helium are calculated using Monte Carlo direct simulation. These are compared with experimental results for argon at M = 7.18 and for helium at M = 1.59 and 20. For both argon and helium, the variable-hard-sphere (VHS) model is used for the elastic scattering cross section, with the velocity dependence derived from a viscosity-temperature power-law relationship in the way normally used by Bird (1976).

  6. Experimental validation of waveform relaxation technique for power ...

    Indian Academy of Sciences (India)

    Two systems are considered: a HVDC controller tested with a detailed model of the converters, and a TCSC based damping controller tested with a low frequency model of a power system. The results are validated with those obtained using simulated models of the controllers. We also present results of an experiment in ...

  7. Experimental Validation of the Reverberation Effect in Room Electromagnetics

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri

    2015-01-01

    ... This tail can be characterized with Sabine's or Eyring's reverberation models, which were initially developed in acoustics. So far, these models were only fitted to data collected from radio measurements, but no thorough validation of their prediction ability in electromagnetics has been performed yet...

  8. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry.

    Science.gov (United States)

    Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems for complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head-and-neck clinical cases, previously planned with the Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process are discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom to obtain a more reliable DVH in the patient CT is discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre

  9. Validation of the GATE Monte Carlo simulation platform for modelling a CsI(Tl) scintillation camera dedicated to small-animal imaging

    International Nuclear Information System (INIS)

    Lazaro, D; Buvat, I; Loudos, G; Strul, D; Santin, G; Giokaris, N; Donnarieix, D; Maigne, L; Spanoudaki, V; Styliaris, S; Staelens, S; Breton, V

    2004-01-01

    Monte Carlo simulations are increasingly used in scintigraphic imaging to model imaging systems and to develop and assess tomographic reconstruction algorithms and correction methods for improved image quantitation. GATE (GEANT4 application for tomographic emission) is a new Monte Carlo simulation platform based on GEANT4 dedicated to nuclear imaging applications. This paper describes the GATE simulation of a prototype of scintillation camera dedicated to small-animal imaging and consisting of a CsI(Tl) crystal array coupled to a position-sensitive photomultiplier tube. The relevance of GATE to model the camera prototype was assessed by comparing simulated 99m Tc point spread functions, energy spectra, sensitivities, scatter fractions and image of a capillary phantom with the corresponding experimental measurements. Results showed an excellent agreement between simulated and experimental data: experimental spatial resolutions were predicted with an error less than 100 μm. The difference between experimental and simulated system sensitivities for different source-to-collimator distances was within 2%. Simulated and experimental scatter fractions in a [98-82 keV] energy window differed by less than 2% for sources located in water. Simulated and experimental energy spectra agreed very well between 40 and 180 keV. These results demonstrate the ability and flexibility of GATE for simulating original detector designs. The main weakness of GATE concerns the long computation time it requires: this issue is currently under investigation by the GEANT4 and the GATE collaborations

  10. Hypersonic nozzle/afterbody CFD code validation. I - Experimental measurements

    Science.gov (United States)

    Spaid, Frank W.; Keener, Earl R.

    1993-01-01

    This study was conducted to obtain a detailed experimental description of the flow field created by the interaction of a single-expansion-ramp-nozzle flow with a hypersonic external stream. Data were obtained from a generic nozzle/afterbody model in the 3.5-Foot Hypersonic Wind Tunnel of the NASA Ames Research Center in a cooperative experimental program involving Ames and the McDonnell Douglas Research Laboratories. This paper presents experimental results consisting primarily of surveys obtained with a five-hole total-pressure/flow-direction probe and a total-temperature probe. These surveys were obtained in the flow field created by the interaction between the underexpanded jet plume and the external flow.

  11. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  12. Experimental Analysis and Model Validation of an Opaque Ventilated Facade

    DEFF Research Database (Denmark)

    López, F. Peci; Jensen, Rasmus Lund; Heiselberg, Per

    2012-01-01

    Natural ventilation is a convenient way of reducing energy consumption in buildings. In this study an experimental module of an opaque ventilated façade (OVF) was built and tested for assessing its potential of supplying free ventilation and air preheating for the building. A numerical model was ...

  13. Melt pool modelling, simulation and experimental validation for SLM

    NARCIS (Netherlands)

    Wits, Wessel

    2017-01-01

    SLM parts are built by successively melting layers of powder in a powder bed. Process parameters are often optimized experimentally by laser scanning a number of single tracks and subsequently determining which settings lead to a good compromise between quality and build speed. However,

  14. Method for Determining Volumetric Efficiency and Its Experimental Validation

    OpenAIRE

    Ambrozik Andrzej; Kurczyński Dariusz; Łagowski Piotr

    2017-01-01

    Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise the detrimental impact they have on the natural environment. That stimulates the development of research on piston internal combustion engines. The research involves experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system, ...

  15. Solar power plant performance evaluation: simulation and experimental validation

    International Nuclear Information System (INIS)

    Natsheh, E M; Albarbar, A

    2012-01-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. The perturb and observe (P and O) algorithm is used for maximizing the generated power based on a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of degraded PV panel performance, such as shading and dirt. Repeatability and reliability of the developed system performance were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.

  16. Solar power plant performance evaluation: simulation and experimental validation

    Science.gov (United States)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. The perturb and observe (P&O) algorithm is used for maximizing the generated power based on a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of degraded PV panel performance, such as shading and dirt. Repeatability and reliability of the developed system performance were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
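
    The residual-based monitoring described in the two records above amounts to flagging samples where model-predicted power exceeds measured power by more than the healthy threshold (1.7 kW in the study); the sketch below illustrates that check with hypothetical hourly values.

        THRESHOLD_KW = 1.7      # healthy-residual threshold quoted in the abstract

        def flag_degradation(predicted_kw, measured_kw, threshold_kw=THRESHOLD_KW):
            """Return the indices of samples whose residual (predicted - measured) exceeds the threshold."""
            return [i for i, (p, m) in enumerate(zip(predicted_kw, measured_kw))
                    if (p - m) > threshold_kw]

        predicted = [20.1, 18.4, 19.0, 21.5]    # kW, hypothetical model output
        measured  = [19.8, 18.0, 16.9, 21.2]    # kW, hypothetical plant measurements (snow at index 2)
        print(flag_degradation(predicted, measured))        # -> [2]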

  17. DMFC anode polarization: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energetica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-01-03

    Anode two-phase flow has an important influence on DMFC performance and methanol crossover. In order to elucidate two-phase flow influence on anode performance, in this work, anode polarization is investigated combining experimental and modelling approach. A systematic experimental analysis of operating conditions influence on anode polarization is presented. Hysteresis due to operating condition is observed; experimental results suggest that it arises from methanol accumulation and has to be considered in evaluating DMFC performances and measurements reproducibility. A model of DMFC anode polarization is presented and utilised as tool to investigate anode two-phase flow. The proposed analysis permits one to produce a confident interpretation of the main involved phenomena. In particular, it confirms that methanol electro-oxidation kinetics is weakly dependent on methanol concentration and that methanol transport in gas phase produces an important contribution in anode feeding. Moreover, it emphasises the possibility to optimise anode flow rate in order to improve DMFC performance and reduce methanol crossover. (author)

  18. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block the HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs at a single platform. HIPdb is a manually curated database of experimentally verified HIV inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV e.g. fusion, integration, reverse transcription etc. This database provides experimental information of 981 peptides. These are of varying length obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC(50), assay and reference. The database provides user friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. HIPdb database is freely available at http://crdd.osdd.net/servers/hipdb. Comprehensive information of this database will be helpful in selecting/designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide based therapeutics development.

  19. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during the workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbo-machineries or other energy-related applications, and have been selected for ETDE. (J.S.)

  20. Calculating of Dose Distribution in Tongue Brachytherapy by Different Radioisotopes using Monte Carlo Simulation and Comparing by Experimental Data

    Directory of Open Access Journals (Sweden)

    Banafsheh Zeinali Rafsanjani

    2011-06-01

    Introduction: Among the different kinds of oral cavity cancers, tongue cancer occurs most frequently. Brachytherapy is the most common method used to treat tongue cancers. Long sources are used in different techniques of tongue brachytherapy. The objective of this study is to assess the dose distribution around long sources, comparing different radioisotopes as brachytherapy sources, measuring the homogeneity of the dose delivered to the treatment volume, and comparing the mandible dose and the dose to tongue regions near the mandible with and without a shield. Material and Method: The Monte Carlo code MCNP4C was used for the simulation. The accuracy of the simulation was verified by comparing the results with experimental data. Sources of Ir-192, Cs-137, Ra-226, Au-198, In-111 and Ba-131 were simulated, and the source positions were determined by the Paris system. Results: The percentages of mandible dose reduction with use of a 2 mm Pb shield for the sources mentioned above were: 35.4%, 20.1%, 86.6%, 32.24%, 75.6%, and 36.8%. The tongue dose near the mandible did not change significantly with use of the shield. The dose homogeneity, from highest to lowest, was obtained from these sources: Cs-137, Au-198, Ir-192, Ba-131, In-111 and Ra-226. Discussion and Conclusion: Ir-192 and Cs-137 were the best sources for tongue brachytherapy treatment, but In-111 and Ra-226 were not suitable choices for tongue brachytherapy. Sources such as Au-198 and Ba-131 had roughly the same performance as Ir-192

  1. Numerical and Experimental Validation of a New Damage Initiation Criterion

    Science.gov (United States)

    Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.

    2017-09-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where the damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven out of nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are briefly discussed.

  2. Dynamic Modeling of Wind Turbine Gearboxes and Experimental Validation

    DEFF Research Database (Denmark)

    Pedersen, Rune

    Grinding corrections are often applied to gear teeth, which will alter the load distribution across the tooth. Grinding corrections will also change the load sharing between neighboring tooth pairs, and in turn the gear mesh stiffness. In this thesis, a model for calculating the gear mesh stiffness...... is presented. The model takes into account the effects of load and applied grinding corrections. The results are verified by comparing to simulated and experimental results reported in the existing literature. Using gear data loosely based on a 1 MW wind turbine gearbox, the gear mesh stiffness is expanded...

  3. Experimental validation of flexible multibody dynamics beam formulations

    Energy Technology Data Exchange (ETDEWEB)

    Bauchau, Olivier A., E-mail: olivier.bauchau@sjtu.edu.cn; Han, Shilei [University of Michigan-Shanghai Jiao Tong University Joint Institute (China); Mikkola, Aki; Matikainen, Marko K. [Lappeenranta University of Technology, Department of Mechanical Engineering (Finland); Gruber, Peter [Austrian Center of Competence in Mechatronics GmbH (Austria)

    2015-08-15

    In this paper, the accuracies of the geometrically exact beam and absolute nodal coordinate formulations are studied by comparing their predictions against an experimental data set referred to as the “Princeton beam experiment.” The experiment deals with a cantilevered beam experiencing coupled flap, lag, and twist deformations. In the absolute nodal coordinate formulation, two different beam elements are used. The first is based on a shear deformable approach in which the element kinematics is described using two nodes. The second is based on a recently proposed approach featuring three nodes. The numerical results for the geometrically exact beam formulation and the recently proposed three-node absolute nodal coordinate formulation agree well with the experimental data. The two-node beam element predictions are similar to those of linear beam theory. This study suggests that a careful and thorough evaluation of beam elements must be carried out to assess their ability to deal with the three-dimensional deformations typically found in flexible multibody systems.

  4. Observers for vehicle tyre/road forces estimation: experimental validation

    Science.gov (United States)

    Doumiati, M.; Victorino, A.; Lechner, D.; Baffet, G.; Charara, A.

    2010-11-01

    The motion of a vehicle is governed by the forces generated between the tyres and the road. Knowledge of these vehicle dynamic variables is important for vehicle control systems that aim to enhance vehicle stability and passenger safety. This study introduces a new estimation process for tyre/road forces. It presents many benefits over the existing state-of-art works, within the dynamic estimation framework. One of these major contributions consists of discussing in detail the vertical and lateral tyre forces at each tyre. The proposed method is based on the dynamic response of a vehicle instrumented with potentially integrated sensors. The estimation process is separated into two principal blocks. The role of the first block is to estimate vertical tyre forces, whereas in the second block two observers are proposed and compared for the estimation of lateral tyre/road forces. The different observers are based on a prediction/estimation Kalman filter. The performance of this concept is tested and compared with real experimental data using a laboratory car. Experimental results show that the proposed approach is a promising technique to provide accurate estimation. Thus, it can be considered as a practical low-cost solution for calculating vertical and lateral tyre/road forces.
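
    The observers mentioned above build on the standard Kalman predict/update recursion; the generic linear step below is only that textbook recursion with toy numbers, not the vehicle-specific prediction/estimation filter of the paper.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter.
            x, P: prior state and covariance; z: measurement; F: state transition;
            H: observation matrix; Q, R: process and measurement noise covariances."""
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy scalar example: smoothing a slowly varying lateral tyre force from noisy readings.
        x, P = np.array([0.0]), np.eye(1) * 1e3
        for z in (950.0, 1010.0, 990.0):                    # hypothetical force measurements [N]
            x, P = kalman_step(x, P, np.array([z]), np.eye(1), np.eye(1),
                               np.eye(1) * 10.0, np.eye(1) * 100.0)
        print(x)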

  5. Numerical modelling of negative discharges in air with experimental validation

    International Nuclear Information System (INIS)

    Tran, T N; Golosnoy, I O; Lewin, P L; Georghiou, G E

    2011-01-01

    Axisymmetric finite element models have been developed for the simulation of negative discharges in air without and with the presence of dielectrics. The models are based on the hydrodynamic drift-diffusion approximation. A set of continuity equations accounting for the movement, generation and loss of charge carriers (electrons, positive and negative ions) is coupled with Poisson's equation to take into account the effect of space and surface charges on the electric field. The model of a negative corona discharge (without dielectric barriers) in a needle-plane geometry is analysed first. The results obtained show good agreement with experimental observations for various Trichel pulse characteristics. With dielectric barriers introduced into the discharge system, the surface discharge exhibits some similarities and differences to the corona case. The model studies the dynamics of volume charge generation, electric field variations and charge accumulation over the dielectric surface. The predicted surface charge density is consistent with experimental results obtained from the Pockels experiment in terms of distribution form and magnitude.
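
    For reference, the hydrodynamic drift-diffusion system referred to above is usually written as coupled continuity equations for electrons and ions together with Poisson's equation; the generic form below uses standard symbols (w for drift velocities, D for the electron diffusion coefficient, S for net source terms lumping ionization, attachment and recombination) and is not copied from the paper.

        \frac{\partial n_e}{\partial t} + \nabla\cdot\left(n_e\,\mathbf{w}_e - D_e\,\nabla n_e\right) = S_e,\qquad
        \frac{\partial n_p}{\partial t} + \nabla\cdot\left(n_p\,\mathbf{w}_p\right) = S_p,\qquad
        \frac{\partial n_n}{\partial t} + \nabla\cdot\left(n_n\,\mathbf{w}_n\right) = S_n,

        \nabla^2 V = -\frac{e}{\varepsilon_0}\,\bigl(n_p - n_e - n_n\bigr),\qquad \mathbf{E} = -\nabla V.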

  6. 76 FR 81991 - National Spectrum Sharing Research Experimentation, Validation, Verification, Demonstration and...

    Science.gov (United States)

    2011-12-29

    ... NATIONAL SCIENCE FOUNDATION National Spectrum Sharing Research Experimentation, Validation... requirements of national level spectrum research, development, demonstration, and field trial facilities... to determine the optimal way to manage and use the radio spectrum. During Workshop I held at Boulder...

  7. Electrode-tissues interface: modeling and experimental validation

    International Nuclear Information System (INIS)

    Sawan, M; Laaziri, Y; Mounaim, F; Elzayat, E; Corcos, J; Elhilali, M M

    2007-01-01

    The electrode-tissues interface (ETI) is one of the key issues in implantable devices such as stimulators and sensors. Once a stimulator is implanted, safety and reliability become more and more critical. In this case, modeling and monitoring of the ETI are required. We propose an empirical model for the ETI and a dedicated integrated circuit to measure its complex impedance. These measurements, in the frequency range of 1 Hz to 100 kHz, were achieved in acute dog experiments. The model demonstrates a close fit with the experimental measurements. In addition, a custom monitoring device based on a stimuli current generator has been completed to evaluate the phase shift and voltage across the electrodes and to transmit the values wirelessly to an external controller. This integrated circuit has been fabricated in a CMOS 0.18 μm process, consumes only 4 mW during measurements and occupies an area of 1 mm². (review article)

  8. EXPERIMENTAL VALIDATION OF CUMULATIVE SURFACE LOCATION ERROR FOR TURNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Adam K. Kiss

    2016-02-01

    The aim of this study is to create a mechanical model suitable for investigating the surface quality in turning processes, based on the Cumulative Surface Location Error (CSLE), which describes the series of consecutive Surface Location Errors (SLE) in roughing operations. In the established model, the investigated CSLE depends on the current and previously resulting SLE by means of the variation of the width of cut. The behaviour of the system can be described as an implicit discrete map. The stationary Surface Location Error and its bifurcations were analysed, and a flip-type bifurcation was observed for the CSLE. Experimental verification of the theoretical results was carried out.

  9. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  10. Stratification of bubbly horizontal flows: modeling and experimental validation

    International Nuclear Information System (INIS)

    Bottin, M.

    2010-01-01

    Hot films and optical probes enabled the acquisition of measurements in bubbly flows at 5, 20 and 40 diameters from the inlet of the test section of the METERO facility, a horizontal circular pipe of 100 mm inner diameter. The distribution of the different phases, the existence of coalescence and sedimentation mechanisms, the influence of the liquid and gas flow rates, and the radial and axial evolutions are analyzed thanks to fast-camera videos and numerous and varied experimental results (void fraction, bubble sizes, interfacial area, mean and fluctuating velocities and turbulent kinetic energy of the liquid phase). Models, based on the idea that the flow reaches an equilibrium state sufficiently far from the inlet of the pipe, were developed to simulate mean interfacial area and turbulent kinetic energy transports in bubbly flows. (author)

  11. Brazilian Irradiation Project: CAFE-MOD1 validation experimental program

    International Nuclear Information System (INIS)

    Mattos, Joao Roberto Loureiro de; Costa, Antonio Carlos L. da; Esteves, Fernando Avelar; Dias, Marcio Soares

    1999-01-01

The Brazilian Irradiation Project, whose purpose is to provide Brazil with a minimal structure for qualifying the design, fabrication and quality procedures of nuclear fuels, consists of three main facilities: the IEA-R1 reactor of IPEN-CNEN/SP, the CAFE-MOD1 irradiation device and a unit of hot cells. The CAFE-MOD1 is based on concepts successfully used for more than 20 years in the main nuclear institutes around the world. Although these concepts are well proven, they must be adapted to the conditions of each reactor. For this purpose, an experimental program is under way aiming at the certification of the criteria and operational limits of the CAFE-MOD1 in order to obtain authorization for its installation at the IEA-R1 reactor. (author)

  12. Multi-actuators vehicle collision avoidance system - Experimental validation

    Science.gov (United States)

    Hamid, Umar Zakir Abdul; Zakuan, Fakhrul Razi Ahmad; Akmal Zulkepli, Khairul; Zulfaqar Azmi, Muhammad; Zamzuri, Hairi; Rahman, Mohd Azizi Abdul; Aizzat Zakaria, Muhammad

    2018-04-01

The Insurance Institute for Highway Safety (IIHS) of the United States has reported that a significant share of road accidents would be preventable if more automated active safety applications, including collision avoidance systems, were adopted in vehicles. Autonomous intervention by active steering and braking systems in hazardous scenarios can help the driver mitigate collisions. In this work, a real-time platform for a multi-actuator vehicle collision avoidance system is developed as part of a continuing research effort to develop a fully autonomous vehicle in Malaysia. The vehicle is a modular platform that can be used for different research purposes and is designated the Intelligent Drive Project (iDrive). The proposed collision avoidance design is validated in a controlled environment, where the coupled longitudinal and lateral motion control system is expected to provide the desired braking and steering actuation in the presence of a static frontal obstacle. Results indicate the ability of the platform to perform multi-actuator collision avoidance navigation in the hazardous scenario, thus avoiding the obstacle. The findings of this work are beneficial for the development of more complex and nonlinear real-time collision avoidance work in the future.

  13. Numerical simulation and experimental validation of aircraft ground deicing model

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2016-05-01

Full Text Available Aircraft ground deicing plays an important role in guaranteeing aircraft safety. In practice, most airports use as much deicing fluid as possible to remove the ice, which wastes deicing fluid and pollutes the environment. A model of aircraft ground deicing is therefore needed as a foundation for subsequent research, such as the optimization of deicing fluid consumption. In this article, the heat balance of the deicing process is described, and a dynamic model of the process is derived from an analysis of the deicing mechanism. In the dynamic model, the surface temperature of the deicing fluid and the ice thickness are the state variables, while the fluid flow rate, the initial temperature, and the injection time of the deicing fluid are the control parameters. Neglecting the heat exchange between the deicing fluid and the environment yields a simplified model. The validity of the simplified model is examined by numerical simulation, and the impacts of the flow rate, the initial temperature and the injection time on the deicing process are investigated. To verify the model, a semi-physical experimental system was established, consisting of a constant low-temperature test chamber, an ice simulation system, a deicing fluid heating and spraying system, a simulated wing, test sensors, and a computer measurement and control system. The test data confirm the validity of the dynamic model and the accuracy of the simulation analysis.
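As a rough illustration of the kind of dynamic model described (states: fluid film surface temperature and ice thickness; controls: flow rate, initial fluid temperature and injection time), the sketch below integrates a deliberately simplified heat and mass balance with explicit Euler steps. Every coefficient, the function simulate_deicing and its parameter values are assumptions for illustration only, not the model of the cited article.

```python
import numpy as np

def simulate_deicing(flow_rate, fluid_temp_in, t_inject, dt=1.0, t_end=600.0):
    """Toy aircraft ground deicing balance (all coefficients assumed).

    States: surface temperature of the fluid film T (deg C) and ice
    thickness h (mm). During injection, hot fluid raises T; when T is
    above 0 deg C the ice melts at a rate proportional to T.
    """
    c_heat, c_loss, c_melt = 0.02, 0.005, 0.004   # assumed lumped coefficients
    T, h = -10.0, 5.0                             # initial film temperature, ice thickness
    for t in np.arange(0.0, t_end, dt):
        injecting = t < t_inject
        dT = c_heat * flow_rate * (fluid_temp_in - T) * injecting - c_loss * (T - (-10.0))
        dh = -c_melt * max(T, 0.0) if h > 0.0 else 0.0
        T += dT * dt
        h = max(h + dh * dt, 0.0)
    return T, h

T_final, h_final = simulate_deicing(flow_rate=2.0, fluid_temp_in=80.0, t_inject=300.0)
print(f"final film temperature: {T_final:.1f} C, remaining ice: {h_final:.2f} mm")
```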

  14. Simulation of the AC corona phenomenon with experimental validation

    International Nuclear Information System (INIS)

    Villa, Andrea; Barbieri, Luca; Marco, Gondola; Malgesini, Roberto; Leon-Garzon, Andres R

    2017-01-01

The corona effect, and in particular the Trichel phenomenon, is an important aspect of plasma physics with many technical applications, such as pollution reduction and surface and medical treatments. This phenomenon is also associated with components used in the power industry, where in many cases it is a source of electromagnetic disturbance, noise and production of undesired chemically active species. Although the power industry to date mainly uses alternating current (AC) transmission, most studies of the corona effect have been carried out with direct current (DC) sources. There is therefore technical interest in validating numerical codes capable of simulating the AC phenomenon. In this work we describe a set of partial differential equations that is comprehensive enough to reproduce the distinctive features of the corona in an AC regime. The model embeds selectable chemical databases, comprising tens of chemical species and hundreds of reactions, the thermal dynamics of neutral species and photoionization. A large set of parameters, deduced from experiments and numerical estimates, is compared to assess the effectiveness of the proposed approach. (paper)

  15. A comprehensive system for dosimetric commissioning and Monte Carlo validation for the small animal radiation research platform.

    Science.gov (United States)

    Tryggestad, E; Armour, M; Iordachita, I; Verhaegen, F; Wong, J W

    2009-09-07

    Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP's treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. We find that the SARRP provides sufficient therapeutic dose rates ranging from 102 to 228 cGy min(-1) at 1 cm depth for the available set of high-precision beams ranging from 0.5 to 5 mm in size. In terms of depth-dose, the mean of the absolute percentage differences between the Monte Carlo calculations and measurement is 3.4% over the full range of sampled depths spanning 0.5-7.2 cm for the 3 and 5 mm beams. The measured and computed profiles for these beams agree well overall; of note, good agreement is observed in the profile tails. Especially for the smallest 0.5 and 1 mm beams, including a more realistic description of the effective x-ray source into the Monte Carlo model may be important.
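The 3.4% figure quoted above is a mean absolute percentage difference between simulated and measured depth-dose values. The sketch below shows one way such a summary can be computed; the arrays are placeholders standing in for the SARRP data, and the helper name mean_abs_percent_diff is an assumption.

```python
import numpy as np

def mean_abs_percent_diff(measured, simulated):
    """Mean of |simulated - measured| / measured, expressed in percent."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.mean(np.abs(simulated - measured) / measured) * 100.0)

# Placeholder normalised depth-dose samples, not the published SARRP data.
depths_cm = np.array([0.5, 1.0, 2.0, 4.0, 7.2])
measured = np.array([1.00, 0.93, 0.78, 0.55, 0.30])
simulated = np.array([1.02, 0.91, 0.80, 0.53, 0.31])
print(f"mean absolute percentage difference: {mean_abs_percent_diff(measured, simulated):.1f}%")
```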

  16. A comprehensive system for dosimetric commissioning and Monte Carlo validation for the small animal radiation research platform

    Energy Technology Data Exchange (ETDEWEB)

    Tryggestad, E; Armour, M; Wong, J W [Deptartment of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD (United States); Iordachita, I [Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD (United States); Verhaegen, F [Department of Radiation Oncology (MAASTRO Physics), GROW School, Maastricht University Medical Center, Maastricht (Netherlands)

    2009-09-07

    Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP's treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. We find that the SARRP provides sufficient therapeutic dose rates ranging from 102 to 228 cGy min{sup -1} at 1 cm depth for the available set of high-precision beams ranging from 0.5 to 5 mm in size. In terms of depth-dose, the mean of the absolute percentage differences between the Monte Carlo calculations and measurement is 3.4% over the full range of sampled depths spanning 0.5-7.2 cm for the 3 and 5 mm beams. The measured and computed profiles for these beams agree well overall; of note, good agreement is observed in the profile tails. Especially for the smallest 0.5 and 1 mm beams, including a more realistic description of the effective x-ray source into the Monte Carlo model may be important.

  17. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
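As a minimal illustration of the MCMC family discussed in the review, here is a random-walk Metropolis sampler for a one-dimensional target known only up to a normalising constant (a standard normal in this toy example). It is a generic textbook sketch, not code from the cited paper.

```python
import math
import random

def metropolis(log_target, x0=0.0, step=1.0, n_samples=20000, seed=1):
    """Random-walk Metropolis sampler for an unnormalised 1-D target density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal)/target(x)).
        log_alpha = min(0.0, log_target(proposal) - log_target(x))
        if rng.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

# Unnormalised standard normal playing the role of the "posterior".
log_target = lambda x: -0.5 * x * x
samples = metropolis(log_target)
kept = samples[5000:]                       # discard burn-in
mean = sum(kept) / len(kept)
var = sum((s - mean) ** 2 for s in kept) / len(kept)
print(f"posterior mean ~ {mean:.3f}, variance ~ {var:.3f}  (expected 0 and 1)")
```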

  18. Experimental verification of lung dose with radiochromic film: comparison with Monte Carlo simulations and commercially available treatment planning systems

    International Nuclear Information System (INIS)

    Paelinck, L; Reynaert, N; Thierens, H; Neve, W De; Wagter, C de

    2005-01-01

    The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 x 12 x 12 cm 3 containing a central cavity of 6 x 6 x 6 cm 3 filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 x 10 cm 2 field and a larger 10 x 10 cm 2 field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between

  19. Experimental verification of lung dose with radiochromic film: comparison with Monte Carlo simulations and commercially available treatment planning systems

    Science.gov (United States)

    Paelinck, L.; Reynaert, N.; Thierens, H.; DeNeve, W.; DeWagter, C.

    2005-05-01

    The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 × 12 × 12 cm3 containing a central cavity of 6 × 6 × 6 cm3 filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 × 10 cm2 field and a larger 10 × 10 cm2 field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between

  20. HZETRN radiation transport validation using balloon-based experimental data

    Science.gov (United States)

    Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.

    2018-05-01

    The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that
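The summary statistics quoted above (median relative difference, fraction of differences beyond ±40%, fraction of under-predictions) can be computed from paired model and measured fluxes as in the sketch below. The arrays are placeholders, not the balloon data, and the helper name flux_difference_summary is an assumption.

```python
import numpy as np

def flux_difference_summary(model_flux, measured_flux):
    """Relative model-minus-measurement differences and a few summary statistics."""
    rel = (np.asarray(model_flux, dtype=float) - np.asarray(measured_flux, dtype=float)) \
          / np.asarray(measured_flux, dtype=float)
    return {
        "median_abs_percent": float(np.median(np.abs(rel)) * 100.0),
        "fraction_beyond_40pct": float(np.mean(np.abs(rel) > 0.40)),
        "fraction_underpredicted": float(np.mean(rel < 0.0)),
    }

# Placeholder differential fluxes (arbitrary units), not the balloon measurements.
measured = np.array([12.0, 8.5, 5.1, 2.4, 1.1, 0.45])
model    = np.array([ 9.0, 7.9, 3.0, 2.0, 0.9, 0.50])
print(flux_difference_summary(model, measured))
```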

  1. Helicopter noise in hover: Computational modelling and experimental validation

    Science.gov (United States)

    Kopiev, V. F.; Zaytsev, M. Yu.; Vorontsov, V. I.; Karabasov, S. A.; Anikin, V. A.

    2017-11-01

The aeroacoustic characteristics of a helicopter rotor are calculated by a new method in order to assess its applicability to evaluating rotor performance in hover. Direct solution of the Euler equations in a noninertial coordinate system is used to calculate the near-field flow around the spinning rotor. The far-field noise is calculated by the Ffowcs Williams-Hawkings (FW-H) method using permeable control surfaces that enclose the blade. For a multiblade rotor, the signal obtained is duplicated and shifted in phase for each successive blade, which yields the spectral characteristics of the far-field noise. To determine the integral aerodynamic characteristics of the rotor, software was written to calculate the thrust and torque from the near-field flow solution. The results of the numerical simulation are compared with experimental acoustic and aerodynamic data for a large-scale model of a helicopter main rotor in an open test facility. Two- and four-blade configurations of the rotor are considered in different hover conditions. The proposed method satisfactorily predicts the aerodynamic characteristics of the blades in such conditions and gives good estimates for the first harmonics of the noise. This permits the practical use of the proposed method, not only for hovering but also for forward flight.

  2. Experimental validation of incomplete data CT image reconstruction techniques

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Hsiao, M.L.; Tam, K.C.

    1989-01-01

    X-ray CT inspection of large metal parts is often limited by x-ray penetration problems along many of the ray paths required for a complete CT data set. In addition, because of the complex geometry of many industrial parts, manipulation difficulties often prevent scanning over some range of angles. CT images reconstructed from these incomplete data sets contain a variety of artifacts which limit their usefulness in part quality determination. Over the past several years, the authors' company has developed 2 new methods of incorporating a priori information about the parts under inspection to significantly improve incomplete data CT image quality. This work reviews the methods which were developed and presents experimental results which confirm the effectiveness of the techniques. The new methods for dealing with incomplete CT data sets rely on a priori information from part blueprints (in electronic form), outer boundary information from touch sensors, estimates of part outer boundaries from available x-ray data, and linear x-ray attenuation coefficients of the part. The two methods make use of this information in different fashions. The relative performance of the two methods in detecting various flaw types is compared. Methods for accurately registering a priori information with x-ray data are also described. These results are critical to a new industrial x-ray inspection cell built for inspection of large aircraft engine parts

  3. Experimental Validation of Elliptical Fin-Opening Behavior

    Directory of Open Access Journals (Sweden)

    James M. Garner

    2003-01-01

    Full Text Available An effort to improve the performance of ordnance has led to the consideration of the use of folding elliptical fins for projectile stabilization. A second order differential equation was used to model elliptical fin deployment history and accounts for: deployment with respect to the geometric properties of the fin, the variation in fin aerodynamics during deployment, the initial yaw effect on fin opening, and the variation in deployment speed based on changes in projectile spin. This model supports tests conducted at the Transonic Experimental Facility, Aberdeen Proving Ground examining the opening behavior of these uniquely shaped fins. The fins use the centrifugal force from the projectile spin to deploy. During the deployment, the fin aerodynamic forces vary with angle-of-attack changes to the free stream. Model results indicate that projectile spin dominates the initial opening rates and aerodynamics dominate near the fully open state. The model results are examined to explain the observed behaviors, and suggest improvements for later designs.

  4. Study of experimental validation for combustion analysis of GOTHIC code

    International Nuclear Information System (INIS)

    Lee, J. Y.; Yang, S. Y.; Park, K. C.; Jeong, S. H.

    2001-01-01

This study presents lumped-parameter and subdivided GOTHIC6 code analyses of the premixed hydrogen combustion experiment performed at Seoul National University and compares them with the experimental results. The experimental facility has a free volume of 16367 cc and a rectangular shape. The tests were performed with a unit equivalence ratio of hydrogen and air and with various igniter positions. Using the lumped and mechanistic combustion models in the GOTHIC6 code, the experiments were simulated under the same conditions. The GOTHIC6 predictions of the combustion response do not agree well with the experimental results. In terms of combustion time, the lumped combustion model of the GOTHIC6 code does not simulate the physical phenomena of combustion appropriately. With the mechanistic combustion model, the combustion time is predicted well, but the calculated induction time is remarkably longer than the experimental one. In addition, the laminar combustion model of GOTHIC6 cannot simulate the combustion phenomena properly unless the user-defined parameters are set appropriately. Finally, pressure is not a suitable variable for characterizing the three-dimensional effects of combustion.

  5. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling; Etalonnage d'un spectrometre gamma en vue de la mesure de la radioactivite naturelle. Mesures experimentales et modelisation par techniques de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled the spectrometer with the Monte Carlo code Geant4. We developed a geometrical model that takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of the matrix effect. (author)

6. Processing and benchmarking of the Evaluated Nuclear Data File/B-VIII.0β4 cross-section library by analysis of a series of critical experimental benchmarks using the Monte Carlo code MCNP(X) and NJOY2016

    Directory of Open Access Journals (Sweden)

    Kabach Ouadie

    2017-12-01

Full Text Available To validate the new Evaluated Nuclear Data File (ENDF/B-VIII.0β4) library, 31 different critical cores were selected and used for a benchmark test of the important parameter keff. The four libraries used were processed with the nuclear data processing code NJOY2016. The results obtained with the ENDF/B-VIII.0β4 library were compared against those calculated with the ENDF/B-VI.8, ENDF/B-VII.0, and ENDF/B-VII.1 libraries using the Monte Carlo N-Particle (MCNP(X)) code. All the MCNP(X) calculations of keff values with these four libraries were compared with the experimentally measured results, which are available in the International Criticality Safety Benchmark Evaluation Project. The results obtained are discussed and analyzed in this paper.
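A typical way to summarise such criticality benchmarks is the calculated-to-experimental (C/E) ratio and the difference expressed in pcm (1 pcm = 10^-5 in k). The sketch below evaluates both for placeholder keff pairs, which are not results from the cited study.

```python
def keff_comparison(k_calc, k_exp):
    """Return the C/E ratio and the calculated-minus-experimental difference in pcm."""
    return k_calc / k_exp, (k_calc - k_exp) * 1.0e5

# Placeholder benchmark values for illustration, not results from the cited study.
cases = {"case-1": (0.99872, 1.00000), "case-2": (1.00140, 1.00050)}
for name, (k_calc, k_exp) in cases.items():
    ce, pcm = keff_comparison(k_calc, k_exp)
    print(f"{name}: C/E = {ce:.5f}, difference = {pcm:+.0f} pcm")
```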

  7. Geant4-DNA coupling and validation in the GATE Monte Carlo platform for DNA molecules irradiation in a calculation grid environment

    International Nuclear Information System (INIS)

    Pham, Quang Trung

    2014-01-01

Monte Carlo simulation methods are successfully used in various areas of medical physics and at different scales, for example from radiation therapy treatment planning systems to the prediction of the effects of radiation in cancer cells. The Monte Carlo simulation platform GATE, based on the Geant4 toolkit, offers features dedicated to simulations in medical physics (nuclear medicine and radiotherapy). For radiobiology applications, the Geant4-DNA physical models are implemented to track particles down to very low energies (eV) and are adapted to the estimation of micro-dosimetric quantities. In order to implement a multi-scale Monte Carlo platform, we first validated the physical models of Geant4-DNA and integrated them into GATE. We then validated this implementation in the context of radiation therapy and proton therapy. To validate the Geant4-DNA physical models, dose point kernels for monoenergetic electrons (10 keV to 100 keV) were simulated using the Geant4-DNA physical models and compared to those simulated with the Geant4 standard physical models and with another Monte Carlo code, EGSnrc. The ranges and stopping powers of electrons (7.4 eV to 1 MeV) and protons (1 keV to 100 MeV) calculated with GATE/Geant4-DNA were then compared with the literature. We then used the GATE platform to simulate the impact of clinical and preclinical beams on cellular DNA. We modeled a 193.1 MeV clinical proton beam, a 6 MeV clinical electron beam and an X-ray irradiator beam. The beam models were validated by comparing the absorbed dose computed and measured in liquid water. The beams were then used to calculate the frequency of energy deposits in DNA represented by different geometries. First, the DNA molecule was represented by small cylinders: 2 nm x 2 nm (∼10 bp), 5 nm x 10 nm (nucleosome) and 25 nm x 25 nm (chromatin fiber). All these cylinders were placed randomly in a sphere of liquid water (500 nm radius). Then we reconstructed the DNA

  8. Development of a quality-assessment tool for experimental bruxism studies: reliability and validity.

    Science.gov (United States)

    Dawson, Andreas; Raphael, Karen G; Glaros, Alan; Axelsson, Susanna; Arima, Taro; Ernberg, Malin; Farella, Mauro; Lobbezoo, Frank; Manfredini, Daniele; Michelotti, Ambra; Svensson, Peter; List, Thomas

    2013-01-01

To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews. Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity assessment, (4) reliability and discriminative validity assessment, and (5) instrument refinement. The kappa value and phi coefficient were calculated to assess inter-observer reliability and discriminative ability, respectively. Following preliminary decisions and a literature review, a list of 52 items to be considered for inclusion in the tool was compiled. Eleven experts were invited to join a Delphi panel and 10 accepted. Four Delphi rounds reduced the preliminary tool, the Quality-Assessment Tool for Experimental Bruxism Studies (Qu-ATEBS), to 8 items: study aim, study sample, control condition or group, study design, experimental bruxism task, statistics, interpretation of results, and conflict of interest statement. Consensus among the Delphi panelists yielded good face validity. Inter-observer reliability was acceptable (k = 0.77) and discriminative validity was excellent (phi coefficient 1.0). Qu-ATEBS, when used in systematic reviews of experimental bruxism studies, exhibits face validity, excellent discriminative validity, and acceptable inter-observer reliability. Development of quality-assessment tools for many other topics in the orofacial pain literature is needed and may follow the described procedure.
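The reliability and discriminative-ability statistics mentioned (Cohen's kappa and the phi coefficient) can both be computed from a 2 x 2 table of paired binary ratings. The sketch below uses made-up counts purely to show the arithmetic; the function name kappa_and_phi is an assumption.

```python
import math

def kappa_and_phi(a, b, c, d):
    """Cohen's kappa and the phi coefficient for a 2x2 agreement table.

    Table layout (counts), rows = rater 1 (yes/no), columns = rater 2 (yes/no):
        a = yes/yes, b = yes/no, c = no/yes, d = no/no.
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return kappa, phi

# Made-up counts for illustration only, not data from the cited study.
kappa, phi = kappa_and_phi(a=18, b=2, c=3, d=17)
print(f"kappa = {kappa:.2f}, phi = {phi:.2f}")
```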

  9. Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields

    International Nuclear Information System (INIS)

    Lloyd, Samantha A. M.; Gagne, Isabelle M.; Zavgorodni, Sergei; Bazalova-Carter, Magdalena

    2016-01-01

    Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm 2 MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm 2 ). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw-position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm 2 and jaw positions that range from the MLC-field size to 40 × 40 cm 2 . Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm 2 field where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm 2 field which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm 2 fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase–spaces available from Varian have been

  10. Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, Samantha A. M. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 3P6 5C2 (Canada); Gagne, Isabelle M., E-mail: imgagne@bccancer.bc.ca; Zavgorodni, Sergei [Department of Medical Physics, BC Cancer Agency–Vancouver Island Centre, Victoria, British Columbia V8R 6V5, Canada and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada); Bazalova-Carter, Magdalena [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada)

    2016-06-15

    Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm{sup 2} MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm{sup 2}). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw-position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm{sup 2} and jaw positions that range from the MLC-field size to 40 × 40 cm{sup 2}. Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm{sup 2} field where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm{sup 2} field which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm{sup 2} fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase

  11. Monte Carlo modeling and simulations of the High Definition (HD120) micro MLC and validation against measurements for a 6 MV beam

    International Nuclear Information System (INIS)

    Borges, C.; Zarza-Moreno, M.; Heath, E.; Teixeira, N.; Vaz, P.

    2012-01-01

    Purpose: The most recent Varian micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian Trilogy (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their position when a leaf boundary is crossed. The leaf material density and abutting air gap between leaves were adjusted in order to obtain a good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results of these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, the discrepancies, with and without MLC, between the opened field simulations and the measurements using ionization chambers in a water phantom, for the off-axis profiles are below 2% and in depth-dose profiles are below 2% after the maximum dose depth and below 4% in the build-up region. On the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g

  12. A validation report for the KALIMER core design computing system by the Monte Carlo transport theory code

    International Nuclear Information System (INIS)

    Lee, Ki Bog; Kim, Yeong Il; Kim, Kang Seok; Kim, Sang Ji; Kim, Young Gyun; Song, Hoon; Lee, Dong Uk; Lee, Byoung Oon; Jang, Jin Wook; Lim, Hyun Jin; Kim, Hak Sung

    2004-05-01

In this report, the results of the KALIMER (Korea Advanced LIquid MEtal Reactor) core design calculated with the K-CORE computing system are compared with and analyzed against those of MCDEP calculations. The effective multiplication factor, flux distribution, fission power distribution and the number densities of the important nuclides affected by the depletion calculation are compared for the R-Z and Hex-Z models of the KALIMER core. It is confirmed that the results of the K-CORE system agree well with those of MCDEP, which is based on the Monte Carlo transport theory method: within 700 pcm for the effective multiplication factor, and within 2% in the driver fuel region and 10% in the radial blanket region for the reaction rates and the fission power density. Thus, the K-CORE system, which treats lumped fission products and the most important nuclides, can be used as a core design tool for KALIMER with the necessary accuracy.

  13. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations

    International Nuclear Information System (INIS)

    Streek, Jacco van de; Neumann, Marcus A.

    2010-01-01

    The accuracy of a dispersion-corrected density functional theory method is validated against 241 experimental organic crystal structures from Acta Cryst. Section E. This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect
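The validation criterion in this study is the r.m.s. Cartesian displacement of non-hydrogen atoms between the experimental and energy-minimised structures. The sketch below computes that quantity for two matched coordinate lists that are assumed to be already overlaid in the same frame, using placeholder coordinates rather than any structure from the paper.

```python
import numpy as np

def rms_cartesian_displacement(coords_exp, coords_min, elements, exclude=("H",)):
    """R.m.s. Cartesian displacement (Angstrom) between matched atom lists,
    excluding hydrogen atoms and assuming the structures are already overlaid."""
    coords_exp = np.asarray(coords_exp, dtype=float)
    coords_min = np.asarray(coords_min, dtype=float)
    keep = np.array([el not in exclude for el in elements])
    diff = coords_exp[keep] - coords_min[keep]
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Placeholder 4-atom fragment (Angstrom), not a structure from the study.
elements = ["C", "O", "N", "H"]
experimental = [[0.00, 0.00, 0.00], [1.23, 0.01, 0.00], [-0.70, 1.15, 0.02], [1.80, 0.90, 0.10]]
minimised   = [[0.02, -0.03, 0.01], [1.20, 0.05, -0.02], [-0.66, 1.10, 0.00], [1.85, 0.95, 0.05]]
print(f"r.m.s. displacement (non-H): {rms_cartesian_displacement(experimental, minimised, elements):.3f} A")
```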

  14. A non-additive repulsive contribution in an equation of state: The development for homonuclear square well chains equation of state validated against Monte Carlo simulation

    International Nuclear Information System (INIS)

    Trinh, Thi-Kim-Hoang; Passarello, Jean-Philippe; Hemptinne, Jean-Charles de; Lugo, Rafael; Lachet, Veronique

    2016-01-01

    This work consists of the adaptation of a non-additive hard sphere theory inspired by Malakhov and Volkov [Polym. Sci., Ser. A 49(6), 745–756 (2007)] to a square-well chain. Using the thermodynamic perturbation theory, an additional term is proposed that describes the effect of perturbing the chain of square well spheres by a non-additive parameter. In order to validate this development, NPT Monte Carlo simulations of thermodynamic and structural properties of the non-additive square well for a pure chain and a binary mixture of chains are performed. Good agreements are observed between the compressibility factors originating from the theory and those from molecular simulations.

  15. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    Energy Technology Data Exchange (ETDEWEB)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); DeMarco, John J. [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain

  16. Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment

    DEFF Research Database (Denmark)

    Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter

    2012-01-01

Experimentally assessing the whole-body specific absorption rate (SARwb) in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting for only the line-of-sight component as a specular path) to assess the whole-body SAR is validated by numerical simulations. An advantage of the proposed method is that it avoids a heavy computational burden because it does not require any discretization. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane-wave assumption is valid, i.e., for large distances from the transmitter. Relative deviations 0...

  17. Experimental validation of a mathematical model for seabed liquefaction in waves

    DEFF Research Database (Denmark)

    Sumer, B. Mutlu; Kirca, Özgür; Fredsøe, Jørgen

    2011-01-01

This paper summarizes the results of an experimental study directed towards the validation of a mathematical model for the buildup of pore-water pressure and the resulting liquefaction of marine soils under progressive waves. Experiments were conducted under controlled conditions with silt (d50 = 0.070 mm) in a wave flume with a soil pit. Waves with heights in the range 7.7-18 cm, a water depth of 55 cm and a wave period of 1.6 s enabled us to study pore-water pressure buildup in both the liquefaction and the no-liquefaction regimes. The experimental data were used to validate the model. A numerical...

  18. Experimental Testing Procedures and Dynamic Model Validation for Vanadium Redox Flow Battery Storage System

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per Bromand

    2013-01-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing...... efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs...

  19. DMFC performance and methanol cross-over: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-10-15

    A combined experimental and modelling approach is proposed to analyze methanol cross-over and its effect on DMFC performance. The experimental analysis is performed in order to allow an accurate investigation of methanol cross-over influence on DMFC performance, hence measurements were characterized in terms of uncertainty and reproducibility. The findings suggest that methanol cross-over is mainly determined by diffusion transport and affects cell performance partly via methanol electro-oxidation at the cathode. The modelling analysis is carried out to further investigate methanol cross-over phenomenon. A simple model evaluates the effectiveness of two proposed interpretations regarding methanol cross-over and its effects. The model is validated using the experimental data gathered. Both the experimental analysis and the proposed and validated model allow a substantial step forward in the understanding of the main phenomena associated with methanol cross-over. The findings confirm the possibility to reduce methanol cross-over by optimizing anode feeding. (author)

  20. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
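The reconstruction described is an MLEM iteration whose system (projection) matrix is obtained from GATE Monte Carlo simulations of the coded mask. The sketch below implements the generic MLEM update on a tiny random system matrix so the update rule itself is visible; it is not the authors' coded-aperture implementation, and all names and sizes are illustrative.

```python
import numpy as np

def mlem(system_matrix, projections, n_iterations=200):
    """Generic MLEM reconstruction: x <- x / (A^T 1) * A^T (y / (A x))."""
    a = np.asarray(system_matrix, dtype=float)
    y = np.asarray(projections, dtype=float)
    x = np.ones(a.shape[1])                    # uniform initial estimate
    sensitivity = a.sum(axis=0)                # A^T 1
    for _ in range(n_iterations):
        expected = a @ x                       # forward projection
        ratio = np.where(expected > 0.0, y / expected, 0.0)
        x *= (a.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Tiny toy problem: in the cited work the system matrix would come from the
# Monte Carlo simulation of the coded mask and gamma camera.
rng = np.random.default_rng(0)
a = rng.random((40, 10))
true_activity = rng.random(10) * 10.0
measured = a @ true_activity                   # noiseless projections
print("reconstructed:", np.round(mlem(a, measured), 2))
print("ground truth: ", np.round(true_activity, 2))
```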

  1. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1995-01-01

This paper describes calculations performed to validate the modified version of the MCNP code, MCNP-DSP, with respect to: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency-analysis parameters of the MCNP-DSP code.

  2. Monte Carlo investigation of I-125 interseed attenuation for standard and thinner seeds in prostate brachytherapy with phantom validation using a MOSFET.

    Science.gov (United States)

    Mason, J; Al-Qaisieh, B; Bownes, P; Henry, A; Thwaites, D

    2013-03-01

    In permanent seed implant prostate brachytherapy the actual dose delivered to the patient may be less than that calculated by TG-43U1 due to interseed attenuation (ISA) and differences between prostate tissue composition and water. In this study the magnitude of the ISA effect is assessed in a phantom and in clinical prostate postimplant cases. Results are compared for seed models 6711 and 9011 with 0.8 and 0.5 mm diameters, respectively. A polymethyl methacrylate (PMMA) phantom was designed to perform ISA measurements in a simple eight-seed arrangement and at the center of an implant of 36 seeds. Monte Carlo (MC) simulation and experimental measurements using a MOSFET dosimeter were used to measure dose rate and the ISA effect. MC simulations of 15 CT-based postimplant prostate treatment plans were performed to compare the clinical impact of ISA on dose to prostate, urethra, rectum, and the volume enclosed by the 100% isodose, for 6711 and 9011 seed models. In the phantom, ISA reduced the dose rate at the MOSFET position by 8.6%-18.3% (6711) and 7.8%-16.7% (9011) depending on the measurement configuration. MOSFET measured dose rates agreed with MC simulation predictions within the MOSFET measurement uncertainty, which ranged from 5.5% to 7.2% depending on the measurement configuration (k = 1, for the mean of four measurements). For 15 clinical implants, the mean ISA effect for 6711 was to reduce prostate D90 by 4.2 Gy (3%), prostate V100 by 0.5 cc (1.4%), urethra D10 by 11.3 Gy (4.4%), rectal D2cc by 5.5 Gy (4.6%), and the 100% isodose volume by 2.3 cc. For the 9011 seed the mean ISA effect reduced prostate D90 by 2.2 Gy (1.6%), prostate V100 by 0.3 cc (0.7%), urethra D10 by 8.0 Gy (3.2%), rectal D2cc by 3.1 Gy (2.7%), and the 100% isodose volume by 1.2 cc. Differences between the MC simulation and TG-43U1 consensus data for the 6711 seed model had a similar impact, reducing mean prostate D90 by 6 Gy (4.2%) and V100 by 0.6 cc (1.8%). ISA causes the delivered dose

  3. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

4. Experimental validation of calculation methods for structures with shock non-linearity

    International Nuclear Information System (INIS)

    Brochard, D.; Buland, P.

    1987-01-01

For the seismic analysis of non-linear structures, numerical methods have been developed which need to be validated against experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components illustrate this presentation. [fr]

  5. Validation of a Wave-Body Interaction Model by Experimental Tests

    DEFF Research Database (Denmark)

    Ferri, Francesco; Kramer, Morten; Pecher, Arthur

    2013-01-01

Within the wave energy field, numerical simulation has recently gained worldwide acceptance as a useful tool alongside physical model testing. The main goal of this work is the validation of a numerical model against experimental results. The numerical model is based on a linear wave-body interaction...

  6. Development of a quality-assessment tool for experimental bruxism studies: reliability and validity

    NARCIS (Netherlands)

    Dawson, A.; Raphael, K.G.; Glaros, A.; Axelsson, S.; Arima, T.; Ernberg, M.; Farella, M.; Lobbezoo, F.; Manfredini, D.; Michelotti, A.; Svensson, P.; List, T.

    2013-01-01

AIMS: To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews. METHODS: Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity assessment, (4) reliability and discriminative validity assessment, and (5) instrument refinement.

  7. Impact-friction vibrations of tubular systems. Numerical simulation and experimental validation

    International Nuclear Information System (INIS)

    Jacquart, G.

    1993-05-01

This note presents a summary of the numerical developments made to simulate impact-friction vibrations of tubular systems, detailing the algorithms used and the expressions for the impact and friction forces. A synthesis of the experimental results obtained on the MASSIF test bench is also presented, together with their comparison with numerical computations in order to validate the numerical approach. (author). 5 refs

  8. Experimental Validation of Mathematical Framework for Fast Switching Valves used in Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

of 10 kW during switching (mean of approximately 250 W) and a pressure loss below 0.5 bar at 600 l/min. The main goal of this article is to validate parts of the mathematical framework based on a series of experiments. Furthermore, this article aims to document the experience gained from the experimental...

  9. Design of passive directional acoustic devices using Topology Optimization - from method to experimental validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Fernandez Grande, Efren

    2016-01-01

    emission in two dimensions and is experimentally validated using three dimensional prints of the optimized designs. The emitted fields exhibit a level difference of at least 15 dB on axis relative to the off-axis directions, over frequency bands of approximately an octave. It is demonstrated to be possible...

  10. Experimental validation of sound field control with a circular double-layer array of loudspeakers

    DEFF Research Database (Denmark)

    Chang, Jiho; Jacobsen, Finn

    2013-01-01

    This paper is concerned with experimental validation of a recently proposed method of controlling sound fields with a circular double-layer array of loudspeakers [Chang and Jacobsen, J. Acoust. Soc. Am. 131(6), 4518-4525 (2012)]. The double-layer of loudspeakers is realized with 20 pairs of closed...

  11. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in Ø1.6 mm ceramic tubes were used for the experiments. Temperatures were measured in plates...

  12. Modeling of surge in free-spool centrifugal compressors : experimental validation

    NARCIS (Netherlands)

    Gravdahl, J.T.; Willems, F.P.T.; Jager, de A.G.; Egeland, O.

    2004-01-01

    The derivation of a compressor characteristic, and the experimental validation of a dynamic model for a variable speed centrifugal compressor using this characteristic, are presented. The dynamic compressor model of Fink et al. is used, and a variable speed compressor characteristic is derived by

  13. Experimental validation of a rate-based model for CO2 capture using an AMP solution

    DEFF Research Database (Denmark)

    Gabrielsen, Jostein; Svendsen, H. F.; Michelsen, Michael Locht

    2007-01-01

    Detailed experimental data, including temperature profiles over the absorber, for a carbon dioxide (CO2) absorber with structured packing in an integrated laboratory pilot plant using an aqueous 2-amino-2-methyl-1-propanol (AMP) solution are presented. The experimental gas-liquid material balance...... was within an average of 3.5% for the experimental conditions presented. A predictive rate-based steady-state model for CO2 absorption into an AMP solution, using an implicit expression for the enhancement factor, has been validated against the presented pilot plant data. Furthermore, a parameter...

  14. Experimental validation of the fluid–structure interaction simulation of a bioprosthetic aortic heart valve

    International Nuclear Information System (INIS)

    Kemp, I.; Dellimore, K.; Rodriguez, R.; Scheffer, C.; Blaine, D.; Weich, H.; Doubell, A.

    2013-01-01

    Experiments performed on a 19 mm diameter bioprosthetic valve were used to successfully validate the fluid–structure interaction (FSI) simulation of an aortic valve at 72 bpm. The FSI simulation was initialized via a novel approach utilizing a Doppler sonogram of the experimentally tested valve. Using this approach very close quantitative agreement (≤12.5 %) between the numerical predictions and experimental values for several key valve performance parameters, including the peak systolic transvalvular pressure gradient, rapid valve opening time and rapid valve closing time, was obtained. The predicted valve leaflet kinematics during opening and closing were also in good agreement with the experimental measurements.

  15. Analytical and Experimental Study for Validation of the Device to Confine BN Reactor Melted Fuel

    International Nuclear Information System (INIS)

    Rogozhkin, S.; Osipov, S.; Sobolev, V.; Shepelev, S.; Kozhaev, A.; Mavrin, M.; Ryabov, A.

    2013-01-01

    To validate the design and confirm the design characteristics of the special retaining device (core catcher) used for protection of BN reactor vessel in the case of a severe beyond-design basis accident with core melting, computational and experimental studies were carried out. The Tray test facility that uses water as coolant was developed and fabricated by OKBM; experimental studies were performed. To verify the methodical approach used for the computational study, experimental results obtained in the Tray test facility were compared with numerical simulation results obtained by the STAR-CCM+ CFD code

  16. SU-E-T-345: Validation of a Patient-Specific Monte Carlo Targeted Radionuclide Therapy Dosimetry Platform

    International Nuclear Information System (INIS)

    Besemer, A; Bednarz, B

    2014-01-01

    Purpose: There is a compelling need for personalized dosimetry in targeted radionuclide therapy (TRT) given that conventional dose calculation methods fail to accurately predict dose response relationships. To address this need, we have developed a Geant4-based Monte Carlo patient-specific 3D dosimetry platform for TRT. This platform calculates patient-specific dose distributions based on serial CT/PET or CT/SPECT images acquired after injection of the TRT agent. In this work, S-values and specific absorbed fractions (SAFs) were calculated using this platform and benchmarked against reference values. Methods: S-values for 1, 10, 100, and 1000 g spherical tumors with uniform activity distributions of I-124, I-125, I-131, F-18, and Ra-223 were calculated and compared to OLINDA/EXM reference values. SAFs for monoenergetic photons of 0.01, 0.1, and 1 MeV and S-values for monoenergetic electrons of 0.935 MeV were calculated for the liver, kidneys, lungs, pancreas, spleen, and adrenals in the Zubal phantom and compared with previously published values. Sufficient particles were simulated to keep the voxel statistical uncertainty below 5%. Results: The calculated spherical S-values agreed within a few percent of the reference data from OLINDA/EXM for each radionuclide and sphere size. The photon SAFs and electron S-values also showed good agreement with previously published values; the values for the source organs agreed within 1%. Conclusion: Our platform has been benchmarked against reference values for a variety of radionuclides and over a wide range of energies and tumor sizes. Therefore, this platform could be used to provide accurate patient-specific dosimetry for use in radiopharmaceutical clinical trials
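
    As a rough illustration of the bookkeeping behind such sphere benchmarks (not the Geant4 platform itself), the sketch below computes a MIRD-style S-value for a uniformly active sphere from per-decay emission energies and absorbed fractions; the emission data and absorbed fractions are invented placeholders.

    ```python
    # Minimal sketch: S-value (Gy per decay) for a uniformly active sphere,
    # assuming hypothetical per-decay emission data and absorbed fractions.
    # Not the benchmarked Geant4 platform -- just the MIRD-style bookkeeping
    # its sphere results are compared against (cf. OLINDA/EXM reference values).

    MEV_TO_J = 1.602176634e-13

    # Hypothetical emission spectrum: (mean energy per decay [MeV], absorbed fraction).
    # Electrons in a small sphere are assumed fully absorbed (phi ~ 1); photons
    # mostly escape, so phi is small and would normally come from Monte Carlo.
    emissions = [
        (0.192, 1.00),   # beta component (assumed locally absorbed)
        (0.364, 0.03),   # gamma component (absorbed fraction assumed from a MC tabulation)
    ]

    def s_value(mass_g, emissions):
        """Return S (Gy per decay) for a sphere of the given mass."""
        mass_kg = mass_g / 1000.0
        energy_j = sum(e_mev * MEV_TO_J * phi for e_mev, phi in emissions)
        return energy_j / mass_kg

    for m in (1.0, 10.0, 100.0, 1000.0):   # sphere masses in grams, as in the abstract
        print(f"{m:7.0f} g sphere: S = {s_value(m, emissions):.3e} Gy/decay")
    ```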

  17. Monte Carlo and experimental evaluation of accuracy and noise properties of two scatter correction methods for SPECT

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Bautovich, G.; Iida, H.; Hutton, B.F.; Braun, M.; Nakamura, T.

    1996-01-01

    Scatter correction is a prerequisite for quantitative SPECT, but potentially increases noise. Monte Carlo simulations (EGS4) and physical phantom measurements were used to compare the accuracy and noise properties of two scatter correction techniques: the triple-energy window (TEW) and the transmission-dependent convolution subtraction (TDCS) techniques. Two scatter functions were investigated for TDCS: (i) the originally proposed mono-exponential function (TDCS-mono) and (ii) an exponential plus Gaussian scatter function (TDCS-Gauss), demonstrated to be superior in our Monte Carlo simulations. Signal-to-noise ratio (S/N) and accuracy were investigated in cylindrical phantoms and a chest phantom. Results from each method were compared to the true primary counts (simulations) or known activity concentrations (phantom studies). 99mTc was used in all cases. Overall, the optimized TDCS-Gauss method performed best, with an accuracy of better than 4% for all simulations and physical phantom studies. Maximum errors for TEW and TDCS-mono of -30 and -22%, respectively, were observed in the heart chamber of the simulated chest phantom. TEW had the worst S/N ratio of the three techniques. The S/N ratios of the two TDCS methods were similar and only slightly lower than those of simulated true primary data. Thus, accurate quantitation can be obtained with TDCS-Gauss, with a relatively small reduction in S/N ratio. (author)
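
    For orientation, the sketch below shows the standard triple-energy-window scatter estimate referred to above; the counts and window widths are purely illustrative and do not come from the study.

    ```python
    # Minimal sketch of the triple-energy-window (TEW) scatter estimate.
    # Counts and window widths are hypothetical; for 99mTc a 20% photopeak window
    # around 140 keV with two narrow flanking sub-windows is a common choice.

    def tew_primary(c_peak, c_low, c_high, w_peak, w_low, w_high):
        """Return (estimated primary counts, estimated scatter counts) per pixel."""
        # Trapezoidal estimate of the scatter inside the photopeak window
        scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
        primary = max(c_peak - scatter, 0.0)
        return primary, scatter

    # Example pixel: 1000 counts in a 28 keV photopeak window, 90 and 40 counts
    # in 3.5 keV lower and upper sub-windows (all numbers purely illustrative).
    primary, scatter = tew_primary(1000.0, 90.0, 40.0, 28.0, 3.5, 3.5)
    print(f"scatter estimate: {scatter:.1f}, primary estimate: {primary:.1f}")
    ```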

  18. Radiation Damage in Nuclear Fuel for Advanced Burner Reactors: Modeling and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gronbech-Jensen, Niels; Asta, Mark; Browning, Nigel; Ozolins, Vidvuds; van de Walle, Axel; Wolverton, Christopher

    2011-12-29

    The consortium has completed its term, and we highlight here its work and accomplishments. As outlined in the proposal, the objective of the work was to advance the theoretical understanding of advanced nuclear fuel materials (oxides) toward a comprehensive modeling strategy that incorporates the different relevant scales involved in radiation damage in oxide fuels. To this end, we pursued three directions: (1) fission fragment and ion trajectory studies through advanced molecular dynamics methods that allow for statistical multi-scale simulations, including an investigation of interatomic force fields appropriate for the energetic multi-scale phenomena of high-energy collisions; (2) studies of defect and gas bubble formation through electronic structure and Monte Carlo simulations; and (3) an experimental component for the characterization of materials, such that comparisons can be made between theory and experiment.

  19. Direct simulation Monte Carlo ray tracing model of light scattering by a class of real particles and comparison with PROGRA2 experimental results

    International Nuclear Information System (INIS)

    Mikrenska, M.; Koulev, P.; Renard, J.-B.; Hadamcik, E.; Worms, J.-C.

    2006-01-01

    The Direct Simulation Monte Carlo (DSMC) model is presented for three-dimensional single scattering of natural light by suspended, randomly oriented, optically homogeneous and isotropic, rounded and stochastically rough cubic particles. The modelled particles have a large size parameter, which allows the geometric optics approximation to be used. The proposed computational model is simple and flexible. It is tested by comparison with the known geometric optics solution for a perfect cube and the Lorenz-Mie solution for a sphere, as extreme cases of the class of rounded cubes. Scattering and polarization properties of particles with various geometrical and optical characteristics are examined. An experimental study of real NaCl crystals with the new PROGRA2 instrument in microgravity conditions is conducted. The experimental and computed polarization and brightness phase curves are compared

  20. An Experimental Simulation to Validate FEM to Predict Transverse Young’s Modulus of FRP Composites

    Directory of Open Access Journals (Sweden)

    V. S. Sai

    2013-01-01

    The finite element method finds application in the analysis of FRP composites owing to its versatility in solving complex cases that are not tractable by exact classical analytical approaches. A finite element result is questionable unless it is obtained from a converged mesh and properly validated. In the present work, specimens are prepared from metallic materials so that the arrangement of fibers in the matrix is close to hexagonal packing, since achieving a similar arrangement in an actual FRP is difficult owing to the small size of the fibers. The transverse Young's moduli of these specimens are determined experimentally. Equivalent FE models are built and the corresponding transverse Young's moduli are compared with the experimental results. The FE values are in good agreement with the experimental results, thus validating FEM for predicting the transverse modulus of FRP composites.
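
    As a quick analytical cross-check of the same quantity (not the FEM approach of the paper), the inverse rule of mixtures gives a lower-bound estimate of the transverse modulus; the constituent moduli below are illustrative values only.

    ```python
    # Quick analytical cross-check (not the paper's FEM): inverse rule of
    # mixtures (Reuss bound) for the transverse Young's modulus of a
    # fiber/matrix composite. Property values are purely illustrative.

    def transverse_modulus_reuss(e_fiber, e_matrix, v_fiber):
        """Lower-bound estimate: 1/E_T = Vf/Ef + (1 - Vf)/Em."""
        return 1.0 / (v_fiber / e_fiber + (1.0 - v_fiber) / e_matrix)

    e_f, e_m = 210.0, 70.0      # GPa, e.g. a steel "fiber" in an aluminium "matrix" (illustrative)
    for vf in (0.3, 0.5, 0.6):
        print(f"Vf = {vf:.1f}: E_T ~ {transverse_modulus_reuss(e_f, e_m, vf):.1f} GPa")
    ```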

  1. SU-F-T-155: Validation of a Commercial Monte Carlo Dose Calculation Algorithm for Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Saini, J; Wong, T [SCCA Proton Therapy Center, Seattle, WA (United States); St James, S; Stewart, R; Bloch, C [University of Washington, Seattle, WA (United States); Traneus, E [Raysearch Laboratories AB, Stockholm. (Sweden)

    2016-06-15

    Purpose: Compare proton pencil beam scanning dose measurements to GATE/GEANT4 (GMC) and RayStation™ Monte Carlo (RMC) simulations. Methods: Proton pencil beam models of the IBA gantry at the Seattle Proton Therapy Center were developed in the GMC code system and a research build of the RMC. For RMC, a preliminary beam model that does not account for upstream halo was used. Depth dose and lateral profiles are compared for the RMC, GMC and a RayStation™ pencil beam dose (RPB) model for three spread out Bragg peaks (SOBPs) in homogenous water phantom. SOBP comparisons were also made among the three models for a phantom with a (i) 2 cm bone and a (ii) 0.5 cm titanium insert. Results: Measurements and GMC estimates of R80 range agree to within 1 mm, and the mean point-to-point dose difference is within 1.2% for all integrated depth dose (IDD) profiles. The dose differences at the peak are 1 to 2%. All of the simulated spot sigmas are within 0.15 mm of the measured values. For the three SOBPs considered, the maximum R80 deviation from measurement for GMC was −0.35 mm, RMC 0.5 mm, and RPB −0.1 mm. The minimum gamma pass using the 3%/3mm criterion for all the profiles was 94%. The dose comparison for heterogeneous inserts in low dose gradient regions showed dose differences greater than 10% at the distal edge of interface between RPB and GMC. The RMC showed improvement and agreed with GMC to within 7%. Conclusion: The RPB dosimetry show clinically significant differences (> 10%) from GMC and RMC estimates. The RMC algorithm is superior to the RPB dosimetry in heterogeneous media. We suspect modelling of the beam’s halo may be responsible for a portion of the remaining discrepancy and that RayStation will reduce this discrepancy as they finalize the release. Erik Traneus is employed as a Research Scientist at RaySearch Laboratories. The research build of the RayStation TPS used in the study was made available to the SCCA free of charge. RaySearch did not provide

  2. Converging stereotactic radiotherapy using kilovoltage X-rays: experimental irradiation of normal rabbit lung and dose-volume analysis with Monte Carlo simulation.

    Science.gov (United States)

    Kawase, Takatsugu; Kunieda, Etsuo; Deloar, Hossain M; Tsunoo, Takanori; Seki, Satoshi; Oku, Yohei; Saitoh, Hidetoshi; Saito, Kimiaki; Ogawa, Eileen N; Ishizaka, Akitoshi; Kameyama, Kaori; Kubo, Atsushi

    2009-10-01

    To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

  3. Experimental validation of the buildings energy performance (PEC) assessment methods with reference to occupied spaces heating

    Directory of Open Access Journals (Sweden)

    Cristian PETCU

    2010-01-01

    This paper is part of a series of pre-standardization studies aimed at analyzing the existing methods for calculating the Buildings Energy Performance (PEC) with a view to correcting or completing them. The overall research aims to experimentally validate the PEC calculation algorithm and to comparatively apply the PEC calculation methodology to buildings equipped with occupied-space heating systems, using several case studies of buildings representative of the Romanian building stock. The report targets the experimental testing of the calculation models known to date (NP 048-2000, Mc 001-2006, SR EN 13790:2009) on the CE INCERC Bucharest experimental building, together with the more complex calculation algorithms specific to dynamic modeling, for evaluating the occupied-space heat demand in the cold season for both traditional buildings and modern buildings equipped with passive solar systems of the ventilated solar space type. The schedule of the measurements performed during the 2008-2009 cold season is presented, along with the primary processing of the measured data and the experimental validation of the monthly heat-demand calculation methods on the CE INCERC Bucharest building. The calculation error per heating season (153 days of measurements) between the measured and calculated heat demand was 0.61%, an exceptional value confirming the phenomenological nature of the INCERC method, NP 048-2006. The mathematical model of the hourly thermal balance is recurrent-decisional with alternating time steps. The experimental validation of the theoretical model is based on measurements performed on the CE INCERC Bucharest building over a period of 57 days (06.01-04.03.2009). The measurements confirm the accuracy of the hourly calculation model by comparison to the values

  4. A comprehensive collection of experimentally validated primers for Polymerase Chain Reaction quantitation of murine transcript abundance

    Directory of Open Access Journals (Sweden)

    Wang Xiaowei

    2008-12-01

    Background: Quantitative polymerase chain reaction (QPCR) is a widely applied analytical method for the accurate determination of transcript abundance. Primers for QPCR have been designed on a genomic scale, but non-specific amplification of non-target genes has frequently been a problem. Although several online databases have been created for the storage and retrieval of experimentally validated primers, only a few thousand primer pairs are currently present in existing databases and the primers are not designed for use under a common PCR thermal profile. Results: We previously reported the implementation of an algorithm to predict PCR primers for most known human and mouse genes. We now report the use of that resource to identify 17483 pairs of primers that have been experimentally verified to amplify unique sequences corresponding to distinct murine transcripts. The primer pairs have been validated by gel electrophoresis, DNA sequence analysis and thermal denaturation profile. In addition to the validation studies, we have determined the uniformity of amplification using the primers and the technical reproducibility of the QPCR reaction using the popular and inexpensive SYBR Green I detection method. Conclusion: We have identified an experimentally validated collection of murine primer pairs for PCR and QPCR which can be used under a common PCR thermal profile, allowing the evaluation of transcript abundance of a large number of genes in parallel. This feature is increasingly attractive for confirming and/or making more precise data trends observed from experiments performed with DNA microarrays.

  5. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations.

    Science.gov (United States)

    van de Streek, Jacco; Neumann, Marcus A

    2010-10-01

    This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
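
    The indicator itself is simple to reproduce; the sketch below computes an r.m.s. Cartesian displacement for a matched atom list, assuming both structures are already expressed in the same Cartesian frame (the overlay and symmetry handling of the actual method are omitted, and the coordinates are made up).

    ```python
    # Minimal sketch of the r.m.s. Cartesian displacement indicator, assuming both
    # structures are in the same Cartesian frame and the atoms are matched
    # one-to-one (H atoms excluded, overlay step omitted).
    import math

    def rmsd(coords_exp, coords_min):
        """Root-mean-square displacement (same units as the input coordinates)."""
        if len(coords_exp) != len(coords_min):
            raise ValueError("atom lists must match one-to-one")
        sq = [sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in zip(coords_exp, coords_min)]
        return math.sqrt(sum(sq) / len(sq))

    # Illustrative 3-atom fragment (coordinates in angstroms, purely made up)
    experimental = [(0.000, 0.000, 0.000), (1.540, 0.000, 0.000), (2.100, 1.300, 0.000)]
    minimized    = [(0.010, -0.020, 0.005), (1.555, 0.030, -0.010), (2.080, 1.320, 0.015)]
    print(f"r.m.s. Cartesian displacement: {rmsd(experimental, minimized):.3f} A")
    ```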

  6. Fate of the open-shell singlet ground state in the experimentally accessible acenes: A quantum Monte Carlo study

    Science.gov (United States)

    Dupuy, Nicolas; Casula, Michele

    2018-04-01

    By means of the Jastrow correlated antisymmetrized geminal power (JAGP) wave function and quantum Monte Carlo (QMC) methods, we study the ground state properties of the oligoacene series, up to the nonacene. The JAGP is the accurate variational realization of the resonating-valence-bond (RVB) ansatz proposed by Pauling and Wheland to describe aromatic compounds. We show that the long-ranged RVB correlations built in the acenes' ground state are detrimental for the occurrence of open-shell diradical or polyradical instabilities, previously found by lower-level theories. We substantiate our outcome by a direct comparison with another wave function, tailored to be an open-shell singlet (OSS) for long-enough acenes. By comparing on the same footing the RVB and OSS wave functions, both optimized at a variational QMC level and further projected by the lattice regularized diffusion Monte Carlo method, we prove that the RVB wave function has always a lower variational energy and better nodes than the OSS, for all molecular species considered in this work. The entangled multi-reference RVB state acts against the electron edge localization implied by the OSS wave function and weakens the diradical tendency for higher oligoacenes. These properties are reflected by several descriptors, including wave function parameters, bond length alternation, aromatic indices, and spin-spin correlation functions. In this context, we propose a new aromatic index estimator suitable for geminal wave functions. For the largest acenes taken into account, the long-range decay of the charge-charge correlation functions is compatible with a quasi-metallic behavior.

  7. Preliminary Validation of the MATRA-LMR Code Using Existing Sodium-Cooled Experimental Data

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Kim, Sangji

    2014-01-01

    The main objective of the SFR prototype plant is to verify TRU metal fuel performance, reactor operation, and the transmutation ability for high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during the whole plant operation. The fuel design limit is highly dependent on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, an accurate temperature calculation in each subassembly is highly important to assure a safe and reliable operation of the reactor systems. The current core thermal-hydraulic design is mainly performed using the SLTHEN (Steady-State LMR Thermal-Hydraulic Analysis Code Based on ENERGY Model) code, which has already been validated against existing sodium-cooled experimental data. In addition to the SLTHEN code, a detailed analysis is performed using the MATRA-LMR (Multichannel Analyzer for Transient and steady-state in Rod Array-Liquid Metal Reactor) code. In this work, the MATRA-LMR code is validated for single-subassembly evaluation using the same existing sodium-cooled experimental data. The results demonstrate that the design code appropriately predicts the temperature distributions compared with the experimental values. Major differences are observed in the experiments with a large number of pins, due to differences in radial-wise mixing

  8. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    Science.gov (United States)

    Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.

    2007-03-01

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. Following their experimental validation, these models were integrated into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  9. Monte Carlo-based validation of the ENDF/MC2-II/SDX cell homogenization path

    International Nuclear Information System (INIS)

    Wade, D.C.

    1979-04-01

    The results are presented of a program of validation of the unit cell homogenization prescriptions and codes used for the analysis of Zero Power Reactor (ZPR) fast breeder reactor critical experiments. The ZPR drawer loading patterns comprise both plate-type and pin-calandria-type unit cells. A prescription is used to convert the three-dimensional physical geometry of the drawer loadings into one-dimensional calculational models. The ETOE-II/MC2-II/SDX code sequence is used to transform ENDF/B basic nuclear data into unit-cell-average broad-group cross sections based on the 1D models. Cell-average, broad-group anisotropic diffusion coefficients are generated using the methods of Benoist or of Gelbard. The resulting broad-group (approximately 10 to 30 groups) parameters are used in multigroup diffusion and Sn transport calculations of full-core XY or RZ models which employ smeared atom densities to represent the contents of the unit cells

  10. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  11. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    Science.gov (United States)

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  12. Comparison of experimental and Monte-Carlo simulation of MeV particle transport through tapered/straight glass capillaries and circular collimators

    Energy Technology Data Exchange (ETDEWEB)

    Hespeels, F., E-mail: felicien.hespeels@unamur.be [University of Namur, PMR, 61 rue de Bruxelles, 5000 Namur (Belgium); Tonneau, R. [University of Namur, PMR, 61 rue de Bruxelles, 5000 Namur (Belgium); Ikeda, T. [RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Lucas, S. [University of Namur, PMR, 61 rue de Bruxelles, 5000 Namur (Belgium)

    2015-11-01

    Highlights: • Monte-Carlo simulation of beam transport through collimation devices. • We confirm the focusing effect of tapered glass capillaries. • We confirm the feasibility of using passive collimation devices for ion beam analysis applications. - Abstract: This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV, respectively): classical platinum TEM-like collimators, straight glass capillaries and tapered glass capillaries. In addition, we developed a Monte-Carlo code, based on Rutherford scattering theory, which simulates particle transport through collimating devices. The simulation results match the experimental observations of beam transport through collimators both in air and in vacuum. This research shows the focusing effect of tapered capillaries, which clearly enables a higher transmitted flux. Nevertheless, aligning the capillaries with the incident beam is a prerequisite and is tedious, which makes the TEM collimator the easiest way to produce a 50 μm microbeam.
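
    The authors' code is not reproduced here, but the sketch below illustrates the kind of elementary step such a Rutherford-scattering-based transport code takes: sampling the polar deflection of a single collision from a screened Rutherford cross-section (the screening angle is an arbitrary illustrative value).

    ```python
    # Illustrative sketch (not the authors' code): sampling the polar angle of one
    # small-angle Coulomb scattering event from a screened Rutherford cross-section,
    # d(sigma)/d(Omega) ~ 1/(theta^2 + theta_min^2)^2.
    import math
    import random

    def sample_screened_rutherford(theta_min_rad):
        """Return (theta, phi) for one scattering event."""
        xi = random.random()
        theta = theta_min_rad * math.sqrt(xi / (1.0 - xi))   # inverse-CDF sampling
        phi = 2.0 * math.pi * random.random()                # azimuth is uniform
        return theta, phi

    random.seed(0)
    theta_min = 1e-4   # rad, illustrative screening angle for a MeV ion
    angles = [sample_screened_rutherford(theta_min)[0] for _ in range(100000)]
    print(f"mean deflection per collision ~ {sum(angles)/len(angles):.2e} rad")
    ```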

  13. Parametric model of servo-hydraulic actuator coupled with a nonlinear system: Experimental validation

    Science.gov (United States)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-05-01

    Hydraulic actuators play a key role in experimental structural dynamics. In a previous study, a physics-based model for a servo-hydraulic actuator coupled with a nonlinear physical system was developed. Later, this dynamical model was transformed into controllable canonical form for position tracking control purposes. For this study, a nonlinear device is designed and fabricated to exhibit various nonlinear force-displacement profiles depending on the initial condition and the type of materials used as replaceable coupons. Using this nonlinear system, the controllable canonical dynamical model is experimentally validated for a servo-hydraulic actuator coupled with a nonlinear physical system.

  14. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams

    International Nuclear Information System (INIS)

    Kuenzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

    2009-01-01

    The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of the simple test cases. For depth dose curves in asymmetric beams, similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement with measured values, with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV 10 x 10 cm² field at the first density interface from tissue to lung-equivalent material. Small fields (2 x 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue-equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
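
    For reference, the sketch below implements a plain 1D gamma evaluation with the 2%/2 mm criterion mentioned above, assuming global normalization to the maximum reference dose; the profiles are invented and this is not the implementation used in the study.

    ```python
    # Minimal sketch of a 1D gamma evaluation with a 2% dose difference and 2 mm
    # distance-to-agreement criterion. Global normalization to the maximum
    # reference dose is assumed; profiles below are purely illustrative.
    import math

    def gamma_1d(ref, calc, spacing_mm, dd=0.02, dta_mm=2.0):
        """Return the gamma value at each reference point (global dose normalization)."""
        d_max = max(ref)
        gammas = []
        for i, d_ref in enumerate(ref):
            best = float("inf")
            for j, d_calc in enumerate(calc):
                dr = (j - i) * spacing_mm / dta_mm
                dd_term = (d_calc - d_ref) / (dd * d_max)
                best = min(best, math.sqrt(dr * dr + dd_term * dd_term))
            gammas.append(best)
        return gammas

    ref  = [10, 40, 80, 100, 100, 95, 60, 20, 5]     # measured profile (illustrative)
    calc = [11, 42, 78, 99, 101, 93, 62, 21, 6]      # calculated profile (illustrative)
    g = gamma_1d(ref, calc, spacing_mm=1.0)
    print(f"gamma pass rate (gamma <= 1): {100*sum(x <= 1 for x in g)/len(g):.0f}%")
    ```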

  15. Visual Servoing Tracking Control of a Ball and Plate System: Design, Implementation and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ming-Tzu Ho

    2013-07-01

    This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA) device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP). The validity and performance of the developed control system are investigated through simulation and experimental studies. Experimental results show that the designed system functions well, with reasonable agreement with simulations.

  16. Validation of a modified PENELOPE Monte Carlo code for applications in digital and dual-energy mammography

    Science.gov (United States)

    Del Lama, L. S.; Cunha, D. M.; Poletti, M. E.

    2017-08-01

    The presence and morphology of microcalcification clusters provide the main early indication of breast carcinoma. However, the visualization of these structures may be hindered by overlapping tissues, even for digital mammography systems. Although digital mammography is the current standard for breast cancer diagnosis, further improvements are needed in order to address some of these physical limitations. One possible solution is the application of the dual-energy (DE) technique, which is able to highlight specific lesions or cancel out the tissue background. In this sense, this work aimed to evaluate several quantities of interest in radiation applications and to compare those values with works in the literature in order to validate a modified PENELOPE code for digital mammography applications. In particular, the scatter-to-primary ratio (SPR), the scatter fraction (SF) and the normalized mean glandular dose (DgN) were evaluated by simulation and the resulting values were compared to those found in earlier studies. Our results show a good correlation for the evaluated quantities, with agreement of 5% or better for the scatter- and dosimetry-related quantities when compared to the literature. Finally, a DE imaging chain was simulated and the visualization of microcalcifications was investigated.

  17. Electromagnetic scattering problems -Numerical issues and new experimental approaches of validation

    Energy Technology Data Exchange (ETDEWEB)

    Geise, Robert; Neubauer, Bjoern; Zimmer, Georg [University of Braunschweig, Institute for Electromagnetic Compatibility, Schleinitzstrasse 23, 38106 Braunschweig (Germany)

    2015-03-10

    Electromagnetic scattering problems, that is, the question of how radiated energy spreads when impinging on an object, are an essential part of wave propagation. Although Maxwell's differential equations, the starting point, are actually quite simple, the integral formulation of an object's boundary conditions, or equivalently the solution for the unknown induced currents, can only be obtained numerically in most cases. As a timely topic of practical importance, the scattering of rotating wind turbines is discussed, whose numerical description is still based on rigorous approximations of as yet unspecified accuracy. In this context the issue of validating numerical solutions is addressed, both with reference simulations and in particular with the experimental approach of scaled measurements. For the latter, the idea of an incremental validation is proposed, allowing a step-by-step validation of the required new mathematical models in scattering theory.

  18. Electrocatalysis of borohydride oxidation: a review of density functional theory approach combined with experimental validation

    Science.gov (United States)

    Sison Escaño, Mary Clare; Lacdao Arevalo, Ryan; Gyenge, Elod; Kasai, Hideaki

    2014-09-01

    The electrocatalysis of borohydride oxidation is a complex, up-to-eight-electron transfer process, which is essential for development of efficient direct borohydride fuel cells. Here we review the progress achieved by density functional theory (DFT) calculations in explaining the adsorption of BH4- on various catalyst surfaces, with implications for electrocatalyst screening and selection. Wherever possible, we correlate the theoretical predictions with experimental findings, in order to validate the proposed models and to identify potential directions for further advancements.

  19. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    Directory of Open Access Journals (Sweden)

    Claudio Bustos

    Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via tensile test and, then, used to numerically simulate via finite elements the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an eyeball under glaucoma conditions.

  20. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    OpenAIRE

    Bustos, Claudio; Herrera, Claudio García; Celentano, Diego; Chen, Daming; Cruchaga, Marcela

    2016-01-01

    Abstract Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via tensile test and, then, used to numerically simulate via finite elements the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an...

  1. Kinetic Monte Carlo simulations compared with continuum models and experimental properties of pattern formation during ion beam sputtering

    International Nuclear Information System (INIS)

    Chason, E; Chan, W L

    2009-01-01

    Kinetic Monte Carlo simulations model the evolution of surfaces during low energy ion bombardment using atomic level mechanisms of defect formation, recombination and surface diffusion. Because the individual kinetic processes are completely determined, the resulting morphological evolution can be directly compared with continuum models based on the same mechanisms. We present results of simulations based on a curvature-dependent sputtering mechanism and diffusion of mobile surface defects. The results are compared with a continuum linear instability model based on the same physical processes. The model predictions are found to be in good agreement with the simulations for predicting the early-stage morphological evolution and the dependence on processing parameters such as the flux and temperature. This confirms that the continuum model provides a reasonable approximation of the surface evolution from multiple interacting surface defects using this model of sputtering. However, comparison with experiments indicates that there are many features of the surface evolution that do not agree with the continuum model or simulations, suggesting that additional mechanisms are required to explain the observed behavior.

  2. Experimental validation of a thermodynamic boiler model under steady state and dynamic conditions

    International Nuclear Information System (INIS)

    Carlon, Elisa; Verma, Vijay Kumar; Schwarz, Markus; Golicza, Laszlo; Prada, Alessandro; Baratieri, Marco; Haslinger, Walter; Schmidl, Christoph

    2015-01-01

    Highlights: • Laboratory tests on two commercially available pellet boilers. • Steady state and a dynamic load cycle tests. • Pellet boiler model calibration based on data registered in stationary operation. • Boiler model validation with reference to both stationary and dynamic operation. • Validated model suitable for coupled simulation of building and heating system. - Abstract: Nowadays dynamic building simulation is an essential tool for the design of heating systems for residential buildings. The simulation of buildings heated by biomass systems, first of all needs detailed boiler models, capable of simulating the boiler both as a stand-alone appliance and as a system component. This paper presents the calibration and validation of a boiler model by means of laboratory tests. The chosen model, i.e. TRNSYS “Type 869”, has been validated for two commercially available pellet boilers of 6 and 12 kW nominal capacities. Two test methods have been applied: the first is a steady state test at nominal load and the second is a load cycle test including stationary operation at different loads as well as transient operation. The load cycle test is representative of the boiler operation in the field and characterises the boiler’s stationary and dynamic behaviour. The model had been calibrated based on laboratory data registered during stationary operation at different loads and afterwards it was validated by simulating both the stationary and the dynamic tests. Selected parameters for the validation were the heat transfer rates to water and the water temperature profiles inside the boiler and at the boiler outlet. Modelling results showed better agreement with experimental data during stationary operation rather than during dynamic operation. Heat transfer rates to water were predicted with a maximum deviation of 10% during the stationary operation, and a maximum deviation of 30% during the dynamic load cycle. However, for both operational regimes the

  3. Relationship of otolith strontium-to-calcium ratios and salinity: Experimental validation for juvenile salmonids

    Science.gov (United States)

    Zimmerman, C.E.

    2005-01-01

    Analysis of otolith strontium (Sr) or strontium-to-calcium (Sr:Ca) ratios provides a powerful tool to reconstruct the chronology of migration among salinity environments for diadromous salmonids. Although use of this method has been validated by examination of known individuals and translocation experiments, it has never been validated under controlled experimental conditions. In this study, incorporation of otolith Sr was tested across a range of salinities and the resulting levels of ambient Sr and Ca concentrations in juvenile chinook salmon (Oncorhynchus tshawytscha), coho salmon (Oncorhynchus kisutch), sockeye salmon (Oncorhynchus nerka), rainbow trout (Oncorhynchus mykiss), and Arctic char (Salvelinus alpinus). Experimental water was mixed, using stream water and seawater as end members, to create experimental salinities of 0.1, 6.3, 12.7, 18.6, 25.5, and 33.0 psu. Otolith Sr and Sr:Ca ratios were significantly related to salinity for all species (r2 range: 0.80-0.91) but provide only enough predictive resolution to discriminate among freshwater, brackish-water, and saltwater residency. These results validate the use of otolith Sr:Ca ratios to broadly discriminate the salinity histories encountered by salmonids but highlight the need for further research concerning the influence of osmoregulation and the physiological changes associated with smolting on otolith microchemistry.

  4. Dislocation-mediated strain hardening in tungsten: Thermo-mechanical plasticity theory and experimental validation

    Science.gov (United States)

    Terentyev, Dmitry; Xiao, Xiazi; Dubinko, A.; Bakaeva, A.; Duan, Huiling

    2015-12-01

    A self-consistent thermo-mechanical model to study the strain-hardening behavior of polycrystalline tungsten was developed and validated by a dedicated experimental route. Dislocation-dislocation multiplication and storage, as well as dislocation-grain boundary (GB) pinning, were the major mechanisms underlying the evolution of plastic deformation, thus providing a link between the strain-hardening behavior and the material's microstructure. The microstructure of the polycrystalline tungsten samples was thoroughly investigated by scanning and transmission electron microscopy. The model was applied to compute stress-strain loading curves of commercial tungsten grades, in the as-received and as-annealed states, in the temperature range of 500-1000 °C. Fitted to independent experimental results obtained on a single crystal and on as-received polycrystalline tungsten, the model demonstrated its capability to predict the deformation behavior of as-annealed samples over a wide range of temperature and applied strain. The relevance of the dislocation-mediated plasticity mechanisms used in the model was validated by transmission electron microscopy examination of samples deformed to different amounts of strain. On the basis of the experimental validation, the limitations of the model are determined and discussed.

  5. Monte-Carlo simulation of OCT structural images of human skin using experimental B-scans and voxel based approach to optical properties distribution

    Science.gov (United States)

    Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.

    2017-03-01

    A method for reconstructing optical coherence tomography (OCT) structural images using Monte Carlo simulations is described. The biological object is considered as a set of 3D volume elements, which allows the simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct a simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is checked by comparing the simulated and experimentally acquired A-scans.
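
    The two sampling ingredients named above are easy to sketch; the snippet below draws free path lengths from the Beer-Lambert law and scattering angles from the Henyey-Greenstein function, with illustrative optical properties that are not those of skin.

    ```python
    # Sketch of two photon-step ingredients: a free path length from the
    # Beer-Lambert law and a scattering angle from the Henyey-Greenstein phase
    # function. Optical properties are illustrative placeholders.
    import math
    import random

    def sample_free_path(mu_t):
        """Path length (mm) to the next interaction, Beer-Lambert attenuation."""
        return -math.log(random.random()) / mu_t

    def sample_hg_cos_theta(g):
        """Cosine of the scattering angle from the Henyey-Greenstein function."""
        if abs(g) < 1e-6:
            return 2.0 * random.random() - 1.0          # isotropic limit
        xi = random.random()
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    random.seed(1)
    mu_s, mu_a, g = 10.0, 0.1, 0.9        # per-voxel properties (illustrative, 1/mm)
    steps = [sample_free_path(mu_s + mu_a) for _ in range(100000)]
    cosines = [sample_hg_cos_theta(g) for _ in range(100000)]
    print(f"mean free path ~ {sum(steps)/len(steps):.3f} mm, "
          f"mean cos(theta) ~ {sum(cosines)/len(cosines):.3f}")
    ```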

  6. Criticality calculations on pebble-bed HTR-PROTEUS configuration as a validation for the pseudo-scattering tracking method implemented in the MORET 5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Forestier, Benoit; Miss, Joachim; Bernard, Franck; Dorval, Aurelien [Institut de Radioprotection et Surete Nucleaire, Fontenay aux Roses (France); Jacquet, Olivier [Independent consultant (France); Verboomen, Bernard [Belgian Nuclear Research Center - SCK-CEN (Belgium)

    2008-07-01

    The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (k_eff) of any geometrical configuration, as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development in the MORET code is the implementation of an alternative neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-suited cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method's performance in terms of computation time, will also be presented. (authors)
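
    Pseudo-scattering tracking is commonly known as Woodcock or delta tracking; assuming that reading, the sketch below shows the idea on a contrived 1D cross-section profile (the cross-sections and geometry are invented and this is not the MORET implementation).

    ```python
    # Illustrative sketch of pseudo-scattering (Woodcock/delta) tracking: distances
    # are sampled with a single majorant cross-section and collisions are accepted
    # with probability sigma_t(x)/sigma_maj, so the geometry never has to be
    # searched for material boundaries. Cross-sections below are made up.
    import math
    import random

    def sigma_t(x):
        """Total cross-section (1/cm) along a 1D track: pebble-like lumps in a void."""
        return 1.2 if int(x) % 2 == 0 else 0.05   # alternating 1-cm slabs (illustrative)

    def distance_to_real_collision(x0, sigma_maj=1.2):
        """Track from x0 until a real (non-virtual) collision is accepted."""
        x = x0
        while True:
            x += -math.log(random.random()) / sigma_maj     # flight with the majorant
            if random.random() < sigma_t(x) / sigma_maj:    # accept: real collision
                return x - x0
            # otherwise a virtual (pseudo-scattering) collision: keep flying

    random.seed(2)
    samples = [distance_to_real_collision(0.0) for _ in range(50000)]
    print(f"mean distance to a real collision ~ {sum(samples)/len(samples):.2f} cm")
    ```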

  7. Validation of the Continuous-Energy Monte Carlo Criticality-Safety Analysis System MVP and JENDL-3.2 Using the Internationally Evaluated Criticality Benchmarks

    International Nuclear Information System (INIS)

    Mitake, Susumu

    2003-01-01

    Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Eight experiments (116 configurations) for the plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors to be used as a subcriticality criterion for the criticality-safety evaluation of nuclear facilities were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions

  8. Apparent and internal validity of a Monte Carlo-Markov model for cardiovascular disease in a cohort follow-up study.

    Science.gov (United States)

    Nijhuis, Rogier L; Stijnen, Theo; Peeters, Anna; Witteman, Jacqueline C M; Hofman, Albert; Hunink, M G Myriam

    2006-01-01

    To determine the apparent and internal validity of the Rotterdam Ischemic heart disease & Stroke Computer (RISC) model, a Monte Carlo-Markov model, designed to evaluate the impact of cardiovascular disease (CVD) risk factors and their modification on life expectancy (LE) and cardiovascular disease-free LE (DFLE) in a general population (hereinafter, these will be referred to together as (DF)LE). The model is based on data from the Rotterdam Study, a cohort follow-up study of 6871 subjects aged 55 years and older who visited the research center for risk factor assessment at baseline (1990-1993) and completed a follow-up visit 7 years later (original cohort). The transition probabilities and risk factor trends used in the RISC model were based on data from 3501 subjects (the study cohort). To validate the RISC model, the number of simulated CVD events during 7 years' follow-up were compared with the observed number of events in the study cohort and the original cohort, respectively, and simulated (DF)LEs were compared with the (DF)LEs calculated from multistate life tables. Both in the study cohort and in the original cohort, the simulated distribution of CVD events was consistent with the observed number of events (CVD deaths: 7.1% v. 6.6% and 7.4% v. 7.6%, respectively; non-CVD deaths: 11.2% v. 11.5% and 12.9% v. 13.0%, respectively). The distribution of (DF)LEs estimated with the RISC model consistently encompassed the (DF)LEs calculated with multistate life tables. The simulated events and (DF)LE estimates from the RISC model are consistent with observed data from a cohort follow-up study.
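
    The general mechanics of such a Monte Carlo-Markov microsimulation can be sketched as below; the health states and annual transition probabilities are invented for illustration and are not the RISC model's parameters.

    ```python
    # Schematic Monte Carlo-Markov microsimulation: each subject walks through
    # health states in yearly cycles. The states and transition probabilities are
    # invented for illustration and are NOT the RISC model's parameters.
    import random

    TRANSITIONS = {   # state -> list of (next_state, annual probability)
        "well": [("cvd", 0.02), ("cvd_death", 0.002), ("other_death", 0.015)],
        "cvd":  [("cvd_death", 0.05), ("other_death", 0.03)],
    }

    def simulate_subject(max_years=50):
        state, le, dfle = "well", 0, 0
        for _ in range(max_years):
            if state.endswith("death"):
                break
            le += 1
            dfle += (state == "well")
            r, cum = random.random(), 0.0
            for nxt, p in TRANSITIONS[state]:
                cum += p
                if r < cum:
                    state = nxt
                    break
        return le, dfle

    random.seed(3)
    results = [simulate_subject() for _ in range(20000)]
    print(f"LE ~ {sum(r[0] for r in results)/len(results):.1f} y, "
          f"DFLE ~ {sum(r[1] for r in results)/len(results):.1f} y")
    ```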

  9. Simulation, experimental validation and kinematic optimization of a Stirling engine using air and helium

    International Nuclear Information System (INIS)

    Bert, Juliette; Chrenko, Daniela; Sophy, Tonino; Le Moyne, Luis; Sirot, Frédéric

    2014-01-01

    A Stirling engine with a nominal output power of 1 kW is tested using air and helium as working gases. The influence of working pressure, engine speed and hot-source temperature is studied, analyzing the instantaneous gas pressure as well as the instantaneous and stationary temperatures at different positions to derive the effective power. A zero-dimensional, finite-time thermodynamic, three-zone model of a generic Stirling engine is developed and successfully validated against the experimental gas temperature and pressure in each zone, providing the effective power. This validation highlights the interest of different working gases as well as different geometric configurations for different applications. Furthermore, the validated model allows parametric studies of the engine with regard to geometry, working gas and engine kinematics. It is used to optimize the kinematics of a Stirling engine for different working points and gases. - Highlights: • A Stirling engine of 1 kW is tested using air and helium as working gas. • Effects of working pressure, speed and temperature on power are studied. • A zero-dimensional finite-time thermodynamic, three-zone model of it is validated. • The validated model is used for parametric studies and optimization of the engine

  10. Alteration of 'R7T7' type nuclear glasses: statistical approach, experimental validation, local evolution model

    International Nuclear Information System (INIS)

    Thierry, F.

    2003-02-01

    The aim of this work is to propose an evolution of the modeling of nuclear (R7T7-type) glass alteration. The first part of this thesis concerns the development and validation of the 'r(t)' model. This model, which predicts the decrease of alteration rates under confined conditions, is based on a coupling between a first-order dissolution law and the diffusion-barrier effect of the alteration gel layer. The values and uncertainties of the main adjustable parameters of the model (α, Dg and C*) have been determined from a systematic study of the available experimental data. A program called INVERSION has been written for this purpose. This work led to the characterization of the validity domain of the 'r(t)' model and to its parametrization. Validation experiments have been undertaken, confirming the validity of the parametrization over 200 days. A new model is proposed in the second part of this thesis. It is based on inhibition of the glass dissolution reaction by silicon, coupled with a local description of silicon retention in the alteration gel layer. This model predicts the evolution of boron and silicon concentrations in solution as well as the concentration and retention profiles in the gel layer. These predictions have been compared to measurements of retention profiles obtained by secondary ion mass spectrometry (SIMS). The model has been validated on gel layer fractions whose reactivity presents low or moderate disparities. (author)
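
    To give a feel for the kind of coupling the 'r(t)' model describes, the toy integration below combines a first-order dissolution law r = r0(1 - C/C*) with quasi-stationary silicon diffusion through the growing gel layer; all parameter values are invented and this is not the calibrated model of the thesis.

    ```python
    # Toy numerical sketch of the coupling described above: a first-order
    # dissolution law r = r0*(1 - C_i/C*) at the glass/gel interface, with silicon
    # reaching the bulk solution by diffusion through the growing gel layer
    # (coefficient Dg). All parameter values are invented for illustration.

    r0       = 1.0      # initial dissolution rate, g m^-2 d^-1 (illustrative)
    c_star   = 120.0    # silicon saturation concentration, g m^-3 (illustrative)
    dg       = 1e-8     # apparent diffusion coefficient in the gel, m^2 d^-1 (illustrative)
    alpha    = 0.25     # silicon mass fraction released per unit of altered glass
    s_over_v = 50.0     # glass-surface-to-solution-volume ratio, m^-1
    rho_gel  = 1.0e6    # converts altered mass per area to gel thickness, g m^-3

    dt, days = 0.1, 200.0
    c_bulk, thickness = 0.0, 1e-6     # start with a tiny gel seed to avoid division by zero

    t = 0.0
    while t < days:
        # interface concentration from flux continuity through the gel layer
        k = dg / thickness
        c_i = (alpha * r0 + k * c_bulk) / (k + alpha * r0 / c_star)
        rate = max(r0 * (1.0 - c_i / c_star), 0.0)
        c_bulk    += s_over_v * alpha * rate * dt      # silicon accumulates in solution
        thickness += (rate / rho_gel) * dt             # gel layer thickens
        t += dt

    print(f"after {days:.0f} d: rate ~ {rate:.3e} g m^-2 d^-1, "
          f"[Si] ~ {c_bulk:.1f} g m^-3, gel ~ {thickness*1e6:.1f} um")
    ```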

  11. Three-dimensional shape optimization of a cemented hip stem and experimental validations.

    Science.gov (United States)

    Higa, Masaru; Tanino, Hiromasa; Nishimura, Ikuya; Mitamura, Yoshinori; Matsuno, Takeo; Ito, Hiroshi

    2015-03-01

    This study proposes novel optimized stem geometry with low stress values in the cement using a finite element (FE) analysis combined with an optimization procedure and experimental measurements of cement stress in vitro. We first optimized an existing stem geometry using a three-dimensional FE analysis combined with a shape optimization technique. One of the most important factors in the cemented stem design is to reduce stress in the cement. Hence, in the optimization study, we minimized the largest tensile principal stress in the cement mantle under a physiological loading condition by changing the stem geometry. As the next step, the optimized stem and the existing stem were manufactured to validate the usefulness of the numerical models and the results of the optimization in vitro. In the experimental study, strain gauges were embedded in the cement mantle to measure the strain in the cement mantle adjacent to the stems. The overall trend of the experimental study was in good agreement with the results of the numerical study, and we were able to reduce the largest stress by more than 50% in both shape optimization and strain gauge measurements. Thus, we could validate the usefulness of the numerical models and the results of the optimization using the experimental models. The optimization employed in this study is a useful approach for developing new stem designs.

  12. Comparison of Monte Carlo simulation of gamma ray attenuation coefficients of amino acids with XCOM program and experimental data

    Science.gov (United States)

    Elbashir, B. O.; Dong, M. G.; Sayyed, M. I.; Issa, Shams A. M.; Matori, K. A.; Zaid, M. H. M.

    2018-06-01

    The mass attenuation coefficients (μ/ρ), effective atomic numbers (Zeff) and electron densities (Ne) of some amino acids, obtained experimentally by other researchers, have been calculated using MCNP5 simulations in the energy range 0.122-1.330 MeV. The simulated values of μ/ρ, Zeff, and Ne were compared with the previous experimental work for the amino acid samples and good agreement was observed. Moreover, the values of the mean free path (MFP) for the samples were calculated using the MCNP5 code and compared with the theoretical results obtained with XCOM. The investigation of the μ/ρ, Zeff, Ne and MFP values of the amino acids using MCNP5 simulations at various photon energies, when compared with the XCOM values and the previous experimental data, revealed that the MCNP5 code provides accurate photon interaction parameters for amino acids.
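
    For orientation, the μ/ρ of a compound follows the standard mixture rule over its elemental constituents, and the MFP is its reciprocal scaled by the density. A minimal Python sketch (our illustration, not the paper's code; the elemental μ/ρ values are hypothetical placeholders that would normally come from XCOM or an MCNP5 tally) is:

```python
# Illustrative mixture-rule estimate of (mu/rho) and the mean free path (MFP)
# of glycine (C2H5NO2); the elemental (mu/rho) values are hypothetical
# placeholders, not data taken from the paper, XCOM or MCNP5.

def mass_attenuation_mixture(weight_fractions, mu_rho_elements):
    """(mu/rho)_mix = sum_i w_i * (mu/rho)_i  [cm^2/g]"""
    return sum(w * m for w, m in zip(weight_fractions, mu_rho_elements))

def mean_free_path(mu_rho_mix, density):
    """MFP = 1 / ((mu/rho) * rho)  [cm]"""
    return 1.0 / (mu_rho_mix * density)

weights = [0.320, 0.067, 0.187, 0.426]       # C, H, N, O mass fractions of glycine
mu_rho = [0.0774, 0.1540, 0.0775, 0.0778]    # hypothetical (mu/rho)_i at ~0.662 MeV, cm^2/g
rho = 1.16                                   # g/cm^3, approximate density

mix = mass_attenuation_mixture(weights, mu_rho)
print(f"(mu/rho)_mix ≈ {mix:.4f} cm^2/g, MFP ≈ {mean_free_path(mix, rho):.2f} cm")
```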

  13. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Full Text Available Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on optimizing target-protein production. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced by direct and cross-validation tests. Furthermore, the model can also be used to predict the optimal medium renewal time and composition.
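
    As an illustration of the kinetic-law identification step described above, the following sketch fits a hypothetical Monod-type rate law to synthetic data with SciPy's nonlinear least squares; it is not the authors' identified reaction scheme, only a generic example of the procedure:

```python
# Illustrative nonlinear least-squares fit of a hypothetical Monod-type kinetic
# law mu(S) = mu_max * S / (Ks + S) to synthetic data (NOT the paper's scheme).
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

rng = np.random.default_rng(0)
S_data = np.linspace(0.1, 10.0, 25)                             # substrate, mM (synthetic)
mu_data = monod(S_data, 0.045, 1.2) + rng.normal(0.0, 1e-3, S_data.size)

(mu_max_hat, Ks_hat), _ = curve_fit(monod, S_data, mu_data, p0=[0.05, 1.0])
print(f"mu_max ≈ {mu_max_hat:.3f} 1/h, Ks ≈ {Ks_hat:.2f} mM")
```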

  14. PSpice Modeling Platform for SiC Power MOSFET Modules with Extensive Experimental Validation

    DEFF Research Database (Denmark)

    Ceccarelli, Lorenzo; Iannuzzo, Francesco; Nawaz, Muhammad

    2016-01-01

    The aim of this work is to present a PSpice implementation for a well-established and compact physics-based SiC MOSFET model, including a fast, experiment-based parameter extraction procedure in a MATLAB GUI environment. The model, originally meant for single-die devices, has been used to simulate the performance of high-current-rating (above 100 A), multi-chip SiC MOSFET modules for both static and switching behavior. The simulation results have been validated experimentally over a wide range of operating conditions, including high temperatures, gate resistances and stray elements. The whole process has been repeated for three different modules with voltage ratings of 1.2 kV and 1.7 kV, manufactured by three different companies. Lastly, a parallel connection of two modules of the same type has been tested in order to observe the unbalancing and mismatches experimentally.

  15. Converging Stereotactic Radiotherapy Using Kilovoltage X-Rays: Experimental Irradiation of Normal Rabbit Lung and Dose-Volume Analysis With Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Kawase, Takatsugu; Kunieda, Etsuo; Deloar, Hossain M.; Tsunoo, Takanori; Seki, Satoshi; Oku, Yohei; Saitoh, Hidetoshi; Saito, Kimiaki; Ogawa, Eileen N.; Ishizaka, Akitoshi; Kameyama, Kaori; Kubo, Atsushi

    2009-01-01

    Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.
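
    For readers unfamiliar with dose-volume histograms, a minimal sketch of a cumulative DVH computed from a dose grid and a structure mask is given below (generic illustration with synthetic arrays, not the study's Monte Carlo data):

```python
# Illustrative cumulative dose-volume histogram (DVH) from a 3D dose grid and a
# boolean structure mask (e.g., a PTV). Synthetic data only, not the study's.
import numpy as np

def cumulative_dvh(dose, mask, n_bins=100):
    """Return dose bins and the fraction of the masked volume receiving >= each bin."""
    d = dose[mask]
    bins = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= b).mean() for b in bins])
    return bins, volume_fraction

rng = np.random.default_rng(1)
dose = rng.normal(15.0, 2.0, size=(40, 40, 40)).clip(min=0.0)  # Gy, synthetic
ptv_mask = np.zeros(dose.shape, dtype=bool)
ptv_mask[15:25, 15:25, 15:25] = True                           # hypothetical PTV

bins, vf = cumulative_dvh(dose, ptv_mask)
for b, v in zip(bins[::20], vf[::20]):
    print(f"V(dose >= {b:5.1f} Gy) = {100 * v:5.1f} % of PTV")
```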

  16. Monte Carlo studies for irradiation process planning at the Portuguese gamma irradiation facility

    International Nuclear Information System (INIS)

    Oliveira, C.; Salgado, J.; Botelho, M.L.M. Luisa; Ferreira, L.M.

    2000-01-01

    The paper describes a Monte Carlo study for planning the irradiation of test samples for microbiological validation of distinct products in the Portuguese Gamma Irradiation Facility. Three different irradiation geometries have been used. Simulated and experimental results are compared and good agreement is observed. It is shown that Monte Carlo simulation improves process understanding, predicts absorbed dose distributions and calculates dose uniformity in different products. Based on these results, irradiation planning of the product can be performed

  17. Validation of the CATHARE2 code against experimental data from Brayton-cycle plants

    International Nuclear Information System (INIS)

    Bentivoglio, Fabrice; Tauveron, Nicolas; Geffraye, Genevieve; Gentner, Herve

    2008-01-01

    In recent years the Commissariat a l'Energie Atomique (CEA) has commissioned a wide range of feasibility studies of future advanced nuclear reactors, in particular gas-cooled reactors (GCR). The thermohydraulic behaviour of these systems is a key issue for, among other things, the design of the core, the assessment of thermal stresses, and the design of decay heat removal systems. These studies therefore require efficient and reliable simulation tools capable of modelling the whole reactor, including the core, the core vessel, piping, heat exchangers and turbo-machinery. CATHARE2 is a 1D thermal-hydraulic reference safety code developed and extensively validated for French pressurized water reactors. It has recently been adapted to also handle gas-cooled reactor applications. In order to validate CATHARE2 for these new applications, CEA has initiated an ambitious long-term experimental program. The foreseen experimental facilities range from small-scale loops for physical correlations to component technology and system demonstration loops. In the short-term perspective, CATHARE2 is being validated against existing experimental data, in particular from the German power plants Oberhausen I and II. These facilities have both been operated by the German utility Energie Versorgung Oberhausen (E.V.O.) and their power conversion systems resemble high-temperature reactor concepts: Oberhausen I is a 13.75-MWe Brayton-cycle air turbine plant, and Oberhausen II is a 50-MWe Brayton-cycle helium turbine plant. The paper presents these two plants, the adopted CATHARE2 modelling and a comparison between experimental data and code results for both steady-state and transient cases.

  18. Experimental measurements and theoretical model of the cryogenic performance of bialkali photocathode and characterization with Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Huamu Xie

    2016-10-01

    Full Text Available High-average-current, high-brightness electron sources have important applications, such as in high-repetition-rate free-electron lasers, or in the electron cooling of hadrons. Bialkali photocathodes are promising high-quantum-efficiency (QE) cathode materials, while superconducting rf (SRF) electron guns offer continuous-mode operation at high acceleration, as is needed for high-brightness electron sources. Thus, we must have a comprehensive understanding of the performance of bialkali photocathodes at cryogenic temperatures when they are to be used in SRF guns. To remove the heat produced by the radio-frequency field in these guns, the cathode should be cooled to cryogenic temperatures. We recorded an 80% reduction of the QE upon cooling the K_{2}CsSb cathode from room temperature down to the temperature of liquid nitrogen in Brookhaven National Laboratory (BNL)'s 704 MHz SRF gun. We conducted several experiments to identify the underlying mechanism of this reduction. The change in the spectral response of the bialkali photocathode, when cooled from room temperature (300 K) to 166 K, suggests that a change in the ionization energy (defined as the energy gap from the top of the valence band to the vacuum level) is the main reason for this reduction. We developed an analytical model of the process, based on Spicer's three-step model. The change in ionization energy with falling temperature gives a simplified description of the QE's temperature dependence. We also developed a 2D Monte Carlo code to simulate photoemission that accounts for the wavelength-dependent photon absorption in the first step, the scattering and diffusion in the second step, and momentum conservation in the emission step. From this simulation, we established a correlation between ionization energy and the reduction in the QE. The simulation yielded results comparable to those from the analytical model. The simulation offers us additional capabilities such as calculation
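
    A drastically simplified, one-dimensional toy version of such a three-step photoemission Monte Carlo (absorption depth, diffusive transport to the surface, emission over a barrier) is sketched below; all parameter values are hypothetical and the physics is intentionally reduced compared with the 2D code described above:

```python
# Toy 1D three-step photoemission Monte Carlo (illustration only; hypothetical
# parameters and a deliberately reduced physics model compared with the 2D
# code described in the record above).
import numpy as np

rng = np.random.default_rng(42)

def toy_qe(n_photons=5_000, att_len=30.0, step=1.0, max_steps=200,
           photon_energy=2.3, barrier=2.0, loss_per_step=0.002):
    """Fraction of absorbed photons whose electron reaches the surface (x <= 0)
    with enough remaining energy to overcome the emission barrier (eV, nm)."""
    emitted = 0
    for _ in range(n_photons):
        x = rng.exponential(att_len)        # step 1: absorption depth
        energy = photon_energy
        for _ in range(max_steps):          # step 2: random walk toward/away from surface
            x += step if rng.random() < 0.5 else -step
            energy -= loss_per_step         # crude scattering energy loss
            if x <= 0.0:                    # step 3: emission over the barrier
                emitted += energy > barrier
                break
    return emitted / n_photons

print(f"toy QE ≈ {toy_qe():.3f}")
```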

  19. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    Science.gov (United States)

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. © 2014 A. P. Dasgupta et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology.

  20. An Experimental Validated Control Strategy of Maglev Vehicle-Bridge Self-Excited Vibration

    Directory of Open Access Journals (Sweden)

    Lianchun Wang

    2017-01-01

    Full Text Available This study discusses an experimentally validated control strategy for maglev vehicle-bridge vibration, which degrades the stability of the suspension control, deteriorates the ride comfort, and limits the cost of the magnetic levitation system. First, a comparison between the current-loop and magnetic flux feedback is carried out and a minimum model including the flexible bridge and the electromagnetic levitation system is proposed. Then, advantages and disadvantages of the traditional feedback architecture with the displacement feedback of electromagnet yE and bridge yB in pairs are explored. The results indicate that removing the feedback of the bridge's displacement yB from the pair (yE − yB) measured by the eddy-current sensor is beneficial for the passivity of the levitation system and the control of the self-excited vibration. In this situation, the signal acquisition of the electromagnet's displacement yE is discussed for the engineering application. Finally, to validate the effectiveness of the aforementioned control strategy, numerical validations are carried out and the experimental data are provided and analyzed.

  1. Experimental validation of TASS/SMR-S critical flow model for the integral reactor SMART

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Si Won; Ra, In Sik; Kim, Kun Yeup [ACT Co., Daejeon (Korea, Republic of); Chung, Young Jong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    An advanced integral PWR, SMART (System-Integrated Modular Advanced ReacTor), is being developed at KAERI. It has a compact size and a relatively small power rating (330 MWt) compared to a conventional reactor. Because new concepts are applied to SMART, experimental and analytical validation is necessary for its safety evaluation. The analytical safety validation is being accomplished with TASS/SMR-S, a safety analysis code for an integral reactor developed by KAERI. TASS/SMR-S uses lumped-parameter, one-dimensional node-and-path modeling for the thermal-hydraulic calculation and point kinetics for the reactor power calculation. It has general-purpose models such as a core heat transfer model, a wall heat structure model, a critical flow model and component models, and it also has many SMART-specific models such as a once-through helically coiled steam generator model and a condensate heat transfer model. To ensure that the TASS/SMR-S code has the calculation capability required for the safety evaluation of SMART, the code should be validated for the specific models against separate-effect test results. In this study, the TASS/SMR-S critical flow model is evaluated by comparison with the SMD (Super Moby Dick) experiment.

  2. Validation of NEPTUNE-CFD two-phase flow models using experimental data

    International Nuclear Information System (INIS)

    Perez-Manes, Jorge; Sanchez Espinoza, Victor Hugo; Bottcher, Michael; Stieglitz, Robert; Sergio Chiva Vicent

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BWR BFBT and PSBT benchmarks. Since the two-phase flow models of CFD codes are being extensively improved, validation is a key step for the acceptability of such codes. The validation work was performed in the framework of the European NURISP Project and focused on the steady-state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Thanks to the coupling of the heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is significantly improved. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvement of the NEPTUNE-CFD code were also identified. (authors)

  3. Dynamic modeling and experimental validation for direct contact membrane distillation (DCMD) process

    KAUST Repository

    Eleiwi, Fadi

    2016-02-01

    This work proposes a mathematical dynamic model for the direct contact membrane distillation (DCMD) process. The model is based on a 2D Advection–Diffusion Equation (ADE), which describes the heat and mass transfer mechanisms that take place inside the DCMD module. The model captures the behavior of the process in both the time-varying and the steady-state phases, contributing to an understanding of the process performance, especially when it is driven by an intermittent energy supply such as solar energy. The model is experimentally validated in the steady-state phase, where the permeate flux is measured for different feed inlet temperatures and the maximum absolute error recorded is 2.78 °C. Moreover, the experimental validation includes the time-varying phase, where the feed inlet temperature ranges from 30 °C to 75 °C with a 0.1 °C increment every 2 min. The validation shows a relative error of less than 5%, indicating a strong correlation between the model predictions and the experiments.
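
    For reference, a generic 2D advection-diffusion equation of the kind the model is built on can be written as follows (generic textbook form; the paper's boundary conditions and membrane coupling terms are not reproduced here):

```latex
% Generic 2D advection-diffusion equation for a transported field T(x, y, t):
\frac{\partial T}{\partial t}
  + u\,\frac{\partial T}{\partial x}
  + v\,\frac{\partial T}{\partial y}
  = \alpha \left( \frac{\partial^{2} T}{\partial x^{2}}
                + \frac{\partial^{2} T}{\partial y^{2}} \right)
```

    Here T(x, y, t) is the transported temperature field, u and v are the velocity components, and α is the thermal diffusivity.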

  4. Dynamic modeling and experimental validation for direct contact membrane distillation (DCMD) process

    KAUST Repository

    Eleiwi, Fadi; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Francis, Lijo; Laleg-Kirati, Taous-Meriem

    2016-01-01

    This work proposes a mathematical dynamic model for the direct contact membrane distillation (DCMD) process. The model is based on a 2D Advection–Diffusion Equation (ADE), which describes the heat and mass transfer mechanisms that take place inside the DCMD module. The model captures the behavior of the process in both the time-varying and the steady-state phases, contributing to an understanding of the process performance, especially when it is driven by an intermittent energy supply such as solar energy. The model is experimentally validated in the steady-state phase, where the permeate flux is measured for different feed inlet temperatures and the maximum absolute error recorded is 2.78 °C. Moreover, the experimental validation includes the time-varying phase, where the feed inlet temperature ranges from 30 °C to 75 °C with a 0.1 °C increment every 2 min. The validation shows a relative error of less than 5%, indicating a strong correlation between the model predictions and the experiments.

  5. DSMC simulation and experimental validation of shock interaction in hypersonic low density flow.

    Science.gov (United States)

    Xiao, Hong; Shang, Yuhe; Wu, Di

    2014-01-01

    A direct simulation Monte Carlo (DSMC) study of shock interaction in hypersonic low-density flow is presented. Three molecular collision models, including hard sphere (HS), variable hard sphere (VHS), and variable soft sphere (VSS), are employed in the DSMC study. Simulations of double-cone and Edney's type IV hypersonic shock interactions in low-density flow are performed, and comparisons between DSMC and experimental data are conducted. Investigation of the double-cone hypersonic flow shows that all three collision models can predict the trends of the pressure coefficient and the Stanton number, with the HS model showing the best agreement between simulation and experiment. For Edney's type IV shock interaction, the agreement between DSMC and experiment is generally good for the HS and VHS models, but not for the VSS model. Both the double-cone and the Edney's type IV simulations show that the DSMC errors depend on the Knudsen number and on the model employed for intermolecular interaction. As the Knudsen number increases, the DSMC error decreases, and the error is smallest for the HS model compared with the VHS and VSS models. When the Knudsen number is on the order of 10⁻⁴, the DSMC errors for the pressure coefficient, the Stanton number, and the scale of the interaction region are all within 10%.
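
    For context, the Knudsen number quoted above is the usual ratio of mean free path to characteristic length, and for the hard-sphere model the mean free path has the familiar closed form (standard kinetic-theory definitions, not taken from the paper):

```latex
% Knudsen number and hard-sphere (HS) mean free path (standard definitions):
\mathrm{Kn} = \frac{\lambda}{L},
\qquad
\lambda_{\mathrm{HS}} = \frac{1}{\sqrt{2}\,\pi d^{2} n}
```

    Here λ is the mean free path, L a characteristic flow length, d the molecular diameter and n the number density.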

  6. Experimental and simulation validation of ABHE for disinfection of Legionella in hot water systems

    International Nuclear Information System (INIS)

    Altorkmany, Lobna; Kharseh, Mohamad; Ljung, Anna-Lena; Staffan Lundström, T.

    2017-01-01

    Highlights: • The ABHE system can provide continuous thermal treatment of water while saving energy. • Mathematical and experimental validation of the ABHE performance is presented. • An EES-based model is developed to simulate the ABHE system. • Energy saving by the ABHE is demonstrated for different initial working parameters. - Abstract: This work concerns an innovative system, inspired by nature, that mimics the thermoregulation system found in animals. This method, called the Anti Bacteria Heat Exchanger (ABHE), is proposed to achieve continuous thermal disinfection of bacteria in hot water systems with high energy efficiency. In particular, this study aims to demonstrate the opportunity to save energy by recovering heat over a plate heat exchanger. Firstly, the thermodynamics of the ABHE is clarified in order to define the ABHE specification. Secondly, a first prototype of an ABHE is built with a configuration chosen for simplicity of design and construction. Thirdly, an experimental test is carried out. Finally, a computer model is built to simulate the ABHE system and the experimental data are used to validate the model. The experimental results indicate that the performance of the ABHE system is strongly dependent on the flow rate, while the supply temperature has less effect. Experimental and simulation data show a large potential for saving energy with this thermal disinfection method by recovering heat. For example, when supplying water at a flow rate of 5 kg/min and a temperature of 50 °C, the heat recovery is about 1.5 kW while the required pumping power is 1 W. This means that the pressure drop is very small compared to the energy recovered, and consequently a high saving in total cost is promising.
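
    As a back-of-envelope check of the figures quoted above (our own illustrative arithmetic, not from the paper), the 1.5 kW recovered at 5 kg/min corresponds to a water temperature change of only a few kelvin across the heat exchanger:

```latex
% Sensible-heat balance Q = \dot{m} c_p \Delta T with
% \dot{m} = 5~\mathrm{kg/min} \approx 0.083~\mathrm{kg/s} and
% c_p \approx 4.19~\mathrm{kJ/(kg\,K)}:
\Delta T = \frac{Q}{\dot{m}\,c_p}
         \approx \frac{1.5~\mathrm{kW}}{0.083~\mathrm{kg/s} \times 4.19~\mathrm{kJ/(kg\,K)}}
         \approx 4.3~\mathrm{K}
```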

  7. Computational Fluid Dynamics Modeling of the Human Pulmonary Arteries with Experimental Validation.

    Science.gov (United States)

    Bordones, Alifer D; Leroux, Matthew; Kheyfets, Vitaly O; Wu, Yu-An; Chen, Chia-Yuan; Finol, Ender A

    2018-05-21

    Pulmonary hypertension (PH) is a chronic progressive disease characterized by elevated pulmonary arterial pressure, caused by an increase in pulmonary arterial impedance. Computational fluid dynamics (CFD) can be used to identify metrics representative of the stage of PH disease. However, experimental validation of CFD models is often not pursued due to the geometric complexity of the model or uncertainties in the reproduction of the required flow conditions. The goal of this work is to validate experimentally a CFD model of a pulmonary artery phantom using a particle image velocimetry (PIV) technique. Rapid prototyping was used for the construction of the patient-specific pulmonary geometry, derived from chest computed tomography angiography images. CFD simulations were performed with the pulmonary model at Reynolds numbers matching those of the experiments. Flow rates, the velocity field, and shear stress distributions obtained with the CFD simulations were compared to their counterparts from the PIV flow visualization experiments. Computationally predicted flow rates were within 1% of the experimental measurements for three of the four branches of the CFD model. The mean velocities in four transverse planes of study were within 5.9 to 13.1% of the experimental mean velocities. Shear stresses were qualitatively similar between the two methods, with some discrepancies in the regions of high velocity gradients. The fluid flow differences between the CFD model and the PIV phantom are attributed to experimental inaccuracies and the relative compliance of the phantom. This comparative analysis yielded valuable information on the accuracy of CFD-predicted hemodynamics in pulmonary circulation models.

  8. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Utgikar, Vivek [Univ. of Idaho, Moscow, ID (United States); Sun, Xiaodong [The Ohio State Univ., Columbus, OH (United States); Christensen, Richard [The Ohio State Univ., Columbus, OH (United States); Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchanger system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved the selection of IHX candidates and the development of steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal-hydraulic performance of the two prototype heat exchangers designed and fabricated for the project, at steady-state and transient conditions, to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU): the existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.

  9. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin.

    Science.gov (United States)

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2015-11-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.

  10. Experimental validation of tape springs to be used as thin-walled space structures

    Science.gov (United States)

    Oberst, S.; Tuttle, S. L.; Griffin, D.; Lambert, A.; Boyce, R. R.

    2018-04-01

    With the advent of standardised launch geometries and off-the-shelf payloads, space programs utilising nano-satellite platforms are growing worldwide. Thin-walled, flexible and self-deployable structures are commonly used for antennae, instrument booms or solar panels owing to their light weight, ideal packaging characteristics and near-zero energy consumption. However, their behaviour in space, in particular in Low Earth Orbits with continually changing environmental conditions, raises many questions. Accurate numerical models, which are often not available due to the difficulty of experimental testing under 1 g conditions, are needed to answer these questions. In this study, we present on-earth experimental validations as a starting point for studying the response of a tape spring, as a representative thin-walled flexible structure, under static and vibrational loading. Material parameters of tape springs in a singly curved (straight, open cylinder) and a doubly curved design are compared by combining finite element calculations with experimental laser vibrometry in single- and multi-stage model updating approaches. While the determination of the Young's modulus is unproblematic, the damping is found to be inversely proportional to deployment length. With the updated material properties, the buckling instability margin is calculated for different slenderness ratios. Results indicate a high sensitivity of thin-walled structures to minuscule perturbations, which makes proper experimental testing a key requirement for stability prediction of thin elastic space structures. The doubly curved tape spring provides closer agreement with experimental results than a straight tape spring design.

  11. The Dynamic Similitude Design Method of Thin Walled Structures and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Zhong Luo

    2016-01-01

    Full Text Available To support the applicability of dynamic similitude models of thin-walled structures, such as engine blades, turbine discs, and cylindrical shells, the dynamic similitude design of typical thin-walled structures is investigated. The governing equation of typical thin-walled structures is first unified, which guides the establishment of their dynamic scaling laws. Based on the governing equation, the geometrically complete scaling law of a typical thin-walled structure is derived. In order to determine accurate distorted scaling laws of typical thin-walled structures, three principles are proposed and theoretically proved by combining sensitivity analysis with the governing equation. Taking a thin-walled annular plate as an example, geometrically complete and distorted scaling laws are obtained based on these principles. Furthermore, accurate distorted scaling laws of thin-walled annular plates for the first five orders are presented and numerically validated. Finally, the effectiveness of the similitude design method is validated experimentally on annular plates.

  12. Experimental Validation and Model Verification for a Novel Geometry ICPC Solar Collector

    DEFF Research Database (Denmark)

    Perers, Bengt; Duff, William S.; Daosukho, Jirachote

    A novel geometry ICPC solar collector was developed at the University of Chicago and Colorado State University. A ray tracing model has been designed to investigate the optical performance of both the horizontal and vertical fin versions of this collector. Solar radiation is modeled as discrete...... to the desired incident angle of the sun’s rays, performance of the novel ICPC solar collector at various specified angles along the transverse and longitudinal evacuated tube directions were experimentally determined. To validate the ray tracing model, transverse and longitudinal performance predictions...... at the corresponding specified incident angles are compared to the Sandia results. A 100 m2 336 Novel ICPC evacuated tube solar collector array has been in continuous operation at a demonstration project in Sacramento California since 1998. Data from the initial operation of the array are used to further validate...

  13. Numerical calibration and experimental validation of a PCM-Air heat exchanger model

    International Nuclear Information System (INIS)

    Stathopoulos, N.; El Mankibi, M.; Santamouris, Mattheos

    2017-01-01

    Highlights: • Development of a PCM-Air heat exchanger experimental unit and its numerical model. • Differential Scanning Calorimetry for PCM properties. • Inadequacy of DSC-obtained heat capacity curves. • Creation of adequate heat capacity curves depending on heat transfer rates. • Comparison of numerical and experimental results and validation of the model. - Abstract: Ambitious goals have been set at the international, European and French levels for reducing the energy consumption and greenhouse gas emissions of the building sector. Achieving them requires the integration of renewable energy, which however presents an important drawback: intermittent energy production. In response, thermal energy storage (TES) technology applications have been developed in order to match the energy production and consumption of the building. Phase Change Materials (PCMs) have been widely used in TES applications as they offer a high storage density and an adequate phase change temperature range. It is important to accurately know the thermophysical properties of the PCM, both for experimental (system design) and numerical (correct prediction) purposes. In this paper, the fabrication of a PCM-Air experimental prototype is first presented, along with the development of a numerical model simulating the downstream temperature evolution of the heat exchanger. Particular focus is given to the calibration method and the validation of the model using experimental characterization results. Differential scanning calorimetry (DSC) is used to define the thermal properties of the PCM. Initial numerical results underestimate the experimental ones. Various factors were investigated, pointing to the inadequacy of the heat capacity parameter, as DSC results depend on heating/cooling rates. Adequate heat capacity curves were empirically determined, depending on heat transfer rates and based on DSC results and experimental observations. The results of the proposed model

  14. An experimental approach to improve the Monte Carlo modelling of offline PET/CT-imaging of positron emitters induced by scanned proton beams

    Science.gov (United States)

    Bauer, J.; Unholtz, D.; Kurz, C.; Parodi, K.

    2013-08-01

    We report on the experimental campaign carried out at the Heidelberg Ion-Beam Therapy Center (HIT) to optimize the Monte Carlo (MC) modelling of proton-induced positron-emitter production. The presented experimental strategy constitutes a pragmatic inverse approach to overcome the known uncertainties in the modelling of positron-emitter production due to the lack of reliable cross-section data for the relevant therapeutic energy range. This work is motivated by the clinical implementation of offline PET/CT-based treatment verification at our facility. Here, the irradiation induced tissue activation in the patient is monitored shortly after the treatment delivery by means of a commercial PET/CT scanner and compared to a MC simulated activity expectation, derived under the assumption of a correct treatment delivery. At HIT, the MC particle transport and interaction code FLUKA is used for the simulation of the expected positron-emitter yield. For this particular application, the code is coupled to externally provided cross-section data of several proton-induced reactions. Studying experimentally the positron-emitting radionuclide yield in homogeneous phantoms provides access to the fundamental production channels. Therefore, five different materials have been irradiated by monoenergetic proton pencil beams at various energies and the induced β+ activity subsequently acquired with a commercial full-ring PET/CT scanner. With the analysis of dynamically reconstructed PET images, we are able to determine separately the spatial distribution of different radionuclide concentrations at the starting time of the PET scan. The laterally integrated radionuclide yields in depth are used to tune the input cross-section data such that the impact of both the physical production and the imaging process on the various positron-emitter yields is reproduced. The resulting cross-section data sets allow to model the absolute level of measured β+ activity induced in the investigated
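
    The radionuclide yields referred to above follow the standard activation buildup-and-decay relation for a constant production rate (generic textbook form, not the FLUKA implementation):

```latex
% Buildup of a positron emitter produced at constant rate R during an
% irradiation of duration t_irr, followed by decay over the delay \Delta t
% before the PET scan (decay constant \lambda):
N(t_{\mathrm{irr}},\Delta t)
  = \frac{R}{\lambda}\left(1 - e^{-\lambda t_{\mathrm{irr}}}\right)
    e^{-\lambda \Delta t}
```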

  15. Monte Carlo simulations and experimental results on neutron production in the spallation target QUINTA irradiated with 660 MeV protons

    International Nuclear Information System (INIS)

    Khushvaktov, J.H.; Yuldashev, B.S.; Adam, J.; Vrzalova, J.; Baldin, A.A.; Furman, W.I.; Gustov, S.A.; Kish, Yu.V.; Solnyshkin, A.A.; Stegailov, V.I.; Tichy, P.; Tsoupko-Sitnikov, V.M.; Tyutyunnikov, S.I.; Zavorka, L.; Svoboda, J.; Zeman, M.; Vespalec, R.; Wagner, V.

    2017-01-01

    The activation experiment was performed using the accelerated beam of the Phasotron accelerator at the Joint Institute for Nuclear Research (JINR). The natural uranium spallation target QUINTA was irradiated with protons of energy 660 MeV. Monte Carlo simulations were performed using the FLUKA and Geant4 codes. The number of leakage neutrons from the sections of the uranium target surrounded by the lead shielding and the number of leakage neutrons from the lead shield were determined. The total number of fissions in the QUINTA setup was determined. Experimental values of the reaction rates for the nuclei produced in the ¹²⁷I sample were obtained, and several of these reaction rates were compared with the results of simulations with the FLUKA and Geant4 codes. The experimentally determined fluence of neutrons in the energy range of 10-200 MeV, obtained using the (n, xn) reactions in the ¹²⁷I(NaI) sample, was compared with the results of the simulations. The possibility of transmutation of the long-lived radionuclide ¹²⁹I in the QUINTA setup was estimated.

  16. Computational Design of Creep-Resistant Alloys and Experimental Validation in Ferritic Superalloys

    Energy Technology Data Exchange (ETDEWEB)

    Liaw, Peter

    2014-12-31

    A new class of ferritic superalloys containing B2-type zones inside parent L21-type precipitates in a disordered solid-solution matrix, also known as a hierarchical-precipitate-strengthened ferritic alloy (HPSFA), has been developed for high-temperature structural applications in fossil-energy power plants. These alloys were designed by adding Ti to a previously studied NiAl-strengthened ferritic alloy (denoted as FBB8 in this study). In the present research, systematic investigations, including advanced experimental techniques, first-principles calculations, and numerical simulations, have been integrated and conducted to characterize the complex microstructures and excellent creep resistance of HPSFAs. The experimental techniques include transmission electron microscopy, scanning transmission electron microscopy, neutron diffraction, and atom-probe tomography, which provide detailed microstructural information on HPSFAs. Systematic tension/compression creep tests revealed that HPSFAs exhibit superior creep resistance compared with the FBB8 and conventional ferritic steels (i.e., the creep rates of HPSFAs are about four orders of magnitude lower than those of the FBB8 and conventional ferritic steels). First-principles calculations include interfacial free energies, anti-phase boundary (APB) free energies, elastic constants, and impurity diffusivities in Fe. Combined with kinetic Monte Carlo simulations of interdiffusion coefficients and the integration of computational thermodynamics and kinetics, these calculations provide a thorough understanding of the thermodynamic and mechanical properties of HPSFAs. In addition to the systematic experimental approach and first-principles calculations, a series of numerical tools and algorithms, which assist in the optimization of the creep properties of ferritic superalloys, is developed and utilized. These numerical simulation results are compared with the available experimental data and previous first

  17. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    Science.gov (United States)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time-integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out using purpose-built cure sensors that measure the electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for the simulation of the pultrusion of a rod profile has been successfully corrected and finalized.

  18. Electrocatalysis of borohydride oxidation: a review of density functional theory approach combined with experimental validation

    International Nuclear Information System (INIS)

    Sison Escaño, Mary Clare; Arevalo, Ryan Lacdao; Kasai, Hideaki; Gyenge, Elod

    2014-01-01

    The electrocatalysis of borohydride oxidation is a complex, up-to-eight-electron transfer process, which is essential for the development of efficient direct borohydride fuel cells. Here we review the progress achieved by density functional theory (DFT) calculations in explaining the adsorption of BH4− on various catalyst surfaces, with implications for electrocatalyst screening and selection. Wherever possible, we correlate the theoretical predictions with experimental findings, in order to validate the proposed models and to identify potential directions for further advancements. (topical review)

  19. Signal Validation: A Survey of Theoretical and Experimental Studies at the KFKI Atomic Energy Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.

    1996-07-01

    The aim of this survey paper is to collect the results of the theoretical and experimental work that has been done on early failure and change detection, signal/detector validation, parameter estimation and system identification problems in the Applied Reactor Physics Department of the KFKI-AEI. The present paper reports different applications of the theoretical methods using real and computer-simulated data. The final goal is twofold: 1) to better understand the mathematical/physical background of the applied methods and 2) to integrate the useful algorithms into a large, complex diagnostic software system. The software is under development; a preliminary version (called JEDI) has already been completed. (author)

  20. Experimental validation of field cooling simulations for linear superconducting magnetic bearings

    Energy Technology Data Exchange (ETDEWEB)

    Dias, D H N; Motta, E S; Sotelo, G G; De Andrade Jr, R, E-mail: ddias@coe.ufrj.b [Laboratorio de aplicacao de Supercondutores (LASUP), Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil)

    2010-07-15

    For practical stability of a superconducting magnetic bearing the refrigeration process must occur with the superconductor in the presence of the magnetic field (a field cooling (FC) process). This paper presents an experimental validation of a method for simulating this system in the FC case. Measured and simulated results for a vertical force between a high temperature superconductor and a permanent magnet rail are compared. The main purpose of this work is to consolidate a simulation tool that can help in future projects on superconducting magnetic bearings for MagLev vehicles.

  1. Modeling and experimental validation of water mass balance in a PEM fuel cell stack

    DEFF Research Database (Denmark)

    Liso, Vincenzo; Araya, Samuel Simon; Olesen, Anders Christian

    2016-01-01

    Polymer electrolyte membrane (PEM) fuel cells require good hydration in order to deliver high performance and ensure long-life operation. Water is essential for proton conductivity in the membrane, which increases by nearly six orders of magnitude from dry to fully hydrated. Adequate water management in a PEM fuel cell is crucial in order to avoid an imbalance between water production and water removal from the fuel cell. In the present study, a novel zero-dimensional mathematical model has been formulated for the water mass balance and hydration of a polymer electrolyte membrane. This model is validated against experimental data. The results show that the fuel cell water balance calculated by this model fits the experimental data points better than a model in which only steady-state operation is considered. We conclude that this discrepancy is due to a different rate of water...
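
    As a reminder of the kind of terms such a zero-dimensional water balance contains, the sketch below evaluates only the Faradaic water-generation term; the electro-osmotic drag and back-diffusion terms of the authors' model are not reproduced, and the stack parameters are hypothetical:

```python
# Illustrative sketch: Faradaic water generation in a PEM fuel cell stack,
# m_dot = N_cells * I * M_H2O / (2 F). Stack parameters are hypothetical;
# electro-osmotic drag and back-diffusion terms of the full balance are omitted.
F = 96485.0          # C/mol, Faraday constant
M_H2O = 18.015e-3    # kg/mol

def water_generation_rate(current_A, n_cells):
    """Water produced at the cathode [kg/s]."""
    return n_cells * current_A * M_H2O / (2.0 * F)

m_dot = water_generation_rate(current_A=150.0, n_cells=60)   # hypothetical stack
print(f"water generation ≈ {m_dot * 3600:.2f} kg/h")
```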

  2. Absorber and regenerator models for liquid desiccant air conditioning systems. Validation and comparison using experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Krause, M.; Heinzen, R.; Jordan, U.; Vajen, K. [Kassel Univ., Inst. of Thermal Engineering, Kassel (Germany); Saman, W.; Halawa, E. [Sustainable Energy Centre, Univ. of South Australia, Mawson Lakes, Adelaide (Australia)

    2008-07-01

    Solar-assisted air conditioning systems using liquid desiccants represent a promising option to decrease the high summer energy demand caused by electrically driven vapor compression machines. The main components of liquid desiccant systems are absorbers for dehumidifying and cooling the supply air and regenerators for concentrating the desiccant. However, highly efficient, validated and reliable components are required, and the design and operation have to be adjusted to each respective building design, location, and user demand. Simulation tools can help to optimize component and system design. The present paper presents newly developed numerical models for absorbers and regenerators, as well as experimental data from a regenerator prototype. The models have been compared with a finite-difference method model as well as with experimental data. The data are obtained from the regenerator prototype presented here and from an absorber reported in the literature. (orig.)

  3. Computational Prediction and Rationalization, and Experimental Validation of Handedness Induction in Helical Aromatic Oligoamide Foldamers.

    Science.gov (United States)

    Liu, Zhiwei; Hu, Xiaobo; Abramyan, Ara M; Mészáros, Ádám; Csékei, Márton; Kotschy, András; Huc, Ivan; Pophristic, Vojislava

    2017-03-13

    Metadynamics simulations were used to describe the conformational energy landscapes of several helically folded aromatic quinoline carboxamide oligomers bearing a single chiral group at either the C or N terminus. The calculations allowed the prediction of whether a helix handedness bias occurs under the influence of the chiral group and gave insight into the interactions (sterics, electrostatics, hydrogen bonds) responsible for a particular helix sense preference. In the case of camphanyl-based and morpholine-based chiral groups, experimental data confirming the validity of the calculations were already available. New chiral groups with a proline residue were also investigated and were predicted to induce handedness. This prediction was verified experimentally through the synthesis of proline-containing monomers, their incorporation into an oligoamide sequence by solid phase synthesis and the investigation of handedness induction by NMR spectroscopy and circular dichroism. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    Science.gov (United States)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with the establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced into the penalized functional for the regularization of the damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  5. De novo peptide design and experimental validation of histone methyltransferase inhibitors.

    Directory of Open Access Journals (Sweden)

    James Smadbeck

    Full Text Available Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA–protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2) maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes, leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism-of-action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 μM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and demonstrated competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly inhibit EZH2

  6. De novo peptide design and experimental validation of histone methyltransferase inhibitors.

    Directory of Open Access Journals (Sweden)

    James Smadbeck

    Full Text Available Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA-protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2) maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes, leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism-of-action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 μM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and demonstrated competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly

  7. Experimentally Manipulating Items Informs on the (Limited) Construct and Criterion Validity of the Humor Styles Questionnaire

    Directory of Open Access Journals (Sweden)

    Willibald Ruch

    2017-04-01

    Full Text Available How strongly does humor (i.e., the construct-relevant content) in the Humor Styles Questionnaire (HSQ; Martin et al., 2003) determine the responses to this measure (i.e., construct validity)? Also, how much does humor influence the relationships of the four HSQ scales, namely affiliative, self-enhancing, aggressive, and self-defeating, with personality traits and subjective well-being (i.e., criterion validity)? The present paper answers these two questions by experimentally manipulating the 32 items of the HSQ to only (or mostly) contain humor (i.e., construct-relevant content) or to substitute the humor content with non-humorous alternatives (i.e., only assessing construct-irrelevant context). Study 1 (N = 187) showed that the HSQ affiliative scale was mainly determined by humor, self-enhancing and aggressive were determined by both humor and non-humorous context, and self-defeating was primarily determined by the context. This suggests that humor is not the primary source of the variance in three of the HSQ scales, thereby limiting their construct validity. Study 2 (N = 261) showed that the relationships of the HSQ scales to the Big Five personality traits and subjective well-being (positive affect, negative affect, and life satisfaction) were consistently reduced (personality) or vanished (subjective well-being) when the non-humorous contexts in the HSQ items were controlled for. For the HSQ self-defeating scale, the pattern of relationships to personality was also altered, supporting a positive rather than a negative view of the humor in this humor style. The present findings thus call for a reevaluation of the role that humor plays in the HSQ (construct validity) and in the relationships to personality and well-being (criterion validity).

  8. Experimentally Manipulating Items Informs on the (Limited) Construct and Criterion Validity of the Humor Styles Questionnaire.

    Science.gov (United States)

    Ruch, Willibald; Heintz, Sonja

    2017-01-01

    How strongly does humor (i.e., the construct-relevant content) in the Humor Styles Questionnaire (HSQ; Martin et al., 2003) determine the responses to this measure (i.e., construct validity)? Also, how much does humor influence the relationships of the four HSQ scales, namely affiliative, self-enhancing, aggressive, and self-defeating, with personality traits and subjective well-being (i.e., criterion validity)? The present paper answers these two questions by experimentally manipulating the 32 items of the HSQ to only (or mostly) contain humor (i.e., construct-relevant content) or to substitute the humor content with non-humorous alternatives (i.e., only assessing construct-irrelevant context). Study 1 ( N = 187) showed that the HSQ affiliative scale was mainly determined by humor, self-enhancing and aggressive were determined by both humor and non-humorous context, and self-defeating was primarily determined by the context. This suggests that humor is not the primary source of the variance in three of the HSQ scales, thereby limiting their construct validity. Study 2 ( N = 261) showed that the relationships of the HSQ scales to the Big Five personality traits and subjective well-being (positive affect, negative affect, and life satisfaction) were consistently reduced (personality) or vanished (subjective well-being) when the non-humorous contexts in the HSQ items were controlled for. For the HSQ self-defeating scale, the pattern of relationships to personality was also altered, supporting a positive rather than a negative view of the humor in this humor style. The present findings thus call for a reevaluation of the role that humor plays in the HSQ (construct validity) and in the relationships to personality and well-being (criterion validity).

  9. Preliminary experimentally-validated forced and mixed convection computational simulations of the Rotatable Buoyancy Tunnel

    International Nuclear Information System (INIS)

    Clifford, Corey E.; Kimber, Mark L.

    2015-01-01

    Although computational fluid dynamics (CFD) has not been directly utilized to perform safety analyses of nuclear reactors in the United States, several vendors are considering adopting commercial numerical packages for current and future projects. To ensure the accuracy of these computational models, it is imperative to validate the assumptions and approximations built into commercial CFD codes against physical data from flows analogous to those in modern nuclear reactors. To this end, researchers at Utah State University (USU) have constructed the Rotatable Buoyancy Tunnel (RoBuT) test facility, which is designed to provide flow and thermal validation data for CFD simulations of forced and mixed convection scenarios. In order to evaluate the ability of current CFD codes to capture the complex physics associated with these types of flows, a computational model of the RoBuT test facility is created using the ANSYS Fluent commercial CFD code. The numerical RoBuT model is analyzed at identical conditions to several experimental trials undertaken at USU. Each experiment is reconstructed numerically and evaluated with the second-order Reynolds stress model (RSM). Two different thermal boundary conditions at the heated surface of the RoBuT test section are investigated: constant temperature (isothermal) and constant surface heat flux (isoflux). Additionally, the fluid velocity at the inlet of the test section is varied in an effort to modify the relative importance of natural convection heat transfer from the heated wall of the RoBuT. Mean velocity, both in the streamwise and transverse directions, as well as components of the Reynolds stress tensor at three points downstream of the RoBuT test section inlet are compared to results obtained from experimental trials. Early computational results obtained from this research initiative are in good agreement with experimental data obtained from the RoBuT facility and both the experimental data and numerical method can be used

  10. Sliding spool design for reducing the actuation forces in direct operated proportional directional valves: Experimental validation

    International Nuclear Information System (INIS)

    Amirante, Riccardo; Distaso, Elia; Tamburrano, Paolo

    2016-01-01

    Highlights: • An innovative procedure to design a commercial proportional directional valve is shown. • Experimental tests are performed to demonstrate the flow force reduction. • The design is improved by means of a previously made optimization procedure. • Great reduction in the flow forces without reducing the flow rate is demonstrated. - Abstract: This paper presents the experimental validation of a new methodology for the design of the spool surfaces of four way three position direct operated proportional directional valves. The proposed methodology is based on the re-design of both the compensation profile (the central conical surface of the spool) and the lateral surfaces of the spool, in order to reduce the flow forces acting on the spool and hence the actuation forces. The aim of this work is to extend the application range of these valves to higher values of pressure and flow rate, thus avoiding the employment of more expensive two stage configurations in the case of high-pressure conditions and/or flow rate. The paper first presents a theoretical approach and a general strategy for the sliding spool design to be applied to any four way three position direct operated proportional directional valve. Then, the proposed approach is experimentally validated on a commercially available valve using a hydraulic circuit capable of measuring the flow rate as well as the actuation force over the entire spool stroke. The experimental results, performed using both the electronic driver provided by the manufacturer and a manual actuation system, show that the novel spool surface requires remarkably lower actuation forces compared to the commercial configuration, while maintaining the same flow rate trend as a function of the spool position.

  11. Methodology for experimental validation of a CFD model for predicting noise generation in centrifugal compressors

    International Nuclear Information System (INIS)

    Broatch, A.; Galindo, J.; Navarro, R.; García-Tíscar, J.

    2014-01-01

    Highlights: • A DES of a turbocharger compressor working at peak pressure point is performed. • In-duct pressure signals are measured in a steady flow rig with 3-sensor arrays. • Pressure spectra comparison is performed as a validation for the numerical model. • A suitable comparison methodology is developed, relying on pressure decomposition. • Whoosh noise at outlet duct is detected in experimental and numerical spectra. - Abstract: Centrifugal compressors working in the surge side of the map generate a broadband noise in the range of 1–3 kHz, named as whoosh noise. This noise is perceived at strongly downsized engines operating at particular conditions (full load, tip-in and tip-out maneuvers). A 3-dimensional CFD model of a centrifugal compressor is built to analyze fluid phenomena related to whoosh noise. A detached eddy simulation is performed with the compressor operating at the peak pressure point of 160 krpm. A steady flow rig mounted on an anechoic chamber is used to obtain experimental measurements as a means of validation for the numerical model. In-duct pressure signals are obtained in addition to standard averaged global variables. The numerical simulation provides global variables showing excellent agreement with experimental measurements. Pressure spectra comparison is performed to assess noise prediction capability of numerical model. The influence of the type and position of the virtual pressure probes is evaluated. Pressure decomposition is required by the simulations to obtain meaningful spectra. Different techniques for obtaining pressure components are analyzed. At the simulated conditions, a broadband noise in 1–3 kHz frequency band is detected in the experimental measurements. This whoosh noise is also captured by the numerical model

  12. Response of CR-39 SSNTD to high energy neutrons using zirconium convertors - a Monte Carlo and experimental study

    International Nuclear Information System (INIS)

    Pal, Rupali; Sapra, B.K.; Bakshi, A.K.; Datta, D.; Biju, K.; Suryanarayana, S.V.; Nayak, B.K.

    2016-01-01

    Neutron dosimetry in ion accelerators is a challenging field as the neutron spectrum varies from thermal to fast and high-energy neutrons, usually extending beyond 20 MeV. Solid-state Nuclear Track Detectors (SSNTDs) have been increasingly used in numerous fields related to nuclear physics. Extensive work has also been carried out on determining the response characteristics of such detectors as nuclear spectrometers. In nuclear reaction studies, identification of reaction products according to their type and energy is frequently required. For normally incident particles, energy-dispersive track-diameter methods have become useful scientific tools using CR-39 SSNTD. CR-39 along with a 1 mm polyethylene convertor can cover a neutron energy range from 100 keV to 10 MeV. The neutron interacts with the hydrogen in CR-39, producing recoil protons from elastic collisions. This detectable neutron energy range can be increased by modifying the radiator/convertor used along with CR-39. CR-39 detectors placed in conjunction with judiciously chosen thicknesses of a polyethylene radiator and a lead absorber (or degrader) are used to increase the energy range up to 19 MeV. A portable neutron counter has been proposed for high-energy neutron measurement with 1 cm thick zirconium (Zr) as the convertor outside a spherical HDPE shell of 7 inch diameter. Zr metal has been found to show an (n,2n) cross section for energies above 10 MeV, starting from 0.01 barn for 8 MeV up to 1 barn for 22 MeV. Above these energies, the experimental data is scarce. In this paper, Zr was used in conjunction with CR-39, which showed an enhancement of track density on the CR-39. This paper demonstrates the enhancement of neutron response using Zr on CR-39 with both theoretical and experimental studies

  13. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    Science.gov (United States)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal-external rotations and anterior-posterior displacements for a new and experimentally simulated specimen of the patella femoral joint under a standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to a ±5% change in kinematic, friction, force and stiffness coefficients and insensitive to the time step.

  14. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure in the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, it was determined the optimized experimental conditions for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effect, interaction effects and better irradiation conditions. (author)

  15. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure in the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, it was determined the optimized experimental conditions for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effect, interaction effects and better irradiation conditions. (author)
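    The 2^k screening approach used in these two records can be illustrated with a minimal sketch: build a full 2^3 coded design and estimate main effects as the difference of mean responses between the high and low levels of each factor. The factor names and response values below are invented for illustration and are not the study's data.

    ```python
    # Minimal sketch of a 2^3 full-factorial screening, assuming hypothetical
    # factors and responses (not the data from the INAA study above).
    from itertools import product
    import numpy as np

    factors = ["decay_time", "counting_time", "detector_distance"]  # illustrative names

    # Coded design matrix: every combination of low (-1) and high (+1) levels.
    design = np.array(list(product([-1, 1], repeat=len(factors))))  # shape (8, 3)

    # Hypothetical responses (e.g., a measured mass fraction for each run).
    response = np.array([10.2, 10.8, 9.9, 10.5, 11.4, 12.0, 11.1, 11.7])

    # Main effect of a factor: mean response at +1 minus mean response at -1.
    for j, name in enumerate(factors):
        effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
        print(f"main effect of {name}: {effect:+.2f}")
    ```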

  16. Experimental validation for combustion analysis of GOTHIC code in 2-dimensional combustion chamber

    International Nuclear Information System (INIS)

    Lee, J. W.; Yang, S. Y.; Park, K. C.; Jung, S. H.

    2002-01-01

    In this study, the prediction capability of the GOTHIC code for hydrogen combustion phenomena was validated against the results of a two-dimensional premixed hydrogen combustion experiment performed at Seoul National University. The experimental chamber has a free volume of about 24 liters (1 × 0.024 × 1 m³) and a 2-dimensional rectangular shape. The tests were performed with a 10% hydrogen/air gas mixture and conducted with combinations of two igniter positions (top center, top corner) and two boundary conditions (bottom fully open, bottom right half open). Using the lumped parameter and mechanistic combustion models in the GOTHIC code, the SNU experiments were simulated under the same conditions. The GOTHIC predictions of the hydrogen combustion phenomena did not compare well with the experimental results. In the lumped parameter simulation, the combustion time was predicted appropriately, but no local information on the combustion phenomena could be obtained. In the mechanistic combustion analysis, the predicted physical combustion phenomena of the gas mixture did not match the experimental ones. In the open-boundary cases, GOTHIC predicted a very long combustion time and the flame front propagation could not be simulated appropriately. Though GOTHIC showed the flame propagation phenomenon in an adiabatic calculation, the induction time of combustion was still very long compared with the experimental results. It was also found that the combustion model of the GOTHIC code has some weak points in simulating combustion at low hydrogen concentrations

  17. Finite-Geometry and Polarized Multiple-Scattering Corrections of Experimental Fast- Neutron Polarization Data by Means of Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, O; Gustafsson, B

    1967-05-15

    After an introductory discussion of various methods for correcting experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of the arbitrary scattering μ, only one triple parameter - the so-called spin rotation parameter β^(μ) - is required. Based upon these formulas a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the afore-mentioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed so as to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for reduction of the standard deviations of the results of the simulated experiment. Typical data of performance of MULTPOL have been obtained by the application of this program to the correction of experimental polarization data observed in n + ¹²C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.
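    The correlated-sampling idea named above - driving two similar simulations with the same random-number stream so that their difference has a much smaller variance than two independent runs - can be sketched in a few lines. The toy integrand below is purely illustrative and has nothing to do with MULTPOL's neutron physics.

    ```python
    # Toy illustration of correlated sampling: estimate the small difference
    # between two similar expectations using a shared random-number stream.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.random(n)                       # one shared random-number stream

    f_ref = np.exp(-x)                      # "reference" configuration
    f_pert = np.exp(-1.02 * x)              # slightly perturbed configuration

    # Correlated estimate: difference evaluated sample by sample.
    diff_corr = f_pert - f_ref
    # Uncorrelated estimate: perturbed case driven by an independent stream.
    y = rng.random(n)
    diff_indep = np.exp(-1.02 * y) - f_ref

    print("correlated  : %.5f +/- %.5f" % (diff_corr.mean(), diff_corr.std(ddof=1) / np.sqrt(n)))
    print("independent : %.5f +/- %.5f" % (diff_indep.mean(), diff_indep.std(ddof=1) / np.sqrt(n)))
    ```

    Running the sketch shows the correlated estimate reaching the same mean with a standard error that is orders of magnitude smaller, which is exactly why the method pays off when only a small correction is sought.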

  18. Modelling of the RA-1 reactor using a Monte Carlo code; Modelado del reactor RA-1 utilizando un codigo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Quinteiro, Guillermo F; Calabrese, Carlos R [Comision Nacional de Energia Atomica, General San Martin (Argentina). Dept. de Reactores y Centrales Nucleares

    2000-07-01

    A model of the Argentine RA-1 reactor was developed for the first time using the MCNP Monte Carlo code. This model was validated using experimental neutron and gamma measurements at different energy ranges and locations. In addition, the resulting fluxes were compared with the data obtained using a 3D diffusion code. (author)

  19. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in-operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitations of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
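    A common way to set up the augmented Kalman filter mentioned above is to append the unknown force to the state vector with a random-walk model and estimate it from the strain signals through a linear observation matrix. The sketch below is a generic, minimal version of that idea; the single-degree-of-freedom structure, gauge gain and noise levels are illustrative assumptions, not the mock-up's identified model.

    ```python
    # Minimal augmented Kalman filter sketch: the unmeasured force is appended to
    # the state (random-walk model) and reconstructed from strain readings.
    # The 1-DOF structure, gauge gain and noise levels are illustrative only.
    import numpy as np

    dt, m, k, c, gauge = 1e-3, 2.0, 400.0, 40.0, 1000.0   # time step and toy parameters

    # Discrete model with augmented state [displacement, velocity, force].
    A = np.array([[1.0,          dt,             0.0],
                  [-dt * k / m,  1.0 - dt * c/m, dt / m],
                  [0.0,          0.0,            1.0]])
    H = np.array([[gauge, 0.0, 0.0]])                  # strain ~ gauge * displacement
    Q = np.diag([1e-10, 1e-8, 10.0])                   # large process noise on the force
    R = np.array([[1.0]])                              # strain measurement noise variance

    rng = np.random.default_rng(1)
    n_steps = 4000
    x_true = np.zeros(3)
    x_est, P = np.zeros(3), np.eye(3)
    force_est = []

    for i in range(n_steps):
        x_true[2] = 100.0 if i * dt > 1.0 else 0.0     # true force: 100 N step at t = 1 s
        x_true = A @ x_true
        z = H @ x_true + rng.normal(0.0, 1.0)          # noisy strain measurement

        # Kalman filter predict / update.
        x_est = A @ x_est
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + (K @ (z - H @ x_est)).ravel()
        P = (np.eye(3) - K @ H) @ P
        force_est.append(x_est[2])

    print("estimated force near the end: %.1f N (true value 100 N)" % force_est[-1])
    ```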

  20. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    Science.gov (United States)

    Parsons, Thomas D.

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869

  1. Summary: Experimental validation of real-time fault-tolerant systems

    Science.gov (United States)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute than results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to external instrumentation that can be hooked up to the system under test. This process is a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, to evaluate the susceptibility of computing systems to different types of failures.

  2. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    Directory of Open Access Journals (Sweden)

    F. Terzuoli

    2008-01-01

    Full Text Available Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFDs) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX), and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. Obtained results suggest the relevance of three-dimensional effects and stress the importance of a suitable interface drag modelling.

  3. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    International Nuclear Information System (INIS)

    Terzuoli, F.; Galassi, M.C.; Mazzini, D.; D'Auria, F.

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFDs) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mecanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX), and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. Obtained results suggest the relevance of three-dimensional effects and stress the importance of a suitable interface drag modelling

  4. Experimental Study of the Twin Turbulent Water Jets Using Laser Doppler Anemometry for Validating Numerical Models

    International Nuclear Information System (INIS)

    Wang Huhu; Lee Saya; Hassan, Yassin A.; Ruggles, Arthur E.

    2014-01-01

    The design of next generation (Gen. IV) high-temperature nuclear reactors including gas-cooled and sodium-cooled ones involves massive numerical works especially the Computational Fluid Dynamics (CFD) simulations. The high cost of large-scale experiments and the inherent uncertainties existing in the turbulent models and wall functions of any CFD codes solving Reynolds-averaged Navier-Stokes (RANS) equations necessitate high-spatial-resolution experimental data sets for benchmarking the simulation results. In Gen. IV conceptual reactors, the high-temperature flows mix in the upper plenum before entering the secondary cooling system. The mixing condition should be accurately estimated and fully understood as it is related to the thermal stresses induced in the upper plenum and the magnitudes of output power oscillations due to any changes of primary coolant temperature. The purpose of this study is to use the Laser Doppler Anemometry (LDA) technique to measure the flow field of two submerged parallel jets issuing from two rectangular channels. The LDA data sets can be used to validate the corresponding simulation results. The jets studied in this work were at room temperature. The turbulent characteristics including the distributions of mean velocities, turbulence intensities and Reynolds stresses were studied. Uncertainty analysis was also performed to study the errors involved in this experiment. The experimental results in this work are valid for benchmarking any steady-state numerical simulations using turbulence models to solve RANS equations. (author)
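    For reference, the single-point statistics listed above (mean velocities, turbulence intensities and a Reynolds shear-stress component) follow directly from coincident velocity samples. The sketch below uses synthetic samples purely to illustrate the definitions; it is not the twin-jet LDA data.

    ```python
    # Sketch of the single-point turbulence statistics reported for LDA validation data:
    # mean velocities, turbulence intensity and the u'v' Reynolds stress component.
    # The velocity samples here are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000
    u = 1.5 + 0.12 * rng.standard_normal(n)        # streamwise samples [m/s]
    v = 0.0 + 0.08 * rng.standard_normal(n)        # transverse samples [m/s]

    u_mean, v_mean = u.mean(), v.mean()
    u_fluct, v_fluct = u - u_mean, v - v_mean

    turb_intensity = u_fluct.std(ddof=1) / u_mean   # relative to the local mean velocity
    reynolds_uv = np.mean(u_fluct * v_fluct)        # kinematic Reynolds shear stress <u'v'>

    print(f"U = {u_mean:.3f} m/s, V = {v_mean:.3f} m/s")
    print(f"turbulence intensity = {100 * turb_intensity:.1f} %")
    print(f"<u'v'> = {reynolds_uv:.2e} m^2/s^2")
    ```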

  5. The International Experimental Thermal Hydraulic Systems database – TIETHYS: A new NEA validation tool

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, Upendra S.

    2018-07-22

    Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered in different locations and in different formats. Some of the data is in danger of being lost. A relational database is being developed to organize the international thermal hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, that data is organized to include separate effect tests and integral effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert developed PIRTs. The database will provide a summary of appropriate data, review of facility information, test description, instrumentation, references for the experimental data and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data and is to be expanded to include references for molten salt reactors. There are place holders for high temperature gas cooled reactors, CANDU and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database and currently resides at Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/

  6. Modeling and experimental validation of a Hybridized Energy Storage System for automotive applications

    Science.gov (United States)

    Fiorenti, Simone; Guanetti, Jacopo; Guezennec, Yann; Onori, Simona

    2013-11-01

    This paper presents the development and experimental validation of a dynamic model of a Hybridized Energy Storage System (HESS) consisting of a parallel connection of a lead acid (PbA) battery and double layer capacitors (DLCs), for automotive applications. The dynamic modeling of both the PbA battery and the DLC has been tackled via the equivalent electric circuit based approach. Experimental tests are designed for identification purposes. Parameters of the PbA battery model are identified as a function of state of charge and current direction, whereas parameters of the DLC model are identified for different temperatures. A physical HESS has been assembled at the Center for Automotive Research of The Ohio State University and used as a test-bench to validate the model against a typical current profile generated for Start&Stop applications. The HESS model is then integrated into a vehicle simulator to assess the effects of battery hybridization on vehicle fuel economy and on the mitigation of battery stress.
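    A minimal version of the equivalent-circuit approach described above gives the battery an ohmic resistance plus one RC branch and the DLC a capacitance with series resistance, then splits the load current so that both branches see the same terminal voltage. All parameter values below are illustrative placeholders, not the identified PbA/DLC parameters from the paper.

    ```python
    # Sketch of a parallel PbA battery + double-layer capacitor (DLC) HESS using
    # simple equivalent circuits. Parameters are illustrative, not identified values.
    import numpy as np

    dt = 0.1                                    # time step [s]
    # Battery: open-circuit voltage (held constant here), ohmic R0 and one RC branch.
    ocv, r0_b, r1_b, c1_b, q_batt = 12.6, 0.010, 0.015, 2000.0, 60.0 * 3600
    # DLC branch: capacitance with equivalent series resistance.
    c_dlc, r_dlc = 500.0, 0.005

    soc, v_rc, v_cap = 0.9, 0.0, ocv            # initial states (DLC pre-charged)

    # Toy Start&Stop-like load: 300 A cranking pulse, then a light discharge.
    t = np.arange(0.0, 30.0, dt)
    i_load = np.where(t < 2.0, 300.0, 20.0)     # positive = discharge [A]

    for i in i_load:
        # Equal terminal voltages determine the current split between branches:
        # ocv - v_rc - r0_b*i_b = v_cap - r_dlc*(i - i_b)
        i_b = (ocv - v_rc - v_cap + r_dlc * i) / (r0_b + r_dlc)
        i_c = i - i_b
        # State updates (explicit Euler).
        v_rc += dt * (-v_rc / (r1_b * c1_b) + i_b / c1_b)
        v_cap -= dt * i_c / c_dlc
        soc -= dt * i_b / q_batt

    print(f"battery current at end: {i_b:.1f} A, DLC voltage: {v_cap:.2f} V, SOC: {soc:.3f}")
    ```

    With these placeholder values the DLC absorbs most of the cranking pulse at first and the battery share grows as the capacitor voltage sags, which is the qualitative behaviour such a hybridization is meant to exploit.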

  7. Monte Carlo calculations and experimental results of Bonner spheres systems with a new cylindrical Helium-3 proportional counter

    CERN Document Server

    Müller, H; Bouassoule, T; Fernández, F; Pochat, J L; Tomas, M; Van Ryckeghem, L

    2002-01-01

    The experimental results on neutron energy spectra, integral fluences and equivalent dose measurements performed by means of a Bonner sphere system placed inside the containment building of the Vandellos II Nuclear Power Plant (Tarragona, Spain) are presented. The equivalent dose results obtained with this system are compared to those measured with different neutron area detectors (Berthold, Dineutron, Harwell). A realistic geometry model of the Bonner sphere system with a new cylindrical counter type 'F' (0,5NH1/1KI--Eurisys Mesures) and with a set of eight polyethylene moderating spheres is described in detail. The fluence response function of this new device to mono-energetic neutrons from thermal energy to 20 MeV is calculated by the MCNP-4B code for each moderating sphere. The system has been calibrated at the IPSN Cadarache facility with an ISO Am-Be calibrated source and a thermal neutron field, then the response functions were confirmed by measurements at PTB (Germany) for ISO recommended energies of mono-e...
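    The basic relation behind a Bonner sphere system is a folding of the fluence response functions with the neutron spectrum to predict each sphere's reading, and, inversely, an unfolding of the measured readings. The sketch below only illustrates the forward folding and a crude least-squares unfolding on a synthetic response matrix; the numbers are not the MCNP-4B responses from the paper.

    ```python
    # Illustration of the Bonner-sphere forward problem: reading_j = sum_i R[j, i] * phi[i],
    # plus a naive non-negative least-squares "unfolding". Synthetic data only.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_spheres, n_groups = 8, 20

    # Synthetic response matrix: each sphere peaks at a different energy group.
    centers = np.linspace(2, n_groups - 3, n_spheres)
    R = np.exp(-0.5 * ((np.arange(n_groups) - centers[:, None]) / 3.0) ** 2)

    phi_true = np.exp(-0.5 * ((np.arange(n_groups) - 12) / 4.0) ** 2)      # toy spectrum
    readings = R @ phi_true * (1 + 0.02 * rng.standard_normal(n_spheres))  # 2% noise

    phi_est, _ = nnls(R, readings)   # crude unfolding (real codes add regularization / priors)
    print("true vs unfolded total fluence: %.2f vs %.2f" % (phi_true.sum(), phi_est.sum()))
    ```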

  8. Experimental validation of thermal design of top shield for a pool type SFR

    International Nuclear Information System (INIS)

    Aithal, Sriramachandra; Babu, V. Rajan; Balasubramaniyan, V.; Velusamy, K.; Chellapandi, P.

    2016-01-01

    Highlights: • Overall thermal design of top shield in a SFR is experimentally verified. • Air jet cooling is effective in ensuring the temperatures limits for top shield. • Convection patterns in narrow annulus are in line with published CFD results. • Wire mesh insulation ensures gradual thermal gradient at top portion of main vessel. • Under loss of cooling scenario, sufficient time is available for corrective action. - Abstract: An Integrated Top Shield Test Facility towards validation of thermal design of top shield for a pool type SFR has been conceived, constructed & commissioned. Detailed experiments were performed in this experimental facility having full-scale features. Steady state temperature distribution within the facility is measured for various heater plate temperatures in addition to simulating different operating states of the reactor. Following are the important observations (i) jet cooling system is effective in regulating the roof slab bottom plate temperature and thermal gradient across roof slab simulating normal operation of reactor, (ii) wire mesh insulation provided in roof slab-main vessel annulus is effective in obtaining gradual thermal gradient along main vessel top portion and inhibiting the setting up of cellular convection within annulus and (iii) cellular convection with four distinct convective cells sets in the annular gap between roof slab and small rotatable plug measuring ∼ϕ4 m in diameter & gap width varying from 16 mm to 30 mm. Repeatability of results is also ensured during all the above tests. The results presented in this paper are expected to provide reference data for validation of thermal hydraulic models in addition to serving as design validation of jet cooling system for pool type SFR.

  9. Experimental Validation of a Differential Variational Inequality-Based Approach for Handling Friction and Contact in Vehicle

    Science.gov (United States)

    2015-11-20

    Experimental validation of a differential variational inequality (DVI)-based approach for handling friction and contact in vehicle applications, with the terrain modeled using the discrete element method (DEM). The validation includes sinkage and single wheel tests.

  10. Experimental Validation of Stratified Flow Phenomena, Graphite Oxidation, and Mitigation Strategies of Air Ingress Accidents

    Energy Technology Data Exchange (ETDEWEB)

    Chang Ho Oh; Eung Soo Kim; Hee Cheon No; Nam Zin Cho

    2008-12-01

    The US Department of Energy is performing research and development (R&D) that focuses on key phenomena that are important during challenging scenarios that may occur in the Next Generation Nuclear Plant (NGNP) Program / GEN-IV Very High Temperature Reactor (VHTR). Phenomena identification and ranking studies (PIRT) to date have identified the air ingress event, following on the heels of a VHTR depressurization, as very important (Schultz et al., 2006). Consequently, the development of advanced air ingress-related models and verification and validation (V&V) are very high priority for the NGNP program. Following a loss of coolant and system depressurization, air will enter the core through the break. Air ingress leads to oxidation of the in-core graphite structure and fuel. The oxidation will accelerate heat-up of the bottom reflector and the reactor core and will cause the release of fission products eventually. The potential collapse of the bottom reflector because of burn-off and the release of CO lead to serious safety problems. For estimation of the proper safety margin we need experimental data and tools, including accurate multi-dimensional thermal-hydraulic and reactor physics models, a burn-off model, and a fracture model. We also need to develop effective strategies to mitigate the effects of oxidation. The results from this research will provide crucial inputs to the INL NGNP/VHTR Methods R&D project. This project is focused on (a) analytical and experimental study of air ingress caused by density-driven, stratified, countercurrent flow, (b) advanced graphite oxidation experiments, (c) experimental study of burn-off in the bottom reflector, (d) structural tests of the burnt-off bottom reflector, (e) implementation of advanced models developed during the previous tasks into the GAMMA code, (f) full air ingress and oxidation mitigation analyses, (g) development of core neutronic models, (h) coupling of the core neutronic and thermal hydraulic models, and (i

  11. Experimental and Monte Carlo investigation of visible diffuse-reflectance imaging sensitivity to diffusing particle size changes in an optical model of a bladder wall

    Science.gov (United States)

    Kalyagina, N.; Loschenov, V.; Wolf, D.; Daul, C.; Blondel, W.; Savelieva, T.

    2011-11-01

    We have investigated the influence of scatterer size changes on the laser light diffusion, induced by collimated monochromatic laser irradiation, in tissue-like optical phantoms using diffuse-reflectance imaging. For that purpose, three-layer optical phantoms were prepared, in which nano- and microsphere size varied in order to simulate the scattering properties of healthy and cancerous urinary bladder walls. The informative areas of the surface diffuse-reflected light distributions were about 15×18 pixels for the smallest scattering particles of 0.05 μm, about 21×25 pixels for the medium-size particles of 0.53 μm, and about 25×30 pixels for the largest particles of 5.09 μm. The computation of the laser spot areas provided useful information for the analysis of the light distribution with high measurement accuracy of up to 92%. The minimal stability of 78% accuracy was observed for superficial scattering signals on the phantoms with the largest particles. The experimental results showed a good agreement with the results obtained by the Monte Carlo simulations. The presented method shows a good potential to be useful for a tissue-state diagnosis of the urinary bladder.
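    The kind of Monte Carlo comparison used in this record rests on simulating many photon random walks in a scattering and absorbing medium and tallying the photons that re-emerge through the surface. The sketch below is a deliberately simplified isotropic-scattering, single-layer version, so it only illustrates the principle, not the layered phantom model in the paper; the optical coefficients are arbitrary.

    ```python
    # Minimal isotropic-scattering photon Monte Carlo estimating diffuse reflectance
    # from a semi-infinite homogeneous medium (no layers, no anisotropy): a toy
    # version of the kind of simulation used in diffuse-reflectance imaging studies.
    import numpy as np

    rng = np.random.default_rng(4)
    mu_a, mu_s = 0.1, 10.0          # absorption / scattering coefficients [1/mm], illustrative
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    n_photons = 20_000

    reflected_weight = 0.0
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])      # launched straight down into the medium
        weight = 1.0
        while weight > 1e-4:
            step = -np.log(rng.random()) / mu_t    # sampled free path length
            pos = pos + step * direction
            if pos[2] < 0.0:                       # escaped back through the surface
                reflected_weight += weight
                break
            weight *= albedo                       # implicit absorption
            # Isotropic scattering: draw a new direction uniformly on the sphere.
            cos_t = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

    print(f"diffuse reflectance ≈ {reflected_weight / n_photons:.3f}")
    ```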

  12. Black liquor devolatilization and swelling - a detailed droplet model and experimental validation

    International Nuclear Information System (INIS)

    Jaervinen, M.; Zevenhoven, R.; Vakkilainen, E.; Forssen, M.

    2003-01-01

    In this paper, we present results from a new detailed physical model for single black liquor droplet pyrolysis and swelling, and validate them against experimental data from a non-oxidizing environment using two different reactor configurations. In the detailed model, we solve for the heat transfer and gas phase mass transfer in the droplet and thereby, the intra-particle gas-char and gas-gas interactions during drying and devolatilization can be studied. In the experimental part, the mass change, the swelling behaviour, and the volume fraction of larger voids, i.e. cenospheres, in the droplets were determined in a non-oxidizing environment. The model gave a good correlation with experimental swelling and mass loss data. Calculations suggest that a considerable amount of the char can be consumed before the entire droplet has experienced the devolatilization and drying stages of combustion. Char formed at the droplet surface layer is generally consumed by gasification with H₂O flowing outwards from the droplet interior. The extent of char conversion during devolatilization and the rate of devolatilization are greatly affected by swelling and the formation of larger voids in the particle. The more the particle swells and the more homogeneous the particle structure is, the larger is the conversion of char at the end of devolatilization

  13. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ Nuclear Power Plants (NPPs). Moreover, there is an effort to apply severe accident management to NPPs already in operation. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, a large experimental program and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the melt accumulation in the lower head using different cooling conditions. Nowadays, a new European computer code ASTEC is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel retention of corium is its validation with LIVE-L1 experimental results. Details of the experiment are reported. Results of the ASTEC (module DIVA) application to the analysis of the test are presented. (author)

  14. Validation of an experimental polyurethane model for biomechanical studies on implant supported prosthesis - tension tests

    Directory of Open Access Journals (Sweden)

    Mariane Miyashiro

    2011-06-01

    Full Text Available OBJECTIVES: The complexity and heterogeneity of human bone, as well as ethical issues, frequently hinder the development of clinical trials. The purpose of this in vitro study was to determine the modulus of elasticity of a polyurethane isotropic experimental model via tension tests, comparing the results to those reported in the literature for mandibular bone, in order to validate the use of such a model in lieu of mandibular bone in biomechanical studies. MATERIAL AND METHODS: Forty-five polyurethane test specimens were divided into 3 groups of 15 specimens each, according to the ratio (A/B) of polyurethane reagents (PU-1: 1/0.5, PU-2: 1/1, PU-3: 1/1.5). RESULTS: Tension tests were performed in each experimental group and the modulus of elasticity values found were 192.98 MPa (SD=57.20) for PU-1, 347.90 MPa (SD=109.54) for PU-2 and 304.64 MPa (SD=25.48) for PU-3. CONCLUSION: The concentration of choice for building the experimental model was 1/1.

  15. Simulation and experimental validation of the performance of an absorption refrigerator

    International Nuclear Information System (INIS)

    Olbricht, Michael; Luke, Andrea

    2015-01-01

    The two biggest obstacles to a stronger market penetration of absorption refrigerators are their high cost and the size of the apparatus, which are due to the inaccurate methods used for plant design. In order to contribute to an improved design, a thermodynamic model is presented to describe the performance of an absorption refrigerator with the working fluid water/lithium. In this model, the processes in the individual apparatuses are represented and coupled to each other in the overall system context. Thereby the interactions between the apparatuses can be specifically investigated and thus the process-limiting component can be identified under the respective conditions. A validation of the simulation model and the boundary conditions used is carried out based on experimental data from the operation of a self-developed absorption refrigerator. In the simulation, the heat transfer surfaces can be specified in accordance with the real system. The heat transport is taken into account based on typical values for the heat transfer in the individual apparatuses. Simulation results show good agreement with the experimental data. The physical relationships and the influences of externally defined operating parameters are correctly reproduced. Due to the chosen low heat transfer coefficients, the cooling capacities calculated by the model are below the experimentally measured values. Finally, the possibilities and limitations of using the model are discussed and further improvements are suggested. [de

  16. Modeling and Experimental Validation for 3D mm-wave Radar Imaging

    Science.gov (United States)

    Ghazi, Galia

    As the problem of identifying suicide bombers wearing explosives concealed under clothing becomes increasingly important, it becomes essential to detect suspicious individuals at a distance. Systems which employ multiple sensors to determine the presence of explosives on people are being developed. Their functions include observing and following individuals with intelligent video, identifying explosives residues or heat signatures on the outer surface of their clothing, and characterizing explosives using penetrating X-rays, terahertz waves, neutron analysis, or nuclear quadrupole resonance. At present, mm-wave radar is the only modality that can both penetrate and sense beneath clothing at a distance of 2 to 50 meters without causing physical harm. Unfortunately, current mm-wave radar systems capable of performing high-resolution, real-time imaging require using arrays with a large number of transmitting and receiving modules; therefore, these systems present undesired large size, weight and power consumption, as well as extremely complex hardware architecture. The overarching goal of this thesis is the development and experimental validation of a next generation inexpensive, high-resolution radar system that can distinguish security threats hidden on individuals located at 2-10 meters range. In pursuit of this goal, this thesis proposes the following contributions: (1) Development and experimental validation of a new current-based, high-frequency computational method to model large scattering problems (hundreds of wavelengths) involving lossy, penetrable and multi-layered dielectric and conductive structures, which is needed for an accurate characterization of the wave-matter interaction and EM scattering in the target region; (2) Development of combined Norm-1, Norm-2 regularized imaging algorithms, which are needed for enhancing the resolution of the images while using a minimum number of transmitting and receiving antennas; (3) Implementation and experimental

  17. SU-F-J-146: Experimental Validation of 6 MV Photon PDD in Parallel Magnetic Field Calculated by EGSnrc

    Energy Technology Data Exchange (ETDEWEB)

    Ghila, A; Steciw, S; Fallone, B; Rathee, S [Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: Integrated linac-MR systems are uniquely suited for real time tumor tracking during radiation treatment. Understanding the magnetic field dose effects and incorporating them in treatment planning is paramount for linac-MR clinical implementation. We experimentally validated the EGSnrc dose calculations in the presence of a magnetic field parallel to the radiation beam travel. Methods: Two cylindrical bore electromagnets produced a 0.21 T magnetic field parallel to the central axis of a 6 MV photon beam. A parallel plate ion chamber was used to measure the PDD in a polystyrene phantom, placed inside the bore in two setups: phantom top surface coinciding with the magnet bore center (183 cm SSD), and with the magnet bore’s top surface (170 cm SSD). We measured the field of the magnet at several points and included the exact dimensions of the coils to generate a 3D magnetic field map in a finite element model. BEAMnrc and DOSXYZnrc simulated the PDD experiments in parallel magnetic field (i.e. 3D magnetic field included) and with no magnetic field. Results: With the phantom surface at the top of the electromagnet, the surface dose increased by 10% (compared to no-magnetic field), due to electrons being focused by the smaller fringe fields of the electromagnet. With the phantom surface at the bore center, the surface dose increased by 30% since extra 13 cm of air column was in relatively higher magnetic field (>0.13T) in the magnet bore. EGSnrc Monte Carlo code correctly calculated the radiation dose with and without the magnetic field, and all points passed the 2%, 2 mm Gamma criterion when the ion chamber’s entrance window and air cavity were included in the simulated phantom. Conclusion: A parallel magnetic field increases the surface and buildup dose during irradiation. The EGSnrc package can model these magnetic field dose effects accurately. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi

  18. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    Energy Technology Data Exchange (ETDEWEB)

    Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%–2mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%–2mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
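    The 3D gamma pass rates quoted above compare a calculated dose grid against a reference using a combined dose-difference and distance-to-agreement tolerance. The sketch below implements the standard gamma index in 1D on synthetic depth-dose curves with a 2%/2 mm criterion; it is a generic illustration of the metric, not the evaluation tool used in the abstract.

    ```python
    # Generic 1D gamma-index evaluation (2% dose difference / 2 mm distance-to-agreement)
    # on synthetic depth-dose curves; illustrates the pass-rate metric quoted above.
    import numpy as np

    z = np.arange(0.0, 200.0, 1.0)                         # depth grid [mm]
    ref = np.exp(-((z - 150.0) / 40.0) ** 2)               # toy "reference" dose
    evalu = np.exp(-((z - 151.0) / 40.0) ** 2) * 1.01      # toy "evaluated" dose (shifted, scaled)

    dose_tol = 0.02 * ref.max()                            # 2% of maximum dose (global criterion)
    dist_tol = 2.0                                         # 2 mm

    gamma = np.empty_like(ref)
    for i, (zi, di) in enumerate(zip(z, ref)):
        # Gamma is the minimum combined distance in (space, dose) over all evaluated points.
        gamma[i] = np.sqrt(((z - zi) / dist_tol) ** 2 +
                           ((evalu - di) / dose_tol) ** 2).min()

    pass_rate = 100.0 * np.mean(gamma <= 1.0)
    print(f"gamma (2%/2 mm) pass rate: {pass_rate:.1f} %")
    ```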

  19. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools has been launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  20. ENGINEERING DESIGN OPTIMIZATION OF HEEL TESTING EQUIPMENT IN THE EXPERIMENTAL VALIDATION OF SAFE WALKING

    Directory of Open Access Journals (Sweden)

    Cristiano Fragassa

    2017-06-01

    Full Text Available Experimental test methods for the evaluation of the resistance of heels of ladies' shoes in the case of impact loads are fully defined by International Organization for Standardization (ISO) procedures that indicate all the conditions of the experiment. A first Standard (ISO 19553) specifies the test method for determining the strength of the heels in the case of a single impact. The result offers an evaluation of the liability to fail under sporadic heavy blows. A second Standard (ISO 19556) details a method for testing the capability of heels of women's shoes to survive the repetition of small impacts provoked by normal walking. These Standards strictly define the features of two different testing devices (with specific materials, geometries, weights, etc.) and all the experimental procedures to be followed during tests. On the contrary, this paper describes the technical solutions adopted to design one single experimental device able to perform impact testing of heels in both conditions. Joining the accuracy of mechanical movements with the speed of an electronic control system, a new and flexible equipment for the complete characterization of heels with respect to (single or fatigue) impacts was developed. Moreover, a new level of performance in experimental validation of heel resistance was introduced by the versatility of the user-defined software control programs, able to encode every complex time-depending cycle of impact loads. Dynamic simulations made it possible to investigate the impacts on the heel in different conditions of testing, optimizing the machine design. The complexity of real stresses on shoes during an ordinary walk and in other common situations (such as going up and downstairs) was considered for a proper dimensioning.

  1. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent
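    The statistical-mechanics step referred to above (ideal-gas entropy from scaled harmonic vibrational frequencies) follows the standard harmonic-oscillator partition function. The sketch below evaluates only the vibrational contribution for a small, made-up frequency list with the 0.975 scaling factor; it omits the translational and rotational terms needed for a full ideal-gas entropy, and the frequencies are not the B3LYP/6-31+G(d,p) set for 1-naphthol.

    ```python
    # Vibrational contribution to the ideal-gas entropy from scaled harmonic
    # frequencies (standard harmonic-oscillator formula). The frequency list is a
    # small made-up example, not the computed set for 1-naphthol.
    import numpy as np

    h = 6.62607015e-34        # Planck constant [J s]
    kB = 1.380649e-23         # Boltzmann constant [J/K]
    c = 2.99792458e10         # speed of light [cm/s] (frequencies given in cm^-1)
    R = 8.314462618           # gas constant [J/(mol K)]

    T = 298.15
    scale = 0.975                                   # frequency scaling factor from the paper
    freqs_cm = np.array([180.0, 450.0, 780.0, 1050.0, 1600.0, 3060.0])  # illustrative

    x = h * c * (scale * freqs_cm) / (kB * T)
    # S_vib/R = sum over modes of [ x/(e^x - 1) - ln(1 - e^-x) ]
    s_vib = R * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))
    print(f"S_vib({T} K) = {s_vib:.2f} J/(mol K)")
    ```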

  2. Experimental validation of Villain's conjecture about magnetic ordering in quasi-1D helimagnets

    Energy Technology Data Exchange (ETDEWEB)

    Cinti, F., E-mail: fabio.cinti@fi.infn.i [CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Rettori, A. [CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Pini, M.G. [ISC-CNR, Via Madonna del Piano 10, I-50019 Sesto Fiorentino (Italy); Mariani, M.; Micotti, E. [Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Lascialfari, A. [Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Institute of General Physiology and Biological Chemistry, University of Milano, Via Trentacoste 2, I-20134 Milano (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Papinutto, N. [CIMeC, University of Trento, Via delle Regole, 101 38060 Mattarello (Italy); Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Amato, A. [Paul Scherrer Institute, CH-5232 Villingen PSI (Switzerland); Caneschi, A.; Gatteschi, D. [INSTM R.U. Firenze and Department of Chemistry, University of Florence, Via della Lastruccia 3, I-50019 Sesto Fiorentino (Italy); Affronte, M. [CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Department of Physics, University of Modena and Reggio Emilia Via Campi 213/A, I-41100 Modena (Italy)

    2010-05-15

    Low-temperature magnetic susceptibility, zero-field muon spin resonance and specific heat measurements have been performed in the quasi-one-dimensional (1D) molecular helimagnetic compound Gd(hfac){sub 3}NITEt. The specific heat presents two anomalies at T{sub 0}=2.19(2)K and T{sub N}=1.88(2)K, while susceptibility and zero-field muon spin resonance show anomalies only at T{sub N}=1.88(2)K. The results suggest an experimental validation of Villain's conjecture of a two-step magnetic ordering in quasi-1D XY helimagnets: the paramagnetic phase and the helical spin solid phases are separated by a chiral spin liquid, where translational invariance is broken without violation of rotational invariance.

  3. Experimental validation of Villain's conjecture about magnetic ordering in quasi-1D helimagnets

    International Nuclear Information System (INIS)

    Cinti, F.; Rettori, A.; Pini, M.G.; Mariani, M.; Micotti, E.; Lascialfari, A.; Papinutto, N.; Amato, A.; Caneschi, A.; Gatteschi, D.; Affronte, M.

    2010-01-01

    Low-temperature magnetic susceptibility, zero-field muon spin resonance and specific heat measurements have been performed in the quasi-one-dimensional (1D) molecular helimagnetic compound Gd(hfac) 3 NITEt. The specific heat presents two anomalies at T 0 =2.19(2)K and T N =1.88(2)K, while susceptibility and zero-field muon spin resonance show anomalies only at T N =1.88(2)K. The results suggest an experimental validation of Villain's conjecture of a two-step magnetic ordering in quasi-1D XY helimagnets: the paramagnetic phase and the helical spin solid phases are separated by a chiral spin liquid, where translational invariance is broken without violation of rotational invariance.

  4. Development, Implementation and Experimental Validations of Activation Products Models for Water Pool Reactors

    International Nuclear Information System (INIS)

    Petriw, S.N.

    2001-01-01

    Some parameters were obtained from both calculations and experiments in order to determine the sources of the main activation products in water pool reactors. In this case, the study was carried out in the RA-6 reactor (Centro Atomico Bariloche - Argentina). In normal operation, the neutron flux in the core activates the aluminium plates. The activity in the coolant water comes from the activation of its impurities and mainly from some quantity of aluminium that, once activated, leaves the cladding and is transported by the water cooling system. This quantity depends on the 'recoil range' of each activation reaction. The 'staying time' in the pool (the time that nuclides spend circulating in the reactor pool) is another characteristic parameter of the system. The steady-state activity of some nuclides depends on this time. Several theoretical models of activation in the coolant water system are also presented, together with their experimental validations

  5. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

    This work analyses the overall stress/strain characteristics of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology, composed of a mechanical model of the multi-layer thin-film stack under bending loads and related stress simulations based on nonlinear finite element analysis (FEA), is proposed and shown to be reliable by comparison with related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plane, which is regarded as a key design parameter for minimizing the stress impact on the OLED devices of concern, is obtained using the present methodology. The results point out that both the thickness and the mechanical properties of the cover plate help to determine the NA location. In addition, several concave and convex bending radii are applied to examine the reliable mechanical tolerance and to provide insight into the estimated reliability of foldable OLED encapsulations. (paper)
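    As a concrete illustration of the neutral-axis idea discussed in this record, the sketch below locates the NA of a layered stack with the classical transformed-section rule, NA = sum(E_i*t_i*z_i)/sum(E_i*t_i), where z_i is the mid-height of layer i. The layer moduli and thicknesses are hypothetical and do not correspond to the paper's stack.

    ```python
    # Minimal sketch with hypothetical layer data: neutral-axis (NA) height of a
    # multilayer stack under bending, using the transformed-section rule.
    def neutral_axis(layers):
        """layers: list of (elastic_modulus_GPa, thickness_um), bottom to top."""
        z = 0.0
        num = den = 0.0
        for E, t in layers:
            z_mid = z + t / 2.0      # mid-height of this layer
            num += E * t * z_mid
            den += E * t
            z += t
        return num / den             # NA height above the bottom surface, um

    # Hypothetical flexible stack: plastic substrate, thin-film device layer, cover plate
    stack = [(4.0, 100.0), (80.0, 0.5), (70.0, 50.0)]
    print(f"Neutral axis at {neutral_axis(stack):.1f} um from the bottom surface")
    ```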

  6. Design, Manufacturing and Experimental Validation of Optical Fiber Sensors Based Devices for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Angela CORICCIATI

    2016-06-01

    The use of optical fiber sensors is a promising and growing technique for Structural Health Monitoring (SHM), because it permits continuous monitoring of the strain and the temperature of the structure where the sensors are applied. In the present paper three different types of smart devices, which are composite materials with an optical fiber sensor embedded in them during the manufacturing process, are described: the Smart Patch, the Smart Rebar and the Smart Textile, which are respectively a plate for local exterior intervention, a rod for interior shear and flexural reinforcement, and a textile for whole-surface external application. In addition to the monitoring aim, a possible additional function of these devices is the reinforcement of the structures to which they are applied. In the present work, after a description of the manufacturing technology, the experimental laboratory characterization of each device is discussed. Finally, the application of the smart devices to medium-scale masonry walls and their validation by mechanical tests are described.

  7. Experimental Validation of Surrogate Models for Predicting the Draping of Physical Interpolating Surfaces

    DEFF Research Database (Denmark)

    Christensen, Esben Toke; Lund, Erik; Lindgaard, Esben

    2018-01-01

    This paper concerns the experimental validation of two surrogate models through a benchmark study involving two different variable shape mould prototype systems. The surrogate models in question are different methods based on kriging and proper orthogonal decomposition (POD), which were developed...... to the performance of the studied surrogate models. By comparing surrogate model performance for the two variable shape mould systems, and through a numerical study involving simple finite element models, the underlying cause of this effect is explained. It is concluded that for a variable shape mould prototype...... hypercube approach. This sampling method allows for generating a space filling and high-quality sample plan that respects mechanical constraints of the variable shape mould systems. Through the benchmark study, it is found that mechanical freeplay in the modeled system is severely detrimental...

  8. Experimental Equipment Validation for Methane (CH4) and Carbon Dioxide (CO2) Hydrates

    Science.gov (United States)

    Saad Khan, Muhammad; Yaqub, Sana; Manner, Naathiya; Ani Karthwathi, Nur; Qasim, Ali; Mellon, Nurhayati Binti; Lal, Bhajan

    2018-04-01

    Clathrate hydrates are well-known structures regarded as a threat to the oil and gas industry in light of their troublesome propensity to plug subsea pipelines. For natural gas transmission and processing, the formation of gas hydrates is one of the main flow assurance problems and has led researchers to conduct fresh and meticulous studies on various aspects of gas hydrates. This paper highlights a thermodynamic analysis of pure CH4 and CO2 gas hydrates performed on custom-fabricated equipment (a sapphire-cell hydrate reactor) for experimental validation. The CO2 gas hydrate formed at a lower pressure (41 bar) than the CH4 gas hydrate (70 bar), and a comparison of the thermodynamic properties of CH4 and CO2 hydrates is also presented in this study. This preliminary study could provide pathways for the quest for potent hydrate inhibitors.

  9. Experimental validation of GADRAS's coupled neutron-photon inverse radiation transport solver

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Harding, Lee T.

    2010-01-01

    Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.

  10. An experimentally validated simulation model for a four-stage spray dryer

    DEFF Research Database (Denmark)

    Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2017-01-01

    mathematical model is an index-1 differential algebraic equation (DAE) model with 12 states, 9 inputs, 8 disturbances, and 30 parameters. The parameters in the model are identified from well-excited experimental data obtained from the industrialtype spray dryer. The simulated outputs ofthe model are validated...... is divided into four consecutive stages: a primary spray drying stage, two heated fluid bed stages, and a cooling fluid bed stage. Each of these stages in the model is assumed ideally mixed and the dynamics are described by mass- and energy balances. These balance equations are coupled with constitutive...... equations such as a thermodynamic model, the water evaporation rate, the heat transfer rates, and an equation for the stickiness of the powder (glass transition temperature). Laboratory data is used to model the equilibrium moisture content and the glass transition temperature of the powder. The resulting...

  11. Servo-hydraulic actuator in controllable canonical form: Identification and experimental validation

    Science.gov (United States)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-02-01

    Hydraulic actuators have been widely used to experimentally examine structural behavior at multiple scales. Real-time hybrid simulation (RTHS) is one innovative testing method that largely relies on such servo-hydraulic actuators. In RTHS, interface conditions must be enforced in real time, and controllers are often used to achieve tracking of the desired displacements. Thus, neglecting the dynamics of hydraulic transfer system may result either in system instability or sub-optimal performance. Herein, we propose a nonlinear dynamical model for a servo-hydraulic actuator (a.k.a. hydraulic transfer system) coupled with a nonlinear physical specimen. The nonlinear dynamical model is transformed into controllable canonical form for further tracking control design purposes. Through a number of experiments, the controllable canonical model is validated.
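    For readers unfamiliar with the controllable canonical form mentioned in this record, the sketch below builds the companion-form state-space matrices for a generic strictly proper third-order transfer function G(s) = (b1*s + b0)/(s^3 + a2*s^2 + a1*s + a0). The coefficients are hypothetical and are not the identified actuator parameters.

    ```python
    # Sketch (not the authors' identified model): controllable canonical
    # state-space realization of a generic third-order transfer function.
    import numpy as np

    def controllable_canonical(a, b):
        """a = [a0, a1, a2] denominator coefficients (ascending powers, monic),
        b = [b0, b1] numerator coefficients (ascending powers)."""
        n = len(a)
        A = np.zeros((n, n))
        A[:-1, 1:] = np.eye(n - 1)        # shift structure of the companion matrix
        A[-1, :] = -np.asarray(a)         # last row carries the characteristic polynomial
        B = np.zeros((n, 1)); B[-1, 0] = 1.0
        C = np.zeros((1, n)); C[0, :len(b)] = b
        return A, B, C

    # Hypothetical coefficients, for illustration only
    A, B, C = controllable_canonical(a=[50.0, 30.0, 8.0], b=[120.0, 4.0])
    print(A)
    ```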

  12. CPV cells cooling system based on submerged jet impingement: CFD modeling and experimental validation

    Science.gov (United States)

    Montorfano, Davide; Gaetano, Antonio; Barbato, Maurizio C.; Ambrosetti, Gianluca; Pedretti, Andrea

    2014-09-01

    Concentrating photovoltaic (CPV) cells offer higher efficiencies than standard PV cells and allow the overall solar cell area to be strongly reduced. However, to operate correctly and exploit their advantages, their temperature has to be kept low and as uniform as possible, and the cooling circuit pressure drops need to be limited. In this work an impingement water jet cooling system specifically designed for an industrial HCPV receiver is studied. Drawing on the literature and by means of accurate computational fluid dynamics (CFD) simulations, the nozzle-to-plate distance, the number of jets and the nozzle pitch, i.e. the distance between adjacent jets, were optimized. Afterwards, extensive experimental tests were performed to validate the simulated pressure drops and cooling power.

  13. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  14. Experimental validation of a heat transfer model for concentrating photovoltaic system

    International Nuclear Information System (INIS)

    Sendhil Kumar, Natarajan; Matty, Katz; Rita, Ebner; Simon, Weingaertner; Ortrun, Aßländer; Alex, Cole; Roland, Wertz; Tim, Giesen; Tapas Kumar, Mallick

    2012-01-01

    In this paper, a three-dimensional heat transfer model is presented for a novel concentrating photovoltaic design for the Active Solar Panel Initiative System (ASPIS). The concentration ratios of the two systems (early and integrated prototypes) are 5× and 10×, respectively, designed for roof-top integrated photovoltaic systems. The ANSYS 12.1 CFX package was used to predict the temperatures of the components of both ASPIS systems under various boundary conditions. The predicted component temperatures of the early prototype were compared with experimental results for ASPIS obtained at Solecta (Israel) and at the Austrian Institute of Technology (AIT), Austria. It was observed that the solar cell and lens temperature predictions show good agreement with the Solecta measurements. Minimum and maximum deviations of 3.8% and 17.9% were observed between the numerical results and the Solecta measurements, and a maximum deviation of 16.9% was observed between the modelling and the AIT measurements. Thus, the validated thermal model enables prediction of the component temperatures of concentrating photovoltaic systems. - Highlights: ► An experimentally validated heat transfer model for a concentrating photovoltaic system was developed. ► Solar cell temperatures are predicted for a parallactic-tracking CPV system for roof integration. ► The ASPIS module contains 216 solar cells, 2 mm wide, manufactured based on SATURN technology. ► A solar cell temperature of 44 °C was predicted for a solar radiation intensity of 1000 W/m² and an ambient temperature of 20 °C. ► The average deviation was 6%, enabling prediction of the temperature of any CPV system.

  15. Site characterization and validation - Head variations during the entire experimental period

    International Nuclear Information System (INIS)

    Haigh, D.; Brightman, M.; Black, J.; Parry, S.

    1992-01-01

    The site characterization and validation project lasted for five years from 1986 to 1991. It consisted of a number of experiments within the region known as the SCV site. During this period of experimentation a monitoring system was established within the mine for the purpose of measuring the variation of head at a number of locations within and around the site. The system installed was based around a set of equipment known as a Piezomac TM system. In this system there is one central pressure transducer and each borehole interval is connected to it in turn. It can measure up to 55 separate points during each measurement 'cycle'. Monitoring points were either complete boreholes or sections of boreholes isolated by packers. In order to produce reasonable file size, data sets were screened. The results show that the SCV site was always responding to some form of hydrogeological disturbance. Many key tests were performed against changing background trends. This was particularly so of the simulated drift experiment and the large scale crosshole tests. However, some estimates of long term equilibrium heads before and after excavation of the validation drift have been made. Contoured plots of heads before and after show significant reduction of steady state heads as a result of drift excavation. Furthermore contouring the estimated long term drawdowns responding to the simulated drift experiment shows the specific influence of the H zone and the A/B zone. Overall the results of the monitoring show that the mine was a very active hydrogeological environment during the experimentation. Additionally it was often very difficult to clearly identify the causes of such disturbances. (au)

  16. Dynamic model with experimental validation of a biogas-fed SOFC plant

    International Nuclear Information System (INIS)

    D'Andrea, G.; Gandiglio, M.; Lanzini, A.; Santarelli, M.

    2017-01-01

    Highlights: • 60% of DIR into the SOFC anode reduces the air blower parasitic losses by 14%. • PID-controlled cathode airflow enables fast thermal regulation of the SOFC. • Stack overheating occurs due to unexpected reductions in the cathode airflow. • Current ramp rates higher than +0.30 A/min lead to excessive stack overheating. - Abstract: The dynamic model of a poly-generation system based on a biogas-fed solid oxide fuel cell (SOFC) plant is presented in this paper. The poly-generation plant was developed in the framework of the FP7 EU-funded project SOFCOM (www.sofcom.eu), which consists of a fuel-cell based polygeneration plant with CO2 capture and re-use. CO2 is recovered from the anode exhaust of the SOFC (after oxy-combustion, cooling and water condensation) and the carbon is fixed in the form of micro-algae in a tubular photobioreactor. This work focuses on the dynamic operation of the SOFC module running on steam-reformed biogas. Both steady-state and dynamic operation of the fuel cell stack and the related Balance-of-Plant (BoP) have been modeled in order to simulate the thermal behavior and performance of the system. The model was validated against experimental data gathered during the operation of the SOFCOM proof-of-concept, showing good agreement with the experimental data. The validated model has been used to investigate further the harsh off-design operation of the proof-of-concept. Simulation results provide guidelines for an improved design of the control system of the plant, highlighting the feasible operating region under safe conditions and means to maximize the overall system efficiency.

  17. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til [Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany) and Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany); Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany); Philips Healthcare, X-Ray Pre-Development, Veenpluis 4-6, 5684PC Best (Netherlands); Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany)

    2010-09-15

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.
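    The quantitative comparison in this record relies on a relative root-mean-square error between simulated and measured time-intensity curves. The exact normalization used by the authors is not stated in the abstract, so the sketch below shows one common definition with hypothetical sample curves, purely to make the metric concrete.

    ```python
    # Illustrative sketch: relative RMSE between a simulated and a measured
    # time-intensity curve (arrays below are hypothetical).
    import numpy as np

    def relative_rmse(simulated, measured):
        """Relative RMSE between two sampled curves, expressed as a fraction
        of the RMS of the measured curve (one common normalization choice)."""
        simulated, measured = np.asarray(simulated), np.asarray(measured)
        rmse = np.sqrt(np.mean((simulated - measured) ** 2))
        return rmse / np.sqrt(np.mean(measured ** 2))

    sim = [0.0, 0.4, 0.9, 1.0, 0.7, 0.3]
    meas = [0.0, 0.45, 0.95, 1.0, 0.65, 0.25]
    print(f"relative RMSE: {100 * relative_rmse(sim, meas):.1f}%")
    ```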

  18. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    International Nuclear Information System (INIS)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til

    2010-01-01

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.

  19. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    Science.gov (United States)

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
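    The record's numerical model extends the classical Higuchi law for matrix release. As background, the sketch below evaluates that classical law, Q(t) = sqrt(D*(2A - Cs)*Cs*t), valid when the drug loading A greatly exceeds its solubility Cs in the matrix; the parameter values are hypothetical and not the paper's fitted values.

    ```python
    # Sketch of the classical Higuchi release law that the authors' model extends;
    # D, A and Cs below are hypothetical illustration values.
    import numpy as np

    def higuchi_release(t_hours, D_cm2_per_h, A_mg_per_cm3, Cs_mg_per_cm3):
        """Cumulative drug amount released per unit area (mg/cm^2)."""
        return np.sqrt(D_cm2_per_h * (2.0 * A_mg_per_cm3 - Cs_mg_per_cm3)
                       * Cs_mg_per_cm3 * t_hours)

    t = np.linspace(0.0, 24.0, 5)   # hours
    print(higuchi_release(t, D_cm2_per_h=1e-4, A_mg_per_cm3=50.0, Cs_mg_per_cm3=2.0))
    ```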

  20. Experimental validation of a model for diffusion-controlled absorption of organic compounds in the trachea

    Energy Technology Data Exchange (ETDEWEB)

    Gerde, P. [National Inst. for Working Life, Solna (Sweden); Muggenburg, B.A.; Thornton-Manning, J.R. [and others

    1995-12-01

    Most chemically induced lung cancer originates in the epithelial cells in the airways. Common conceptions are that chemicals deposited on the airway surface are rapidly absorbed through mucous membranes, limited primarily by the rate of blood perfusion in the mucosa. It is also commonly thought that for chemicals to induce toxicity at the site of entry, they must be either rapidly reactive, readily metabolizable, or especially toxic to the tissues at the site of entry. For highly lipophilic toxicants, there is a third option. Our mathematical model predicts that as lipophilicity increases, chemicals partition more readily into the cellular lipid membranes and diffuse more slowly through the tissues. Therefore, absorption of very lipophilic compounds will be almost entirely limited by the rate of diffusion through the epithelium rather than by perfusion of the capillary bed in the subepithelium. We have reported on a preliminary model for absorption through mucous membranes of any substance with a lipid/aqueous partition coefficient larger than one. The purpose of this work was to experimentally validate the model in Beagle dogs. This validated model on toxicant absorption in the airway mucosa will improve risk assessment of inhaled

  1. Model development and experimental validation of capnophilic lactic fermentation and hydrogen synthesis by Thermotoga neapolitana.

    Science.gov (United States)

    Pradhan, Nirakar; Dipasquale, Laura; d'Ippolito, Giuliana; Fontana, Angelo; Panico, Antonio; Pirozzi, Francesco; Lens, Piet N L; Esposito, Giovanni

    2016-08-01

    The aim of the present study was to develop a kinetic model for a recently proposed unique and novel metabolic process called capnophilic (CO2-requiring) lactic fermentation (CLF) pathway in Thermotoga neapolitana. The model was based on Monod kinetics and the mathematical expressions were developed to enable the simulation of biomass growth, substrate consumption and product formation. The calibrated kinetic parameters such as maximum specific uptake rate (k), semi-saturation constant (kS), biomass yield coefficient (Y) and endogenous decay rate (kd) were 1.30 h(-1), 1.42 g/L, 0.1195 and 0.0205 h(-1), respectively. A high correlation (>0.98) was obtained between the experimental data and model predictions for both model validation and cross validation processes. An increase of the lactate production in the range of 40-80% was obtained through CLF pathway compared to the classic dark fermentation model. The proposed kinetic model is the first mechanistically based model for the CLF pathway. This model provides useful information to improve the knowledge about how acetate and CO2 are recycled back by Thermotoga neapolitana to produce lactate without compromising the overall hydrogen yield. Copyright © 2016 Elsevier Ltd. All rights reserved.
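    To make the Monod structure of the kinetic model concrete, the sketch below integrates a minimal growth/uptake system using the calibrated parameters quoted in the abstract (k, kS, Y, kd). The initial conditions are hypothetical and the product-formation terms of the full CLF model are omitted, so this is a simplified illustration rather than the authors' model.

    ```python
    # Minimal Monod-kinetics sketch with the reported parameters; initial
    # biomass/substrate values are hypothetical and product terms are omitted.
    import numpy as np
    from scipy.integrate import solve_ivp

    k, kS, Y, kd = 1.30, 1.42, 0.1195, 0.0205   # h^-1, g/L, -, h^-1

    def rhs(t, y):
        X, S = y                                # biomass and substrate, g/L
        q = k * S / (kS + S)                    # specific substrate uptake rate
        return [(Y * q - kd) * X,               # growth with endogenous decay
                -q * X]                         # substrate consumption

    sol = solve_ivp(rhs, (0.0, 48.0), [0.05, 10.0], t_eval=np.linspace(0, 48, 7))
    print(sol.y[0])                             # biomass trajectory, g/L
    ```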

  2. Modelling of PEM Fuel Cell Performance: Steady-State and Dynamic Experimental Validation

    Directory of Open Access Journals (Sweden)

    Idoia San Martín

    2014-02-01

    This paper reports on the modelling of a commercial 1.2 kW proton exchange membrane fuel cell (PEMFC), based on interrelated electrical and thermal models. The proposed electrical model is based on the integration of the thermodynamic and electrochemical phenomena taking place in the FC, whilst the thermal model is established from the FC thermal energy balance. The combination of both models makes it possible to predict the FC voltage from the current demanded and the ambient temperature. Furthermore, an experimental characterization is conducted and the parameters of the models associated with the FC electrical and thermal performance are obtained. The models are implemented in Matlab Simulink and validated in a number of operating environments, for steady-state and dynamic modes alike. In turn, the FC models are validated in an actual microgrid operating environment, through the series connection of 4 PEMFCs. The simulations of the models precisely and accurately reproduce the FC electrical and thermal performance.
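    The record does not reproduce the electrical model itself, so the sketch below shows only a generic textbook polarization curve of the kind such models build on: cell voltage as the open-circuit value minus activation, ohmic and concentration losses. The open-circuit voltage, Tafel slope, exchange current density, area resistance and concentration-loss constants are all hypothetical, and this is not the authors' identified model.

    ```python
    # Generic polarization-curve sketch with hypothetical parameters.
    import numpy as np

    def cell_voltage(i, E_oc=1.0, b=0.06, i0=1e-2, r=2.5e-4, m=3e-5, n=8e-3):
        """i in mA/cm^2; b is the Tafel slope in V/decade,
        r the area-specific resistance in kOhm*cm^2."""
        activation = b * np.log10(np.maximum(i, i0) / i0)
        ohmic = r * i
        concentration = m * np.exp(n * i)
        return E_oc - activation - ohmic - concentration

    for current in (10.0, 200.0, 600.0):        # mA/cm^2
        print(current, round(float(cell_voltage(current)), 3), "V")
    ```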

  3. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    International Nuclear Information System (INIS)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-01-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 give, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including the efficiency of the heat interceptions

  4. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    Science.gov (United States)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-06-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 give, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including the efficiency of the heat interceptions.

  5. Energy performance of a ventilated façade by simulation with experimental validation

    International Nuclear Information System (INIS)

    Aparicio-Fernández, Carolina; Vivancos, José-Luis; Ferrer-Gisbert, Pablo; Royo-Pastor, Rafael

    2014-01-01

    A model for a building with a ventilated façade was created using the software tool TRNSYS, version 17, and airflow parameters were simulated using TRNFlow. The results obtained with the model are compared with and validated against experimental data. The temperature distribution along the air cavity was analysed and a chimney effect was observed, which produced the highest temperature gradient on the first floor. The heat flux of the external wall was analysed, and higher temperatures were observed on the external layer and inside the cavity. The model allows the energy demand of the building façade to be calculated, and passive strategies to be proposed and evaluated. The corresponding office building, used for computer laboratories and located in Valencia (Spain), was monitored for a year. The thermal behaviour of the floating external sheet was analysed using an electronic panel designed for the reading and storage of data. A feasibility study of recovering the hot air inside the façade into the building was performed. The results showed a lower heating demand when the hot air is introduced into the building, increasing the efficiency of the heat recovery equipment. - Highlights: •An existing office building was monitored for a year. •A model of a ventilated façade built with the TRNSYS simulation tool was validated. •Air flow parameters inside the ventilated façade were identified. •Recovery of the hot air inside the façade for input into the building was studied

  6. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to the criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, the recent growing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
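    The sampling-based approach summarized above can be illustrated with a toy sketch: input nuclear-data parameters are drawn from their uncertainty distributions and k_eff is recomputed for each sample. The k_eff "solver" and the nominal values and uncertainties below are stand-ins, not a real transport code or evaluated covariance data.

    ```python
    # Toy sketch of sampling-based k_eff uncertainty propagation; the keff_model
    # function is a hypothetical stand-in for a Monte Carlo criticality solver.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    def keff_model(sigma_f, sigma_c):
        """Hypothetical relation between cross sections and k_eff."""
        return 2.43 * sigma_f / (sigma_f + sigma_c)

    # Nominal cross sections and 1-sigma uncertainties (hypothetical values)
    samples = [keff_model(rng.normal(1.20, 0.02), rng.normal(0.90, 0.03))
               for _ in range(1000)]
    print(f"k_eff = {np.mean(samples):.4f} +/- {np.std(samples, ddof=1):.4f}")
    ```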

  7. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo calculation, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a distribution obeying the expected statistical law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have solutions available, either exact or approximate, against which comparisons can be made with the Monte Carlo calculations. The comparisons show that, for the models considered, good agreement has been obtained
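    The two ideas in this record, plain Monte Carlo integration and Metropolis sampling of a weight function, are illustrated by the short sketch below. The target integral and weight are simple choices made here for demonstration and are not taken from the paper.

    ```python
    # Sketch: (1) plain Monte Carlo integration, (2) Metropolis sampling.
    import numpy as np

    rng = np.random.default_rng(0)

    # (1) Plain Monte Carlo estimate of the integral of x^2 on [0, 1] (exact: 1/3)
    x = rng.random(100_000)
    print("MC integral:", np.mean(x ** 2))

    # (2) Metropolis sampling from an unnormalized weight w(x) ~ exp(-x^2/2)
    def metropolis(w, n_steps, step=0.5):
        samples, x = [], 0.0
        for _ in range(n_steps):
            trial = x + rng.uniform(-step, step)
            if rng.random() < min(1.0, w(trial) / w(x)):   # accept/reject rule
                x = trial
            samples.append(x)
        return np.array(samples)

    chain = metropolis(lambda x: np.exp(-0.5 * x * x), 50_000)
    print("sample variance (should be near 1):", chain.var())
    ```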

  8. Modelling of the RA-1 reactor using a Monte Carlo code

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.; Calabrese, Carlos R.

    2000-01-01

    A model of the Argentine RA-1 reactor was built for the first time using the MCNP Monte Carlo code. This model was validated using experimental neutron and gamma measurement data at different energy ranges and locations. In addition, the resulting fluxes were compared with the data obtained using a 3D diffusion code. (author)

  9. Experimental Validation of the Electrokinetic Theory and Development of Seismoelectric Interferometry by Cross-Correlation

    Directory of Open Access Journals (Sweden)

    F. C. Schoemaker

    2012-01-01

    We experimentally validate a relatively recent electrokinetic formulation of the streaming potential (SP) coefficient as developed by Pride (1994). The start of our investigation focuses on the streaming potential coefficient, which gives rise to the coupling of mechanical and electromagnetic fields. It is found that the theoretical amplitude values of this dynamic SP coefficient are in good agreement with the normalized experimental results over a wide frequency range, assuming no frequency dependence of the bulk conductivity. By adopting the full set of electrokinetic equations, a full-waveform wave propagation model is formulated. We compare the model predictions, neglecting the interface response and modeling only the coseismic fields, with laboratory measurements of a seismic wave of frequency 500 kHz that generates electromagnetic signals. Agreement is observed between measurement and electrokinetic theory regarding the coseismic electric field. The governing equations are subsequently adopted to study the applicability of seismoelectric interferometry. It is shown that seismic sources at a single boundary location are sufficient to retrieve the 1D seismoelectric responses, both for the coseismic and interface components, in a layered model.

  10. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro

    2001-01-01

    The validity of decay heat calculations for the safety design of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion-reactor-relevant materials obtained at the 14-MeV neutron source facility FNS at JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear databases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculations. Although several corrections to the algorithms of both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a decay heat prediction accuracy within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the 92Mo(n, 2n)91gMo reaction in FENDL, and missing activation cross section data, e.g., the 138Ba(n, 2n)137mBa reaction in JENDL. Modifications of the cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in the decay data. (author)

  11. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    The validity of decay heat calculations for the safety design of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion-reactor-relevant materials obtained at the 14-MeV neutron source facility FNS at JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear databases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculations. Although several corrections to the algorithms of both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a decay heat prediction accuracy within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the 92Mo(n, 2n)91gMo reaction in FENDL, and missing activation cross section data, e.g., the 138Ba(n, 2n)137mBa reaction in JENDL. Modifications of the cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in the decay data. (author)

  12. Multiphysics modelling and experimental validation of an air-coupled array of PMUTs with residual stresses

    Science.gov (United States)

    Massimino, G.; Colombo, A.; D'Alessandro, L.; Procopio, F.; Ardito, R.; Ferrera, M.; Corigliano, A.

    2018-05-01

    In this paper a complete multiphysics modelling via the finite element method (FEM) of an air-coupled array of piezoelectric micromachined ultrasonic transducers (PMUT) and its experimental validation are presented. Two numerical models are described for the single transducer, axisymmetric and 3D, with the following features: the presence of fabrication-induced residual stresses, which determine a non-linear initial deformed configuration of the diaphragm and a substantial fundamental-mode frequency shift; and the multiple coupling between different physics, namely electro-mechanical coupling for the piezoelectric model, and thermo-acoustic-structural and thermo-acoustic-pressure interactions for the wave propagation in the surrounding fluid. The model for the single transducer is enhanced by considering the full set of PMUTs belonging to the silicon die in a 4 × 4 array configuration. The results of the numerical multiphysics models are compared with experimental ones in terms of the initial static pre-deflection, the diaphragm central-point spectrum, and the sound intensity at 3.5 cm in the vertical direction along the axis of the diaphragm.

  13. NUMERICAL MODELLING AND EXPERIMENTAL INFLATION VALIDATION OF A BIAS TWO-WHEEL TIRE

    Directory of Open Access Journals (Sweden)

    CHUNG KET THEIN

    2016-02-01

    This paper presents a parametric study on the development of a computational model for a bias two-wheel tire through finite element analysis (FEA). An 80/90-17 bias two-wheel tire was adopted, which is made up of four major layers of rubber compound with different material properties to strengthen the structure. The Mooney-Rivlin hyperelastic model was applied to represent the behaviour of the incompressible rubber compound. A 3D tire model was built for structural static finite element analysis, and the results were validated through an inflation analysis. The structural static finite element analysis method is suitable for evaluating the tire design and improving the tire behaviour towards the desired performance. The experimental tire was inflated at various pressures and the geometries of the numerical and experimental tires were compared. There is good agreement between the numerical simulation model and the experimental results. This indicates that the simulation model can be applied to bias two-wheel tire design in order to predict the tire behaviour and improve its mechanical characteristics.
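    The Mooney-Rivlin law named in this record has a simple closed form in uniaxial tension for an incompressible material, P = 2*(lam - lam**-2)*(C10 + C01/lam), which the sketch below evaluates. The material constants used here are hypothetical and are not the tire's fitted values.

    ```python
    # Sketch of the incompressible Mooney-Rivlin law in uniaxial tension;
    # C10 and C01 below are hypothetical, not the paper's fitted constants.
    import numpy as np

    def mooney_rivlin_nominal_stress(lam, C10, C01):
        """Engineering (nominal) stress in MPa for stretch ratio lam."""
        return 2.0 * (lam - lam ** -2) * (C10 + C01 / lam)

    stretch = np.linspace(1.0, 2.0, 5)
    print(mooney_rivlin_nominal_stress(stretch, C10=0.3, C01=0.05))   # MPa
    ```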

  14. Investigation and experimental validation of the contribution of optical interconnects in the SYMPHONIE massively parallel computer

    International Nuclear Information System (INIS)

    Scheer, Patrick

    1998-01-01

    Progress in microelectronics leads to electronic circuits which are increasingly integrated, with an operating frequency and an input/output count larger than those supported by printed circuit board and back-plane technologies. As a result, distributed systems with several boards cannot fully exploit the performance of integrated circuits. In synchronous parallel computers, the situation is worse since the overall system performance relies on the efficiency of the electrical interconnects between the integrated circuits which contain the processing elements (PE). The study of a real parallel computer named SYMPHONIE shows, for instance, that the system operating frequency is far smaller than the capabilities of the microelectronics technology used for the PE implementation. Optical interconnections may remove these limitations by providing more efficient connections between the PEs. In particular, free-space optical interconnections based on vertical-cavity surface-emitting lasers (VCSEL), micro-lenses and PIN photodiodes are compatible with the required features of the PE communications. Zero-bias modulation of VCSELs with CMOS-compatible digital signals is studied and experimentally demonstrated. A model of the propagation of truncated Gaussian beams through micro-lenses is developed. It is then used to optimise the geometry of the detection areas. A dedicated mechanical system is also proposed and implemented for integrating free-space optical interconnects in a standard electronic environment, representative of that of parallel computer systems. A specially designed demonstrator provides the experimental validation of the above physical concepts. (author) [fr

  15. A mathematical model for hydrogen evolution in an electrochemical cell and experimental validation

    International Nuclear Information System (INIS)

    Mahmut D Mat; Yuksel Kaplan; Beycan Ibrahimoglu; Nejat Veziroglu; Rafig Alibeyli; Sadiq Kuliyev

    2006-01-01

    Electrochemical reactions are widely employed in various industrial areas such as hydrogen production, the chlorate process, electroplating, metal purification, etc. Most of these processes take place with gas evolution on the electrodes. The presence of a gas phase within the liquid phase makes the problem a two-phase flow problem, for which much knowledge is available from heat transfer and fluid mechanics studies. The motivation of this study is to investigate hydrogen release in an electrolysis process from a two-phase flow point of view and to investigate the effect of gas release on the electrolysis process. Hydrogen evolution, the flow field and the current density distribution in an electrochemical cell are investigated with a two-phase flow model. The mathematical model involves solutions of transport equations for the variables of each phase with allowance for interphase transfer of mass and momentum. An experimental set-up is established to collect data to validate and improve the mathematical model. The void fraction is determined from measurements of resistivity changes in the system due to the presence of bubbles. Good agreement is obtained between the numerical results and the experimental data. (authors)

  16. Experimental and Numerical Investigations on Feasibility and Validity of Prismatic Rock Specimen in SHPB

    Directory of Open Access Journals (Sweden)

    Xibing Li

    2016-01-01

    The paper presents experimental and numerical studies on the feasibility and validity of using prismatic rock specimens in the split Hopkinson pressure bar (SHPB) test. First, experimental tests are conducted to evaluate the stress and strain uniformity in the prismatic specimens during impact loading. The stress analysis at the ends of the specimen shows that stress equilibrium can be achieved after about three wave reflections in the specimen, and the balance can be well maintained for a certain time after peak stress. The strain analysis reveals that the prismatic specimen deforms uniformly during the dynamic loading period. Second, a numerical simulation is performed to further verify the stress and strain uniformity in the prismatic specimen in the SHPB test. It indicates that stress equilibrium can be achieved in the prismatic specimen despite a certain degree of stress concentration at the corners. The comparative experiments demonstrate that the change of specimen shape has no significant effect on the dynamic responses and failure patterns of the specimen. Finally, a dynamic crack propagation test is presented to show the application of the present work in studying fracturing mechanisms under dynamic loading.
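    The stress-equilibrium check described in this record is conventionally done by comparing the specimen front-face stress, built from the incident and reflected strain pulses, with the back-face stress built from the transmitted pulse. The sketch below shows that standard one-wave/two-wave bookkeeping; the bar properties and strain histories are hypothetical, not the paper's data.

    ```python
    # Sketch of the standard SHPB front/back face stress check with hypothetical
    # bar properties and strain pulses.
    import numpy as np

    def face_stresses(eps_i, eps_r, eps_t, E_bar=200e9, A_bar=1.963e-3, A_spec=1.6e-3):
        """Return (front-face, back-face) specimen stress histories in Pa."""
        factor = E_bar * A_bar / A_spec
        front = factor * (np.asarray(eps_i) + np.asarray(eps_r))   # incident + reflected
        back = factor * np.asarray(eps_t)                          # transmitted
        return front, back

    eps_i = np.array([0.0, 4e-4, 8e-4, 8e-4])
    eps_r = np.array([0.0, -1e-4, -2e-4, -2e-4])
    eps_t = np.array([0.0, 3e-4, 6e-4, 6e-4])
    front, back = face_stresses(eps_i, eps_r, eps_t)
    print(np.round(front / 1e6, 1), np.round(back / 1e6, 1))   # MPa; equal values indicate equilibrium
    ```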

  17. An experimental approach to improve the Monte Carlo modelling of offline PET/CT-imaging of positron emitters induced by scanned proton beams

    International Nuclear Information System (INIS)

    Bauer, J; Unholtz, D; Kurz, C; Parodi, K

    2013-01-01

    We report on the experimental campaign carried out at the Heidelberg Ion-Beam Therapy Center (HIT) to optimize the Monte Carlo (MC) modelling of proton-induced positron-emitter production. The presented experimental strategy constitutes a pragmatic inverse approach to overcome the known uncertainties in the modelling of positron-emitter production due to the lack of reliable cross-section data for the relevant therapeutic energy range. This work is motivated by the clinical implementation of offline PET/CT-based treatment verification at our facility. Here, the irradiation induced tissue activation in the patient is monitored shortly after the treatment delivery by means of a commercial PET/CT scanner and compared to a MC simulated activity expectation, derived under the assumption of a correct treatment delivery. At HIT, the MC particle transport and interaction code FLUKA is used for the simulation of the expected positron-emitter yield. For this particular application, the code is coupled to externally provided cross-section data of several proton-induced reactions. Studying experimentally the positron-emitting radionuclide yield in homogeneous phantoms provides access to the fundamental production channels. Therefore, five different materials have been irradiated by monoenergetic proton pencil beams at various energies and the induced β + activity subsequently acquired with a commercial full-ring PET/CT scanner. With the analysis of dynamically reconstructed PET images, we are able to determine separately the spatial distribution of different radionuclide concentrations at the starting time of the PET scan. The laterally integrated radionuclide yields in depth are used to tune the input cross-section data such that the impact of both the physical production and the imaging process on the various positron-emitter yields is reproduced. The resulting cross-section data sets allow to model the absolute level of measured β + activity induced in the investigated

  18. Numerical modelling and experimental validation of hydrodynamics of an emulsion in an extraction column

    International Nuclear Information System (INIS)

    Paisant, Jean-Francois

    2014-01-01

    In a second approach, an experimental device was sized to establish an extensional flow in order to characterize and validate the physical model through data acquisition. These series of experiments were conducted by coupling particle image velocimetry (PIV) with laser-induced fluorescence (LIF). The continuous-phase velocity was obtained by PIV, and a drop detection and tracking algorithm was developed to estimate the dispersed- and continuous-phase velocities and the volume fraction of the dispersed phase. These results, such as the velocities and the strain rate tensor, have been used in a first validation of the model. (author) [fr

  19. Experimental and numerical validation of a two-region-designed pebble bed reactor with dynamic core

    International Nuclear Information System (INIS)

    Jiang, S.Y.; Yang, X.T.; Tang, Z.W.; Wang, W.J.; Tu, J.Y.; Liu, Z.Y.; Li, J.

    2012-01-01

    Highlights: ► An experimental installation has been built to investigate the pebble flow. ► The feasibility of a two-region pebble bed reactor has been verified. ► The pebble flow is more uniform in a taller vessel than in a shorter vessel. ► A larger base cone angle decreases the size of the stagnant zone. - Abstract: The pebble flow is the principal issue in the design of a pebble bed reactor. In order to verify the feasibility of a two-region-designed pebble bed reactor, an experimental installation with a taller vessel, proportional to the real pebble bed reactor, has been built. With the aid of the experimental installation, the stable establishment and maintenance of the two-region arrangement has been verified, and at the same time the applicability of the DEM program has been validated. The research results show: (1) The pebbles' bouncing on the free surface is an important factor in the mixing of the differently colored pebbles. (2) Through guide plates installed at the top of the pebble packing, the size of the mixing zone can be reduced from 6–7 times to 3–4 times the pebble diameter. (3) The relationship between the width of the central region and the ratio of loaded pebbles is approximately linear in the taller vessel. (4) The heightened part of the pebble packing can improve the uniformity of the flow in the lower part. (5) Increasing the base cone angle decreases the size of the stagnant zone. All of these conclusions are meaningful for the design of the real pebble bed reactor.

  20. CFD simulation and experimental validation of a GM type double inlet pulse tube refrigerator

    Science.gov (United States)

    Banjare, Y. P.; Sahoo, R. K.; Sarangi, S. K.

    2010-04-01

    Pulse tube refrigerator has the advantages of long life and low vibration over the conventional cryocoolers, such as GM and stirling coolers because of the absence of moving parts in low temperature. This paper performs a three-dimensional computational fluid dynamic (CFD) simulation of a GM type double inlet pulse tube refrigerator (DIPTR) vertically aligned, operating under a variety of thermal boundary conditions. A commercial computational fluid dynamics (CFD) software package, Fluent 6.1 is used to model the oscillating flow inside a pulse tube refrigerator. The simulation represents fully coupled systems operating in steady-periodic mode. The externally imposed boundary conditions are sinusoidal pressure inlet by user defined function at one end of the tube and constant temperature or heat flux boundaries at the external walls of the cold-end heat exchangers. The experimental method to evaluate the optimum parameters of DIPTR is difficult. On the other hand, developing a computer code for CFD analysis is equally complex. The objectives of the present investigations are to ascertain the suitability of CFD based commercial package, Fluent for study of energy and fluid flow in DIPTR and to validate the CFD simulation results with available experimental data. The general results, such as the cool down behaviours of the system, phase relation between mass flow rate and pressure at cold end, the temperature profile along the wall of the cooler and refrigeration load are presented for different boundary conditions of the system. The results confirm that CFD based Fluent simulations are capable of elucidating complex periodic processes in DIPTR. The results also show that there is an excellent agreement between CFD simulation results and experimental results.

  1. Experimental Definition and Validation of Protein Coding Transcripts in Chlamydomonas reinhardtii

    Energy Technology Data Exchange (ETDEWEB)

    Kourosh Salehi-Ashtiani; Jason A. Papin

    2012-01-13

    Algal fuel sources promise unsurpassed yields in a carbon neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Using Chlamydomonas reinhardtii as a model, we developed a systems-level methodology bridging metabolic network reconstruction with annotation and experimental verification of enzyme encoding open reading frames. We reconstructed a genome-scale metabolic network for this alga and devised a novel light-modeling approach that enables quantitative growth prediction for a given light source, resolving wavelength and photon flux. We experimentally verified transcripts accounted for in the network and physiologically validated model function through simulation and generation of new experimental growth data, providing high confidence in network contents and predictive applications. The network offers insight into algal metabolism and potential for genetic engineering and efficient light source design, a pioneering resource for studying light-driven metabolism and quantitative systems biology. Our approach to generate a predictive metabolic model integrated with cloned open reading frames provides a cost-effective platform to generate metabolic engineering resources. While the generated resources are specific to algal systems, the approach that we have developed is not specific to algae and

  2. Experimental determination of the radial dose distribution in high gradient regions around 192Ir wires: Comparison of electron paramagnetic resonance imaging, films, and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B.

    2010-01-01

    Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low dose rate 192Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate (∼0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a 192Ir wire source.

  3. Experimental Validation of UTDefect: Scattering in Anisotropic Media and Near-field Behavior

    International Nuclear Information System (INIS)

    Pecorari, Claudio

    2002-11-01

    Theoretical models that simulate measurements of ultrasonic waves undergoing scattering by material defects have been developed by Prof. Bostroem and co-workers at Chalmers Univ. of Tech. for a variety of experimental configurations and defects. A software program named UTDefect has been developed at the same time, which gathers the theoretical results obtained so far in a single package. A discussion of the motivations behind such an effort and details concerning UTDefect can be found in articles by Bostroem. Following an initial effort to validate some of the theoretical predictions available at the time, the present project was conceived as support to the ongoing theoretical work. In fact, the goal of the project described in this report has been the experimental validation of two aspects of the above theory that had not yet been tested: the scattering of a finite ultrasonic beam by a surface-breaking crack in an anisotropic medium, and an improved model of the behaviour of a finite ultrasonic beam in the near-field region of the source. In the latter case, the supporting medium is assumed to be isotropic. To carry out the first task, a single-crystal silicon sample was employed. A surface-breaking notch with a depth of approximately 1.8 mm was introduced by means of a wire-cutting saw to simulate a scattering defect. Two kinds of measurements were performed on this sample. The first one considered the signal amplitude as a function of the transducer position. To this end, three wedges generating beams propagating in different directions were used. The second series of measurements concerned the frequency content of the backscattered signals at the position where the amplitude was maximum. All three wedges mentioned above were also used in this part of the work. The experimental results were compared to the values of the physical quantities of interest as predicted by UTDefect, with the only difference that UTDefect was run for a sub-surface rectangular

  4. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
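
    To make the idea concrete, the following minimal Python sketch (an illustration only, not one of the GP or SP codes discussed by the authors) estimates uncollided photon transmission through a slab by sampling exponential free paths, the elementary building block of the gauging and measurement simulations referred to above.

        import numpy as np

        rng = np.random.default_rng(0)

        def slab_transmission(mu, thickness, n_photons=100_000):
            """Estimate uncollided photon transmission through a slab with total
            attenuation coefficient mu (1/cm) and the given thickness (cm) by
            sampling exponential free paths."""
            paths = rng.exponential(scale=1.0 / mu, size=n_photons)
            p = np.count_nonzero(paths > thickness) / n_photons
            err = np.sqrt(p * (1.0 - p) / n_photons)   # 1-sigma binomial estimate
            return p, err

        p, err = slab_transmission(mu=0.2, thickness=5.0)
        print(f"MC: {p:.4f} +/- {err:.4f}   analytic exp(-mu*t): {np.exp(-0.2 * 5.0):.4f}")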

  5. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
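
    As a brief illustration of the insensitivity to dimension mentioned above (not taken from the book), the short Python sketch below estimates the volume of the unit ball in 2, 5 and 10 dimensions with the same sample budget; the statistical error scales as 1/sqrt(N) in every case, whereas a tensor-product quadrature grid would need a number of points growing exponentially with dimension.

        import numpy as np

        rng = np.random.default_rng(1)

        def mc_ball_volume(dim, n_samples=200_000):
            """Monte Carlo estimate of the volume of the unit ball in `dim` dimensions:
            draw points uniformly in the cube [-1, 1]^dim and count the fraction that
            falls inside the ball. The statistical error decays as 1/sqrt(n_samples)
            independently of dim."""
            pts = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
            inside = np.count_nonzero((pts ** 2).sum(axis=1) <= 1.0)
            return (2.0 ** dim) * inside / n_samples

        for d in (2, 5, 10):
            print(d, mc_ball_volume(d))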

  6. Radiolysis of liquid water: an attempt to reconcile Monte-Carlo calculations with new experimental hydrated electron yield data at early times

    International Nuclear Information System (INIS)

    Muroya, Y.; Meesungnoen, J.; Jay-Gerin, J.-P.; Filali-Mouhim, A.; Goulet, T.; Katsumura, Y.; Mankhetkorn, S.

    2002-01-01

    A re-examination of our Monte-Carlo modeling of the radiolysis of liquid water by low linear-energy-transfer (LET ∼ 0.3 keV/μm) radiation is undertaken herein in an attempt to reconcile the results of our simulation code with recently revised experimental hydrated electron (e_aq^-) yield data at early times. The thermalization distance of subexcitation electrons, the recombination cross section of the electrons with their water parent cations prior to thermalization, and the branching ratios of the different competing mechanisms in the dissociative decay of vibrationally excited states of water molecules were taken as adjustable parameters in our simulations. Using a global-fit procedure, we have been unable to find a set of values for those parameters to simultaneously reproduce (i) the revised e_aq^- yield of 4.0 ± 0.2 molecules per 100 eV at 'time zero' (that is, a reduction of ∼20% over the hitherto accepted value of 4.8 molecules per 100 eV), (ii) the newly measured e_aq^- decay kinetic profile from 100 ps to 10 ns, and (iii) the time-dependent yields of the other radiolytic species H•, •OH, H2, and H2O2 (up to ∼1 μs). The lowest possible limiting 'time-zero' yield of e_aq^- that we could in fact obtain, while ensuring an acceptable agreement between all computed and experimental yields, was ∼4.4 to 4.5 molecules per 100 eV. Under these conditions, the mean values of the electron thermalization distance and of the geminate electron-cation recombination probability, averaged over the subexcitation electron 'entry spectrum', are found to be equal to ∼139 Å and ∼18%, respectively. These values are to be compared with those obtained in our previous simulations of liquid water radiolysis, namely ∼88 Å and ∼5.5%, respectively. Our average electron thermalization distance is also to be compared with the typical size (∼64-80 Å) of the initial hydrated electron distributions estimated in current deterministic models of 'spur' chemistry

  7. Experimental validation of calculated capture rates for nuclei involved in the fuel cycle

    International Nuclear Information System (INIS)

    Benslimane-Bouland, A.

    1997-09-01

    The framework of this study was the evaluation of the nuclear data requirements for actinides and fission products applied to current nuclear reactors as well as future applications. This last item includes extended irradiation campaigns, 100% mixed oxide fuel, transmutation or even incineration. The first part of this study presents the different types of integral measurements which are available for capture rate measurements, as well as the methods used for reactor core calculation route design and nuclear data library validation. The second section concerns the analysis of three specific irradiation experiments. The results have shown the extent of the current knowledge of nuclear data as well as the associated uncertainties. The third and last section presents both the coherence between all the results and the statistical method applied for nuclear data library adjustment. A relevant application of this method has demonstrated that only specifically chosen integral experiments can be of use for the validation of nuclear data libraries. The conclusion is reached that, even if co-ordinated efforts between reactor and nuclear physicists have made possible a huge improvement in the knowledge of the capture cross sections of the main nuclei such as uranium and plutonium, some improvements are still necessary for the minor actinides (Np, Am and Cm). Both integral and differential measurements are recommended to improve the knowledge of minor actinide cross sections. As far as integral experiments are concerned, a set of criteria to be followed during the experimental conception has been defined in order both to reduce the number of required calculation approximations and to increase as much as possible the amount of extracted information. (author)

  8. Combined Heat Transfer in High-Porosity High-Temperature Fibrous Insulations: Theory and Experimental Validation

    Science.gov (United States)

    Daryabeigi, Kamran; Cunnington, George R.; Miller, Steve D.; Knutson, Jeffry R.

    2010-01-01

    Combined radiation and conduction heat transfer through various high-temperature, high-porosity, unbonded (loose) fibrous insulations was modeled based on first principles. The diffusion approximation was used for modeling the radiation component of heat transfer in the optically thick insulations. The relevant parameters needed for the heat transfer model were derived from experimental data. Semi-empirical formulations were used to model the solid conduction contribution of heat transfer in fibrous insulations with the relevant parameters inferred from thermal conductivity measurements at cryogenic temperatures in a vacuum. The specific extinction coefficient for radiation heat transfer was obtained from high-temperature steady-state thermal measurements with large temperature gradients maintained across the sample thickness in a vacuum. Standard gas conduction modeling was used in the heat transfer formulation. This heat transfer modeling methodology was applied to silica, two types of alumina, and a zirconia-based fibrous insulation, and to a variation of opacified fibrous insulation (OFI). OFI is a class of insulations manufactured by embedding efficient ceramic opacifiers in various unbonded fibrous insulations to significantly attenuate the radiation component of heat transfer. The heat transfer modeling methodology was validated by comparison with more rigorous analytical solutions and with standard thermal conductivity measurements. The validated heat transfer model is applicable to various densities of these high-porosity insulations as long as the fiber properties are the same (index of refraction, size distribution, orientation, and length). Furthermore, the heat transfer data for these insulations can be obtained at any static pressure in any working gas environment without the need to perform tests in various gases at various pressures.
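
    The radiation term referred to above is commonly handled with the optically thick (Rosseland) diffusion approximation, in which radiation acts as an effective conductivity growing as T^3. The Python sketch below evaluates one common form of that relation, k_r = 16 n^2 sigma T^3 / (3 rho e), where rho is the insulation bulk density and e the specific extinction coefficient; the numerical values are illustrative assumptions, not the calibrated parameters of this paper.

        import numpy as np

        SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

        def radiative_conductivity(T, rho, e_spec, n_eff=1.0):
            """Optically thick (diffusion) approximation for the radiative contribution
            to the conductivity of a fibrous insulation:
                k_r = 16 * n_eff**2 * SIGMA * T**3 / (3 * rho * e_spec)
            T      : temperature, K
            rho    : insulation bulk density, kg/m^3        (illustrative value below)
            e_spec : specific extinction coefficient, m^2/kg (illustrative value below)
            """
            return 16.0 * n_eff ** 2 * SIGMA * T ** 3 / (3.0 * rho * e_spec)

        for T in (300.0, 800.0, 1300.0):
            print(f"T = {T:6.0f} K   k_r = {radiative_conductivity(T, rho=48.0, e_spec=30.0):.4f} W/(m K)")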

  9. Zero-G experimental validation of a robotics-based inertia identification algorithm

    Science.gov (United States)

    Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou

    2010-04-01

    The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties [1]. Previous methods require firing known thrusts and then measuring the thrust together with the resulting velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite. Thus, it requires no fuel usage or force and acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However, its experimental validation is challenging because a 6-degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit are discussed in addition to the flight data. This paper also introduces the design and development of an air-bearing-based test used to partially validate the method, as well as the approach used to obtain reference values for the test system's inertia parameters that can be used for comparison with the algorithm results.
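
    A hedged sketch of the momentum-conservation idea (a generic formulation, not necessarily the authors' exact algorithm): if the spacecraft-plus-arm system is torque-free, the base-body inertia I satisfies I*w_k + h_k = I*w_0 + h_0 at every sample k, where w_k is the measured base angular velocity and h_k is the known angular momentum of the moving appendage. Each sample therefore gives three equations that are linear in the six unknown inertia elements, and stacking many samples yields a least-squares problem.

        import numpy as np

        def omega_matrix(w):
            """3x6 matrix W(w) such that W @ [Ixx, Iyy, Izz, Ixy, Ixz, Iyz] = I @ w
            for a symmetric inertia tensor I and angular velocity w (rad/s)."""
            wx, wy, wz = w
            return np.array([[wx, 0., 0., wy, wz, 0.],
                             [0., wy, 0., wx, 0., wz],
                             [0., 0., wz, 0., wx, wy]])

        def identify_inertia(omegas, h_arm):
            """Least-squares identification of the base-body inertia from momentum
            conservation, I @ w_k + h_k = I @ w_0 + h_0 for every sample k.
            omegas : (N, 3) measured base angular velocities
            h_arm  : (N, 3) angular momentum of the moving appendage (from its own model)
            Returns the six inertia elements [Ixx, Iyy, Izz, Ixy, Ixz, Iyz]."""
            A = np.vstack([omega_matrix(w) - omega_matrix(omegas[0]) for w in omegas[1:]])
            b = -(h_arm[1:] - h_arm[0]).reshape(-1)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x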

  10. Experimental validation of calculation schemes connected with PWR absorbers and burnable poisons

    Energy Technology Data Exchange (ETDEWEB)

    Klenov, P.

    1995-10-01

    In France, 80% of the electricity is produced by PWR reactors. For a better exploitation of these reactors, the modular computer code Apollo-II has been developed. This code computes the neutron flux by solving the transport equation with the discrete ordinates method or the collision probability method on extended configurations such as reactor cells, assemblies or small cores. For the validation of this code on mixed oxide fuel lattices with absorbers, the experimental program Epicure was carried out in the reactor Eole. This thesis is devoted to the validation of the Apollo code against the results of the Epicure program. 43 refs., 65 figs., 1 append.

  11. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    computed counterparts resulting in an agreement within 2.5%, 5%, and 8% within solid water, bone, and lung, respectively. Conclusions: The proposed virtual point source model and characterization method can be used to compute absorbed dose in both the homogeneous and heterogeneous block phantoms to within 2%–8% of measured values, depending on the phantom and the beam quality. The authors’ results also provide experimental validation for their kV dose computation software, kVDoseCalc.

  12. Underwater behaviour of bitumen-coated radioactive wastes: experimental validation of the COLONBO degradation model

    International Nuclear Information System (INIS)

    Gwinner, B.

    2004-03-01

    In the release scenario considered for a geologic repository, water is thought to be the main aggressive agent with regard to bituminized radioactive waste (composed in general of 60 wt% bitumen, 40% soluble/insoluble salts and a few ppm of radionuclides). Since liquid water can diffuse in pure bitumen, leaching of bituminized waste results in the dissolution of the most soluble salts and leads to the development of a more or less concentrated saline-solution-filled pore structure (called the permeable layer). As a consequence of the generation of this porous layer in the bituminized waste, leaching of salts and radionuclides can then take place. Research performed at the Atomic Energy Commission (CEA) therefore aims at understanding the consequences of ground-water immersion on the transport properties and radionuclide leaching of bituminized waste materials. To this end, a constitutive model (called COLONBO) which mathematically describes the leaching of bituminized waste has been developed. The COLONBO model is based on the following assumptions: 1. Water and dissolved salts migrate in the permeable layer according to Fick's first law. The diffusion of water and salts is quantified by effective diffusion coefficients which are unknown. 2. The mechanical properties of the bitumen matrix are not considered during leaching (free swelling). Up to now, the COLONBO model has been used only to model experimental water uptake and salt leach curves, leading to (theoretical) estimates of the effective diffusion coefficients of water and salts in the permeable layer. The aim of this work was to validate experimentally the numerical results obtained with the COLONBO model. First, the correspondence between experimental and simulated water uptake and salt leach rates obtained on various bituminized waste materials is checked, leading to estimates of the effective diffusion coefficients of water and salts in the permeable layer. Second, the evolution of the thickness and of the
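
    Since the model rests on Fickian diffusion with unknown effective coefficients, the kind of curve being fitted can be illustrated with a small sketch (illustrative parameter values; this is not the COLONBO code): an explicit finite-difference solution of Fick's second law for water penetrating the permeable layer, whose cumulative uptake would be compared with the measured water uptake curves.

        import numpy as np

        def water_uptake(D_eff=1e-13, L=2e-3, c_surf=1.0, t_end=3.15e7, nx=200):
            """Explicit 1-D finite-difference solution of Fick's second law,
            dc/dt = D_eff * d2c/dx2, for water penetrating a layer of thickness L (m)
            from a surface held at concentration c_surf, with a zero-flux inner
            boundary. Returns times (s) and the cumulative uptake per unit area
            (arbitrary units). All parameter values are illustrative."""
            dx = L / nx
            dt = 0.4 * dx ** 2 / D_eff                 # below the explicit stability limit
            c = np.zeros(nx + 1)
            c[0] = c_surf
            times, uptake, t = [0.0], [0.0], 0.0
            while t < t_end:
                c[1:-1] += D_eff * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                c[-1] = c[-2]                          # zero-flux inner boundary
                t += dt
                times.append(t)
                uptake.append(float(np.sum(c) * dx))
            return np.array(times), np.array(uptake)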

  13. CFD simulation of a burner for syngas characterization and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Fantozzi, Francesco; Desideri, Umberto [University of Perugia (Italy). Dept. of Industrial Engineering], Emails: fanto@unipg.it, umberto.desideri@unipg.it; D' Amico, Michele [University of Perugia (Italy). Dept. of Energetic Engineering], E-mail: damico@crbnet.it

    2009-07-01

    Biomass and waste are distributed and renewable energy sources that may contribute effectively to sustainability if used on a small and micro scale. This requires their transformation, through efficient technologies (gasification, pyrolysis and anaerobic digestion), into a suitable gaseous fuel for use in small internal combustion engines and gas turbines. The characterization of biomass-derived syngas during combustion is therefore a key issue for improving the performance of small-scale integrated plants, because synthesis gas shows significant differences with respect to natural gas (mixture of gases, low calorific value, hydrogen content, tar and particulate content) that may lead to ignition problems, combustion instabilities, difficulties in emission control and fouling. To this aim, a burner for syngas combustion and LHV measurement through mass and energy balance was built and connected to the rotary-kiln laboratory-scale pyrolyzer at the Department of Industrial Engineering of the University of Perugia. A computational fluid dynamics (CFD) simulation of the burner was carried out considering the combustion of propane, to investigate the temperature and pressure distribution, heat transmission and the distribution of the combustion products and by-products. The simulation was carried out using the CFD program Star-CD. Before the simulation, a geometrical model of the burner was built and the volume of the model was subdivided into cells. A mesh sensitivity analysis was carried out to estimate the degree of approximation of the model. Experimental combustion emission data were acquired for propane combustion in the burner, and the comparison between numerical results and experimental data was studied to validate the simulation for future work on the combustion of treated or raw syngas (syngas with tar) obtained from the pyrolysis process. (author)

  14. On-chip gradient generation in 256 microfluidic cell cultures: simulation and experimental validation.

    Science.gov (United States)

    Somaweera, Himali; Haputhanthri, Shehan O; Ibraguimov, Akif; Pappas, Dimitri

    2015-08-07

    A microfluidic diffusion diluter was used to create a stable concentration gradient for dose response studies. The microfluidic diffusion diluter used in this study consisted of 128 culture chambers on each side of the main fluidic channel. A calibration method was used to find unknown concentrations with 12% error. Flow rate dependent studies showed that changing the flow rates generated different gradient patterns. Mathematical simulations using COMSOL Multiphysics were performed to validate the experimental data. The experimental data obtained for the flow rate studies agreed with the simulation results. Cells could be loaded into culture chambers using vacuum actuation and cultured for long times under low shear stress. Decreasing the size of the culture chambers resulted in faster gradient formation (20 min). Mass transport into the side channels of the microfluidic diffusion diluter used in this study is an important factor in creating the gradient using diffusional mixing as a function of the distance. To demonstrate the device's utility, an H2O2 gradient was generated while culturing Ramos cells. Cell viability was assayed in the 256 culture chambers, each at a discrete H2O2 concentration. As expected, the cell viability for the high concentration side channels increased (by injecting H2O2) whereas the cell viability in the low concentration side channels decreased along the chip due to diffusional mixing as a function of distance. COMSOL simulations were used to identify the effective concentration of H2O2 for cell viability in each side chamber at 45 min. The gradient effects were confirmed using traditional H2O2 culture experiments. Viability of cells in the microfluidic device under gradient conditions showed a linear relationship with the viability of the traditional culture experiment. The microfluidic device developed in this study could be used to study hundreds of concentrations of a compound in a single experiment.

  15. Development and experimental validation of a thermoelectric test bench for laboratory lessons

    Directory of Open Access Journals (Sweden)

    Antonio Rodríguez

    2013-12-01

    Full Text Available The refrigeration process reduces the temperature of a space or a given volume, while the power generation process employs a source of thermal energy to generate electrical power. Because of the importance of these two processes, training engineers in this area is of great interest. Engineering courses normally cover vapor compression and absorption refrigeration, and power generation systems such as gas turbines and steam turbines. Another type of cooling and generation, less studied within the engineering curriculum but of great interest, is cooling and thermal generation based on the Peltier and Seebeck effects. The theoretical concepts are useful, but students have difficulty understanding the physical meaning of their possible applications. Providing students with tools to test and apply the theory in real applications leads to a better understanding of the subject. Engineers must have strong theoretical, computational and also experimental skills. A prototype test bench has been built and experimentally validated to perform practical lessons on thermoelectric generation and refrigeration. Using this prototype, students learn the most effective way of cooling systems and of thermal power generation, as well as basic concepts associated with thermoelectricity. It has been proven that students learn the process of data acquisition and the technology used in thermoelectric devices. These practical lessons are implemented for a group of 60 students within the Thermodynamics subject of the Degree in Engineering in Industrial Technologies of the Public University of Navarra.

  VIRmiRNA: a comprehensive resource for experimentally validated viral miRNAs and their targets.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Monga, Isha; Thakur, Anamika; Kumar, Manoj

    2014-01-01

    Viral microRNAs (miRNAs) regulate gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host-virus interactions and the pathogenesis of viral diseases. Lately, miRNAs have also shown potential as important targets for the development of novel antiviral therapeutics. Although several miRNA and miRNA-target repositories are available for human and other organisms in the literature, a dedicated resource on viral miRNAs and their targets has been lacking. Therefore, we have developed a comprehensive viral miRNA resource harboring information on 9133 entries in three subdatabases. This includes 1308 experimentally validated miRNA sequences with their isomiRs, encoded by 44 viruses, in the viral miRNA subdatabase 'VIRMIRNA', and 7283 of their target genes in 'VIRMIRTAR'. Additionally, there is information on 542 antiviral miRNAs encoded by the host against 24 viruses in the antiviral miRNA subdatabase 'AVIRMIR'. The web interface was developed using the Linux-Apache-MySQL-PHP (LAMP) software bundle. User-friendly browse, search, advanced search and useful analysis tools are also provided on the web interface. VIRmiRNA is the first specialized resource of experimentally proven virus-encoded miRNAs and their associated targets. This database would enhance the understanding of viral/host gene regulation and may also prove beneficial in the development of antiviral therapeutics. Database URL: http://crdd.osdd.net/servers/virmirna. © The Author(s) 2014. Published by Oxford University Press.

  16. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers.

    Science.gov (United States)

    White, M J; Nellis, G F; Klein, S A; Zhu, W; Gianchandani, Y

    2010-11-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid.

  17. DC microgrid power flow optimization by multi-layer supervision control. Design and experimental validation

    International Nuclear Information System (INIS)

    Sechilariu, Manuela; Wang, Bao Chao; Locment, Fabrice; Jouglet, Antoine

    2014-01-01

    Highlights: • DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control. • Power balancing following power flow optimization while providing an interface for smart grid communication. • Optimization under constraints: storage capability, grid power limitations, grid time-of-use pricing. • Experimental validation of DC microgrid power flow optimization by multi-layer supervision control. • DC microgrid able to perform peak shaving, to avoid undesired injection, and to make full use of locally produced energy. - Abstract: Urban areas have great potential for photovoltaic (PV) generation; however, direct PV power injection has limitations at high levels of PV penetration. It imposes an additional burden on grid power balancing because it lacks the ability to respond to grid issues such as reducing grid peak consumption or avoiding undesired injections. The smart grid implementation, which is designed to meet these requirements, is facilitated by the development of microgrids. This paper presents a DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control which handles instantaneous power balancing following the power flow optimization, while providing an interface for smart grid communication. The optimization takes into account forecasts of PV power production and load power demand, while satisfying constraints such as storage capability, grid power limitations, grid time-of-use pricing and grid peak hours. The optimization, whose efficiency is related to the prediction accuracy, is carried out by mixed integer linear programming. Experimental results show that the proposed microgrid structure is able to control the power flow at near-optimum cost and ensures a self-correcting capability. It can perform peak shaving, avoid undesired injection, and make full use of locally produced energy while respecting the rigid element constraints.
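
    The scheduling problem described above can be illustrated with a much-simplified sketch (a continuous linear program rather than the paper's mixed-integer formulation; the horizon, tariffs, storage limits and the no-injection bound g >= 0 are all assumed values): minimise the grid energy cost over a short horizon subject to the power balance and storage energy limits.

        import numpy as np
        from scipy.optimize import linprog

        # Assumed 4-hour horizon, PV/load forecasts, tariff and storage data (kW, kWh)
        pv    = np.array([0.0, 3.0, 4.0, 1.0])
        load  = np.array([2.0, 2.5, 3.0, 4.0])
        price = np.array([0.10, 0.10, 0.20, 0.30])         # grid time-of-use tariff
        g_max, e0, e_min, e_max = 5.0, 2.0, 0.5, 4.0        # grid limit, storage state/limits

        T = len(pv)
        # Decision variables x = [g_0..g_3, b_0..b_3]: grid import g >= 0 (no injection)
        # and battery discharge b (negative = charging), 1-hour steps.
        c = np.concatenate([price, np.zeros(T)])            # minimise grid energy cost
        A_eq = np.hstack([np.eye(T), np.eye(T)])            # power balance: g_t + b_t = load_t - pv_t
        b_eq = load - pv
        # Storage energy after step t is E_t = e0 - sum_{k<=t} b_k; keep e_min <= E_t <= e_max.
        L = np.tril(np.ones((T, T)))
        A_ub = np.vstack([np.hstack([np.zeros((T, T)), -L]),   # E_t <= e_max
                          np.hstack([np.zeros((T, T)),  L])])  # E_t >= e_min
        b_ub = np.concatenate([(e_max - e0) * np.ones(T), (e0 - e_min) * np.ones(T)])
        bounds = [(0.0, g_max)] * T + [(-3.0, 3.0)] * T        # battery power limit +/- 3 kW

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print("grid import (kW):      ", np.round(res.x[:T], 2))
        print("battery discharge (kW):", np.round(res.x[T:], 2))
        print("energy cost:           ", round(res.fun, 3))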

  18. On the selection of shape and orientation of a greenhouse. Thermal modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Sethi, V.P. [Department of Mechanical Engineering, Punjab Agricultural University, Ludhiana 141 004, Punjab (India)

    2009-01-15

    In this study, the five most commonly used single-span shapes of greenhouses, viz. even-span, uneven-span, vinery, modified arch and quonset type, have been selected for comparison. The length, width and height (at the center) are kept the same for all the selected shapes. A mathematical model for computing the transmitted total solar radiation (beam, diffuse and ground reflected) at each hour, for each month and at any latitude for the selected greenhouse geometries (through each wall, inclined surface and roof) is developed for both east-west and north-south orientations. The computed transmitted solar radiation is then introduced into a transient thermal model developed to compute the hourly inside air temperature for each shape and orientation. Experimental validation of both models is carried out against the measured total solar radiation and inside air temperature for an east-west oriented, even-span greenhouse (for a typical day in summer) at Ludhiana (31 N and 77 E), Punjab, India. During the experimentation, a capsicum crop was grown inside the greenhouse. The predicted and measured values are in close agreement. Results show that the uneven-span shape greenhouse receives the maximum and the quonset shape receives the minimum solar radiation during each month of the year at all latitudes. The east-west orientation is the best suited for year-round greenhouse applications at all latitudes, as this orientation receives greater total radiation in winter and less in summer, except near the equator. Results also show that the inside air temperature rise depends upon the shape of the greenhouse, and this variation from the uneven-span shape to the quonset shape is 4.6 C (maximum) and 3.5 C (daily average) at 31 N latitude. (author)
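
    The core geometric ingredient of such a transmission model is the angle of incidence of beam radiation on each tilted, oriented greenhouse surface. The sketch below evaluates the standard textbook solar-geometry expression for cos(theta); the day, latitude and roof tilt values are illustrative, not the parameters of the paper.

        import numpy as np

        def incidence_cosine(n_day, hour, lat_deg, tilt_deg, azim_deg):
            """Cosine of the beam-radiation incidence angle on a tilted, oriented
            surface (standard solar-geometry relation). Latitude, tilt and surface
            azimuth in degrees (azimuth measured from south, west positive);
            hour is local solar time."""
            d2r = np.radians
            delta = d2r(23.45) * np.sin(2.0 * np.pi * (284 + n_day) / 365.0)  # declination
            omega = d2r(15.0 * (hour - 12.0))                                 # hour angle
            phi, beta, gamma = d2r(lat_deg), d2r(tilt_deg), d2r(azim_deg)
            return (np.sin(delta) * np.sin(phi) * np.cos(beta)
                    - np.sin(delta) * np.cos(phi) * np.sin(beta) * np.cos(gamma)
                    + np.cos(delta) * np.cos(phi) * np.cos(beta) * np.cos(omega)
                    + np.cos(delta) * np.sin(phi) * np.sin(beta) * np.cos(gamma) * np.cos(omega)
                    + np.cos(delta) * np.sin(beta) * np.sin(gamma) * np.sin(omega))

        # Example: a south-facing roof tilted 26 deg at 31 N latitude, mid-June, solar noon
        print(incidence_cosine(n_day=166, hour=12, lat_deg=31.0, tilt_deg=26.0, azim_deg=0.0))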

  19. Validity of the classical Monte Carlo method to model the magnetic properties of a large transition-metal cluster: Mn19.

    Science.gov (United States)

    Lima, Nicola; Caneschi, Andrea; Gatteschi, Dante; Kritikos, Mikael; Westin, L Gunnar

    2006-03-20

    The susceptibility of the large transition-metal cluster [Mn19O12(MOE)14(MOEH)10].MOEH (MOE = OC2H2O-CH3) has been fitted through classical Monte Carlo simulation, yielding an estimate of the exchange coupling constants. With these results, it has been possible to perform a full-matrix diagonalization of the cluster core, which was used to provide information on the nature of the low-lying levels.
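
    For readers unfamiliar with the approach, the following sketch shows how a classical Monte Carlo estimate of the susceptibility is typically obtained for exchange-coupled classical spins (Metropolis sampling of unit spin vectors); the four-spin ring, coupling value and spin length used here are toy assumptions, not the Mn19 topology or the fitted constants of the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def random_unit_vectors(n):
            v = rng.normal(size=(n, 3))
            return v / np.linalg.norm(v, axis=1, keepdims=True)

        def classical_mc_chi(J, spin_len, T, n_sweeps=4000):
            """Metropolis Monte Carlo for classical Heisenberg spins with
            H = -sum_{i<j} J_ij * S_i * S_j * (s_i . s_j); returns the zero-field
            susceptibility chi = (<M^2> - <M>^2) / T (k_B = 1)."""
            n = len(spin_len)
            s = random_unit_vectors(n)
            samples = []
            for sweep in range(n_sweeps):
                for i in range(n):
                    trial = random_unit_vectors(1)[0]
                    field = spin_len[i] * (J[i, :, None] * (spin_len[:, None] * s)).sum(axis=0)
                    dE = -(field @ (trial - s[i]))
                    if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                        s[i] = trial
                if sweep > n_sweeps // 2:                  # discard the equilibration half
                    samples.append((spin_len[:, None] * s).sum(axis=0))
            M = np.array(samples)
            return ((M ** 2).sum(axis=1).mean() - (M.mean(axis=0) ** 2).sum()) / T

        # Toy example: a ring of four classical spins of length 2 with J = -1 (antiferromagnetic)
        J = np.zeros((4, 4))
        for i in range(4):
            J[i, (i + 1) % 4] = J[(i + 1) % 4, i] = -1.0
        print(classical_mc_chi(J, spin_len=np.full(4, 2.0), T=1.0))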

  1. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    Science.gov (United States)

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  2. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    Science.gov (United States)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years to explore a continuous power source for sensor networks and low-power electronics. Torsional vibration widely exists in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system comprised of a shaft and a shear mode piezoelectric transducer. The piezoelectric transducer position on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. The piezoelectric transducer can work in the d15 mode (pure shear mode), the coupled mode of d31 and d33, and the coupled mode of d33, d31 and d15, respectively, when attached at different angles. Approximate expressions for voltage and power are derived from the theoretical model, and they gave predictions in good agreement with analytical solutions. A physical interpretation of the implicit relationship between the power output and the position parameters of the piezoelectric transducer is given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined; at this optimum, the transducer works in the coupled mode of d15, d31 and d33.

  3. Optimal Control of Diesel Engines: Numerical Methods, Applications, and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Jonas Asprion

    2014-01-01

    become complex systems. The exploitation of any leftover potential during transient operation is crucial. However, even an experienced calibration engineer cannot conceive all the dynamic cross couplings between the many actuators. Therefore, a highly iterative procedure is required to obtain a single engine calibration, which in turn causes a high demand for test-bench time. Physics-based mathematical models and a dynamic optimisation are the tools to alleviate this dilemma. This paper presents the methods required to implement such an approach. The optimisation-oriented modelling of diesel engines is summarised, and the numerical methods required to solve the corresponding large-scale optimal control problems are presented. The resulting optimal control input trajectories over long driving profiles are shown to provide enough information to allow conclusions to be drawn for causal control strategies. Ways of utilising this data are illustrated, which indicate that a fully automated dynamic calibration of the engine control unit is conceivable. An experimental validation demonstrates the meaningfulness of these results. The measurement results show that the optimisation predicts the reduction of the fuel consumption and the cumulative pollutant emissions with a relative error of around 10% on highly transient driving cycles.

  4. Analysis and experimental validation of through-thickness cracked large-scale biaxial fracture tests

    International Nuclear Information System (INIS)

    Wiesner, C.S.; Goldthorpe, M.R.; Andrews, R.M.; Garwood, S.J.

    1999-01-01

    Since 1984 TWI has been involved in an extensive series of tests investigating the effects of biaxial loading on the fracture behaviour of A533B steel. Testing conditions have ranged from the lower to upper shelf regions of the transition curve and covered a range of biaxiality ratios. In an attempt to elucidate the trends underlying the experimental results, finite element-based mechanistic models were used to analyse the effects of biaxial loading. For ductile fracture, a modified Gurson model was used and important effects on tearing behaviour were found for through-thickness cracked wide plates, as observed in upper shelf tests. For cleavage fracture, both simple T-stress methods and the Anderson-Dodds and Beremin models were used. Whilst the effect of biaxiality on surface cracked plates was small, a marked effect of biaxial loading was found for the through-thickness crack. To further validate the numerical predictions for cleavage fracture, TWI have performed an additional series of lower shelf through-thickness cracked biaxial wide plate fracture tests. These tests were performed using various biaxiality loading conditions varying from simple uniaxial loading, through equibiaxial loading, to a biaxiality ratio equivalent to a circumferential crack in a pressure vessel. These tests confirmed the prediction that there is a significant effect of biaxial loading on the cleavage fracture of through-thickness cracked plates. (orig.)

  5. MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.

    Science.gov (United States)

    Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan

    2016-02-01

    A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.

  6. Time Reversal UWB Communication System: A Novel Modulation Scheme with Experimental Validation

    Directory of Open Access Journals (Sweden)

    Khaleghi A

    2010-01-01

    Full Text Available A new modulation scheme is proposed for a time reversal (TR) ultra wide-band (UWB) communication system. The new modulation scheme uses binary pulse amplitude modulation (BPAM) and adds a new level of modulation to increase the data rate of a TR UWB communication system. Multiple data bits can be transmitted simultaneously at the cost of a little added interference. The bit error rate (BER) performance and the maximum achievable data rate of the new modulation scheme are theoretically analyzed. Two separate measurement campaigns are carried out to analyze the proposed modulation scheme. In the first campaign, the frequency responses of a typical indoor channel are measured and the performance is studied by simulations using the measured frequency responses. The theoretical and simulated performances are in strong agreement with each other. Furthermore, the BER performance of the proposed modulation scheme is compared with the performance of existing modulation schemes. It is shown that the proposed modulation scheme outperforms QAM and PAM in an AWGN channel. In the second campaign, an experimental validation of the proposed modulation scheme is performed. It is shown that the performances from the two measurement campaigns are in good agreement.

  7. Theoretical modeling and experimental validation of transport and separation properties of carbon nanotube electrospun membrane distillation

    KAUST Repository

    Lee, Jung Gil; Lee, Eui-Jong; Jeong, Sanghyun; Guo, Jiaxin; An, Alicia Kyoungjin; Guo, Hong; Kim, Joonha; Leiknes, TorOve; Ghaffour, NorEddine

    2016-01-01

    Developing a high-flux and selective membrane is required to make membrane distillation (MD) a more attractive desalination process. Among other characteristics, membrane hydrophobicity is significantly important to obtain high vapor transport and low wettability. In this study, a laboratory-fabricated carbon nanotube (CNT) composite electrospun (E-CNT) membrane was tested and showed a higher permeate flux than a poly(vinylidene fluoride-co-hexafluoropropylene) (PH) electrospun membrane (E-PH membrane) in a direct contact MD (DCMD) configuration. Incorporation of only 1% and 2% of CNTs resulted in an enhanced permeate flux with lower sensitivity to feed salinity while treating 35 and 70 g/L NaCl solutions. The experimental results and the mechanisms of the E-CNT membrane were validated by a proposed new step-modeling approach. The increased vapor transport in E-CNT membranes could not be explained by an enhancement of mass transfer alone for the given physico-chemical properties. However, the theoretical modeling approach, considering the heat and mass transfers simultaneously, successfully explained the enhanced flux in the DCMD process using E-CNT membranes. This indicates that the enhanced vapor transport in the E-CNT membrane is attributable to improvements in both mass and heat transfer brought about by the CNTs.

  8. Material characterization and non destructive testing by ultrasounds; modelling, simulation and experimental validation

    International Nuclear Information System (INIS)

    Noroy-Nadal, M.H.

    2002-06-01

    This dissertation presents research concerning the characterization of materials and non-destructive testing (NDT) by ultrasonics. Each topic involves three steps: modeling, computation and experimental validation. The materials studied are mainly metals. The dissertation is divided into four parts. The first one concerns the characterization of materials versus temperature. The determination of the shear modulus G(T) is studied in particular over a large temperature range and around the melting point. The second part is devoted to studies using photothermal devices, essentially focused on the modeling of the mechanical displacement and the stress field in coated materials. In this particular field of interest, applications concern the mechanical characterization of the coating, defect detection in the structure and, finally, the evaluation of coating adhesion. The third section is dedicated to microstructural characterization using acoustic microscopy. The evaluation of crystallographic texture is specifically addressed for metallic objects obtained by forming. Before concluding and pointing out some perspectives for this work, the last section concerns the introduction of optimization techniques applied to material characterization by acoustic microscopy. (author)

  9. Experimental validation of plugging during drop formation in a T-junction.

    Science.gov (United States)

    Abate, Adam R; Mary, Pascaline; van Steijn, Volkert; Weitz, David A

    2012-04-21

    At low capillary number, drop formation in a T-junction is dominated by interfacial effects: as the dispersed fluid flows into the drop maker nozzle, it blocks the path of the continuous fluid; this leads to a pressure rise in the continuous fluid that, in turn, squeezes on the dispersed fluid, inducing pinch-off of a drop. While the resulting drop volume predicted by this "squeezing" mechanism has been validated for a range of systems, as of yet, the pressure rise responsible for the actual pinch-off has not been observed experimentally. This is due to the challenge of measuring the pressures in a T-junction with the requisite speed, accuracy, and localization. Here, we present an empirical study of the pressures in a T-junction during drop formation. Using Laplace sensors, pressure probes we have developed, we confirm the central ideas of the squeezing mechanism; however, we also uncover other findings, including that the pressure of the dispersed fluid is not constant but rather oscillates in anti-phase with that of the continuous fluid. In addition, even at the highest capillary number for which monodisperse drops can be formed, pressure oscillations persist, indicating that drop formation in confined geometries does not transition to an entirely shear-driven mechanism, but to a mechanism combining squeezing and shearing.
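
    In the squeezing regime described above, the resulting plug size is often summarised by a linear scaling of the form L/w = 1 + alpha*(Q_d/Q_c), with alpha an order-one, geometry-dependent fit constant; the tiny sketch below simply evaluates this widely quoted relation for a few flow-rate ratios (illustrative values, not the authors' measurements).

        def plug_length(q_d, q_c, w, alpha=1.0):
            """Squeezing-regime scaling for the plug length in a T-junction,
            L / w = 1 + alpha * (q_d / q_c), with alpha an order-one,
            geometry-dependent fit constant (illustrative value here)."""
            return w * (1.0 + alpha * q_d / q_c)

        for ratio in (0.25, 0.5, 1.0, 2.0):
            print(f"q_d/q_c = {ratio:4.2f}  ->  plug length = {plug_length(ratio, 1.0, 100e-6) * 1e6:.0f} um")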

  10. External gear pumps operating with non-Newtonian fluids: Modelling and experimental validation

    Science.gov (United States)

    Rituraj, Fnu; Vacca, Andrea

    2018-06-01

    External gear pumps are used in various industries to pump non-Newtonian viscoelastic fluids like plastics, paints, inks, etc. For both design and analysis purposes, it is often of interest to understand the features of the displacing action realized by the meshing of the gears and to describe the behavior of the leakages for this kind of pump. However, very limited work can be found in the literature about methodologies suitable for modelling such phenomena. This article describes a technique for modelling external gear pumps that operate with non-Newtonian fluids. In particular, it explains how the displacing action of the unit can be modelled using a lumped parameter approach, which involves dividing the fluid domain into several control volumes and internal flow connections. This work is built upon the HYGESim simulation tool, conceived by the authors' research team in the last decade, which is for the first time extended to the simulation of non-Newtonian fluids. The article also describes several comparisons between simulation results and experimental data obtained from numerous experiments performed for validation of the presented methodology. Finally, the operation of the external gear pump with fluids having different viscosity characteristics is discussed.
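
    A hedged sketch of the two ingredients named above, the lumped-parameter (control volume) pressure build-up equation and a shear-rate-dependent viscosity, is given below; the power-law fluid constants, gap geometry and flow values are illustrative assumptions, not the HYGESim model or the validated pump data.

        import numpy as np

        def apparent_viscosity(gamma_dot, K=10.0, n=0.6):
            """Power-law (Ostwald-de Waele) fluid: eta = K * gamma_dot**(n - 1).
            K and n are illustrative values for a shear-thinning, ink-like fluid."""
            return K * np.maximum(gamma_dot, 1e-3) ** (n - 1.0)

        def chamber_pressure(q_in=2e-6, beta=1.5e9, V=1e-7, t_end=10e-3, dt=1e-6):
            """Explicit Euler integration of the lumped-parameter pressure build-up
            equation for a single control volume,
                dp/dt = (beta / V) * (q_in - q_leak),
            where the leakage through a plane gap uses the power-law apparent
            viscosity evaluated at the gap's characteristic shear rate.
            Geometry and flow values are illustrative, not a validated pump model."""
            h, w, L = 15e-6, 5e-3, 2e-3        # leakage gap height, width, length (m)
            p, t = 1e5, 0.0
            while t < t_end:
                eta = apparent_viscosity(1e4)  # initial shear-rate guess, 1/s
                for _ in range(8):             # fixed-point update of viscosity vs. shear rate
                    q_leak = w * h ** 3 * p / (12.0 * eta * L)
                    eta = apparent_viscosity(6.0 * abs(q_leak) / (w * h ** 2))
                p += dt * beta / V * (q_in - q_leak)
                t += dt
            return p

        print(f"quasi-steady chamber pressure: {chamber_pressure() / 1e5:.0f} bar")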

  11. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    Science.gov (United States)

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of designing for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affect hydrodynamics and hence overall performance. Computational Fluid Dynamics (CFD) provides a method for predicting how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. Sludge settling and rheology were found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  12. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    Science.gov (United States)

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.

  13. LES Modeling with Experimental Validation of a Compound Channel having Converging Floodplain

    Science.gov (United States)

    Mohanta, Abinash; Patra, K. C.

    2018-04-01

    Computational fluid dynamics (CFD) is often used to predict flow structures in developing areas of a flow field for the determination of the velocity field, pressure, shear stresses, the effect of turbulence and others. A two-phase, three-dimensional CFD model along with the large eddy simulation (LES) approach is used to solve the turbulence equations. This study aims to validate CFD simulations of free surface (open channel) flow using the volume of fluid method, by comparison with data observed in the hydraulics laboratory of the National Institute of Technology, Rourkela. The finite volume method with a dynamic subgrid-scale model was used for a constant aspect ratio and convergence condition. The results show that the secondary flow and centrifugal force influence the flow pattern and show good agreement with experimental data. In this paper, over-bank flows have been numerically simulated using LES in order to accurately predict open channel flow behaviour. The LES results are shown to accurately predict the flow features, specifically the distribution of secondary circulations, both for in-bank channels and for over-bank channels at varying depth and width ratios in symmetrically converging floodplain compound sections.

  14. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance.

    Science.gov (United States)

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.
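
    The effect of assembling many small discrete magnets can be previewed by superposing point-dipole fields, a common far-field approximation (this is only an illustration of the one-sided Halbach behaviour, not the authors' numerical simulations or measured field maps; the pitch and moment values are assumed).

        import numpy as np

        MU0 = 4e-7 * np.pi

        def dipole_field(r_obs, r_dip, m):
            """Flux density (T) at r_obs from a point dipole of moment m (A m^2) at r_dip,
            a common far-field approximation for a small discrete magnet."""
            r = np.asarray(r_obs, float) - np.asarray(r_dip, float)
            d = np.linalg.norm(r)
            return MU0 / (4.0 * np.pi) * (3.0 * r * (m @ r) / d ** 5 - m / d ** 3)

        # Linear-Halbach-like row along x: the magnetisation direction rotates by 90 deg
        # from magnet to magnet (in the x-z plane), which reinforces the field on one
        # side of the array and suppresses it on the other.
        pitch, moment = 0.012, 1.0                       # m, A m^2 (assumed values)
        magnets = [((i * pitch, 0.0, 0.0),
                    moment * np.array([np.cos(i * np.pi / 2), 0.0, np.sin(i * np.pi / 2)]))
                   for i in range(-5, 6)]

        for z in (0.02, -0.02):                          # strong side vs. weak side, 2 cm away
            B = sum(dipole_field((0.0, 0.0, z), pos, m) for pos, m in magnets)
            print(f"z = {z:+.2f} m   |B| = {np.linalg.norm(B) * 1e3:.2f} mT")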

  15. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    Directory of Open Access Journals (Sweden)

    Guohai Liu

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that not only meets the fault-tolerance requirements but also retains the high torque density. The operating principle of the proposed machine is analyzed. The design process and optimization are presented in detail, covering the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. The machine performance is evaluated using the time-stepping finite element method (TS-FEM). Finally, the FT-PMV machine is manufactured, and experimental results are presented to validate the theoretical analysis.
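
    The magnetic gearing effect referred to above is usually summarized by the flux-modulation condition of the PMV literature (quoted here generically; the specific slot/pole combination chosen in the paper is not reproduced):

        Z_{FMP} = p_r + p_s, \qquad G_r = \frac{p_r}{p_s},

    where p_s is the pole-pair number of the stator (armature) winding, p_r the pole-pair number of the rotor permanent magnets, Z_{FMP} the number of flux-modulation poles and G_r the resulting magnetic gear ratio, i.e. the torque magnification relative to a conventional machine with p_s pole pairs.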

  16. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, the experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to their further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is also paid to computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  17. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram of the sampled ESUs and that of the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of the sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces labour-intensive in-situ characterization at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet), which need a spatially distributed and temporally fixed sampling design.
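
    A minimal sketch of the Overlapping Area criterion as described: the OA between two normalized frequency histograms is the sum of the bin-wise minima. The function names, bin settings, and synthetic NDVI values below are illustrative assumptions.

        import numpy as np

        def overlapping_area(sample_ndvi, population_ndvi, bins=20, value_range=(-1.0, 1.0)):
            """Overlap between the NDVI histogram of the sampled ESUs and that of the whole area."""
            p, _ = np.histogram(sample_ndvi, bins=bins, range=value_range)
            q, _ = np.histogram(population_ndvi, bins=bins, range=value_range)
            p = p / p.sum()                   # normalize to relative frequencies
            q = q / q.sum()
            return np.minimum(p, q).sum()     # 1.0 means identical distributions

        # Illustrative usage with synthetic NDVI values
        rng = np.random.default_rng(0)
        area = rng.normal(0.55, 0.15, 10_000).clip(-1, 1)   # "entire study area"
        esus = rng.choice(area, size=30, replace=False)      # a candidate set of ESUs
        print(f"OA = {overlapping_area(esus, area):.3f}")

    In a multi-temporal setting, the same measure can simply be averaged over the NDVI images of all dates to score a candidate sample, which is how a "mean annual OA" type of figure can be obtained.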

  18. Theoretical modeling and experimental validation of transport and separation properties of carbon nanotube electrospun membrane distillation

    KAUST Repository

    Lee, Jung Gil

    2016-12-27

    Developing a high-flux and selective membrane is required to make membrane distillation (MD) a more attractive desalination process. Amongst other characteristics, membrane hydrophobicity is particularly important for obtaining high vapor transport and low wettability. In this study, a laboratory-fabricated carbon nanotube (CNT) composite electrospun (E-CNT) membrane was tested and showed a higher permeate flux than a poly(vinylidene fluoride-co-hexafluoropropylene) (PH) electrospun membrane (E-PH membrane) in a direct contact MD (DCMD) configuration. Incorporation of only 1% and 2% CNTs resulted in an enhanced permeate flux with lower sensitivity to feed salinity while treating 35 and 70 g/L NaCl solutions. The experimental results and the mechanisms of the E-CNT membrane were validated by a proposed new step-modeling approach. The increased vapor transport in E-CNT membranes could not be explained by an enhancement of mass transfer alone at given physico-chemical properties. However, a theoretical modeling approach considering heat and mass transfer simultaneously successfully explained the enhanced flux in the DCMD process using E-CNT membranes. This indicates that the enhanced vapor transport in the E-CNT membrane is attributable to improvements in both mass and heat transfer brought about by the CNTs.
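
    As background to the step-modeling approach described above, the permeate flux in DCMD is commonly expressed through a membrane mass-transfer coefficient and the vapor pressure difference across the membrane (a generic textbook relation, not the authors' full model):

        J = C_m \left[ p_v(T_{f,m}) - p_v(T_{p,m}) \right],

    where C_m lumps the vapor-transport resistances of the membrane and p_v(T) is the water vapor pressure evaluated at the membrane-surface temperatures on the feed (f,m) and permeate (p,m) sides, e.g. with the Antoine equation. The coupling to heat transfer enters through T_{f,m} and T_{p,m}, which is why mass transfer alone cannot explain the measured flux enhancement.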

  19. Thermal fluid-solid interaction model and experimental validation for hydrostatic mechanical face seals

    Science.gov (United States)

    Huang, Weifeng; Liao, Chuanjun; Liu, Xiangfeng; Suo, Shuangfu; Liu, Ying; Wang, Yuming

    2014-09-01

    Hydrostatic mechanical face seals for reactor coolant pumps are very important for the safety and reliability of pressurized-water reactor power plants. More accurate models on the operating mechanism of the seals are needed to help improve their performance. The thermal fluid-solid interaction (TFSI) mechanism of the hydrostatic seal is investigated in this study. Numerical models of the flow field and seal assembly are developed. Based on the mechanism for the continuity condition of the physical quantities at the fluid-solid interface, an on-line numerical TFSI model for the hydrostatic mechanical seal is proposed using an iterative coupling method. Dynamic mesh technology is adopted to adapt to the changing boundary shape. Experiments were performed on a test rig using a full-size test seal to obtain the leakage rate as a function of the differential pressure. The effectiveness and accuracy of the TFSI model were verified by comparing the simulation results and experimental data. Using the TFSI model, the behavior of the seal is presented, including mechanical and thermal deformation, and the temperature field. The influences of the rotating speed and differential pressure of the sealing device on the temperature field, which occur widely in the actual use of the seal, are studied. This research proposes an on-line and assembly-based TFSI model for hydrostatic mechanical face seals, and the model is validated by full-sized experiments.
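
    A minimal sketch of the partitioned, iterative coupling structure described above, reduced to scalar surrogates: a "fluid" step returns a load for the current gap, a "solid" step returns a deformation under that load, and the two are iterated with under-relaxation until the interface quantities stop changing. The surrogate relations, parameter values, and function names are hypothetical; only the exchange/convergence loop is illustrated, not the authors' seal model.

        def fluid_solve(h):
            """Surrogate fluid solver: opening pressure that drops as the gap h [m] opens."""
            return 1.0e6 / (1.0 + h / 1.0e-6)          # Pa, illustrative pressure-gap relation

        def solid_solve(p):
            """Surrogate structural/thermal solver: deformation proportional to the load."""
            compliance = 2.0e-11                        # m/Pa, illustrative
            return compliance * p                       # m

        def coupled_solution(h0=5e-6, tol=1e-12, max_iter=100, relax=0.5):
            h = h0
            for _ in range(max_iter):
                p = fluid_solve(h)                      # fluid field on current geometry
                d = solid_solve(p)                      # deformation under the fluid load
                h_new = h0 + d                          # update the gap (mesh would move here)
                if abs(h_new - h) < tol:
                    return h_new, p
                h = (1 - relax) * h + relax * h_new     # under-relaxation for stability
            raise RuntimeError("coupling loop did not converge")

        print(coupled_solution())

    In the full TFSI model the scalar exchange above is replaced by field data passed across the fluid-solid interface, with dynamic mesh updates playing the role of the gap update.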

  20. Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based

    Science.gov (United States)

    Anderson, Kevin

    2012-01-01

    Pulsars are a form of variable celestial source that has been shown to be usable as an aid for autonomous, deep space navigation. Sources emitting in the X-ray band are particularly well suited for navigation because of the smaller detector sizes required. In this paper, X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed signal frequency is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is applied. The outputs of this phase estimation process were then fed through a digital phase-locked loop (DPLL) in order to reduce the errors and produce estimates of the Doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space Flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons being emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used to generate test scenarios. Three different simulated detector trajectories were tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time-varying frequency. All of the constant-frequency cases were processed using the MLE, and it was shown that they tracked the initial phase to within 0.15% in the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in
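
    A minimal sketch of maximum-likelihood phase estimation for photon arrivals modeled as a non-homogeneous Poisson process with a known periodic rate template, for the constant-frequency case. The Gaussian pulse template, rate levels, frequency, observation length, and grid-search resolution are illustrative assumptions, not the GXLT processing chain.

        import numpy as np

        def log_likelihood(photon_phases, phi, template, lam_b=1.0, lam_s=5.0):
            """Sum of log-rates at the photon arrival phases, shifted by candidate phase phi."""
            rate = lam_b + lam_s * template((photon_phases + phi) % 1.0)
            return np.sum(np.log(rate))

        def mle_phase(arrival_times, freq, template, n_grid=2000):
            """Grid-search MLE of the pulse phase for a constant observed frequency."""
            photon_phases = (arrival_times * freq) % 1.0
            grid = np.linspace(0.0, 1.0, n_grid, endpoint=False)
            ll = [log_likelihood(photon_phases, phi, template) for phi in grid]
            return grid[int(np.argmax(ll))]

        # Illustrative test: Gaussian-shaped pulse profile, photons generated by thinning.
        profile = lambda x: np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
        rng = np.random.default_rng(1)
        f0, true_phase, t_obs = 29.6, 0.3, 50.0              # Hz, cycles, s (illustrative)
        t = np.sort(rng.uniform(0, t_obs, 20000))             # candidate arrival times
        keep = rng.uniform(size=t.size) < (1 + 5 * profile((t * f0 + true_phase) % 1)) / 6
        print("estimated phase:", mle_phase(t[keep], f0, profile))

    For a drifting frequency, the same estimator would be applied block-wise over short windows, which is the role of the block-MLE plus DPLL cascade described in the abstract.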

  1. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail

  2. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all

  3. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, C A; Moran, M J

    2007-08-21

    The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS

  4. Experimental validation of a true-scale morphing flap for large civil aircraft applications

    Science.gov (United States)

    Pecora, R.; Amoroso, F.; Arena, M.; Noviello, M. C.; Rea, F.

    2017-04-01

    systems were duly analyzed and experimentally validated, thus proving the overall device's compliance with industrial standards and applicable airworthiness requirements.

  5. Chemical looping reforming in packed-bed reactors : modelling, experimental validation and large-scale reactor design

    NARCIS (Netherlands)

    Spallina, V.; Marinello, B.; Gallucci, F.; Romano, M.C.; van Sint Annaland, M.

    This paper addresses the experimental demonstration and model validation of chemical looping reforming in dynamically operated packed-bed reactors for the production of H2 or CH3OH with integrated CO2 capture. This process is a combination of auto-thermal and steam methane reforming and is carried

  6. Texas Panhandle soil-crop-beef food chain for uranium: a dynamic model validated by experimental data

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Wallwork-Barber, K.M.; Rodgers, J.C.; Gallegos, A.F.

    1982-01-01

    Long-term simulations of uranium transport in the soil-crop-beef food chain were performed using the BIOTRAN model. Mean values of experimental data from an extensive Pantex beef cattle study are presented, and these data were used to validate the computer model. Measurements of uranium in air, soil, water, range grasses, feed, and cattle tissues are compared to simulated uranium output values in these matrices when the BIOTRAN model was set to the measured soil and air values. The simulations agreed well with the experimental data, even though metabolic details for ruminants and the chemical form of uranium in the environment remain to be studied.

  7. Statistical method for the determination of the ignition energy of dust cloud - experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, S.; Lebecki, K.; Gillard, P.; Youinou, L.; Baudry, G. [University of Orleans, Bourges (France)

    2010-05-15

    Powdery materials such as metallic or polymer powders play a considerable role in many industrial processes. Their use requires the introduction of preventive safeguards to ensure plant safety. The mitigation of an explosion hazard, according to the ATEX 137 Directive (1999/92/EU), requires the assessment of dust ignition sensitivity. The PRISME laboratory (University of Orleans) has developed an experimental set-up and methodology, using the Langlie test, for the quick determination of the explosion sensitivity of dusts. This method requires only 20 shots, and ignition sensitivity is evaluated through E50 (the energy with an ignition probability of 0.5). A Hartmann tube with a volume of 1.3 l was designed and built. Many results on the ignition energy thresholds of partially oxidised aluminium were obtained using this experimental device and compared to the literature. The evolution of E50 follows that of the MIE, but their respective values differ: the MIE is lower than E50, and the link between E50 and the MIE has not been elucidated. In this paper, the Langlie method is explained in detail for the determination of the parameters (mean value E50 and standard deviation σ) of the associated statistical law. The ignition probability versus applied energy is first measured for lycopodium in order to validate the method. A comparison between the normal and the lognormal law was carried out, and the best fit was obtained with the lognormal law. In a second part, the Langlie test was performed on different dusts such as aluminium, cornstarch, lycopodium, coal, and PA12 in order to determine E50 and σ for each dust. The energies E05 and E10, corresponding respectively to ignition probabilities of 0.05 and 0.1, are determined with the lognormal law and compared to MIE values found in the literature. The E05 and E10 ignition energies were found to be very close to each other and in good agreement with MIE values from the literature.
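
    A minimal sketch of how the E50, E05, and E10 energies follow from a lognormal ignition-probability law once a Langlie sequence has provided the mean and standard deviation of ln(E); the parameter values below are placeholders, not results from the paper.

        import math
        from statistics import NormalDist

        def ignition_probability(E, mu, sigma):
            """Lognormal law: probability of ignition at applied energy E (units of exp(mu))."""
            return NormalDist().cdf((math.log(E) - mu) / sigma)

        def energy_quantile(p, mu, sigma):
            """Energy with ignition probability p under the lognormal law."""
            return math.exp(mu + sigma * NormalDist().inv_cdf(p))

        # Placeholder parameters (mJ scale), e.g. as returned by a Langlie sequence of ~20 shots
        mu, sigma = math.log(25.0), 0.6
        for p in (0.05, 0.10, 0.50):
            print(f"E{int(p * 100):02d} = {energy_quantile(p, mu, sigma):.1f} mJ")
        print("P(ignition at 10 mJ) =", round(ignition_probability(10.0, mu, sigma), 3))

    Comparing the low quantiles E05 and E10 with published MIE values is then a one-line computation once mu and sigma are fitted.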

  8. Process simulation and experimental validation of Hot Metal Gas Forming with new press hardening steels

    Science.gov (United States)

    Paul, A.; Reuther, F.; Neumann, S.; Albert, A.; Landgrebe, D.

    2017-09-01

    One field of work at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in Chemnitz is industry-applied research in Hot Metal Gas Forming combined with press hardening in one process step. In this paper, the results of investigations on new press hardening steels from SSAB AB (Docol®1800 Bor and Docol®2000 Bor) are presented. Hot tensile tests recorded by the project partner (University of West Bohemia, Faculty of Mechanical Engineering) were used to create a material model for thermo-mechanical forming simulations. For this purpose, the provided raw data were converted into flow-curve approximations of the true stress-true strain curves for both materials and afterwards integrated into an LS-DYNA simulation model of Hot Metal Gas Forming with all relevant boundary conditions and sub-stages. Preliminary experimental tests were carried out using a tool at room temperature to permit evaluation of the forming behaviour of Docol 1800 Bor and Docol 2000 Bor tubes as well as validation of the simulation model. Using this demonstrator geometry (outer diameter 57 mm, tube length 300 mm, wall thickness 1.5 mm), the intention was to perform a series of tests with different furnace temperatures (from 870 °C to 1035 °C), maximum internal pressures (up to 67 MPa) and pressure build-up rates (up to 40 MPa/s) to evaluate the formability of Docol 1800 Bor and Docol 2000 Bor. Selected demonstrator parts produced in this way were subsequently analysed by wall thickness and hardness measurements. The tests were carried out using the completely modernized Dunkes/AP&T HS3-1500 hydroforming press at the Fraunhofer IWU. In summary, a consistent simulation model with all relevant sub-stages was successfully established in LS-DYNA. The computational results show a high correlation with the experimental data regarding the thinning behaviour. The Hot Metal Gas Forming of the demonstrator geometry was successfully established as well. Different hardness values

  9. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.-M.

    2008-01-01

    CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and temporal resolution. The paper reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments on a pipe with a nominal diameter of 200 mm are taken as the basis for the development and testing of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of other non

  10. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.M.

    2007-01-01

    CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and temporal resolution. The presentation reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments on a pipe with a nominal diameter of 200 mm are taken as the basis for the development and testing of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of

  11. Testing the Validity of Local Flux Laws in an Experimental Eroding Landscape

    Science.gov (United States)

    Sweeney, K. E.; Roering, J. J.; Ellis, C.

    2015-12-01

    Linking sediment transport to landscape evolution is fundamental to interpreting climate and tectonic signals from topography and sedimentary deposits. Most geomorphic process laws consist of simple continuum relationships between sediment flux and local topography. However, recent work has shown that nonlocal formulations, whereby sediment flux depends on upslope conditions, are more accurate descriptions of sediment motion, particularly in steep topography. Discriminating between local and nonlocal processes in natural landscapes is complicated by the scarcity of high-resolution topographic data and by the difficulty of measuring sediment flux. To test the validity of local formulations of sediment transport, we use an experimental erosive landscape that combines disturbance-driven, diffusive sediment transport and surface runoff. We conducted our experiments in the eXperimental Landscape Model at St. Anthony Falls Laboratory, a 0.5 x 0.5 m test flume filled with crystalline silica (D50 = 30 μm) mixed with water to increase cohesion and preclude surface infiltration. Topography is measured with a sheet laser scanner; total sediment flux is tracked with a series of load cells. We simulate uplift (relative base-level fall) by dropping two parallel weirs at the edges of the experiment. Diffusive sediment transport in our experiments is driven by rainsplash from a constant-head drip tank fitted with 625 blunt needles of fixed diameter; sediment is mobilized both through drop impact and the subsequent runoff of the drops. To drive advective transport, we produce surface runoff via a ring of misters that generate droplets too small to disturb the sediment surface on impact. Using the results from five experiments that systematically vary the duration of drip-box rainfall relative to misting rainfall, we calculate local erosion by differencing successive time-slices of topography and test whether these patterns are related to local topographic
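
    The local flux laws being tested are typified by linear hillslope diffusion, written here in its generic form (the diffusivity D and uplift rate U are placeholders, not the experimental values):

        q_s = -D\,\nabla z, \qquad \frac{\partial z}{\partial t} = U - \nabla \cdot q_s = U + D\,\nabla^2 z,

    where z is the surface elevation and q_s the volumetric sediment flux per unit width; a nonlocal formulation replaces this pointwise gradient dependence with a weighted integral over upslope conditions.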

  12. Structural Properties of Pure Simple Alcohols from Ethanol, Propanol, Butanol, Pentanol, to Hexanol: Comparing Monte Carlo Simulations with Experimental SAXS Data

    Czech Academy of Sciences Publication Activity Database

    Tomšič, M.; Jamnik, A.; Fritz-Popovski, G.; Glatter, O.; Vlček, Lukáš

    2007-01-01

    Vol. 111, No. 7 (2007), pp. 1738-1751. ISSN 1520-6106. Institutional research plan: CEZ:AV0Z40720504. Keywords: alcohols * SAXS * Monte Carlo. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 4.086, year: 2007

  13. Partition method and experimental validation for impact dynamics of flexible multibody system

    Science.gov (United States)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, highly transient, and strongly nonlinear dynamic process with variable boundaries. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches widely used in impact analysis come mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented in which the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee local accuracy, while the non-impact region is modeled using a modal reduction approach to raise the global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. A principle for how to partition the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation order of the non-impact region can be estimated from the highest frequency of the measured signal. The simulation results using the presented method are in good agreement with the experimental results, showing that the method is an effective formulation in terms of both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank-slider mechanism is investigated to strengthen this conclusion.

  14. Validation of the stream function method used for reconstruction of experimental ionospheric convection patterns

    Directory of Open Access Journals (Sweden)

    P.L. Israelevich

    In this study we test a stream function method suggested by Israelevich and Ershkovich for the instantaneous reconstruction of global, high-latitude ionospheric convection patterns from a limited set of experimental observations, namely, from electric field or ion drift velocity vector measurements taken along two polar satellite orbits only. These two satellite passes subdivide the polar cap into several adjacent areas. The measured electric fields or ion drifts can be considered as boundary conditions for those areas (together with the zero electric potential condition at the low-latitude boundary), and the entire ionospheric convection pattern can be reconstructed as a solution of the boundary value problem for the stream function without any preliminary information on ionospheric conductivities. In order to validate the stream function method, we utilized the IZMIRAN electrodynamic model (IZMEM), recently calibrated with the DMSP ionospheric electrostatic potential observations. For the sake of simplicity, we took the modeled electric fields along the noon-midnight and dawn-dusk meridians as the boundary conditions. Then, the solution(s) of the boundary value problem (i.e., a reconstructed potential distribution over the entire polar region) is compared with the original IZMEM/DMSP electric potential distribution(s), as well as with various cross cuts of the polar cap. It is found that the reconstructed convection patterns are in good agreement with the original modelled patterns in both the northern and southern polar caps. The analysis is carried out for winter and summer conditions, as well as for a number of configurations of the interplanetary magnetic field.

    Key words: Ionosphere (electric fields and currents; plasma convection; modelling and forecasting)
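
    A minimal sketch of an elliptic boundary value problem of the kind described in the record above, here reduced to a Laplace equation on a square grid with Dirichlet data imposed on the outer (low-latitude) boundary and along two orthogonal "satellite pass" cuts. The grid, the imposed boundary values, the choice of Laplace's equation, and all variable names are simplifying assumptions; the actual method solves for the stream function over the real polar-cap geometry.

        import numpy as np

        def solve_laplace(potential, fixed_mask, n_iter=5000):
            """Jacobi relaxation: free nodes converge to the average of their neighbours,
            nodes flagged in fixed_mask (boundary + satellite tracks) keep their imposed values."""
            u = potential.copy()
            for _ in range(n_iter):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(fixed_mask, u, avg)
            return u

        n = 81
        u = np.zeros((n, n))
        fixed = np.zeros((n, n), dtype=bool)

        # Zero potential on the low-latitude boundary of the cap (here: the grid edge)
        fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True

        # Imposed values along two orthogonal "satellite passes" (noon-midnight, dawn-dusk cuts)
        u[n // 2, :] = np.sin(np.linspace(0, np.pi, n)) * 30.0     # kV, illustrative
        u[:, n // 2] = -np.sin(np.linspace(0, np.pi, n)) * 30.0
        fixed[n // 2, :] = True
        fixed[:, n // 2] = True

        solution = solve_laplace(u, fixed)
        print("potential extremes [kV]:", solution.min().round(1), solution.max().round(1))

    The two cuts partition the domain into four sub-areas, each of which is filled in purely from its own boundary data, mirroring how the two satellite passes subdivide the polar cap in the method being validated.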

  15. Experimental validation of a method for removing the capacitive leakage artifact from electrical bioimpedance spectroscopy measurements

    International Nuclear Information System (INIS)

    Buendia, R; Seoane, F; Gil-Pita, R

    2010-01-01

    Often when performing electrical bioimpedance (EBI) spectroscopy measurements, the obtained EBI data present a hook-like deviation, which is most noticeable at high frequencies in the impedance plane. The deviation is due to a capacitive leakage effect caused by the presence of stray capacitances. In addition to the data deviation being remarkably noticeable at high frequencies in the phase and the reactance spectra, the measured EBI is also altered in the resistance and the modulus. If this EBI data deviation is not properly removed, it interferes with subsequent data analysis processes, especially with Cole model-based analyses. In other words, to perform any accurate analysis of the EBI spectroscopy data, the hook deviation must be properly removed. Td compensation is a method used to compensate the hook deviation present in EBI data; it consists of mult