WorldWideScience

Sample records for experimentally calibrated computational

  1. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that, when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration with independent sampling, except that we generate the samples beforehand and replace the candidate-acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that agree with intuition, improve the accuracy of experimental predictions, and decrease their uncertainty. (author)
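
    The record describes the procedure only in prose; as a hedged illustration (not the authors' code), the sketch below shows the sample-then-weight idea with a toy one-parameter emulator and a Gaussian likelihood. The emulator, the observed value, and all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def emulator(theta):
    # stand-in for the BMARS emulator of the computer model (hypothetical)
    return 2.0 * theta + 1.0

y_obs, sigma = 3.0, 0.2                      # measured response and its uncertainty
theta_prior = rng.uniform(0.0, 2.0, 10_000)  # samples generated beforehand

# likelihood weights replace the MCMC candidate-acceptance step
w = np.exp(-0.5 * ((emulator(theta_prior) - y_obs) / sigma) ** 2)
w /= w.sum()

# weighted resampling yields draws from the posterior of the uncertain input
theta_post = rng.choice(theta_prior, size=5_000, p=w)
print(theta_post.mean(), theta_post.std())
```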

  2. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and to perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique, suitable for the mixture-model framework, that mitigates the computational overhead due to the consideration of multiple computer models. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
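
    As a rough sketch of the central idea only: input-dependent weights blending two computer models. Here the weight function is fixed by hand, whereas the paper infers it in a Bayesian way; both models and the weight form are hypothetical.

```python
import numpy as np

def model_a(x):
    # first stand-in "computer model"
    return np.sin(x)

def model_b(x):
    # second stand-in "computer model" (a truncated series)
    return x - x**3 / 6.0

def weight_a(x):
    # input-dependent mixture weight in [0, 1]; chosen by hand here,
    # inferred from data in the paper's Bayesian framework
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))

def mixture(x):
    wa = weight_a(x)
    return wa * model_a(x) + (1.0 - wa) * model_b(x)

print(mixture(np.linspace(0.0, 2.0, 5)))
```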

  3. Experimentally calibrated computational chemistry of tryptophan hydroxylase: Trans influence, hydrogen-bonding, and 18-electron rule govern O-2-activation

    DEFF Research Database (Denmark)

    Haahr, Lærke Tvedebrink; Kepp, Kasper Planeta; Boesen, Jane

    2010-01-01

    Insight into the nature of oxygen activation in tryptophan hydroxylase has been obtained from density functional computations. Conformations of O2-bound intermediates have been studied with oxygen trans to glutamate and histidine, respectively. An O2-adduct with O2 trans to histidine (Ohis ... ) towards the cofactor and a more activated O–O bond (1.33 Å) than in Oglu (1.30 Å). It is shown that the cofactor can hydrogen bond to O2 and activate the O–O bond further (from 1.33 to 1.38 Å). The Ohis intermediate leads to a ferryl intermediate (Fhis) with an isomer shift of 0.34 mm/s, also consistent with the experimental value (0.25 mm/s), which we propose as the structure of the hydroxylating intermediate, with the tryptophan substrate well located for further reaction 3.5 Å from the ferryl group. Based on the optimized transition states, the activation barriers for the two paths (glu and his) are similar, so...

  4. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

    Proton computed tomography (pCT) deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers run into difficulties: only a few data sets are available, and the existing data were acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e. the range-energy dependence normalized on the range scale by the full projected CSDA range for the given initial proton energy in the given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now easily be scaled to typical pCT energies.
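
    As a sketch of the normalization described above (not code from the paper): a measured point maps onto the reduced curve by dividing the range by the full projected CSDA range for the initial energy (looked up, e.g., in NIST PSTAR) and the final energy by the initial energy. The numbers below are illustrative, not PSTAR values.

```python
def reduced_point(measured_range_mm, e_final_mev, e_initial_mev, csda_range_mm):
    # returns (reduced energy, reduced range) for one measurement;
    # csda_range_mm is the CSDA range for e_initial_mev in the material
    return e_final_mev / e_initial_mev, measured_range_mm / csda_range_mm

# illustrative numbers only
print(reduced_point(measured_range_mm=30.0, e_final_mev=60.0,
                    e_initial_mev=100.0, csda_range_mm=77.0))
```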

  5. CVM-X: Calibrating contingent values with experimental auction markets

    International Nuclear Information System (INIS)

    Fox, J.A.; Shogren, J.F.; Hayes, D.J.; Kliebenstein, J.B.

    1998-01-01

    We design and implement a method, CVM-X, to calibrate hypothetical survey values using experimental auction markets. We test the procedure using consumer willingness-to-pay for irradiated/nonirradiated meat. Our results show that calibration factors for those who favor the irradiation process (0.67-0.69) are less severe than for those with an initial dislike of the process (0.55-0.59), suggesting that calibration may be commodity specific.
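
    As a worked illustration of how such factors are applied: the auction-derived factor rescales a stated willingness-to-pay. The factors are the mid-range values reported above; the $10 stated value is invented.

```python
def calibrated_wtp(stated_wtp, factor):
    # rescale a hypothetical survey value by the auction-derived factor
    return factor * stated_wtp

# favorable vs. unfavorable groups, using mid-range reported factors
print(calibrated_wtp(10.0, 0.68), calibrated_wtp(10.0, 0.57))
```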

  6. Exploratory Experimentation and Computation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-02-25

    We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

  7. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
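
    The record does not give the model or the matrix algebra; as a hedged sketch, the following toy example assembles a Fisher Information Matrix for a Michaelis-Menten-type initial-rate model from finite-difference sensitivities, assuming independent Gaussian noise. The model form and all numbers are assumptions, not the paper's.

```python
import numpy as np

def rate(s, p):
    # hypothetical initial-rate model v(S) = vmax*S/(km + S)
    vmax, km = p
    return vmax * s / (km + s)

def fisher_information(s, p, sigma, h=1e-6):
    # FIM from finite-difference sensitivities, Gaussian noise level sigma
    p = np.asarray(p, dtype=float)
    J = np.empty((s.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h * max(abs(p[j]), 1.0)
        J[:, j] = (rate(s, p + dp) - rate(s, p - dp)) / (2.0 * dp[j])
    return J.T @ J / sigma**2

s = np.linspace(0.1, 5.0, 8)        # substrate concentrations (arbitrary)
F = fisher_information(s, [1.0, 0.5], sigma=0.05)
cov = np.linalg.inv(F)              # approximate parameter covariance
print(np.sqrt(np.diag(cov)))        # standard errors -> confidence ellipsoid
```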

  8. Experimental Mathematics and Computational Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These includes both applications of experimental mathematics in statistics, as well as statistical methods applied to computational mathematics.

  9. Experimental facility and void fraction calibration methods for impedance probes

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Fernando L. de; Rocha, Marcelo S., E-mail: floliveira@ipen.br, E-mail: msrocha@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    An experimental facility was designed and constructed to calibrate a capacitance probe for gas-liquid flow void fraction measurements. The facility consists of a metallic rack holding a vertical, 2,300 mm high glass tube of 38 mm ID filled with stagnant water, with a compressed-air bubbling system simulating the gas phase (vapor). At the lower part, a mixing section with a porous media element releases the air bubbles into the water, and the compressed air flow is measured by two calibrated rotameters. At the upper part, a stagnant water tank separates the liquid and gas. Two pressure taps located near the lower and upper ends of the glass tube allow pressure difference measurement. The pressure difference is used in the calibration method for low void fraction values (0-15%), as described in the work. Two electrically controlled quick-closing valves, installed between the porous media element and the upper separation tank, are used to calibrate the capacitance probe at high void fraction values (15-50%). The experimental facility design, construction, capacitance probe calibration methods and results, as well as flow pattern visualization, are presented. Finally, the capacitance probe will be installed on a natural circulation circuit mounted at the Nuclear Engineering Center (CEN/IPEN/CNEN-SP) for measurement of the instantaneous bulk void fraction. The instantaneous signals generated by the capacitance probe will allow the determination of the natural circulation loop global energy balance. (author)
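
    A minimal sketch of the low-void-fraction method described above, assuming the pressure difference between the taps is purely hydrostatic (friction and acceleration terms neglected); densities and readings are illustrative.

```python
RHO_L, RHO_G, G = 998.0, 1.2, 9.81   # water and air densities (kg/m3), g (m/s2)

def void_fraction(dp_pa, height_m):
    # hydrostatic balance over the column between the two pressure taps
    return (RHO_L * G * height_m - dp_pa) / ((RHO_L - RHO_G) * G * height_m)

print(void_fraction(dp_pa=21_000.0, height_m=2.3))  # ~0.07, in the 0-15% range
```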

  10. Experimental facility and void fraction calibration methods for impedance probes

    International Nuclear Information System (INIS)

    Oliveira, Fernando L. de; Rocha, Marcelo S.

    2013-01-01

    An experimental facility was designed and constructed to calibrate a capacitance probe for gas-liquid flow void fraction measurements. The facility consists of a metallic rack holding a vertical, 2,300 mm high glass tube of 38 mm ID filled with stagnant water, with a compressed-air bubbling system simulating the gas phase (vapor). At the lower part, a mixing section with a porous media element releases the air bubbles into the water, and the compressed air flow is measured by two calibrated rotameters. At the upper part, a stagnant water tank separates the liquid and gas. Two pressure taps located near the lower and upper ends of the glass tube allow pressure difference measurement. The pressure difference is used in the calibration method for low void fraction values (0-15%), as described in the work. Two electrically controlled quick-closing valves, installed between the porous media element and the upper separation tank, are used to calibrate the capacitance probe at high void fraction values (15-50%). The experimental facility design, construction, capacitance probe calibration methods and results, as well as flow pattern visualization, are presented. Finally, the capacitance probe will be installed on a natural circulation circuit mounted at the Nuclear Engineering Center (CEN/IPEN/CNEN-SP) for measurement of the instantaneous bulk void fraction. The instantaneous signals generated by the capacitance probe will allow the determination of the natural circulation loop global energy balance. (author)

  11. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  12. Experimental comparison between total calibration factors and components calibration factors of reference dosemeters used in secondary standard laboratory dosemeters

    International Nuclear Information System (INIS)

    Silva, T.A. da.

    1981-06-01

    A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author) [pt
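
    As a rough sketch of the comparison being made (not from the paper): the product of the component calibration factors is checked against the overall factor within the experimental uncertainty of the overall procedure. All numbers are hypothetical.

```python
def factors_consistent(n_chamber, n_electrometer, n_overall, rel_unc):
    # is the product of component factors equal to the overall factor,
    # within the stated relative experimental uncertainty?
    n_component = n_chamber * n_electrometer
    return abs(n_component - n_overall) / n_overall <= rel_unc

print(factors_consistent(1.012, 0.995, 1.008, rel_unc=0.005))  # hypothetical
```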

  13. Experimental and calculated calibration of ionization chambers with air circulation

    CERN Document Server

    Peetermans, A

    1972-01-01

    The report describes the method followed in order to calibrate the different ionization chambers with air circulation used by the 'Health Physics Group'. The calculations agree closely with experiment for the isotopes cited previously (11C, 13N, 15O, 41Ar, 14O, 38Cl), as well as for 85Kr, 133Xe, 14C and tritium, which are used for the experimental standardisation of the different chambers.

  14. 3D artefact for concurrent scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises a carbon fibre tubular structure on which a number of reference ruby spheres are glued. The artefact is positioned and scanned together with the workpiece inside the CT scanner...

  15. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes a method for the peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within ±3.8%, with one exception at about ±7.4%
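
    The record does not include the simulation itself; as a hedged stand-in, this toy Monte Carlo computes only the geometric (solid-angle) part of a detection efficiency for a disc detector on the source axis, to illustrate the sampling idea. The radius, distance and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
R, D, N = 3.0, 10.0, 1_000_000      # detector radius and distance (cm), samples

cos_t = rng.uniform(-1.0, 1.0, N)   # isotropic emission directions
sin_t = np.sqrt(1.0 - cos_t**2)
r_hit = D * sin_t / np.maximum(cos_t, 1e-12)  # radius where ray crosses z = D
eff = np.mean((cos_t > 0.0) & (r_hit <= R))
print(eff)                          # fraction of emitted photons hitting the disc
```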

  16. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the model's ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
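
    As an illustrative sketch of the IPM formulation (not the paper's optimizer): a brute-force search for the constant-width linear band of minimal spread that contains every observation. The linear model form, the slope grid, and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = 1.5 * x + 0.2 + rng.normal(0.0, 0.05, x.size)  # synthetic observations

best = None
for a in np.linspace(0.5, 2.5, 201):   # brute-force slope search
    r = y - a * x                       # residuals for this slope
    b = 0.5 * (r.max() + r.min())       # center of the covering band
    c = 0.5 * (r.max() - r.min())       # half-spread needed to cover all points
    if best is None or c < best[2]:
        best = (a, b, c)

a, b, c = best
print(f"interval model: {a:.3f}*x + {b:.3f} +/- {c:.3f}")
```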

  17. Experimental quantum computing without entanglement.

    Science.gov (United States)

    Lanyon, B P; Barbieri, M; Almeida, M P; White, A G

    2008-11-14

    Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord, except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.

  18. Computational integration of the phases and procedures of calibration processes for radioprotection

    International Nuclear Information System (INIS)

    Santos, Gleice R. dos; Thiago, Bibiana dos S.; Rocha, Felicia D.G.; Santos, Gelson P. dos; Potiens, Maria da Penha A.; Vivolo, Vitor

    2011-01-01

    This work carries out the computational integration of the phases of the calibration process using a single software package, from the arrival of the instrument at the Instrument Calibration Laboratory (LCI-IPEN) to the conclusion of the calibration procedures. The initial information, such as trade mark, model, manufacturer, and owner, together with the calibration records, is digitized only once, through to the emission of the calibration certificate

  19. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

    A mathematical model, based on three-group theory, for the theoretical calculation by computer of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with highly or less effective counters and with central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data make the programme useful not only for calibration but also for other research. Separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  20. Cosmic reionization on computers. I. Design and calibration of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States)

    2014-09-20

    Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from the first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h⁻¹ Mpc and 40 h⁻¹ Mpc boxes with spatial resolution reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.

  1. Calibration

    International Nuclear Information System (INIS)

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

    Procedures common to different methods of calibration of neutron moisture meters are outlined and laboratory and field calibration methods compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%

  2. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach on a clinical linear accelerator (Elekta Synergy RP) and on an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and a corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates the geometric parameters with a level of accuracy such that the quality of the CT reconstruction is not degraded by the estimation error. Sensitivity analysis shows uncertainty of 0.01 deg (around the beam direction) to 0.3 deg (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in

  3. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    International Nuclear Information System (INIS)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y.; Kado, Motohisa; Ling, Chen; Zhu, Gaohua; Banerjee, Debasish

    2015-01-01

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties

  4. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y., E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Kado, Motohisa [Higashifuji Technical Center, Toyota Motor Corporation, Susono, Shizuoka 410-1193 (Japan); Ling, Chen; Zhu, Gaohua; Banerjee, Debasish, E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Materials Research Department, Toyota Motor Engineering and Manufacturing North America, Inc., Ann Arbor, Michigan 48105 (United States)

    2015-12-28

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.

  5. Computations for a condenser. Experimental results

    International Nuclear Information System (INIS)

    Walden, Jean.

    1975-01-01

    Computations for condensers are presented with experimental results. The computations are concerned with the steam flux at the condenser input, and inside the tube bundle. Experimental results are given for the flux inside the condenser sleeve and the flow passing through the tube bundle [fr

  6. [Animal experimentation, computer simulation and surgical research].

    Science.gov (United States)

    Carpentier, Alain

    2009-11-01

    We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  7. The computer library of experimental neutron data

    International Nuclear Information System (INIS)

    Bychkov, V.M.; Manokhin, V.N.; Surgutanov, V.V.

    1976-05-01

    The paper describes the computer library of experimental neutron data at the Obninsk Nuclear Data Centre. The format of the library (EXFOR) and the system of programmes for supplying the library are briefly described. (author)

  8. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  9. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-03-01

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte Carlo computer code: Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137 Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60 Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of the matrix effect. (author)

  10. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  11. Program for large scanning-measuring table calibration on-line with the TPA-1001 computer

    International Nuclear Information System (INIS)

    Kalmykova, L.P.; Ososkov, G.A.; Pogodina, G.A.; Skryl', I.I.

    1975-01-01

    A variant of a programme for the complex calibration of the system of BPS-2 boards is described, in which the stage of measurement and accumulation of calibration data is separated from the stage of data processing on the CDC-1604A computer. Stage-by-stage calibration allows simultaneous, parallel measurements on all 6 boards. This mode of operating the boards on a computer-controlled line has been used for checking the operators' actions, with ongoing control of the data obtained. Mathematical formulae are followed by descriptions of the programmes. Reception and accumulation of the coordinates of the nodes of the calibrating plate, which carries a highly precise grid of straight lines, are accomplished by the DATREC programme running on the controlling TPA-1001 computer, with simultaneous calibration of all six BPS-2 boards. The DATREC programme is written in the SLANG-1 language. The CALBPS programme runs on the CDC-1604A computer, is written in FORTRAN, and calculates the transformation coefficients and the corresponding precision characteristics. The work has reduced the calibration time from 10-15 min per board to 10-15 min for all 6 boards. The time for accumulation in the TPA-1001 memory and for recording the calibration data on the CDC-608 tape recorder is 3 to 5 min; the remaining time is spent on processing on the larger CDC-1604A computer. Examples of typical output and some results of calibration measurements are given
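
    The record does not spell out the CALBPS mathematics; as a hedged sketch, transformation coefficients of this kind are commonly obtained as a least-squares affine map from the measured board coordinates to the known grid-node coordinates of the plate, e.g.:

```python
import numpy as np

def fit_affine(board_xy, plate_xy):
    # least-squares affine transform: [x y 1] @ coef ~= plate coordinates
    board_xy = np.asarray(board_xy, dtype=float)
    A = np.hstack([board_xy, np.ones((board_xy.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(plate_xy, dtype=float), rcond=None)
    return coef                      # 3x2 matrix: linear part plus offset row

# hypothetical measurements of three grid nodes (board units vs. plate mm)
board = [[0.1, 0.2], [10.2, 0.1], [0.0, 10.3]]
plate = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]]
print(fit_affine(board, plate))
```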

  12. "Calibration" system for spectral measurements and its experimental results

    Science.gov (United States)

    Bruchkouskaya, Sviatlana I.; Katkovsky, Leonid V.; Belyaev, Boris I.; Malyshev, Vladislav B.

    2017-04-01

    "Calibration" system has been developed at A. N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. It was designed for measuring the characteristics of spectral reflectance of all types of natural surfaces (test sites) in ground conditions or on board of aircraft carriers and has the following components: - Photospectroradiometr (PhSR) of high resolution with a range of 400-900 nm, equipped with a digital time-lapse video system; - Two-channel modular spectroradiometer (TMS) with a range of 400-900 nm, designed for simultaneous measurements of reflected light brightness of the underlying surface and the incident radiation from the upper hemisphere; - Two portable spectroradiometers (PSR-700 and PSR-1300) with a spectral range 800-1500 nm; 1200-2500 nm; - Scanning solar spectropolarimeter (SSP-600) with a range of 350-950 nm for measurements of direct sunlight and scattered by the atmosphere at different angles; "Calibration" system provides spectral resolution of 5.2 nm in a range of 400-900 nm, 10 nm in a range of 800-1500 nm and 15 nm in a range of 1200-2500 nm. Measurements of the optical characteristics of solar radiation (for determining parameters of the atmosphere) and that of underlying surface are synchronous. There is also a set of special nozzles for measurements of spectral brightness coefficients, polarization characteristics and spectral albedo. Spectra and images are geotagged to the navigation data (time, GPS). For the measurements of spectral reflection dependencies within "Monitoring-SG" framework expeditions to the Kuril Islands, Kursk aerospace test site and Kamchatka Peninsula were conducted in 2015 and 2016. The spectra of different underlying surfaces have been obtained: soils, plants and water objects, sedimentary and volcanic rocks. These surveys are a valuable material for further researches and selection of test facilities for flight calibration of space imaging systems. Information obtained

  13. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  14. Computational and Experimental Approaches to Visual Aesthetics

    Science.gov (United States)

    Brachmann, Anselm; Redies, Christoph

    2017-01-01

    Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491

  15. Absolute intensity calibration of the 32-channel heterodyne radiometer on experimental advanced superconducting tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.; Zhao, H. L.; Liu, Y., E-mail: liuyong@ipp.ac.cn; Li, E. Z.; Han, X.; Ti, A.; Hu, L. Q.; Zhang, X. D. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)

    2014-09-15

    This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal to noise ratio. Measured spectra and electron temperature profiles are compared with those from an independent calibrated Michelson interferometer, and there is a relatively good agreement between the results from the two different systems.

  16. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    Science.gov (United States)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  17. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    Science.gov (United States)

    2016-09-17

    Modern plasticity models contain numerous parameters that can be difficult and time consuming to fit using current methods. Calibrating such models, given this complexity, is a difficult and time consuming process that has historically been separate from the experimental testing. As such, additional...

  18. Numerical treatment of experimental data in calibration procedures

    International Nuclear Information System (INIS)

    Moreno, C.

    1993-06-01

    A discussion of a numerical procedure to find the proportionality factor between two measured quantities is given in the framework of the least-squares method. Variable, as well as constant, amounts of experimental uncertainty are considered for each variable along its measured range. The variance of the proportionality factor is explicitly given as a closed analytical expression valid for the general case. Limits of the results obtained here have been studied, allowing comparisons with those obtained using classical least-squares expressions. Analytical and numerical examples are also discussed. (author). 11 refs, 1 fig., 1 tab
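
    A minimal sketch consistent with the setting above: the proportionality factor of y ≈ a·x by least squares with a per-point uncertainty on each y, together with the closed-form variance of a. The data are illustrative; the paper's general treatment is broader.

```python
import numpy as np

def prop_factor(x, y, sigma):
    # weighted least-squares slope of y = a*x and its standard deviation:
    # a = sum(x*y/s^2) / sum(x^2/s^2), var(a) = 1 / sum(x^2/s^2)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    s2 = np.square(np.asarray(sigma, dtype=float))
    a = np.sum(x * y / s2) / np.sum(x * x / s2)
    var_a = 1.0 / np.sum(x * x / s2)
    return a, np.sqrt(var_a)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
print(prop_factor(x, y, sigma=[0.1, 0.1, 0.2, 0.2]))
```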

  19. Automatic calibration system of the temperature instrument display based on computer vision measuring

    Science.gov (United States)

    Li, Zhihong; Li, Jinze; Bao, Changchun; Hou, Guifeng; Liu, Chunxia; Cheng, Fang; Xiao, Nianxin

    2010-07-01

    With the development of computers and of image-processing and optical measurement techniques, measuring methods based on optical image processing are gradually maturing and coming into practical use. On this basis, we draw on many years of experience and on practical needs in temperature measurement and computer vision measurement to propose a fully automatic approach to calibrating temperature instrument displays that integrates computer vision measurement. It realizes synchronized acquisition of the display reading and the theoretical temperature value, improving calibration efficiency. Based on the least-squares fitting principle, combined with data processing and optimization theory, it rapidly and accurately realizes automatic acquisition and calibration of temperature.

  20. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    Science.gov (United States)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

    This paper presents a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic, and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) minus the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of QM.
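
    As a worked example of the correction defined above (readings hypothetical):

```python
def correction_percent(q_ref, q_meter):
    # reference flow rate Q minus meter reading QM, as a percentage of QM
    return 100.0 * (q_ref - q_meter) / q_meter

print(correction_percent(q_ref=50.00, q_meter=49.80))  # ~ +0.4 %
```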

  1. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  2. Experimental procedures for the calibration of scintillation cells used in the determination of radon gas concentrations

    International Nuclear Information System (INIS)

    Grenier, M; Bigu, J.

    1982-02-01

    Experimental and analytical procedures are described for the calibration of scintillation cells used for the determination of radon gas concentration. In-house designed and built scintillation cells, used routinely in the monitoring of radon gas in uranium mine underground environments and in the laboratory, were calibrated. The cells had a volume of approximately 158 cm3 and an α-counting efficiency ranging from 50% to 64%. Calibration factors for the cells were determined. Values ranged approximately from 0.177 cpm per pCi/L (4.77 cpm per Bq/L) to 0.224 cpm per pCi/L (6.05 cpm per Bq/L). The calibration facilities at the Elliot Lake Laboratory are briefly described.
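
    As a worked example of what such a calibration factor expresses (numbers hypothetical, chosen to fall within the reported range):

```python
def cell_calibration_factor(net_count_rate_cpm, radon_conc_pci_per_l):
    # counts per minute per unit radon concentration (cpm per pCi/L)
    return net_count_rate_cpm / radon_conc_pci_per_l

# e.g. 35.4 cpm net at a reference concentration of 200 pCi/L
print(cell_calibration_factor(35.4, 200.0))   # 0.177, within the reported range
```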

  3. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  4. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language

  5. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    International Nuclear Information System (INIS)

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-01-01

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.

  6. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence ... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair ... "real" parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED ...

  7. Radiochromic film calibration for dosimetry in computed tomography tests

    Energy Technology Data Exchange (ETDEWEB)

    Costa, K. C.; Prata M, A. [Federal Center for Technological Education of Minas Gerais, Biomedical Engineering Center, Av. Amazonas 5253, Nova Suica, 30421-169 Belo Horizonte, Minas Gerais (Brazil); Ladino G, A. M. [Federal University of Minas Gerais, Department of Nuclear Engineering, Av. Antonio Carlos 6627, Pampulha, 31270-90 Belo Horizonte, Minas Gerais (Brazil); Costa, K. L., E-mail: apratabhz@gmail.com [University of Itauna, Medicine Department, Rodovia Mg 431 Km 45 s/n, El Dorado, 35680-142 Itauna, Minas Gerais (Brazil)

    2017-10-15

    Radiochromic film applications in dosimetry have become increasingly significant for studies on radiotherapy and diagnostic tests. Due to their sensitivity to ionizing radiation, radiochromic films are commonly used to obtain dose distribution maps. The objective of this study is to obtain calibration curves of the radiochromic film for exposure to the X-ray beam of a computed tomography (CT) scanner, in order to measure typical doses found in radiodiagnostic tests. A Gafchromic XR-QA2 film was used, which shows little sensitivity to visible light and a response in the range of 0.1 to 20 Gy for X-ray beams at tube voltages ranging from 20 kV to 200 kV. In the experiments, a cylindrical polymethylmethacrylate (PMMA) head phantom with five openings was used. This phantom was placed at the CT scanner isocenter and radiochromic film strips were placed into two openings. The irradiations were performed in a Toshiba Asteion scanner that allows acquisitions in helical mode. The central slice of the head phantom was irradiated to obtain the values of air kerma in PMMA measured with a pencil ionization chamber. Thereafter, radiochromic film strips were placed into the central and one peripheral opening, and 10 cm long scans of the central region of the phantom were carried out at a tube voltage of 120 kV. The strips, irradiated with different X-ray tube currents, were scanned and processed using the ImageJ software to obtain the intensity values resulting from the absorbed radiation by optical density analysis. Calibration curves were obtained for both regions, central and peripheral, corresponding to the values of air kerma in PMMA measured with the ionization chamber. With these curves in hand, CT experiments can use radiochromic films as a dosimetry method and then seek the generation of images with lower dose deposition and higher diagnostic quality. (Author)
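
    The calibration-curve step lends itself to a short worked example. The sketch below, with made-up numbers standing in for the pencil-chamber kermas and scanned intensities, converts film readings to net optical density and fits a polynomial kerma-versus-OD curve.

```python
# Minimal sketch: net optical density from scanner intensities, fit against kerma.
import numpy as np

kerma_mGy = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # pencil-chamber values (assumed)
i_unexposed = 52000.0                                       # scanner intensity, blank film (assumed)
i_exposed = np.array([48000, 44500, 39000, 32000, 25000])   # strip readings (assumed)

net_od = np.log10(i_unexposed / i_exposed)

# A second-order polynomial is a common choice for radiochromic film response.
coeffs = np.polyfit(net_od, kerma_mGy, deg=2)

def kerma_from_od(od):
    return np.polyval(coeffs, od)

print(kerma_from_od(0.15))  # kerma estimate for a measured net OD
```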

  9. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    Science.gov (United States)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
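
    As a worked illustration of the uncertainty computation and the four-to-one ratio check discussed above, the sketch below combines hypothetical error components in quadrature and tests the resulting ratio. The component names and magnitudes are illustrative assumptions, not the machine's actual error budget.

```python
# Minimal sketch: combine calibration error sources and check the 4:1 ratio.
import math

components = {                 # standard uncertainties, in % of reading (assumed)
    "reference_standard": 0.010,
    "repeatability": 0.008,
    "temperature_effects": 0.004,
    "readout_resolution": 0.003,
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2.0 * u_combined          # k = 2 coverage factor

instrument_tolerance = 0.05            # % of reading (assumed)
ratio = instrument_tolerance / U_expanded
print(f"Expanded uncertainty: {U_expanded:.4f} %, ratio = {ratio:.1f}:1")
if ratio < 4.0:
    print("Four-to-one ratio not met; exact uncertainty analysis needed.")
```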

  10. Wavelength calibration of x-ray imaging crystal spectrometer on Joint Texas Experimental Tokamak

    International Nuclear Information System (INIS)

    Yan, W.; Chen, Z. Y.; Jin, W.; Huang, D. W.; Ding, Y. H.; Li, J. C.; Zhang, X. Q.; Zhuang, G.; Lee, S. G.; Shi, Y. J.

    2014-01-01

    The wavelength calibration of the x-ray imaging crystal spectrometer is a key issue for measurements of plasma rotation. Because there is no available standard radiation source near 3.95 Å and no other diagnostic to measure the core rotation for inter-calibration, an indirect method using the tokamak plasma itself has been applied on the Joint Texas Experimental Tokamak. It is found that the core toroidal rotation velocity is not zero during the locked-mode phase. This is consistent with the observation of small oscillations on soft x-ray signals and electron cyclotron emission during the locked-mode phase.

  11. Computer programs for TRIGA calibration, burnup evaluation, and bookkeeping

    International Nuclear Information System (INIS)

    Nelson, George W.

    1978-01-01

    Several computer programs have been developed at the University of Arizona to assist the direction and operation of the TRIGA Reactor Laboratory. The programs fall into the following three categories: 1. Programs for calculation of burnup of each fuel element in the reactor core, for maintaining an inventory of fuel element location and fissile content at any time, and for evaluation of the reactivity effects of burnup or proposed fuel element rearrangement in the core. 2. Programs for evaluation, function fitting, and tabulation of control rod measurements. 3. Bookkeeping programs to summarize and tabulate reactor runs and irradiations according to time, energy release, purpose, responsible party, etc. These summarized data are reported in an annual operating report for the facility. The use of these programs has saved innumerable hours of repetitious work, assuring more accurate, objective results, and requiring a minimum of effort to repeat calculations when input data are modified. The programs are written in FORTRAN-IV, and have been used on a CDC-6400 computer. (author)

  12. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    Science.gov (United States)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

    A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. The open-source MakeHuman and Blender software packages were used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, a home-made software tool was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.

  14. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  15. Computer network for experimental research using ISDN

    International Nuclear Information System (INIS)

    Ida, Katsumi; Nakanishi, Hideya

    1997-01-01

    This report describes the development of a computer network that uses the Integrated Services Digital Network (ISDN) for real-time analysis in experimental plasma physics and nuclear fusion research. The communication speed, 64/128 kbps (INS-64) or 1.5 Mbps (INS-1500) per connection, is independent of how busy the network is. When INS-1500 is used, the communication speed, which is proportional to the public telephone connection fee, can be dynamically varied from 64 kbps to 1472 kbps, depending on how much data are being transferred, using the Bandwidth-on-Demand (BOD) function in the ISDN router. On-demand dial-up and time-out disconnection reduce the public telephone connection fee by 10%-97%. (author)

  16. Calibrating EEG-based motor imagery brain-computer interface from passive movement.

    Science.gov (United States)

    Ang, Kai Keng; Guan, Cuntai; Wang, Chuanchu; Phua, Kok Soon; Tan, Adrian Hock Guan; Chin, Zheng Yang

    2011-01-01

    EEG data from performing motor imagery are usually collected to calibrate a subject-specific model for classifying the EEG data during the evaluation phase of motor imagery Brain-Computer Interface (BCI). However, there is no direct objective measure to determine if a subject is performing motor imagery correctly for proper calibration. Studies have shown that passive movement, which is directly observable, induces Event-Related Synchronization patterns that are similar to those induced from motor imagery. Hence, this paper investigates the feasibility of calibrating EEG-based motor imagery BCI from passive movement. EEG data of 12 healthy subjects were collected during motor imagery and passive movement of the hand by a haptic knob robot. The calibration models using the Filter Bank Common Spatial Pattern algorithm on the EEG data from motor imagery were compared against using the EEG data from passive movement. The performances were compared based on the 10×10-fold cross-validation accuracies of the calibration data, and off-line session-to-session transfer kappa values to other sessions of motor imagery performed on another day. The results showed that the calibration performed using passive movement yielded higher model accuracy and off-line session-to-session transfer (73.6% and 0.354) than the calibration performed using motor imagery (71.3% and 0.311), and no significant differences were observed between the two groups (p=0.20, 0.23). Hence, this study shows that it is feasible to calibrate EEG-based motor imagery BCI from passive movement.

  17. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was increased to 10⁻⁵ rad root mean square, and the precision of the RHT was increased by approximately 100 nm.
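
    The differential idea — using only relative displacement and the near-linear centroid response — can be illustrated with a short fit. In the sketch below, the displacement and centroid values are illustrative stand-ins for measured data.

```python
# Minimal sketch: fit centroid position against *relative* reflector displacement.
import numpy as np

rel_displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # assumed relative shifts
centroid_px = np.array([412.0, 418.1, 424.3, 430.2, 436.4])  # assumed detector centroids

slope, intercept = np.polyfit(rel_displacement_mm, centroid_px, deg=1)
# The slope (px/mm), not any absolute distance, carries the angle-centroid calibration.
print(f"centroid sensitivity: {slope:.2f} px/mm")

residuals = centroid_px - (slope * rel_displacement_mm + intercept)
print(f"linearity RMS: {np.sqrt(np.mean(residuals**2)):.3f} px")
```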

  18. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Directory of Open Access Journals (Sweden)

    Bikić Siniša M.

    2016-01-01

    This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade, and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades, and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on a laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated using one set of data, while its verification was conducted using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, No. TR31058]

  19. CAMAC-system for calibration and control of experimental apparatus with scintillation counters

    International Nuclear Information System (INIS)

    Petrov, A.G.; Sinaev, A.N.

    1977-01-01

    The CAMAC system, connected to the PH-2116C minicomputer, for calibration and control of an experiment on pion scattering on He nuclei performed with a streamer chamber triggered by a scintillation hodoscope is described. The following operations are performed: delay calibration in the telescope and hodoscope channels involving 22 scintillation counters; control of the relative efficiency of the hodoscope counters and other parameters of the experiment; control of the high-voltage supply of the photomultipliers; control of the currents of the magnets and lenses of the muon track of the synchrocyclotron; and measurement of pulse-height spectra from a Cherenkov counter to determine the beam composition. The working programs are initiated and the dialogue with the computer is carried out using an alphanumerical display connected to the PH-2116C via a CAMAC interface.

  20. Precision alignment and calibration of optical systems using computer generated holograms

    Science.gov (United States)

    Coyle, Laura Elizabeth

    As techniques for manufacturing and metrology advance, optical systems are being designed with more complexity than ever before. Given these prescriptions, alignment and calibration can be a limiting factor in their final performance. Computer generated holograms (CGHs) have several unique properties that make them powerful tools for meeting these demanding tolerances. This work presents three novel methods for alignment and calibration of optical systems using computer generated holograms. Alignment methods using CGHs require that the optical wavefront created by the CGH be related to a mechanical datum to locate it in space. An overview of existing methods is provided as background, then two new alignment methods are discussed in detail. In the first method, the CGH Contact Ball Alignment Tool (CBAT) is used to align a ball or sphere mounted retroreflector (SMR) to a Fresnel zone plate pattern with micron-level accuracy. The ball is bonded directly onto the CGH substrate and provides permanent, accurate registration between the optical wavefront and a mechanical reference to locate the CGH in space. A prototype CBAT was built and used to align and bond an SMR to a CGH. In the second method, CGH references are used to align axisymmetric optics in four degrees of freedom with low uncertainty and real-time feedback. The CGHs create simultaneous 3D optical references where the zero-order reflection sets tilt and the first diffracted order sets centration. The flexibility of the CGH design can be used to accommodate a wide variety of optical systems and maximize sensitivity to misalignments. A 2-CGH prototype system was aligned multiple times, and the alignment uncertainty was quantified and compared to an error model. Finally, an enhanced calibration method is presented. It uses multiple perturbed measurements of a master sphere to improve the calibration of CGH-based Fizeau interferometers ultimately measuring aspheric test surfaces. The improvement in the…

  1. Establishment of a new calibration method of pencil ionization chamber for dosimetry in computed tomography

    International Nuclear Information System (INIS)

    Dias, Daniel Menezes

    2010-01-01

    Pencil ionization chambers are used for beam dosimetry in computed tomography (CT) equipment. In this study, a new calibration methodology was established in order to make the Calibration Laboratory of the Instituto de Pesquisas Energeticas e Nucleares (LCI) compliant with international metrological standards, which prescribe specific procedures for the calibration of these chambers used in CT. Firstly, the setup for the new RQT radiation qualities was mounted, in agreement with standard IEC 61267 of the International Electrotechnical Commission (IEC). After the establishment of these radiation qualities, a specific calibration methodology for pencil ionization chambers was set up according to Technical Report Series No. 457 of the International Atomic Energy Agency (IAEA), which describes the particulars of the procedure to be followed by Secondary Standard Dosimetry Laboratories (SSDLs) concerning collimation and positioning relative to the radiation beam. Initially, practical peak voltage (kV) measurements and the determination of the additional copper filtrations were carried out by measuring the half value layers (HVL) recommended by the IEC 61267 standard, after which the RQT 8, RQT 9 and RQT 10 radiation quality references were established. For the additional filters, aluminum and copper of high purity (around 99.9%) were used. The RQT qualities, with copper filter thicknesses equivalent to the set 'RQR (Al) + additional filtration (Cu)', were found directly by an alternative methodology for determining additional filtrations, which is a good option when the RQR qualities cannot be set up. With the establishment of this new methodology for pencil ionization chamber calibration, the LCI is ready to calibrate these instruments according to the most recent international standards. Therefore, an improvement in calibration traceability, as well as in the metrological services offered by IPEN to all of Brazil, is achieved. (author)
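
    The HVL measurements used to establish the beam qualities reduce to a simple interpolation, sketched below with illustrative transmission data (not the laboratory's measurements): find the added filter thickness at which the chamber reading falls to half its unfiltered value.

```python
# Minimal sketch: half value layer from a measured transmission curve.
import numpy as np

added_cu_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])        # filter steps (assumed)
transmission = np.array([1.00, 0.78, 0.63, 0.52, 0.43, 0.36])  # chamber ratios (assumed)

# np.interp needs increasing x, so interpolate thickness as a function of
# transmission with both arrays reversed.
hvl_mm_cu = np.interp(0.5, transmission[::-1], added_cu_mm[::-1])
print(f"HVL = {hvl_mm_cu:.3f} mm Cu")
```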

  2. Computer aided method for colour calibration and analysis of digital rock photographs

    Directory of Open Access Journals (Sweden)

    Matic Potočnik

    2015-12-01

    The methods used in geology to determine colour and colour coverage are expensive, time consuming, and/or subjective. Estimates of colour coverage can only be approximate, since they are based on rough comparison-based measuring etalons and subjective estimation, which is dependent upon the skill and experience of the person performing the estimation. We present a method which accelerates, simplifies, and objectifies these tasks using a computer application. It automatically calibrates the colours of a digital photo, and enables the user to read colour values and coverage, even after returning from field work. Colour identification is based on the Munsell colour system. For the purposes of colour calibration we use the X-Rite ColorChecker Passport colour chart placed onto the photographed scene. Our computer application detects the ColorChecker colour chart, and finds a colour space transformation to calibrate the colour in the photo. The user can then use the application to read colours within selected points or regions of the photo. The results of the computerised colour calibration were compared to the reference values of the ColorChecker chart. The values deviate slightly from the exact values, but the deviation is around the limit of human capability for visual comparison. We devised an experiment comparing the precision of the computerised colour analysis with manual colour analysis performed on a variety of rock samples by geology students using the Munsell Rock-Color Chart. The analysis showed that the precision of manual comparative identification on multicoloured samples is somewhat problematic, since the choice of representative colours and observation points for a certain part of a sample is subjective. The computer-based method has the edge in verifiability and repeatability of the analysis, since the application allows the original photo to be saved with colour calibration, and the tagging of colour-analysed points and regions.
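
    The colour-space transformation the application finds can be approximated by a linear least-squares fit between the photographed ColorChecker patches and their reference values. The sketch below shows this idea with illustrative patch values; the application's actual transform may differ.

```python
# Minimal sketch: affine colour correction fitted on chart patches.
import numpy as np

measured_rgb = np.array([[110, 82, 70], [200, 160, 140], [90, 120, 150],
                         [60, 60, 60], [240, 240, 235], [30, 30, 32]], float)   # assumed
reference_rgb = np.array([[115, 82, 68], [194, 150, 130], [98, 122, 157],
                          [52, 52, 52], [243, 243, 242], [21, 21, 21]], float)  # assumed

# Least-squares 3x3 matrix plus offset (a 4x3 affine map) in RGB space.
X = np.hstack([measured_rgb, np.ones((len(measured_rgb), 1))])
M, *_ = np.linalg.lstsq(X, reference_rgb, rcond=None)

def calibrate(pixels):  # pixels: (N, 3) array of raw RGB values
    return np.clip(np.hstack([pixels, np.ones((len(pixels), 1))]) @ M, 0, 255)

print(calibrate(measured_rgb[:2]))
```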

  3. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-01-01

    This thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  4. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimates of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.
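
    A calibration tool like this drives the model toward observations through an objective function. As a small grounded example of one common hydrological objective (not necessarily the one used in this study), the sketch below computes the Nash-Sutcliffe efficiency between simulated and observed streamflow with placeholder series.

```python
# Minimal sketch: Nash-Sutcliffe efficiency as a calibration objective.
import numpy as np

def nash_sutcliffe(simulated, observed):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([12.0, 30.0, 85.0, 140.0, 90.0, 40.0, 18.0])  # m^3/s, assumed gauge data
sim = np.array([10.0, 35.0, 78.0, 150.0, 95.0, 35.0, 20.0])  # m^3/s, assumed model output

print(f"NSE = {nash_sutcliffe(sim, obs):.3f}")  # 1.0 is a perfect fit
```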

  5. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in…
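
    At its core, image-domain material decomposition solves a small linear system per voxel using the calibrated basis matrix. The sketch below illustrates this with non-negative least squares standing in for the paper's maximum a posteriori estimator; the basis matrix and voxel attenuations are made-up values.

```python
# Minimal sketch: per-voxel material decomposition with a calibrated basis matrix.
import numpy as np
from scipy.optimize import nnls

# Basis matrix: columns = materials (Gd, Ca, water), rows = energy bins.
# Calibrated from phantom scans in practice; values here are made up.
A = np.array([[5.1, 2.0, 1.0],
              [4.0, 1.8, 0.9],
              [6.5, 1.5, 0.8],   # bin above the Gd k-edge
              [3.2, 1.3, 0.7],
              [2.5, 1.1, 0.6]])

voxel_mu = np.array([3.1, 2.5, 3.6, 2.0, 1.6])  # measured attenuation per bin (assumed)

fractions, residual = nnls(A, voxel_mu)
print(dict(zip(["Gd", "Ca", "water"], fractions.round(3))), f"residual={residual:.3f}")
```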

  6. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
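
    The effective-sample-size scaling of the likelihood can be sketched compactly: estimate ESS from the residual autocorrelation, then temper a Gaussian log-likelihood by ESS/N. The residual series and noise level below are illustrative, and the ESS estimator shown is one common choice rather than necessarily the authors'.

```python
# Minimal sketch: ESS-scaled Gaussian log-likelihood for correlated functional residuals.
import numpy as np

def effective_sample_size(residuals):
    r = residuals - residuals.mean()
    acf = np.correlate(r, r, mode="full")[len(r) - 1:]
    acf /= acf[0]
    # Sum positive-lag autocorrelations until the first non-positive value.
    total = 0.0
    for rho in acf[1:]:
        if rho <= 0:
            break
        total += rho
    return len(r) / (1.0 + 2.0 * total)

rng = np.random.default_rng(0)
residuals = np.sin(np.linspace(0, 20, 500)) * 0.02 + rng.normal(0, 0.01, 500)  # assumed
sigma = 0.02  # measurement noise level (assumed)

n = len(residuals)
ess = effective_sample_size(residuals)
loglike = -0.5 * np.sum((residuals / sigma) ** 2)
print(f"ESS = {ess:.0f} of {n}; scaled log-likelihood = {loglike * ess / n:.1f}")
```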

  7. Experimental and computational studies of nanofluids

    Science.gov (United States)

    Vajjha, Ravikanth S.

    The goals of this dissertation were (i) to experimentally investigate the fluid dynamic and heat transfer performance of nanofluids in a circular tube, (ii) to study the influence of temperature and particle volumetric concentration of nanofluids on thermophysical properties, heat transfer and pumping power, (iii) to measure the rheological properties of various nanofluids and (iv) to investigate, using a computational fluid dynamics (CFD) technique, the performance of nanofluids in the flat tube of a radiator. Nanofluids are a new class of fluids prepared by dispersing nanoparticles with average sizes of less than 100 nm in traditional heat transfer fluids such as water, oil, ethylene glycol and propylene glycol. In cold regions of the world, the base fluid of choice for heat transfer applications is ethylene glycol or propylene glycol mixed with water in different proportions. In the present research, a 60% ethylene glycol (EG) or propylene glycol (PG) and 40% water (W) by mass fluid mixture (60:40 EG/W or 60:40 PG/W) was used as the base fluid, which provides freeze protection to very low temperatures. Experiments were conducted to measure the convective heat transfer coefficient and pressure loss of nanofluids flowing in a circular tube in the fully developed turbulent regime. The experimental measurements were carried out for aluminum oxide (Al2O3), copper oxide (CuO) and silicon dioxide (SiO2) nanoparticles dispersed in the 60:40 EG/W base fluid. Experiments revealed that the heat transfer coefficient of nanofluids increases with the particle volumetric concentration. Pressure loss was also observed to increase with the nanoparticle volumetric concentration. New correlations for the Nusselt number and the friction factor were developed. The effects of temperature and particle volumetric concentration on different thermophysical properties (e.g. viscosity, thermal conductivity, specific heat and density) and subsequently on the Prandtl number…

  8. Understanding organometallic reaction mechanisms and catalysis: experimental and computational tools

    CERN Document Server

    Ananikov, Valentin P

    2014-01-01

    Exploring and highlighting the new horizons that the joint application of experimental studies and theoretical calculations opens in the study of reaction mechanisms is the goal of this book. The latest insights and developments in the mechanistic studies of organometallic reactions and catalytic processes are presented and reviewed. The book adopts a unique approach, exemplifying how to use experiments, spectroscopy measurements, and computational methods to reveal reaction pathways and molecular structures of catalysts, rather than concentrating solely on one discipline. The result is a deeper…

  9. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to perform the calibration of an experimental model of a tritium storage bed with a special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The basic characteristics of the design of our storage bed consist, in principle, of a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal isolation of the bed from the external environment. Within this design, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the isolated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for a few values of the power injected with the electrical heat source. Data were collected for a few hours and the temperature increase rate was determined for each value of the injected power. A graphical representation of the temperature rise versus injected power was obtained. This accounting method for tritium stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and obtain a lower detection limit, for instance the use of more accurate thermocouples or special…
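
    The accountancy arithmetic behind the 'self-assaying' bed is simple enough to show directly: the heater calibration converts a measured temperature-rise rate to decay power, and tritium's specific decay heat (approximately 0.324 W per gram) converts power to inventory. The calibration slope and measured rise below are illustrative assumptions.

```python
# Minimal sketch: tritium inventory from calorimetric 'self-assay' of a storage bed.
calibration_slope = 0.85   # (K/h) per W, from electrical-heater runs (assumed)
measured_rise = 1.36       # K/h, temperature-rise rate with tritium loaded (assumed)

decay_power_w = measured_rise / calibration_slope
SPECIFIC_DECAY_HEAT_W_PER_G = 0.324  # approximate value for tritium

inventory_g = decay_power_w / SPECIFIC_DECAY_HEAT_W_PER_G
print(f"Decay power: {decay_power_w:.2f} W -> ~{inventory_g:.2f} g tritium")
```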

  10. Calibration-free quantification of interior properties of porous media with x-ray computed tomography

    International Nuclear Information System (INIS)

    Hussein, Esam M.A.; Agbogun, H.M.D.; Al, Tom A.

    2015-01-01

    A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the multichromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or on image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with that of the sample saturated with a transparent solution. Voxels occupied by solid material and inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both images. Fully porous and accessible voxels exhibit the maximum contrast, while the rest are porous voxels containing mixtures of pore solutions and solid. This matching process is performed with an image registration computer code, and image processing software that requires only simple subtraction and multiplication (scaling) operations. The process is demonstrated on dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance is shown to compare favorably with a method based on calibration and thresholding. - Highlights: • A method is presented for quantifying x-ray CT data for porous media. • The method requires neither calibration nor segmentation nor thresholding. • We use voxel matching between images of a sample saturated with two distinct solutions. • The method is demonstrated on micro-CT images of dolomite and sandstone samples. • Overall performance compares favorably with a calibration- and thresholding-based method.
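
    The voxel-matching arithmetic reduces to subtraction and scaling of the two registered images, as the sketch below illustrates with toy arrays: voxels unchanged between the two images are solid or inaccessible pores, while the contrast difference scales linearly with accessible porosity. The end-member contrast value is an illustrative assumption.

```python
# Minimal sketch: voxelwise porosity from contrast- and transparent-saturated images.
import numpy as np

Ic = np.array([[120, 300, 210], [118, 119, 260]], float)  # contrast-saturated image (assumed)
It = np.array([[119, 100, 108], [117, 118, 105]], float)  # transparent-saturated image (assumed)

solution_contrast = 200.0  # Ic - It for a voxel of pure accessible pore (assumed)

diff = Ic - It
porosity = np.clip(diff / solution_contrast, 0.0, 1.0)

# Voxels with no contrast change are solid or inaccessible pore space;
# both appear identical in the two images.
print(porosity.round(2))
```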

  11. Experimental Demonstrations of Optical Neural Computers

    OpenAIRE

    Hsu, Ken; Brady, David; Psaltis, Demetri

    1988-01-01

    We describe two experiments in optical neural computing. In the first a closed optical feedback loop is used to implement auto-associative image recall. In the second a perceptron-like learning algorithm is implemented with photorefractive holography.

  12. Multi-proxy experimental calibration in cold water corals for high resolution paleoreconstructions

    Science.gov (United States)

    Pelejero, C.; Martínez-Dios, A.; Ko, S.; Sherrell, R. M.; Kozdon, R.; López-Sanz, À.; Calvo, E.

    2017-12-01

    Cold-water corals (CWCs) display an almost cosmopolitan distribution over a wide range of depths. Similar to their tropical counterparts, they can provide continuous, high-resolution records of up to a century or more. Several CWC elemental and isotopic ratios have been suggested as useful proxies, but robust calibrations under controlled conditions in aquaria are needed. Whereas a few such calibrations have been performed for tropical corals, they are still pending for CWCs. This reflects the technical challenges involved in keeping these slow-growing animals alive during the long-term experiments required to achieve sufficient skeletal growth for geochemical analyses. We will show details of the setup and initial stages of a long-term experiment being run at the ICM (Barcelona), where live specimens (>150) of Desmophyllum dianthus sampled in Comau Fjord (Chile) are kept under controlled and manipulated physical chemistry (temperature, pH, phosphate, barium, cadmium) and feeding conditions. With this setup, we aim to calibrate experimentally several specific elemental ratios, including P/Ca, Ba/Ca, Cd/Ca, B/Ca, U/Ca and Mg/Li, as proxies of nutrient dynamics, pH, carbonate ion concentration and temperature. For the trace element analysis, we are analyzing coral skeletons using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), running quantitative analyses on spot sizes of tens of microns, and comparing to micromilling and solution ICP-MS. Preliminary data obtained using these techniques will be presented, as well as measurements of calcification rate. Since cold-water corals are potentially vulnerable to ocean acidification, the same experiment is being exploited to assess potential effects of the pH stressor on D. dianthus; the main findings to date will be summarized.

  13. Theoretical and experimental study of radon measurement with designing and calibration domestic canister with active charcoal

    International Nuclear Information System (INIS)

    Urosevic, V.; Nikezic, D.; Zekic, R.

    2005-01-01

    Radon concentration in air may vary significantly due to atmospheric variations. Measurements with active charcoal can therefore be inaccurate because of variations in radon concentration. We developed a model to simulate radon measurements with active charcoal in order to optimize and improve its integration characteristics. A numerical method and computer code based on the finite element method were developed for the case of variable radon concentration in air. This program simulates radon adsorption by the activated charcoal bed, enabling determination of the sensitivity. The dependence of sensitivity on different parameters, such as temperature, thickness of the charcoal, etc., was studied using this program. Using the results of the theoretical investigation, we designed and calibrated our canister with active charcoal for radon measurements. (author)
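
    The time-dependent response that such a simulation must capture can be caricatured with a single ordinary differential equation: adsorbed activity grows with the fluctuating ambient radon concentration and decays with radon's half-life. The uptake rate and concentration profile below are illustrative assumptions; the actual code uses a finite-element treatment of the charcoal bed.

```python
# Minimal sketch: canister response to a time-varying radon concentration.
import math

LAMBDA = math.log(2) / (3.82 * 24)   # radon decay constant, 1/h (3.82-day half-life)
adsorption_rate = 0.05               # effective canister uptake, m^3/h (assumed)

def radon_concentration(t_h):        # Bq/m^3, fluctuating ambient profile (assumed)
    return 80.0 + 40.0 * math.sin(2 * math.pi * t_h / 24.0)

dt, activity = 0.1, 0.0              # time step (h), adsorbed activity (Bq)
for step in range(int(72 / dt)):     # 3-day exposure, explicit Euler integration
    t = step * dt
    activity += dt * (adsorption_rate * radon_concentration(t) - LAMBDA * activity)

print(f"Adsorbed activity after 72 h: {activity:.1f} Bq")
```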

  14. An experimental test of CSR theory using a globally calibrated ordination method.

    Science.gov (United States)

    Li, Yuanzhi; Shipley, Bill

    2017-01-01

    Can CSR theory, in conjunction with a recently proposed globally calibrated CSR ordination ("StrateFy") using only three easily measured leaf traits (leaf area, specific leaf area and leaf dry matter content), predict the functional signature of herbaceous vegetation along experimentally manipulated gradients of soil fertility and disturbance? To determine this, we grew 37 herbaceous species in mixture for five years in 24 experimental mesocosms differing in factorial levels of soil resources (stress) and density-independent mortality (disturbance). We measured 16 different functional traits and then ordinated the resulting vegetation within the CSR triangle using StrateFy. We then calculated community-weighted mean (CWM) values of the competitor (CCWM), stress-tolerator (SCWM) and ruderal (RCWM) scores for each mesocosm. We found a significant increase in SCWM from low- to high-stress mesocosms, and an increase in RCWM from lowly to highly disturbed mesocosms. However, CCWM did not decline significantly as the intensity of stress or disturbance increased, as predicted by CSR theory. This last result likely arose because our herbaceous species were relatively poor competitors in global comparisons, and thus no strong competitors in our species pool were selectively favoured in low-stress and low-disturbance mesocosms. Variation in the 13 other traits, not used by StrateFy, largely agreed with the predictions of CSR theory. StrateFy worked surprisingly well in our experimental study except for the C-dimension. Despite some loss of precision, it has great potential applicability in future studies due to its simplicity and generality.
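
    The community-weighted mean scores used above follow a standard abundance-weighted formula; the sketch below computes CWM values for one hypothetical mesocosm. Species abundances and per-species CSR scores are illustrative assumptions.

```python
# Minimal sketch: community-weighted mean (CWM) of CSR scores for one mesocosm.
import numpy as np

abundance = np.array([0.50, 0.30, 0.20])             # relative cover per species (assumed)
csr_scores = np.array([[20.0, 55.0, 25.0],           # per-species C, S, R (%) (assumed)
                       [35.0, 40.0, 25.0],
                       [10.0, 30.0, 60.0]])

cwm = abundance @ csr_scores / abundance.sum()       # abundance-weighted average
print(dict(zip(["C_CWM", "S_CWM", "R_CWM"], cwm.round(1))))
```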

  15. [Experimental and computational studies of polar solvation]

    International Nuclear Information System (INIS)

    1990-01-01

    This report from the Pennsylvania State University contains seven sections: (1) radiative rate effects in solvatochromic probes; (2) intramolecular charge transfer reactions; (3) solvation dynamics in low-temperature alcohols; (4) ionic solvation dynamics; (5) solvation and proton-transfer dynamics in 7-azaindole; (6) computer simulations of solvation dynamics; and (7) solvation in supercritical fluids. 20 refs., 11 figs.

  16. Experimental comparison of two quantum computing architectures.

    Science.gov (United States)

    Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher

    2017-03-28

    We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.

  17. Green Cloud Computing: An Experimental Validation

    International Nuclear Information System (INIS)

    Monteiro, Rogerio Castellar; Dantas, M A R; Rodriguez y Rodriguez, Martius Vicente

    2014-01-01

    Cloud configurations can be computational environments with interesting cost efficiency for organizations of several sizes. However, the indiscriminate purchase of servers and network devices may not translate into a corresponding performance gain. In the academic and commercial literature, some studies highlight that these environments are idle for long periods. Therefore, energy management is an essential approach in any organization, because energy bills can cause remarkable negative impacts on these organizations in terms of costs. In this paper, we present a research work characterized by an analysis of energy consumption in a private cloud computing environment, considering both computational resources and network devices. This study was motivated by a real case in a large organization. Therefore, in the first part of the study we considered empirical experiments. In a second phase, the GreenCloud simulator was utilized to foresee some different configurations. The research reached a successful and differentiated goal in presenting key issues of computational resources and network related to the energy consumption of a real private cloud.

  18. Experimental and computational study of thaumasite structure

    Energy Technology Data Exchange (ETDEWEB)

    Scholtzová, Eva, E-mail: Eva.Scholtzova@savba.sk [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Kucková, Lenka; Kožíšek, Jozef [Department of Physical Chemistry, Institute of Physical Chemistry and Chemical Physics, Faculty of Chemical and Food Technology, Slovak University of Technology in Bratislava, Radlinského 9, 812 37 Bratislava (Slovakia); Pálková, Helena [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Tunega, Daniel [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Institute for Soil Science, University of Natural Resources and Life Sciences, Peter-Jordanstrasse 82, A-1190 Wien (Austria)

    2014-05-01

    The structure of thaumasite has been studied experimentally by means of single-crystal X-ray diffraction and FTIR methods, and theoretically using the density functional theory (DFT) method. Very good agreement was achieved between calculated and experimental structural parameters. In addition, the calculations allowed refinement of the positions of the hydrogen atoms. A detailed analysis of the hydrogen bonds existing in the thaumasite structure has been performed, and several types of hydrogen bonds have been classified. The water molecules coordinating the Ca²⁺ cation act as proton donors in moderate O-H···O hydrogen bonds formed with the CO₃²⁻ and SO₄²⁻ anions. Multiple O-H···O hydrogen bonds exist among the water molecules themselves. Finally, water molecules form relatively weak hydrogen bonds with the OH groups from the coordination sphere of the Si(OH)₆²⁻ anion. Further, the calculated vibrational spectrum allowed a complete assignment of all vibrational modes, which is not possible from the experimental spectrum, which has a complex structure with overlapping bands, especially below 1500 cm⁻¹. Highlights: • The thaumasite structure was studied experimentally and using the DFT method. • We used the DFT method for the refinement of the positions of the hydrogen atoms. • A detailed analysis of the hydrogen bonds was performed. • A complete assignment of all bands to particular types of vibrations was made.

  19. Numerical calibration and experimental validation of a PCM-Air heat exchanger model

    International Nuclear Information System (INIS)

    Stathopoulos, N.; El Mankibi, M.; Santamouris, Mattheos

    2017-01-01

    Highlights: • Development of a PCM-air heat exchanger experimental unit and its numerical model. • Differential Scanning Calorimetry for PCM properties. • Inadequacy of DSC-obtained heat capacity curves. • Creation of adequate heat capacity curves depending on heat transfer rates. • Comparison of numerical and experimental results and validation of the model. - Abstract: Ambitious goals have been set at the international, European and French levels for reducing the energy consumption and greenhouse gas emissions of the building sector. Achieving them requires the integration of renewable energy, a technology that nevertheless presents an important drawback: intermittent energy production. In response, thermal energy storage (TES) applications have been developed in order to correlate the energy production and consumption of the building. Phase Change Materials (PCMs) have been widely used in TES applications, as they offer a high storage density and an adequate phase change temperature range. It is important to know the thermophysical properties of the PCM accurately, both for experimental (system design) and numerical (correct prediction) purposes. In this paper, the fabrication of a PCM-air experimental prototype is presented first, along with the development of a numerical model simulating the downstream temperature evolution of the heat exchanger. Particular focus is given to the calibration method and the validation of the model using experimental characterization results. Differential scanning calorimetry (DSC) is used to define the thermal properties of the PCM. Initial numerical results underestimated the experimental ones. Various factors were investigated, pointing to the inadequacy of the heat capacity parameter, as DSC results depend on heating/cooling rates. Adequate heat capacity curves were empirically determined, depending on heat transfer rates and based on DSC results and experimental observations. The results of the proposed model…
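
    The empirical fix described above — replacing the DSC heat capacity with rate-dependent curves — can be sketched as an effective-heat-capacity function: a sensible-heat baseline plus a latent-heat bump whose width varies with the heat transfer rate. All material parameters below are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch: rate-dependent effective heat capacity cp(T) for a PCM.
import numpy as np

def effective_cp(T, rate_factor=1.0):
    cp_base = 2000.0               # J/(kg*K), sensible heat (assumed)
    latent = 180e3                 # J/kg, latent heat of fusion (assumed)
    T_melt = 27.0                  # degC, nominal melting point (assumed)
    width = 2.0 * rate_factor      # degC; bump widens with heating/cooling rate
    # Gaussian bump whose integral over T equals the latent heat.
    bump = latent / (width * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((T - T_melt) / width) ** 2)
    return cp_base + bump

T = np.linspace(20, 35, 7)
print(effective_cp(T).round(0))    # cp peaks across the phase-change range
```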

  20. Experimental Comparison of Two Quantum Computing Architectures

    Science.gov (United States)

    2017-03-28

    …trap experiment on an independent quantum computer of identical size and comparable capability but with a different physical implementation at its core… locked laser. These optical controllers consist of an array of individual addressing beams and a counterpropagating global beam that illuminates… generally programmable. This allows identical quantum tasks or algorithms to be implemented on radically different technologies to inform further…

  1. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    International Nuclear Information System (INIS)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R; Avery, G; Balcam, S; Needler, L

    2014-01-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise, and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with the close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma. (paper)
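
    The preferred SNR metric is straightforward to compute from a uniform-phantom region of interest, and a quantum-noise scaling assumption (SNR proportional to the square root of detector air kerma) then yields the kerma needed to hold SNR constant at each tube voltage. The sketch below uses synthetic ROI pixels and assumed kerma values.

```python
# Minimal sketch: SNR from a uniform-phantom ROI and the kerma needed for a target SNR.
import numpy as np

roi = np.random.default_rng(1).normal(1500.0, 30.0, size=(64, 64))  # stand-in ROI pixels
snr_measured = roi.mean() / roi.std()

snr_target = 45.0          # SNR to hold constant across tube voltages (assumed)
kerma_measured_uGy = 2.5   # detector air kerma of the test exposure (assumed)

# Quantum-limited assumption: SNR scales with the square root of detector kerma.
kerma_required = kerma_measured_uGy * (snr_target / snr_measured) ** 2
print(f"SNR = {snr_measured:.1f}; kerma for target SNR: {kerma_required:.2f} uGy")
```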

  2. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done with the close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  3. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measuring the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfies the requirements of the laser diagnostic system calibration.
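    As a much-simplified sketch of marker-based displacement measurement, the known spacing of two wall markers can convert a laser-spot shift in pixels into millimetres; the actual EAST-AIA algorithm is more sophisticated, and every name and value below is illustrative:

```python
import numpy as np

def displacement_mm(spot_before_px, spot_after_px,
                    marker_a_px, marker_b_px, marker_gap_mm):
    """Laser-spot displacement in mm, using two markers a known distance
    apart to set the scale; assumes motion parallel to the image plane."""
    scale = marker_gap_mm / np.linalg.norm(np.subtract(marker_a_px, marker_b_px))
    return np.linalg.norm(np.subtract(spot_after_px, spot_before_px)) * scale

# Markers 100 mm apart imaged 400 px apart -> 0.25 mm/px
print(displacement_mm((812, 640), (820, 643), (100, 500), (500, 500), 100.0))  # ~2.1 mm
```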

  4. Calibration-free quantification of interior properties of porous media with x-ray computed tomography.

    Science.gov (United States)

    Hussein, Esam M A; Agbogun, H M D; Al, Tom A

    2015-03-01

    A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the multichromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or on image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with one saturated with a transparent solution. Voxels occupied by solid material and inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both images. Fully porous and accessible voxels exhibit the maximum contrast, while the rest are porous voxels containing mixtures of pore solutions and solid. This matching process is performed with an image registration computer code and image processing software that requires only simple subtraction and multiplication (scaling) operations. The process is demonstrated on dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance is shown to compare favorably with a method based on calibration and thresholding.
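    The subtraction-and-scaling step can be written in one line of array arithmetic: voxel porosity is the attenuation difference between the two saturations, normalized by the difference measured in free solution. A sketch under our own normalization assumptions, not the authors' exact processing chain:

```python
import numpy as np

def voxel_porosity(mu_contrast, mu_transparent,
                   mu_free_contrast, mu_free_transparent):
    """Voxel-wise porosity from two registered CT images of the same sample,
    one saturated with contrast solution and one with transparent solution.

    A fully accessible voxel shows the full attenuation difference between
    the two solutions; solid or inaccessible-pore voxels show none."""
    phi = (mu_contrast - mu_transparent) / (mu_free_contrast - mu_free_transparent)
    return np.clip(phi, 0.0, 1.0)
```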

  5. Computers in experimental nuclear power facilities

    International Nuclear Information System (INIS)

    Jukl, M.

    1982-01-01

    The CIS 3000 information system, used for monitoring the operating modes of large technological equipment, is described. The CIS system consists of two ADT computers, an external drum store, an analog input section, a binary input section, 4 control consoles with monitors and acoustic signalling, a print-out area with typewriters and punching machines, and linear recorders. Various applications of the installed CIS configuration are described, as is the general-purpose program for processing measured values into a protocol. The program operates in conversational mode. Different processing variants are shown on the display monitor. (M.D.)

  6. The experimental modification of a computer software package for ...

    African Journals Online (AJOL)

    The experimental modification of a computer software package for graphing algebraic functions. No Abstract Available. South African Journal of Education Vol. 25(2) 2005: 61-68.

  7. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    Science.gov (United States)

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced by 50%.
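    The last step lends itself to a short sketch: once the pointwise axis errors have been solved from the equation set, a spline interpolant supplies the correction at arbitrary positions. Positions and error values below are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Positions (mm) along the CMM axis and errors (um) solved from the
# artefact's section-length equations (illustrative values).
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
errors_um = np.array([0.0, 0.8, 1.5, 1.1, 0.4, -0.3])

error_curve = CubicSpline(positions, errors_um)

def compensate(reading_mm):
    """Subtract the interpolated axis error from a raw CMM reading."""
    return reading_mm - error_curve(reading_mm) * 1e-3  # um -> mm
```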

  8. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods.

  9. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by an MTS 407 controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and the dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. The power spectral density of the system input and the cross power spectral
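    The identification step described here corresponds to the standard H1 spectral estimator, the ratio of the input-output cross power spectral density to the input power spectral density. A sketch using common signal-processing routines (segment length and windowing are our choices, not necessarily the thesis'):

```python
import numpy as np
from scipy import signal

def table_transfer_function(target_acc, actual_acc, fs):
    """H1 estimate of the shaking-table transfer function between target
    (commanded) and actual table accelerations: H(f) = Pxy(f) / Pxx(f)."""
    f, Pxy = signal.csd(target_acc, actual_acc, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(target_acc, fs=fs, nperseg=1024)
    return f, Pxy / Pxx
```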

  10. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctly separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  11. Development, calibration and experimental results obtained with an innovative calorimeter (CALMOS) for nuclear heating measurements

    International Nuclear Information System (INIS)

    Carcreff, H.; Cloute-Cazalaa, V.; Salmon, L.

    2011-01-01

    Nuclear heating inside an MTR reactor has to be known in order to be able to control sample temperatures during irradiation experiments. An R and D program has been carried out at CEA to design a new type of in-core calorimetric system. This new development, started in 2002, has as its main objective the manufacture of a calorimeter suitable for monitoring nuclear heating inside the 70 MWth OSIRIS material testing reactor operated by CEA's Nuclear Energy Div. at the Saclay research center. An innovative calorimetric probe, associated with a specific handling system, has been designed to provide access to measurements both along the fissile height and in the upper part of the core, where nuclear heating still remains high. Two mock-ups of the probe were manufactured and tested in 2005 and 2009 in the ex-core area of the OSIRIS reactor for process validation, while a displacement system has been specially designed to move the probe over a given axial measurement range. This paper deals with the development, tests on preliminary mock-ups and the finalization of the probe. Main modeling and experimental results are presented. Moreover, alternative calibration methods for nuclear heating rate measurements, now possible with this new calorimeter, are presented and discussed. (authors)

  12. Development, calibration, and experimental results obtained with an innovative calorimeter (CALMOS) for nuclear heating measurements

    International Nuclear Information System (INIS)

    Carcreff, Hubert; Cloute-Cazalaa, Veronique; Salmon, Laurent

    2012-01-01

    Nuclear heating inside an MTR reactor has to be known in order to be able to control sample temperatures during irradiation experiments. An R and D program has been carried out at CEA to design a new type of in-core calorimetric system. This new development, started in 2002, has as its main objective the manufacture of a calorimeter suitable for monitoring nuclear heating inside the 70 MWth OSIRIS material testing reactor operated by CEA's Nuclear Energy Division at the Saclay research center. An innovative calorimetric probe, associated with a specific handling system, has been designed to provide access to measurements both along the fissile height and in the upper part of the core, where nuclear heating still remains high. Two mock-ups of the probe were manufactured and tested in 2005 and 2009 in the ex-core area of the OSIRIS reactor for process validation, while a displacement system has been specially designed to move the probe over a given axial measurement range. This paper deals with the development, tests on preliminary mock-ups and the finalization of the probe. Main modeling and experimental results are presented. Moreover, alternative calibration methods for nuclear heating rate measurements, now possible with this new calorimeter, are presented and discussed. (authors)

  13. EXPERIMENTAL CALIBRATION OF UNDERGROUND HEAT TRANSFER MODELS UNDER A WINERY BUILDING IN A RURAL AREA

    Directory of Open Access Journals (Sweden)

    Francesco Tinti

    2017-01-01

    Ground temperature and hydrogeological conditions are key parameters for many engineering applications, such as the design of building basements and underground spaces and the assessment of shallow geothermal energy potential. Especially in urban areas, at very shallow depths, it is difficult to find natural undisturbed underground thermal conditions because of anthropic interventions. The assessment of underground behaviour in disturbed conditions will become more and more relevant because of increasing awareness of energy efficiency and renewable energy topics. The purpose of this paper is to show a three-dimensional representation - based on models calibrated on experimental data - of the underground thermal behaviour affected by a building in a rural area in Italy. Temperature varies in space and time and depends on ground, climate and building characteristics, and all these parameters are taken into account by the seasonal periodic modelling implemented. The results obtained in a context of low urbanization indirectly suggest the importance of these effects in dense urban areas; taking greater account of these aspects could lead to improvements in the design of underground spaces and geo-exchanger fields for geothermal energy exploitation.
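    The seasonal periodic modelling mentioned is presumably of the classical Kusuda type, in which the annual surface temperature wave is damped and delayed with depth. A sketch of that undisturbed-ground baseline with placeholder parameters (the paper's calibrated model additionally includes the building's influence):

```python
import numpy as np

def undisturbed_ground_temperature(z_m, t_day, T_mean=14.0, A_surface=10.0,
                                   alpha_m2_day=0.07, t_shift_day=15.0):
    """Kusuda-type seasonal ground temperature at depth z (m) on day t.

    The annual wave decays as exp(-z/d) and lags by z/d * 365/(2*pi) days,
    where d is the damping depth. All parameter values are placeholders."""
    d = np.sqrt(365.0 * alpha_m2_day / np.pi)  # damping depth (m)
    lag_days = z_m / d * 365.0 / (2.0 * np.pi)
    return (T_mean - A_surface * np.exp(-z_m / d)
            * np.cos(2.0 * np.pi / 365.0 * (t_day - t_shift_day - lag_days)))
```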

  14. Experimental device for obtaining calibration factor for the total count technique

    International Nuclear Information System (INIS)

    Gonçalves, Eduardo R.; Braz, Delson; Brandão, Luís Eduardo B.

    2017-01-01

    Nuclear technologies have been widely used in industrial plants, either to help solve process/design problems or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: it is an absolute technique, independent of readings from additional devices other than the one used directly for recording the radioactive cloud, requiring only a single detector to provide the final result; it is independent of the internal volume of the transport duct and can be applied in the presence or absence of obstructions; there is no restriction on the nature of the product or material being conveyed; and it is a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor, called Factor F, is required. It is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; its value is therefore constant for each specific measuring system under survey. The experimental apparatus for obtaining Factor F consists of a 2″ PVC pipe, which simulates a transmission line, in which 500 ml of oil were deposited; using a pipette specific to viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy of 411.8 keV) were added sequentially, and the data were analyzed by three distinct detection systems composed of 1″ × 1″ NaI scintillation detectors and a data acquisition system. (author)
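    For orientation, once Factor F is known the Total Count flow computation itself is a one-liner: the flow rate is F times the injected activity divided by the time-integrated net count rate. A sketch with our own variable names and unit conventions:

```python
import numpy as np

def total_count_flow_rate(activity_Bq, count_rate_cps, dt_s, factor_F):
    """Total Count method: Q = F * A / S, where S is the total number of
    counts recorded as the tracer cloud passes the detector.

    With F in counts*m^3/(Bq*s), A in Bq and S in counts, Q is in m^3/s;
    any consistent unit system works."""
    S = float(np.sum(count_rate_cps)) * dt_s  # counts = sum(rate) * bin width
    return factor_F * activity_Bq / S
```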

  15. Experimental device for obtaining calibration factor for the total count technique

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Eduardo R.; Braz, Delson [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandão, Luís Eduardo B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Divisao de Reatores

    2017-07-01

    Nuclear technologies have been widely used in industrial plants, either to help solve process/design problems or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: it is an absolute technique, independent of readings from additional devices other than the one used directly for recording the radioactive cloud, requiring only a single detector to provide the final result; it is independent of the internal volume of the transport duct and can be applied in the presence or absence of obstructions; there is no restriction on the nature of the product or material being conveyed; and it is a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor, called Factor F, is required. It is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; its value is therefore constant for each specific measuring system under survey. The experimental apparatus for obtaining Factor F consists of a 2″ PVC pipe, which simulates a transmission line, in which 500 ml of oil were deposited; using a pipette specific to viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy of 411.8 keV) were added sequentially, and the data were analyzed by three distinct detection systems composed of 1″ × 1″ NaI scintillation detectors and a data acquisition system. (author)

  16. Application of Computer Vision for quality control in frozen mixed berries production: colour calibration issues

    Directory of Open Access Journals (Sweden)

    D. Ricauda Aimonino

    2013-09-01

    Computer vision is becoming increasingly important in the quality control of many food processes. The appearance properties of food products (colour, texture, shape and size) are, in fact, correlated with organoleptic characteristics and/or the presence of defects. Quality control based on image processing eliminates the subjectivity of human visual inspection, allowing rapid and non-destructive analysis. However, most food matrices show a wide variability in appearance features, so robust and customized image elaboration algorithms have to be implemented for each specific product. For this reason, quality control by visual inspection is still rather widespread in several food processes. The case study inspiring this paper concerns the production of frozen mixed berries. Once frozen, different kinds of berries are mixed together, in different amounts, according to a recipe. The correct quantity of each kind of fruit, within a certain tolerance, has to be ensured by producers. Quality control relies on taking a few samples from each production lot (samples of the same weight) and manually counting the amount of each species. This operation is tedious, error-prone and time consuming, while a computer vision system (CVS) could determine the amount of each kind of berry in a few seconds. This paper discusses the problem of colour calibration of the CVS used for frozen berry mixture evaluation. Images are acquired by a digital camera coupled with a dome lighting system, which gives homogeneous illumination over the entire visible surface of the berries, and by a flatbed scanner. RGB device-dependent data are then mapped onto the CIELab colorimetric colour space using different transformation operators. The obtained results show that the proposed calibration procedure leads to colour discrepancies comparable to or even below human visual sensitivity.
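    The transformation operators mentioned are often simple least-squares fits from device RGB to CIELab on a colour chart imaged under the same dome illumination. A sketch of one such polynomial mapping; the feature set and fitting choices are ours and not necessarily those tested in the paper:

```python
import numpy as np

def fit_rgb_to_lab(rgb_patches, lab_references):
    """Least-squares polynomial mapping from device RGB to CIELab.

    rgb_patches: (N, 3) measured chart values; lab_references: (N, 3) known Lab.
    Returns a matrix M such that features(rgb) @ M approximates Lab."""
    R, G, B = rgb_patches.T
    X = np.column_stack([np.ones_like(R), R, G, B,
                         R * G, R * B, G * B, R**2, G**2, B**2])
    M, *_ = np.linalg.lstsq(X, lab_references, rcond=None)
    return M
```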

  17. Image calibration and registration in cone-beam computed tomogram for measuring the accuracy of computer-aided implant surgery

    Science.gov (United States)

    Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.

    2015-02-01

    Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transferring the virtual surgical planning to the patient (reality) is the use of a surgical stent, either with a preloaded plan (static), such as a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates linking the patient (reality) and the imaging (virtuality), so that the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the plan was x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual], which was within the clinical 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, and errors may be introduced in this step. The difference of the actual implant apex from the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
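    Registering the fiducial coordinates measured in the scan onto those known on the stent is typically posed as a rigid-body fit. A sketch using the SVD-based Kabsch algorithm, which may differ from the authors' actual registration routine:

```python
import numpy as np

def rigid_register(source_pts, target_pts):
    """Best-fit rotation R and translation t mapping source fiducial
    coordinates onto target ones (Kabsch algorithm; both arrays are (N, 3))."""
    src_c = source_pts - source_pts.mean(axis=0)
    tgt_c = target_pts - target_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = (U @ D @ Vt).T
    t = target_pts.mean(axis=0) - R @ source_pts.mean(axis=0)
    return R, t
```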

  18. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
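    The deterministic formulation the paper criticizes is easy to make concrete: calibration as least-squares minimization of the model-data misfit, with no error model on either side. A toy sketch in which the model, data and parameter values are all invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "experimental" data from y = 1.5 * exp(0.8 * x) plus noise.
x = np.linspace(0.0, 2.0, 20)
y_obs = 1.5 * np.exp(0.8 * x) + np.random.default_rng(0).normal(0.0, 0.05, x.size)

def residuals(theta):
    a, b = theta
    return a * np.exp(b * x) - y_obs

# Classical calibration: the parameters minimizing the squared misfit.
fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)  # ~ [1.5, 0.8]
```

    Calibration under Uncertainty replaces this point estimate with a treatment that carries error bars on both the model and the data through to the calibrated parameters.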

  19. Partial safety factor calibration from stochastic finite element computation of welded joint with random geometries

    International Nuclear Information System (INIS)

    Schoefs, Franck; Chevreuil, Mathilde; Pasqualini, Olivier; Cazuguel, Mikaël

    2016-01-01

    Welded joints are used in various structures and infrastructures such as bridges, ships and offshore structures, and are subjected to cyclic stresses. Their fatigue behaviour is a key industrial issue and still offers original research subjects. One of the available methods relies on computing the stress concentration factor. Although some previous studies evaluated this factor for certain welded structures, the shape of the weld joint is generally idealized through a deterministic parametric geometry. Previous experimental works have shown, however, that this shape plays a key role in the lifetime assessment. We propose in this paper a methodology for computing the stress concentration factor in the presence of random geometries of welded joints. To make the results usable by engineers, this method merges stochastic computation and semi-probabilistic analysis by computing partial safety factors with a dedicated method. - Highlights: • Numerical computation of the stress concentration factor with random weld geometry. • Real data are used for probabilistic modelling. • Identification of partial safety factors from SFEM computation in the case of random geometries.

  20. Experimental verification of self-calibration radiometer based on spontaneous parametric downconversion

    Science.gov (United States)

    Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng

    2018-03-01

    Based on the spontaneous parametric downconversion process, we propose a novel self-calibration radiometer scheme which can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. Monitoring results are independent of the radiometer's degradation and are not linked to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp. A relative standard deviation of 0.39% was obtained for the stable bromine-tungsten lamp. The results demonstrate the soundness of the underlying principle. The proposed scheme could enable a significant breakthrough on the self-calibration issue for space platforms.

  1. On the calibration strategies of the Johnson–Cook strength model: Discussion and applications to experimental data

    International Nuclear Information System (INIS)

    Gambirasio, Luca; Rizzi, Egidio

    2014-01-01

    The present paper aims at assessing the various procedures adoptable for calibrating the parameters of the so-called Johnson-Cook strength model, which expresses the deviatoric behavior of elastoplastic materials, with particular reference to the description of High Strain Rate (HSR) phenomena. The procedures rely on input experimental data consisting of a set of hardening functions recorded at different equivalent plastic strain rates and temperatures. After a brief review of the main characteristics of the Johnson-Cook strength model, five different calibration strategies are framed and described in detail. The assessment is implemented through a systematic application of each calibration strategy to three different real material cases, i.e. a DH-36 structural steel, a commercially pure niobium and an AL-6XN stainless steel. Experimental data available in the literature are considered. Results are presented in terms of plots showing the predicted Johnson-Cook hardening functions against the experimental trends, together with tables describing the fitting problems which arise in each case, assessing the errors introduced in both the lower yield stress and the overall plastic flow. The consequences of each calibration approach are then carefully compared and evaluated. A discussion of the positive and negative aspects of each strategy is presented and some suggestions on how to choose the best calibration approach are outlined, considering the available experimental data and the objectives of the subsequent modeling process. The proposed considerations should provide a useful guideline in the process of determining the best Johnson-Cook parameters in each specific situation in which the model is going to be adopted. A last section introduces some considerations about the calibration of the Johnson-Cook strength model through experimental data different from those consisting in a set of hardening functions relative to different equivalent plastic strain
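    For reference, the hardening functions being fitted follow the standard Johnson-Cook form; a short implementation with the usual parameter naming (the reference values in the signature are placeholders):

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T, A, B, n, C, m,
                        eps_rate_ref=1.0, T_room=293.0, T_melt=1773.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T***m),
    where T* = (T - T_room) / (T_melt - T_room) is the homologous temperature."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return ((A + B * eps_p**n)
            * (1.0 + C * np.log(eps_rate / eps_rate_ref))
            * (1.0 - T_star**m))
```

    Each calibration strategy discussed in the paper amounts to a different way of distributing the fitting error of this five-parameter expression across strain, strain rate and temperature.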

  2. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Science.gov (United States)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements.

  3. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  4. 1st International Conference on Computational and Experimental Biomedical Sciences

    CERN Document Server

    Jorge, RM

    2015-01-01

    This book contains the full papers presented at ICCEBS 2013 – the 1st International Conference on Computational and Experimental Biomedical Sciences, which was organized in the Azores in October 2013. The included papers present and discuss new trends in those fields, using several methods and techniques, including active shape models, constitutive models, isogeometric elements, genetic algorithms, level sets, material models, neural networks, optimization, and the finite element method, in order to address more efficiently different and timely applications involving biofluids, computer simulation, computational biomechanics, image-based diagnosis, image processing and analysis, image segmentation, image registration, scaffolds, simulation, and surgical planning. The main audience for this book consists of researchers, Ph.D. students, and graduate students with multidisciplinary interests related to the areas of artificial intelligence, bioengineering, biology, biomechanics, computational fluid dynamics, comput...

  5. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry

    International Nuclear Information System (INIS)

    Brady, S. L.; Kaufman, R. A.

    2012-01-01

    Purpose: The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ∼25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. Methods: The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Results: Calibration precision was measured to be better than 5%–7%, 3%–5%, and 2%–4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated between precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy⁻¹ versus the CT scatter phantom 29.2 ± 1.0 mV cGy⁻¹ and FIA with x-ray 29.9 ± 1.1 mV cGy⁻¹ methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ∼3000 mV. Conclusions: The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the

  6. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    Science.gov (United States)

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12
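    The calibration coefficient and precision reported in these two records reduce to simple statistics over repeated exposures: the coefficient is the voltage shift per unit dose, and the precision is its relative spread. A sketch with invented voltage shifts chosen only to land near the reported range:

```python
import numpy as np

def calibration_coefficient(voltage_shifts_mV, dose_cGy):
    """Mean MOSFET calibration coefficient (mV/cGy) and precision (%)
    from repeated exposures at a single dose level."""
    coeffs = np.asarray(voltage_shifts_mV, dtype=float) / dose_cGy
    return coeffs.mean(), 100.0 * coeffs.std(ddof=1) / coeffs.mean()

mean_c, precision = calibration_coefficient([62.4, 60.9, 61.8, 63.1], 2.3)  # 23 mGy
print(f"{mean_c:.1f} mV/cGy, {precision:.1f}% precision")
```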

  7. Contribution of computer science to the evaluation of experimental data

    International Nuclear Information System (INIS)

    Steuerwald, J.

    1978-11-01

    The GALE data acquisition system and EDDAR data processing system, used at Max-Planck-Institut fuer Plasmaphysik, serve to illustrate some of the various ways in which computer science plays a major role in developing the evaluation of experimental data. (orig.)

  8. The quark gluon plasma: Lattice computations put to experimental test

    Indian Academy of Sciences (India)

    I describe how lattice computations are being used to extract experimentally relevant features of the quark gluon plasma. I deal specifically with relaxation times, photon emissivity, strangeness yields, event-by-event fluctuations of conserved quantities and hydrodynamic flow. Finally I give evidence that the plasma is rather ...

  9. Multimodal Hazard Rate for Relapse in Breast Cancer: Quality of Data and Calibration of Computer Simulation

    Directory of Open Access Journals (Sweden)

    Michael Retsky

    2014-11-01

    Much has occurred since our 2010 report in Cancers. In the past few years we published several extensive reviews of our research, so only a brief review will be provided here. We proposed in the earlier reports that most relapses in breast cancer occur within 5 years of surgery and seem to be associated with some unspecified manner of surgery-induced metastatic initiation. These events can be identified in relapse data and are correlated with clinical data. In the last few years an unexpected mechanism has become apparent. A retrospective analysis of relapse events by a Brussels anesthesiology group reported that a perioperative NSAID analgesic seems to reduce early relapses five-fold. We then proposed that primary surgery produces a transient period of systemic inflammation. This has now been identified by inflammatory markers in serum post-mastectomy. That could explain the early relapses. It is possible that an inexpensive and non-toxic NSAID can reduce breast cancer relapses significantly. We want to take this opportunity to discuss database quality issues and our relapse hazard data in some detail. We also present a demonstration that the computer simulation can be calibrated with Adjuvant-on-line, an often-used clinical tool for prognosis in breast cancer.

  10. Experimental demonstration of deterministic one-way quantum computing on a NMR quantum computer

    OpenAIRE

    Ju, Chenyong; Zhu, Jing; Peng, Xinhua; Chong, Bo; Zhou, Xianyi; Du, Jiangfeng

    2008-01-01

    One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report the first experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements and feed-forward corrections. The findings in our experiment may shed light on future scalable one-way quantum computation.

  11. Comparison of theoretical and experimental determinations of calibration factors for cylindrical and parallel plates ionization chambers

    International Nuclear Information System (INIS)

    Vallejos, Matias; Montano, Gustavo A.; Stefanic, Amalia; Saravi, Margarita

    2009-01-01

    The Ionizing Radiation Dosimetry Section of CNEA is the National Reference Laboratory for Dosimetry, having been designated by the National Institute of Industrial Technology (INTI, depositary of the national metrology standards under Law 19,511/72) for the safekeeping and operation of the national standards for dosimetry (Agreement INTI-CNEA, February 2004). Since its creation, the CRRD has provided, among other services, the calibration of dosemeters used in radiotherapy in terms of air kerma, and since 2002 it has provided calibration in terms of absorbed dose to water. This work presents the resources available to the laboratory, which help to consolidate the quality assurance of the results obtained in dosemeter calibrations. (author)

  12. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures. This goes without mentioning cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. To these cases, computer simulations have proved to be of central importance. The question about their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  13. Revisiting dibenzothiophene thermochemical data: Experimental and computational studies

    International Nuclear Information System (INIS)

    Freitas, Vera L.S.; Gomes, Jose R.B.; Ribeiro da Silva, Maria D.M.C.

    2009-01-01

    Thermochemical data of dibenzothiophene were studied in the present work by experimental techniques and computational calculations. The standard (p⁰ = 0.1 MPa) molar enthalpy of formation, at T = 298.15 K, in the gaseous phase, was determined from the enthalpy of combustion and sublimation, obtained by rotating bomb calorimetry in oxygen, and by Calvet microcalorimetry, respectively. This value was compared with estimated data from G3(MP2)//B3LYP computations and also with the other results available in the literature.

  14. Computational tools for the construction of calibration curves for use in dose calculations in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Oliveira, Alex C.H.; Vieira, Jose W.; Escola Politecnica de Pernambuco , Recife, PE

    2011-01-01

    The realization of tissue inhomogeneity corrections in image-based treatment planning improves the accuracy of radiation dose calculations for patients undergoing external-beam radiotherapy. Before the tissue inhomogeneity correction can be applied, the relationship between the computed tomography (CT) numbers and density must be established. This relationship is typically established by a calibration curve empirically obtained from CT images of a phantom that has several inserts of tissue-equivalent materials, covering a wide range of densities. This calibration curve is scanner-dependent and allows the conversion of CT numbers into densities for use in dose calculations. This paper describes the implementation of the computational tools necessary to construct calibration curves. These tools are used for reading and displaying CT images in DICOM format, determining the mean CT numbers (and their standard deviations) of each tissue-equivalent material, and constructing calibration curves by fits with bilinear equations. All these tools have been implemented in Microsoft Visual Studio 2010 using the C# programming language. (author)
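    A sketch of the bilinear fit these tools perform, under the common assumption that the two line segments join near the CT number of water (HU = 0); the breakpoint handling and names are ours:

```python
import numpy as np

def fit_bilinear_calibration(hu, density, breakpoint=0.0):
    """Fit the CT-number-to-density calibration curve as two line segments
    joined at `breakpoint`. Returns (slope, intercept) for each segment."""
    lo, hi = hu <= breakpoint, hu >= breakpoint
    return np.polyfit(hu[lo], density[lo], 1), np.polyfit(hu[hi], density[hi], 1)

def hu_to_density(h, fit_lo, fit_hi, breakpoint=0.0):
    """Convert a CT number to mass density using the fitted segments."""
    return np.polyval(fit_lo if h <= breakpoint else fit_hi, h)
```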

  15. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  16. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil

    2009-01-01

    Recent developments in biology seem to indicate that the Fluid Mosaic model of the membrane proposed by Singer and Nicolson, with the lipid bilayer functioning only as a medium to support the protein machinery, may be too simple to be realistic. Many protein functions are now known to depend on the compositio.... This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing recent advances in the experimental and computational techniques.... membranes. The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol and from the success of several computational models in predicting their behavior...

  17. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screening top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  18. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB of memory) can be used at low cost for experimental data processing. This report surveys important techniques for A/D and D/A conversion and for the display, storage and transfer of experimental data. Items to be considered in the software are also discussed. Practical software programs written in BASIC and Assembler are given as examples. Here, we present some techniques for obtaining faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. As for calculation, FORTRAN has the best performance, which is comparable to or better than Assembler even on a personal computer. (author)

  19. Spelling is just a click away – a user-centered brain-computer interface including auto-calibration and predictive text entry

    Directory of Open Access Journals (Sweden)

    Tobias eKaufmann

    2012-05-01

    Brain Computer Interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and were multiply proven to be a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates if ERP-BCIs can be handled independently by laymen without expert interference, which is inevitable for establishing BCIs in end-users' daily life situations. Furthermore we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character matrix. N = 19 BCI novices handled a user-centred ERP-BCI application on their own without expert interference. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrated ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character-matrix.

  20. Spelling is Just a Click Away - A User-Centered Brain-Computer Interface Including Auto-Calibration and Predictive Text Entry.

    Science.gov (United States)

    Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea

    2012-01-01

    Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been topic of research for more than 20 years and were multiply proven to be a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates if ERP-BCIs can be handled independently by laymen without expert support, which is inevitable for establishing BCIs in end-user's daily life situations. Furthermore we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates feasibility of auto-calibrating ERP-BCI use, independently by laymen and the strong benefit of integrating predictive text directly into the character-matrix.

  1. Calibration method for a radwaste assay system

    International Nuclear Information System (INIS)

    Dulama, C.; Dobrin, R.; Toma, Al.; Paunoiu, C.

    2004-01-01

    A waste assay system entirely designed and manufactured at the Institute for Nuclear Research is used in the radwaste treatment and conditioning stream to ensure compliance with the radiological requirements of the national repository. Usually, waste assay systems are calibrated using various experimental arrangements, including calibration phantoms. The paper presents a comparative study of the efficiency calibration performed by the shell source method and by a semiempirical computational method based on a Monte Carlo algorithm. (authors)
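    As a toy stand-in for the semiempirical Monte Carlo approach mentioned (the actual code models the full drum and detector geometry), here is a minimal Monte Carlo estimate of the geometric efficiency of a circular detector face for an on-axis point source:

```python
import numpy as np

def geometric_efficiency(detector_radius_cm, distance_cm, n=1_000_000, seed=1):
    """Fraction of isotropically emitted photons whose direction intersects
    a circular detector face of the given radius at the given distance."""
    rng = np.random.default_rng(seed)
    cos_theta = rng.uniform(-1.0, 1.0, n)          # isotropic directions
    cos_max = distance_cm / np.hypot(distance_cm, detector_radius_cm)
    return float(np.mean(cos_theta > cos_max))

print(geometric_efficiency(2.54, 10.0))  # ~0.015 for a 2-inch-diameter face at 10 cm
```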

  2. Computed tomography and plain radiography in experimental fracture healing

    International Nuclear Information System (INIS)

    Braunstein, E.M.; Goldstein, S.A.; Ku, J.; Smith, P.; Matthews, L.S.

    1986-01-01

    We evaluated the relative contribution of plain radiographs and computed tomography to the assessment of fracture healing under experimental circumstances. In 15 sheep, we performed midshaft femoral osteotomies and internal fixation of the resultant segmental fractures. Radiographs were obtained preoperatively and immediately postoperatively. Animals were sacrificed at 3 weeks, 6 weeks, 12 weeks, 24 weeks, and 36 weeks after surgery, and the femoral specimens radiographed. After removal of the internal fixation devices, computed tomographic scans of the specimens were performed. Computed tomography may be of value in the evaluation of fractures of long bones in those cases in which clinical examination and plain radiographs fail to give adequate information as to the status of healing. (orig./SHA)

  3. Experimental and computational investigation of flow of pebbles in a pebble bed nuclear reactor

    Science.gov (United States)

    Khane, Vaibhav B.

    The Pebble Bed Reactor (PBR) is a 4th generation nuclear reactor which is conceptually similar to the moving bed reactors used in the chemical and petrochemical industries. In a PBR core, nuclear fuel in the form of pebbles moves slowly under the influence of gravity. Due to the dynamic nature of the core, a thorough understanding of the slow and dense granular flow of pebbles is required from both a reactor safety and a performance evaluation point of view. In this dissertation, a new integrated experimental and computational study of granular flow in a PBR has been performed. A continuous pebble re-circulation experimental set-up, mimicking the flow of pebbles in a PBR, was designed and developed. Experimental investigation of the flow of pebbles in the mimicked test reactor was carried out for the first time using non-invasive radioactive particle tracking (RPT) and residence time distribution (RTD) techniques to measure the pebble trajectory, velocity, overall/zonal residence times, flow patterns, etc. The tracer trajectory length and overall/zonal residence time were found to increase with a change in the pebble's initial seeding position from the center towards the wall of the test reactor. Overall and zonal average velocities of pebbles were found to decrease from the center towards the wall. Discrete element method (DEM) based simulations of the test reactor geometry were also carried out using the commercial code EDEM(TM), and the simulation results were validated against the obtained benchmark experimental data. In addition, an EDEM(TM) based parametric sensitivity study of interaction properties was carried out, which suggests that static friction characteristics play an important role from a packed/pebble bed structural characterization point of view. To make the RPT technique viable for practical applications and to enhance its accuracy, a novel and dynamic technique for RPT calibration was designed and developed. Preliminary feasibility results suggest that it can be implemented as a non

  4. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Xu, Zhijie [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Lai, Kevin [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Whyatt, Greg [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA; Marcy, Peter W. [Los Alamos National Laboratory, Statistical Sciences Group, Los Alamos NM; Sun, Xin [Oak Ridge National Laboratory, Energy and Transportation Science Division, Oak Ridge TN

    2017-10-24

    The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work has the ability to account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants for chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry’s constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
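
    The calibration step described here can be illustrated with a small sample-and-weight sketch in which prior draws of the rate constant are re-weighted by how well a stand-in flux model reproduces a WWC measurement. The toy flux function, prior widths and synthetic measurement below are invented placeholders, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_flux(k, henry, diff):
    # Toy stand-in for the CFD/WWC response: CO2 flux as a function of the
    # rate constant k and the two transport properties (purely illustrative).
    return henry * np.sqrt(k * diff)

# Priors: transport-property posteriors from "Part 1" represented here as
# assumed normals, and a wide log-uniform prior on the rate constant k.
henry = rng.normal(0.60, 0.05, 20_000)
diff = rng.normal(1.5e-9, 1.0e-10, 20_000)
k = 10 ** rng.uniform(3, 5, 20_000)

y_obs, sigma = 2.4e-3, 2.0e-4   # synthetic WWC flux measurement and error

# Importance weights: Gaussian likelihood of the measurement per sample.
resid = y_obs - model_flux(k, henry, diff)
logw = -0.5 * (resid / sigma) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

print(f"posterior mean rate constant ~ {np.sum(w * k):.3g}")
```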

  5. Thermodynamic properties of indan: Experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2016-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 445) K. • Vapor pressures were measured for the temperature range (338 to 495) K. • Densities at saturation pressure were measured from T = (323 to 523) K. • Computed and experimentally derived properties for ideal gas entropies are in excellent accord. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Measurements leading to the calculation of thermodynamic properties in the ideal-gas state for indan (Chemical Abstracts registry number [496-11-7], 2,3-dihydro-1H-indene) are reported. Experimental methods were adiabatic heat-capacity calorimetry, differential scanning calorimetry, comparative ebulliometry, and vibrating-tube densitometry. Molar thermodynamic functions (enthalpies, entropies, and Gibbs energies) for the condensed and ideal-gas states were derived from the experimental studies at selected temperatures. Statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d, p) level of theory. Computed ideal-gas properties derived with the rigid-rotor harmonic-oscillator approximation are shown to be in excellent accord with ideal-gas entropies derived from thermophysical property measurements of this research, as well as with experimental heat capacities for the ideal-gas state reported in the literature. Literature spectroscopic studies and ab initio calculations report a range of values for the barrier to ring puckering. Results of the present work are consistent with a large barrier that allows use of the rigid-rotor harmonic-oscillator approximation for ideal-gas entropy and heat-capacity calculations, even with the stringent uncertainty requirements imposed by the calorimetric and physical property measurements reported here. All experimental results are compared with property values reported in the literature.

  6. A complete dosimetry experimental program in support of the core characterization and of the power calibration of the CABRI reactor

    Energy Technology Data Exchange (ETDEWEB)

    Rodiac, F.; Hudelot, JP.; Lecerf, J.; Garnier, Y.; Ritter, G. [CEA, DEN, CAD/DER/SRES/LPRE, Cadarache, F-13108 Saint-Paul-lez-Durance, (France); Gueton, O.; Colombier, AC. [CEA, DEN, CAD/DER/SPRC/LPN, Cadarache, F-13108 Saint-Paul-lez-Durance, (France); Domergue, C. [CEA, DEN, CAD/DER/SPEx/LDCI, Cadarache, F-13108 Saint-Paul-lez-Durance, (France)

    2015-07-01

    CABRI is an experimental pulse reactor operated by CEA at the Cadarache research center. Since 1978 the experimental programs have aimed at studying fuel behavior under Reactivity Initiated Accident (RIA) conditions. Since 2003, the facility has been refurbished so that it can host RIA and LOCA (Loss Of Coolant Accident) experiments in prototypical PWR conditions (155 bar, 300 °C). This project is part of a broader scope including an overall facility refurbishment and a safety review. The modification is conducted by the CEA project team and funded by IRSN, which conducts the CIP experimental program in the framework of the OECD/NEA CIP project, financed through an international collaboration. During the reactor restart, commissioning tests are performed for all equipment, systems and circuits of the reactor; in particular, neutronics and power commissioning tests will be performed in 2015 and 2016, respectively. This paper focuses on the design of a complete and original dosimetry program built in support of the CABRI core characterization and power calibration. Each of the above experimental goals is fully described, as well as the target uncertainties and the planned experimental techniques and data treatment. (authors)

  7. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. The project directly supports the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  8. Experimental and computational laser tissue welding using a protein patch.

    Science.gov (United States)

    Small, W; Heredia, N J; Maitland, D J; Eder, D C; Celliers, P M; Da Silva, L B; London, R A; Matthews, D L

    1998-01-01

    An in vitro study of laser tissue welding mediated with a dye-enhanced protein patch was conducted. Fresh sections of porcine aorta were used for the experiments. Arteriotomies were treated using an indocyanine green dye-enhanced collagen patch activated by an 805-nm continuous-wave fiber-delivered diode laser. Temperature histories of the surface of the weld site were obtained using a hollow glass optical fiber-based two-color infrared thermometer. The experimental effort was complemented by simulations with the LATIS (LAser-TISsue) computer code, which uses coupled Monte Carlo, thermal transport, and mass transport models. Comparison of simulated and experimental thermal data indicated that evaporative cooling clamped the surface temperature of the weld site below 100 °C. For fluences of approximately 200 J/cm², peak surface temperatures averaged 74 °C and acute burst strengths consistently exceeded 0.14×10⁶ dyn/cm (hoop tension). The combination of experimental and simulation results showed that the inclusion of water transport and evaporative losses in the computer code has a significant impact on the thermal distributions and hydration levels throughout the tissue volume. The solid-matrix protein patch provided a means of controllable energy delivery and yielded consistently strong welds. © 1998 Society of Photo-Optical Instrumentation Engineers.

  9. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  10. Experimental all-optical one-way quantum computing

    International Nuclear Information System (INIS)

    Prevedel, R.

    2009-01-01

    In recent years, the relatively new field of quantum information processing (QIP) has attracted the attention of many scientists around the world due to its promise of increased computational speed, absolutely secure communication and the potential to simulate complex quantum mechanical systems. At the very essence of this new quantum information technology are two concepts at the heart of quantum mechanics, namely superposition and entanglement. The present Thesis contains the results of four different experiments that were all aimed at the demonstration of an entirely new model for quantum computing with linear optics, the 'one-way' quantum computer. For this purpose a multi-photon entangled state of four photons has been generated via the process of spontaneous parametric down-conversion and by using an interferometric setup. This entangled state acts as a resource that allowed for novel demonstrations of quantum algorithms and relevant experimental techniques. By exploiting the advances developed in both theory and experiment, in this Thesis we report the implementation of fast, active feed-forward that allowed, for the first time, the realization of deterministic linear optics quantum computing at an unprecedented speed. Further we were able to demonstrate the Deutsch algorithm on our one-way quantum computer, an important quantum algorithm that is capable of distinguishing whether a function is constant or balanced. Classically one needs to query the algorithm at least 2^N/2 + 1 times for an N-bit binary input string; in the quantum regime, however, this can be done with one evaluation of the algorithm, independent of the size of the input. In another experiment we succeeded in playing an instance of a quantum game - the so-called Prisoner's dilemma - on our one-way quantum computer. Playing such a game is essentially the execution of a quantum algorithm made up of a distinct set of one- and two-qubit gates. This allows the individual players to increase their

  11. Computational fluid dynamic (CFD) investigation of thermal uniformity in a thermal cycling based calibration chamber for MEMS

    Science.gov (United States)

    Gui, Xulong; Luo, Xiaobing; Wang, Xiaoping; Liu, Sheng

    2015-12-01

    Micro-electro-mechanical systems (MEMS) have become important for many industries, such as automotive, home appliances and portable electronics, especially with the emergence of the Internet of Things. Volume testing with temperature compensation has become essential in order to provide MEMS-based sensors with repeatability, consistency, reliability, and durability at low cost. In particular, in the temperature calibration test, the temperature uniformity of the thermal-cycling calibration chamber becomes more important for obtaining precision sensors, as each sensor is different before calibration. When sensor samples are loaded into the chamber, the chamber door is usually opened, fixtures are placed into the chamber and the samples are mounted on the fixtures. These operations may affect temperature uniformity in the chamber. In order to study how sample loading influences the temperature uniformity in the chamber during calibration testing, numerical simulation work was conducted first. The temperature field and flow field were simulated in an empty chamber, a chamber with an open door, a chamber with samples, and a chamber with fixtures, respectively. The simulations showed that opening the chamber door, the sample size and the number of fixture layers all have effects on the flow field and temperature field. Experimental validation found the measured temperature values to be consistent with the simulated ones.

  12. Absolute calibration of in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    International Nuclear Information System (INIS)

    Mallett, M.W.

    1991-01-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. This method uses magnetic resonance imaging (MRI) to determine the anatomical makeup of an individual. A new MRI technique is also employed that is capable of resolving the fat and water content of the human tissue. This anatomical and biochemical information is used to model a mathematical phantom. Monte Carlo methods are then used to simulate the transport of radiation throughout the phantom. By modeling the detection equipment of the in vivo measurement system into the code, calibration factors are generated that are specific to the individual. Furthermore, this method eliminates the need for surrogate human structures in the calibration process. A demonstration of the proposed method is being performed using a fat/water matrix.

  13. Experimental calibration and validation of sewer/surface flow exchange equations in steady and unsteady flow conditions

    Science.gov (United States)

    Rubinato, Matteo; Martins, Ricardo; Kesserwani, Georges; Leandro, Jorge; Djordjević, Slobodan; Shucksmith, James

    2017-09-01

    The linkage between sewer pipe flow and floodplain flow is recognised to introduce an important source of uncertainty within two-dimensional (2D) urban flood models. This uncertainty is often attributed to the use of empirical hydraulic formulae (the one-dimensional (1D) weir and orifice steady flow equations) to achieve data-connectivity at the linking interface, which require the determination of discharge coefficients. Because of the paucity of high resolution localised data for this type of flow, the current understanding and quantification of a suitable range for those discharge coefficients is somewhat lacking. To fill this gap, this work presents the results acquired from an instrumented physical model designed to study the interaction between a pipe network flow and a floodplain flow. The full range of sewer-to-surface and surface-to-sewer flow conditions at the exchange zone are experimentally analysed in both steady and unsteady flow regimes. Steady state measured discharges are first analysed considering the relationship between the energy heads from the sewer flow and the floodplain flow; these results show that existing weir and orifice formulae are valid for describing the flow exchange for the present physical model, and yield new calibrated discharge coefficients for each of the flow conditions. The measured exchange discharges are also integrated (as a source term) within a 2D numerical flood model (a finite volume solver of the 2D Shallow Water Equations (SWE)), which is shown to reproduce the observed coefficients. This calibrated numerical model is then used to simulate a series of unsteady flow tests reproduced within the experimental facility. Results show that the numerical model overestimated the values of mean surcharge flow rate. This suggests the occurrence of additional head losses in unsteady conditions which are not currently accounted for within flood models calibrated in steady flow conditions.
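
    The 1D exchange formulae referred to above, and the least-squares style of coefficient calibration, can be sketched as follows. The crest width, heads and measured discharges are synthetic stand-ins, not values from the physical model described here.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def weir_q(cd, b, h):
    """1D free-weir exchange discharge: Q = Cd * b * sqrt(2g) * h**1.5."""
    return cd * b * np.sqrt(2 * G) * h ** 1.5

def orifice_q(cd, area, dh):
    """1D submerged-orifice exchange: Q = sign(dh) * Cd * A * sqrt(2g*|dh|)."""
    return np.sign(dh) * cd * area * np.sqrt(2 * G * np.abs(dh))

# Calibrate the weir Cd against measured steady-state exchange discharges.
b = 0.24                                       # crest width, m (assumed)
h = np.array([0.01, 0.02, 0.03, 0.05])         # head above crest, m
q_meas = np.array([0.9e-3, 2.6e-3, 4.7e-3, 10.2e-3])   # m^3/s (synthetic)

# Least-squares Cd for the linear model q = Cd * f(h):
f = b * np.sqrt(2 * G) * h ** 1.5
cd = np.sum(q_meas * f) / np.sum(f * f)
print(f"calibrated weir discharge coefficient Cd ~ {cd:.2f}")
```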

  14. Empirical dual energy calibration (EDEC) for cone-beam computed tomography

    International Nuclear Information System (INIS)

    Stenner, Philip; Berkus, Timo; Kachelriess, Marc

    2007-01-01

    Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if the spectra are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimension should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique it inherently compensates for scatter.
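
    A minimal version of the calibration fit is a linear least-squares problem: choose polynomial coefficients so that the decomposition of the measured attenuation pairs (q1, q2) reproduces the material-selective line integrals known from the thresholded phantom images. The basis order, the synthetic raw data and the stand-in ground truth below are assumptions for illustration.

```python
import numpy as np

def design(q1, q2, order=3):
    """Bivariate polynomial basis in the two measured attenuations."""
    cols = [q1**i * q2**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

# q1, q2: dual-energy raw data of a calibration phantom (synthetic here);
# p1_true: material-selective values from the thresholded phantom images.
rng = np.random.default_rng(2)
q1, q2 = rng.uniform(0, 3, 500), rng.uniform(0, 2, 500)
p1_true = 0.8 * q1 - 0.5 * q2 + 0.05 * q1 * q2   # stand-in ground truth

# General least-squares fit of the decomposition polynomial's coefficients.
A = design(q1, q2)
coef, *_ = np.linalg.lstsq(A, p1_true, rcond=None)

# Decomposition is then just evaluating the polynomial on new raw data.
p1_new = design(q1[:5], q2[:5]) @ coef
print(p1_new)
```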

  15. Electrochemistry of moexipril: experimental and computational approach and voltammetric determination.

    Science.gov (United States)

    Taşdemir, Hüdai I; Kiliç, E

    2014-09-01

    The electrochemistry of moexipril (MOE) was studied by electrochemical methods, with theoretical calculations performed at the B3LYP/6-31+G(d)//AM1 level. Cyclic voltammetric studies were carried out based on a reversible, adsorption-controlled reduction peak at -1.35 V on a hanging mercury drop electrode (HMDE); an irreversible, diffusion-controlled oxidation peak at 1.15 V on a glassy carbon electrode (GCE) was also employed. Potential values are given against Ag/AgCl (3.0 M KCl), and measurements were performed in Britton-Robinson buffer of pH 5.5. Tentative electrode mechanisms were proposed according to the experimental results and ab initio calculations. Square-wave adsorptive stripping voltammetric methods have been developed and validated for quantification of MOE in pharmaceutical preparations. The linear working range was established as 0.03-1.35 microM for the HMDE and 0.2-20.0 microM for the GCE. The limit of quantification (LOQ) was calculated to be 0.032 and 0.47 microM for the HMDE and GCE, respectively. The methods were successfully applied to assay the drug in tablets by calibration and standard addition methods, with good recoveries between 97.1% and 106.2% and relative standard deviations below 10%.
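
    For validation figures of this kind, a common convention computes the detection and quantification limits as 3.3·s/slope and 10·s/slope from a linear calibration curve, as sketched below; the concentrations and peak currents are synthetic, not the paper's data.

```python
import numpy as np

# Calibration standards: concentrations (uM) and peak currents (uA), synthetic.
c = np.array([0.2, 0.5, 1.0, 5.0, 10.0, 20.0])
i_p = np.array([0.041, 0.102, 0.198, 1.010, 1.985, 4.030])

# Linear calibration curve i_p = slope*c + intercept.
slope, intercept = np.polyfit(c, i_p, 1)

# Standard error of the regression (residual scatter about the fit).
resid = i_p - (slope * c + intercept)
s_y = np.sqrt(np.sum(resid**2) / (len(c) - 2))

lod = 3.3 * s_y / slope    # limit of detection
loq = 10 * s_y / slope     # limit of quantification
print(f"LOD ~ {lod:.3f} uM, LOQ ~ {loq:.3f} uM")
```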

  16. Computational/experimental studies of isolated, single component droplet combustion

    Science.gov (United States)

    Dryer, Frederick L.

    1993-01-01

    Isolated droplet combustion processes have been the subject of extensive experimental and theoretical investigations for nearly 40 years. The gross features of droplet burning are qualitatively embodied by simple theories and are relatively well understood. However, there remain significant aspects of droplet burning, particularly its dynamics, for which additional basic knowledge is needed for thorough interpretations and quantitative explanations of transient phenomena. Spherically-symmetric droplet combustion, which can only be approximated under conditions of both low Reynolds and Grashof numbers, represents the simplest geometrical configuration in which to study the coupled chemical/transport processes inherent within non-premixed flames. The research summarized here concerns recent results on isolated, single component, droplet combustion under microgravity conditions, a program pursued jointly with F.A. Williams of the University of California, San Diego. The overall program involves developing and applying experimental methods to study the burning of isolated, single component droplets, in various atmospheres, primarily at atmospheric pressure and below, in both drop towers and aboard space-based platforms such as the Space Shuttle or Space Station. Both computational methods and asymptotic methods, the latter pursued mainly at UCSD, are used in developing the experimental test matrix, in analyzing results, and in extending theoretical understanding. Methanol and the normal alkanes n-heptane and n-decane have been selected as test fuels to study time-dependent droplet burning phenomena. The following sections summarize the Princeton efforts on this program, describe work in progress, and briefly delineate future research directions.

  17. Experimental calibration of forward and inverse neural networks for rotary type magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata; Weber, Felix; Høgsberg, Jan Becker

    2013-01-01

    This paper presents a systematic design and training procedure for the feed-forward backpropagation neural network (NN) modeling of both forward and inverse behavior of a rotary magnetorheological (MR) damper based on experimental data. For the forward damper model, with damper force as output...

  18. Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.

    Science.gov (United States)

    Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R

    2012-01-01

    We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a hybrid BCI system consisting of an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (Integra Mouse mouth joystick). While there is extensive literature describing the merit of hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness and smart home control, there is up to now no system that includes all these concepts in one integrated, easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot study, 3 healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse and reached average positive predictive values (PPV) of 72% and 98%, respectively. Based on what we learned here we are planning to improve the system for a test with a larger number of healthy volunteers so we can soon bring the system to benefit individuals with severe functional disability.

  19. A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison

    Science.gov (United States)

    2017-03-01

    computed at various pitch angles through changes in actuator length. The linear spring stiffness was estimated by using the internal volume of the... Finally, the balance loads are not only induced by the rotor hub loads, but also by loads transmitted via the pitch links to the swashplate. Thus

  20. Assembly and calibration of a new experimental apparatus for production and utilization of capture gamma rays

    International Nuclear Information System (INIS)

    Semmler, R.

    1993-01-01

    A new experimental apparatus has been mounted at the tangential beam tube BH4/12 of the IPEN IEA-R1 (2 MW) reactor for the production and utilization of capture gamma rays. In this type of experiment, monochromatic gamma radiation, with an energy resolution of about 10 eV, is produced by thermal neutron capture in several materials placed near the reactor core. By changing the target material it was possible to obtain up to 30 gamma lines in the 5 to 11 MeV energy range, and so the present experimental arrangement may be considered an excellent gamma ray source for photonuclear reaction studies at low excitation energies. (author)

  1. Experimental and computational investigation of lateral gauge response in polycarbonate

    Science.gov (United States)

    Eliot, Jim; Harris, Ernst; Hazell, Paul; Appleby-Thomas, Gareth; Winter, Ronald; Wood, David; Owen, Gareth

    2011-06-01

    Polycarbonate's use in personal armour systems means its high strain-rate response has been extensively studied. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges need to be embedded in a central (typically epoxy) interlayer - an inherently invasive approach. Recently, research has suggested that in metal systems the interlayer/target impedance mismatch may contribute to observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. This work extends previous efforts such that similar impedance exists at the interlayer/matrix (target) interface. Further, experiments and simulations are presented investigating the effects of a "dry joint" in polycarbonate, in which no encapsulating medium is employed.

  2. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation with QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical optimization methods such as GPU-accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.

  3. Helicopter fuselage drag - combined computational fluid dynamics and experimental studies

    Science.gov (United States)

    Batrakov, A.; Kusyumov, A.; Mikhailov, S.; Pakhov, V.; Sungatullin, A.; Valeev, M.; Zherekhov, V.; Barakos, G.

    2015-06-01

    In this paper, wind tunnel experiments are combined with Computational Fluid Dynamics (CFD) aiming to analyze the aerodynamics of realistic fuselage configurations. A development model of the ANSAT aircraft and an early model of the AKTAI light helicopter were employed. Both models were tested at the subsonic wind tunnel of KNRTU-KAI for a range of Reynolds numbers and pitch and yaw angles. The force balance measurements were complemented by particle image velocimetry (PIV) investigations for the cases where the experimental force measurements showed substantial unsteadiness. The CFD results were found to be in fair agreement with the test data and revealed some flow separation at the rear of the fuselages. Once confidence on the CFD method was established, further modifications were introduced to the ANSAT-like fuselage model to demonstrate drag reduction via small shape changes.

  4. Experimental and computational studies on a gasifier based stove

    International Nuclear Information System (INIS)

    Varunkumar, S.; Rajan, N.K.S.; Mukunda, H.S.

    2012-01-01

    Highlights: ► A simple method to calculate the fraction of HHC was devised. ► ηg for the stove is the same as that of a downdraft gasifier. ► Gas from the stove contains 5.5% CH4 equivalent of HHC. ► The effect of vessel size on utilization efficiency is brought out clearly. ► The contribution of radiative heat transfer from the char bed to efficiency is 6%. - Abstract: The work reported here is concerned with a detailed thermochemical evaluation of the flaming mode behaviour of a gasifier based stove. Determination of the gas composition over the fuel bed, and of surface and gas temperatures in the gasification process, constitute the principal experimental features. A simple atomic balance for the gasification reaction, combined with the gas composition from the experiments, is used to determine the CH4 equivalent of higher hydrocarbons and the gasification efficiency (ηg). The components of utilization efficiency, namely gasification–combustion and heat transfer, are explored. Reactive flow computational studies using the measured gas composition over the fuel bed are used to simulate the thermochemical flow field and heat transfer to the vessel; hitherto ignored vessel size effects in the extraction of heat from the stove are established clearly. The overall flaming mode efficiency of the stove is 50–54%; the convective and radiative components of heat transfer are established to be 45–47% and 5–7%, respectively. The efficiency estimates from reacting computational fluid dynamics (RCFD) compare well with experiments.
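
    The efficiency step can be illustrated with a small cold-gas calculation from a measured dry-gas composition. The heating values are rounded literature numbers; the mole fractions, gas yield and fuel heating value are assumed placeholders, not the paper's measurements.

```python
# Lower heating values, MJ per mol (rounded literature values).
LHV = {"CO": 0.283, "H2": 0.242, "CH4": 0.803}

# Dry-gas mole fractions over the fuel bed (illustrative numbers).
y = {"CO": 0.19, "H2": 0.18, "CH4": 0.055, "CO2": 0.12, "N2": 0.455}

gas_yield = 2.5    # mol product gas per mol fuel-carbon equivalent (assumed)
lhv_fuel = 0.45    # MJ per mol-C equivalent of the solid fuel (assumed)

# Gasification (cold-gas) efficiency: chemical energy in gas / energy in fuel.
lhv_gas = sum(y[s] * LHV[s] for s in LHV)
eta_g = gas_yield * lhv_gas / lhv_fuel
print(f"gasification efficiency ~ {eta_g:.0%}")
```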

  5. Experimental demonstration of reservoir computing on a silicon photonics chip

    Science.gov (United States)

    Vandoorne, Kristof; Mechet, Pauline; van Vaerenbergh, Thomas; Fiers, Martin; Morthier, Geert; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2014-03-01

    In today’s age, companies employ machine learning to extract information from large quantities of data. One of those techniques, reservoir computing (RC), is a decade old and has achieved state-of-the-art performance for processing sequential data. Dedicated hardware realizations of RC could enable speed gains and power savings. Here we propose the first integrated passive silicon photonics reservoir. We demonstrate experimentally and through simulations that, thanks to the RC paradigm, this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit s⁻¹, without power consumption in the reservoir. It can also perform isolated spoken digit recognition. Our realization exploits optical phase for computing. It is scalable to larger networks and much higher bitrates, up to speeds >100 Gbit s⁻¹. These results pave the way for the application of integrated photonic RC for a wide range of applications.

  6. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    can be used to understand how an amino acid change affects the protein. The experimental methods that provide the most detailed structural information on proteins are X-ray crystallography and NMR spectroscopy. However, these methods are labor intensive and currently cannot be carried out on a genomic scale. Nonetheless, Structural Genomics projects are being pursued by more than a dozen groups and consortia worldwide and as a result the number of experimentally determined structures is rising exponentially. Based on the expectation that protein structures will continue to be determined at an ever-increasing rate, reliable structure prediction schemes will become increasingly valuable, leading to information on protein function and disease for many different proteins. Given known genetic variability and experimentally determined protein structures, can we accurately predict the effects of single amino acid substitutions? An objective assessment of this question would involve comparing predicted and experimentally determined structures, which thus far has not been rigorously performed. The completed research leveraged existing expertise at LLNL in computational and structural biology, as well as significant computing resources, to address this question.

  7. Studies on the calibration of mammography automatic exposure mode with computed radiology

    International Nuclear Information System (INIS)

    Zhu Hongzhou; Shao Guoliang; Shi Lei; Liu Qing

    2010-01-01

    Objective: To optimize image quality and radiation dose by calibrating the automatic exposure mode of a computed radiography (CR) mammography system against the automatic exposure control of a film-screen system. Methods: The film-screen system (28 kV) was used to perform automatic exposure of a 40 mm plexiglass phantom to obtain the standard exposure dose; the CR exposure mode based on LgM = 2.0 was then calibrated in 10 steps. A mammary gland phantom (Fluke NA18-220) was examined with CR (26, 28, and 30 kV) using the corrected automatic exposure mode, and the exposure values (mAs) were recorded. The CR images were evaluated in a double-blind manner by 4 radiologists according to the American College of Radiology (ACR) standard. Results: Taking as reference the CR automatic exposure dose, which is higher than the traditional film-screen exposure, the calibration of mammography automatic exposure was accomplished, and the calibrated mode scored better than the ACR requirements. Conclusions: The comparative study showed improvement in acquiring high-quality images and a reduction of radiation dose. The corrected mammography automatic exposure mode may be a better method for clinical use. (authors)

  8. COMPUTATIONAL AND EXPERIMENTAL MODELING OF SLURRY BUBBLE COLUMN REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Paul C.K. Lam; Isaac K. Gamwo; Dimitri Gidaspow

    2002-05-01

    The objective of this study was to develop a predictive, experimentally verified computational fluid dynamics (CFD) model for gas-liquid-solid flow. A three dimensional transient computer code for the coupled Navier-Stokes equations for each phase was developed and is appended in this report. The principal input to the model is the viscosity of the particulate phase, which was determined from a measurement of the random kinetic energy of the 800 micron glass beads and a Brookfield viscometer. The details are presented in the attached paper titled ''CFD Simulation of Flow and Turbulence in a Slurry Bubble Column''. This phase of the work is in press in a refereed journal (AIChE Journal, 2002) and was presented at the Fourth International Conference on Multiphase Flow (ICMF 2001) in New Orleans, May 27-June 1, 2001 (Paper No. 909). The computed time averaged particle velocities and concentrations agree with Particle Image Velocimetry (PIV) measurements of velocities and concentrations, obtained using a combination of gamma-ray and X-ray densitometers, in a slurry bubble column operated in the bubbly-coalesced fluidization regime with continuous flow of water. Both the experiment and the simulation show a down-flow of particles in the center of the column, up-flow near the walls, and nearly uniform particle concentration. Normal and shear Reynolds stresses were constructed from the computed instantaneous particle velocities. The PIV measurement and the simulation produced similar, nearly flat horizontal profiles of turbulent kinetic energy of particles. To better understand turbulence we studied fluidization in a liquid-solid bed. This work was also presented at the Fourth International Conference on Multiphase Flow (ICMF 2001, Paper No. 910). To understand turbulence in risers, measurements were done in the IIT riser with 530 micron glass beads using a PIV

  9. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Xu, Zhijie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Lai, Kevin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Whyatt, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Energy and Environment Directorate; Marcy, Peter W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sun, Xin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Energy and Transportation Science Division

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants for chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  10. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Science.gov (United States)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, i.e. willow chips. Because of its irregular shape and varying particle size distribution, calculation of the angle becomes less straightforward and less decisive. In previous studies only one section of those uneven slopes is chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on a new technology, 3D scanning, which was used to digitize the surface of the heaps and generate a point cloud. Two tangential lines of any selected section were then calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile could be derived. As the next step, a certain number of sections were stochastically selected, and the calculations were repeated correspondingly in order to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By analysing the similarities and differences of these samples, the reliability of the proposed method was verified. These results provide a realistic criterion to reduce the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in the future.
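
    The tangent-line step lends itself to a short sketch: fit the left and right flanks of a scanned heap section by linear least squares and convert the slopes to angles. The section data below are synthetic (a 30° heap with scan noise), and the apex position is assumed known.

```python
import numpy as np

def repose_angles(x, z, apex_x):
    """Fit left/right tangent lines of a heap cross-section by least squares
    and return the two angles of repose in degrees."""
    left, right = x <= apex_x, x > apex_x
    a_l, _ = np.polyfit(x[left], z[left], 1)    # left-flank slope
    a_r, _ = np.polyfit(x[right], z[right], 1)  # right-flank slope
    return np.degrees(np.arctan(a_l)), np.degrees(np.arctan(-a_r))

# Synthetic section through a heap: ~30 degree slopes plus scan noise.
rng = np.random.default_rng(3)
x = np.linspace(-0.5, 0.5, 200)
z = 0.3 - np.abs(x) * np.tan(np.radians(30)) + rng.normal(0, 0.003, x.size)
print(repose_angles(x, z, apex_x=0.0))   # ~ (30.0, 30.0)
```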

  11. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Directory of Open Access Journals (Sweden)

    Tan Yuan

    2017-01-01

    Full Text Available As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, i.e. willow chips. Because of its irregular shape and varying particle size distribution, calculation of the angle becomes less straightforward and less decisive. In previous studies only one section of those uneven slopes is chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on a new technology, 3D scanning, which was used to digitize the surface of the heaps and generate a point cloud. Two tangential lines of any selected section were then calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile could be derived. As the next step, a certain number of sections were stochastically selected, and the calculations were repeated correspondingly in order to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By analysing the similarities and differences of these samples, the reliability of the proposed method was verified. These results provide a realistic criterion to reduce the deviation between experiment and simulation caused by the random selection of a single angle, which will be compared with simulation results in the future.

  12. Computational modeling and experimental characterization of indoor aerosol transport

    International Nuclear Information System (INIS)

    Konecni, Snezana; Whicker, Jeffrey J.; Martin, Richard A.

    2002-01-01

    When a hazardous aerosol or gas is inadvertently or deliberately released in an occupied facility, the airborne material presents a hazard to people. Inadvertent accidents and exposures continue to occur at Los Alamos and other nuclear facilities despite state-of-the-art engineering and administrative controls and heightened diligence. Despite the obvious need in occupational settings and for homeland defense, the body of research in hazardous aerosol dispersion and control in large, complex, ventilated enclosures is extremely limited. The science governing the generation, transport, inhalation, and detection of airborne hazards is lacking and must be developed to where it can be used by engineers or safety professionals in the prediction of worker exposure, in the prevention of accidents, or in the mitigation of terrorist actions. In this study, a commercial computational fluid dynamics (CFD) code, CFX5.4, and experiments were used to assess flow field characteristics and to investigate aerosol release and transport in a large, ventilated workroom in a facility at the Savannah River Site. Steady state CFD results illustrating a complex, ventilation-induced flow field with vortices, velocity gradients, and quiet zones are presented, as are time-dependent CFD and experimental aerosol dispersion results. The comparison of response times between CFD and experimental results was favorable. It is believed that future applications of CFD and experiments can have a favorable impact on the design of ventilation (HVAC) systems and worker safety with consideration to facility costs. Ultimately, statistical methods will be used in conjunction with CFD calculations to determine the optimal number and location of detectors, as well as optimal egress routes in the event of a release.

  13. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    Science.gov (United States)

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  14. Experimental and computational prediction of glass transition temperature of drugs.

    Science.gov (United States)

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein the support vector regression gave the best result with RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, already before compound synthesis.
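
    A minimal version of the Tm-based model is an ordinary linear regression Tg ≈ a·Tm + b (the classical rule of thumb puts Tg near two-thirds to three-quarters of Tm in kelvin). The data below are synthetic placeholders, not the 71-compound set used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training data: melting points (K) and Tg following the
# empirical ~2/3 rule plus scatter (assumed, for illustration only).
tm = rng.uniform(350, 550, 71)
tg = 0.70 * tm + rng.normal(0, 15, tm.size)

# Fit the linear relation Tg = a*Tm + b by least squares.
a, b = np.polyfit(tm, tg, 1)
tg_pred = a * tm + b
rmse = np.sqrt(np.mean((tg - tg_pred) ** 2))
print(f"Tg ~ {a:.2f}*Tm + {b:.1f}, RMSE = {rmse:.1f} K")
```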

  15. Experimental validation of a computer simulation of radiographic film

    International Nuclear Information System (INIS)

    Goncalves, Elicardo A. de S.; Azeredo, Raphaela; Assis, Joaquim T.; Anjos, Marcelino J. dos; Oliveira, Davi F.; Oliveira, Luis F. de

    2015-01-01

    In radiographic films, the behavior of the characteristic curve is very important for image quality. Digitization/visualization is always performed by light transmission, and the characteristic curve describes the behavior of optical density as a function of exposure. In a first approach, a Monte Carlo computer simulation that builds a Hurter-Driffield curve from a stochastic model produced the same known shape, but some behaviors, like the influence of silver grain size, were not expected. A real H and D curve was built by exposing films, developing them and measuring the optical density. When comparing the model results with the real curve, trying to fit them and estimate some parameters, a difference in the high-exposure region showed a divergence between the models and the experimental data. Since the optical density is a function of the metallic silver generated by chemical development, direct proportionality was assumed, but the results suggest a limitation in this proportionality. In fact, when the optical density was replaced by another way to measure silver concentration, namely X-ray fluorescence, the new results agreed with the models. Therefore, overexposed films can contain areas with different silver concentrations that cannot be distinguished, because the optical density measurement is limited. Mapping the silver concentration over the film area can be a solution to reveal these dark images, and X-ray fluorescence has shown to be the best way to perform this new form of film digitization. (author)
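
    The saturation effect described here, where distinct silver concentrations become indistinguishable in transmitted-light optical density, can be sketched as follows; the attenuation constant and the densitometer ceiling are assumed values.

```python
import numpy as np

# Silver mass per unit area (arbitrary units) across an exposure series.
silver = np.linspace(0, 10, 50)

# If transmitted light falls as exp(-k*silver), OD grows linearly with
# silver, since OD = -log10(T).
k = 1.2
od_true = (k * silver) / np.log(10)

# A transmission densitometer saturates near its dynamic-range ceiling
# (~4 OD assumed here), collapsing all higher concentrations to one value.
od_measured = np.minimum(od_true, 4.0)

# Beyond the ceiling, distinct silver levels map to a single OD reading,
# which is why XRF (sensitive to silver directly) keeps discriminating.
print(f"samples collapsed at the OD ceiling: {np.sum(od_true > 4.0)}")
```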

  16. Experimental validation of a computer simulation of radiographic film

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil). Laboratorio de Instrumentacao e Simulacao Computacional Cientificas Aplicadas; Azeredo, Raphaela, E-mail: raphaelaazeredo@yahoo.com.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Instituto de Fisica Armando Dias Tavares. Programa de Pos-Graduacao em Fisica; Assis, Joaquim T., E-mail: joaquim@iprj.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico; Anjos, Marcelino J. dos; Oliveira, Davi F.; Oliveira, Luis F. de, E-mail: marcelin@uerj.br, E-mail: davi.oliveira@uerj.br, E-mail: lfolive@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Instituto de Fisica Armando Dias Tavares. Departamento de Fisica Aplicada e Termodinamica

    2015-07-01

    In radiographic films, the behavior of the characteristic curve is very important for image quality. Digitization/visualization is always performed by light transmission, and the characteristic curve describes the behavior of optical density as a function of exposure. In a first approach, a Monte Carlo computer simulation that builds a Hurter-Driffield curve from a stochastic model produced the same known shape, but some behaviors, like the influence of silver grain size, were not expected. A real H and D curve was built by exposing films, developing them and measuring the optical density. When comparing the model results with the real curve, trying to fit them and estimate some parameters, a difference in the high-exposure region showed a divergence between the models and the experimental data. Since the optical density is a function of the metallic silver generated by chemical development, direct proportionality was assumed, but the results suggest a limitation in this proportionality. In fact, when the optical density was replaced by another way to measure silver concentration, namely X-ray fluorescence, the new results agreed with the models. Therefore, overexposed films can contain areas with different silver concentrations that cannot be distinguished, because the optical density measurement is limited. Mapping the silver concentration over the film area can be a solution to reveal these dark images, and X-ray fluorescence has shown to be the best way to perform this new form of film digitization. (author)

  17. Computational Optimization of a Natural Laminar Flow Experimental Wing Glove

    Science.gov (United States)

    Hartshom, Fletcher

    2012-01-01

    Computational optimization of a natural laminar flow experimental wing glove that is mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating a two-dimensional optimized airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three dimensional flow effects such as cross flow due to wing sweep as well as engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that with the appropriate inputs, the optimizer is able to match the two dimensional pressure distributions practically across the entire span of the wing glove. This allows the experiment to have a much higher probability of achieving a large extent of natural laminar flow in flight.

  18. A novel angle computation and calibration algorithm of bio-inspired sky-light polarization navigation sensor.

    Science.gov (United States)

    Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao

    2014-09-15

Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, the Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but the former suffers from the accumulative error of inertial sensors and the latter cannot be used in a satellite-denied environment. The remarkable navigation ability of animals shows that the polarization pattern of the sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. In contrast to previous approaches, we utilize all the outputs of the POLNS to compute the input polarization angle, based on least squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. The derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulations and real-data tests were carried out to compare our algorithms with several existing algorithms. The comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
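The paper's exact sensor model is not reproduced here, but the idea of estimating the angle from all channel outputs at once by linear least squares can be sketched with a generic Malus-type channel model; the analyzer angles, degree of polarization, and noise level are illustrative assumptions.

```python
import numpy as np

def polarization_angle(intensities, analyzer_angles):
    """Estimate the input polarization angle phi from all channels at once.
    Channel model: I_i = a + u*cos(2*alpha_i) + v*sin(2*alpha_i), where
    u = b*cos(2*phi) and v = b*sin(2*phi); hence phi = 0.5*atan2(v, u).
    Solving for (a, u, v) is an ordinary linear least-squares problem."""
    alpha = np.asarray(analyzer_angles)
    A = np.column_stack([np.ones_like(alpha),
                         np.cos(2 * alpha),
                         np.sin(2 * alpha)])
    (a, u, v), *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    return 0.5 * np.arctan2(v, u)

# Synthetic check: six channels, true angle 25 degrees, small noise.
rng = np.random.default_rng(0)
alphas = np.deg2rad([0, 30, 60, 90, 120, 150])
phi_true = np.deg2rad(25.0)
I = 1.0 + 0.9 * np.cos(2 * (phi_true - alphas)) + 0.01 * rng.standard_normal(6)
print(np.rad2deg(polarization_angle(I, alphas)))   # ~25
```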

  19. Calibration of UAS imagery inside and outside of shadows for improved vegetation index computation

    Science.gov (United States)

    Bondi, Elizabeth; Salvaggio, Carl; Montanaro, Matthew; Gerace, Aaron D.

    2016-05-01

Vegetation health and vigor can be assessed with data from multi- and hyperspectral airborne and satellite-borne sensors using index products such as the normalized difference vegetation index (NDVI). Recent advances in unmanned aerial systems (UAS) technology have created the opportunity to access these same image data sets in a more cost-effective manner with higher temporal and spatial resolution. Another advantage of these systems is the ability to gather data in almost any weather condition, including complete cloud cover, under which data have not previously been available from traditional platforms. The ability to collect under these varied meteorological and temporal conditions presents researchers and producers with many new challenges. In particular, cloud shadows and self-shadowing by vegetation must be taken into consideration in imagery collected from UAS platforms to avoid variation in NDVI due to changes in illumination within a single scene and between collection flights. A workflow is presented to compensate for variations in vegetation indices due to shadows and variation in illumination levels in high-resolution imagery collected from UAS platforms. Other calibration methods that producers may currently be utilizing produce NDVI products that still contain shadow boundaries and variations due to illumination, whereas the final NDVI mosaic from this workflow does not.
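The workflow itself is not reproduced in this record, but the index it calibrates is standard: NDVI = (NIR − Red)/(NIR + Red), computed per pixel. A minimal sketch, with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index, computed per pixel.
    nir, red: calibrated reflectance bands as float arrays.
    eps guards against division by zero over very dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Illustrative 2x2 scene: healthy vegetation (top) vs. bare soil (bottom).
nir = np.array([[0.45, 0.50], [0.30, 0.25]])
red = np.array([[0.05, 0.06], [0.20, 0.22]])
print(ndvi(nir, red))   # high values where vegetation is vigorous
```

Because the ratio depends directly on the per-band radiometry, any uncorrected illumination change across a shadow boundary shifts NDVI even when the vegetation itself is unchanged, which is the artifact the workflow removes.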

  20. Computational and experimental study of copper–gold nitride formation

    Energy Technology Data Exchange (ETDEWEB)

    Ponce-Cázares, I., E-mail: iponce@cnyn.unam.mx [Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Carretera Tijuana-Ensenada No. 3918, A. Postal 360, 22860 Ensenada, B.C. (Mexico); Soto, G., E-mail: gerardo@cnyn.unam.mx [Universidad Nacional Autónoma de México, Centro de Nanociencias y Nanotecnología, Km. 107 Carretera Tijuana-Ensenada, C.P. 22860 Ensenada, B.C. (Mexico); Moreno-Armenta, Ma. Guadalupe, E-mail: moreno@cnyn.unam.mx [Universidad Nacional Autónoma de México, Centro de Nanociencias y Nanotecnología, Km. 107 Carretera Tijuana-Ensenada, C.P. 22860 Ensenada, B.C. (Mexico); De la Cruz, W., E-mail: wencel@cnyn.unam.mx [Universidad Nacional Autónoma de México, Centro de Nanociencias y Nanotecnología, Km. 107 Carretera Tijuana-Ensenada, C.P. 22860 Ensenada, B.C. (Mexico)

    2015-08-25

Highlights: • The new Cu₃Au nitride compound was successfully grown by the sputtering method. • The material is Cu₃Au₀.₅N with a cubic structure (Pm3m space group), in which gold randomly occupies half of the 1a Wyckoff sites. • The material is a pseudo-gap conductor with conductance as good as that of a heavily doped semiconductor at room temperature. - Abstract: This work investigates the formation of a Cu₃Au nitride alloy using experimental and computational methods. For this purpose, we prepared a custom-made Cu–Au target and sputtered it with argon ions in the presence of molecular nitrogen, producing a film on Corning glass. The film was analyzed using spectroscopic and diffraction techniques. The four-point-probe method and Tauc plots were applied to determine the electrical and optical properties of the thin film. Using first-principles calculations, a structural model was constructed that validated our observations. The crystalline system is cubic (Pm3m space group) with half of the 1a sites randomly filled by Au. The composition was close to Cu₃Au₀.₅N. In agreement with the electrical measurements and calculations, the Cu₃Au₀.₅N band structure is strongly affected by the Au incorporation: the electrical resistivity and carrier density are in the 10⁻³ Ω cm and 10²² cm⁻³ ranges, respectively, and the optical gap decreased by 0.61 eV with respect to Cu₃N. The material is a pseudo-gap conductor with conductance as good as that of a heavily doped semiconductor at room temperature; this should give it great potential for use in the optoelectronics industry.
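As a quick consistency check on the quoted transport figures (not a calculation from the paper), the single-band Drude relation σ = neμ turns the order-of-magnitude resistivity and carrier density into an implied mobility:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def drude_mobility(resistivity_ohm_cm, carrier_density_cm3):
    """Single-band Drude estimate: sigma = n*e*mu  =>  mu = 1/(rho*n*e).
    Returns mobility in cm^2/(V s)."""
    return 1.0 / (resistivity_ohm_cm * carrier_density_cm3 * E_CHARGE)

# Order-of-magnitude figures quoted for Cu3Au0.5N:
mu = drude_mobility(1e-3, 1e22)
print(f"implied mobility ~ {mu:.2f} cm^2/(V s)")  # ~0.6, heavily-doped-semiconductor-like
```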

  1. Pathophysiological study of experimental hydrocephalus with computed tomography (CT) scan

    International Nuclear Information System (INIS)

    Murata, Takaho

    1980-01-01

In order to investigate the pathophysiological changes during the development of hydrocephalus, observations with computed tomography (CT) scans and monitoring of intracranial epidural pressure (EDP) were performed in a series of kaolin-induced canine hydrocephalus. According to a "volume index" of the ventricles, calculated from printed-out CT numbers, great individual variation was recognized in the degree of ventricular enlargement as well as in the EDP. This is thought to be due to differences in the type of hydrocephalus, induced by discrepancies in the site and degree of the obstruction caused by kaolin. Periventricular lucencies (PVLs) of various degrees were also detected on CT scans of experimental hydrocephalus. PVL was always marked at the superolateral angle of the frontal horn of the lateral ventricles, and ranged in degree from severe to mild. PVLs were distinct in the acute stage with high EDP, gradually became indistinct, and tended to disappear thereafter as EDP decreased. They disappeared immediately after a shunting operation. The pathogenesis of PVL was investigated by histological examination, as well as by contrast enhancement, metrizamide ventriculography, analysis of linear density profiles, and measurement of regional cerebral blood flow (rCBF). Consequently, PVLs in hydrocephalus are considered to represent acute edema or chronic CSF retention in the periventricular white matter caused by increased water content. In other words, they can be regarded as a CT sign of present or preceding intraventricular hypertension, and may become a clinical indication for a shunting operation. (author)

  2. Computational and experimental study of copper–gold nitride formation

    International Nuclear Information System (INIS)

    Ponce-Cázares, I.; Soto, G.; Moreno-Armenta, Ma. Guadalupe; De la Cruz, W.

    2015-01-01

Highlights: • The new Cu₃Au nitride compound was successfully grown by the sputtering method. • The material is Cu₃Au₀.₅N with a cubic structure (Pm3m space group), in which gold randomly occupies half of the 1a Wyckoff sites. • The material is a pseudo-gap conductor with conductance as good as that of a heavily doped semiconductor at room temperature. - Abstract: This work investigates the formation of a Cu₃Au nitride alloy using experimental and computational methods. For this purpose, we prepared a custom-made Cu–Au target and sputtered it with argon ions in the presence of molecular nitrogen, producing a film on Corning glass. The film was analyzed using spectroscopic and diffraction techniques. The four-point-probe method and Tauc plots were applied to determine the electrical and optical properties of the thin film. Using first-principles calculations, a structural model was constructed that validated our observations. The crystalline system is cubic (Pm3m space group) with half of the 1a sites randomly filled by Au. The composition was close to Cu₃Au₀.₅N. In agreement with the electrical measurements and calculations, the Cu₃Au₀.₅N band structure is strongly affected by the Au incorporation: the electrical resistivity and carrier density are in the 10⁻³ Ω cm and 10²² cm⁻³ ranges, respectively, and the optical gap decreased by 0.61 eV with respect to Cu₃N. The material is a pseudo-gap conductor with conductance as good as that of a heavily doped semiconductor at room temperature; this should give it great potential for use in the optoelectronics industry.

  3. Experimental investigation of liquid chromatography columns by means of computed tomography

    DEFF Research Database (Denmark)

    Astrath, D.U.; Lottes, F.; Vu, Duc Thuong

    2007-01-01

    The efficiency of packed chromatographic columns was investigated experimentally by means of computed tomography (CT) techniques. The measurements were carried out by monitoring tracer fronts in situ inside the chromatographic columns. The experimental results were fitted using the equilibrium di...

  4. Energy-resolved computed tomography: first experimental results

    International Nuclear Information System (INIS)

    Shikhaliev, Polad M

    2008-01-01

First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. Existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, x-ray photons with lower energies are scored less than those with higher energies. This underestimates the contribution of lower-energy photons, which would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that increases as the x-ray energy decreases. This can be done by detecting each x-ray photon separately and measuring its energy. The energy-selective CT data can then be saved, and any weighting factor can be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom 10.2 cm in diameter, made from tissue-equivalent material, was used for CT imaging. The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy-selective CT data were acquired and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated from a single CT scan at a fixed x-ray tube voltage. For material decomposition, the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by factors of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT. The x-ray energy weighting was also applied
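The record does not state the weighting factor used; a common choice in the photon-counting literature is w(E) proportional to E⁻³, which the sketch below applies to synthetic per-bin counts and contrasts with an unweighted (flat) photon-counting signal.

```python
import numpy as np

def weighted_projection(counts, bin_centers_keV, exponent=3.0):
    """Energy-weighted projection value for one detector pixel.
    counts          : photons recorded in each energy bin
    bin_centers_keV : representative energy of each bin
    w(E) ~ E^-3 boosts the low-energy photons that carry the most
    contrast; a charge-integrating detector effectively uses w(E) ~ E."""
    w = bin_centers_keV ** (-exponent)
    return np.sum(w * counts) / np.sum(w)

bins = np.array([30.0, 50.0, 70.0, 90.0, 110.0])  # synthetic bins, 120 kVp beam
counts = np.array([800.0, 1500.0, 1200.0, 700.0, 250.0])
print(f"photon-counting (flat) : {counts.mean():.1f}")
print(f"energy-weighted (E^-3) : {weighted_projection(counts, bins):.1f}")
```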

  5. The stability of mechanical calibration for a kV cone beam computed tomography system integrated with linear accelerator

    International Nuclear Information System (INIS)

    Sharpe, Michael B.; Moseley, Douglas J.; Purdie, Thomas G.

    2006-01-01

The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed isocenter. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated that the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets

  6. Experimental determinations of correction factors as a function of vertical displacement of radioactive sources in the radionuclide calibrators of the CRCN-NE, Pernambuco, Brazil

    International Nuclear Information System (INIS)

Fragoso, Maria da Conceicao de Farias; Albuquerque, Antonio Morais de Sa; Lacerda, Isabelle Viviane Batista de; Oliveira, Mercia L.

    2011-01-01

In nuclear medicine, accurate knowledge of the activity of the radiopharmaceuticals to be administered to patients is an important factor in ensuring the success of diagnosis or therapy. The activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. These instruments are sensitive to changes in the radioactive sample geometry and in its position within the chamber well. The purpose of this work was to evaluate the behavior of radionuclide calibrators by means of vertical displacement of radioactive sources in the well and to determine experimentally the correction factors for each radionuclide, identifying the specific positions at which the measurements must be made to ensure the highest sensitivity. The highest activity was obtained between 6 and 8 cm from the bottom of the well for both radionuclide calibrators used in this work. (author)

  7. Experimental determinations of correction factors as a function of vertical displacement of radioactive sources in the radionuclide calibrators of the CRCN-NE, Pernambuco, Brazil

    Energy Technology Data Exchange (ETDEWEB)

Fragoso, Maria da Conceicao de Farias; Albuquerque, Antonio Morais de Sa; Lacerda, Isabelle Viviane Batista de; Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, MG (Brazil)

    2011-07-01

In nuclear medicine, accurate knowledge of the activity of the radiopharmaceuticals to be administered to patients is an important factor in ensuring the success of diagnosis or therapy. The activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. These instruments are sensitive to changes in the radioactive sample geometry and in its position within the chamber well. The purpose of this work was to evaluate the behavior of radionuclide calibrators by means of vertical displacement of radioactive sources in the well and to determine experimentally the correction factors for each radionuclide, identifying the specific positions at which the measurements must be made to ensure the highest sensitivity. The highest activity was obtained between 6 and 8 cm from the bottom of the well for both radionuclide calibrators used in this work. (author)

  8. Confirmation and calibration of computer modeling of tsunamis produced by Augustine volcano, Alaska

    Science.gov (United States)

    Beget, James E.; Kowalik, Zygmunt

    2006-01-01

Numerical modeling has been used to calculate the characteristics of a tsunami generated by a landslide into Cook Inlet from Augustine Volcano. The modeling predicts travel times of ca. 50-75 minutes to the nearest populated areas, and indicates that significant wave amplification occurs near Mt. Iliamna on the western side of Cook Inlet, and near the Nanwalek and Homer-Anchor Point areas on the east side of Cook Inlet. Augustine Volcano last produced a tsunami during an eruption in 1883, and field evidence of the extent and height of the 1883 tsunami can be used to test and constrain the results of the computer modeling. Tsunami deposits on Augustine Island indicate waves near the landslide source were more than 19 m high, while distal 1883 tsunami deposits record waves 6-8 m high. Paleotsunami deposits were found at sites along the coast near Mt. Iliamna, Nanwalek, and Homer, consistent with numerical modeling indicating that significant tsunami wave amplification occurs in these areas.

  9. Optimisation of the energy efficiency of bread-baking ovens using a combined experimental and computational approach

    International Nuclear Information System (INIS)

    Khatir, Zinedine; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili

    2013-01-01

Highlights: ► A scientific framework for optimising oven operating conditions is presented. ► Experiments measuring the local convective heat transfer coefficient are undertaken. ► An energy efficiency model is developed with experimentally calibrated CFD analysis. ► Designing ovens with optimum heat transfer coefficients reduces energy use. ► Results demonstrate a strong case to design and manufacture energy-optimised ovens. - Abstract: Changing legislation and rising energy costs are bringing the need for efficient baking processes into much sharper focus. High-speed air impingement bread-baking ovens are complex systems that use air flow to transfer heat to the product. In this paper, computational fluid dynamics (CFD) is combined with experimental analysis to develop a rigorous scientific framework for the rapid generation of forced convection oven designs. A design parameterisation of a three-dimensional generic oven model is carried out for a wide range of oven sizes and flow conditions to optimise desirable features such as temperature uniformity throughout the oven, energy efficiency and manufacturability. Coupled with the computational model, a series of experiments measuring the local convective heat transfer coefficient (h_c) is undertaken. The facility used for the heat transfer experiments is representative of a scaled-down production oven in which the air temperature and velocity, as well as important physical constraints such as nozzle dimensions and nozzle-to-surface distance, can be varied. An efficient energy model is developed using a CFD analysis calibrated with experimentally determined inputs. Results from a range of oven designs are presented together with the ensuing energy usage and savings.
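The local coefficient measured in those experiments follows Newton's law of cooling, h_c = q''/(T_surface − T_air); the numbers in the sketch below are illustrative, not values from the paper.

```python
def convective_coefficient(heat_flux_w_m2, surface_temp_c, air_temp_c):
    """Local convective heat transfer coefficient from Newton's law of
    cooling: h_c = q'' / (T_surface - T_air), in W/(m^2 K)."""
    return heat_flux_w_m2 / (surface_temp_c - air_temp_c)

# Illustrative numbers for an impingement-oven test section:
print(convective_coefficient(heat_flux_w_m2=3500.0,
                             surface_temp_c=60.0,
                             air_temp_c=20.0))   # 87.5 W/(m^2 K)
```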

  10. Experimental Blind Quantum Computing for a Classical Client

    Science.gov (United States)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C.; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-01

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  11. Experimental Blind Quantum Computing for a Classical Client.

    Science.gov (United States)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-04

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  12. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.

  13. Computational and experimental investigation of dynamic shock reflection phenomena

    CSIR Research Space (South Africa)

    Naidoo, K

    2007-07-01

…wedge are used to analyse dynamic flow-field phenomena and the response of the triple point below and within the dual-solution domain. Computed, unsteady pressure traces on the reflection plane are also analysed…

  14. Experimental and computational approaches to electrical conductor loading characteristics

    International Nuclear Information System (INIS)

    Vary, M.; Goga, V.; Paulech, J.

    2012-01-01

This article describes cooling analyses of a horizontally arranged bare electric conductor using analytical and numerical methods. The results of these analyses are compared with results obtained from experimental measurement. (Authors)

  15. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration; compared with MATLAB, it offers higher precision, requires no manual intervention, and can be widely used in various computer vision systems.
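The paper's own code is not available, but the standard OpenCV chessboard calibration pipeline it builds on looks like the following sketch (shown with the Python API rather than the paper's VS2008 environment; the image filenames and board size are assumptions).

```python
import glob
import cv2
import numpy as np

# 3-D object points of a 9x6 chessboard lying in the z = 0 plane.
pattern = (9, 6)                      # inner-corner count; adjust to your target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for fname in glob.glob("calib_*.png"):    # hypothetical calibration images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        objpoints.append(objp)
        imgpoints.append(corners)

# Intrinsics (camera matrix, distortion) plus per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", K)
```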

  16. Computational and experimental studies on oxalic acid imprinted ...

    Indian Academy of Sciences (India)

Abstract fragments only: …solvent or porogen to form a pre-polymerization complex, followed by … filtered off, and the filtrate was analysed for oxalic acid by UV/VIS … The experimental binding data were fitted to the …

  17. Computational and Experimental Studies on the Hetero-Diels-Alder ...

    African Journals Online (AJOL)


Abstract fragments only: Additionally, experimental studies carried out on the reactions of these enaminones 1 with sulphene were also found to be in … reactions of simple enaminoketones with some dienophiles have been … the only reported [4+2] cycloaddition reaction of … (2a–c, 3a–c) are true ground-state minima, verified by second…

  18. Experimental and Computational Techniques in Soft Condensed Matter Physics

    Science.gov (United States)

    Olafsen, Jeffrey

    2010-09-01

1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed systems Carl F. Schreck and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  19. Experimental magic state distillation for fault-tolerant quantum computing.

    Science.gov (United States)

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  20. Challenges to Software/Computing for Experimentation at the LHC

    Science.gov (United States)

    Banerjee, Sunanda

    The demands of future high energy physics experiments towards software and computing have led the experiments to plan the related activities as a full-fledged project and to investigate new methodologies and languages to meet the challenges. The paths taken by the four LHC experiments ALICE, ATLAS, CMS and LHCb are coherently put together in an LHC-wide framework based on Grid technology. The current status and understandings have been broadly outlined.

  1. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    Science.gov (United States)

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  2. Computational and experimental model of transdermal iontophorethic drug delivery system.

    Science.gov (United States)

    Filipovic, Nenad; Saveljic, Igor; Rac, Vladislav; Graells, Beatriz Olalde; Bijelic, Goran

    2017-11-30

The concept of iontophoresis is often applied to increase the transdermal transport of drugs and other bioactive agents into the skin or other tissues. It is a non-invasive drug delivery method that involves electromigration and electroosmosis in addition to diffusion, and it has been shown to be a viable alternative to conventional administration routes such as oral, hypodermic and intravenous injection. In this study we investigated, experimentally and numerically, the in vitro delivery of dexamethasone sodium phosphate to porcine skin. Different current densities, delivery durations and drug loads were investigated experimentally and introduced as boundary conditions for the numerical simulations. The Nernst-Planck equation was used to calculate the flux of the active substance through an equivalent model of homogeneous hydrogel and skin layers. The numerical results obtained were in good agreement with the experimental observations. A comprehensive in-silico platform, including appropriate numerical tools for fitting, could contribute to the design of iontophoretic drug-delivery devices and to correct dosage and drug-clearance profiles, as well as enable much faster in-silico experiments to better determine the parameters and performance criteria of iontophoretic drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
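A minimal 1-D sketch of the Nernst-Planck flux used in this kind of simulation; the layer thickness, diffusivity, charge number, and applied potential below are illustrative stand-ins, not the study's fitted values.

```python
import numpy as np

F = 96485.33   # Faraday constant, C/mol
R = 8.314462   # gas constant, J/(mol K)

def nernst_planck_flux(c, phi, dx, D, z, T=310.0, v=0.0):
    """1-D Nernst-Planck flux: diffusion + electromigration (+ convection).
    c   : molar concentration profile, mol/m^3
    phi : electric potential profile, V
    dx  : grid spacing, m
    D   : diffusivity, m^2/s;  z : ionic charge number
    Returns the flux at the grid nodes, mol/(m^2 s)."""
    dcdx = np.gradient(c, dx)
    dphidx = np.gradient(phi, dx)
    return -D * dcdx - (z * F / (R * T)) * D * c * dphidx + c * v

# Illustrative profiles across a 1 mm hydrogel/skin layer:
x = np.linspace(0.0, 1e-3, 101)
c = 10.0 * (1.0 - x / x[-1])     # linear concentration decay
phi = -0.5 * x / x[-1]           # 0.5 V dropped across the layer (assumed)
J = nernst_planck_flux(c, phi, dx=x[1] - x[0], D=5e-10, z=-2)
print(f"flux at mid-layer: {J[50]:.3e} mol/(m^2 s)")
```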

  3. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution, either individually or as mixtures: ¹⁰³Rh, ¹³³Cs, natural Nd, ¹⁴⁹Sm, ¹⁵²Sm, and ¹⁵⁵Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the potential of the experimental data to improve the ability to validate full burnup credit calculations. The paper describes the three phases of the experimental program: the results of the preliminary evaluation, the calculations, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route of the CRISTAL criticality package for burnup credit applications. (authors)

  4. Optimization of a data acquisition and control system for calibration of X rays detectors with usage of computational tools

    International Nuclear Information System (INIS)

    Rodrigues, Yklys Santos

    2013-01-01

The Brazilian standard ABNT ISO/IEC 17025/2005 specifies general requirements for the competence of testing and calibration facilities. One of these requirements states that such facilities must continually optimize their processes; uncertainties must therefore be estimated and lowered as much as possible. To achieve this goal, the Laboratorio de Calibracao de Instrumentos (LCI), the laboratory responsible for dosimeter calibration at the Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN), has acquired equipment for its X-ray calibration system, such as a rotating filter holder, and has been developing software to provide the best calibration results with the lowest external interference (from the operator, for instance), reducing the possibility of errors. A semi-automated LabVIEW-based calibration program has been developed at the LCI and has been subjected to constant testing and improvement. The software is divided into two basic modules: one is responsible for calibration of the monitor chamber against a reference standard dosimeter, while the other is used to perform instrument calibrations using the monitor chamber as reference. Several influence quantities have been analyzed and the proper corrections have been added to the software as multiplicative correction factors. The software is not only able to acquire data from all the equipment used in the laboratory, but also to calculate calibration factors and their uncertainties. A later analysis shows how this system has optimized the whole calibration process, lowering the occupational dose, reducing the mean calibration time and the uncertainties, and preventing errors caused by the system's users. (author)
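The abstract's multiplicative correction factors compose in the obvious way, with relative standard uncertainties combined in quadrature; a minimal sketch with invented factor values:

```python
import math

def apply_corrections(reading, factors, rel_uncertainties):
    """Apply multiplicative correction factors for influence quantities
    (e.g. air density, radiation quality, source position) and combine
    the relative standard uncertainties in quadrature."""
    value = reading
    for k in factors:
        value *= k
    rel_u = math.sqrt(sum(u ** 2 for u in rel_uncertainties))
    return value, value * rel_u

# Illustrative reading and factors for one X-ray calibration point:
value, u = apply_corrections(1.000,
                             factors=[1.012, 0.998, 1.003],
                             rel_uncertainties=[0.004, 0.002, 0.003, 0.005])
print(f"corrected value = {value:.4f} +/- {u:.4f}")
```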

  5. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    Science.gov (United States)

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
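UCODE_2005 itself wraps external process models, but the weighted least-squares objective it minimizes, S(b) = sum_i [w_i (y_i - y'_i(b))]^2, can be sketched with a toy stand-in model; the model form, observations, and weights below are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def run_model(params, x):
    """Stand-in for the process model (a simple saturating curve).
    In UCODE-style use this step would launch the real model, e.g.
    via the single batch file the report requires."""
    a, b = params
    return a * (1.0 - np.exp(-b * x))

obs_x = np.linspace(1.0, 10.0, 8)
obs_y = np.array([0.9, 1.6, 2.1, 2.45, 2.7, 2.85, 2.95, 3.0])
weights = 1.0 / np.full_like(obs_y, 0.05)   # w_i = 1/sigma_i per observation

def weighted_residuals(params):
    """Residuals scaled by weights; least_squares minimizes their sum of
    squares, i.e. the weighted least-squares objective S(b)."""
    return weights * (obs_y - run_model(params, obs_x))

fit = least_squares(weighted_residuals, x0=[1.0, 0.1])
print("estimated parameters:", fit.x)
print("objective S(b):", 2 * fit.cost)      # cost = 0.5 * sum(residuals^2)
```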

  6. Experimental and computing strategies in advanced material characterization problems

    Energy Technology Data Exchange (ETDEWEB)

    Bolzon, G. [Department of Civil and Environmental Engineering, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milano, Italy gabriella.bolzon@polimi.it (Italy)

    2015-10-28

The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data and, at the same time, to reduce the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  7. Experimental and computing strategies in advanced material characterization problems

    International Nuclear Information System (INIS)

    Bolzon, G.

    2015-01-01

The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data and, at the same time, to reduce the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  8. A Combined Experimental and Computational Approach to Subject-Specific Analysis of Knee Joint Laxity

    Science.gov (United States)

    Harris, Michael D.; Cyr, Adam J.; Ali, Azhar A.; Fitzpatrick, Clare K.; Rullkoetter, Paul J.; Maletsky, Lorin P.; Shelburne, Kevin B.

    2016-01-01

Modeling complex knee biomechanics is a continual challenge, which has resulted in many models of varying levels of quality, complexity, and validation. Beyond modeling healthy knees, accurately mimicking pathologic knee mechanics, such as after cruciate rupture or meniscectomy, is difficult. Experimental tests of knee laxity can provide important information about ligament engagement and overall contributions to knee stability for the development of subject-specific models that accurately simulate knee motion and loading. Our objective was to provide combined experimental tests and finite-element (FE) models of natural knee laxity that are subject-specific, have one-to-one experiment-to-model calibration, simulate ligament engagement in agreement with the literature, and are adaptable for a variety of biomechanical investigations (e.g., cartilage contact, ligament strain, in vivo kinematics). Calibration involved perturbing ligament stiffness, initial ligament strain, and attachment location until model-predicted kinematics and ligament engagement matched experimental reports. Errors between model-predicted and experimental kinematics averaged …, and ligament engagement agreed with literature descriptions. These results demonstrate the ability of our constraint models to be customized for multiple individuals and simultaneously call attention to the need to verify that ligament engagement is in good general agreement with the literature. To facilitate further investigations of subject-specific or population-based knee joint biomechanics, data collected during the experimental and modeling phases of this study are available for download by the research community. PMID:27306137

  9. Effects of pressure and fuel dilution on coflow laminar methane-air diffusion flames: A computational and experimental study

    Science.gov (United States)

    Cao, Su; Ma, Bin; Giassi, Davide; Bennett, Beth Anne V.; Long, Marshall B.; Smooke, Mitchell D.

    2018-03-01

In this study, the influence of pressure and fuel dilution on the structure and geometry of coflow laminar methane-air diffusion flames is examined. A series of methane-fuelled, nitrogen-diluted flames has been investigated both computationally and experimentally, with pressure ranging from 1.0 to 2.7 atm and CH4 mole fraction ranging from 0.50 to 0.65. Computationally, the MC-Smooth vorticity-velocity formulation was employed to describe the reactive gaseous mixture, and soot evolution was modelled by sectional aerosol equations. The governing equations and boundary conditions were discretised on a two-dimensional computational domain by finite differences, and the resulting set of fully coupled, strongly nonlinear equations was solved simultaneously at all points using a damped, modified Newton's method. Experimentally, chemiluminescence measurements of CH* were taken to determine its relative concentration profile and the structure of the flame front. A thin-filament ratio pyrometry method using a colour digital camera was employed to determine the temperature profiles of the non-sooty, atmospheric pressure flames, while soot volume fraction was quantified, after evaluation of soot temperature, through an absolute light calibration using a thermocouple. For a broad spectrum of flames at atmospheric and elevated pressures, the computed and measured flame quantities were examined to characterise the influence of pressure and fuel dilution, and the major conclusions were as follows: (1) maximum temperature increases with increasing pressure or CH4 concentration; (2) lift-off height decreases significantly with increasing pressure, modified flame length is roughly independent of pressure, and flame radius decreases with pressure approximately as P^(-1/2); and (3) pressure and fuel stream dilution significantly affect the spatial distribution and the peak value of the soot volume fraction.

  10. Heavy enzymes--experimental and computational insights in enzyme dynamics.

    Science.gov (United States)

    Swiderek, Katarzyna; Ruiz-Pernía, J Javier; Moliner, Vicent; Tuñón, Iñaki

    2014-08-01

    The role of protein motions in the chemical step of enzyme-catalyzed reactions is the subject of an open debate in the scientific literature. The systematic use of isotopically substituted enzymes has been revealed as a useful tool to quantify the role of these motions. According to the Born-Oppenheimer approximation, changing the mass of the protein does not change the forces acting on the system but alters the frequencies of the protein motions, which in turn can affect the rate constant. Experimental and theoretical studies carried out in this field are presented in this article and discussed in the framework of Transition State Theory. Copyright © 2014 Elsevier Ltd. All rights reserved.
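The Born-Oppenheimer argument in the abstract reduces to a simple scaling: with the force constants unchanged by isotopic substitution, harmonic frequencies scale as m^(-1/2). A sketch with an illustrative ~10% mass increase, roughly what full 13C/15N/2H labelling of a protein produces (the force constant and masses below are assumed, not taken from the paper):

```python
import math

def harmonic_frequency(force_constant, mass):
    """Angular frequency of a harmonic oscillator, omega = sqrt(k/m).
    Within the Born-Oppenheimer approximation the force constant k is
    unchanged by isotopic substitution, so omega scales as m**-0.5."""
    return math.sqrt(force_constant / mass)

k = 500.0      # illustrative force constant, N/m
m_light = 1.0  # relative mass of the natural-abundance unit
m_heavy = 1.1  # ~10 % heavier after isotopic labelling (illustrative)

ratio = harmonic_frequency(k, m_heavy) / harmonic_frequency(k, m_light)
print(f"frequency ratio heavy/light = {ratio:.4f}")   # sqrt(1/1.1) ~ 0.953
```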

  11. Combined experimental and computational study of the energetics of methylindoles

    International Nuclear Information System (INIS)

    Ribeiro da Silva, Manuel A.V.; Cabral, Joana I.T.A.; Gomes, Jose R.B.

    2009-01-01

In order to understand the influence of the methyl group on the stability of the indole unit, the standard (p° = 0.1 MPa) molar enthalpies of formation of 1-, 2-, and 3-methylindole, in the gaseous phase, were determined at T = 298.15 K. For that, combustion calorimetry was used to determine the massic energies of combustion and hence the standard molar enthalpies of formation in the condensed phase, and Calvet microcalorimetry was employed to measure the standard molar enthalpies of phase transition (vaporization for the liquid 1-methylindole and sublimation for the other two, solid, compounds). The G3(MP2)//B3LYP composite approach was used to calculate the gas-phase enthalpies of formation, at T = 298.15 K, of all possible singly methylated indoles using four different working reactions. The enthalpies calculated for 1-, 2-, and 3-methylindole are in excellent agreement with the values derived from the experimental work.

  12. Experimental and computational development of a natural breast phantom for dosimetry studies

    International Nuclear Information System (INIS)

    Nogueira, Luciana B.; Campos, Tarcisio P.R.

    2013-01-01

This paper describes the experimental and computational development of a natural breast phantom, anthropomorphic and anthropometric, for dosimetry studies in breast brachytherapy and teletherapy. The natural breast phantom developed corresponds to the fibroadipose breasts of women aged 30 to 50 years, presenting medium radiographic density. The experimental breast phantom consists of three tissue equivalents (TEs): glandular TE, adipose TE and skin TE. These TEs were developed according to the chemical composition of the human breast and present radiological response to exposure. Once constructed, the experimental breast phantom was mounted on a thorax phantom previously developed by the NRI/UFMG research group. The computational breast phantom was then constructed by acquiring axial computed tomography (CT) slices of the thorax phantom. From the CT images, a voxel model of the thorax phantom was developed with the SISCODES computational program, the computational breast phantom being represented by the same TEs as the experimental one. The CT images also allowed the radiological equivalence of the tissues to be evaluated. The breast phantom is being used in experimental dosimetry studies of both breast brachytherapy and teletherapy. Dosimetry studies with the MCNP-5 code using the computational breast phantom model are in progress. (author)

  13. Experimental evaluation of scattered thermal neutrons from various jig materials for use in fixing detectors for the calibration

    International Nuclear Information System (INIS)

    Shimizu, Shigeru; Yoshizawa, Michio

    2000-05-01

Some jigs are used to fix detectors when radiation measuring instruments are calibrated or reference fluence rates are measured in thermal neutron irradiation fields. In this case, thermal neutrons scattered from the jigs, particularly those containing hydrogenous materials, may affect the results of the calibration and measurements. In this study, scattered thermal neutrons were measured and calculated to clarify the characteristics of thermal neutrons scattered from various materials frequently used for the jigs. A spherical BF₃ counter 2 inches in diameter was used in the experiment. Ratios of the fluence of scattered neutrons to primaries (hereinafter, the scattering ratio) were evaluated as a function of the thickness and size of the materials, as well as of the distance from the surface of the materials. The scattering ratios of jigs actually used in calibration were also measured in order to select appropriate materials and thicknesses for the jigs. It was found that the scattering ratios saturated with increasing thickness and size of the materials. Higher values were observed for PMMA (polymethylmethacrylate) and paraffin, since these materials contain more hydrogen atoms than the others. The saturated value, 130%, was obtained for PMMA and paraffin at thicknesses of more than 5 cm and a size of 40 cm x 40 cm. The results for the actually used jigs show that thin plates of styrofoam and aluminum reduce the scattering ratio to less than 1%. The obtained data will be useful for improving the accuracy of the calibration of thermal neutron detectors and the measurement of reference fluence rates in thermal neutron irradiation fields. (author)

  14. Experimental evaluation of scattered thermal neutrons from various jig materials for use in fixing detectors for the calibration

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Shigeru; Yoshizawa, Michio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Nemoto, Hisashi; Kurosawa, Koji [Institute of Radiation Measurement, Tokai, Ibaraki (Japan)

    2000-05-01

Some jigs are used to fix detectors when radiation measuring instruments are calibrated or reference fluence rates are measured in thermal neutron irradiation fields. In this case, thermal neutrons scattered from the jigs, particularly those containing hydrogenous materials, may affect the results of the calibration and measurements. In this study, scattered thermal neutrons were measured and calculated to clarify the characteristics of thermal neutrons scattered from various materials frequently used for the jigs. A spherical BF₃ counter 2 inches in diameter was used in the experiment. Ratios of the fluence of scattered neutrons to primaries (hereinafter, the scattering ratio) were evaluated as a function of the thickness and size of the materials, as well as of the distance from the surface of the materials. The scattering ratios of jigs actually used in calibration were also measured in order to select appropriate materials and thicknesses for the jigs. It was found that the scattering ratios saturated with increasing thickness and size of the materials. Higher values were observed for PMMA (polymethylmethacrylate) and paraffin, since these materials contain more hydrogen atoms than the others. The saturated value, 130%, was obtained for PMMA and paraffin at thicknesses of more than 5 cm and a size of 40 cm x 40 cm. The results for the actually used jigs show that thin plates of styrofoam and aluminum reduce the scattering ratio to less than 1%. The obtained data will be useful for improving the accuracy of the calibration of thermal neutron detectors and the measurement of reference fluence rates in thermal neutron irradiation fields. (author)

  15. Experimental study of the conditions for universal calibration curve for the gamma-gamma probes in 2Π-geometry

    International Nuclear Information System (INIS)

    Gyurcsak, J.; Chau, N.D.

    1989-01-01

We present the results of measurements performed in order to establish the possibility of constructing universal calibration curves for gamma-gamma density probes. It is shown that the unit λ_p, in which the source-detector distance should be expressed, has the character of a mean free path of the photons forming the high-energy part of the spectrum. 8 refs., 12 figs., 7 tabs. (author)

  16. Experimental and computational studies of electromagnetic cloaking at microwaves

    Science.gov (United States)

    Wang, Xiaohui

An invisibility cloak is a device that can hide a target by enclosing it from the incident radiation. This intriguing device has attracted a lot of attention since it was first implemented at a microwave frequency in 2006. However, the problems of existing cloak designs prevent them from being widely applied in practice. In this dissertation, we try to remove or alleviate three constraints on practical applications imposed by lossy cloaking media, high implementation complexity, and the small size of hidden objects compared to the incident wavelength. To facilitate cloaking design and experimental characterization, several devices and relevant techniques for measuring the complex permittivity of dielectric materials at microwave frequencies are developed. In particular, a unique parallel plate waveguide chamber has been set up to automatically map the electromagnetic (EM) field distribution for wave propagation through the resonator arrays and cloaking structures. The total scattering cross section of the cloaking structures was derived from the measured scattering field by using this apparatus. To overcome the adverse effects of lossy cloaking media, microwave cloaks composed of identical dielectric resonators made of low-loss ceramic materials are designed and implemented. The effective permeability dispersion was provided by tailoring the dielectric resonator filling fractions. The cloak performance has been verified by full-wave simulation of true multi-resonator structures and experimental measurements of the fabricated prototypes. With the aim of reducing the implementation complexity caused by the use of metamaterials for cloaking, we proposed to design 2-D cylindrical cloaks and 3-D spherical cloaks using multi-layer coatings of ordinary dielectric material (ε_r > 1). A genetic algorithm was employed to optimize the dielectric profiles of the cloaking shells to provide the minimum scattering cross sections of the cloaked targets. The designed cloaks can

  17. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    Science.gov (United States)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz. A

  18. Effect of Computer-Based Video Games on Children: An Experimental Study

    Science.gov (United States)

    Chuang, Tsung-Yen; Chen, Wei-Fan

    2009-01-01

    This experimental study investigated whether computer-based video games facilitate children's cognitive learning. In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was…

  19. An Experimental Study into the use of computers for teaching of ...

    African Journals Online (AJOL)

    This study was an experimental study which sought to establish how English language teachers used computers for teaching composition writing at Prince Edward High School in Harare. The findings of the study show that computers were rarely used in the teaching of composition despite the observation that the school ...

  20. Experimental and computational study on the thermochemistry of ethylpiperidines

    International Nuclear Information System (INIS)

    Ribeiro da Silva, Manuel A.V.; Cabral, Joana I.T.A.; Gomes, Jose R.B.

    2006-01-01

    The standard (p° = 0.1 MPa) massic energies of combustion in oxygen of 1-ethylpiperidine and 2-ethylpiperidine, both in the liquid phase, were measured at T = 298.15 K by static bomb calorimetry. These values were used to derive the standard molar enthalpies of combustion and the standard molar enthalpies of formation, in the condensed phase, for these compounds. Further, the standard molar enthalpies of vaporization, at T = 298.15 K, of these two ethylpiperidine isomers were determined by Calvet microcalorimetry. The combustion calorimetry results, together with those from the Calvet microcalorimetry, were used to derive the standard molar enthalpies of formation, at T = 298.15 K, in the gaseous phase: for 1-ethylpiperidine, −Δ_cH_m°(l) = (4776.8 ± 1.6) kJ·mol⁻¹ and Δ_l^gH_m° = (39.44 ± 0.65) kJ·mol⁻¹; for 2-ethylpiperidine, −Δ_cH_m°(l) = (4740.3 ± 1.5) kJ·mol⁻¹ and Δ_l^gH_m° = (48.22 ± 0.89) kJ·mol⁻¹. In parallel, theoretical calculations have been carried out for all the ethylpiperidine isomers, enabling the estimation of gas-phase enthalpies of formation for these compounds. The agreement with the present experimental data is very good and thus supports the quality of the results calculated for 3-ethyl- and 4-ethylpiperidine.

  1. Helicopter noise in hover: Computational modelling and experimental validation

    Science.gov (United States)

    Kopiev, V. F.; Zaytsev, M. Yu.; Vorontsov, V. I.; Karabasov, S. A.; Anikin, V. A.

    2017-11-01

    The aeroacoustic characteristics of a helicopter rotor are calculated by a new method in order to assess its applicability to the evaluation of rotor performance in hover. Direct solution of the Euler equations in a noninertial coordinate system is used to calculate the near-field flow around the spinning rotor. The far-field noise is calculated by the Ffowcs Williams-Hawkings (FW-H) method using permeable control surfaces that enclose the blade. For a multiblade rotor, the signal obtained is duplicated and shifted in phase for each successive blade. By that means, the spectral characteristics of the far-field noise may be obtained. To determine the integral aerodynamic characteristics of the rotor, software is written to calculate the thrust and torque characteristics from the near-field flow solution. The results of numerical simulation are compared with experimental acoustic and aerodynamic data for a large-scale model of a helicopter main rotor in an open test facility. Two- and four-blade configurations of the rotor are considered, in different hover conditions. The proposed method satisfactorily predicts the aerodynamic characteristics of the blades in such conditions and gives good estimates for the first harmonics of the noise. That permits the practical use of the proposed method, not only for hovering but also for forward flight.
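
    The duplicate-and-phase-shift step lends itself to a compact illustration. The sketch below assumes identical blades in steady hover and a uniformly sampled single-blade signal spanning one revolution (the signal itself is invented); summing copies shifted by 1/N of the rotor period makes rotation harmonics that are not multiples of the blade number cancel.

```python
# Sketch of building a multiblade far-field signal from one blade's FW-H
# result: replicate the single-blade pressure history over one revolution
# and shift each copy by 1/N of the rotor period.
import numpy as np

def multiblade_signal(p_single, n_blades):
    n = len(p_single)                      # samples per revolution
    return sum(np.roll(p_single, k * n // n_blades) for k in range(n_blades))

t = np.linspace(0.0, 1.0, 720, endpoint=False)          # one revolution
p1 = np.sin(2 * np.pi * 4 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
p4 = multiblade_signal(p1, 4)                           # four-blade rotor

# Only harmonics that are multiples of the blade number survive:
amps = np.abs(np.fft.rfft(p4)) / len(p4)
print(np.flatnonzero(amps > 1e-6))        # -> [4]; the 7th harmonic cancels
```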

  2. Computational reverse shoulder prosthesis model: Experimental data and verification.

    Science.gov (United States)

    Martins, A; Quental, C; Folgado, J; Ambrósio, J; Monteiro, J; Sarmento, M

    2015-09-18

    The reverse shoulder prosthesis aims to restore the stability and function of pathological shoulders, but the biomechanical aspects of the geometrical changes induced by the implant are yet to be fully understood. Considering a large-scale musculoskeletal model of the upper limb, the aim of this study is to evaluate how the Delta reverse shoulder prosthesis influences the biomechanical behavior of the shoulder joint. In this study, the kinematic data of an unloaded abduction in the frontal plane and an unloaded forward flexion in the sagittal plane were experimentally acquired through video-imaging for a control group, composed of 10 healthy shoulders, and a reverse shoulder group, composed of 3 reverse shoulders. Synchronously, the EMG data of 7 superficial muscles were also collected. The muscle force sharing problem was solved through the minimization of the metabolic energy consumption. The evaluation of the shoulder kinematics shows an increase in the lateral rotation of the scapula in the reverse shoulder group, and an increase in the contribution of the scapulothoracic joint to the shoulder joint. Regarding the muscle force sharing problem, the musculoskeletal model estimates an increased activity of the deltoid, teres minor, clavicular fibers of the pectoralis major, and coracobrachialis muscles in the reverse shoulder group. The comparison between the muscle forces predicted and the EMG data acquired revealed a good correlation, which provides further confidence in the model. Overall, the shoulder joint reaction force was lower in the reverse shoulder group than in the control group.

  3. Experimental and computational evidence of halogen bonds involving astatine

    Science.gov (United States)

    Guo, Ning; Maurice, Rémi; Teze, David; Graton, Jérôme; Champion, Julie; Montavon, Gilles; Galland, Nicolas

    2018-03-01

    The importance of halogen bonds—highly directional interactions between an electron-deficient σ-hole moiety in a halogenated compound and an acceptor such as a Lewis base—is being increasingly recognized in a wide variety of fields from biomedicinal chemistry to materials science. The heaviest halogens are known to form stronger halogen bonds, implying that if this trend continues down the periodic table, astatine should exhibit the highest halogen-bond donating ability. This may be mitigated, however, by the relativistic effects undergone by heavy elements, as illustrated by the metallic character of astatine. Here, the occurrence of halogen-bonding interactions involving astatine is experimentally evidenced. The complexation constants of astatine monoiodide with a series of organic ligands in cyclohexane solution were derived from distribution coefficient measurements and supported by relativistic quantum mechanical calculations. Taken together, the results show that astatine indeed behaves as a halogen-bond donor—a stronger one than iodine—owing to its much more electrophilic σ-hole.

  4. Experimental and computational thermochemical study of 3-hydroxypropanenitrile

    International Nuclear Information System (INIS)

    Roux, Maria Victoria; Notario, Rafael; Velez, Ederley; Temprado, Manuel; Guerrero, Andres; Verevkin, Sergey P.; Quijano, Jairo; Gaviria, Jair

    2007-01-01

    This paper reports the standard (p° = 0.1 MPa) molar enthalpy of formation of 3-hydroxypropanenitrile in the gas phase at T = 298.15 K. It was derived from the enthalpy of combustion in oxygen, measured by static bomb combustion calorimetry, and the standard molar enthalpy of vaporization at T = 298.15 K, determined using the transference (transpiration) method in a saturated N2 stream; the resulting gas-phase enthalpy of formation is Δ_fH_m°(g) = −(91.2 ± 1.4) kJ·mol⁻¹. Furthermore, we have performed high-level ab initio molecular orbital calculations at the G3 level. Four different conformations corresponding to local minima of the potential energy surface have been found for 3-hydroxypropanenitrile. The optimised molecular and electronic structure of the most stable conformer shows a gauche arrangement with a weak intramolecular hydrogen bond from the OH hydrogen to the π-bond charge cloud of the nitrile group. The values for the enthalpy of formation of the most stable conformer, calculated using atomization and isodesmic bond separation reactions, −94.7 and −95.0 kJ·mol⁻¹ respectively, are in very good agreement with the experimental value

  5. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    Energy Technology Data Exchange (ETDEWEB)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D. [IRSN/DRPH/SDI/LEDI, BP 17, F-92 262 Fontenay-aux-Roses (France); Dudoignon, N. [IRSN/DRPH/SRBE/LRPAT, BP 17, F-92 262 Fontenay-aux-Roses (France); Rateau, S.; Van der Meeren, A.; Rouit, E. [CEA/DSV/DRR/SRCA/LRT BP no 12, F-91680 Bruyeres-le-Chatel (France); Bottlaender, M. [CEA/SHFJ, 4, place du General Leclerc F-91400 Orsay (France)

    2006-07-01

    Calibration of lung counting systems dedicated to the assessment of actinide retention in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely MCNP) was developed. It led to a dedicated tool, called O.E.D.I.P.E., that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurements (or dose calculations, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentation and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a real distribution of actinide particles in the lungs, obtained from animal experimentation, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the
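
    The practical consequence of a non-uniform source can be shown with a toy computation. All numbers below are invented for illustration; the actual study derives per-lobe responses from the Monte Carlo voxel-phantom simulations.

```python
# Illustrative sketch (hypothetical numbers) of why the source distribution
# matters for in vivo counting: the detector response to a heterogeneous
# contamination is the activity-weighted mean of per-lobe efficiencies,
# which generally differs from the homogeneous-source calibration value.
per_lobe_eff = {"upper_r": 4.2e-4, "lower_r": 2.9e-4,   # counts/s per Bq,
                "upper_l": 4.0e-4, "lower_l": 2.7e-4}   # assumed values
activity_frac = {"upper_r": 0.55, "lower_r": 0.15,      # lobe activity shares,
                 "upper_l": 0.20, "lower_l": 0.10}      # e.g. from spectrometry

eff_hetero = sum(per_lobe_eff[k] * activity_frac[k] for k in per_lobe_eff)
eff_homog = sum(per_lobe_eff.values()) / len(per_lobe_eff)  # uniform source
bias = eff_hetero / eff_homog - 1
print(f"calibration bias from assuming a uniform distribution: {bias:+.1%}")
```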

  6. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    International Nuclear Information System (INIS)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D.; Dudoignon, N.; Rateau, S.; Van der Meeren, A.; Rouit, E.; Bottlaender, M.

    2006-01-01

    Calibration of lung counting systems dedicated to the assessment of actinide retention in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely MCNP) was developed. It led to a dedicated tool, called O.E.D.I.P.E., that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurements (or dose calculations, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentation and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a real distribution of actinide particles in the lungs, obtained from animal experimentation, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  7. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report, experimental results measured in 1971 on the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station at Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures taking into consideration the influence of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.) [de]

  8. Computational and Experimental Insight Into Single-Molecule Piezoelectric Materials

    Science.gov (United States)

    Marvin, Christopher Wayne

    Piezoelectric materials allow for the harvesting of ambient waste energy from the environment. Producing lightweight, highly responsive materials is a challenge for this type of material, requiring polymer, foam, or bio-inspired materials. In this dissertation, I explore the origin of the piezoelectric effect in single molecules through density functional theory (DFT), analyze the piezoresponse of bio-inspired peptidic materials through the use of atomic force and piezoresponse force microscopy (AFM and PFM), and develop a novel class of materials combining flexible polyurethane foams and non-piezoelectric, polar dopants. For the DFT calculations, functional group, regiochemical, and heteroatom derivatives of [6]helicene were examined for their influence on the piezoelectric response. An aza[6]helicene derivative was found to have a piezoelectric response (108 pm/V) comparable to ceramics such as lead zirconate titanate (200+ pm/V). These computed materials could compete with current field-leading piezoelectric materials such as lead zirconate titanate (PZT), zinc oxide (ZnO), and polyvinylidene difluoride (PVDF) and its derivatives. The use of AFM/PFM allows for the demonstration of the piezoelectric effect of the self-assembled monolayer (SAM) peptidic systems. Through PFM, the influence that the helicity and sequence of the peptide have on the overall response of the molecule can be analyzed. Finally, the development of a novel class of piezoelectrics, the foam-based materials, expands the current understanding of the qualities required for a piezoelectric material from ceramic and rigid materials to more flexible, organic materials. Through the exploration of these novel types of piezoelectric materials, new design rules and figures of merit have been developed.

  9. Result interpretation of experimental calibration for milk citric acid determination via infra-red spectroscopy (MIR-FT)

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2009-01-01

    Citric acid (KC) in milk is an indicator of cow energy metabolism. Milk laboratories set up KC determination. One possible method is infra-red analysis (MIR-FT). The goal was to develop a relevant method of reference sample preparation for the calibration of MIR-FT instruments (indirect method; Lactoscope FTIR and MilkoScan FT 6000). A photometric method (c; 428 nm) was used as the reference. KC was added (n = 3) to some reference milk samples (n = 10, bulk milk). The mean value was 9.220 ± 3.094 mmol·l⁻¹, with a variation range from 6.206 to 15.975 mmol·l⁻¹. Recovery of c was from 100.8 to 120.2%. Correlations between c and MIR-FT were from 0.979 to 0.992 (P < 0.001). These were lower in the set of native milk samples (n = 7), from 0.751 (Lactoscope FTIR; P < 0.05) to 0.947 (MilkoScan FT 6000; P < 0.001), in comparison to the original values from 0.981 to 0.992 (n = 10; P < 0.001). Correlations between calibrated MIR-FT instruments were from 0.958 to 1.0 (P < 0.001). The average recovery for the instruments (n = 12) was 101.6 ± 18.1%. The mean differences between the c method and MIR-FT after calibration (n = 4) ranged from −0.001 across zero to 0.037%. The standard deviation of differences was from 0.0074 to 0.0187% for the MilkoScan FT 6000 and from 0.0105 to 0.0117% for the Lactoscope FTIR. The relative variability of differences (MIR-F (filter technology) and MIR-FT) for the major components fat (T), proteins (B) and lactose (L) in total, and for the minor components KC and free fatty acids (VMK), was estimated at 1.0, 7.2 and 34.4%, respectively. The KC result is thus inferior to those for T, B and L, and superior to that for VMK. Autocorrelation (0.042; P > 0.05) of results demonstrated the independence of consecutive measurements. The milk preservation effect amounted to 0.2323 mmol·l⁻¹ (P < 0.001) with bronopol and 0.0339 mmol·l⁻¹ (P > 0.05) with dichromate. It was practically negligible (3.0 and 0.44%) and redeemable via relevant calibration. The results of proficiency testing in the post-calibration period and evaluation of double

  10. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still hold all the typical limitations of optical and tactile...

  11. Experimental pain leads to reorganisation of trapezius electromyography during computer work with active and passive pauses

    DEFF Research Database (Denmark)

    Samani, Afshin; Holtermann, Andreas; Søgaard, Karen

    2009-01-01

    The aim of this laboratory study was to investigate acute effects of experimental muscle pain on the spatial electromyographic (EMG) activity of the trapezius muscle during computer work with active and passive pauses. Twelve healthy male subjects performed four sessions of computer work for 2 min in one day, with passive (relax) and active (30% maximum voluntary contraction of shoulder elevation) pauses given every 40 s, without and with the presence of experimental pain. Surface EMG signals were recorded from four parts of the trapezius. The centroid of exposure variation analysis along the time axis was lower during computer work with active pauses than with passive ones in all muscle parts (P

  12. Instrumentation, computer software and experimental techniques used in low-frequency internal friction studies at WNRE

    International Nuclear Information System (INIS)

    Sprugmann, K.W.; Ritchie, I.G.

    1980-04-01

    A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and the various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a constant-amplitude pendulum drive system is described. (auth)

  13. Emerging Trends in Heart Valve Engineering: Part IV. Computational Modeling and Experimental Studies.

    Science.gov (United States)

    Kheradvar, Arash; Groves, Elliott M; Falahatpisheh, Ahmad; Mofrad, Mohammad K; Hamed Alavi, S; Tranquillo, Robert; Dasi, Lakshmi P; Simmons, Craig A; Jane Grande-Allen, K; Goergen, Craig J; Baaijens, Frank; Little, Stephen H; Canic, Suncica; Griffith, Boyce

    2015-10-01

    In this final portion of an extensive review of heart valve engineering, we focus on the computational methods and experimental studies related to heart valves. The discussion begins with a thorough review of computational modeling and the governing equations of fluid and structural interaction. We then move on to multiscale and disease-specific modeling. Finally, advanced methods related to in vitro testing of heart valves are reviewed. This section of the review series is intended to illustrate the application of computational methods and experimental studies, and their interrelation, for studying heart valves.

  14. MCNPx computational estimation of the calibration factor of an in vivo counter for 18F-FDG activity incorporated in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Melo M, B.; Ferreira F, T. C.; Duarte V, K.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Ribeiro de C, T. P., E-mail: bmm@cdtn.br [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos-graduacao em Ciencias e Tecnicas, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    In previous work, the computational model of the Centro de Desenvolvimento da Tecnologia Nuclear in vivo counter for estimating the 18F-FDG activity incorporated in workers' brains was validated. Here, we studied the calibration factor (Cf) for seven distinct voxelized computational phantoms, including the reference models from ICRP 110. Computational simulations were also carried out to study the effect of increasing the distance between the detector and the brain up to 100 cm. The degree of correlation between geometric and anatomical parameters of the computational models and the calibration factors was also evaluated. The morphological diversity of the computational phantoms resulted in Cf variations greater than 38% (39.8 ± 0.2 to 64.6 ± 0.5 Bq·cps⁻¹). Despite these variations, Cf decreased with increasing distance, although the marked loss of counting efficiency makes this geometry prohibitive. These findings suggest that head anatomic parameters can be used to improve the Cf estimation. (Author)
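
    To make the quoted spread concrete, here is a short worked example of applying such a calibration factor; the count rate is hypothetical, and only the two Cf extremes come from the abstract.

```python
# Worked example of applying the calibration factor quoted above
# (Cf in Bq per count-per-second): the incorporated activity is the
# net count rate times Cf. Numbers besides the two Cf bounds are invented.
net_count_rate = 12.0                 # cps, hypothetical measurement
for cf in (39.8, 64.6):               # Bq/cps, extremes over the phantoms
    print(f"Cf = {cf} Bq/cps -> activity = {cf * net_count_rate:.0f} Bq")
# The >38 % spread in Cf propagates directly into the activity estimate.
```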

  15. MCNPx computational estimation of the calibration factor of an in vivo counter for 18F-FDG activity incorporated in the brain

    International Nuclear Information System (INIS)

    Melo M, B.; Ferreira F, T. C.; Duarte V, K.; Da Silva, T. A.; Ribeiro de C, T. P.

    2016-10-01

    In previous work, the computational model of the Centro de Desenvolvimento da Tecnologia Nuclear in vivo counter for estimating the 18F-FDG activity incorporated in workers' brains was validated. Here, we studied the calibration factor (Cf) for seven distinct voxelized computational phantoms, including the reference models from ICRP 110. Computational simulations were also carried out to study the effect of increasing the distance between the detector and the brain up to 100 cm. The degree of correlation between geometric and anatomical parameters of the computational models and the calibration factors was also evaluated. The morphological diversity of the computational phantoms resulted in Cf variations greater than 38% (39.8 ± 0.2 to 64.6 ± 0.5 Bq·cps⁻¹). Despite these variations, Cf decreased with increasing distance, although the marked loss of counting efficiency makes this geometry prohibitive. These findings suggest that head anatomic parameters can be used to improve the Cf estimation. (Author)

  16. Photorefractive and computational holography in the experimental generation of Airy beams

    Science.gov (United States)

    Suarez, Rafael A. B.; Vieira, Tarcio A.; Yepes, Indira S. V.; Gesualdi, Marcos R. R.

    2016-05-01

    In this paper, we present the experimental generation of Airy beams via computational and photorefractive holography. Experimental generation of Airy beams using conventional optical components presents several difficulties and is practically infeasible. Thus, the optical generation of Airy beams has been achieved by the optical reconstruction of a computer-generated hologram implemented on a spatial light modulator. In the photorefractive holography technique, used here for the first time to our knowledge, the hologram of an Airy beam is constructed (recorded) and reconstructed (read) optically in a nonlinear photorefractive medium. The experimental realization of Airy beams employed a setup of computational and photorefractive holography using a photorefractive Bi12TiO20 crystal as the holographic recording medium. Airy beams and Airy beam arrays were obtained experimentally in accordance with theoretical predictions, with excellent prospects for applications in optical trapping and optical communications systems.
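
    The computer-generated-hologram principle at work here can be sketched numerically: a finite-energy Airy beam is the Fourier transform of an aperture carrying a cubic spectral phase, which is the pattern the modulator encodes. Grid size, aperture and phase strength below are illustrative assumptions, not the authors' parameters.

```python
# Sketch: an Airy beam as the Fourier transform of a Gaussian aperture
# with cubic spectral phase exp(i*a*(kx^3 + ky^3)).
import numpy as np

n = 512
k = np.fft.fftfreq(n)                       # normalized spatial frequencies
kx, ky = np.meshgrid(k, k)
a = 2.0e5                                   # cubic-phase strength (assumed)
pupil = np.exp(-(kx**2 + ky**2) / 0.05**2)  # finite aperture -> finite energy
field = pupil * np.exp(1j * a * (kx**3 + ky**3))
airy = np.fft.fftshift(np.fft.fft2(field))  # focal-plane field ~ 2D Airy beam
intensity = np.abs(airy) ** 2               # shows the characteristic lobes
```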

  17. Experimental and computational evaluation of area selectively immobilized horseradish peroxidase in a microfluidic device

    DEFF Research Database (Denmark)

    Hoffmann, Christian; Pereira Rosinha Grundtvig, Ines; Thrane, Joachim

    2017-01-01

    experimentally and by computational fluid dynamics (CFD) simulations. Ultimately, such a correlation would lead to faster development through computational pre-screening and optimized experimental design. In this proof-of-concept study, microreactors were prepared in a 2-step curing process of an off-stoichiometric thiol-ene-epoxy (OSTE+) mixture employing both a thiol-ene (TEC) and a thiol-epoxy curing reaction. Subsequent surface functionalization of the remaining thiol groups on the reactor surface through stenciled photoinitiated TEC enabled the preparation of specific surface patterns in the reactor. Patterns ... as obtained from experimental determination. This good agreement between the obtained experimental and computational results confirmed the high potential of CFD models for predicting and optimizing the biocatalytic performance of such a reactor.

  18. RF impedance measurement calibration

    International Nuclear Information System (INIS)

    Matthews, P.J.; Song, J.J.

    1993-01-01

    The intent of this note is not to explain all of the available calibration methods in detail. Instead, we will focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; and (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references

  19. The Effects of Computer-Assisted Instruction of Simple Circuits on Experimental Process Skills

    Directory of Open Access Journals (Sweden)

    Şeyma ULUKÖK

    2013-01-01

    The experimental and control groups of this study, which investigated the effects of computer-assisted instruction on simple circuits on the development of experimental process skills, were composed of 30 sophomores majoring in Classroom Teaching. The instruction includes experiments and studies about simple circuits and their elements (serial, parallel, and mixed connections of resistors) covered in the Science and Technology Laboratory II course curriculum. In this study, where quantitative and qualitative methods were used together, the control list developed by the researchers was used to collect data. Results showed that the experimental process skills of sophomores in the experimental group were more developed than those of the control group. Thus, it can be said that computer-assisted instruction has a positive impact on the development of students' experimental process skills.

  20. Cumulative error models for the tank calibration problem

    International Nuclear Information System (INIS)

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both the liquid level and the liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations on suitability are made from simulations and from analyses of six sets of experimental data
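
    As a concrete instance of the errors-in-variables setting, a minimal sketch of Deming regression for a linear level-volume calibration is given below, assuming the ratio of the two error variances is known. This illustrates the class of models discussed, not the paper's specific prototype model, and all data are invented.

```python
# Minimal errors-in-variables (Deming) fit for a linear calibration,
# assuming delta = var(volume errors) / var(level errors) is known.
import numpy as np

def deming(x, y, delta=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()    # slope, intercept

level = [10.1, 20.3, 29.8, 40.2, 50.0]           # measured levels (noisy)
volume = [101.5, 198.9, 301.2, 399.0, 502.3]     # measured volumes (noisy)
b, a = deming(level, volume)
print(f"volume ≈ {a:.1f} + {b:.2f} * level")
```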

  1. Qualification of an out-of-pile Thermohydraulic test Bench (BETHY) developed to calibrate calorimetric cells under specific JHR experimental conditions

    International Nuclear Information System (INIS)

    De Vita, C.; Brun, J.; Carette, M.; Reynard-Carette, C.; Lyoussi, A.; Fourmentel, D.; Villard, J.F.; Guimbal, P.; Malo, J.Y.

    2013-06-01

    Online in-pile measurement methods are crucial during irradiations in materials testing reactors to better understand the behavior of materials under accelerated ageing conditions and of nuclear fuels under high irradiation levels. Thus, the construction of a new materials testing reactor such as the Jules Horowitz Reactor (JHR) leads to new research and development programs devoted to innovative instrumentation and measurement methods. The work presented here was performed in the framework of the IN-CORE program, 'Instrumentation for Nuclear radiations and Calorimetry Online in Reactor', between CEA and Aix-Marseille University. The aim of the program is to develop experimental devices and test benches to quantify more precisely the nuclear heating parameter in the JHR experimental channels. This in-pile parameter is usually measured by means of a calorimeter or a gamma thermometer. This paper focuses on a new out-of-pile test bench called BETHY. This bench was developed to study the response of a differential calorimeter during its preliminary calibration step according to the specific thermal and hydraulic conditions occurring inside one type of JHR core channel. The first section of this paper is dedicated to a detailed description of the bench. The second part presents the study of the thermal characteristics established in the bench for the two main thermal running modes. The last part concerns the calibration curve of the reference cell of the differential calorimeter in the case of homogeneous temperature. (authors)

  2. Submicron beam X-ray diffraction of nanoheteroepitaxily grown GaN: Experimental challenges and calibration procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bonanno, P.L., E-mail: PeterBonanno@gatech.ed [Georgia Institute of Technology/GTL, UMI 2958 Georgia Tech-CNRS, 57070 Metz (France); Gautier, S. [Laboratoire Materiaux Optiques, Photonique et Micro-Nano Systemes, UMR CNRS 7132, Universite de Metz et SUPELEC, 2 rue E. Belin, 57070 Metz (France); Sirenko, A.A. [Department of Physics, New Jersey Institute of Technology, Newark, NJ 07102 (United States); Kazimirov, A. [Cornell High Energy Synchrotron Source (CHESS), Cornell University, Ithaca, New York 14853 (United States); Cai, Z.-H. [Advanced Photon Source, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Goh, W.H. [Georgia Institute of Technology/GTL, UMI 2958 Georgia Tech-CNRS, 57070 Metz (France); Martin, J. [Laboratoire Materiaux Optiques, Photonique et Micro-Nano Systemes, UMR CNRS 7132, Universite de Metz et SUPELEC, 2 rue E. Belin, 57070 Metz (France); Martinez, A. [Laboratoire de Photonique et de Nanostructures, UPR CNRS 20, Route de Nozay, 91460 Marcoussis (France); Moudakir, T. [Laboratoire Materiaux Optiques, Photonique et Micro-Nano Systemes, UMR CNRS 7132, Universite de Metz et SUPELEC, 2 rue E. Belin, 57070 Metz (France); Maloufi, N. [Laboratoire d' Etude des Textures et Application aux Materiaux, UMR CNRS 7078, Ile du Saulcy, 57045 Metz Cedex 1 (France); Assouar, M.B. [Laboratoire de Physique des Milieux Ionises et Applications, Nancy University, CNRS, BP 239, F-54506 Vandoeuvre-les-Nancy Cedex (France); Ramdane, A.; Gratiet, L. Le [Laboratoire de Photonique et de Nanostructures, UPR CNRS 20, Route de Nozay, 91460 Marcoussis (France); Ougazzaden, A. [Georgia Institute of Technology/GTL, UMI 2958 Georgia Tech-CNRS, 57070 Metz (France)

    2010-02-15

    Highly relaxed GaN nanodots and submicron ridges have been selectively grown in the NSAG regime using MOVPE on lattice-mismatched 6H-SiC and AlN substrates. 2D real-space and 3D reciprocal-space mapping were performed with a CCD detector using 10.4 keV synchrotron X-ray radiation at the 2-ID-D micro-diffraction beamline at the Advanced Photon Source (APS). Calibration procedures have been developed to overcome the unique challenges of analyzing NSAG structures grown on highly mismatched substrates. We studied crystallographic planar bending on the submicron scale and found its correlation with strain relaxation in the NSAG ridges.

  3. Instrumentation report 1: specification, design, calibration, and installation of instrumentation for an experimental, high-level, nuclear waste storage facility

    International Nuclear Information System (INIS)

    Brough, W.G.; Patrick, W.C.

    1982-01-01

    The Spent Fuel Test-Climax (SFT-C) is being conducted 420 m underground at the Nevada Test Site under the auspices of the US Department of Energy. The test facility houses 11 spent fuel assemblies from an operating commercial nuclear reactor and numerous other thermal sources used to simulate the near-field effects of a large repository. We developed a large-scale instrumentation plan to ensure that a sufficient quality and quantity of data were acquired during the three- to five-year test. These data help satisfy scientific, operational, and radiation safety objectives. Over 800 data channels are being scanned to measure temperature, electrical power, radiation, air flow, dew point, stress, displacement, and equipment operation status (on/off). This document details the criteria, design, specifications, installation, calibration, and current performance of the entire instrumentation package

  4. Calibration of Flick standards

    International Nuclear Information System (INIS)

    Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto

    2012-01-01

    Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison proved to be partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper, the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element, and questions related to the definition of the measurand. The different contributions are estimated using simulations and are verified experimentally. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown, and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)

  5. Experimental and Computational Investigation of Triple-rotating Blades in a Mower Deck

    Science.gov (United States)

    Chon, Woochong; Amano, Ryoichi S.

    Experimental and computational studies were performed on a 1.27 m wide three-spindle lawn mower deck with a side discharge arrangement. Laser Doppler velocimetry was used to measure the air velocity at 12 different sections under the mower deck. High-speed video tests provided valuable visual evidence of airflow and grass discharge patterns. Strain gages were attached at several predetermined locations on the mower blades to measure strain. The computational fluid dynamics work comprised two stages of computer-based analytical studies. First, two-dimensional blade shapes at several radial sections were selected for flow computations around the blade model. Then, a three-dimensional full-deck model was developed and compared with the experimental results.

  6. Calculation of calibration factors and layout criteria for gamma scanning of waste drums from nuclear plants

    International Nuclear Information System (INIS)

    Inder Schmitten, W.; Sohnius, B.; Wehner, E.

    1990-01-01

    This paper presents a procedure for calculating calibration factors to convert the measured gamma count rate of waste drums into activity content, together with a layout and free-release measurement criterion for waste drums. A computer program was developed that simulates the drum scanning technique; it calculates calibration factors and eliminates laborious experimental measurements. The calculated calibration factors exhibit good agreement with experimentally determined values. By checking the calculated calibration factors for trial equipment layouts (including the waste drum and the scanning facility) against the layout and free-release measurement criterion, a layout can be achieved that clearly determines whether a waste drum can be freely released
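
    The simulation idea can be caricatured in a few lines: for a homogeneous drum, the calibration factor is the reciprocal of the mean per-photon detection probability, averaged over emission points, with matrix self-attenuation along the exit path. The 2-D geometry, attenuation coefficient and detector model below are deliberate simplifications and assumptions, not the program described in the paper.

```python
# Toy Monte Carlo estimate of a drum calibration factor: average the
# detection probability of photons emitted uniformly in a homogeneous
# drum cross section, with self-attenuation and 1/r^2 geometry.
import numpy as np

rng = np.random.default_rng(1)
R, MU = 0.28, 8.0                         # drum radius (m), atten. coeff (1/m)
DET = np.array([1.0, 0.0])                # detector position (m), assumed

r = R * np.sqrt(rng.random(100_000))      # uniform points in the disc
phi = 2 * np.pi * rng.random(100_000)
pts = np.column_stack((r * np.cos(phi), r * np.sin(phi)))

d = np.linalg.norm(DET - pts, axis=1)     # source-detector distance
u = (DET - pts) / d[:, None]              # unit direction toward detector
# chord length travelled inside the drum toward the detector (2D approx.):
cross = pts[:, 0] * u[:, 1] - pts[:, 1] * u[:, 0]
t = -(pts * u).sum(1) + np.sqrt(R**2 - cross**2)
eff = np.mean(np.exp(-MU * t) / (4 * np.pi * d**2))  # per-photon probability
cf = 1.0 / eff          # emitted photons per detected count (modulo detector
print(f"Cf ≈ {cf:.3e}")  # area and intrinsic efficiency, omitted here)
```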

  7. Computer techniques for experimental work in GDR nuclear power plants with WWER

    International Nuclear Information System (INIS)

    Stemmler, G.

    1985-01-01

    Nuclear power plant units with WWERs are increasingly being equipped with high-performance, programmable process control computers. There are, however, good reasons for further advancing the development of computer-aided measuring systems, in particular for experimental work. A special structure for such systems, based on a division into relatively rigid data registration and primary handling on the one hand and further processing in an advanced programming language on the other, has proved useful in the GDR. (author)

  8. Experimental and Computational Study of Ductile Fracture in Small Punch Tests

    Directory of Open Access Journals (Sweden)

    Betül Gülçimen Çakan

    2017-10-01

    A unified experimental-computational study on ductile fracture initiation and propagation during small punch testing is presented. Tests are carried out at room temperature with unnotched disks of different thicknesses where large-scale yielding prevails. In thinner specimens, the fracture occurs with severe necking under membrane tension, whereas for thicker ones a through thickness shearing mode prevails changing the crack orientation relative to the loading direction. Computational studies involve finite element simulations using a shear modified Gurson-Tvergaard-Needleman porous plasticity model with an integral-type nonlocal formulation. The predicted punch load-displacement curves and deformed profiles are in good agreement with the experimental results.
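
    For reference, the yield surface of the (unmodified) Gurson-Tvergaard-Needleman model underlying these simulations takes the standard form below; the shear modification used in the paper adds a damage-growth contribution under low-triaxiality shear that is not written out here.

```latex
% Standard GTN yield function; notation as commonly used:
%   sigma_eq: von Mises equivalent stress, sigma_m: mean (hydrostatic) stress,
%   sigma_y: matrix yield stress, f*: effective void volume fraction,
%   q1, q2, q3: Tvergaard fitting parameters.
\Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
     + 2 q_1 f^{*} \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right)
     - \left(1 + q_3 f^{*2}\right) = 0
```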

  9. Experimental and Computational Study of Ductile Fracture in Small Punch Tests.

    Science.gov (United States)

    Gülçimen Çakan, Betül; Soyarslan, Celal; Bargmann, Swantje; Hähner, Peter

    2017-10-17

    A unified experimental-computational study on ductile fracture initiation and propagation during small punch testing is presented. Tests are carried out at room temperature with unnotched disks of different thicknesses where large-scale yielding prevails. In thinner specimens, the fracture occurs with severe necking under membrane tension, whereas for thicker ones a through thickness shearing mode prevails changing the crack orientation relative to the loading direction. Computational studies involve finite element simulations using a shear modified Gurson-Tvergaard-Needleman porous plasticity model with an integral-type nonlocal formulation. The predicted punch load-displacement curves and deformed profiles are in good agreement with the experimental results.

  10. Combined computational and experimental study of Ar beam induced defect formation in graphite

    International Nuclear Information System (INIS)

    Pregler, Sharon K.; Hayakawa, Tetsuichiro; Yasumatsu, Hisato; Kondow, Tamotsu; Sinnott, Susan B.

    2007-01-01

    Irradiation of graphite, commonly used in nuclear power plants, is known to produce structural damage. Here, experimental and computational methods are used to study defect formation in graphite during Ar irradiation at an incident energy of 50 eV. The experimental samples are analyzed with scanning tunneling microscopy to quantify the size distribution of the defects that form. The computational approach is classical molecular dynamics simulation, which illustrates the mechanisms by which the defects are produced. The results indicate that defects in graphite grow in concentrated areas and are nucleated by the presence of existing defects

  11. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

    The results obtained by simulating three cases from the CANON depressurization experiment, using version 7.6 of the TRAC-PF1 computer code implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first thermal-hydraulic standard problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss-of-primary-coolant accident in pressurized water reactors is evaluated. (M.C.K.) [pt]

  12. A combined experimental and computational investigation of excess molar enthalpies of (nitrobenzene + alkanol) mixtures

    International Nuclear Information System (INIS)

    Neyband, Razieh Sadat; Zarei, Hosseinali

    2015-01-01

    Highlights: • Excess molar enthalpies for binary mixtures of nitrobenzene with alkanols were measured. • The infinite-dilution excess partial molar enthalpies were calculated using ab initio methods. • PCM calculations were performed. • The computed excess partial molar enthalpies at infinite dilution were compared with experimental results. - Abstract: Excess molar enthalpies (H_m^E) for the binary mixtures of nitrobenzene with ethanol, 1-propanol, 2-propanol, 1-butanol and 2-butanol have been measured over the entire composition range at ambient pressure (81.5 kPa) and temperature 298 K using a Parr 1455 solution calorimeter. From the experimental results, the excess partial molar enthalpies (H_i^E) and the excess partial molar enthalpies at infinite dilution (H_i^{E,∞}) were calculated. The excess molar enthalpies (H_m^E) are positive for all {nitrobenzene (1) + alkanol (2)} mixtures over the entire composition range. A state-of-the-art computational strategy for the evaluation of excess partial molar enthalpies at infinite dilution was followed at the M05-2X/6-311++G** level of theory with the PCM model. The experimental excess partial molar enthalpies at infinite dilution were compared with the liquid-phase ab initio results. The integrated experimental and computational results help to clarify the nature of the intermolecular interactions in {nitrobenzene (1) + alkanol (2)} mixtures. The experimental and computational work done in this study complements and extends general research on the computation of the excess partial molar enthalpy at infinite dilution of binary mixtures
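
    A conventional route from measured H_m^E values to the infinite-dilution quantities is a Redlich-Kister fit, from which H_1^{E,∞} = Σ_k A_k(−1)^k follows in the x1 → 0 limit. The sketch below uses synthetic data and is an assumption about the general procedure, not a reproduction of the paper's analysis.

```python
# Redlich-Kister fit H_E = x1*x2 * sum_k A_k (x1 - x2)^k and the
# infinite-dilution partial molar excess enthalpy of component 1.
import numpy as np

x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])            # mole fraction, comp. 1
HE = np.array([310.0, 720.0, 880.0, 760.0, 330.0])  # J/mol, synthetic data
x2 = 1.0 - x1

# Linear least squares for A_0..A_2 in H_E/(x1*x2) = sum A_k (x1-x2)^k
M = np.vander(x1 - x2, 3, increasing=True)          # columns: 1, z, z^2
A = np.linalg.lstsq(M, HE / (x1 * x2), rcond=None)[0]
H1_inf = sum(a * (-1.0) ** k for k, a in enumerate(A))  # x1 -> 0 limit
print(f"A = {A}, H1^(E,inf) ≈ {H1_inf:.0f} J/mol")
```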

  13. Geophysical data collection using an interactive personal computer system. Part 1. ; Experimental monitoring of Suwanosejima volcano

    Energy Technology Data Exchange (ETDEWEB)

    Iguchi, M. (Kyoto Univerdity, Kyoto (Japan). Disaster Prevention Reserach Institute)

    1991-10-15

    In this article, a computer-communication system was developed to collect geophysical data from remote volcanoes via a public telephone network. The system is composed of a host personal computer at an observatory and several personal computers as terminals at remote stations. Each terminal acquires geophysical data, such as seismic, infrasonic, and ground deformation data. These data are stored in the terminals temporarily and transmitted to the host computer upon command from the host computer. Experimental monitoring was conducted between the Sakurajima Volcanological Observatory and several stations in the Satsunan Islands and southern Kyushu. The seismic and eruptive activities of Suwanosejima volcano were monitored by this system. Consequently, earthquakes and air shocks accompanying the explosive activity were observed. B-type earthquakes occurred prior to the relatively prolonged eruptive activity. Intermittent occurrences of volcanic tremors were also clearly recognized from the change in the mean amplitudes of the seismic waves. 7 refs., 10 figs., 2 tabs.

  14. EXPERIMENTAL VERIFICATION OF COMPUTER MODEL OF COOLING SYSTEM FOR POWERFUL SEMI- CONDUCTOR DEVICE

    Directory of Open Access Journals (Sweden)

    I. A. Khorunzhii

    2007-01-01

    A cooling system for a powerful semiconductor device (power ≈ 1 kW), consisting of a pin-type radiator and a body, is considered in the paper. Cooling is carried out by forced convection of a coolant. Calculated values of the temperature on the radiator surface and experimentally measured values at the same surface points have been compared. It is shown that the difference between calculated and experimentally measured temperatures does not exceed 0.1-0.2 °C, which is comparable with the experimental error. These results confirm the correctness of the computer model.

  15. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Organization for Standardization (ISO) and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  16. Calibration of Recoil-In-Vacuum attenuations from first principles: comparison with recent experimental data on Fe isotopes

    International Nuclear Information System (INIS)

    Stone, Nicholas James; Stone, Jirina Rikovska; Stuchbery, Andrew E.; Jonsson, Per

    2015-01-01

    Precession of aligned nuclear spin systems in ions recoiling from the target into vacuum (RIV), with consequent attenuation of the angular distributions of emitted radiation, is, in principle, a versatile method for the measurement of g-factors of nuclear excited states with lifetimes in the picosecond range (Stone et al., Phys. Rev. Lett. 94, 192501, 2005; Stuchbery and Stone, Phys. Rev. C 76, 034307, 2007). Calibration of the observed attenuations has been achieved in favourable cases through comparison with measurements on states having previously known g-factors and lifetimes. The general lack of suitable states with known g-factors has limited application of the RIV method. This paper concerns the present status of efforts to describe the states of excited ions recoiling into vacuum in sufficient detail that the average interaction can be estimated with useful precision from a priori theory. The calculations use the GRASP2K package (Froese-Fischer et al. 1997; Jonsson, Comp. Phys. Comm. 177, 597, 2007 and 184, 2197, 2013) to obtain, for each recoiling-ion charge state, the individual possible electronic states, their configurations, lifetimes and hyperfine interactions. It is assumed that all possible ionic states are produced, up to a chosen excitation energy. This energy is selected to approximate the energy at which all states have lifetimes far shorter than that of the nuclear state of interest. It is further assumed that the ionic-state total electron angular momenta are randomly oriented in space. The first estimates of the average attenuation of emission distributions, as a function of the product gτ of the nuclear-state g-factor and mean lifetime, used an averaged precession frequency obtained neglecting transitions between electronic states. Improved calculations, which include such transitions, are described

  17. Computational Experimentation to Understand C2 for Teams of Autonomous Systems and People

    Science.gov (United States)

    2014-12-12


  18. Experimental and Computer Modelling Studies of Metastability of Amorphous Silicon Based Solar Cells

    NARCIS (Netherlands)

    Munyeme, Geoffrey

    2003-01-01

    We present a combination of experimental and computer modelling studies of the light-induced degradation in the performance of amorphous silicon based single-junction solar cells. Of particular interest in this study is the degradation kinetics of different types of amorphous silicon single junction

  19. Video Analysis of Projectile Motion Using Tablet Computers as Experimental Tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and "g" in order to explore the underlying laws of motion. This experiment…
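
    The underlying analysis reduces to a quadratic fit of the vertical position samples; a minimal sketch with invented tracking data follows.

```python
# Sketch of the analysis such apps perform: fit the vertical positions with
# a parabola y = y0 + v0*t - g*t^2/2, so g follows from the quadratic term.
import numpy as np

t = np.linspace(0.0, 0.8, 25)                      # s, frame timestamps
y = 0.2 + 3.5 * t - 0.5 * 9.81 * t**2              # m, ideal trajectory
y += np.random.default_rng(2).normal(0, 0.005, t.size)  # tracking noise

c2, c1, c0 = np.polyfit(t, y, 2)                   # y = c2 t^2 + c1 t + c0
print(f"g ≈ {-2 * c2:.2f} m/s², v0 ≈ {c1:.2f} m/s, y0 ≈ {c0:.2f} m")
```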

  20. Characteristic thermal-hydraulic problems in NHRs: Overview of experimental investigations and computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Falikov, A A; Vakhrushev, V V; Kuul, V S; Samoilov, O B; Tarasov, G I [OKBM, Nizhny Novgorod (Russian Federation)

    1997-09-01

    The paper briefly reviews the specific thermal-hydraulic problems for AST-type NHRs, the experimental investigations that have been carried out in the RF, and the design procedures and computer codes used for AST-500 thermohydraulic characteristics and safety validation. (author). 13 refs, 10 figs, 1 tab.

  1. Benchmarking Experimental and Computational Thermochemical Data: A Case Study of the Butane Conformers.

    Science.gov (United States)

    Barna, Dóra; Nagy, Balázs; Csontos, József; Császár, Attila G; Tasi, Gyula

    2012-02-14

    Due to its crucial importance, numerous studies have been conducted to determine the enthalpy difference between the conformers of butane. However, it is shown here that the most reliable experimental values are biased due to the statistical model utilized during the evaluation of the raw experimental data. In this study, using the appropriate statistical model, both the experimental expectation values and the associated uncertainties are revised. For the 133-196 and 223-297 K temperature ranges, 668 ± 20 and 653 ± 125 cal mol⁻¹, respectively, are recommended as reference values. Furthermore, to show that present-day quantum chemistry is a favorable alternative to experimental techniques in the determination of enthalpy differences of conformers, a focal-point analysis, based on coupled-cluster electronic structure computations, has been performed that included contributions of up to perturbative quadruple excitations as well as small correction terms beyond the Born-Oppenheimer and nonrelativistic approximations. For the 133-196 and 223-297 K temperature ranges, in exceptional agreement with the corresponding revised experimental data, our computations yielded 668 ± 3 and 650 ± 6 cal mol⁻¹, respectively. The most reliable enthalpy difference values for 0 and 298.15 K are also provided by the computational approach, 680.9 ± 2.5 and 647.4 ± 7.0 cal mol⁻¹, respectively.
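
    The statistical model in question is essentially a van't Hoff analysis of the conformer equilibrium; a minimal sketch with synthetic population ratios follows (the ΔH used to generate the ratios is an input chosen near the recommended value, not the paper's data).

```python
# Van't Hoff extraction of a conformer enthalpy difference:
# ln K = -dH/(R*T) + dS/R, fitted linearly against 1/T.
import numpy as np

R = 1.98720425                                        # cal mol^-1 K^-1
T = np.array([133.0, 150.0, 170.0, 196.0])            # K
K = 2.0 * np.exp(-660.0 / (R * T))                    # synthetic gauche/anti
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
print(f"ΔH ≈ {-slope * R:.0f} cal/mol, ΔS ≈ {intercept * R:.2f} cal/(mol·K)")
```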

  2. Preface to special issue of selected papers from Theoretical, Experimental, and Computational Mechanics (TECM)

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Sarlak Chivaee, Hamid; Hattel, Jesper Henri

    2017-01-01

    We are pleased to introduce this special issue of the Applied Mathematical Modelling journal with highlights from the Theoretical, Experimental, and Computational Mechanics Symposium (TECM-2015). This special issue consists of four rigorously selected papers originally presented at TECM-2015 as a part of the 13th International Conference of Numerical Analysis and Applied Mathematics 2015 (ICNAAM 2015), which was held on 23-29 September 2015 in Rhodes, Greece. The symposium attracted a broad range of international and local leaders in theoretical, experimental, and computational mechanics across various fields and applications. The symposium did an excellent job of outlining the current landscape of computational mechanics and its capabilities in solving complex industrial problems in the process industries, and we agree with the editor-in-chief of the journal that it is certainly worthwhile

  3. Materials by design: An experimental and computational investigation on the microanatomy arrangement of porous metallic glasses

    International Nuclear Information System (INIS)

    Sarac, Baran; Klusemann, Benjamin; Xiao, Tao; Bargmann, Swantje

    2014-01-01

    The correlation of a material’s structure with its properties is one of the important unresolved issues in materials science research. This paper discusses a novel experimental and computational approach by which the influence of pores on the mechanical properties of bulk metallic glasses (BMGs) can be systematically and quantitatively analyzed. The experimental stage involves the fabrication of a template whose pore configuration is pre-determined by computer-aided design tools, and the replication of the designed patterns with BMGs. Quasi-static mechanical characterization of these complex microstructures is conducted under uniaxial tension and in-plane compression. For the numerical simulations, a non-local gradient-enhanced continuum mechanical model is established, using thermodynamic principles and periodic boundary conditions. The combination of the experimental and numerical results has identified the importance of the pore configuration, the overall porosity and the diameter-to-spacing ratio of the pores in attaining optimized material properties

  4. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    Science.gov (United States)

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  5. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for a quantity k that links two quantities A and B of different dimensions through a relation of the form A = k·B. The term factor should be reserved for k when the quantities A and B it links have the same dimensions. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities linked have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  6. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    Science.gov (United States)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results, in the form of internal-external rotations and anterior-posterior displacements for a new and an experimentally simulated specimen of the patella femoral joint under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was also observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to ±5% changes in kinematic, frictional, force and stiffness coefficients, and insensitive to the time step.

  7. Experimental determination of the angular dependence factor for the dose equivalent for photons in calibration phantoms of PMMA

    International Nuclear Information System (INIS)

    Lund, E.; Carlsson, C.A.; Pernicka, F.

    1994-01-01

    The conversion coefficients from air kerma to dose equivalent at a depth of 10 mm in both a spherical and a slab phantom of PMMA have been determined for the X ray qualities: 40, 80 and 295 kV, ISO 'narrow' spectra; and for ¹³⁷Cs γ rays. The angular dependence factors have been experimentally determined for the same qualities and for different angles between 0° and 180°. The absorbed doses have been measured with thermoluminescent LiF dosemeters. The conversion coefficients and the angular dependence factors are generally found to agree well with calculated ones. Some minor discrepancies are found for the angular dependence factors and the 30 × 30 × 15 cm³ PMMA slab phantom. (Author)

  8. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on the intensification of thermal and mass exchange which were performed by the National Research Centre "Kurchatov Institute" over the past years. Recently, many designs of heat mass transfer intensifier grids have been proposed. NRC "Kurchatov Institute" has accomplished a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of the experimental investigations can be used in the verification of computational models and codes. On the basis of the experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated in the subchannel code SC-INT.

  9. A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter

    Science.gov (United States)

    vanDam, C. P.; Budge, A. M.; Duque, E. P. N.

    1996-01-01

    This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main and tail rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 × 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts and fuselage-rotor-tail configurations were tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies, which are accounted for by local areas of poor computational grid quality and local geometry differences. These calculations demonstrate the application of advanced computational fluid dynamics methodologies to a flight vehicle currently under development, and serve as an important verification for future computed results.

  10. Node-to-node field calibration of wireless distributed air pollution sensor network.

    Science.gov (United States)

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods, make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagates calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
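
    The chain mechanism lends itself to a simple simulation. The following sketch (ours, not from the paper) models a five-node chain with linear sensor responses and Gaussian noise, calibrating node 1 against the reference and each later node against its already-calibrated predecessor, then reports how the error grows along the chain.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_obs = 5, 500
      truth = rng.uniform(10, 80, n_obs)               # "true" pollutant levels

      # Each raw sensor applies an unknown gain/offset plus noise (assumed linear).
      gains = rng.uniform(0.8, 1.2, n_sensors)
      offsets = rng.uniform(-5, 5, n_sensors)
      raw = [g * truth + b + rng.normal(0, 1.0, n_obs) for g, b in zip(gains, offsets)]

      target = truth                                   # node 1 sees the reference
      calibrated = []
      for r in raw:
          slope, intercept = np.polyfit(r, target, 1)  # least-squares calibration
          cal = slope * r + intercept
          calibrated.append(cal)
          target = cal                                 # next node calibrates against this one

      for i, cal in enumerate(calibrated, 1):          # error propagation along the chain
          print(f"node {i}: RMSE vs reference = {np.sqrt(np.mean((cal - truth) ** 2)):.2f}")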

  11. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code is important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty.
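
    The distinction can be made concrete with a toy example; the one-parameter model and synthetic data below are ours, chosen only to separate the two activities: calibration tunes the parameter against one dataset, validation quantifies predictive error on data held out from calibration.

      import numpy as np

      rng = np.random.default_rng(1)

      def model(x, k):
          return k * x ** 2                     # toy computational model, one parameter

      x_cal, x_val = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
      y_cal = model(x_cal, 2.5) + rng.normal(0, 0.05, 30)   # "experimental" data
      y_val = model(x_val, 2.5) + rng.normal(0, 0.05, 30)

      # Calibration: pick the parameter that maximizes agreement with the data.
      ks = np.linspace(0.5, 5.0, 1000)
      k_hat = ks[np.argmin([np.sum((model(x_cal, k) - y_cal) ** 2) for k in ks])]

      # Validation: quantify predictive capability on data NOT used for calibration.
      rmse = np.sqrt(np.mean((model(x_val, k_hat) - y_val) ** 2))
      print(f"calibrated k = {k_hat:.3f}, validation RMSE = {rmse:.3f}")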

  12. Pairing experimentation and computational modelling to understand the role of tissue inducer cells in the development of lymphoid organs

    Directory of Open Access Journals (Sweden)

    Kieran eAlden

    2012-07-01

    Full Text Available The use of genetic tools, imaging technologies and ex vivo culture systems has provided significant insights into the role of tissue inducer cells and associated signalling pathways in the formation and function of lymphoid organs. Despite advances in experimental technologies, the molecular and cellular processes orchestrating the formation of a complex 3-dimensional tissue are difficult to dissect using current approaches. Therefore, a robust set of simulation tools has been developed to model the processes involved in lymphoid tissue development. Specifically, the role of different tissue inducer cell populations in the dynamic formation of Peyer's Patches has been examined. Utilising approaches from critical systems engineering, an unbiased model of lymphoid tissue inducer cell function has been developed that permits the development of emerging behaviours which are statistically not different from those observed in vivo. These results provide the confidence to utilise statistical methods to explore how the simulator predicts cellular behaviour and outcomes under different physiological conditions. Such methods, known as sensitivity analysis techniques, can provide insight into when a component part of the system (such as a particular cell type, adhesion molecule, or chemokine) begins to have an influence on observed behaviour, and quantify the effect a component part has on the end result: the formation of lymphoid tissue. Through the use of such a principled approach in the design, calibration, and analysis of a computer simulation, a robust in silico tool can be developed which can both further the understanding of the biological system being explored, and act as a tool for the generation of hypotheses which can be tested utilising experimental approaches.
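
    A minimal form of the sensitivity analysis described can be sketched in Python with a stand-in response function (the parameter names and the function itself are illustrative, not taken from the simulator): each input is perturbed one at a time and the relative change in the output is recorded.

      import numpy as np

      def simulator(adhesion, chemokine, inducer_cells):
          # Stand-in for the lymphoid tissue simulator's output of interest.
          return inducer_cells * adhesion / (1.0 + np.exp(-chemokine))

      baseline = {"adhesion": 0.5, "chemokine": 1.0, "inducer_cells": 100.0}
      y0 = simulator(**baseline)

      for name in baseline:                     # one-at-a-time +10% perturbation
          perturbed = dict(baseline, **{name: baseline[name] * 1.10})
          change = (simulator(**perturbed) - y0) / y0
          print(f"{name:>14s}: {100 * change:+.1f}% output change")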

  13. Video analysis of projectile motion using tablet computers as experimental tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and g in order to explore the underlying laws of motion. This experiment can easily be performed by students themselves, providing more autonomy in their problem-solving processes than traditional learning approaches. We believe that this autonomy and the authenticity of the experimental tool both foster their motivation.
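
    The underlying analysis step is a quadratic fit to the tracked vertical position. A short sketch with synthetic 30 fps data (the noise level is an assumption) shows how g and the initial conditions fall out of the fit y(t) = y0 + v0 t - g t²/2:

      import numpy as np

      g_true, v0, y0 = 9.81, 3.0, 1.2
      t = np.arange(0, 0.6, 1 / 30)                   # 30 fps video frames
      y = y0 + v0 * t - 0.5 * g_true * t ** 2
      y += np.random.default_rng(2).normal(0, 0.005, t.size)   # tracking noise

      c2, c1, c0 = np.polyfit(t, y, 2)                # fit y = c2 t^2 + c1 t + c0
      print(f"g = {-2 * c2:.2f} m/s^2, v0 = {c1:.2f} m/s, y0 = {c0:.2f} m")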

  14. Experimental assessment of computer codes used for safety analysis of integral reactors

    Energy Technology Data Exchange (ETDEWEB)

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)

    1995-09-01

    Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation and a passive RHRS through in-reactor HXs. These features defined the main trends in experimental investigations and verification efforts for the computer codes applied. The paper briefly reviews the performed experimental investigations of the thermohydraulics of the AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 for accident analysis in integral reactors is presented.

  15. Experimental Validation of an Efficient Fan-Beam Calibration Procedure for k-Nearest Neighbor Position Estimation in Monolithic Scintillator Detectors

    Science.gov (United States)

    Borghi, Giacomo; Tabacchini, Valerio; Seifert, Stefan; Schaart, Dennis R.

    2015-02-01

    Monolithic scintillator detectors can achieve excellent spatial resolution and coincidence resolving time. However, their practical use for positron emission tomography (PET) and other applications in the medical imaging field is still limited due to drawbacks of the different methods used to estimate the position of interaction. Common statistical methods, for example, require the collection of an extensive dataset of reference events with a narrow pencil beam aimed at a fine grid of reference positions. Such procedures are time consuming and not straightforwardly implemented in systems composed of many detectors. Here, we experimentally demonstrate for the first time a new calibration procedure for k-nearest neighbor (k-NN) position estimation that utilizes reference data acquired with a fan beam. The procedure is tested on two detectors consisting of 16 mm × 16 mm × 10 mm and 16 mm × 16 mm × 20 mm monolithic, Ca-codoped LSO:Ce crystals and digital photon counter (DPC) arrays. For both detectors, the spatial resolution and the bias obtained with the new method are found to be practically the same as those obtained with the previously used method based on pencil-beam irradiation, while the calibration time is reduced by a factor of 20. Specifically, a FWHM of 1.1 mm and a FWTM of 2.7 mm were obtained using the fan-beam method with the 10 mm crystal, whereas a FWHM of 1.5 mm and a FWTM of 6 mm were achieved with the 20 mm crystal. Using a fan beam made with a 4.5 MBq ²²Na point source and a tungsten slit collimator with 0.5 mm aperture, the total measurement time needed to acquire the reference dataset was 3 hours for the thinner crystal and 2 hours for the thicker one.
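
    The estimation principle, independent of how the reference set is acquired, can be sketched in a few lines: reference events with known positions are stored, and a new event is positioned by averaging the positions of its k most similar reference events. The toy light-response model below is an assumption for illustration.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(3)
      n_ref, n_pix = 5000, 64
      pos_ref = rng.uniform(0, 16, (n_ref, 2))        # known (x, y) positions, mm
      centers = rng.uniform(0, 16, (n_pix, 2))        # photosensor pixel positions

      def light(pos):                                 # toy scintillation light response
          return 1.0 / (1.0 + np.sum((centers - pos) ** 2, axis=1))

      X_ref = np.array([light(p) for p in pos_ref]) + rng.normal(0, 0.01, (n_ref, n_pix))
      knn = KNeighborsRegressor(n_neighbors=20).fit(X_ref, pos_ref)

      event = light(np.array([8.0, 8.0])) + rng.normal(0, 0.01, n_pix)
      print("estimated (x, y) =", knn.predict(event[None, :]).round(2))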

  16. A computer program to evaluate the experimental data in instrumental multielement neutron activation analysis

    International Nuclear Information System (INIS)

    Greim, L.; Motamedi, K.; Niedergesaess, R.

    1976-01-01

    A computer code evaluating experimental data of neutron activation analysis (NAA) for the determination of atomic abundances is described. The experimental data are, besides a probe designation, the probe weight, the irradiation parameters and a Ge(Li) pulse-height spectrum from the activity measurement. The organisation of the necessary nuclear data, comprising all methods of activation in reactor irradiations, is given. Furthermore, the automatic evaluation of spectra, the assignment of the resulting peaks to nuclei and the calculation of atomic abundances are described. The complete evaluation of a spectrum with many lines, e.g. 100 lines of 20 nuclei, takes less than 1 minute of machine time on the TR 440 computer. (orig.) [de]
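
    The central conversion such a code performs, from a net gamma-peak area to a number of target atoms, follows the standard activation-decay-counting relation. The sketch below evaluates it with placeholder nuclear data and timing values (all numbers are assumptions, not data from the program described):

      import math

      phi = 1e13           # neutron flux, n cm^-2 s^-1            (assumed)
      sigma = 25e-24       # activation cross-section, cm^2        (assumed)
      half_life = 3600.0   # product half-life, s                  (assumed)
      t_irr, t_decay, t_count = 1800.0, 600.0, 900.0               # timing, s
      eff, gamma_yield = 0.05, 0.80    # detector efficiency, emission probability
      counts = 12000.0                 # net peak area from the spectrum

      lam = math.log(2) / half_life
      # counts = N * sigma * phi * (1 - e^(-lam t_irr)) * e^(-lam t_decay)
      #              * (1 - e^(-lam t_count)) / lam * eff * gamma_yield
      denom = (sigma * phi * (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_decay)
               * (1 - math.exp(-lam * t_count)) / lam * eff * gamma_yield)
      print(f"target atoms = {counts / denom:.3e}")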

  17. Preliminary experimentally-validated forced and mixed convection computational simulations of the Rotatable Buoyancy Tunnel

    International Nuclear Information System (INIS)

    Clifford, Corey E.; Kimber, Mark L.

    2015-01-01

    Although computational fluid dynamics (CFD) has not been directly utilized to perform safety analyses of nuclear reactors in the United States, several vendors are considering adopting commercial numerical packages for current and future projects. To ensure the accuracy of these computational models, it is imperative to validate the assumptions and approximations built into commercial CFD codes against physical data from flows analogous to those in modern nuclear reactors. To this end, researchers at Utah State University (USU) have constructed the Rotatable Buoyancy Tunnel (RoBuT) test facility, which is designed to provide flow and thermal validation data for CFD simulations of forced and mixed convection scenarios. In order to evaluate the ability of current CFD codes to capture the complex physics associated with these types of flows, a computational model of the RoBuT test facility is created using the ANSYS Fluent commercial CFD code. The numerical RoBuT model is analyzed at identical conditions to several experimental trials undertaken at USU. Each experiment is reconstructed numerically and evaluated with the second-order Reynolds stress model (RSM). Two different thermal boundary conditions at the heated surface of the RoBuT test section are investigated: constant temperature (isothermal) and constant surface heat flux (isoflux). Additionally, the fluid velocity at the inlet of the test section is varied in an effort to modify the relative importance of natural convection heat transfer from the heated wall of the RoBuT. Mean velocity, both in the streamwise and transverse directions, as well as components of the Reynolds stress tensor at three points downstream of the RoBuT test section inlet are compared to results obtained from experimental trials. Early computational results obtained from this research initiative are in good agreement with experimental data obtained from the RoBuT facility and both the experimental data and numerical method can be used

  18. An experimental and computational framework to build a dynamic protein atlas of human cell division

    OpenAIRE

    Kavur, Marina; Kavur, Marina; Kavur, Marina; Ellenberg, Jan; Peters, Jan-Michael; Ladurner, Rene; Martinic, Marina; Kueblbeck, Moritz; Nijmeijer, Bianca; Wachsmuth, Malte; Koch, Birgit; Walther, Nike; Politi, Antonio; Heriche, Jean-Karim; Hossain, M.

    2017-01-01

    Essential biological functions of human cells, such as division, require the tight coordination of the activity of hundreds of proteins in space and time. While live cell imaging is a powerful tool to study the distribution and dynamics of individual proteins after fluorescence tagging, it has not yet been used to map protein networks due to the lack of systematic and quantitative experimental and computational approaches. Using the cell and nuclear boundaries as landmarks, we generated a 4D ...

  19. Computer experimental analysis of the CHP performance of a 100 kWe SOFC Field Unit by a factorial design

    Science.gov (United States)

    Calì, M.; Santarelli, M. G. L.; Leone, P.

    Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which is at present (May 2005) starting its operation and which will supply electric and thermal power to the GTT factory. In order to take better advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated with the results acquired during the first CHP 100 demonstration at EDB/ELSAM in Westerwoort. Afterwards, the simulated tests were performed in the form of a computer experimental session, and the measurement uncertainties were simulated with perturbations imposed on the model independent variables. The statistical methodology used for the computer experimental analysis is the factorial design (Yates' technique): using the ANOVA technique, the effect of the main independent variables (air utilization factor U_ox, fuel utilization factor U_F, internal fuel and air preheating and anodic recycling flow rate) has been investigated in a rigorous manner. The analysis accounts for the effects of the parameters on stack electric power, recovered thermal power, single cell voltage, cell operating temperature, consumed fuel flow and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to the generated electric power and the stack heat recovered.
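
    The arithmetic behind a 2^k factorial analysis is compact enough to sketch: with responses observed at every combination of low/high factor levels, the main effect of a factor is the difference between the mean responses at its high and low levels. The response values below are synthetic stand-ins, not CHP 100 results.

      import itertools
      import numpy as np

      factors = ["U_ox", "U_F", "preheating", "recycle"]
      design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

      rng = np.random.default_rng(4)                  # synthetic stand-in response
      response = (100 + 8 * design[:, 0] - 5 * design[:, 1]
                  + 2 * design[:, 0] * design[:, 1] + rng.normal(0, 0.5, len(design)))

      for j, name in enumerate(factors):              # Yates-style main effects
          effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
          print(f"main effect of {name:>10s}: {effect:+.2f}")

      inter = design[:, 0] * design[:, 1]             # two-factor interaction U_ox x U_F
      print(f"U_ox x U_F interaction: "
            f"{response[inter == 1].mean() - response[inter == -1].mean():+.2f}")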

  20. Behavioral heterogeneity affects individual performances in experimental and computational lowest unique integer games

    Science.gov (United States)

    Yamada, Takashi

    2017-12-01

    This study computationally examines (1) how the behaviors of subjects are represented, (2) whether the classification of subjects is related to the scale of the game, and (3) what kind of behavioral models are successful in small-sized lowest unique integer games (LUIGs). In a LUIG, N (≥ 3) players submit a positive integer up to M (> 1) and the player choosing the smallest number not chosen by anyone else wins. For this purpose, the author considers four LUIGs with N = {3, 4} and M = {3, 4} and uses the behavioral data obtained in the laboratory experiment by Yamada and Hanaki (Physica A 463, pp. 88–102, 2016). For computational experiments, the author calibrates the parameters of typical learning models for each subject and then pursues round robin competitions. The main findings are the following: First, the subjects who played not differently from the mixed-strategy Nash equilibrium (MSE) prediction tended to make use of not only their choices but also the game outcomes. Meanwhile, those who deviated from the MSE prediction took care of only their choices as the complexity of the game increased. Second, the heterogeneity of player strategies depends on both the number of players (N) and the upper limit (M). Third, when groups consist of different agents, as in the earlier laboratory experiment, sticking behavior is quite effective for winning.
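
    The game's winning rule itself takes only a few lines to implement, which makes LUIGs convenient for round robin simulation; a minimal version:

      from collections import Counter

      def luig_winner(choices):
          """Return the index of the winning player, or None if no number is unique."""
          counts = Counter(choices)
          unique = [c for c in choices if counts[c] == 1]
          return choices.index(min(unique)) if unique else None

      print(luig_winner([1, 1, 2]))    # 2: the player choosing 2 wins
      print(luig_winner([3, 3, 3]))    # None: no unique number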

  1. Behavioral Heterogeneity Affects Individual Performances in Experimental and Computational Lowest Unique Integer Games

    Directory of Open Access Journals (Sweden)

    Takashi Yamada

    2017-12-01

    Full Text Available This study computationally examines (1) how the behaviors of subjects are represented, (2) whether the classification of subjects is related to the scale of the game, and (3) what kind of behavioral models are successful in small-sized lowest unique integer games (LUIGs). In a LUIG, N (≥ 3) players submit a positive integer up to M (> 1) and the player choosing the smallest number not chosen by anyone else wins. For this purpose, the author considers four LUIGs with N = {3, 4} and M = {3, 4} and uses the behavioral data obtained in the laboratory experiment by Yamada and Hanaki [1]. For computational experiments, the author calibrates the parameters of typical learning models for each subject and then pursues round robin competitions. The main findings are the following: First, the subjects who played not differently from the mixed-strategy Nash equilibrium (MSE) prediction tended to make use of not only their choices but also the game outcomes. Meanwhile, those who deviated from the MSE prediction took care of only their choices as the complexity of the game increased. Second, the heterogeneity of player strategies depends on both the number of players (N) and the upper limit (M). Third, when groups consist of different agents, as in the earlier laboratory experiment, sticking behavior is quite effective for winning.

  2. Modeling an Excitable Biosynthetic Tissue with Inherent Variability for Paired Computational-Experimental Studies.

    Directory of Open Access Journals (Sweden)

    Tanmay A Gokhale

    2017-01-01

    Full Text Available To understand how excitable tissues give rise to arrhythmias, it is crucial to understand the electrical dynamics of cells in the context of their environment. Multicellular monolayer cultures have proven useful for investigating arrhythmias and other conduction anomalies, and because of their relatively simple structure, these constructs lend themselves to paired computational studies that often help elucidate mechanisms of the observed behavior. However, tissue cultures of cardiomyocyte monolayers currently require the use of neonatal cells with ionic properties that change rapidly during development and have thus been poorly characterized and modeled to date. Recently, Kirkton and Bursac demonstrated the ability to create biosynthetic excitable tissues from genetically engineered and immortalized HEK293 cells with well-characterized electrical properties and the ability to propagate action potentials. In this study, we developed and validated a computational model of these excitable HEK293 cells (called "Ex293" cells) using existing electrophysiological data and a genetic search algorithm. In order to reproduce not only the mean but also the variability of experimental observations, we examined what sources of variation were required in the computational model. Random cell-to-cell and inter-monolayer variation in both ionic conductances and tissue conductivity was necessary to explain the experimentally observed variability in action potential shape and macroscopic conduction, and the spatial organization of cell-to-cell conductance variation was found not to impact macroscopic behavior; the resulting model accurately reproduces both normal and drug-modified conduction behavior. The development of a computational Ex293 cell and tissue model provides a novel framework to perform paired computational-experimental studies to study normal and abnormal conduction in multidimensional excitable tissue, and the methodology of modeling...

  3. Experimental and computational analysis of steam condensation in the presence of air and helium

    International Nuclear Information System (INIS)

    Bucci, M.

    2010-01-01

    Among the different phenomena expected to occur within nuclear reactor containments during a postulated loss of coolant accident, condensation on containment walls plays a major role, since it represents an important heat sink for evacuating the energy released by the discharge of the primary water. Nevertheless, condensation also strongly affects other relevant phenomena, like containment atmosphere mixing, which influences the distribution of non-condensable gases hypothetically released in severe accident conditions. In this scenario, the role of condensation is not obvious, since it can locally allow the hydrogen produced by the oxidation of the core claddings to concentrate and reach flammability limits, producing a dangerous effect instead of a positive one. The understanding of condensation in the presence of air and hydrogen is therefore a fundamental task for the safety analyses of reactor containments. This research has been carried out with the aim of contributing to the understanding of these phenomena. A double strategy has been adopted, including complementary experimental and computational activities. Novel data have been made available by the CONAN facility, investigating the effects induced by light non-condensable gases in experimental configurations that were scarcely investigated in past studies. Computational fluid dynamics (CFD) condensation models have been developed and validated. The suitability of helium as a substitute for hydrogen in experimental activities has been investigated by theoretical and computational analyses, allowing the establishment of simple criteria for the scaling of condensation tests in the presence of a light non-condensable gas. (authors)

  4. Computational Fluid Dynamics Modeling of the Human Pulmonary Arteries with Experimental Validation.

    Science.gov (United States)

    Bordones, Alifer D; Leroux, Matthew; Kheyfets, Vitaly O; Wu, Yu-An; Chen, Chia-Yuan; Finol, Ender A

    2018-05-21

    Pulmonary hypertension (PH) is a chronic progressive disease characterized by elevated pulmonary arterial pressure, caused by an increase in pulmonary arterial impedance. Computational fluid dynamics (CFD) can be used to identify metrics representative of the stage of PH disease. However, experimental validation of CFD models is often not pursued due to the geometric complexity of the model or uncertainties in the reproduction of the required flow conditions. The goal of this work is to validate experimentally a CFD model of a pulmonary artery phantom using a particle image velocimetry (PIV) technique. Rapid prototyping was used for the construction of the patient-specific pulmonary geometry, derived from chest computed tomography angiography images. CFD simulations were performed with the pulmonary model with a Reynolds number matching those of the experiments. Flow rates, the velocity field, and shear stress distributions obtained with the CFD simulations were compared to their counterparts from the PIV flow visualization experiments. Computationally predicted flow rates were within 1% of the experimental measurements for three of the four branches of the CFD model. The mean velocities in four transversal planes of study were within 5.9 to 13.1% of the experimental mean velocities. Shear stresses were qualitatively similar between the two methods with some discrepancies in the regions of high velocity gradients. The fluid flow differences between the CFD model and the PIV phantom are attributed to experimental inaccuracies and the relative compliance of the phantom. This comparative analysis yielded valuable information on the accuracy of CFD predicted hemodynamics in pulmonary circulation models.

  5. Standard molar enthalpy of formation of 1-benzosuberone: An experimental and computational study

    International Nuclear Information System (INIS)

    Miranda, Margarida S.; Morais, Victor M.F.; Matos, M. Agostinha R.; Liebman, Joel F.

    2010-01-01

    The energetics of 1-benzosuberone was studied by a combination of calorimetric techniques and computational calculations. The standard (p° = 0.1 MPa) molar enthalpy of formation of 1-benzosuberone, in the liquid phase, was derived from the massic energy of combustion, in oxygen, at T = 298.15 K, measured by static bomb combustion calorimetry. The standard molar enthalpy of vaporization, at T = 298.15 K, was measured by Calvet microcalorimetry. From these two parameters the standard (p° = 0.1 MPa) molar enthalpy of formation, in the gaseous phase, at T = 298.15 K, was derived: −(96.1 ± 3.4) kJ·mol⁻¹. The G3(MP2)//B3LYP composite method and appropriate reactions were used to computationally calculate the standard molar enthalpy of formation of 1-benzosuberone, in the gaseous phase, at T = 298.15 K. The computational results are in very good agreement with the experimental value.
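
    The derivation pattern is a simple sum with uncertainties combined in quadrature. In the sketch below, the liquid-phase and vaporization values are illustrative placeholders (they are not quoted in the abstract), chosen only to reproduce the reported gas-phase result:

      import math

      dfH_liq, u_liq = -161.9, 3.0     # kJ/mol, liquid phase      (illustrative)
      dvapH, u_vap = 65.8, 1.6         # kJ/mol, vaporization      (illustrative)

      dfH_gas = dfH_liq + dvapH                 # gas = liquid + vaporization
      u_gas = math.hypot(u_liq, u_vap)          # quadrature combination
      print(f"dfH(g, 298.15 K) = {dfH_gas:.1f} +/- {u_gas:.1f} kJ/mol")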

  6. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    Science.gov (United States)

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  7. Tracking evolution of myoglobin stability in cetaceans using experimentally calibrated computational methods that account for generic protein relaxation

    DEFF Research Database (Denmark)

    Holm, Jeppe; Dasmeh, Pouria; Kepp, Kasper Planeta

    2016-01-01

    The evolution of cetaceans (whales, dolphins, and porpoises) from land to water is one of the most spectacular events in mammal evolution. It has been suggested that selection for higher myoglobin stability (ΔG of folding) allowed whales to conquer the deep-diving niche. The stability of multi-si...

  8. Calibration and GEANT4 Simulations of the Phase II Proton Compute Tomography (pCT) Range Stack Detector

    Energy Technology Data Exchange (ETDEWEB)

    Uzunyan, S. A. [Northern Illinois Univ., DeKalb, IL (United States); Blazey, G. [Northern Illinois Univ., DeKalb, IL (United States); Boi, S. [Northern Illinois Univ., DeKalb, IL (United States); Coutrakon, G. [Northern Illinois Univ., DeKalb, IL (United States); Dyshkant, A. [Northern Illinois Univ., DeKalb, IL (United States); Francis, K. [Northern Illinois Univ., DeKalb, IL (United States); Hedin, D. [Northern Illinois Univ., DeKalb, IL (United States); Johnson, E. [Northern Illinois Univ., DeKalb, IL (United States); Kalnins, J. [Northern Illinois Univ., DeKalb, IL (United States); Zutshi, V. [Northern Illinois Univ., DeKalb, IL (United States); Ford, R. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Rauch, J. E. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Rubinov, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Sellberg, G. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Wilson, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Naimuddin, M. [Univ. of Delhi, New Delhi (India)

    2015-12-29

    Northern Illinois University, in collaboration with Fermi National Accelerator Laboratory (FNAL) and Delhi University, has been designing and building a proton CT scanner for applications in proton treatment planning. The Phase II proton CT scanner consists of eight planes of tracking detectors with two X and two Y coordinate measurements both before and after the patient. In addition, a range stack detector consisting of a stack of thin scintillator tiles, arranged in twelve eight-tile frames, is used to determine the water equivalent path length (WEPL) of each track through the patient. The X-Y coordinates and WEPL are required input for image reconstruction software to find the relative (proton) stopping power (RSP) value of each voxel in the patient and generate a corresponding 3D image. In this Note we describe tests conducted in 2015 at the proton beam at the Central DuPage Hospital in Warrenville, IL, focusing on the range stack calibration procedure and comparisons with the GEANT4 range stack simulation.

  9. Survey of engineering computational methods and experimental programs for estimating supersonic missile aerodynamic characteristics

    Science.gov (United States)

    Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.

    1982-01-01

    This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.

  10. Automated air-void system characterization of hardened concrete: Helping computers to count air-voids like people count air-voids---Methods for flatbed scanner calibration

    Science.gov (United States)

    Peterson, Karl

    Since the discovery in the late 1930s that air entrainment can improve the durability of concrete, it has been important for people to know the quantity, spatial distribution, and size distribution of the air-voids in their concrete mixes in order to ensure a durable final product. The task of air-void system characterization has fallen on the microscopist, who, according to a standard test method laid forth by the American Society for Testing and Materials, must meticulously count or measure about a thousand air-voids per sample as exposed on a cut and polished cross-section of concrete. The equipment used to perform this task has traditionally included a stereomicroscope, a mechanical stage, and a tally counter. Over the past 30 years, with the availability of computers and digital imaging, automated methods have been introduced to perform the same task, but using the same basic equipment. The method described here replaces the microscope and mechanical stage with an ordinary flatbed desktop scanner, and replaces the microscopist and tally counter with a personal computer; two pieces of equipment much more readily available than a microscope with a mechanical stage, and certainly easier to find than a person willing to sit for extended periods of time counting air-voids. Most laboratories that perform air-void system characterization typically have cabinets full of prepared samples with corresponding results from manual operators. Proponents of automated methods often take advantage of this fact by analyzing the same samples and comparing the results. A similar iterative approach is described here, where scanned images collected from a significant number of samples are analyzed, the results compared to those of the manual operator, and the settings optimized to best approximate the results of the manual operator. The results of this calibration procedure are compared to an alternative calibration procedure based on the more rigorous digital image accuracy...

  11. Computational and experimental optimization of the exhaust air energy recovery wind turbine generator

    International Nuclear Information System (INIS)

    Tabatabaeikia, Seyedsaeed; Ghazali, Nik Nazri Bin Nik; Chong, Wen Tong; Shahizare, Behzad; Izadyar, Nima; Esmaeilzadeh, Alireza; Fazlizan, Ahmad

    2016-01-01

    Highlights: • Studying the viability of harvesting wasted energy with an exhaust air recovery generator. • Optimizing the design using response surface methodology. • Validation of the optimization and computational results by experimental tests. • Investigation of flow behaviour using computational fluid dynamics simulations. • Performing a technical and economic study of the exhaust air recovery generator. - Abstract: This paper studies the optimization of an innovative exhaust air recovery wind turbine generator through computational fluid dynamics (CFD) simulations. The optimization strategy aims to optimize the overall system energy generation while simultaneously guaranteeing that it does not violate the cooling tower performance in terms of decreasing airflow intake or increasing fan motor power consumption. The wind turbine rotor position, modified diffuser plates, and the introduction of separator plates to the design are considered as the variable factors for the optimization. The generated power coefficient is selected as the optimization objective. Unlike most previous optimizations in the field of wind turbines, in this study response surface methodology (RSM) has been utilised as an analytical optimization procedure, making use of multivariate statistical techniques. A comprehensive study on CFD parameters including the mesh resolution, the turbulence model and transient time step values is presented. The system is simulated using the SST k-ω turbulence model and then both the computational and optimization results are validated by experimental data obtained in the laboratory. Results show that the optimization strategy can improve the wind turbine generated power by 48.6% compared to the baseline design. Meanwhile, it is able to enhance the fan intake airflow rate and decrease fan motor power consumption. The obtained optimization equations are also validated by both CFD and experimental results, and a negligible deviation in the range of 6–8.5% is observed.
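
    The RSM step reduces to fitting a second-order polynomial response surface by least squares. The sketch below does this for a power coefficient depending on two coded design variables; the variables and data are illustrative, not those of the study.

      import numpy as np

      rng = np.random.default_rng(7)
      x1, x2 = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)   # coded variables
      cp = (0.30 + 0.05 * x1 - 0.03 * x2 - 0.04 * x1 ** 2 - 0.02 * x2 ** 2
            + 0.01 * x1 * x2 + rng.normal(0, 0.002, 40))        # synthetic response

      # Quadratic surface: cp = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
      A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
      beta, *_ = np.linalg.lstsq(A, cp, rcond=None)
      print("fitted coefficients:", beta.round(3))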

  12. A detailed experimental study of a DNA computer with two endonucleases.

    Science.gov (United States)

    Sakowski, Sebastian; Krasiński, Tadeusz; Sarnik, Joanna; Blasiak, Janusz; Waldmajer, Jacek; Poplawski, Tomasz

    2017-07-14

    Great advances in biotechnology have allowed the construction of a computer from DNA. One of the proposed solutions is a biomolecular finite automaton, a simple two-state DNA computer without memory, which was presented by Ehud Shapiro's group at the Weizmann Institute of Science. The main problem with this computer, in which biomolecules carry out logical operations, is its limited complexity: increasing the number of states of biomolecular automata is difficult. In this study, we constructed (in laboratory conditions) a six-state DNA computer that uses two endonucleases (e.g. AcuI and BbvI) and a ligase. We present a detailed experimental verification of its feasibility. We describe the effect of the number of states, the length of the input data, and nondeterminism on the computing process. We also tested different automata (with three, four, and six states) running on various accepted input words of different lengths, such as ab, aab, aaab, ababa, and on an unaccepted word ba. Moreover, this article presents the reaction optimization and the methods for eliminating certain biochemical problems occurring in the implementation of a biomolecular DNA automaton based on two endonucleases.
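
    In software terms, such an automaton is just a transition table plus a set of accepting states. The molecular transition rules are not reproduced in the abstract, so the three-state table below is an illustrative stand-in that accepts the listed words ab, aab, aaab and ababa while rejecting ba:

      def run_automaton(word, transitions, start, accepting):
          state = start
          for symbol in word:
              state = transitions.get((state, symbol))
              if state is None:
                  return False              # undefined transition: reject
          return state in accepting

      T = {("q0", "a"): "q1",               # illustrative table, not the lab automaton
           ("q1", "a"): "q1", ("q1", "b"): "q2",
           ("q2", "a"): "q1", ("q2", "b"): "q2"}

      for w in ["ab", "aab", "aaab", "ababa", "ba"]:
          print(w, run_automaton(w, T, "q0", {"q1", "q2"}))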

  13. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent agreement with reliable literature values was found.

  14. Computational and experimental analysis of supersonic air ejector: Turbulence modeling and assessment of 3D effects

    International Nuclear Information System (INIS)

    Mazzelli, Federico; Little, Adrienne B.; Garimella, Srinivas; Bartosiewicz, Yann

    2015-01-01

    Highlights: • Computational and experimental assessment of computational techniques for ejector flows. • Comparisons to 2D/3D (k–ε, k–ε realizable, k–ω SST, and stress–ω RSM) turbulence models. • k–ω SST model performs best while ε-based models more accurate at low motive pressures. • Good on-design agreement across 2D and 3D models; off-design needs 3D simulations. - Abstract: Numerical and experimental analyses are performed on a supersonic air ejector to evaluate the effectiveness of commonly-used computational techniques when predicting ejector flow characteristics. Three series of experimental curves at different operating conditions are compared with 2D and 3D simulations using RANS, steady, wall-resolved models. Four different turbulence models are tested: k–ε, k–ε realizable, k–ω SST, and the stress–ω Reynolds Stress Model. An extensive analysis is performed to interpret the differences between numerical and experimental results. The results show that while differences between turbulence models are typically small with respect to the prediction of global parameters such as ejector inlet mass flow rates and Mass Entrainment Ratio (MER), the k–ω SST model generally performs best whereas ε-based models are more accurate at low motive pressures. Good agreement is found across all 2D and 3D models at on-design conditions. However, prediction at off-design conditions is only acceptable with 3D models, making 3D simulations mandatory to correctly predict the critical pressure and achieve reasonable results at off-design conditions. This may partly depend on the specific geometry under consideration, which in the present study has a rectangular cross section with low aspect ratio.

  15. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  16. Accuracy evaluation of optical distortion calibration by digital image correlation

    Science.gov (United States)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model remains an open problem, so there is a clear need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.
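
    For reference, a typical model under evaluation is the radial (Brown) distortion model, in which a point at normalized radius r is displaced by a factor 1 + k1 r² + k2 r⁴. The sketch below (coefficients are illustrative) shows the apparent displacement such distortion would superimpose on a rigid-body DIC field if left uncorrected:

      import numpy as np

      def distort(xy, k1, k2):              # radial (Brown) distortion model
          r2 = np.sum(xy ** 2, axis=1, keepdims=True)
          return xy * (1 + k1 * r2 + k2 * r2 ** 2)

      grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5),
                                  np.linspace(-1, 1, 5)), -1).reshape(-1, 2)
      observed = distort(grid, k1=-0.12, k2=0.03)     # what the camera records

      # Spurious displacement a DIC comparison would report without correction:
      print("max apparent displacement:", np.abs(observed - grid).max().round(4))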

  17. A novel computational framework for deducing muscle synergies from experimental joint moments

    Directory of Open Access Journals (Sweden)

    Anantharaman eGopalakrishnan

    2014-12-01

    Full Text Available Prior experimental studies have hypothesized the existence of a 'muscle synergy' based control scheme for producing limb movements and locomotion in vertebrates. Such synergies have been suggested to consist of fixed muscle grouping schemes with the co-activation of all muscles in a synergy resulting in limb movement. Quantitative representations of these groupings (termed muscle weightings) and their control signals (termed synergy controls) have traditionally been derived by the factorization of experimentally measured EMG. This study presents a novel approach for deducing these weightings and controls from inverse dynamic joint moments that are computed from an alternative set of experimental measurements – movement kinematics and kinetics. This technique was applied to joint moments for healthy human walking at 0.7 and 1.7 m/s, and two sets of 'simulated' synergies were computed based on two different criteria: (1) synergies were required to minimize errors between experimental and simulated joint moments in a musculoskeletal model (pure-synergy solution); (2) along with minimizing joint moment errors, synergies also minimized muscle activation levels (optimal-synergy solution). On comparing the two solutions, it was observed that the introduction of optimality requirements (optimal-synergy) to a control strategy solely aimed at reproducing the joint moments (pure-synergy) did not necessitate major changes in the muscle grouping within synergies or the temporal profiles of synergy control signals. Synergies from both simulated solutions exhibited many similarities to EMG-derived synergies from a previously published study, thus implying that the analysis of the two different types of experimental data reveals similar, underlying synergy structures.
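
    For comparison, the conventional EMG route extracts synergies by nonnegative matrix factorization: a muscles-by-time activation matrix X is split into weightings W and controls H with X ≈ W·H. A minimal sketch on synthetic data (the matrix sizes and the choice of four synergies are arbitrary):

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(5)
      n_muscles, n_samples, n_syn = 16, 400, 4
      X = rng.random((n_muscles, n_syn)) @ rng.random((n_syn, n_samples))
      X += 0.01 * rng.random(X.shape)                 # nonnegative "EMG" data

      model = NMF(n_components=n_syn, init="nndsvd", max_iter=1000)
      W = model.fit_transform(X)                      # muscle weightings per synergy
      H = model.components_                           # synergy control signals
      print("reconstruction error:", round(model.reconstruction_err_, 3))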

  18. Modification of silicon nitride surfaces with GOPES and APTES for antibody immobilization: computational and experimental studies

    International Nuclear Information System (INIS)

    To, Thien Dien; Nguyen, Anh Tuan; Phan, Khoa Nhat Thanh; Truong, An Thu Thi; Doan, Tin Chanh Duc; Dang, Chien Mau

    2015-01-01

    Chemical modification of silicon nitride (SiN) surfaces by silanization has been widely studied, especially with 3-(aminopropyl)triethoxysilane (APTES) and 3-(glycidyloxypropyl) dimethylethoxysilane (GOPES). However, few reports have combined experimental and computational studies. In this study, surface modification of SiN surfaces with GOPES and with APTES covalently bound to glutaraldehyde (GTA) was investigated for antibody immobilization. The monoclonal anti-cytokeratin-FITC (MACF) antibody was immobilized on the modified SiN surfaces. The modified surfaces were characterized by water contact angle measurements, atomic force microscopy and fluorescence microscopy. The FITC fluorescent label indicated the presence of the MACF antibody on the SiN surfaces and the efficiency of the silanization reaction. Adsorption of APTES and GOPES on the oxidized SiN surfaces was computationally modeled and calculated with Materials Studio software. The computational and experimental results showed that modification of the SiN surfaces with APTES and GTA was more effective than the modification with GOPES. (paper)

  19. Investigation and experimental validation of the contribution of optical interconnects in the SYMPHONIE massively parallel computer

    International Nuclear Information System (INIS)

    Scheer, Patrick

    1998-01-01

    Progress in microelectronics has led to electronic circuits which are increasingly integrated, with operating frequencies and input/output counts larger than those supported by printed circuit board and back-plane technologies. As a result, distributed systems with several boards cannot fully exploit the performance of integrated circuits. In synchronous parallel computers, the situation is worse, since the overall system performance relies on the efficiency of the electrical interconnects between the integrated circuits which include the processing elements (PE). The study of a real parallel computer named SYMPHONIE shows, for instance, that the system operating frequency is far smaller than the capabilities of the microelectronics technology used for the PE implementation. Optical interconnections may remove these limitations by providing more efficient connections between the PEs. In particular, free-space optical interconnections based on vertical-cavity surface-emitting lasers (VCSELs), micro-lenses and PIN photodiodes are compatible with the required features of the PE communications. Zero-bias modulation of VCSELs with CMOS-compatible digital signals is studied and experimentally demonstrated. A model of the propagation of truncated Gaussian beams through micro-lenses is developed. It is then used to optimise the geometry of the detection areas. A dedicated mechanical system is also proposed and implemented for integrating free-space optical interconnects in a standard electronic environment, representative of that of parallel computer systems. A specially designed demonstrator provides the experimental validation of the above physical concepts. (author) [fr]

  20. Experimental and computer thermodynamics evaluations of an Al-Si-Coating on a quenchable steel

    International Nuclear Information System (INIS)

    Trindade, Vicente Braz

    2017-01-01

    High-strength steels are commonly used in the automobile industry in order to reduce the weight of vehicles. However, a technical difficulty arises from the need for hot stamping of the components, which leads to oxidation. Therefore, a coating is applied to the substrate to avoid high-temperature oxidation. In this work, experimental analysis and computational thermodynamic calculations were used to describe the phase transformations within an Al-Si coating on a quenchable high-strength steel. The Al-Si coating was deposited by hot dipping and characterized using SEM and XRD techniques. Thermodynamic calculations were performed with the commercial software FactSage, using the Calphad methodology. Good agreement was demonstrated between the experimental results and the computed phase stabilities, both for the as-deposited condition and after a diffusion experiment at 920 deg C for 7 minutes, which simulates the thermal cycle of hot stamping of the quenchable steel used. (author)

  1. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    Science.gov (United States)

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network-level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results, highlighting the possibility of functionalizing the network plasticity and the differences between an atomic switch in isolation and its behavior in a network. The effects of changing connectivity density on the nonlinear dynamics were examined, as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
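
    In reservoir computing, only a linear readout is trained on top of the fixed nonlinear dynamics (here, the ASN). Below is a minimal sketch of that training step, assuming a matrix of recorded reservoir states X and a target signal y; both are stand-ins, not the paper's data.

```python
import numpy as np

# Stand-in reservoir states: T time steps x N nodes, bounded by tanh
rng = np.random.default_rng(1)
T, N = 1000, 50
X = np.tanh(rng.standard_normal((T, N)).cumsum(axis=0) * 0.01)
y = np.sin(np.linspace(0, 20 * np.pi, T))   # example target signal

# Only the linear readout is trained (ridge regression); the reservoir is fixed.
lam = 1e-3
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

y_hat = X @ W_out
nrmse = np.sqrt(np.mean((y - y_hat) ** 2)) / np.std(y)
print(f"NRMSE = {nrmse:.3f}")
```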

  2. Experimental and computational study on thermoelectric generators using thermosyphons with phase change as heat exchangers

    International Nuclear Information System (INIS)

    Araiz, M.; Martínez, A.; Astrain, D.; Aranguren, P.

    2017-01-01

    Highlights: • Thermosyphon with phase change heat exchanger computational model. • Construction and experimentation of a prototype. • ±9% maximum deviation from experimental values of the main outputs. • Influence of the auxiliary equipment on net power generation. - Abstract: An important issue in thermoelectric generators is the thermal design of the heat exchangers, since it can improve performance by increasing the heat absorbed or dissipated by the thermoelectric modules. Owing to its several advantages over conventional dissipation systems, a thermosyphon heat exchanger with phase change is proposed for the cold side of thermoelectric generators. Some of these advantages are: high heat-transfer rates; absence of moving parts and lack of auxiliary consumption (because fans or pumps are not required); and the fact that these systems are wickless. A computational model is developed to design and predict the behaviour of these heat exchangers. Furthermore, a prototype has been built and tested in order to demonstrate its performance and validate the computational model. The model predicts the thermal resistance of the heat exchanger with a relative error in the interval [−8.09%, 7.83%] in 95% of the cases. Finally, the use of thermosyphons with phase change in thermoelectric generators has been studied in a waste-heat recovery application, showing that including them on the cold side of the generators improves net thermoelectric production by 36% compared to that obtained with finned dissipators under forced convection.

  3. Experimental and computer thermodynamics evaluations of an Al-Si-Coating on a quenchable steel

    Energy Technology Data Exchange (ETDEWEB)

    Trindade, Vicente Braz, E-mail: vicentebraz@yahoo.com.b [Universidade Federal de Ouro Preto (UFOP), Ouro Preto, MG (Brazil). Escola de Minas. Departamento de Engenharia Metalurgica e de Materiais; Christ, Hans-Juergen, E-mail: christ@ifwt.mb.uni-siegen.de [University of Siegen (Germany)

    2017-01-15

    High-strength steels are commonly used in the automobile industry in order to reduce the weight of vehicles. However, a technical difficulty arises from the need for hot stamping of the components, which leads to oxidation. Therefore, a coating is applied to the substrate to avoid high-temperature oxidation. In this work, experimental analysis and computational thermodynamic calculations were used to describe the phase transformations within an Al-Si coating on a quenchable high-strength steel. The Al-Si coating was deposited by hot dipping and characterized using SEM and XRD techniques. Thermodynamic calculations were performed with the commercial software FactSage, using the Calphad methodology. Good agreement was demonstrated between the experimental results and the computed phase stabilities, both for the as-deposited condition and after a diffusion experiment at 920 deg C for 7 minutes, which simulates the thermal cycle of hot stamping of the quenchable steel used. (author)

  4. Computation of Propagation Speed and Reflection of Axially Symmetric Waves in Composite Cylinders, with Application to Impedance Tube and Calibrator.

    Science.gov (United States)

    1982-08-25

    Only fragments of this record are legible: the report computes the propagation speeds (both real and complex) of axially symmetric waves in composite cylinders and the propagation speed in the acoustic calibrator. [The remainder of the record is unrecoverable OCR residue from a FORTRAN subroutine listing (SUBROUTINE DET, December 1980) and report documentation boilerplate.]

  5. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    Science.gov (United States)

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  6. Computerized 50 liter volume calibration system

    International Nuclear Information System (INIS)

    Proffitt, T.H.

    1990-01-01

    A system has been designed for the Savannah River Site that will be used to calibrate product shipping containers. For accountability purposes, it is necessary that these containers be calibrated to a very high precision. The Computerized 50 Liter Volume Calibration System (CVCS), which is based on the Ideal Gas Law (IGL), will use reference volumes with a precision of no less than ±0.03%, and helium, to calibrate the containers to a total error of no greater than ±0.10%. A statistical interpretation of the system has given a theoretical total calculated error of ±0.08%. Tests with the system will be performed once fabrication is complete to experimentally verify the calculated error. Since the total error was calculated using the worst-case scenario, the actual error should be significantly less than the calculated value. The computer-controlled, totally automated system is traceable to the National Institute of Standards and Technology. The design, calibration procedure, and statistical interpretation of the system are discussed. 1 ref
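
    A minimal sketch of the gas-expansion principle an IGL-based system of this kind relies on, with illustrative numbers (not the CVCS specifications): a reference volume at known pressure is expanded into the evacuated container, and the unknown volume follows from the pressure ratio.

```python
import math

# Isothermal ideal-gas expansion: p1 * V_ref = p2 * (V_ref + V_x),
# so the unknown container volume is V_x = V_ref * (p1 - p2) / p2.
V_ref = 10.000   # liters (reference volume, known to ~0.03%)
p1 = 200.00      # kPa, reference-volume pressure before expansion
p2 = 33.40       # kPa, common pressure after expansion

V_x = V_ref * (p1 - p2) / p2
print(f"Unknown volume = {V_x:.3f} L")   # ~49.88 L for these numbers

# First-order propagation of assumed 1-sigma relative errors
u_Vref, u_p1, u_p2 = 0.0003, 0.0002, 0.0002
u_Vx = math.sqrt((u_Vref * V_x) ** 2
                 + (u_p1 * p1 * V_ref / p2) ** 2
                 + (u_p2 * p2 * V_ref * p1 / p2 ** 2) ** 2)
print(f"Relative uncertainty ~ {u_Vx / V_x:.2%}")
```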

  7. Validation of functional calibration and strap-down joint drift correction for computing 3D joint angles of knee, hip, and trunk in alpine skiing

    OpenAIRE

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    To obtain valid 3D joint angles with inertial sensors, careful sensor-to-segment calibration (i.e. functional or anatomical calibration) is required, and the angular velocity measured at each sensor needs to be integrated to obtain segment and joint orientations (i.e. joint angles). Existing functional and anatomical calibration procedures were optimized for gait analysis, and the calibration movements were impractical to perform in outdoor settings. Thus, the aims of this study were 1) to propose and va...

  8. Experimental realization of nondestructive discrimination of Bell states using a five-qubit quantum computer

    Science.gov (United States)

    Sisodia, Mitali; Shukla, Abhishek; Pathak, Anirban

    2017-12-01

    A scheme for distributed quantum measurement that allows nondestructive or indirect Bell measurement was proposed by Gupta et al. [1]. In the present work, Gupta et al.'s scheme is experimentally realized using the five-qubit superconductivity-based quantum computer recently placed on the cloud by IBM Corporation. The experiment confirmed that the Bell state can be constructed and measured in a nondestructive manner with reasonably high fidelity. A comparison of the outcomes of this study with the results obtained earlier in an NMR-based experiment (Samal et al. (2010) [10]) has also been performed. The study indicates that to make a scalable SQUID-based quantum computer, errors introduced by the gates (in the present technology) have to be reduced considerably.

  9. Investigation on human serum albumin and Gum Tragacanth interactions using experimental and computational methods.

    Science.gov (United States)

    Moradi, Sajad; Taran, Mojtaba; Shahlaei, Mohsen

    2018-02-01

    A study of the interaction between human serum albumin and Gum Tragacanth, a biodegradable biopolymer, has been undertaken. For this purpose, several experimental and computational methods were used. Thermodynamic parameters and the mode of interaction were investigated using fluorescence spectroscopy at 300 and 310 K. Fourier transform infrared spectroscopy and synchronous fluorescence spectroscopy were also performed. To give detailed insight into the possible interactions, docking and molecular dynamics simulations were also applied. The results show that the interaction is based on hydrogen bonding and van der Waals forces. Structural analysis indicates no adverse change in protein conformation during binding of GT. Furthermore, the computational methods provide some evidence of secondary structure enhancement of the protein in the presence of Gum Tragacanth. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  11. The enthalpies of formation of alkyl carbamates: Experimental and computational redetermination

    International Nuclear Information System (INIS)

    Santos, Ana Filipa L.O.M.; Ribeiro da Silva, Manuel A.V.

    2013-01-01

    Highlights: ► Combustion calorimetry was used to redetermine ΔfH°m(cr) of methyl and ethyl carbamates. ► ΔgcrH°m of methyl and ethyl carbamates were redetermined by Calvet microcalorimetry. ► Gas-phase ΔfH°m of the studied compounds were derived from the experimental values of ΔfH°m(cr) and ΔgcrH°m. ► Gas-phase ΔfH°m of the studied compounds were calculated by computational thermochemistry. ► The obtained ΔfH°m(g) of the alkyl carbamates are discussed against literature values for the same compounds. - Abstract: In the present work, a redetermination of thermochemical data for methyl carbamate and ethyl carbamate was performed by both experimental and computational techniques. Their gas-phase standard (p° = 0.1 MPa) molar enthalpies of formation, ΔfH°m(g), at T = 298.15 K, were derived from the standard molar enthalpies of formation in the crystalline phase, ΔfH°m(cr), and from the standard molar enthalpies of sublimation, ΔgcrH°m, at T = 298.15 K, measured, respectively, by static-bomb combustion calorimetry and high-temperature Calvet microcalorimetry. The experimental results were compared with computational data, calculated at the G3(MP2)//B3LYP level, as well as with values reported in the literature. At the B3LYP/6-31G(d) level of theory, the molecular structures of both carbamates were obtained.
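
    The thermochemical cycle underlying this record (and the related ones above) is simply the sum of the crystalline-phase enthalpy of formation and the enthalpy of sublimation, all at T = 298.15 K:

```latex
\[
  \Delta_{\mathrm{f}} H^{\circ}_{\mathrm{m}}(\mathrm{g})
  \;=\;
  \Delta_{\mathrm{f}} H^{\circ}_{\mathrm{m}}(\mathrm{cr})
  \;+\;
  \Delta^{\mathrm{g}}_{\mathrm{cr}} H^{\circ}_{\mathrm{m}}
\]
```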

  12. Practical experimental certification of computational quantum gates using a twirling procedure.

    Science.gov (United States)

    Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond

    2012-08-17

    Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments, previously used to efficiently characterize the average fidelity of quantum memories, can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner with applicability to current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single-crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity, discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high-fidelity control.

  13. Parallel computing in experimental mechanics and optical measurement: A review (II)

    Science.gov (United States)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in the pursuit of higher image resolutions for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques, owing to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  14. Read-only-memory-based quantum computation: Experimental explorations using nuclear magnetic resonance and future prospects

    International Nuclear Information System (INIS)

    Sypher, D.R.; Brereton, I.M.; Wiseman, H.M.; Hollis, B.L.; Travaglione, B.C.

    2002-01-01

    Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less 'magical', and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature 'real-world' problem relating to the lengths of paths in a network

  15. Experimental and computational thermochemical study of the tri-, tetra-, and pentachloronitrobenzene isomers

    International Nuclear Information System (INIS)

    Ribeiro da Silva, Manuel A.V.; Lobo Ferreira, Ana I.M.C.; Cabral, Joana I.T.A.; Santos, Ana Filipa L.O.M.; Moreno, Ana Rita G.; Galvao, Tiago L.P.; Rocha, Ines M.; Fernandes, Paula M.V.; Salgueiro, Silvia Q.; Moura, Vanessa A.F.de; Oliveira, Isabel M.S.C.; Cotelo, Paula C.; Ribeiro, Mariana R.A.

    2009-01-01

    The standard (p° = 0.1 MPa) molar enthalpies of formation of 2,4,6-trichloronitrobenzene, 2,3,5,6-tetrachloronitrobenzene, and pentachloronitrobenzene, in the crystalline state, at T = 298.15 K, were derived from the standard massic energies of combustion, in oxygen, at T = 298.15 K, measured by rotating-bomb combustion calorimetry. The standard molar enthalpies of sublimation, at T = 298.15 K, of 2,4,6-trichloronitrobenzene and pentachloronitrobenzene were determined from the temperature dependence of the vapour pressures, measured by the Knudsen mass-loss effusion method, whereas for 2,3,5,6-tetrachloronitrobenzene the Calvet drop microcalorimetric technique was used. The experimental values are also compared with estimates based on G3(MP2)//B3LYP computations, which have also been extended to all the trichloro- and tetrachloronitrobenzene isomers that were not studied experimentally.

  16. A computer program for processing experimental Compton profile of solids and liquids

    International Nuclear Information System (INIS)

    Das, G.P.

    1984-01-01

    A computer program, COMPRO, has been developed for processing experimental Compton profile data of solids and liquids generated by inelastic gamma-ray scattering using a solid-state detector and a multichannel analyser. It also calculates the Fourier transform of the profile, yielding the one-electron autocorrelation function in position space. The theory behind the method of calculation is outlined, and the various data-processing steps that need to be applied to the raw experimental data are discussed in detail. A flow chart of the program is given, and the various subroutines, the method of feeding the input data and the method of presenting the final result are briefly described. The procedure is illustrated by measurements on a polycrystalline sample of manganese. The actual listing of the program, along with the test-run input and output data, is also given. (M.G.B.)

  17. Educational Game Design. Bridging the gap between computer-based learning and experimental learning environments

    DEFF Research Database (Denmark)

    Andersen, Kristine

    2007-01-01

    Considering the rapidly growing amount of digital educational materials, only a few of them bridge the gap between experimental learning environments and computer-based learning environments (Gardner, 1991). Observations from two cases in primary school and lower secondary school in the subject...... with a prototype of a MOO storyline. The aim of the MOO storyline is to challenge the potential of dialogue, user involvement, and learning responsibility, and to use the children's natural curiosity and motivation for game playing, especially when digital games involve other children. The paper proposes a model......, based on the narrative approach for experimental learning subjects, relying on ideas from Csikszentmihalyi's notion of flow (Csikszentmihalyi, 1991), storyline-pedagogy (Meldgaard, 1994) and ideas from Howard Gardner (Gardner, 1991). The model forms the basis for educational games to be used in home...

  18. Experimental and computational fluid dynamics studies of mixing of complex oral health products

    Science.gov (United States)

    Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team

    2017-11-01

    Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
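
    The Carreau model mentioned above describes shear-thinning viscosity with four parameters: the zero-shear viscosity η₀, the infinite-shear viscosity η∞, a relaxation time λ and the power-law index n:

```latex
\[
  \eta(\dot\gamma) \;=\; \eta_{\infty} + \left(\eta_{0} - \eta_{\infty}\right)
  \left[ 1 + (\lambda \dot\gamma)^{2} \right]^{\frac{n-1}{2}}
\]
```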

  19. Application of data base management systems for developing experimental data base using ES computers

    International Nuclear Information System (INIS)

    Vasil'ev, V.I.; Karpov, V.V.; Mikhajlyuk, D.N.; Ostroumov, Yu.A.; Rumyantsev, A.N.

    1987-01-01

    Modern data base management systems (DBMS) are widely used for the development and operation of various data bases in data processing systems for economics, planning and management. However, to date, the development and operation of data sets containing experimental physics data on ES computers has been based mainly on the traditional technology of sequential or indexed-sequential files. The principal statements on the applicability of DBMS technology for compiling and operating data bases of physical experiment data are formulated based on an analysis of DBMS capabilities. It is shown that the application of DBMS substantially reduces the overall computational resource costs of developing and operating data bases and decreases the volume of stored experimental data when analyzing the information content of the data.

  20. Experimental and Computational Analysis of Water-Droplet Formation and Ejection Process Using Hollow Microneedle

    Science.gov (United States)

    Kato, Norihisa; Oka, Ryotaro; Sakai, Takahiro; Shibata, Takayuki; Kawashima, Takahiro; Nagai, Moeto; Mineta, Takashi; Makino, Eiji

    2011-06-01

    In this paper, we present the possibility of liquid delivery using fabricated hollow silicon dioxide microneedles of approximately 2 µm in diameter. As a fundamental study, the water-droplet formation and ejection process was examined via dynamic observations during water ejection tests and computational fluid dynamics (CFD) analysis. The experimental results indicated that fluid flow in a microneedle follows the Hagen-Poiseuille law, i.e., the flow rate is approximately directly proportional to the fourth power of the inner diameter. Moreover, the ejection pressure and maximum droplet curvature obtained using the proposed microfluid ejection model were in good agreement with the experimental results. The resulting ejection pressure is equal to the theoretical pressure difference of a spherical droplet, which is determined using the Young-Laplace equation. The maximum curvature of a droplet formed at the tip of a microneedle can be estimated on the basis of the contact angle theory expressed by the Young equation.
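
    A back-of-the-envelope sketch of the two relations the abstract invokes, the Hagen-Poiseuille law for channel flow and the Young-Laplace pressure across a droplet surface; all numerical values are illustrative assumptions, not the paper's measurements.

```python
import math

# Hypothetical hollow microneedle (~2 um inner diameter) ejecting water
r = 1.0e-6        # inner radius, m
L = 100e-6        # channel length, m
mu = 1.0e-3       # dynamic viscosity of water, Pa*s
dp = 50e3         # applied pressure difference, Pa
gamma = 0.072     # surface tension of water, N/m

# Hagen-Poiseuille: flow rate scales with the 4th power of the radius
Q = math.pi * r**4 * dp / (8 * mu * L)
print(f"Flow rate ~ {Q:.3e} m^3/s")

# Young-Laplace: pressure jump across a spherical droplet of radius R
R_droplet = 2.0e-6
dp_laplace = 2 * gamma / R_droplet
print(f"Laplace pressure ~ {dp_laplace / 1e3:.1f} kPa")   # ~72 kPa here
```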

  1. FLICA III. A digital computer program for thermal-hydraulic analysis of reactors and experimental loops

    International Nuclear Information System (INIS)

    Plas, Roger.

    1975-05-01

    This computer program describes the flow and heat transfer in steady and transient state in two-phase flows. It is the present stage of the evolution about FLICA, FLICA II and FLICA II B codes which have been used and developed at CEA for the thermal-hydraulic analysis of reactors and experimental loops with heating rod bundles. In the mathematical model all the significant terms of the fundamental hydrodynamic equations are taken into account with the approximations of turbulent viscosity and conductivity. The two-phase flow is calculated by the homogeneous model with slip. In the flow direction an implicit resolution scheme is available, which make possible to study partial or total flow blockage, with upstream and downstream effects. A special model represents the helical wire effects in out-of pile experimental rod bundles [fr

  2. Reflectivity of 1D photonic crystals: A comparison of computational schemes with experimental results

    Science.gov (United States)

    Pérez-Huerta, J. S.; Ariza-Flores, D.; Castro-García, R.; Mochán, W. L.; Ortiz, G. P.; Agarwal, V.

    2018-04-01

    We report the reflectivity of one-dimensional finite and semi-infinite photonic crystals, computed through coupling to Bloch modes (BM) and through a transfer matrix method (TMM), and their comparison to the experimental spectral line shapes of porous silicon (PS) multilayer structures. Both methods reproduce a forbidden photonic bandgap (PBG), but slowly converging oscillations are observed in the TMM as the number of layers increases to infinity, while smoothly converged behavior is obtained with BM. The experimental reflectivity spectra are in good agreement with the TMM results for multilayer structures with a small number of periods. However, for structures with a large number of periods, the measured spectral line shapes exhibit better agreement with the smooth behavior predicted by BM.
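
    A minimal sketch of the TMM side of such a comparison: normal-incidence reflectivity of a quarter-wave stack via 2x2 characteristic matrices. The refractive indices, design wavelength and period count are assumptions, not the porous-silicon values from the study.

```python
import numpy as np

def tmm_reflectivity(wl, layers, n_in=1.0, n_out=1.0):
    """layers: list of (refractive index, thickness) pairs, normal incidence."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        phi = 2 * np.pi * n * d / wl
        # characteristic matrix of one homogeneous layer
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    B = M[0, 0] + M[0, 1] * n_out
    C = M[1, 0] + M[1, 1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

wl0 = 800e-9                 # design wavelength (assumed)
n1, n2 = 1.4, 2.0            # two assumed porosities/indices
stack = [(n1, wl0 / (4 * n1)), (n2, wl0 / (4 * n2))] * 10   # 10 periods
for wl in (700e-9, 800e-9, 900e-9):
    print(f"R({wl * 1e9:.0f} nm) = {tmm_reflectivity(wl, stack):.3f}")
```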

  3. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    Science.gov (United States)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the

  4. Analysis of eventual accidents in a water experimental loop, using the Relap 4 computer code

    International Nuclear Information System (INIS)

    Fernandes Filho, T.L.

    1981-01-01

    Transients caused by accidents such as (1) loss of coolant, (2) failure of the main pump and (3) power excursions were analysed. The Relap 4/Mod 3 computer code was used in the accident simulations. The results obtained with the steady-state model proved consistent with the design and operation data of the experimental loop. For all analysed accidents in which the performance of safety systems was considered, the highest temperature of the heating rods in the test section did not exceed the permissible temperature. (E.G.) [pt

  5. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    Science.gov (United States)

    2017-03-02

    AFRL-AFOSR-UK-TR-2017-0020, final report: 'Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data'; grant FA9550-16-1-0004, program element 61102F; author: Philip Walther, Universität Wien; sponsoring/monitoring agency: EOARD, Unit 4515, APO AE 09421-4515. [Only report documentation page fields are legible in this record; no abstract is recoverable.]

  6. EXPERIMENTAL AND THEORETICAL FOUNDATIONS AND PRACTICAL IMPLEMENTATION OF TECHNOLOGY BRAIN-COMPUTER INTERFACE

    Directory of Open Access Journals (Sweden)

    A. Ya. Kaplan

    2013-01-01

    Full Text Available Brain-computer interface (BCI) technology allows a person to learn to control external devices via voluntary regulation of their own EEG, directly from the brain and without the involvement of nerves and muscles. Initially, the main goal of BCI was to replace or restore motor function for people disabled by neuromuscular disorders. Currently, the scope of BCI design has increased significantly, capturing further aspects of the life of healthy people as well. This article discusses the theoretical, experimental and technological bases of BCI development and systematizes the critical fields of real implementation of these technologies.

  7. Sensitivity Measurement of Transmission Computer Tomography: thePreliminary Experimental Study

    International Nuclear Information System (INIS)

    Widodo, Chomsin-S; Sudjatmoko; Kusminarto; Agung-BS Utomo; Suparta, Gede B

    2000-01-01

    This paper reports the results of a preliminary experimental study on a measurement method for the sensitivity of a computed tomography (CT) scanner. A CT scanner has been built at the Department of Physics, FMIPA UGM, and its performance was measured in terms of its sensitivity. The results showed that the measurement method for sensitivity may be developed further as a measurement standard. Although the CT scanner developed has a number of shortcomings, the analytical results from the sensitivity measurement suggest a number of repairs and improvements for the system so that improved reconstructed CT images can be obtained. (author)

  8. Experimental realization of quantum cheque using a five-qubit quantum computer

    Science.gov (United States)

    Behera, Bikash K.; Banerjee, Anindita; Panigrahi, Prasanta K.

    2017-12-01

    Quantum cheques could be a forgery-free way to make transactions in a quantum networked banking system with perfect security against any no-signalling adversary. Here, we demonstrate the implementation of a quantum cheque, proposed by Moulick and Panigrahi (Quantum Inf Process 15:2475-2486, 2016), using the five-qubit IBM quantum computer. Appropriate single-qubit, CNOT and Fredkin gates are used in an optimized configuration. The accuracy of the implementation is checked and verified through quantum state tomography by comparing results from the theoretical and experimental density matrices.
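
    The standard figure of merit for such a tomography-based comparison is the fidelity between the theoretical state ρ and the experimentally reconstructed density matrix σ:

```latex
\[
  F(\rho, \sigma) \;=\; \left( \operatorname{Tr}
  \sqrt{\sqrt{\rho}\, \sigma \sqrt{\rho}} \right)^{2}
\]
```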

  9. Adsorptive desulfurization with CPO-27/MOF-74: an experimental and computational investigation.

    Science.gov (United States)

    Van de Voorde, Ben; Hezinová, Markéta; Lannoeye, Jeroen; Vandekerkhove, Annelies; Marszalek, Bartosz; Gil, Barbara; Beurroies, Isabelle; Nachtigall, Petr; De Vos, Dirk

    2015-04-28

    By combining experimental adsorption isotherms, microcalorimetric data, infrared spectroscopy and quantum chemical calculations the adsorption behaviour of the CPO-27/MOF-74 series (Ni, Co, Mg, Cu, and Zn) in the desulfurization of fuels is evaluated. The results show a clear influence of the metal ion on the adsorption capacity and affinity for S-heterocyclic compounds, with CPO-27(Ni) being the best performing material both in terms of capacity and affinity. The microcalorimetric data and infrared spectroscopy confirm the high affinity of CPO-27(Ni) for thiophene and similar compounds, while the computational data reveal that the origin of this outstanding adsorption performance is the strong sulfur-metal interaction.

  10. Computational and experimental investigation of plasma deflagration jets and detonation shocks in coaxial plasma accelerators

    Science.gov (United States)

    Subramaniam, Vivek; Underwood, Thomas C.; Raja, Laxminarayan L.; Cappelli, Mark A.

    2018-02-01

    the magnetic pinch. The pinch width and number density distribution are compared to experimentally obtained data to calibrate the inlet boundary conditions used to set up the plasma acceleration problem.

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  12. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    Science.gov (United States)

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem-a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.

  13. Computational and experimental study of the effect of mass transfer on liquid jet break-up

    Science.gov (United States)

    Schetz, J. A.; Situ, M.

    1983-06-01

    A computational method has been developed to predict the effect of mass transfer on liquid jet break-up in coaxial, low-velocity gas streams. Two conditions, with and without the effect of mass transfer on the jet break-up, are calculated and compared with experimental results and the classical linear theory. Methanol and water were used as the injectants. The numerical solution can predict the instantaneous shape of the jet surface and the break-up time, and it is very close to the experimental results. The numerical solutions and the experimental results both indicate that the wave number of maximum instability is about 6.9, higher than the 4.51 predicted by Rayleigh's linear theory. The experimental results and numerical solution show that the growth of the amplitude of the trough is faster than the growth of the amplitude of the crest, especially for a rapidly vaporizing jet. The numerical solutions show that for small rates of evaporation, the effect of mass transfer on the interface has a stabilizing effect near the wave number of maximum instability. Conversely, it has a destabilizing effect far from the wave number of maximum instability. For rapid evaporation, the effect of mass transfer always has a destabilizing effect and decreases the break-up time of the jet.
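
    For reference, Rayleigh's linear result for an inviscid jet of diameter d is that the fastest-growing axisymmetric disturbance has wavelength

```latex
\[
  \lambda_{\max} \;\approx\; 4.51\, d ,
  \qquad\text{i.e.}\qquad
  k a \;=\; \frac{2\pi a}{\lambda_{\max}} \;\approx\; 0.697
  \quad (a = d/2),
\]
```

    which is the benchmark value of 4.51 cited in the abstract.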

  14. Set-up and calibration of an outdoor nozzle-type rainfall simulator for soil erosion studies at the Masse experimental station (central Italy)

    Science.gov (United States)

    Vergni, Lorenzo; Todisco, Francesca

    2016-04-01

    This contribution describes the technical characteristics and the preliminary calibration of a rainfall simulator recently installed by the Department of Agricultural, Food and Environmental Sciences (Perugia University) at the Masse experimental station, located 20 km south of Perugia in the region of Umbria (central Italy). The site includes USLE plots of different lengths (λ = 11 and 22 m) and widths (w = 2, 4 and 8 m), oriented parallel to a 16% slope and kept free of vegetation by frequent ploughing. Since 2008, the station has enabled the collection of data from more than 80 erosive events, mainly used to investigate the relationship between rainfall characteristics and soil loss. The considerable soil-loss variability that characterizes erosive storm events with similar overall characteristics (duration and/or depth) can be explained by the different rainfall profiles of erosive storms and by differences in antecedent soil aggregate stability. To analyse these aspects in more detail, the Masse experimental station has recently been equipped with a semi-portable rainfall simulator placed over two micro-plots of 1x1 m each, having the same topographic and pedologic conditions as the adjacent USLE plots. The rainfall simulator consists of four full-cone spray nozzles for each micro-plot, placed at the angles of a 0.18-m square, centred over the plot at a height of 2.7 m above the ground. The operating pressure is regulated by pressure-regulating valves and checked by pressure gauges mounted at each nozzle. An electronic control unit regulates the start and stop of the inlet solenoid valves. A range of rainfall intensities can be achieved by activating different combinations of nozzles (15 different intensities), including during the same simulation trial. The particular design of the plots allows the runoff volume from the plots and the water volume falling outside the plots to be collected separately. In this way it is possible to derive, by

  15. Assessment of coronary calcification using calibrated mass score with two different multidetector computed tomography scanners in the Copenhagen General Population Study

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Andreas [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Groen, Jaap M. [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Department of Medical Physics, OLVG, Amsterdam (Netherlands); Arnold, Ben A. [Image Analysis, 1380 Burkesville Road, Columbia, KY (United States); Nikolovski, Sasho [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Knudsen, Andreas D., E-mail: dehlbaek@gmail.com [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Kühl, J. Tobias [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Nordestgaard, Børge G. [Department of Clinical Biochemistry and the Copenhagen General Population Study, Herlev Hospital, University of Copenhagen (Denmark); Greuter, Marcel J.W. [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Kofoed, Klaus F. [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Department of Radiology, The Diagnostic Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark)

    2017-03-15

    Objective: Population studies have shown the coronary calcium score to improve risk stratification in subjects suspected of cardiovascular disease. The aim of this work was to assess the validity of multidetector computed tomography (MDCT) for measurement of calibrated mass scores (MS) in a phantom study, and to investigate inter-scanner variability for MS and Agatston score (AS) recorded in a population study on two different high-end MDCT scanners. Materials and methods: A calcium phantom was scanned by a first- (A) and second- (B) generation 320-MDCT. MS was measured for each calcium deposit from repeated measurements in each scanner and compared to the known physical phantom mass. Random samples of human subjects from the Copenhagen General Population Study were scanned with scanner A (N = 254) and scanner B (N = 253), and the MS and AS distributions of these two groups were compared. Results: The mean total MS of the phantom was 32.9 ± 0.8 mg and 33.1 ± 0.9 mg (p = 0.43) as assessed by scanners A and B, respectively – the physical calcium mass was 34.0 mg. The correlation between measured MS and physical calcium mass was R² = 0.99 in both scanners. In the population study the median total MS was 16.8 mg (interquartile range (IQR): 3.5–81.1) and 15.8 mg (IQR: 3.8–63.4) in scanners A and B (p = 0.88). The corresponding median total AS were 92 (IQR: 23–471) and 89 (IQR: 40–384) (p = 0.64). Conclusion: Calibrated calcium mass score may be assessed with very high accuracy in a calcium phantom by different generations of 320-MDCT scanners. In population studies, it appears acceptable to pool calcium scores acquired on different 320-MDCT scanners.

  16. Phantom-Calibrated versus Automatic Coronary Artery Mass Quantification with Multidetector-Row Computed Tomography: In Vitro and In Vivo Study

    Energy Technology Data Exchange (ETDEWEB)

    Serafin, Z.; Lasek, W.; Laskowska, K. (Dept. of Radiology and Diagnostic Imaging, Nicolaus Copernicus Univ., Collegium Medicum, Bydgoszcz (Poland))

    2008-11-15

    Background: Coronary artery calcium scoring is used as a method for cardiovascular risk stratification and monitoring of coronary heart disease. Automatic software-based calcium mass calculation has been proposed to improve the performance of the procedure. Purpose: To compare two algorithms of calcium mass measurement, automatic and phantom-calibrated, with respect to correlation, measurement error, and accuracy in vitro and in vivo. Material and Methods: A cardiac phantom with calcium cylinder inserts was scanned with sequential non-overlapping collimation 4x2.5 mm, at 120 kV and 165 mAs. Fifty adults (37 men; mean age 46.2 years) were examined with the same settings using prospective electrocardiographic triggering to detect and quantify coronary artery calcifications. Calculations were performed with two methods: software-based automatic calcium mass measurement (ACM) and phantom-calibrated calcium mass measurement (CCM). Results: The total phantom calcium masses measured with ACM and CCM were 175.0±13.8 mg and 163.0±4.4 mg, respectively (P<0.0001), and ACM produced a higher mean error (4.5 vs. 3.2; P<0.05). Results of ACM and CCM were strongly correlated with each other (R=0.73-0.96; P<0.0001). Mean image noise in the patient study was 8.72±1.68 HU. Results of patient calcium scoring with ACM and CCM were significantly different (median 70.3 mg and 59.7 mg, respectively; P<0.0001), with a mean systematic error of 17.5% (limits of agreement between 14.6% and 20.4%). The use of ACM resulted in an altered quartile classification for 14% of patients, as compared to CCM; all of these patients were classified into a higher category. Conclusion: Our data indicate that multidetector-row computed tomography coronary calcium mass determination based on dedicated phantom calibration shows a lower measurement error than an automatic software-based calculation method. The tested automatic software does not yet seem to be a reliable option for calcium mass measurement.

  17. Phantom-Calibrated versus Automatic Coronary Artery Mass Quantification with Multidetector-Row Computed Tomography: In Vitro and In Vivo Study

    International Nuclear Information System (INIS)

    Serafin, Z.; Lasek, W.; Laskowska, K.

    2008-01-01

    Background: Coronary artery calcium scoring is used as a method for cardiovascular risk stratification and monitoring of coronary heart disease. Automatic software-based calcium mass calculation has been proposed to improve the performance of the procedure. Purpose: To compare two algorithms of calcium mass measurement, automatic and phantom-calibrated, with respect to correlation, measurement error, and accuracy in vitro and in vivo. Material and Methods: A cardiac phantom with calcium cylinder inserts was scanned with sequential non-overlapping collimation 4x2.5 mm, at 120 kV and 165 mAs. Fifty adults (37 men; mean age 46.2 years) were examined with the same settings using prospective electrocardiographic triggering to detect and quantify coronary artery calcifications. Calculations were performed with two methods: software-based automatic calcium mass measurement (ACM) and phantom-calibrated calcium mass measurement (CCM). Results: The total phantom calcium masses measured with ACM and CCM were 175.0±13.8 mg and 163.0±4.4 mg, respectively (P<0.0001), and ACM produced a higher mean error (4.5 vs. 3.2; P<0.05). Results of ACM and CCM were strongly correlated with each other (R=0.73-0.96; P<0.0001). Mean image noise in the patient study was 8.72±1.68 HU. Results of patient calcium scoring with ACM and CCM were significantly different (median 70.3 mg and 59.7 mg, respectively; P<0.0001), with a mean systematic error of 17.5% (limits of agreement between 14.6% and 20.4%). The use of ACM resulted in an altered quartile classification for 14% of patients, as compared to CCM; all of these patients were classified into a higher category. Conclusion: Our data indicate that multidetector-row computed tomography coronary calcium mass determination based on dedicated phantom calibration shows a lower measurement error than an automatic software-based calculation method. The tested automatic software does not yet seem to be a reliable option for calcium mass measurement.

  18. Calibration bench of flowmeters

    International Nuclear Information System (INIS)

    Bremond, J.; Da Costa, D.; Calvet, A.; Vieuxmaire, C.

    1966-01-01

    This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop; each signal is compared to that of a standard turbine. The characteristics and performance of the calibration bench are presented. (A.L.B.)

  19. SCHEMA computational design of virus capsid chimeras: calibrating how genome packaging, protection, and transduction correlate with calculated structural disruption.

    Science.gov (United States)

    Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae

    2013-12-20

    Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.
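
    The SCHEMA disruption score E counts structural contacts whose residue pair is not found together in any single parent. A minimal sketch of that count follows, with a toy contact map and hypothetical parent sequences (not the AAV2/AAV4 data).

```python
def schema_E(contacts, chimera, parents):
    """Count broken contacts: pairs (i, j) in contact whose amino-acid
    combination in the chimera appears in no single parent sequence."""
    broken = 0
    for i, j in contacts:
        pair = (chimera[i], chimera[j])
        if not any((p[i], p[j]) == pair for p in parents):
            broken += 1
    return broken

# Toy example: two 6-residue parents, crossover after position 2
parents = ["MKLVNA", "MRIVQA"]
chimera = "MK" + "IVQA"               # hypothetical chimera
contacts = [(0, 3), (1, 4), (2, 5)]   # residue pairs in 3D contact
print(schema_E(contacts, chimera, parents))  # -> 1 (contact (1, 4) is broken)
```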

  20. Integration of experimental and computational methods for identifying geometric, thermal and diffusive properties of biomaterials

    Science.gov (United States)

    Weres, Jerzy; Kujawa, Sebastian; Olek, Wiesław; Czajkowski, Łukasz

    2016-04-01

    Knowledge of the physical properties of biomaterials is important in understanding and designing agri-food and wood processing industries. In the study presented in this paper, computational methods were developed and combined with experiments to enhance identification of agri-food and forest product properties, and to predict heat and water transport in such products. They were based on the finite element model of heat and water transport and supplemented with experimental data. Algorithms were proposed for image processing, geometry meshing, and inverse/direct finite element modelling. The resulting software system was composed of integrated subsystems for 3D geometry data acquisition and mesh generation, for 3D geometry modelling and visualization, and for inverse/direct problem computations for the heat and water transport processes. Auxiliary packages were developed to assess performance, accuracy and unification of data access. The software was validated by identifying selected properties and using the estimated values to predict the examined processes, and then comparing predictions to experimental data. The geometry, thermal conductivity, specific heat, coefficient of water diffusion, equilibrium water content and convective heat and water transfer coefficients in the boundary layer were analysed. The estimated values, used as an input for simulation of the examined processes, enabled reduction in the uncertainty associated with predictions.

  1. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations

    Directory of Open Access Journals (Sweden)

    Andrea Stocco

    2018-04-01

    Full Text Available This article describes the data analyzed in the paper “Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model” (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  2. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    Science.gov (United States)

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  3. Toward efficient computation of the expected relative entropy for nonlinear experimental design

    International Nuclear Information System (INIS)

    Coles, Darrell; Prange, Michael

    2012-01-01

    The expected relative entropy between prior and posterior model-parameter distributions is a Bayesian objective function in experimental design theory that quantifies the expected gain in information of an experiment relative to a previous state of knowledge. The expected relative entropy is a preferred measure of experimental quality because it can handle nonlinear data-model relationships, an important fact due to the ubiquity of nonlinearity in science and engineering and its effects on post-inversion parameter uncertainty. This objective function does not necessarily yield experiments that mediate well-determined systems, but, being a Bayesian quality measure, it rigorously accounts for prior information which constrains model parameters that may be only weakly constrained by the optimized dataset. Historically, use of the expected relative entropy has been limited by the computing and storage requirements associated with high-dimensional numerical integration. Herein, a bifocal algorithm is developed that makes these computations more efficient. The algorithm is demonstrated on a medium-sized problem of sampling relaxation phenomena and on a large problem of source–receiver selection for a 2D vertical seismic profile. The method is memory intensive but workarounds are discussed. (paper)
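
    The high-dimensional integration that historically limited this objective function is commonly approximated by nested Monte Carlo. The sketch below estimates the expected relative entropy (expected information gain) for a toy one-parameter exponential-decay experiment; the prior, noise model, and candidate designs are invented for illustration and bear no relation to the paper's bifocal algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expected_information_gain(design, n_outer=2000, n_inner=2000, noise_sd=0.1):
        """Nested Monte Carlo estimate of E_y[KL(posterior || prior)] for a toy
        model y = exp(-theta*design) + noise, with prior theta ~ Normal(1, 0.5)."""
        theta_out = rng.normal(1.0, 0.5, n_outer)     # prior draws
        theta_in = rng.normal(1.0, 0.5, n_inner)      # reused for the evidence
        y = np.exp(-theta_out * design) + rng.normal(0.0, noise_sd, n_outer)
        log_lik = -0.5 * ((y - np.exp(-theta_out * design)) / noise_sd) ** 2
        # log p(y) = log mean_m p(y | theta_m); the Gaussian normalization
        # constants cancel in the difference log p(y|theta) - log p(y).
        resid = (y[:, None] - np.exp(-theta_in * design)[None, :]) / noise_sd
        log_ev = np.logaddexp.reduce(-0.5 * resid**2, axis=1) - np.log(n_inner)
        return np.mean(log_lik - log_ev)

    for d in (0.5, 1.0, 2.0):                         # candidate designs
        print(d, expected_information_gain(d))
    ```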

  4. Comparison of Experimental Surface and Flow Field Measurements to Computational Results of the Juncture Flow Model

    Science.gov (United States)

    Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.

    2016-01-01

    Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex that suppresses the side-of-body separation in the juncture flow region. Compared to experimental results, CFD predicts a thinner boundary layer on the fuselage, which generates a weaker wing horseshoe vortex and results in a larger side-of-body separation.

  5. Experimental and computational analysis of pressure response in a multiphase flow loop

    Science.gov (United States)

    Morshed, Munzarin; Amin, Al; Rahman, Mohammad Azizur; Imtiaz, Syed

    2016-07-01

    The characteristics of multiphase fluid flow in pipes are useful for understanding the fluid mechanics encountered in the oil and gas industries. Present-day oil and gas exploration is increasingly moving to subsea operations in deep-sea and arctic conditions. During the transport of petroleum products, understanding the fluid dynamics inside the pipe network is important for flow assurance; the static and dynamic pressure response, pressure loss, optimum flow rate, pipe diameter, etc., are the key parameters. The principal aim of this research is to present computational and experimental analyses of multiphase (liquid/gas) flow in a pipe network. The computational study considers two-phase fluid flow through a horizontal flow loop at different Reynolds numbers in order to determine the pressure distribution and frictional pressure loss profiles using the volume of fluid (VOF) method. The numerical simulations are validated against the experimental data. The experiment is conducted in a 76.20 mm ID transparent circular pipe flow loop using water and air. Static pressure transducers are used to measure the local pressure response in the multiphase pipeline.
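
    For orientation on the frictional losses such a loop measures, a back-of-envelope estimate can be made with the homogeneous two-phase model, which treats the air-water mixture as a single fluid. The flow rates and the use of the Blasius friction-factor correlation below are illustrative assumptions only; the study itself relies on VOF simulations and pressure-transducer data.

    ```python
    import numpy as np

    # Homogeneous-model frictional pressure gradient for air-water flow in a
    # 76.20 mm ID pipe. All operating numbers are assumed, not from the paper.
    D = 0.0762                        # pipe inner diameter, m
    rho_l, rho_g = 998.0, 1.2         # water / air densities, kg/m^3
    mu_l = 1.0e-3                     # liquid viscosity used for mixture, Pa*s
    A = np.pi * D**2 / 4

    m_l, m_g = 1.5, 0.005             # assumed mass flow rates, kg/s
    x = m_g / (m_l + m_g)             # flow quality
    rho_h = 1.0 / (x / rho_g + (1 - x) / rho_l)   # homogeneous mixture density
    G = (m_l + m_g) / A               # mass flux, kg/(m^2*s)

    Re = G * D / mu_l
    f = 0.079 * Re**-0.25             # Blasius (Fanning) friction factor
    dpdz = 2 * f * G**2 / (D * rho_h) # frictional pressure gradient, Pa/m
    print(f"Re = {Re:.3e}, dp/dz = {dpdz:.1f} Pa/m")
    ```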

  6. Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field

    Science.gov (United States)

    Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.

    1993-01-01

    An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.

  7. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  8. Computational and Experimental Study of Thermodynamics of the Reaction of Titania and Water at High Temperatures.

    Science.gov (United States)

    Nguyen, Q N; Bauschlicher, C W; Myers, D L; Jacobson, N S; Opila, E J

    2017-12-14

    Gaseous titanium hydroxide and oxyhydroxide species were studied with quantum chemical methods. The results are used in conjunction with an experimental transpiration study of titanium dioxide (TiO2) in water vapor-containing environments at elevated temperatures to provide a thermodynamic description of the Ti(OH)4(g) and TiO(OH)2(g) species. The geometry and harmonic vibrational frequencies of these species were computed using the coupled-cluster singles and doubles method with a perturbative correction for connected triple substitutions [CCSD(T)]. For the OH bending and rotation, B3LYP density functional theory was used to compute corrections to the harmonic approximations. These results were combined to determine the enthalpy of formation. Experimentally, the transpiration method was used with water contents from 0 to 76 mol % in oxygen or argon carrier gases for 20-250 h exposure times at 1473-1673 K. Results indicate that oxygen is not a key contributor to volatilization, and the primary reaction for volatilization in this temperature range is TiO2(s) + H2O(g) = TiO(OH)2(g). Data were analyzed with both the second and third law methods using the thermal functions derived from the theoretical calculations. The third law enthalpy of formation at 298.15 K for TiO(OH)2(g) was -838.9 ± 6.5 kJ/mol, which compares favorably to the theoretical calculation of -838.7 ± 25 kJ/mol. We recommend the experimentally derived third law enthalpy of formation for TiO(OH)2, the computed entropy of 320.67 J/(mol·K), and the computed heat capacity [149.192 - 0.02539·T + 8.28697 × 10^-6·T^2 - 15614.05/T - 5.2182 × 10^-11/T^2] J/(mol·K), where T is the temperature in K.
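
    The recommended heat-capacity polynomial is easy to evaluate across the experimental temperature range; the sketch below simply transcribes the coefficients from the abstract as a convenience check, not an endorsed thermochemistry routine.

    ```python
    def cp_TiOOH2(T):
        """Recommended heat capacity of TiO(OH)2(g) in J/(mol*K); T in kelvin."""
        return (149.192
                - 0.02539 * T
                + 8.28697e-6 * T**2
                - 15614.05 / T
                - 5.2182e-11 / T**2)

    # Evaluate at the transpiration-experiment temperatures.
    for T in (1473.0, 1573.0, 1673.0):
        print(T, round(cp_TiOOH2(T), 2))
    ```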

  9. Vortex-Concept for Radioactivity Release Prevention at NPP: Development of Computational Model of Lab-Scale Experimental Setup

    Energy Technology Data Exchange (ETDEWEB)

    Ullah, Sana; Sung, Yim Man; Park, Jin Soo; Sung, Hyung Jin [KAERI, Daejeon (Korea, Republic of)]

    2016-05-15

    The experimental validation of the vortex-like air curtain concept and the use of an appropriate CFD modelling approach for analyzing the problem are crucial. A lab-scale experimental setup was designed to validate the proposed concept and the CFD modeling approach as part of the validation process. In this study, a computational model of this lab-scale experimental setup was developed using the open source CFD code OpenFOAM. The computational results will be compared with experimental data for validation purposes in the future, when experimental data become available. 1) A computational model of a lab-scale experimental setup, designed to validate the concept of artificial vortex-like airflow generation for application to radioactivity dispersion prevention in the event of a severe accident, was developed. 2) A mesh sensitivity study was performed, and a mesh of about 2 million cells was found to be sufficient for this setup.

  10. A review of combined experimental and computational procedures for assessing biopolymer structure-process-property relationships.

    Science.gov (United States)

    Gronau, Greta; Krishnaji, Sreevidhya T; Kinahan, Michelle E; Giesa, Tristan; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2012-11-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials - elastin, silk, and collagen - and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. A review of combined experimental and computational procedures for assessing biopolymer structure–process–property relationships

    Science.gov (United States)

    Gronau, Greta; Krishnaji, Sreevidhya T.; Kinahan, Michelle E.; Giesa, Tristan; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2013-01-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials – elastin, silk, and collagen – and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. PMID:22938765

  12. Thermochemistry of 6-propyl-2-thiouracil: An experimental and computational study

    Energy Technology Data Exchange (ETDEWEB)

    Szterner, Piotr; Galvão, Tiago L.P.; Amaral, Luísa M.P.F.; Ribeiro da Silva, Maria D.M.C., E-mail: mdsilva@fc.up.pt; Ribeiro da Silva, Manuel A.V.

    2014-07-01

    Highlights: • Thermochemistry of 6-propyl-2-thiouracil – experimental and computational study. • Vapor pressure study of 6-propyl-2-thiouracil by the Knudsen effusion technique. • Enthalpies of formation of 6-propyl-2-thiouracil by rotating bomb combustion calorimetry. • Accurate computational calculations (G3 and G4 composite methods) were performed. - Abstract: The standard (p° = 0.1 MPa) molar enthalpy of formation of 6-propyl-2-thiouracil was derived from its standard molar energy of combustion, in oxygen, to yield CO2(g), N2(g) and H2SO4·115H2O(l), at T = 298.15 K, measured by rotating bomb combustion calorimetry. The vapor pressures as a function of temperature were measured by the Knudsen effusion technique, and the standard molar enthalpy of sublimation, Δ_cr^g H_m°, at T = 298.15 K, was derived by the Clausius–Clapeyron equation. These two thermodynamic parameters yielded the standard molar enthalpy of formation, in the gaseous phase, at T = 298.15 K: −(142.5 ± 1.9) kJ·mol⁻¹. This value was compared with estimates obtained from very accurate computational calculations using the G3 and G4 composite methods.

  13. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL]; Kulesz, James J [ORNL]; Abercrombie, Robert K [ORNL]; Kruse, Kara L [ORNL]

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation on world-wide scientific literature, and recommends a system housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  14. Thermodynamic properties of 2,7-di-tert-butylfluorene – An experimental and computational study

    International Nuclear Information System (INIS)

    Oliveira, Juliana A.S.A.; Freitas, Vera L.S.; Notario, Rafael; Ribeiro da Silva, Maria D.M.C.; Monte, Manuel J.S.

    2016-01-01

    Highlights: • Enthalpies and Gibbs energies of formation of 2,7-di-tert-butylfluorene were determined. • Vapour pressures were measured at different temperatures. • Phase transition thermodynamic properties were determined. - Abstract: This work presents a comprehensive experimental and computational study of the thermodynamic properties of 2,7-di-tert-butylfluorene. The standard (p° = 0.1 MPa) molar enthalpy of formation in the crystalline phase was derived from the standard molar energy of combustion, measured by static bomb combustion calorimetry. The enthalpies and temperatures of transition between condensed phases were determined from DSC experiments. The vapour pressures of the crystalline and liquid phases were measured between (349.14 and 404.04) K, using two different experimental methods. From these results the standard molar enthalpies, entropies and Gibbs energies of sublimation and of vaporization were derived. The enthalpy of sublimation was also determined using Calvet microcalorimetry. The thermodynamic stability of 2,7-di-tert-butylfluorene in the crystalline and gaseous phases was evaluated by the determination of the standard Gibbs energies of formation, at the temperature 298.15 K, and compared with the ones reported in the literature for fluorene. A computational study at the G3(MP2)//B3LYP and G3 levels has been carried out. A conformational analysis has been performed and the enthalpy of formation of 2,7-di-tert-butylfluorene has been calculated, using atomization and isodesmic reactions. The calculated enthalpies of formation have been compared to the experimental values.

  15. Experimental and computational studies of thermal mixing in next generation nuclear reactors

    Science.gov (United States)

    Landfried, Douglas Tyler

    The Very High Temperature Reactor (VHTR) is a proposed next-generation nuclear power plant. The VHTR utilizes helium as the coolant in the primary loop of the reactor. Helium traveling through the reactor mixes below the core in a region known as the lower plenum, where large temperature and velocity gradients exist due to non-uniform heat generation in the reactor core. Because of these large gradients, attention must be given to reducing thermal striping in the lower plenum. Thermal striping is the phenomenon by which temperature fluctuations in the fluid are transferred to, and attenuated by, surrounding structures; it is a known cause of long-term material failure. To better understand and predict thermal striping in the lower plenum, two separate bodies of work were conducted. First, an experimental facility capable of predictably recreating some aspects of flow in the lower plenum was designed according to a scaling analysis of the VHTR; namely, the facility reproduces jets issuing into a crossflow past a tube bundle. Second, extensive studies investigated the mixing of a non-isothermal parallel round triple jet at two jet-to-jet spacings. Experimental results were validated with an open source computational fluid dynamics package, OpenFOAM. Additional care is given to understanding the implementation of the realizable k-ε and Launder-Gibson RSM turbulence models in OpenFOAM. In order to measure velocity and temperature in the triple-jet experiment, a detailed investigation of temperature-compensated hotwire anemometry was carried out, with special concern given to quantifying the measurement error. Finally, qualitative comparisons of trends in the experimental and computational results were conducted. A new and unexpected physical behavior was observed in the center jet, which appeared to spread anomalously at the close spacing (S/Djet = 1.41).

  16. Experimental and Computational Study of Multiphase Flow Hydrodynamics in 2D Trickle Bed Reactors

    Science.gov (United States)

    Nadeem, H.; Ben Salem, I.; Kurnia, J. C.; Rabbani, S.; Shamim, T.; Sassi, M.

    2014-12-01

    Trickle bed reactors are widely used in refining processes. Co-current heavy oil and hydrogen gas flow downward through a catalytic particle bed. Fine particles in the heavy oil and/or soot formed by the exothermic catalytic reactions deposit on the bed and clog the flow channels. This work is funded by the refining company of Abu Dhabi and aims at mitigating pressure buildup due to fines deposition in the TBR. In this work, we focus on meso-scale experimental and computational investigations of the interplay between flow regimes and the various parameters that affect them. A 2D experimental apparatus has been built to investigate the flow regimes with an average pore diameter close to the values encountered in trickle beds. A parametric study is done of the development of flow regimes and the transition between them when the geometry and arrangement of the particles within the porous medium are varied. Liquid and gas flow velocities have also been varied to capture the different flow regimes. Real-time images of the multiphase flow are captured using a high-speed camera and used to characterize the transition between the different flow regimes. A diffused light source was placed behind the 2D trickle bed reactor to enhance visualization. The experimental data show very good agreement with the published literature. The computational study focuses on the hydrodynamics of multiphase flow and on identifying the flow regimes developed inside TBRs using the ANSYS Fluent software package. Multiphase flow inside TBRs is investigated using the "discrete particle" approach together with volume of fluid (VOF) multiphase flow modeling. The effects of bed particle diameter, spacing, and arrangement are presented, which may provide guidelines for designing trickle bed reactors.

  17. Synchrotron Micro-XANES Measurements of Vanadium Oxidation State in Glasses as a Function of Oxygen Fugacity: Experimental Calibration of Data Relevant to Partition Coefficient Determination

    Science.gov (United States)

    Delaney, J. S.; Sutton, S. R.; Newville, M.; Jones, J. H.; Hanson, B.; Dyar, M. D.; Schreiber, H.

    2000-01-01

    Oxidation state microanalyses for V in glass have been made by calibrating XANES spectral features with optical spectroscopic measurements. The oxidation state change with fugacity of O2 will strongly influence partitioning results.

  18. Nanosecond electrical explosion of thin aluminum wire in vacuum: experimental and computational investigations

    International Nuclear Information System (INIS)

    Cochrane, Kyle Robert; Struve, Kenneth William; Rosenthal, Stephen Edgar; McDaniel, Dillon Heirman; Sarkisov, Gennady Sergeevich; Deeney, Christopher

    2004-01-01

    The experimental and computational investigations of nanosecond electrical explosion of thin Al wire in vacuum are presented. We have demonstrated that increasing the current rate leads to increased energy deposited before voltage collapse. Laser shadowgrams of the overheated Al core exhibit axial stratification with a ∼100 μm period. The experimental evidence for synchronization of the wire expansion and light emission with voltage collapse is presented. Two-wavelength interferometry shows an expanding Al core in a low-ionized gas condition with increasing ionization toward the periphery. Hydrocarbons are indicated in optical spectra and their influence on breakdown physics is discussed. The radial velocity of low-density plasma reaches a value of ∼100 km/s. The possibility of an overcritical phase transition due to high pressure is discussed. 1D MHD simulation shows good agreement with experimental data. MHD simulation demonstrates separation of the exploding wire into a high-density cold core and a low-density hot corona as well as fast rejection of the current from the wire core to the corona during voltage collapse. Important features of the dynamics for wire core and corona follow from the MHD simulation and are discussed.

  19. Experimental predictions drawn from a computational model of sign-trackers and goal-trackers.

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Clark, Jeremy J; Flagel, Shelly B; Khamassi, Mehdi

    2015-01-01

    Gaining a better understanding of the biological mechanisms underlying the individual variation observed in response to rewards and reward cues could help to identify and treat individuals more prone to disorders of impulsive control, such as addiction. Variation in response to reward cues is captured in rats undergoing autoshaping experiments where the appearance of a lever precedes food delivery. Although no response is required for food to be delivered, some rats (goal-trackers) learn to approach and avidly engage the magazine until food delivery, whereas other rats (sign-trackers) come to approach and engage avidly the lever. The impulsive and often maladaptive characteristics of the latter response are reminiscent of addictive behaviour in humans. In a previous article, we developed a computational model accounting for a set of experimental data regarding sign-trackers and goal-trackers. Here we show new simulations of the model to draw experimental predictions that could help further validate or refute the model. In particular, we apply the model to new experimental protocols such as injecting flupentixol locally into the core of the nucleus accumbens rather than systemically, and lesioning of the core of the nucleus accumbens before or after conditioning. In addition, we discuss the possibility of removing the food magazine during the inter-trial interval. The predictions from this revised model will help us better understand the role of different brain regions in the behaviours expressed by sign-trackers and goal-trackers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    Science.gov (United States)

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  1. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results of the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. An excellent agreement is found in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the results of the model-estimated backscattered luminance differ from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and the experimental measurements is smaller than the experimental uncertainty.
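
    A common entry point to such models is the Koschmieder relation between meteorological visibility V and the extinction coefficient, σ = −ln(0.02)/V ≈ 3.912/V, combined with the classic single-scattering return P(R) ∝ β·exp(−2σR)/R². The sketch below replaces the paper's emitter-receiver overlap function with a crude step function, and every numeric value is an assumption.

    ```python
    import numpy as np

    def extinction_from_visibility(V):
        """Koschmieder relation (2% contrast threshold); V in metres."""
        return 3.912 / V

    def backscatter_return(R, V, beta_over_sigma=0.05, overlap_range=5.0):
        sigma = extinction_from_visibility(V)
        beta = beta_over_sigma * sigma                   # assumed backscatter fraction
        overlap = np.where(R > overlap_range, 1.0, 0.0)  # placeholder overlap function
        return overlap * beta * np.exp(-2.0 * sigma * R) / R**2

    R = np.linspace(1.0, 100.0, 200)                     # range, m
    for V in (50.0, 200.0, 1000.0):                      # dense fog to light haze
        print(V, backscatter_return(R, V).max())
    ```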

  2. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Full Text Available Sitting-acquired deep tissue injuries (SADTI) are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI a new approach is under development: a musculo-skeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for different chair postures well. The correlation coefficients between experiment and model for the seat angle, backrest angle, and footrest height were 0.93, 0.96, and 0.95, respectively. The study shows good agreement between experimental data and model predictions of the forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.

  3. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.
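
    Conventional (delay-and-sum) beamforming, the baseline that DAMAS and CLEAN-SC deconvolve, evaluates w^H R w over a scan grid, where R is the cross-spectral matrix and w a steering vector. The sketch below builds a toy planar 97-microphone example (the array size in the paper); the layout, source position, frequency, and snapshot model are all invented, and no deconvolution is performed.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    c, f = 343.0, 4000.0                    # sound speed (m/s), frequency (Hz)
    k = 2 * np.pi * f / c
    mics = rng.uniform(-0.5, 0.5, (97, 2))  # toy planar 97-mic array

    # Synthesize 64 snapshots from one monopole source at (0.1, -0.2).
    src = np.array([0.1, -0.2])
    d_src = np.linalg.norm(mics - src, axis=1)
    snap = (np.exp(-1j * k * d_src) / d_src)[:, None] * rng.normal(size=(1, 64))
    R = snap @ snap.conj().T / snap.shape[1]             # cross-spectral matrix

    xs = np.linspace(-0.4, 0.4, 41)
    bmap = np.zeros((41, 41))
    for ix, x in enumerate(xs):
        for iy, y in enumerate(xs):
            d = np.linalg.norm(mics - np.array([x, y]), axis=1)
            v = np.exp(-1j * k * d) / d
            w = v / np.linalg.norm(v)                    # unit-norm steering vector
            bmap[iy, ix] = np.real(w.conj() @ R @ w)     # beamform map value

    iy, ix = np.unravel_index(bmap.argmax(), bmap.shape)
    print("peak at", xs[ix], xs[iy])                     # ~ (0.1, -0.2)
    ```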

  4. Computational-experimental approach to drug-target interaction mapping: A case study on kinase inhibitors.

    Directory of Open Access Journals (Sweden)

    Anna Cichonska

    2017-08-01

    Full Text Available Due to relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process in terms of predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task of predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with a currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications. These results demonstrate that the kernel
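
    The paper's exact model is not reproduced here, but a kernel-based regression of the general kind it describes can be sketched with kernel ridge regression on joint compound-kinase feature vectors. All features, dimensions, and data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def rbf_kernel(A, B, gamma=0.1):
        """Gaussian (RBF) kernel between row-wise feature matrices."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # Rows stand in for compound-kinase pairs; targets for binding affinities.
    X_train = rng.normal(size=(200, 16))
    y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)
    X_new = rng.normal(size=(20, 16))       # "unmeasured" pairs to fill in

    lam = 1e-2                              # ridge regularization strength
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    y_pred = rbf_kernel(X_new, X_train) @ alpha
    print(y_pred[:5])
    ```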

  5. Experimental/Computational Approach to Accommodation Coefficients and its Application to Noble Gases on Aluminum Surface (Preprint)

    Science.gov (United States)

    2009-02-03

    Selden, Nathaniel (University of Southern California, Los Angeles). [The body of this record is OCR residue; only a figure caption is recoverable: "FIG. 5: Experimental and computed radiometric force for argon (left), xenon (right)".]

  6. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  7. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada)]; Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)]; Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)]

    2014-04-15

    computed counterparts resulting in an agreement within 2.5%, 5%, and 8% within solid water, bone, and lung, respectively. Conclusions: The proposed virtual point source model and characterization method can be used to compute absorbed dose in both the homogeneous and heterogeneous block phantoms within of 2%–8% of measured values, depending on the phantom and the beam quality. The authors’ results also provide experimental validation for their kV dose computation software, kVDoseCalc.

  8. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, a merge component was used for obtaining the crystal-level timing calibration values. In contrast with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
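
    The structure of the approach, a linear model for pairwise timing differences plus a TV penalty on the per-crystal offsets, can be illustrated compactly. The smoothed-TV gradient descent below is a generic stand-in for the paper's optimizer, and the system size, noise level, and step sizes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Coincidence timing differences give a linear system t_k ~ d_i - d_j for
    # per-crystal offsets d, regularized by total variation over crystal index.
    n, m = 100, 2000
    true = np.cumsum(rng.choice([0.0, 0.0, 0.5], n))       # piecewise-constant offsets
    i, j = rng.integers(0, n, m), rng.integers(0, n, m)
    t = true[i] - true[j] + 0.05 * rng.normal(size=m)      # noisy measurements

    d = np.zeros(n)
    lam, eps, lr = 0.5, 1e-3, 0.002
    for _ in range(3000):
        r = (d[i] - d[j]) - t                              # data residuals
        grad = np.bincount(i, r, n) - np.bincount(j, r, n) # data-term gradient
        diff = np.diff(d)
        tv = diff / np.sqrt(diff**2 + eps)                 # smoothed |.| derivative
        grad[:-1] -= lam * tv
        grad[1:] += lam * tv
        d -= lr * grad
    d -= d.mean() - true.mean()                            # fix the constant-offset gauge
    print("rms error:", np.sqrt(np.mean((d - true) ** 2)))
    ```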

  9. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
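
    The homography at the heart of this calibration can be estimated with the standard Direct Linear Transform (DLT). The sketch below omits coordinate normalization and the vision-to-radiation mapping step itself, and the four point correspondences are made up.

    ```python
    import numpy as np

    def find_homography(src, dst):
        """DLT estimate of the 3x3 homography mapping src points to dst points."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, float))
        H = Vt[-1].reshape(3, 3)             # null vector = stacked homography
        return H / H[2, 2]

    src = [(0, 0), (1, 0), (1, 1), (0, 1)]         # calibration-pattern corners
    dst = [(10, 12), (52, 14), (50, 58), (8, 55)]  # made-up image corners
    H = find_homography(src, dst)
    p = H @ np.array([1.0, 1.0, 1.0])
    print(p[:2] / p[2])                            # ~ (50, 58)
    ```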

  10. Calculating buoy response for a wave energy converter—A comparison of two computational methods and experimental results

    Directory of Open Access Journals (Sweden)

    Linnea Sjökvist

    2017-05-01

    Full Text Available When designing a wave power plant, reliable and fast simulation tools are required. Computational fluid dynamics (CFD) software provides high accuracy but at a very high computational cost, and in operational, moderate sea states, linear potential flow theories may be sufficient to model the hydrodynamics. In this paper, a model is built in COMSOL Multiphysics to solve for the hydrodynamic parameters of a point-absorbing wave energy device. The results are compared with a linear model, where the hydrodynamic parameters are computed using WAMIT, and with experimental results from the Lysekil research site. The agreement with experimental data is good for both numerical models.
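
    In the operational sea states where linear theory suffices, the buoy's heave is often reduced to a single-degree-of-freedom oscillator, (m + ma)·x″ + c·x′ + ρgAw·x = Fe(t). The sketch below integrates this equation with invented coefficients; a WAMIT-type model would supply the frequency-dependent added mass ma, damping c, and excitation force Fe.

    ```python
    import numpy as np

    rho, g = 1025.0, 9.81
    m, ma, c, Aw = 3.0e3, 1.2e3, 2.0e3, 7.0   # mass, added mass, damping, waterplane area
    Fe_amp, omega = 5.0e3, 1.0                # regular-wave excitation (N, rad/s)

    k = rho * g * Aw                          # hydrostatic stiffness, N/m
    dt, T = 0.01, 120.0
    x, v = 0.0, 0.0
    for n in range(int(T / dt)):              # semi-implicit Euler time stepping
        Fe = Fe_amp * np.cos(omega * n * dt)
        a = (Fe - c * v - k * x) / (m + ma)
        v += a * dt
        x += v * dt
    print("heave at t = 120 s:", x)           # steady-state oscillation, m
    ```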

  11. Computational and experimental analyses of the wave propagation through a bar structure including liquid-solid interface

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Jin [UST Graduate School, Daejeon (Korea, Republic of)]; Rhee, Hui Nam [Division of Mechanical and Aerospace Engineering, Sunchon National University, Sunchon (Korea, Republic of)]; Yoon, Doo Byung; Park, Jin Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2015-08-15

    In this research, we study the propagation of longitudinal and transverse waves through a metal rod including a liquid layer using computational and experimental analyses. The propagation characteristics of longitudinal and transverse waves obtained by the computational and experimental analyses were consistent with the wave propagation theory for both cases, that is, the homogeneous metal rod and the metal rod including a liquid layer. The fluid-structure interaction modeling technique developed for the computational wave propagation analysis in this research can be applied to the more complex structures including solid-liquid interfaces.

  12. Development of portable flow calibrator

    International Nuclear Information System (INIS)

    Akiyama, Kiyomitsu; Iijima, Nobuo

    1995-01-01

    In nuclear facilities, air sniffer systems are often utilized to evaluate the atmospheric concentration of radioactivity in the working environment. The system collects airborne dust on a filter during a sampling period. In this method, the total air flow during the sampling period is an important parameter for evaluating the radioactivity concentration correctly. Therefore, calibration of the flow meter of the air sniffer system must be done periodically according to the Japanese Industrial Standards (JIS). As we had no device available to calibrate the flow meters in the working area, we had to remove them from their installed locations and carry them to another place where calibration could be made. This work required a great deal of labor. We have now developed a portable flow calibrator for air sniffer systems which enables us to make in-situ calibration of flow meters in the working area more easily. This report describes the outline of the portable flow calibrator and its experimental results. (author)

  13. Instrumentation calibration

    International Nuclear Information System (INIS)

    Mack, D.A.

    1976-08-01

    Procedures for the calibration of different types of laboratory equipment are described. Provisions for maintaining the integrity of reference and working standards, traceable back to a national standard, are discussed. Methods of validation and certification are included. An appendix lists available publications and services of national standardizing agencies.

  14. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

    Full Text Available Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.

  15. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
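
    The rotation part of AX = ZB is linear in the unknowns, so one standard building block is to stack Kronecker-product equations from several pose pairs, take the SVD null vector, and project back onto SO(3). The sketch below demonstrates this on noise-free synthetic poses; it is a generic RWHE ingredient, not the authors' full pipeline, which must also recover translations and handle noise.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    def to_rotation(w):
        R = w.reshape(3, 3, order="F")       # column-major vec convention
        R = R / np.cbrt(np.linalg.det(R))    # fix the null vector's scale and sign
        U, _, Vt = np.linalg.svd(R)
        return U @ Vt                        # nearest proper rotation

    def solve_rotations(RAs, RBs):
        I3 = np.eye(3)
        # R_A R_X = R_Z R_B  <=>  [I (x) R_A | -(R_B^T (x) I)] [vecRX; vecRZ] = 0
        M = np.vstack([np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)])
                       for RA, RB in zip(RAs, RBs)])
        v = np.linalg.svd(M)[2][-1]          # 18-vector spanning the null space
        return to_rotation(v[:9]), to_rotation(v[9:])

    RX = Rotation.random(random_state=1).as_matrix()
    RZ = Rotation.random(random_state=2).as_matrix()
    RBs = Rotation.random(5, random_state=3).as_matrix()
    RAs = [RZ @ RB @ RX.T for RB in RBs]     # consistent synthetic measurements
    RXh, RZh = solve_rotations(RAs, RBs)
    print(np.allclose(RXh, RX), np.allclose(RZh, RZ))   # True True
    ```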

  16. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  17. Experimental and computational validation of BDTPS using a heterogeneous boron phantom

    CERN Document Server

    Daquino, G G; Mazzini, M; Moss, R L; Muzi, L

    2004-01-01

    The idea to couple the treatment planning system (TPS) to the information on the real boron distribution in the patient acquired by positron emission tomography (PET) is the main added value of the new methodology set up at DIMNP (Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione) of the University of Pisa, in collaboration with the JRC (Joint Research Centre) at Petten (NL). This methodology has been implemented in a new TPS, called Boron Distribution Treatment Planning System (BDTPS), which takes into account the actual boron distribution in the patient's organ, as opposed to other TPSs used in BNCT that assume an ideal uniform boron distribution. BDTPS is based on the Monte Carlo technique and has been experimentally validated by comparing the computed main parameters (thermal neutron flux, boron dose, etc.) to those measured during the irradiation of an ad hoc designed phantom (HEterogeneous BOron phantoM, HEBOM). The results are also in good agreement with those obtained by the standard TPS SER...

  18. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented techniques of design and programming. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on electron history are grouped together using continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented
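
    A representative kernel in any coupled photon-electron Monte Carlo code is sampling the Compton scattering angle from the Klein-Nishina differential cross section. The plain envelope-rejection sampler below is an illustrative stand-in, not the method actually used in PTSIM.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def sample_compton_cos_theta(alpha, rng):
        """alpha = photon energy / electron rest energy (E / 511 keV)."""
        def kn(mu):                  # Klein-Nishina dsigma/dOmega, up to constants
            ratio = 1.0 / (1.0 + alpha * (1.0 - mu))    # scattered/incident energy
            return ratio**2 * (ratio + 1.0 / ratio - (1.0 - mu**2))
        bound = kn(1.0)              # forward scattering maximizes kn
        while True:                  # accept/reject under a flat envelope
            mu = rng.uniform(-1.0, 1.0)
            if rng.uniform(0.0, bound) < kn(mu):
                return mu

    alpha = 1.0                      # ~511 keV photon
    mus = np.array([sample_compton_cos_theta(alpha, rng) for _ in range(10000)])
    print("mean cos(theta):", mus.mean())   # forward-peaked at this energy
    ```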

  19. Catalyst-Controlled and Tunable, Chemoselective Silver-Catalyzed Intermolecular Nitrene Transfer: Experimental and Computational Studies.

    Science.gov (United States)

    Dolan, Nicholas S; Scamp, Ryan J; Yang, Tzuhsiung; Berry, John F; Schomaker, Jennifer M

    2016-11-09

    The development of new catalysts for selective nitrene transfer is a continuing area of interest. In particular, the ability to control the chemoselectivity of intermolecular reactions in the presence of multiple reactive sites has been a long-standing challenge in the field. In this paper, we demonstrate examples of silver-catalyzed, nondirected, intermolecular nitrene transfer reactions that are both chemoselective and flexible for aziridination or C-H insertion, depending on the choice of ligand. Experimental probes present a puzzling picture of the mechanistic details of the pathways mediated by [(tBu3tpy)AgOTf]2 and (tpa)AgOTf. Computational studies elucidate these subtleties and provide guidance for the future development of new catalysts exhibiting improved tunability in group transfer reactions.

  20. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  1. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    Science.gov (United States)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and high frequency of job requests in the consumer market make it challenging. Clearly, current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  2. Experimental, computational and theoretical studies of δ′ phase coarsening in Al–Li alloys

    International Nuclear Information System (INIS)

    Pletcher, B.A.; Wang, K.G.; Glicksman, M.E.

    2012-01-01

    Experimental characterization of microstructure evolution in three binary Al–Li alloys provides critical tests of both diffusion screening theory and multiparticle diffusion simulations, which predict late-stage phase-coarsening kinetics. Particle size distributions, growth kinetics and maximum particle sizes obtained using quantitative, centered dark-field transmission electron microscopy are compared quantitatively with theoretical and computational predictions. We also demonstrate how the coarsening rate constant and the microstructure's maximum particle size depend on δ′ precipitate volume fraction, both of which had remained undetermined for this alloy system for nearly half a century. Our experiments show quantitatively that the diffusion-screening description of phase coarsening yields reasonable kinetic predictions, and that useful simulations of microstructure evolution are obtained via multiparticle diffusion. The tested theory and simulation method will provide useful tools for the future design of two-phase alloys for elevated-temperature applications.
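
    Late-stage coarsening of this kind is conventionally summarized by the LSW-type cube law ⟨r⟩³ − ⟨r₀⟩³ = K·t, with the volume-fraction dependence entering through the rate constant K. The sketch below recovers K from synthetic radius-versus-time data; the numbers are made up, not the paper's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    t = np.linspace(0.0, 3600.0, 25)                   # annealing time, s
    K_true, r0 = 2.0e-27, 10e-9                        # assumed m^3/s and initial radius
    r = (r0**3 + K_true * t) ** (1.0 / 3.0)            # cube-law growth
    r_meas = r * (1 + 0.02 * rng.normal(size=t.size))  # 2% measurement scatter

    # A linear fit of <r>^3 against t recovers the coarsening rate constant.
    K_fit, r0cubed_fit = np.polyfit(t, r_meas**3, 1)
    print(f"K = {K_fit:.3e} m^3/s (true {K_true:.1e} m^3/s)")
    ```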

  3. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  4. A computational and experimental investigation of wall loading near the DITE limiters

    International Nuclear Information System (INIS)

    Maddison, G.P.; Allen, J.; Fielding, S.J.; Johnson, P.C.; Matthews, G.F.

    1987-01-01

    Recycling from the DITE fixed limiter has been simulated employing a three-dimensional version of the DEGAS Monte Carlo neutral particle transport code. Experimental plasma data are used for input, in conjunction with a reasonable approximation to the DITE geometry. Across the basic cases, moments of the neutral distribution function exhibit a scaling with limiter ion flux which becomes increasingly non-linear with order, a prominent instance being the first wall power loading. This reflects principally the influence of charge exchange neutrals sensing the ion temperature profiles. By also modifying the geometry, the neutral distribution is found, for the regime investigated, to depend primarily on plasma properties in velocity space and on confining geometry in configuration space. On DITE, a so-called 'energy sensor array' (ESA) of bolometers detects wall loading adjacent to the limiter directly. Assuming a homogeneous radiation background, the computational model agrees with ESA data roughly within a factor of 2. (orig.)

  5. Improving Wind Turbine Drivetrain Reliability Using a Combined Experimental, Computational, and Analytical Approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.

    2015-03-01

    Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.

  6. Computational and experimental studies of hydrodynamic instabilities and turbulent mixing (Review of NVIIEF efforts)

    International Nuclear Information System (INIS)

    Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.; Nevmerzhitskii, N.V.; Nikiforov, V.V.; Razin, A.N.; Rogatchev, V.G.; Tolshmyakov, A.I.; Yanilkin, Yu.V.

    1995-02-01

    This report describes an extensive program of investigations conducted at Arzamas-16 in Russia over the past several decades. The focus of the work is on material interface instability and the mixing of two materials. Part 1 of the report discusses analytical and computational studies of hydrodynamic instabilities and turbulent mixing. The EGAK codes are described and results are illustrated for several types of unstable flow. Semiempirical turbulence transport equations are derived for the mixing of two materials, and their capabilities are illustrated for several examples. Part 2 discusses the experimental studies that have been performed to investigate instabilities and turbulent mixing. Shock-tube and jelly techniques are described in considerable detail. Results are presented for many circumstances and configurations

  7. Computational and experimental characterization of a liquid jet plunging into a quiescent pool at shallow inclination

    International Nuclear Information System (INIS)

    Deshpande, Suraj S.; Trujillo, Mario F.; Wu Xiongjun; Chahine, Georges

    2012-01-01

    Highlights: ► Jet impingement at shallow angles results in periodic cavity formation. ► Velocity profile affected both by buoyancy and splashing in the near field. ► Momentum diffusion leads to a velocity maximum at the gas–liquid interface for the far field. - Abstract: A circular water jet (Re = 1.6 × 10⁵; We = 8.8 × 10³) plunging at shallow angles (θ ≈ 12.5°) into a quiescent pool is investigated computationally and experimentally. A surprising finding from the work is that cavities, of the order of the jet diameter, are formed periodically at the impact location, even though the impinging flow is smooth and completely devoid of such a periodicity. Computational prediction of these frequencies was compared with experimental findings, yielding excellent agreement. The region in the vicinity of the impact is characterized by strong churning due to splashing and formation of air cavities. Measured velocity profiles indicate a concentration of momentum beneath the free surface slightly beyond the impact location (X/D_j ≈ 14), with a subsequent shift towards the free surface further downstream of this point (X/D_j ≈ 30). This shift is due primarily to the action of buoyancy on the cavity/bubble population. Comparisons of the mean velocity profile between simulations and experiments are performed, yielding good agreement, with the exception of the relatively small churning flow region. Further downstream (X/D_j ≳ 40), the flow develops mostly due to diffusion and the location of peak velocity coincides with the free surface. In this region, the free surface acts as an adiabatic boundary and restricts momentum diffusion, causing the peak velocity to occur at the free surface.

  8. Experimental and computational study on the molecular energetics of benzyloxyphenol isomers

    International Nuclear Information System (INIS)

    Ribeiro da Silva, Manuel A.V.; Lobo Ferreira, Ana I.M.C.; Cimas, Alvaro

    2011-01-01

    Highlights: → Combustion calorimetry was used to determine Δ_fH°_m(cr) of 4-benzyloxyphenol. → Vapor pressures were determined by the Knudsen mass-loss effusion technique. → Gas-phase Δ_fH°_m values of the benzyloxyphenol isomers have been derived. - Abstract: This paper reports a combined experimental and computational thermochemical study of 4-benzyloxyphenol. Static bomb combustion calorimetry and the Knudsen mass-loss effusion technique were used to determine the standard (p° = 0.1 MPa) molar enthalpy of combustion, Δ_cH°_m = −(6580.1 ± 1.8) kJ·mol⁻¹, and of sublimation, Δ_cr^g H°_m = (131.0 ± 0.9) kJ·mol⁻¹, respectively, from which the standard molar enthalpy of formation in the gaseous phase, at T = 298.15 K, Δ_fH°_m = −(119.5 ± 2.7) kJ·mol⁻¹, was derived. For comparison purposes, the gas-phase enthalpy of formation of this compound was estimated by G3(MP2)//B3LYP calculations, using a set of gas-phase working reactions; the results are in excellent agreement with experimental data. G3(MP2)//B3LYP computations were also extended to the calculation of the gas-phase enthalpies of formation of the 2- and 3-benzyloxyphenol isomers. Furthermore, this composite approach was also used to obtain information about the gas-phase acidities, gas-phase basicities, proton and electron affinities, adiabatic ionization enthalpies and, finally, O-H bond dissociation enthalpies.
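
    As a quick consistency check on the quoted values, the gas-phase enthalpy of formation equals the crystalline-phase value plus the sublimation enthalpy, so the crystalline-phase value implied by this abstract can be backed out. A minimal Python sketch; the quadrature combination of uncertainties is an illustrative assumption, not the authors' error analysis:

        # dfH(g) = dfH(cr) + dsubH  =>  dfH(cr) = dfH(g) - dsubH  (all in kJ/mol)
        dfH_gas, u_gas = -119.5, 2.7         # gas-phase enthalpy of formation
        dsubH, u_sub = 131.0, 0.9            # enthalpy of sublimation
        dfH_cr = dfH_gas - dsubH             # -250.5 kJ/mol
        u_cr = (u_gas**2 + u_sub**2) ** 0.5  # ~2.8 kJ/mol (assumed quadrature)
        print(f"dfH(cr) = {dfH_cr:.1f} +/- {u_cr:.1f} kJ/mol")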

  9. A computer-based automated algorithm for assessing acinar cell loss after experimental pancreatitis.

    Directory of Open Access Journals (Sweden)

    John F Eisses

    The change in exocrine mass is an important parameter to follow in experimental models of pancreatic injury and regeneration. However, at present, the quantitative assessment of exocrine content by histology is tedious and operator-dependent, requiring manual assessment of acinar area on serial pancreatic sections. In this study, we utilized a novel computer-generated learning algorithm to construct an accurate and rapid method of quantifying acinar content. The algorithm works by learning differences in pixel characteristics from input examples provided by human experts. H&E-stained pancreatic sections were obtained from mice recovering from a 2-day, hourly caerulein hyperstimulation model of experimental pancreatitis. For training data, a pathologist carefully outlined discrete regions of acinar and non-acinar tissue in 21 sections at various stages of pancreatic injury and recovery (termed the "ground truth"). After the expert defined the ground truth, the computer was able to develop a prediction rule that was then applied to a unique set of high-resolution images in order to validate the process. For baseline, non-injured pancreatic sections, the software demonstrated close agreement with the ground truth in identifying baseline acinar tissue area, with a difference of only 1% ± 0.05% (p = 0.21). Within regions of injured tissue, the software reported a difference of 2.5% ± 0.04% in acinar area compared with the pathologist (p = 0.47). Surprisingly, on detailed morphological examination, the discrepancy arose primarily because the software outlined acini and excluded inter-acinar and luminal white space with greater precision. The findings suggest that the software will be of great potential benefit to both clinicians and researchers in quantifying pancreatic acinar cell flux in the injured and recovering pancreas.
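
    The pixel-level learning approach described here can be sketched with off-the-shelf tools. A minimal illustration using a random-forest pixel classifier on RGB features; the feature set, classifier choice, and array names are assumptions for illustration, not the software the authors built:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def train_pixel_classifier(images, masks):
            """Train on expert-outlined regions (the 'ground truth').
            images: list of HxWx3 RGB arrays; masks: list of HxW arrays with
            1 = acinar, 0 = non-acinar, -1 = unlabeled."""
            X, y = [], []
            for img, m in zip(images, masks):
                labeled = m >= 0
                X.append(img[labeled].reshape(-1, 3))
                y.append(m[labeled].ravel())
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(np.vstack(X), np.concatenate(y))
            return clf

        def acinar_fraction(clf, img):
            """Apply the learned prediction rule and report acinar area fraction."""
            pred = clf.predict(img.reshape(-1, 3))
            return pred.mean()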

  10. Calibration and validation of a numerical model against experimental data of methane hydrate formation and dissociation in a sandy porous medium

    Science.gov (United States)

    Yin, Z.; Moridis, G. J.; Chong, Z. R.; Linga, P.

    2017-12-01

    Methane hydrates (MH) are known to trap enormous amounts of CH4 in oceanic and permafrost-associated deposits, and are being considered as a potential future energy source. Several powerful numerical simulators were developed to describe the behavior of natural hydrate-bearing sediments (HBS). The complexity and strong nonlinearities in HBS do not allow analytical solutions for code validation. The only reliable method to develop confidence in these models is through comparisons to laboratory and/or field experiments. The objective of this study is to reproduce numerically the results from earlier experiments of MH formation and depressurization (and the corresponding fluid production) in a 1.0 L reactor involving unconsolidated sand, thus validating and calibrating the TOUGH+Hydrate v1.5 simulator. We faithfully describe the reactor geometry and the experimental process that involves both hydrate formation and dissociation. We demonstrate that the laboratory experiments can only be captured by a kinetic hydration model. There is an excellent agreement between observations and predictions (a) of the cumulative gas depletion (during formation) and production (during dissociation) and (b) of pressure over time. The temperature agreement is less satisfactory, and the deviations are attributed to the fixed locations of the limited number of sensors that cannot fully capture the hydrate heterogeneity. We also predict the spatial distributions over time of the various phase (gas, aqueous and hydrate) saturations. Thus, hydrates form preferentially along the outer boundary of the sand core, and the hydrate front moves inward, leaving a significant portion of the sand at the center hydrate-free. During depressurization, dissociation advances again inward from the reactor boundary to the center of the reactor. As expected, methane gas accumulates initially at the locations of most intense dissociation, and then gradually migrates to the upper section of the reactor because of buoyancy.

  11. Neuromorphological and wiring pattern alterations effects on brain function: a mixed experimental and computational approach.

    Directory of Open Access Journals (Sweden)

    Linus Manubens-Gil

    2015-04-01

    In addition, the study of fixed intact brains (by means of the state-of-the-art CLARITY technique) brings us closer to biologically and medically relevant situations, allowing us not only to confirm whether the functional links in neuronal cultures are also present in vivo, but also enabling the introduction of functional information (like behavioral studies and functional imaging) and another layer of structural alterations such as brain region morphology, neuronal density, and long-range connectivity. Taking together the experimental information from these systems, we want to feed self-developed computational models that allow us to understand what the fundamental characteristics of the observed connectivity patterns are and the impact of each of the alterations on neuronal network function. These models will also provide a framework able to account for the emergent properties that bridge the gap between spontaneous electrical activity arousal/transmission and higher-order information processing and memory storage capacities in the brain. As an additional part of the project, we are now working on the application of the clearing, labeling and imaging protocols to human biopsy samples. Our aim is to obtain neuronal architecture and connectivity information from focal cortical dysplasia microcircuits using samples from intractable temporal lobe epilepsy patients who undergo deep-brain electrode recording diagnosis and posterior surgical extraction of the tissue. Our computational models can allow us to discern the contributions of the observed abnormalities to neuronal hyperactivity and epileptic seizure generation.

  12. Parallel computational and experimental studies of the morphological modification of calcium carbonate by cobalt

    Science.gov (United States)

    Braybrook, A. L.; Heywood, B. R.; Jackson, R. A.; Pitt, K.

    2002-08-01

    Crystal growth can be controlled by the incorporation of dopant ions into the lattice, and yet the question of how such substituents affect the morphology has not been addressed. This paper describes the forms of calcite (CaCO₃) which arise when the growth assay is doped with cobalt. Distinct and specific morphological changes are observed; the calcite crystals adopt a morphology which is dominated by the {01.1} family of faces. These experimental studies paralleled the development of computational methods for the analysis of crystal habit as a function of dopant concentration. In this case, the predicted defect morphology also argued for the dominance of the (01.1) face in the growth form. The appearance of this face was related to the preferential segregation of the dopant ions to the crystal surface. This study confirms the evolution of a robust computational model for the analysis of calcite growth forms under a range of environmental conditions and presages the use of such tools for the predictive development of crystal morphologies in those applications where chemico-physical functionality is linked closely to a specific crystallographic form.

  13. Experimental and computational thermochemistry of 6,7-dihydro-4(5H)-benzofuranone

    International Nuclear Information System (INIS)

    Sousa, Clara C.S.; Morais, Victor M.F.; Matos, M. Agostinha R.

    2013-01-01

    Highlights: ► The standard molar enthalpy of formation of 6,7-dihydro-4(5H)-benzofuranone was obtained. ► Computational estimate of the standard molar enthalpy of formation. ► Computational and experimental values agree. ► Estimates of the standard molar enthalpies of formation of the isomers were obtained. - Abstract: The standard (p = 0.1 MPa) molar enthalpy of formation of 6,7-dihydro-4(5H)-benzofuranone was measured, at T = 298.15 K, by static bomb calorimetry, and the standard molar enthalpy of vaporization, at T = 298.15 K, was obtained using Calvet microcalorimetry. These values were combined to derive the standard molar enthalpy of formation of the title compound in the gaseous phase, at T = 298.15 K, −(226.0 ± 2.8) kJ·mol⁻¹. Additionally, density functional theoretical calculations using the B3LYP hybrid exchange-correlation energy functional with extended basis sets and also other higher-level ab initio quantum calculations have been performed.

  14. Filling Source Feedthrus with Alumina/Molybdenum CND50 Cermet: Experimental, Theoretical, and Computational Approaches

    International Nuclear Information System (INIS)

    STUECKER, JOHN N.; CESARANO III, JOSEPH; CORRAL, ERICA LORRANE; SHOLLENBERGER, KIM ANN; ROACH, R. ALLEN; TORCZYNSKI, JOHN R.; THOMAS, EDWARD V.; VAN ORNUM, DAVID J.

    2001-01-01

    This report is a summary of the work completed in FY00 for science-based characterization of the processes used to fabricate cermet vias in source feedthrus. In particular, studies were completed to characterize the CND50 cermet slurry, characterize solvent imbibition, and identify critical via-filling variables. These three areas of interest are important to several processes pertaining to the production of neutron generator tubes. Rheological characterizations of CND50 slurry prepared with 94ND2 and Sandi94 primary powders were also compared. The 94ND2 powder was formerly produced at the GE Pinellas Plant, and the Sandi94 is the new replacement powder produced at CeramTec. Processing variables that may affect the via-filling process were also studied and include: the effect of solids loading in the CND50 slurry; the effect of milling time; and the effect of Nuosperse (a slurry "conditioner"). Imbibition characterization included a combination of experimental, theoretical, and computational strategies to determine solvent migration through complex shapes, specifically vias in the source feedthru component. Critical factors were determined using a controlled set of experiments designed to identify those variables that influence the occurrence of defects within the cermet-filled via. These efforts were pursued to increase part production reliability, understand selected fundamental issues that impact the production of slurry-filled parts, and validate the ability of the computational fluid dynamics code, GOMA, to simulate these processes. Suggestions are made for improving the slurry filling of source feedthru vias.

  15. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    Science.gov (United States)

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.

  16. An Experimental study on a Method of Computing Minimum flow rate

    International Nuclear Information System (INIS)

    Cho, Yeon Sik; Kim, Tae Hyun; Kim, Chang Hyun

    2009-01-01

    Many pump reliability problems in Nuclear Power Plants (NPPs) are being attributed to operation of the pump at flow rates well below its best efficiency point (BEP). Generally, the manufacturer and the user try to avert such problems by specifying a minimum flow, below which the pump should not be operated. Pump minimum flow usually involves two considerations. The first is normally termed the 'thermal minimum flow', which is the flow required to prevent the fluid inside the pump from reaching saturation conditions. The other is often referred to as the 'mechanical minimum flow', which is the flow required to prevent mechanical damage. However, the criteria for specifying such a minimum flow are not clearly understood by all parties concerned, and the various factors and information needed for computing minimum flow are not easily available, being considered proprietary by pump manufacturers. The objective of this study is to obtain experimental data for computing the minimum flow rate and to understand pump performance during low-flow operation. A test loop consisting of the pump to be used in NPPs, a water tank, flow rate measurements, and a piping system with flow control devices was established for this study.
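
    The thermal minimum flow consideration can be illustrated with the standard energy balance for the fluid temperature rise across a pump, ΔT = g·H·(1 − η)/(η·c_p). A minimal Python sketch; the pump-curve numbers and the allowable temperature rise are placeholders, not this study's test data:

        import numpy as np

        g, cp = 9.81, 4180.0                      # m/s^2; J/(kg K) for water
        Q = np.array([10., 20., 40., 80.])        # flow rate, m^3/h (placeholder curve)
        H = np.array([95., 93., 88., 75.])        # pump head at Q, m
        eta = np.array([0.10, 0.25, 0.45, 0.70])  # pump efficiency at Q

        # Temperature rise of the pumped fluid from dissipated shaft power
        dT = g * H * (1.0 - eta) / (eta * cp)     # K

        dT_allow = 1.0   # allowable rise before saturation margin is lost (assumed)
        ok = Q[dT <= dT_allow]
        print("thermal minimum flow ~", ok.min() if ok.size else "above tested range", "m^3/h")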

  17. Benzoic acid derivatives: Evaluation of thermochemical properties with complementary experimental and computational methods

    International Nuclear Information System (INIS)

    Verevkin, Sergey P.; Zaitsau, Dzmitry H.; Emeĺyanenko, Vladimir N.; Stepurko, Elena N.; Zherikova, Kseniya V.

    2015-01-01

    Highlights: • Vapor pressures of benzoic acid derivatives were measured. • Sublimation enthalpies were derived and compared with the literature. • Thermochemical data were tested for consistency using additivity rules and computations. • A contradiction between available enthalpies of sublimation was resolved. • Pairwise interactions of substituents on the benzene ring were derived. - Abstract: Molar sublimation enthalpies of the methyl- and methoxybenzoic acids were derived from the transpiration method, the static method, and TGA. Thermochemical data available in the literature were collected, evaluated, and combined with our own experimental results. This collection, together with the new experimental results reported here, has helped to resolve contradictions in the available enthalpy data and to recommend sets of sublimation and formation enthalpies for the benzoic acid derivatives. Gas-phase enthalpies of formation calculated with the G4 quantum-chemical method were in agreement with the experiment. Pairwise interactions of the methyl, methoxy, and carboxyl substituents on the benzene ring were derived and used for the development of simple group-additivity procedures for estimation of the vaporization enthalpies, gas-phase, and liquid-phase enthalpies of formation of substituted benzenes.

  18. Receiver operating characteristic curve evaluation on computed radiography: an experimental study

    International Nuclear Information System (INIS)

    Yu Zixi; Wang Changyuan; Xu Yue; Xie Jindong; Zhang Menglong; Wang Jian

    2003-01-01

    Objective: To get the maximum information from computed radiography (CR) output images by changing post-processing parameters. Methods: Six experimental photos of polypropylene balls 2.0 mm in diameter were taken by means of a single X-ray exposure of an imaging plate (IP), with different post-processing parameters including rotation amount (GA), gray gradation type (GT), rotation center (GC), shifting amount (GS), frequency rank (RN), frequency type (RT), and degree of enhancement (RE). The six photos were viewed by three students and one radiologist on a 6000 lx illuminance viewbox. Receiver operating characteristic (ROC) curves were constructed by means of a five-value differentiation method. Results: The largest mean area value (Az) under the ROC curves of a low-contrast experimental photo, with post-processing parameters GA=1.0, GT=A, GC=1.6, GS=0.3, RN=4.0, RT=R and RE=3.0, was 0.96, and the maximum information was obtained. The smallest mean area value (Az) was 0.78 with changed post-processing parameters GA=0.8, GS=-0.2 and RE=0.5 while the other parameters were not changed; the minimum information was obtained from this photo. Conclusion: In order to get the maximum information from a CR output image, the post-processing parameters should be suitably selected.
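
    The area under an ROC curve built from five-point confidence ratings can be computed directly as a Mann–Whitney statistic. A minimal Python sketch of this nonparametric (Wilcoxon) estimate rather than the binormal fit often used in ROC studies; the rating vectors are placeholders, not the study's data:

        def roc_az(signal_ratings, noise_ratings):
            """Az from ordinal (e.g., 5-point) confidence ratings: the
            probability a signal image outranks a noise image, ties = 0.5."""
            wins = ties = 0
            for s in signal_ratings:
                for n in noise_ratings:
                    if s > n:
                        wins += 1
                    elif s == n:
                        ties += 1
            return (wins + 0.5 * ties) / (len(signal_ratings) * len(noise_ratings))

        # Placeholder ratings (1 = definitely absent ... 5 = definitely present)
        print(roc_az([5, 4, 4, 3, 5, 2], [1, 2, 3, 1, 2, 4]))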

  19. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    Science.gov (United States)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Introductory fluid mechanics lectures at NIU combine students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or the extra paper assigned by the instructor. During the past three years, honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier, in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as a sudden contraction or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined to test the flexibility of the student teams' computational programs. Bridging theory with practice engaged all of the students, and multiple teams were able to predict performance to within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
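
    The kind of prediction the student teams coded can be sketched as a fixed-point iteration on the Bernoulli head-loss balance for a gravity-driven reservoir drain, g·h = (f·L/D + ΣK + 1)·V²/2, with the friction factor from the Swamee–Jain correlation. A minimal single-pipe Python sketch; the geometry, roughness, and loss coefficients are placeholders:

        import math

        def drain_velocity(h, L, D, eps=1.5e-6, K_minor=1.5, nu=1.0e-6):
            """Gravity-driven exit velocity satisfying
            g*h = (f*L/D + sum(K) + 1) * V^2 / 2, with f from Swamee-Jain."""
            g, f = 9.81, 0.02                          # initial friction-factor guess
            for _ in range(50):
                V = math.sqrt(2 * g * h / (f * L / D + K_minor + 1.0))
                Re = V * D / nu
                f_new = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2
                if abs(f_new - f) < 1e-8:
                    break
                f = f_new
            return V

        V = drain_velocity(h=0.5, L=2.0, D=0.01)   # 0.5 m head, 2 m of 10 mm tubing
        Q = V * math.pi * 0.01**2 / 4              # volume flow rate, m^3/s
        print(f"V = {V:.3f} m/s, Q = {Q*1e6:.1f} mL/s")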

  20. Quantitative fluorescence lifetime spectroscopy in turbid media: comparison of theoretical, experimental and computational methods

    International Nuclear Information System (INIS)

    Vishwanath, Karthik; Mycek, Mary-Ann; Pogue, Brian

    2002-01-01

    A Monte Carlo model developed to simulate time-resolved fluorescence propagation in a semi-infinite turbid medium was validated against previously reported theoretical and computational results. Model simulations were compared to experimental measurements of fluorescence spectra and lifetimes on tissue-simulating phantoms for single and dual fibre-optic probe geometries. Experiments and simulations using a single probe revealed that scattering-induced artefacts appeared in fluorescence emission spectra, while fluorescence lifetimes were unchanged. Although fluorescence lifetime measurements are generally more robust to scattering artefacts than are measurements of fluorescence spectra, in the dual-probe geometry scattering-induced changes in apparent lifetime were predicted both from diffusion theory and via Monte Carlo simulation, as well as measured experimentally. In all cases, the recovered apparent lifetime increased with increasing scattering and increasing source-detector separation. Diffusion theory consistently underestimated the magnitude of these increases in apparent lifetime (predicting a maximum increase of ∼15%), while Monte Carlo simulations and experiment were closely matched (showing increases as large as 30%). These results indicate that quantitative simulations of time-resolved fluorescence propagation in turbid media will be important for accurate recovery of fluorophore lifetimes in biological spectroscopy and imaging applications. (author)
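
    The apparent lifetime referred to here is typically extracted by fitting a mono-exponential decay to the time-resolved signal. A minimal Python fitting sketch on synthetic data; the decay parameters are placeholders:

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, A, tau, bg):
            # mono-exponential fluorescence decay with constant background
            return A * np.exp(-t / tau) + bg

        t = np.linspace(0, 20e-9, 200)                   # 20 ns acquisition window
        rng = np.random.default_rng(0)
        signal = decay(t, A=1000.0, tau=4.0e-9, bg=5.0)  # "true" lifetime 4 ns
        signal = rng.poisson(signal).astype(float)       # photon-counting noise

        popt, _ = curve_fit(decay, t, signal, p0=(800.0, 3e-9, 0.0))
        print(f"apparent lifetime = {popt[1]*1e9:.2f} ns")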

  1. Room temperature ionic liquids interacting with bio-molecules: an overview of experimental and computational studies

    Science.gov (United States)

    Benedetto, Antonio; Ballone, Pietro

    2016-03-01

    We briefly review experimental and computational studies of room temperature ionic liquids (RTILs) interacting with important classes of biomolecules, including phospholipids, peptides and proteins, nucleic acids and carbohydrates. Most of these studies have been driven by the interest for RTILs applications as solvents. Thus, available experimental data cover primarily thermodynamic properties such as the reciprocal solubility of RTILs and bio-molecules, as well as phase boundaries. Less extensive data are also available on transport properties such as diffusion and viscosity of homogeneous binary (RTILs/biomolecules) and ternary (RTIL/biomolecules/water) solutions. Most of the structural information at the atomistic level, of interest especially for biochemical, pharmaceutical and nanotechnology applications, has been made available by molecular dynamics simulations. Major exceptions to this statement are represented by the results from NMR and circular dichroism spectroscopy, by selected neutron and X-ray scattering data, and by recent neutron reflectometry measurements on lipid bilayers on surfaces, hydrated by water-RTIL solutions. A final section of our paper summarizes new developments in the field of RTILs based on amino acids, that combine in themselves the two main aspects of our discussion, i.e. ionic liquids and bio-molecules.

  2. Experimental and computational study of the thermodynamic properties of 2-nitrofluorene and 2-aminofluorene

    International Nuclear Information System (INIS)

    Oliveira, Juliana A.S.A.; Monte, Manuel J.S.; Notario, R.; Ribeiro da Silva, Maria D.M.C.

    2014-01-01

    Highlights: • A thermodynamic study of two fluorene derivatives is presented. • Vapour pressures and energies of combustion were measured. • Enthalpy, entropy and Gibbs energy of sublimation were derived. • Enthalpy and Gibbs energy of formation in crystal and gas phases were calculated. • Gas phase enthalpy of formation was also estimated by quantum chemical calculations. - Abstract: This report presents a comprehensive experimental and computational study of the thermodynamic properties of two fluorene derivatives: 2-aminofluorene and 2-nitrofluorene. The standard (p° = 0.1 MPa) molar enthalpies of formation in the crystalline phase of these compounds were derived from the standard molar energies of combustion, in oxygen, at T = 298.15 K, measured by static bomb combustion calorimetry. A Knudsen effusion method was used to perform the vapour pressure study of the referred compounds, yielding an accurate determination of the standard molar enthalpies and entropies of sublimation. The enthalpies of sublimation were also determined using Calvet microcalorimetry and the enthalpy and temperature of fusion were derived from DSC experiments. Derived results of standard enthalpy and Gibbs energy of formation in both gaseous and crystalline phases were compared with the ones reported in literature for fluorene. A theoretical study at the G3 and G4 levels has been carried out, and the calculated enthalpies of formation have been compared to the experimental values

  3. Role asymmetry and code transmission in signaling games: an experimental and computational investigation.

    Science.gov (United States)

    Moreno, Maggie; Baggio, Giosuè

    2015-07-01

    In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: Receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, player and role asymmetry as observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow. Copyright © 2014 Cognitive Science Society, Inc.
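
    The role-asymmetric adjustment model described in the final sentences can be sketched as a simple simulation: on each failed round, the receiver re-maps with a higher probability than the sender, so the sender's initial code tends to become the common code. A minimal Python sketch; the adjustment probabilities and game size are placeholders, not the paper's fitted values:

        import random

        def play(n_states=3, p_recv=0.8, p_send=0.2, rounds=200, seed=1):
            rng = random.Random(seed)
            sender = {s: rng.randrange(n_states) for s in range(n_states)}    # state -> signal
            receiver = {g: rng.randrange(n_states) for g in range(n_states)}  # signal -> state
            successes = 0
            for _ in range(rounds):
                state = rng.randrange(n_states)
                guess = receiver[sender[state]]
                if guess == state:
                    successes += 1
                else:
                    # asymmetric repair: receiver re-maps more often than sender
                    if rng.random() < p_recv:
                        receiver[sender[state]] = state
                    if rng.random() < p_send:
                        sender[state] = rng.randrange(n_states)
            return successes / rounds

        print(play())  # coordination rate; a shared code emerges over rounds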

  4. Benzoic acid derivatives: Evaluation of thermochemical properties with complementary experimental and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Verevkin, Sergey P., E-mail: sergey.verevkin@uni-rostock.de [Department of Physical Chemistry and Department “Science and Technology of Life, Light and Matter”, University of Rostock, D-18059 Rostock (Germany); Department of Physical Chemistry, Kazan Federal University, 420008 Kazan (Russian Federation); Zaitsau, Dzmitry H. [Department of Physical Chemistry, Kazan Federal University, 420008 Kazan (Russian Federation); Emeĺyanenko, Vladimir N. [Department of Physical Chemistry and Department “Science and Technology of Life, Light and Matter”, University of Rostock, D-18059 Rostock (Germany); Stepurko, Elena N. [Chemistry Faculty and Research Institute for Physical Chemical Problems, Belarusian State University, 220030 Minsk (Belarus); Zherikova, Kseniya V. [Nikolaev Institute of Inorganic Chemistry of Siberian Branch of Russian Academy of Sciences, 630090 Novosibirsk (Russian Federation)

    2015-12-20

    Highlights: • Vapor pressures of benzoic acid derivatives were measured. • Sublimation enthalpies were derived and compared with the literature. • Thermochemical data were tested for consistency using additivity rules and computations. • A contradiction between available enthalpies of sublimation was resolved. • Pairwise interactions of substituents on the benzene ring were derived. - Abstract: Molar sublimation enthalpies of the methyl- and methoxybenzoic acids were derived from the transpiration method, the static method, and TGA. Thermochemical data available in the literature were collected, evaluated, and combined with our own experimental results. This collection, together with the new experimental results reported here, has helped to resolve contradictions in the available enthalpy data and to recommend sets of sublimation and formation enthalpies for the benzoic acid derivatives. Gas-phase enthalpies of formation calculated with the G4 quantum-chemical method were in agreement with the experiment. Pairwise interactions of the methyl, methoxy, and carboxyl substituents on the benzene ring were derived and used for the development of simple group-additivity procedures for estimation of the vaporization enthalpies, gas-phase, and liquid-phase enthalpies of formation of substituted benzenes.

  5. COMPUTER-AIDED DESIGN, MANUFACTURE AND EXPERIMENTAL ANALYSIS OF A PAIR OF ELLIPTICAL SPUR GEARS

    Directory of Open Access Journals (Sweden)

    Mehmet YAZAR

    2016-12-01

    In this study, geometrical equations of elliptical spur gears, which are too difficult to manufacture by traditional methods and which require specific machines equipped with special techniques, are developed using methods from the literature. Using these equations, a LISP program in AutoLISP is created to model elliptical spur gears in AutoCAD with the desired tooth number and module. Elliptical spur gears are manufactured with 5 different modules by wire EDM through the above-mentioned package program. The variations in the center distances of the elliptical spur gears, the most important parameter for the workability of gears, are experimentally determined by a simple test unit designed and manufactured within the context of this study. In addition, the surface roughness and hardness of the elliptical spur gears are obtained, and hydraulic pump and noise analysis results are discussed. The experimental and computer-aided results show that elliptical spur gears may be widely used in many industrial and mechanical applications in the future.

  6. Binding of the neuroleptic drug, gabapentin, to bovine serum albumin: Insights from experimental and computational studies

    International Nuclear Information System (INIS)

    Jalali, Fahimeh; Dorraji, Parisa S.; Mahdiuni, Hamid

    2014-01-01

    The interaction between the antiepileptic drug gabapentin (GP) and bovine serum albumin (BSA) was studied by spectroscopic and computational methods. The native fluorescence of BSA was quenched by GP. The Stern–Volmer quenching constant was calculated at different temperatures, which suggested a static mechanism. The association constant (K_a) was calculated from fluorescence quenching studies and increased with rising temperature. GP competed well with warfarin for the hydrophobic subdomain IIA (Sudlow's site I) on the protein. Enthalpy and entropy changes during the interaction of GP with BSA were obtained using a van't Hoff plot, which showed an entropy-driven process and the involvement of hydrophobic forces (ΔH>0 and ΔS>0). Synchronous fluorescence measurements of the BSA solution in the presence of GP showed a considerable blue shift when Δλ=15 nm; therefore, GP interacts with tyrosine-rich sites on BSA. An optimized docked model of the BSA–GP complex confirmed the experimental results. -- Highlights: • Interaction of gabapentin and bovine serum albumin (BSA) is investigated by spectroscopic techniques. • Gabapentin can quench the fluorescence of BSA through a static quenching procedure. • The binding of gabapentin to BSA is driven mainly by hydrophobic interactions. • Subdomain IIA (Sudlow's site I) of BSA is found to be the main binding site for gabapentin. • Molecular docking modeling confirmed the experimental results.
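
    The two linear analyses named in this abstract are straightforward to reproduce: the Stern–Volmer plot F0/F = 1 + K_SV[Q] gives the quenching constant, and the van't Hoff plot ln K_a versus 1/T gives ΔH and ΔS. A minimal Python sketch on placeholder data (not the paper's measurements):

        import numpy as np

        R = 8.314  # J/(mol K)

        # Stern-Volmer: F0/F = 1 + Ksv*[Q]  (placeholder quenching data)
        Q = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5])   # quencher conc., mol/L
        F0_over_F = np.array([1.00, 1.08, 1.17, 1.25, 1.33])
        Ksv = np.polyfit(Q, F0_over_F, 1)[0]
        print(f"Ksv = {Ksv:.3g} L/mol")

        # van't Hoff: ln(Ka) = -dH/(R*T) + dS/R  (placeholder Ka vs T)
        T = np.array([298.15, 304.15, 310.15])
        Ka = np.array([1.2e4, 1.6e4, 2.1e4])          # rising Ka -> dH > 0, dS > 0
        slope, intercept = np.polyfit(1.0 / T, np.log(Ka), 1)
        dH, dS = -R * slope, R * intercept
        print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")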

  7. Experimental and computational evaluation of neutrals in the Alcator C-Mod edge pedestal

    Science.gov (United States)

    Hughes, J. W.; Mossessian, D.; Labombard, B.; Terry, J.

    2004-11-01

    Pedestal-forming edge transport barriers (ETBs) in tokamak plasmas and the physics governing them are linked to the enhancement of confinement obtained in H-mode plasmas. Studies on Alcator C-Mod employ experimental measurements and simple 1-D transport models in order to better understand ETB physics. We examine the influences of ionization and charge exchange on the pedestals in electron density and temperature. Routine measurements from edge Thomson scattering (ETS) give pedestal scalings with global plasma parameters, while individual ETS profiles are combined with scanning Langmuir probe data and optical D_α emissivity measurements to give atomic density profiles and the associated radial distribution of the ionization source rate. From H-mode profiles of these quantities a well in effective plasma diffusivity is calculated, and is shown to systematically vary as the confinement regime is varied from ELM-free to EDA. Experimental work is supplemented with modeling and computation of edge neutral transport via KN1D, a kinetic solver for atomic and molecular distribution functions in slab geometry. The level of agreement between experiment and model is encouraging.

  8. Investigation of mixed mode - I/II fracture problems - Part 1: computational and experimental analyses

    Directory of Open Access Journals (Sweden)

    O. Demir

    2016-01-01

    In this study, to properly investigate and understand the nature of fracture behavior under in-plane mixed mode (Mode-I/II) loading, three-dimensional fracture analyses and experiments on the compact tension shear (CTS) specimen are performed under different mixed-mode loading conditions. Al 7075-T651 aluminum machined from rolled plates in the L-T rolling direction (crack plane perpendicular to the rolling direction) is used in this study. Results from the finite element analyses, together with the fracture loads and crack deflection angles obtained from the experiments, are presented. To simulate the real conditions of the experiments, contacts are defined between the contact surfaces of the loading devices, specimen and loading pins. Modeling, meshing and solution of the problem involving the whole assembly, i.e., loading devices, pins and the specimen, with contact mechanics are performed using ANSYS™. Then, the CTS specimen is analyzed separately using a submodeling approach, in which three-dimensional enriched finite elements are used in the FRAC3D solver to calculate the resulting stress intensity factors along the crack front. Having performed the detailed computational and experimental studies on the CTS specimen, a new specimen type, together with its loading device, is also proposed that has smaller dimensions compared to the regular CTS specimen. Experimental results for the new specimen are also presented.

  9. Crown-ether functionalized carbon nanotubes for purification of lithium compounds: computational and experimental study

    International Nuclear Information System (INIS)

    Singha Deb, A.K.; Arora, S.K.; Joshi, J.M.; Ali, Sk. M.; Shenoy, K.T.; Goyal, Aiana

    2015-01-01

    Lithium compounds find several applications in nuclear science and technology: lithium fluoride/hydroxide/alloys are used as dosimetric materials in luminescence devices, in the molten-salt breeder reactor, in the international thermonuclear experimental reactor, in single-crystal-based neutron detectors, etc. The lithium compounds should be of the proper purity; in particular, they should not contain other alkali metal cations, which can degrade performance. Hence, there is a need to develop a process for purification of lithium salts to achieve the desired quality, and an attempt has been made here to develop advanced nanomaterials for this purpose. In this work, benzo-15-crown-5 (B15C5) functionalized carbon nanotubes (CNTs), exploiting the good adsorption properties of CNTs and the alkali-metal encapsulation behaviour of B15C5, were shown to bind preferentially to sodium and potassium ions rather than lithium ions. DFT-based computations have shown that the free energy of complexation of Na⁺ and K⁺ by B15C5-CNT is higher than that of Li⁺, implying that B15C5-CNT selectively binds Na⁺ and K⁺. Batch solid–liquid extraction experiments revealed the same trend as the calculations. Crown-ether functionalized CNTs therefore have potential for use in purifying lithium compounds. (author)

  10. Computational and experimental study of nanoporous membranes for water desalination and decontamination.

    Energy Technology Data Exchange (ETDEWEB)

    Hickner, Michael A. (Penn State University, University Park, PA); Chinn, Douglas Alan (Sandia National Laboratories, Albuquerque, NM); Adalsteinsson, Helgi; Long, Kevin R. (Texas Tech University, Lubbock, TX); Kent, Michael Stuart (Sandia National Laboratories, Albuquerque, NM); Debusschere, Bert J.; Zendejas, Frank J.; Tran, Huu M.; Najm, Habib N.; Simmons, Blake Alexander

    2008-11-01

    Fundamentals of ion transport in nanopores were studied through a joint experimental and computational effort. The study evaluated both nanoporous polymer membranes and track-etched nanoporous polycarbonate membranes. The track-etched membranes provide a geometrically well characterized platform, while the polymer membranes are more closely related to ion exchange systems currently deployed in RO and ED applications. The experimental effort explored transport properties of the different membrane materials. Poly(aniline) membranes showed that flux could be controlled by templating with molecules of defined size. Track-etched polycarbonate membranes were modified using oxygen plasma treatments, UV-ozone exposure, and UV-ozone with thermal grafting, providing an avenue to functionalized membranes, increased wettability, and improved surface characteristic lifetimes. The modeling effort resulted in a novel multiphysics multiscale simulation model for field-driven transport in nanopores. This model was applied to a parametric study of the effects of pore charge and field strength on ion transport and charge exclusion in a nanopore representative of a track-etched polycarbonate membrane. The goal of this research was to uncover the factors that control the flux of ions through a nanoporous material and to develop tools and capabilities for further studies. Continuation studies will build toward more specific applications, such as polymers with attached sulfonate groups, and complex modeling methods and geometries.

  11. Verification of SACI-2 computer code comparing with experimental results of BIBLIS-A and LOOP-7 computer code

    International Nuclear Information System (INIS)

    Soares, P.A.; Sirimarco, L.F.

    1984-01-01

    SACI-2 is a computer code created to study the dynamic behaviour of a PWR nuclear power plant. To evaluate the quality of its results, SACI-2 was used to recalculate commissioning tests performed at the BIBLIS-A nuclear power plant and to calculate postulated transients for the Angra-2 reactor. The SACI-2 results for BIBLIS-A showed agreement as good as that of the KWU LOOP-7 code calculations for Angra-2. (E.G.) [pt

  12. Thermodynamic study of 2-aminothiazole and 2-aminobenzothiazole: Experimental and computational approaches

    International Nuclear Information System (INIS)

    Silva, Ana L.R.; Monte, Manuel J.S.; Morais, Victor M.F.; Ribeiro da Silva, Maria D.M.C.

    2014-01-01

    Highlights: • Combustion of 2-aminothiazole and 2-aminobenzothiazole by rotating bomb calorimetry. • Enthalpies of sublimation of 2-aminothiazole and 2-aminobenzothiazole. • Gaseous enthalpies of formation of 2-aminothiazole and 2-aminobenzothiazole. • Gaseous enthalpies of formation calculated from high-level MO calculations. • Gas-phase enthalpies of formation estimated from the G3(MP2)//B3LYP approach. - Abstract: This work reports an experimental and computational thermochemical study of two aminothiazole derivatives, namely 2-aminothiazole and 2-aminobenzothiazole. The standard (p° = 0.1 MPa) molar energies of combustion of these compounds were measured by rotating bomb combustion calorimetry. The standard molar enthalpies of sublimation, at T = 298.15 K, were derived from the temperature dependence of the vapor pressures of these compounds, measured by the Knudsen effusion technique, and from high-temperature Calvet microcalorimetry. Combining these experimental results enabled the calculation of the standard molar enthalpies of formation in the gaseous state, at T = 298.15 K, for the compounds studied. The corresponding standard Gibbs free energies of formation in the crystalline and gaseous phases were also derived, allowing analysis of their stability in these phases. We have also estimated the gas-phase enthalpies of formation from high-level molecular orbital calculations at the G3(MP2)//B3LYP level of theory, the estimates revealing very good agreement with the experimental ones. The importance of some stabilizing electronic interactions occurring in the title molecules has been studied and quantitatively evaluated through Natural Bond Orbital (NBO) analysis of the corresponding wavefunctions, and Nucleus-Independent Chemical Shift (NICS) parameters have been calculated in order to rationalize the effect of electronic delocalization upon stability.

  13. Experimental and computational study of the energetics of hydantoin and 2-thiohydantoin

    International Nuclear Information System (INIS)

    Silva, Ana L.R.; Cimas, Álvaro; Vale, Nuno; Gomes, Paula; Monte, Manuel J.S.; Ribeiro da Silva, Maria D.M.C.

    2013-01-01

    Highlights: ► A thermodynamic study of hydantoin and 2-thiohydantoin was performed. ► Enthalpies of formation of crystalline hydantoin and 2-thiohydantoin were determined. ► Enthalpies of sublimation of hydantoin and 2-thiohydantoin were derived. ► MO calculations at the G3MP2B3 level of theory are presented for both compounds. -- Abstract: This work reports an experimental and theoretical study of two imidazolidine derivatives, hydantoin (CAS No. 461-72-3) and 2-thiohydantoin (CAS No. 503-87-7). The standard (p° = 0.1 MPa) molar energies of combustion of hydantoin and 2-thiohydantoin were measured by static and rotating bomb combustion calorimetry, respectively. The standard molar enthalpies of sublimation, at T = 298.15 K, were derived from the temperature dependence of the vapour pressures of these compounds, measured by the Knudsen effusion technique, and from high-temperature Calvet microcalorimetry. Combining these experimental results enables the calculation of the standard molar enthalpies of formation in the gaseous state, at T = 298.15 K, which are discussed in terms of structural contributions. We have also estimated the gas-phase enthalpies of formation from high-level ab initio molecular orbital calculations at the G3MP2B3 level of theory, the computed values being in good agreement with the experimental ones. Furthermore, this composite approach was also used to obtain information about the gas-phase basicities, proton and electron affinities, and adiabatic ionization enthalpies.

  14. VX hydrolysis by human serum paraoxonase 1: a comparison of experimental and computational results.

    Directory of Open Access Journals (Sweden)

    Matthew W Peterson

    Human serum paraoxonase 1 (HuPON1) is an enzyme that has been shown to hydrolyze a variety of chemicals, including the nerve agent VX. While wildtype HuPON1 does not exhibit sufficient activity against VX to be used as an in vivo countermeasure, it has been suggested that increasing HuPON1's organophosphorus hydrolase activity by one or two orders of magnitude would make the enzyme suitable for this purpose. The binding interaction between HuPON1 and VX has recently been modeled, but the mechanism for VX hydrolysis is still unknown. In this study, we created a transition state model for VX hydrolysis (VXts) in water using quantum mechanical/molecular mechanical simulations, and docked the transition state model to 22 experimentally characterized HuPON1 variants using AutoDock Vina. The HuPON1-VXts complexes were grouped by reaction mechanism using a novel clustering procedure. The average Vina interaction energies for the different clusters were compared to the experimentally determined activities of the HuPON1 variants to determine which computational procedures best predict how well HuPON1 variants will hydrolyze VX. The analysis showed that only conformations in which the attacking hydroxyl group of VXts is coordinated by the sidechain oxygen of D269 have a significant correlation with experimental results. The results from this study can be used for further characterization of how HuPON1 hydrolyzes VX and for the design of HuPON1 variants with increased activity against VX.

  15. POLCAL - POLARIMETRIC RADAR CALIBRATION

    Science.gov (United States)

    Vanzyl, J.

    1994-01-01

    processing altitude or in the aircraft roll angle are possible causes of error in computing the antenna patterns inside the processor. POLCAL uses an altitude error correction algorithm to correctly remove the antenna pattern from the SAR images. POLCAL also uses a topographic calibration algorithm to reduce calibration errors resulting from ground topography. By utilizing the backscatter measurements from either the corner reflectors or a well-known distributed target, POLCAL can correct the residual amplitude offsets in the various polarization channels and correct for the absolute gain of the radar system. POLCAL also gives the user the option of calibrating a scene using the calibration data from a nearby site. This allows precise calibration of all the scenes acquired on a flight line where corner reflectors were present. Construction and positioning of corner reflectors is covered extensively in the program documentation. In an effort to keep the POLCAL code as transportable as possible, the authors eliminated all interactions with a graphics display system. For this reason, it is assumed that users will have their own software for doing the following: (1) synthesize an image using HH or VV polarization, (2) display the synthesized image on any display device, and (3) read the pixel locations of the corner reflectors from the image. The only inputs used by the software (in addition to the input Stokes matrix data file) is a small data file with the corner reflector information. POLCAL is written in FORTRAN 77 for use on Sun series computers running SunOS and DEC VAX computers running VMS. It requires 4Mb of RAM under SunOS and 3.7Mb of RAM under VMS for execution. The standard distribution medium for POLCAL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format or on a TK50 tape cartridge in DEC VAX FILES-11 format. Other distribution media may be available upon request

  16. The overexposure accident at the Toulouse University Hospital Center. Expert report No. 1: verification of the experimental micro-beam calibration protocols before and after correction of the malfunction

    International Nuclear Information System (INIS)

    2007-01-01

    The regional stereotactic radiosurgery center of the Toulouse University Hospital Center has been equipped since April 2006 with a Novalis accelerator (Brainlab) dedicated to intracranial stereotactic radiosurgery. In April 2007, during an intercomparison of dosimetry computer files coming from different sites, the Brainlab company found an anomaly in the files. The company's analysis concluded that an inappropriate detector had been used for the measurement of a dosimetry parameter during the initial calibration of the accelerator. As a result of this error, 145 patients (of the 172 treated by the service in question) received an overdose whose severity varies from case to case. The IRSN, charged with an expert assessment of the micro-beam calibration protocols before and after correction of the malfunction, undertook a search for the technical causes of the malfunction. This report presents successively: the documentary basis on which the assessment is founded; the dosimetry and quality-control equipment necessary for the initial calibration of the device and for its follow-up; the measurements performed at the commissioning of the accelerator; the calibration of the micro-beams in the two configurations offered by the device (micro-multileaf collimator and conical collimator) and the definition of the parameters of the treatment planning software; and the maintenance and quality control implemented in the framework of its clinical use. (N.C.)

  17. Automated Calibration of Dosimeters for Diagnostic Radiology

    International Nuclear Information System (INIS)

    Romero Acosta, A.; Gutierrez Lores, S.

    2015-01-01

    Calibration of dosimeters for diagnostic radiology involves current and charge measurements, which are often repetitive. However, these measurements are usually done using modern electrometers equipped with an RS-232 interface that enables instrument control from a computer. This paper presents an automated system for the measurements required in the calibration of dosimeters used in diagnostic radiology. A software application was developed to acquire the electric charge readings and the measured values of the monitor chamber, calculate the calibration coefficient, and issue a calibration certificate. A primary data record file is filled in and stored on the computer hard disk. The calibration method used was calibration by substitution. With this system, better control over the calibration process is achieved and the need for human intervention is reduced. The automated system will be used in the calibration of dosimeters for diagnostic radiology at the Cuban Secondary Standard Dosimetry Laboratory of the Center for Radiation Protection and Hygiene. (Author)
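
    The acquisition-and-substitution workflow described here can be sketched with pyserial. The command string and port name below are hypothetical placeholders (real electrometers each have their own command set), and the substitution step simply transfers the reference-chamber calibration through monitor-normalized readings:

        import serial  # pyserial

        def read_charge(port="/dev/ttyS0"):
            """Query one charge reading over RS-232 (command string is hypothetical)."""
            with serial.Serial(port, baudrate=9600, timeout=2) as ser:
                ser.write(b"READ?\r\n")
                return float(ser.readline().decode().strip())  # e.g., nC

        def substitution_coefficient(N_ref, M_ref, mon_ref, M_dut, mon_dut):
            """Calibration by substitution: reference kerma from the standard
            chamber, normalized by monitor-chamber readings to remove
            beam-output drift between the two irradiations."""
            K = N_ref * (M_ref / mon_ref)    # air kerma per monitor unit
            return K / (M_dut / mon_dut)     # coefficient of the device under test

        # e.g., N = substitution_coefficient(N_ref=43.8, M_ref=1.95, mon_ref=1.00,
        #                                    M_dut=2.05, mon_dut=1.01)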

  18. Efficient Discovery of Novel Multicomponent Mixtures for Hydrogen Storage: A Combined Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States); Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry

    2016-11-28

    The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage that enable the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H2 or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH2 + NH3BH3] and nitrogen-hydrogen based borohydrides [e.g. Al(BH4)3(NH3)3]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility.

  19. Computational Design of Creep-Resistant Alloys and Experimental Validation in Ferritic Superalloys

    Energy Technology Data Exchange (ETDEWEB)

    Liaw, Peter

    2014-12-31

    A new class of ferritic superalloys containing B2-type zones inside parent L21-type precipitates in a disordered solid-solution matrix, also known as hierarchical-precipitate-strengthened ferritic alloys (HPSFAs), has been developed for high-temperature structural applications in fossil-energy power plants. These alloys were designed by adding Ti to a previously-studied NiAl-strengthened ferritic alloy (denoted as FBB8 in this study). In the present research, systematic investigations, including advanced experimental techniques, first-principles calculations, and numerical simulations, have been integrated and conducted to characterize the complex microstructures and excellent creep resistance of HPSFAs. The experimental techniques include transmission-electron microscopy, scanning-transmission-electron microscopy, neutron diffraction, and atom-probe tomography, which provide detailed microstructural information on HPSFAs. Systematic tension/compression creep tests revealed that HPSFAs exhibit superior creep resistance compared with the FBB8 and conventional ferritic steels (i.e., the creep rates of HPSFAs are about 4 orders of magnitude slower than those of the FBB8 and conventional ferritic steels). First-principles calculations include interfacial free energies, anti-phase boundary (APB) free energies, elastic constants, and impurity diffusivities in Fe. Combined with kinetic Monte Carlo simulations of interdiffusion coefficients, and the integration of computational thermodynamics and kinetics, these calculations provide a thorough understanding of the thermodynamic and mechanical properties of HPSFAs. In addition to the systematic experimental approach and first-principles calculations, a series of numerical tools and algorithms, which assist in the optimization of the creep properties of ferritic superalloys, are utilized and developed. These numerical simulation results are compared with the available experimental data and previous first-principles results.
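
    To make the quoted "four orders of magnitude" concrete, the sketch below evaluates an illustrative power-law (Norton) creep model; the constants are invented for demonstration and are not fitted to FBB8 or the HPSFAs.

```python
import math

# Norton power-law creep: eps_dot = A * sigma**n * exp(-Q / (R*T)).
# Reducing the pre-factor A by 1e4 at fixed stress, stress exponent and
# activation energy reproduces a "four orders of magnitude" rate reduction.
# All constants are illustrative, not measured values.

R = 8.314  # J/(mol K)

def creep_rate(A, sigma_MPa, n, Q_J_per_mol, T_K):
    return A * sigma_MPa ** n * math.exp(-Q_J_per_mol / (R * T_K))

baseline = creep_rate(A=1e-6,  sigma_MPa=100.0, n=5, Q_J_per_mol=300e3, T_K=973.0)
improved = creep_rate(A=1e-10, sigma_MPa=100.0, n=5, Q_J_per_mol=300e3, T_K=973.0)
print(f"rate ratio = {baseline / improved:.0e}")   # -> 1e+04
```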

  20. Computed tomography dosimeter utilizing a radiochromic film and an optical common-mode rejection: characterization and calibration of the GafChromic XRCT film

    International Nuclear Information System (INIS)

    Ohuchi, H.; Abe, M.

    2008-01-01

    Gafchromic XRCT radiochromic film is a self-developing, high-sensitivity radiochromic film product that can be used for the assessment of delivered radiation doses in applications such as computed tomography (CT) dosimetry. The film automatically changes color upon irradiation, from amber to dark greenish-black, depending on the level of exposure. The absorption spectra of Gafchromic XRCT film, as measured with reflectance spectrophotometry, have been investigated to analyse the dosimetry characteristics of the film. Results show two main absorption peaks produced by irradiation, located at around 630 nm and 580 nm. We employed a commercially available optical flatbed scanner for digitization of the film and image analysis software to determine the response of the XRCT films to ionizing radiation. Two dose-response curves were obtained as a function of delivered dose, ranging from 1.069 to 119.7 mGy, for tube voltages of 80, 100, and 120 kV, from films scanned 24 h after exposure. One represents the net optical density (OD) obtained with the conventional analysis using only the red component; the other shows the net reduced OD obtained with the optical common-mode rejection (CMR) scheme, which we developed, using the red and green components. The ODs measured with the optical CMR scheme show good consistency among four samples, and all values below 1 mGy follow a second-order polynomial fit, whereas those from the conventional analysis exhibit a large discrepancy among the four samples and do not follow a second-order polynomial fit below 1 mGy. This result, combined with the film's energy independence over the 80 kV to 120 kV X-ray range, provides a unique enhancement in dosimetric measurement capability, such as the acquisition of high-spatial-resolution, calibrated radiation dose profiles, over currently available dosimetry films for CT applications. (author)
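
    A minimal sketch of the common-mode rejection idea as we read it from the abstract: the dose signal lives mainly in the red channel, while scanner and film non-uniformities are common to red and green, so subtracting the two net optical densities suppresses them. Variable names are ours, not the authors'.

```python
import numpy as np

# Net optical density per colour channel, OD = log10(I_before / I_after),
# followed by the CMR combination: net reduced OD = netOD(red) - netOD(green).
# Arrays stand for scanner pixel values of the same film region before and
# after irradiation; the numbers are synthetic.

def net_od(before, after):
    return np.log10(before.astype(float) / after.astype(float))

def net_reduced_od(red_b, red_a, green_b, green_a):
    return net_od(red_b, red_a) - net_od(green_b, green_a)

red_b   = np.array([52000, 51800]); red_a   = np.array([43000, 42900])
green_b = np.array([50000, 49900]); green_a = np.array([48500, 48400])
print(net_reduced_od(red_b, red_a, green_b, green_a))
```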

  1. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til [Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany) and Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany); Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany); Philips Healthcare, X-Ray Pre-Development, Veenpluis 4-6, 5684PC Best (Netherlands); Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany)

    2010-09-15

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application-specific CFD results has not yet been thoroughly validated. Methods: In this work, using a phantom aneurysm model, the authors therefore aim to establish the reliability of CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of the time-intensity curve comparisons at each selected characteristic position. The investigated input parameters influence the simulation results to different degrees, indicating the accuracy required of the measurements. Conclusions: This study provides a comprehensive method for validating CFD simulations of the real flow field in a cerebral aneurysm phantom under well-controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.
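
    One plausible reading of the time-intensity-curve comparison metric is sketched below; the exact normalisation used in the paper may differ, and the curves here are synthetic.

```python
import numpy as np

# Relative root-mean-square error between a simulated and a measured
# time-intensity curve (TIC), normalised by the measured dynamic range.
# The TICs below are fabricated Gaussian bolus curves for illustration.

def relative_rmse(sim, meas):
    sim, meas = np.asarray(sim), np.asarray(meas)
    return np.sqrt(np.mean((sim - meas) ** 2)) / (meas.max() - meas.min())

t    = np.linspace(0.0, 5.0, 200)
meas = np.exp(-((t - 2.0) / 0.6) ** 2)           # "measured" angiographic TIC
sim  = 0.95 * np.exp(-((t - 2.1) / 0.6) ** 2)    # slightly offset simulation
print(f"relative RMSE = {relative_rmse(sim, meas):.1%}")   # well below 15%
```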

  2. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    International Nuclear Information System (INIS)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til

    2010-01-01

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application-specific CFD results has not yet been thoroughly validated. Methods: In this work, using a phantom aneurysm model, the authors therefore aim to establish the reliability of CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of the time-intensity curve comparisons at each selected characteristic position. The investigated input parameters influence the simulation results to different degrees, indicating the accuracy required of the measurements. Conclusions: This study provides a comprehensive method for validating CFD simulations of the real flow field in a cerebral aneurysm phantom under well-controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.

  3. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    International Nuclear Information System (INIS)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H.

    2016-01-01

    A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel, sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10−2 to 1.0 × 10−5 M, with a detection limit of 8.5 × 10−6 M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigation of the degradation pathway of phenazopyridine with confirmatory scans. • Computational studies were applied to avoid time-consuming experimental trials. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.
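
    The reported 59.5 mV/decade slope is the quantity extracted from a potentiometric calibration line. A minimal sketch of that extraction, with fabricated EMF readings:

```python
import numpy as np

# Nernstian calibration of an ion-selective electrode: linear regression of
# the cell potential against log10(concentration). A monovalent cation gives
# a theoretical slope of ~59.2 mV/decade at 25 degC. Readings are invented.

conc_M = np.array([1e-5, 1e-4, 1e-3, 1e-2])     # within the linear range
emf_mV = np.array([102.1, 161.8, 221.0, 280.7])

slope, intercept = np.polyfit(np.log10(conc_M), emf_mV, 1)
print(f"slope = {slope:.1f} mV/decade")          # ~59.5
```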

  4. Interaction of anthraquinone anti-cancer drugs with DNA: Experimental and computational quantum chemical study

    Science.gov (United States)

    Al-Otaibi, Jamelah S.; Teesdale Spittle, Paul; El Gogary, Tarek M.

    2017-01-01

    Anthraquinones form the basis of several anticancer drugs. Anthraquinone anticancer drugs exert their cytotoxic activities through their interaction with DNA and inhibition of topoisomerase II activity. Anthraquinones (AQ4 and AQ4H) were synthesized and studied along with 1,4-DAAQ by computational and experimental tools. The purpose of this study is to shed more light on the mechanism of interaction between anthraquinone DNA-affinic agents and different types of DNA, yielding information useful for drug design and development. Molecular structures were optimized using DFT B3LYP/6-31+G(d). Depending on intramolecular hydrogen-bonding interactions, two conformers of AQ4 were detected and computed to lie 25.667 kcal/mol apart. The molecular reactivity of the anthraquinone compounds was explored using global and condensed descriptors (electrophilicity and Fukui functions). Molecular docking studies of CDK2 inhibition and DNA binding were carried out to explore the anticancer potency of these drugs. NMR and UV-VIS electronic absorption spectra of the anthraquinones/DNA were investigated at physiological pH. The interactions of the three anthraquinones (AQ4, AQ4H and 1,4-DAAQ) with three DNAs (calf thymus DNA, poly[dA]·poly[dT] and poly[dG]·poly[dC]) were studied. The NMR study shows a qualitative pattern of drug/DNA interaction in terms of band shift and broadening. UV-VIS electronic absorption spectra were employed to measure the affinity constants of drug/DNA binding using Scatchard analysis.
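
    A hedged sketch of the Scatchard analysis mentioned above: plotting r/Cf against r gives a line of slope −K and x-intercept n, where r is the amount of drug bound per mole of DNA and Cf is the free drug concentration. The data points are synthetic.

```python
import numpy as np

# Scatchard analysis: r/Cf = K*(n - r), so a linear fit of r/Cf versus r
# yields the binding constant K (negative slope) and the number of binding
# sites n (x-intercept). Synthetic data generated with K = 1e5 M^-1, n = 0.35.

r  = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # bound drug per DNA residue
cf = r / (1e5 * (0.35 - r))                      # free drug concentration (M)

slope, intercept = np.polyfit(r, r / cf, 1)
K = -slope
n = intercept / K
print(f"K = {K:.2e} M^-1, n = {n:.2f}")
```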

  5. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations

    Energy Technology Data Exchange (ETDEWEB)

    Attia, Khalid A.M.; El-Abasawi, Nasr M.; Abdel-Azim, Ahmed H., E-mail: Ahmed.hussienabdelazim@hotmil.com

    2016-04-01

    A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel, sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10−2 to 1.0 × 10−5 M, with a detection limit of 8.5 × 10−6 M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. - Highlights: • Novel use of an ISE for the selective determination of phenazopyridine hydrochloride. • Investigation of the degradation pathway of phenazopyridine with confirmatory scans. • Computational studies were applied to avoid time-consuming experimental trials. • The proposed sensor shows high selectivity, a reasonable detection limit and fast response.

  6. Probing the mutational interplay between primary and promiscuous protein functions: a computational-experimental approach.

    Science.gov (United States)

    Garcia-Seisdedos, Hector; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M

    2012-01-01

    Protein promiscuity is of considerable interest due to its role in adaptive metabolic plasticity, its fundamental connection with molecular evolution, and its biotechnological applications. Current views on the relation between primary and promiscuous protein activities stem largely from laboratory evolution experiments aimed at increasing promiscuous activity levels. Here, on the other hand, we attempt to assess the main features of the simultaneous modulation of the primary and promiscuous functions during the course of natural evolution. The computational/experimental approach we propose for this task involves the following steps: a function-targeted, statistical coupling analysis of evolutionary data is used to determine a set of positions likely linked to the recruitment of a promiscuous activity for a new function; a combinatorial library of mutations on this set of positions is prepared and screened for both the primary and the promiscuous activities; a partial-least-squares reconstruction of the full combinatorial space is carried out; finally, an approximation to the Pareto set of variants with optimal primary/promiscuous activities is derived. Application of the approach to the emergence of folding catalysis in thioredoxin scaffolds reveals an unanticipated scenario: diverse patterns of primary/promiscuous activity modulation are possible, including a moderate (but likely significant in a biological context) simultaneous enhancement of both activities. We show that this scenario can be most simply explained on the basis of the conformational diversity hypothesis, although alternative interpretations cannot be ruled out. Overall, the results reported may help clarify the mechanisms of the evolution of new functions. From a different viewpoint, the partial-least-squares-reconstruction/Pareto-set-prediction approach we have introduced provides the computational basis for an efficient directed-evolution protocol aimed at the simultaneous optimization of primary and promiscuous activities.
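
    A toy version of the partial-least-squares reconstruction step, under our own simplifying assumptions (binary mutation encoding, additive synthetic activities); it is not the authors' pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Two activities (primary, promiscuous) are "measured" for a screened subset
# of a combinatorial library over 8 mutated positions; a PLS model fitted on
# the binary mutation patterns then reconstructs the full 2^8 space.

rng = np.random.default_rng(0)
n_pos = 8
full = np.array([[int(b) for b in f"{i:0{n_pos}b}"] for i in range(2 ** n_pos)])

w1, w2 = rng.normal(size=n_pos), rng.normal(size=n_pos)   # hidden effects
screened = rng.choice(len(full), size=60, replace=False)  # assayed variants
X = full[screened]
Y = np.c_[X @ w1, X @ w2] + rng.normal(0.0, 0.05, (60, 2))

pls = PLSRegression(n_components=4).fit(X, Y)
Y_full = pls.predict(full)          # predicted activities for all 256 variants
print(Y_full.shape)                 # (256, 2); candidates for Pareto analysis
```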

  7. Evaluation of medical countermeasures against organophosphorus compounds: the value of experimental data and computer simulations.

    Science.gov (United States)

    Worek, Franz; Aurbek, Nadine; Herkert, Nadja M; John, Harald; Eddleston, Michael; Eyer, Peter; Thiermann, Horst

    2010-09-06

    Despite extensive research for more than six decades on medical countermeasures against poisoning by organophosphorus compounds (OP), the treatment options are meagre. The presently established acetylcholinesterase (AChE) reactivators (oximes), e.g. obidoxime and pralidoxime, are insufficient against a number of nerve agents, and there is ongoing debate on the benefit of oxime treatment in human OP pesticide poisoning. Up to now, the therapeutic efficacy of oximes has mostly been evaluated in animal models, but substantial species differences prevent direct extrapolation of animal data to humans. Hence, it was considered essential to establish relevant experimental in vitro models for the investigation of oximes as antidotes and to develop computer models for the simulation of oxime efficacy in different scenarios of OP poisoning. Kinetic studies on the various interactions between erythrocyte AChE from various species, structurally different OP and different oximes provided a basis for the initial assessment of the ability of oximes to reactivate inhibited AChE. In the present study, in vitro enzyme-kinetic and pharmacokinetic data from a minipig model of dimethoate poisoning and oxime treatment were used to calculate dynamic changes of AChE activities. A close agreement between calculated and in vivo AChE activities could be shown. Moreover, computer simulations provided insight into the potential and limitations of oxime treatment. In the end, such data may be a versatile tool for the ongoing discussion of the pros and cons of oxime treatment in human OP pesticide poisoning. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
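
    The kind of calculation described above can be caricatured as a two-state kinetic model. The sketch below is a deliberately simplified version, ignoring ageing and time-varying OP/oxime concentrations; all rate constants are invented.

```python
from scipy.integrate import solve_ivp

# Two-state AChE model: free enzyme E is inhibited at rate ki*[OP], and the
# inhibited enzyme EI is reactivated by the oxime at rate kr*[oxime].
# dE/dt = -ki*OP*E + kr*OX*EI ;  dEI/dt = +ki*OP*E - kr*OX*EI

ki, kr = 0.5, 0.05      # illustrative rate constants
OP, OX = 1.0, 10.0      # assumed constant concentrations

def rhs(t, y):
    e, ei = y
    flux = ki * OP * e - kr * OX * ei
    return [-flux, flux]

sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0])   # start fully active
print(f"AChE activity after 24 h: {sol.y[0, -1]:.0%} of baseline")
```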

  8. Evaporation of freely suspended single droplets: experimental, theoretical and computational simulations

    International Nuclear Information System (INIS)

    Hołyst, R; Litniewski, M; Jakubczyk, D; Kolwas, K; Kolwas, M; Kowalski, K; Migacz, S; Palesa, S; Zientara, M

    2013-01-01

    Evaporation is ubiquitous in nature. This process influences the climate, the formation of clouds, transpiration in plants, the survival of arctic organisms, the efficiency of car engines, the structure of dried materials and many other phenomena. Recent experiments discovered two novel mechanisms accompanying evaporation: a temperature discontinuity at the liquid–vapour interface during evaporation, and the equilibration of pressures in the whole system during evaporation. Neither of these effects had been predicted by existing theories, despite the fact that after 130 years of investigation the theory of evaporation was believed to be mature. These two effects call for a reanalysis of existing experimental data, and such is the goal of this review. In this article we analyse the experimental and the computational simulation data on the droplet evaporation of several different systems: water into its own vapour, water into air, diethylene glycol into nitrogen and argon into its own vapour. We show that the temperature discontinuity at the liquid–vapour interface discovered by Fang and Ward (1999 Phys. Rev. E 59 417–28) is a rule rather than an exception. We show in computer simulations for a single-component system (argon) that this discontinuity is due to the constraint of momentum/pressure equilibrium during evaporation. For high vapour pressure the temperature is continuous across the liquid–vapour interface, while for small vapour pressures the temperature is discontinuous. The temperature jump at the interface is inversely proportional to the vapour density close to the interface. We have also found that all the analysed data are described by the following equation: da/dt = P1/(a + P2), where a is the radius of the evaporating droplet, t is time and P1 and P2 are two parameters, with P1 = −λΔT/(q_eff ρ_L), where λ is the thermal conductivity coefficient in the vapour at the interface, ΔT is the temperature difference between the liquid droplet and the surrounding vapour, q_eff is the effective heat of evaporation and ρ_L is the density of the liquid.
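
    The fitted law integrates in closed form, which makes it easy to check against radius-versus-time data. A short sketch, with illustrative parameter values:

```python
import numpy as np

# Integrating da/dt = P1/(a + P2) gives a**2/2 + P2*a = a0**2/2 + P2*a0 + P1*t,
# a quadratic in a solved by its positive root. P1 < 0 for an evaporating
# droplet. Parameter values below are illustrative only.

def radius(t, a0, P1, P2):
    c = a0**2 / 2.0 + P2 * a0 + P1 * t
    return -P2 + np.sqrt(P2**2 + 2.0 * c)

t = np.linspace(0.0, 40.0, 5)
print(radius(t, a0=10.0, P1=-1.0, P2=2.0))   # monotonically shrinking radius
```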

  9. A combined experimental and computational thermodynamic study of fluorene-9-methanol and fluorene-9-carboxylic acid

    International Nuclear Information System (INIS)

    Oliveira, Juliana A.S.A.; Calvinho, Maria M.; Notario, R.; Monte, Manuel J.S.; Ribeiro da Silva, Maria D.M.C.

    2013-01-01

    Highlights: • A thermodynamic study of two fluorene derivatives is presented. • Vapour pressures and energies of combustion were measured. • Enthalpies, entropies and Gibbs energies of sublimation were derived. • Enthalpies and Gibbs energies of formation in the crystal and gas phases were calculated. • The gas-phase enthalpy of formation was also estimated by quantum chemical calculations. -- Abstract: This work reports an experimental and computational thermodynamic study performed on two 9-fluorene derivatives: fluorene-9-methanol and fluorene-9-carboxylic acid. The standard (p° = 0.1 MPa) molar enthalpies of formation in the crystalline phase of these compounds were derived from the standard molar energies of combustion, in oxygen, at T = 298.15 K, measured by static-bomb combustion calorimetry. A static method, based on a capacitance diaphragm gauge, and a Knudsen effusion method were used to perform the vapour pressure study of the referred compounds, yielding accurate determinations of the standard molar enthalpies and entropies of sublimation and vaporisation. For fluorene-9-carboxylic acid, the enthalpy of sublimation was also determined using Calvet microcalorimetry. The enthalpy of fusion of both compounds was derived indirectly from the vapour pressure results and directly from DSC experiments. Combining the thermodynamic parameters of the compounds studied, the standard Gibbs energies of formation in the crystalline and gaseous phases were derived, as well as the standard molar enthalpies of formation in the gaseous phase. A theoretical study at the G3 and G4 levels has been carried out, and the calculated enthalpies of formation have been compared with the experimental values.
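
    A minimal sketch of how an enthalpy of sublimation follows from vapour-pressure measurements of the kind described: ln p against 1/T is fitted to a line whose slope is −ΔH_sub/R (Clausius–Clapeyron). The pressure points below are fabricated, not the paper's data.

```python
import numpy as np

# Clausius-Clapeyron fit: ln(p) = const - dH_sub / (R * T), so the slope of
# ln(p) versus 1/T gives -dH_sub/R. Synthetic data generated with
# dH_sub = 14000*R J/mol (~116 kJ/mol).

R = 8.314                                      # J/(mol K)
T = np.array([330.0, 340.0, 350.0, 360.0])     # K
p = np.exp(30.0 - 14000.0 / T)                 # Pa, fabricated

slope, _ = np.polyfit(1.0 / T, np.log(p), 1)
print(f"dH_sub ~ {-slope * R / 1000.0:.0f} kJ/mol")
```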

  10. Inferring regulatory networks from experimental morphological phenotypes: a computational method reverse-engineers planarian regeneration.

    Directory of Open Access Journals (Sweden)

    Daniel Lobo

    2015-06-01

    Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks, recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrated our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature; by analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration.

  11. Computational and experimental fluid mechanics. Progress report no. 5 for the period January 1 to December 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The general purpose of the program is the development of efficient algorithms, their implementation in Computational Fluid Mechanics (CFD) codes, and the experimental verification of these codes. Flows of both fundamental and applied nature will be investigated, including flows in industrial process equipment, around aerodynamic structures and ships, and flows over bed forms of importance for sediment transport. The experimental work will include the development of improved techniques, emphasizing optical methods. The objectives will be met through a coordinated experimental and theoretical/computational research program, organized in 7 specific projects: 1. CFD methods and algorithms. 2. Spectral element simulation of ultrafiltration. 3. Turbulent swirling flows. 4. Near-wall models of turbulence. 5. Flow over bed forms. 6. Flow past ship hull. 7. Development of experimental techniques. (EG) 10 refs.

  12. Angiographic and artefact-free computed tomography imaging of experimental aneurysms embolised with hydrogel filaments

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Mark R. [Paracelsus Medical University, Department of Radiology and Magnetic Resonance Tomography, Christian Doppler Clinic, Salzburg (Austria); Cruise, Gregory M. [MicroVention Terumo, Aliso Viejo, CA (United States); Killer, Monika [Paracelsus Medical University, Department of Neurology/Neuroscience Institute, Christian Doppler Clinic, Salzburg (Austria)

    2010-04-15

    We compared experimental rabbit carotid bifurcation aneurysms embolised with platinum coils or hydrogel filaments using digital subtraction angiography (DSA) and computed tomography angiography (CTA). Embolisation was performed using platinum coils (n = 2), hydrogel filaments loaded with iodine (n = 3) and hydrogel filaments loaded with barium sulphate (n = 3). In one case, a stent was deployed in the parent vessel to determine the effect of hydrogel filaments on stent visualisation. DSA evaluations occurred immediately post-treatment. CTA evaluations occurred at 0-13 weeks post-treatment. The DSA and CTA images were evaluated for the absence of artefacts and the visibility of the embolic mass, individual coils and residual flow in the aneurysm sac and neck. The DSA results were largely concordant among the three groups. The embolic masses were readily evident, with some individual coils being distinguishable. In the aneurysms embolised with hydrogel filaments, visualisation of the individual coils, residual flow and stent with minor or no artefacts was possible using CTA. On the other hand, beam-hardening artefacts precluded analysis of the aneurysms embolised with platinum coils. CTA-compatible embolic devices could have wide applications in diverse locations throughout the vasculature, particularly in combination with stents or stent grafts. (orig.)

  13. Synthesis and characterization of sulfolane-based amino alcohols: A combined experimental and computational study

    Science.gov (United States)

    Palchykov, Vitalii A.; Zarovnaya, Iryna S.; Tretiakov, Serhii V.; Reshetnyak, Alyona V.; Omelchenko, Iryna V.; Shishkin, Oleg V.; Okovytyy, Sergiy I.

    2018-04-01

    Aminolysis of 3,4-epoxysulfolane in aqueous media leads to a very complex mixture of products with unresolved stereochemistry. Herein, we report a detailed theoretical and experimental mechanistic investigation of this reaction, along with extensive spectroscopic characterization of the resulting amino alcohols using 1D and 2D NMR techniques (1H, 13C, NOE, NOESY, COSY, HSQC, HMBC) as well as XRD analysis. In addition to simple amines such as ammonia and benzylamine, our study also employed the more sterically hindered endo-bicyclo[2.2.1]hept-5-en-2-ylmethanamine. The mechanism of the aminolysis of 3,4-epoxysulfolane by aqueous ammonia was studied in more detail using quantum chemical calculations at the M06-2X/6-31++G** level of theory. The computational results led us to conclude that the most probable initial transformation of the epoxide is base-catalyzed rearrangement to the corresponding allylic alcohol. Subsequent formation of vicinal amino alcohols and diols proceeds via addition of ammonia or hydroxide anions to the activated C=C double bond, with some preference for cis-attack. The detailed analytical data obtained in the course of our work will be useful for the stereochemical identification of new sulfolane derivatives.

  14. Molecular interactions in the betaine monohydrate-polyol deep eutectic solvents: Experimental and computational studies

    Science.gov (United States)

    Zahrina, Ida; Mulia, Kamarza; Yanuar, Arry; Nasikin, Mohammad

    2018-04-01

    DES (deep eutectic solvents) are a new class of ionic liquids with excellent properties. The strength of the interactions between molecules in a DES affects its properties and applications. In this work, the strength of the molecular interactions between the components of eutectic mixtures of betaine monohydrate salt and polyols (glycerol or/and propylene glycol) was studied experimentally and computationally. The melting points and fusion enthalpies of the mixtures were measured using an STA (Simultaneous Thermal Analyzer). The nature and strength of the intermolecular interactions were observed by FT-IR and NMR spectroscopy. Molecular dynamics simulations were used to determine the number of H-bonds, percent occupancy, and radial distribution functions in the eutectic mixtures. The strength of interaction between betaine monohydrate and polyol follows the order: betaine monohydrate-glycerol-propylene glycol > betaine monohydrate-glycerol > betaine monohydrate-propylene glycol, the latter being the eutectic mixture with the lowest stability, strength and extent of hydrogen-bonding interactions between component molecules. The spectra indicate the presence of intramolecular hydrogen-bonding interactions, intermolecular hydrogen-bonding interactions between the betaine molecules and the polyols, and interactions between the polyols and the H2O of betaine monohydrate in the eutectic mixtures.

  15. An experimental platform for triaxial high-pressure/high-temperature testing of rocks using computed tomography

    Science.gov (United States)

    Glatz, Guenther; Lapene, Alexandre; Castanier, Louis M.; Kovscek, Anthony R.

    2018-04-01

    A conventional high-pressure/high-temperature experimental apparatus for combined geomechanical and flow-through testing of rocks is not X-ray compatible. Additionally, current X-ray transparent systems for computed tomography (CT) of cm-sized samples are limited to design temperatures below 180 °C. We describe a novel, high-temperature (>400 °C), high-pressure (>2000 psi/>13.8 MPa confining, >10 000 psi/>68.9 MPa vertical load) triaxial core holder suitable for X-ray CT scanning. The new triaxial system permits time-lapse imaging to capture the role of effective stress on fluid distribution and porous medium mechanics. System capabilities are demonstrated using ultimate compressive strength (UCS) tests of Castlegate sandstone. In this case, flooding the porous medium with a radio-opaque gas such as krypton before and after the UCS test improves the discrimination of rock features such as fractures. The results of high-temperature tests are also presented. A Uintah Basin sample of immature oil shale is heated from room temperature to 459 °C under uniaxial compression. The sample contains kerogen that pyrolyzes as temperature rises, releasing hydrocarbons. Imaging reveals the formation of stress bands as well as the evolution and connectivity of the fracture network within the sample as a function of time.

  16. Computational modeling of the pressurization process in a NASP vehicle propellant tank experimental simulation

    Science.gov (United States)

    Sasmal, G. P.; Hochstein, J. I.; Wendl, M. C.; Hardy, T. L.

    1991-01-01

    A multidimensional computational model of the pressurization process in a slush hydrogen propellant storage tank was developed and its accuracy evaluated by comparison to experimental data measured for a 5 ft diameter spherical tank. The fluid mechanic, thermodynamic, and heat transfer processes within the ullage are represented by a finite-volume model. The model was shown to be in reasonable agreement with the experimental data. A parameter study was undertaken to examine the dependence of the pressurization process on the initial ullage temperature distribution and the pressurant mass flow rate. It is shown that for a given heat flux rate at the ullage boundary, the pressurization process is nearly independent of the initial temperature distribution. Significant differences were identified between the ullage temperature and velocity fields predicted for pressurization of slush hydrogen and those predicted for pressurization of liquid hydrogen. A simplified model of the pressurization process was constructed in search of a dimensionless characterization of the pressurization process. It is shown that the relationship derived from this simplified model collapses all of the pressure history data generated during this study onto a single curve.

  17. A unified approach to linking experimental, statistical and computational analysis of spike train data.

    Directory of Open Access Journals (Sweden)

    Liang Meng

    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state-space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that are consistent with observed spiking data (and that include the model that generated the data), but we had yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach, linking statistical, computational, and experimental neuroscience, provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
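
    A very condensed bootstrap particle filter in the spirit of the approach described, applied to a toy state-space model (a leaky voltage trace with one unknown conductance-like parameter); it is not the authors' neuron model.

```python
import numpy as np

# Toy state-space model: v[t] = v[t-1] + dt*(-g*v[t-1] + 1) + process noise,
# observed with Gaussian noise. A bootstrap particle filter with parameter
# jitter tracks the state and estimates the unknown parameter g.

rng = np.random.default_rng(1)
T, N, dt, g_true = 200, 500, 0.1, 0.8

v = np.zeros(T); obs = np.zeros(T)
for t in range(1, T):                      # simulate "recorded" data
    v[t] = v[t-1] + dt * (-g_true * v[t-1] + 1.0) + 0.05 * rng.normal()
    obs[t] = v[t] + 0.1 * rng.normal()

pv = np.zeros(N)                           # particle states
pg = rng.uniform(0.1, 2.0, N)              # particle parameter guesses
for t in range(1, T):
    pv += dt * (-pg * pv + 1.0) + 0.05 * rng.normal(size=N)   # propagate
    w = np.exp(-0.5 * ((obs[t] - pv) / 0.1) ** 2) + 1e-12     # weight
    idx = rng.choice(N, N, p=w / w.sum())                     # resample
    pv, pg = pv[idx], pg[idx] + 0.01 * rng.normal(size=N)     # jitter g

print(f"estimated g = {pg.mean():.2f} (true value {g_true})")
```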

  18. Experimental and computational studies of film cooling with compound angle injection

    Energy Technology Data Exchange (ETDEWEB)

    Goldstein, R.J.; Eckert, E.R.G.; Patankar, S.V. [Univ. of Minnesota, Minneapolis, MN (United States)] [and others]

    1995-10-01

    The thermal efficiency of gas turbine systems depends largely on the turbine inlet temperature. Recent decades have seen a steady rise in the inlet temperature and a resulting reduction in fuel consumption. At the same time, it has been necessary to employ intensive cooling of the hot components. Among the various cooling methods, film cooling has become a standard method for cooling turbine airfoils and combustion chamber walls. The University of Minnesota program is a combined experimental and computational study of various film-cooling configurations. Whereas a large number of parameters influence film-cooling processes, this research focuses on compound angle injection through a single row and through two rows of holes. Later work will investigate the value of contoured hole designs. An appreciation of the advantages of compound angle injection has grown recently with the demand for more effective cooling and with improved understanding of the flow; this project should continue to further this understanding. Approaches being applied include: (1) a new measurement system that extends the mass/heat transfer analogy to obtain both local film cooling and local mass (heat) transfer results in a single system, (2) direct measurement of three-dimensional turbulent transport in a highly disturbed flow, (3) the use of compound angle and shaped holes to optimize film cooling performance, and (4) an exploration of anisotropy corrections to turbulence modeling of film cooling jets.

  19. Gypsum plasterboards enhanced with phase change materials: A fire safety assessment using experimental and computational techniques

    Directory of Open Access Journals (Sweden)

    Kolaitis Dionysios I.

    2013-11-01

    Phase Change Materials (PCM) can be used for thermal energy storage, aiming to enhance building energy efficiency. Recently, gypsum plasterboards with incorporated paraffin-based PCM blends have become commercially available. In the high-temperature environment developed during a fire, the paraffins, which exhibit relatively low boiling points, may evaporate and, escaping through the gypsum plasterboard's porous structure, emerge into the fire region, where they may ignite, thus adversely affecting the fire resistance characteristics of the building. Aiming to assess the fire safety behaviour of such building materials, an extensive experimental and computational analysis is performed. The fire behaviour and the main thermo-physical properties of PCM-enhanced gypsum plasterboards are investigated using a variety of standard tests and devices (Scanning Electron Microscopy, Thermo-Gravimetric Analysis, Cone Calorimeter). The obtained results are used to develop a dedicated numerical model, which is implemented in a CFD code. The CFD simulations are validated using measurements obtained in a cone calorimeter. In addition, the CFD code is used to simulate an ISO 9705 room exposed to fire conditions, demonstrating that PCM addition may indeed adversely affect the fire safety of a gypsum-plasterboard-clad building.

  20. Computational and experimental study of the cluster size distribution in MAPLE

    International Nuclear Information System (INIS)

    Leveugle, Elodie; Zhigilei, Leonid V.; Sellinger, Aaron; Fitz-Gerald, James M.

    2007-01-01

    A combined experimental and computational study is performed to investigate the origin and characteristics of the surface features observed in SEM images of thin polymer films deposited by matrix-assisted pulsed laser evaporation (MAPLE). Analysis of high-resolution SEM images of the surface morphologies of films deposited at different fluences reveals that the mass distributions of the surface features are well described by a power law, Y(N) ∝ N^−t, with exponent t ≈ 1.6. Molecular dynamics simulations of the MAPLE process predict a similar size distribution for the large clusters observed in the ablation plume. A weak dependence of the cluster size distributions on fluence and target composition suggests that the power-law cluster size distribution may be a general characteristic of the ablation plume generated as a result of an explosive decomposition of a target region overheated above the limit of its thermodynamic stability. Based on the simulation results, we suggest that the ejection of large matrix-polymer clusters, followed by evaporation of the volatile matrix, is responsible for the formation of the surface features observed in the polymer films deposited in MAPLE experiments.
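
    Extracting such an exponent is a one-line log-log regression once the cluster-size histogram is in hand. A sketch with synthetic counts generated to have t ≈ 1.6:

```python
import numpy as np

# Power-law exponent from a cluster mass distribution: fit a straight line to
# log10(counts) versus log10(size); the negative of the slope is t.
# The histogram below is synthetic, generated around Y(N) ~ N**-1.6.

rng = np.random.default_rng(2)
sizes  = np.logspace(1, 4, 12)                       # cluster mass bins
counts = 5e4 * sizes ** -1.6 * np.exp(rng.normal(0.0, 0.1, sizes.size))

t_exp = -np.polyfit(np.log10(sizes), np.log10(counts), 1)[0]
print(f"fitted exponent t = {t_exp:.2f}")            # ~1.6
```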

  1. Computer-mediated communication and interpersonal attraction: an experimental test of two explanatory hypotheses.

    Science.gov (United States)

    Antheunis, Marjolijn L; Valkenburg, Patti M; Peter, Jochen

    2007-12-01

    The aims of this study were (a) to investigate the influence of computer-mediated communication (CMC) on interpersonal attraction and (b) to examine two underlying processes in the CMC-interpersonal attraction relationship. We identified two variables that may mediate the influence of CMC on interpersonal attraction: self-disclosure and direct questioning. Focusing on these potential mediating variables, we tested two explanatory hypotheses: the CMC-induced direct questioning hypothesis and the CMC-induced self-disclosure hypothesis. Eighty-one cross-sex dyads were randomly assigned to one of three experimental conditions: text-only CMC, visual CMC, and face-to-face communication. We did not find a direct effect of CMC on interpersonal attraction. However, we did find two positive indirect effects of text-only CMC on interpersonal attraction: text-only CMC stimulated both self-disclosure and direct questioning, both of which in turn enhanced interpersonal attraction. Results are discussed in light of uncertainty reduction theory and CMC theories.

  2. Integrative computational and experimental approaches to establish a post-myocardial infarction knowledge map.

    Directory of Open Access Journals (Sweden)

    Nguyen T Nguyen

    2014-03-01

    Vast research efforts have been devoted to providing clinical diagnostic markers of myocardial infarction (MI), leading to over one million abstracts associated with "MI" and "Cardiovascular Diseases" in PubMed. The accumulation of these research results poses a challenge for their integration and interpretation. To address this problem, and to better understand how the left ventricle (LV) remodels post-MI at both the molecular and cellular levels, we propose here an integrative framework that couples computational methods and experimental data. We selected an initial set of MI-related proteins from published human studies and constructed an MI-specific protein-protein-interaction network (MIPIN). Structural and functional analysis of the MIPIN showed that the post-MI LV exhibited increased representation of proteins involved in transcriptional activity, inflammatory response, and extracellular matrix (ECM) remodeling. Known plasma or serum expression changes of the MIPIN proteins in patients with MI were acquired by data mining of PubMed and the UniProt knowledgebase, and served as a training set to predict unlabeled MIPIN protein changes post-MI. The predictions were validated against published results in PubMed, suggesting the predictive capability of the MIPIN. Further, we established the first knowledge map related to the post-MI response, providing a major step towards enhancing our understanding of molecular interactions specific to MI and linking molecular interactions, cellular responses, and biological processes to quantify LV remodeling.

  3. EXPERIMENTAL AND COMPUTATIONAL ACTIVITIES AT THE OREGON STATE UNIVERSITY NEES TSUNAMI RESEARCH FACILITY

    Directory of Open Access Journals (Sweden)

    S.C. Yim

    2009-01-01

    A diverse series of research projects have taken place or are underway at the NEES Tsunami Research Facility at Oregon State University. Projects range from the simulation of the processes and effects of tsunamis generated by sub-aerial and submarine landslides (NEESR, Georgia Tech.), model comparisons of tsunami wave effects on bottom profiles and scouring (NEESR, Princeton University), model comparisons of wave-induced motions on rigid and free bodies (Shared-Use, Cornell), numerical model simulations and testing of breaking waves and inundation over topography (NEESR, TAMU), structural testing and development of standards for tsunami engineering and design (NEESR, University of Hawaii), and wave loads on coastal bridge structures (non-NEES), to upgrading the two-dimensional wave generator of the Large Wave Flume. A NEESR payload project (Colorado State University) was undertaken that seeks to improve the understanding of the stresses from wave loading and run-up on residential structures. Advanced computational tools for coupling fluid-structure interaction, including turbulence, contact and impact, are being developed to assist with the design of experiments and complement parametric studies. These projects will contribute towards understanding the physical processes that occur during earthquake-generated tsunamis, including structural stress, debris flow and scour, inundation and overland flow, and landslide-generated tsunamis. Analytical and numerical model development and comparisons with the experimental results give engineers additional predictive tools to assist in the development of robust structures as well as the identification of hazard zones and formulation of hazard plans.

  4. Thermal Hydraulic Computational Fluid Dynamics Simulations and Experimental Investigation of Deformed Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Mays, Brian [AREVA Federal Services, Lynchburg, VA (United States); Jackson, R. Brian [TerraPower, Bellevue, WA (United States)

    2017-03-08

    The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. This first-of-a-kind project produced both local point-temperature measurements and local flow-mixing experimental data, paired with numerical-simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Mass Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.

  5. Computational tools for experimental determination and theoretical prediction of protein structure

    Energy Technology Data Exchange (ETDEWEB)

    O'Donoghue, S.; Rost, B.

    1995-12-31

    This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. The authors review the state of the art in the experimental determination of protein 3D structure (with a focus on nuclear magnetic resonance) and in the theoretical prediction of protein function and of protein structure in 1D, 2D and 3D from sequence. All the atomic-resolution structures determined so far have been derived from either X-ray crystallography (the majority so far) or Nuclear Magnetic Resonance (NMR) spectroscopy (becoming increasingly important). The authors briefly describe the physical methods behind both of these techniques; the major computational methods involved are covered in some detail. They highlight parallels and differences between the methods, as well as their current limitations. Special emphasis is given to techniques with application to ab initio structure prediction. Large-scale sequencing techniques widen the gap between the number of known protein sequences and that of known protein structures. The authors describe the scope and principles of methods that contribute successfully to closing that gap, with emphasis on the specification of adequate testing procedures to validate such methods.

  6. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    Science.gov (United States)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many robot specifications have been improved and enriched through advances in technology. One of these is the humanoid robot with facial expressions that closely resemble natural human facial expressions. The purpose of this research is to perform computational simulations of facial expressions and to conduct tensile-strength tests of silicone rubber as an artificial skin. Facial expressions were computed by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A facial expression of the robot is determined by the direction and magnitude of the external force at the driven point. The expression of the robot face mimics human facial expressions, with the muscle structure of the face following human facial anatomy. For developing facial-expression robots, the facial action coding system (FACS) is adopted to reproduce human expressions. Tensile-strength testing is conducted to check the force that the artificial skin can sustain, for application in future robot facial expressions. Combining the computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.

  7. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    Science.gov (United States)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for the dry calibration of an electromagnetic flowmeter (EMF). The method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm for computing the sensitivity of the EMF that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as a boundary condition on the pipe surface to reconstruct the magnetic field involved. Along with an in-depth discussion of the factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been studied experimentally by installing a baffle at the inflow port of the EMF. Results of the dry calibration of an actual EMF were compared against a flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. As it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs, where conventional flow-rig methods are often costly and difficult to implement.
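
    For orientation, the idealised textbook limit of the computation is sketched below: a uniform field and a uniform weight function reduce the induced voltage to U = B·D·v̄. Real dry calibration replaces this with the full multiphysics weight-function integral; all numbers are illustrative.

```python
import math

# Idealised EMF relation: U = B * D * v_mean for a uniform transverse flux
# density B across a pipe of inner diameter D. The sensitivity S = U / Q links
# the electrode voltage to the volume flow rate Q. Illustrative values only.

B, D, v_mean = 5e-3, 0.10, 2.0              # tesla, metres, m/s
U = B * D * v_mean                          # induced electrode voltage (V)
Q = v_mean * math.pi * (D / 2.0) ** 2       # volume flow rate (m^3/s)
print(f"U = {U * 1e3:.2f} mV, sensitivity S = {U / Q:.4f} V*s/m^3")
```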

  8. Experimental and Computational Studies of the Flow Over a Sting Mounted Planetary Probe Configuration

    Science.gov (United States)

    Holden, Michael S.; Harvey, John K.; Boyd, Iain D.; George, Jyothish; Horvath, Thomas J.

    1997-01-01

    This paper summarizes the results of a series of experimental studies in the LENS shock tunnel and computations with DSMC and Navier-Stokes codes which were made to examine the aerothermal and flowfield characteristics of the flow over a sting-supported planetary probe configuration in hypervelocity air and nitrogen flows. The experimental program was conducted in the LENS hypervelocity shock tunnel at total enthalpies of 5 and 10 MJ/kg for a range of reservoir pressure conditions from 70 to 500 bars. Heat transfer and pressure measurements were made on the front and rear faces of the probe and along the supporting sting. High-speed and single-shot schlieren photography were also employed to examine the flow over the model and the time to establish the flow in the base recirculation region. Predictions of the flowfield characteristics and the distributions of heat transfer and pressure were made with DSMC codes for rarefied flow conditions and with the Navier-Stokes solvers for the higher-pressure conditions, where the flows were assumed to be laminar. Analysis of the time history records from the heat transfer and pressure instrumentation on the face of the probe and in the base region indicated that the base flow was fully established in under 4 milliseconds from flow initiation, or between 35 and 50 flow lengths based on base height. The measurements, made in three different tunnel entries with two models of identical geometry but different instrumentation packages, one prepared by NASA Langley and the second prepared by CUBRC, demonstrated good agreement between heat transfer measurements made with two different types of thin-film and coaxial gage instrumentation. The measurements of heat transfer and pressure on the front face of the probe were in good agreement with theoretical predictions from both the DSMC and Navier-Stokes codes. For the measurements made in low-density flows, computations with the DSMC code were found to compare well with the measurements.

  9. Computational and Experimental Evaluations of a Novel Thermo-Brachytherapy Seed for Treatment of Solid Tumors

    Science.gov (United States)

    Warrell, Gregory R.

    Hyperthermia has long been known as a radiation therapy sensitizer of high potential; however, successfully delivering this modality and integrating it with radiation have often proved technically difficult. We present the dual-modality thermobrachytherapy (TB) seed, based on the ubiquitous low-dose-rate (LDR) brachytherapy permanent implant, as a simple and effective combination of hyperthermia and radiation therapy. Heat is generated by a ferromagnetic or ferrimagnetic core within the seed, which produces Joule heating through eddy currents. A strategically selected Curie temperature provides thermal self-regulation. In order to obtain a uniform and sufficiently high temperature distribution, additional hyperthermia-only (HT-only) seeds are proposed for use in vacant spots within the needles used to implant the TB seeds; this permits a high seed density without the use of additional needles. Experimental and computational studies were performed both to optimize the design of the TB and HT-only seeds and to quantitatively assess their ability to heat and irradiate defined, patient-specific targets. Experiments were performed with seed-sized ferromagnetic samples in tissue-mimicking phantoms heated by an industrial induction heater. The magnetic and thermal properties of the seeds were studied computationally in the finite element analysis (FEA) solver COMSOL Multiphysics, modelling realistic patient-specific seed distributions. These distributions were derived from LDR permanent prostate implants previously conducted at our institution; various modifications of the seed design were studied. The calculated temperature distributions were analyzed by generating temperature-volume histograms, which were used to quantify coverage and temperature homogeneity for a range of blood perfusion rates, as well as for a range of seed Curie temperatures and thermal power production rates. The impact of the interseed attenuation and scatter (ISA) effect on radiation dose distributions was also assessed.
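
    A minimal sketch of the temperature-volume histogram analysis mentioned above, the thermal analogue of a dose-volume histogram; the voxel temperatures are random numbers standing in for a finite-element solution.

```python
import numpy as np

# Cumulative temperature-volume histogram: for each threshold, the fraction
# of target voxels at or above that temperature. The temperature field is
# synthetic, not a COMSOL result.

temps = 37.0 + 8.0 * np.random.default_rng(3).random(10_000)   # degC per voxel

for threshold in (40.0, 42.0, 43.0):
    covered = np.mean(temps >= threshold)
    print(f"fraction of volume >= {threshold:.0f} degC: {covered:.1%}")
```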

  10. Radiation Shielding Materials Containing Hydrogen, Boron, and Nitrogen: Systematic Computational and Experimental Study. Phase I

    Science.gov (United States)

    Thibeault, Sheila A.; Fay, Catharine C.; Lowther, Sharon E.; Earle, Kevin D.; Sauti, Godfrey; Kang, Jin Ho; Park, Cheol; McMullen, Amelia M.

    2012-01-01

    The key objectives of this study are to investigate, both computationally and experimentally, which forms, compositions, and layerings of hydrogen-, boron-, and nitrogen-containing materials will offer the greatest shielding in the most structurally robust combination against galactic cosmic radiation (GCR), secondary neutrons, and solar energetic particles (SEP). The objectives and expected significance of this research are to develop a space radiation shielding materials system that has high efficacy for shielding radiation and that also has high strength for load-bearing primary structures. Such a materials system does not yet exist. The boron nitride nanotube (BNNT) can theoretically be processed into structural BNNT and used for load-bearing structures. Furthermore, BNNT can be incorporated into high-hydrogen polymers, and the combination used as matrix reinforcement for structural composites. BNNT's molecular structure is attractive for hydrogen storage and hydrogenation. There are two methods for introducing hydrogen into BNNT: (1) hydrogen storage in BNNT, and (2) hydrogenation of BNNT (hydrogenated BNNT). In the hydrogen storage method, nanotubes are favored over particles and sheets for storing hydrogen because they have much larger surface areas and higher hydrogen binding energies. The carbon nanotube (CNT) and BNNT have been studied as potentially outstanding hydrogen storage materials since 1997. Our study of hydrogen storage in BNNT - as a function of temperature, pressure, and hydrogen gas concentration - will be performed with a hydrogen storage chamber equipped with a hydrogen generator. The second method of introducing hydrogen into BNNT is hydrogenation of BNNT, where hydrogen is covalently bonded onto boron, nitrogen, or both. Hydrogenation of BN and BNNT has been studied theoretically. Hyper-hydrogenated BNNT has been theoretically predicted with hydrogen coverage up to 100% of the individual atoms. This is a higher hydrogen content

  11. An experimental and computational investigation of gas/particle flow in a vertical lifter

    Energy Technology Data Exchange (ETDEWEB)

    Mathisen, Anette

    2010-07-01

    Experimental and computational investigations of dilute gas/particle flow in a vertical lifter are performed. The effects of superficial gas velocity, particle density, particle size distribution and particle loading on particle velocities, particle fluctuations and particle cross-moments have been studied experimentally using laser Doppler anemometry (LDA) and particle image velocimetry (PIV). The results from the experimental investigation are compared with a computational investigation using Fluent. The experimental measurements are performed on a lab-scale vertical lifter, consisting of a fluidizing silo and a receiving tank with a glass pipe in which the solids phase is transported. The particles are placed in the fluidization tank and transport air enters at the bottom of the silo. The transport pipe is suspended above the inlet, and as the transport air passes the opening, the particles are dragged into the air flow and transported upwards to the receiving tank. Fluidizing air is used to control the particle loading in the system and is supplied through a distribution plate. The test section of the transport pipe is made of glass to enable the use of the optical laser-based investigation techniques, LDA and PIV. Two types of powders are used, ZrO{sub 2} and glass, each with two different particle size distributions, with average diameters of 260 and 530 micron and 120 and 518 micron, respectively. The experimental techniques LDA and PIV are used to investigate a dilute gas/particle vertical flow, and the two techniques are also evaluated for use on this type of flow. LDA is a single-point measurement technique, which means that one point is measured at a time. The acquisition stops when a pre-set criterion is reached, based either on sample number or on time. A measurement spanning the whole cross-section of the pipe consists of several points, which make up a cross-sectional profile. PIV, on the other hand, is a whole-field technique and consequently

  12. Validation of functional calibration and strap-down joint drift correction for computing 3D joint angles of knee, hip, and trunk in alpine skiing.

    Science.gov (United States)

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    To obtain valid 3D joint angles with inertial sensors, careful sensor-to-segment calibration (i.e. functional or anatomical calibration) is required, and the angular velocity measured at each sensor needs to be integrated to obtain segment and joint orientations (i.e. joint angles). Existing functional and anatomical calibration procedures were optimized for gait analysis, and their calibration movements are impractical to perform in outdoor settings. Thus, the aims of this study were 1) to propose and validate a set of calibration movements optimized for alpine skiing that can be performed outdoors and 2) to validate the 3D joint angles of the knee, hip, and trunk during alpine skiing. The proposed functional calibration movements consisted of squats, trunk rotations, hip ad/abductions, and upright standing. The joint drift correction previously proposed for alpine ski racing was improved by adding a second step to separately reduce azimuth drift. The system was validated indoors on a skiing carpet at the maximum belt speed of 21 km/h and for measurement durations of 120 seconds. Calibration repeatability was on average boots. Joint angle precision was <4.9° for all angles, and accuracy ranged from -10.7° to 4.2°, where an athlete-specific bias was observed especially for the flexion angle. The improved joint drift correction reduced azimuth drift from over 25° to less than 5°. In conclusion, the system was valid for measuring 3D joint angles during alpine skiing and can be used outdoors. Errors were similar to the values reported in other studies for gait. The system may be well suited for within-athlete analysis, but care should be taken for between-athlete analysis because of a possible athlete-specific joint angle bias.

  13. New approach for calibrating the efficiency of HPGe detectors

    International Nuclear Information System (INIS)

    Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias

    2013-01-01

    Full-text: This work evaluates the efficiency calibration of HPGe detectors, a Canberra GC3018 with Genie 2000 software and an Ortec GEM25-76-XLB-C with Gamma Vision software, available at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point source set composed of 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. Polynomial functions were fitted with a computer program, MATLAB, in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, which allows the efficiency to be evaluated at a particular energy of interest. The study shows significant deviations in the efficiency depending on the source-detector distance and photon energy. (author)
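
    A common way to implement the kind of fitted efficiency function described above is a low-order polynomial in log-log space. The sketch below (in Python rather than MATLAB) illustrates the idea; the calibration energies, efficiencies, and polynomial order are purely illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical calibration points: gamma energies (keV) and measured
# full-energy-peak efficiencies at one fixed source-detector distance.
energy_kev = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5])
efficiency = np.array([0.012, 0.031, 0.021, 0.012, 0.0075, 0.0068])

# Fit ln(eff) as a polynomial in ln(E), a standard functional form.
ln_e = np.log(energy_kev)
coeffs = np.polyfit(ln_e, np.log(efficiency), deg=3)

def eff_at(energy):
    """Interpolated efficiency at an energy of interest (keV)."""
    return np.exp(np.polyval(coeffs, np.log(energy)))

print(f"efficiency at 800 keV ~ {eff_at(800.0):.4f}")
```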

  14. Calibration technology in application of robot-laser scanning system

    Science.gov (United States)

    Ren, YongJie; Yin, ShiBin; Zhu, JiGui

    2012-11-01

    A system composed of a laser sensor and a 6-DOF industrial robot is proposed to obtain complete three-dimensional (3-D) information of an object's surface. Suitable for different ways of combining the laser sensor and robot, a new method to calibrate the position and pose between sensor and robot is presented. By using a standard sphere with known radius as a reference tool, the rotation and translation matrices between the laser sensor and robot are computed in two separate steps, so that many unstable factors introduced in conventional optimization methods can be avoided. The experimental results show that the proposed calibration method achieves an accuracy of up to 0.062 mm. The calibration method is also implemented in the automated robot scanning system to reconstruct a car door panel.
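
    A core sub-step in sphere-based calibration of this kind is recovering the sphere centre from scanned surface points. The following sketch shows a standard algebraic least-squares sphere fit on synthetic data; it is a generic building block under assumed inputs, not the authors' full sensor-to-robot calibration procedure.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Rewrites |p - c|^2 = r^2 as the linear system 2 p.c - k = |p|^2,
    with k = |c|^2 - r^2, and solves for the centre c and radius r.
    """
    p = np.asarray(points, dtype=float)
    a = np.hstack([2.0 * p, -np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(centre @ centre - k)
    return centre, radius

# Hypothetical scan of a reference sphere (centre (1,2,3) mm, radius 25.4 mm).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 25.4 * dirs + rng.normal(scale=0.01, size=(500, 3))

c, r = fit_sphere(pts)
print(c, r)  # deviation of r from the known 25.4 mm indicates scan accuracy
```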

  15. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling; Etalonnage d'un spectrometre gamma en vue de la mesure de la radioactivite naturelle. Mesures experimentales et modelisation par techniques de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled it using the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, {sup 60}Co, which exhibits cascade effects and angular correlations between photons. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  16. Synthesis of radiolabelled aryl azides from diazonium salts: experimental and computational results permit the identification of the preferred mechanism.

    Science.gov (United States)

    Joshi, Sameer M; de Cózar, Abel; Gómez-Vallejo, Vanessa; Koziorowski, Jacek; Llop, Jordi; Cossío, Fernando P

    2015-05-28

    Experimental and computational studies on the formation of aryl azides from the corresponding diazonium salts support a stepwise mechanism via acyclic zwitterionic intermediates. The low energy barriers associated with both transition structures are compatible with very fast and efficient processes, thus making this method suitable for the chemical synthesis of radiolabelled aryl azides.

  17. Experimental investigation of the diffusion coefficients in porous media by application of X-ray computer tomography

    DEFF Research Database (Denmark)

    Zhelezny, Petr; Shapiro, Alexander

    2006-01-01

    The present work describes a new experimental method that makes it possible to investigate diffusion coefficients in a porous medium. The method is based on application of X-ray computed tomography (CT). The general applicability of this method for the determination of diffusion coefficients...

  18. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    Science.gov (United States)

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  19. Experimental and computational analysis of micromotions of an uncemented femoral knee implant using elastic and plastic bone material models

    NARCIS (Netherlands)

    Berahmani, Sanaz; Janssen, Dennis; Verdonschot, Nico

    2017-01-01

    It is essential to calculate micromotions at the bone-implant interface of an uncemented femoral total knee replacement (TKR) using a reliable computational model. In the current study, experimental measurements of micromotions were compared with predicted micromotions by Finite Element Analysis

  20. A calibration and data assimilation method using the Bayesian MARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2013-01-01

    Highlights: ► We outline a transparent, flexible method for the calibration of uncertain inputs to computer models. ► We account for model, data, emulator, and measurement uncertainties. ► The method produces improved predictive results, which are validated using leave-one-out experiments. ► Our implementation leverages the Bayesian MARS emulator, but any emulator may be substituted. -- Abstract: We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to estimate the posterior distribution of the uncertain inputs such that when samples from the posterior are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments with confidence bounds. The method is similar to Metropolis–Hastings calibration methods with independently sampled updates, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our application, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The resulting posterior distributions agree with our existing intuition, and we validate the results by performing a series of leave-one-out predictions. We find that the calibrated predictions are considerably more accurate and less uncertain than blind sampling of the forward model alone.
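
    The sample-and-weight scheme described in this abstract can be illustrated in a few lines. The sketch below uses a toy one-parameter emulator and a single measured response (all names and numbers are invented for illustration): each prior sample is weighted by the Gaussian likelihood of reproducing the measurement, and the weighted samples form the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

def emulator(theta):
    """Toy surrogate standing in for BMARS/Hyades 2D."""
    return 2.0 * theta + 0.5 * theta**2

y_obs, sigma_obs = 3.1, 0.2                        # measurement and its error
theta_prior = rng.uniform(0.0, 2.0, size=20000)    # prior samples

# Weight each sample by the likelihood of matching the experiment.
resid = emulator(theta_prior) - y_obs
w = np.exp(-0.5 * (resid / sigma_obs) ** 2)
w /= w.sum()

# Weighted (posterior) summaries and resampled posterior draws.
post_mean = np.sum(w * theta_prior)
post_draws = rng.choice(theta_prior, size=5000, p=w)
print(post_mean, np.percentile(post_draws, [2.5, 97.5]))
```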

  1. Experimental and computational studies of poly-L-lactic acid for cardiovascular applications: recent progress

    Science.gov (United States)

    Naseem, Raasti; Zhao, Liguo; Liu, Yang; Silberschmidt, Vadim V.

    2017-12-01

    Stents are commonly used in medical procedures to alleviate the symptoms of coronary heart disease, a prevalent modern-society disease. These structures are employed to maintain vessel patency and restore blood flow. Traditionally, stents are made of metals such as stainless steel or cobalt chromium; however, these scaffolds have known disadvantages. Transient scaffolds are gaining popularity, with the structure engaged for a required period whilst healing of the diseased arterial wall occurs. Polymers dominate the medical device sector, with incorporation in sutures, scaffolds and screws, thanks to their good mechanical and biological properties and their ability to degrade naturally. Polylactic acid is an extremely versatile polymer, with its properties easily tailored to applications. Its dominance in the stenting field increases continually, with the first polymer scaffold gaining FDA approval in 2016. Still, some challenges with PLLA bioresorbable materials remain, especially with regard to understanding their mechanical response, assessing its changes with degradation, and comparing their performance with that of metallic drug-eluting stents. Currently, there is still a lack of work evaluating both the pre-degradation properties and the degradation performance of these scaffolds. Additionally, there are no established material models incorporating the non-linear viscoelastic behaviour of PLLA and its evolution with in-service degradation. Assessing these features through experimental analysis accompanied by analytical and numerical studies will provide powerful tools for the design and optimisation of these structures, endorsing their broader use in stenting. This overview assesses recent studies investigating the mechanical and computational performance of poly(L-lactic) acid and its use in stenting applications.

  2. Experimental and computational investigation of graphene/SAMs/n-Si Schottky diodes

    Science.gov (United States)

    Aydin, H.; Bacaksiz, C.; Yagmurcukardes, N.; Karakaya, C.; Mermer, O.; Can, M.; Senger, R. T.; Sahin, H.; Selamet, Y.

    2018-01-01

    We have investigated the effect of two different self-assembled monolayers (SAMs) on the electrical characteristics of bilayer graphene (BLG)/n-Si Schottky diodes. Novel 4,4″-bis(diphenylamino)-1,1′:3′,1″-terphenyl-5′-carboxylic acid (TPA) and 4,4″-di(9H-carbazol-9-yl)-1,1′:3′,1″-terphenyl-5′-carboxylic acid (CAR) aromatic SAMs were used to modify the n-Si surfaces. Cyclic voltammetry (CV) and Kelvin probe force microscopy (KPFM) results were evaluated to verify the modification of the n-Si surface. The current-voltage (I-V) characteristics of the bare and SAM-modified devices show rectification behaviour, verifying a Schottky junction at the interface. The ideality factors (n) from the ln(I)-V dependences were determined as 2.13, 1.96 and 2.07 for the BLG/n-Si, BLG/TPA/n-Si and BLG/CAR/n-Si Schottky diodes, respectively. In addition, the Schottky barrier height (SBH) and series resistance (Rs) of the SAM-modified diodes were decreased compared to the bare diode, due to the formation of a compatible interface between graphene and Si as well as π-π interaction between the aromatic SAMs and graphene. The CAR-based device exhibits better diode characteristics than the TPA-based device. Computational simulations show that the BLG/CAR system exhibits smaller energy-level differences than the BLG/TPA system, which supports the experimental findings of a lower Schottky barrier and series resistance in the BLG/CAR diode.
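
    Extracting an ideality factor from ln(I)-V data, as reported above, is a one-line fit under the thermionic-emission model I = I0·exp(qV/nkT). The sketch below uses synthetic forward-bias data (all numbers illustrative) to show the procedure.

```python
import numpy as np

# Thermionic emission: I = I0 * exp(qV / (n k T)) for V >> kT/q, so the
# slope of ln(I) vs V gives the ideality factor n = q / (k T * slope).
q, k, t = 1.602e-19, 1.381e-23, 300.0

def ideality_factor(v, i):
    slope, _ = np.polyfit(v, np.log(i), 1)
    return q / (k * t * slope)

# Hypothetical forward-bias data for a diode with n ~ 2 (illustrative only).
v = np.linspace(0.15, 0.45, 20)
i = 1e-9 * np.exp(q * v / (2.0 * k * t))
print(f"n = {ideality_factor(v, i):.2f}")
```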

  3. Electronic nature of zwitterionic alkali metal methanides, silanides and germanides - a combined experimental and computational approach.

    Science.gov (United States)

    Li, H; Aquino, A J A; Cordes, D B; Hase, W L; Krempner, C

    2017-02-01

    Zwitterionic group 14 complexes of the alkali metals of formula [C(SiMe2OCH2CH2OMe)3M] (M-1), [Si(SiMe2OCH2CH2OMe)3M] (M-2), and [Ge(SiMe2OCH2CH2OMe)3M] (M-3), where M = Li, Na or K, have been prepared, structurally characterized, and their electronic nature investigated by computational methods. Zwitterions M-2 and M-3 were synthesized via reactions of [Si(SiMe2OCH2CH2OMe)4] (2) and [Ge(SiMe2OCH2CH2OMe)4] (3) with MOBut (M = Li, Na or K), respectively, in almost quantitative yields, while M-1 were prepared by deprotonation of [HC(SiMe2OCH2CH2OMe)3] (1) with LiBut, NaCH2Ph and KCH2Ph, respectively. X-ray crystallographic studies and gas-phase DFT calculations, including calculations of the NPA charges, confirm the zwitterionic nature of these compounds, with the alkali metal cations rigidly locked and charge-separated from the anion by the internal OCH2CH2OMe donor groups. Natural bond orbital (NBO) analysis and second-order perturbation theory analysis of the NBOs reveal significant hyperconjugative interactions in M-1-M-3, primarily between the lone pair and the antibonding Si-O orbitals, the extent of which decreases in the order M-1 > M-2 > M-3. The experimental basicities and the calculated gas-phase basicities of M-1-M-3 reveal the zwitterionic alkali metal methanides M-1 to be significantly stronger bases than the analogous silanides M-2 and germanides M-3.

  4. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2 to 40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that vendors of mercury vapor calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD, and by Joe Rovani from WRI, who traveled to NIST as a Visiting Scientist.
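
    The calculation such an equation supports is straightforward: a vapor pressure correlation gives p(T), and the ideal gas law converts it to a mass concentration, c = pM/(RT). The sketch below uses an Antoine-form placeholder with illustrative coefficients; it is not the NIST-recommended equation discussed above.

```python
# Saturation concentration of elemental mercury vapor from a vapor pressure
# correlation plus the ideal gas law: c = p * M / (R * T).
M_HG = 200.59      # molar mass of Hg, g/mol
R = 8.314          # gas constant, J/(mol K)

def hg_vapor_pressure_pa(t_kelvin, a=10.122, b=3190.0):
    """Placeholder Antoine-form correlation: log10(p/Pa) = a - b/T.

    The coefficients a and b are illustrative assumptions only.
    """
    return 10.0 ** (a - b / t_kelvin)

def hg_concentration_ug_m3(t_kelvin):
    p = hg_vapor_pressure_pa(t_kelvin)
    return p * M_HG / (R * t_kelvin) * 1e6   # g/m3 -> ug/m3

print(f"{hg_concentration_ug_m3(293.15):.0f} ug/m3 saturation at 20 C")
```

    Calibrators deliver the low ug/m{sup 3} range cited above by diluting the output of such a saturated equilibrium chamber with a known carrier gas flow.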

  5. Irrigant flow in the root canal: experimental validation of an unsteady Computational Fluid Dynamics model using high-speed imaging.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Versluis, M; Kastrinakis, E; van der Sluis, L W M

    2010-05-01

    To compare the results of a Computational Fluid Dynamics (CFD) simulation of the irrigant flow within a prepared root canal, during final irrigation with a syringe and a needle, with experimental high-speed visualizations and theoretical calculations of an identical geometry and to evaluate the effect of off-centre positioning of the needle inside the root canal. A CFD model was created to simulate irrigant flow from a side-vented needle inside a prepared root canal. Calculations were carried out for four different positions of the needle inside a prepared root canal. An identical root canal model was made from poly-dimethyl-siloxane (PDMS). High-speed imaging of the flow seeded with particles and Particle Image Velocimetry (PIV) were combined to obtain the velocity field inside the root canal experimentally. Computational, theoretical and experimental results were compared to assess the validity of the computational model. Comparison between CFD computations and experiments revealed good agreement in the velocity magnitude and vortex location and size. Small lateral displacements of the needle inside the canal had a limited effect on the flow field. High-speed imaging experiments together with PIV of the flow inside a simulated root canal showed a good agreement with the CFD model, even though the flow was unsteady. Therefore, the CFD model is able to predict reliably the flow in similar domains.

  6. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    Science.gov (United States)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.

  7. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    International Nuclear Information System (INIS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J

    2015-01-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques. (paper)

  8. Computational and experimental study of effects of sediment shape on erosion of hydraulic turbines

    International Nuclear Information System (INIS)

    Poudel, L; Thapa, B; Shrestha, B P; Thapa, B S; Shrestha, K P; Shrestha, N K

    2012-01-01

    Hard particles such as quartz and feldspar are present in large amounts in most of the rivers across the Himalayan basins. In run-of-river hydropower plants these particles find their way into the turbine and cause its components to erode. Loss of turbine material due to this erosion and the subsequent change in flow pattern induce several operational and maintenance problems in the power plants. Reduction in overall efficiency, vibrations and reduced life of turbine components are the major effects of sediment erosion of hydraulic turbines. Sediment erosion of hydraulic turbines is a complex phenomenon and depends upon several factors. One of the most influential parameters is the characteristics of the sediment particles. The quantity of sediment particles harder than the turbine material is one basis for indicating the erosion potential of a particular site. Research findings have indicated that the shape and size of the hard particles, together with the velocity of impact, play a major role in deciding the mode and rate of erosion of turbine components. It is not common practice in Himalayan basins to conduct a detailed study of sediment characteristics as part of the feasibility study for hydropower projects. The lack of scientifically verified procedures and guidelines for conducting sediment analysis to estimate erosion potential is one reason this important part of the feasibility study is overlooked. The present study has been conducted by implementing computational tools to characterize the sediment particles with respect to their shape and size. Experimental studies have also been done to analyze the effects of different combinations of shape and size of hard particles on turbine material. Efforts have also been made to develop standard procedures for conducting similar studies to compare erosion potential between different hydropower sites. Digital image processing software and a sieve analyzer have been utilized to extract the shape and size of sediment particles from the erosion sensitive power

  9. Automatic calibration of gamma spectrometers

    International Nuclear Information System (INIS)

    Tluchor, D.; Jiranek, V.

    1989-01-01

    The principle of energy calibration of a spectrometric path based on the measurement of a standard of one radionuclide, or a set of them, is described. The entire computer-aided process is divided into three main steps, viz.: insertion of the calibration standard by the operator; the start of the calibration program; and energy calibration by the computer. The program was designed so that spectrum identification does not depend on the adjustment of the digital or analog elements of the gamma spectrometric measuring path. The ECL program for automatic energy calibration is described, as are its control, the organization of the data file ECL.DAT and the necessary hardware support. Communication between the computer and the multichannel analyzer was provided by an interface pair of Canberra 8673V and Canberra 8573 units operating in the RS-422 standard. All subroutines for communication with the multichannel analyzer were written in MACRO 11, while the main program and the other subroutines were written in FORTRAN-77. (E.J.). 1 tab., 4 refs
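
    The computational core of such an energy calibration is a small regression from measured peak centroids (in channels) to the known line energies of the standard. A minimal sketch, assuming illustrative centroid values for the 137Cs and 60Co lines:

```python
import numpy as np

# Known gamma-line energies (keV) and hypothetical measured peak centroids.
known_energies = np.array([661.66, 1173.23, 1332.49])   # 137Cs, 60Co
peak_channels = np.array([1322.4, 2345.1, 2663.8])      # channel centroids

# A linear calibration E = a + b*ch is usually sufficient; a mild quadratic
# term can be added for long spectrometric paths.
b, a = np.polyfit(peak_channels, known_energies, 1)

def channel_to_energy(ch):
    return a + b * ch

print(f"E(channel 2000) = {channel_to_energy(2000.0):.1f} keV")
```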

  10. Raman spectroscopic measurements of CO2 density: Experimental calibration with high-pressure optical cell (HPOC) and fused silica capillary capsule (FSCC) with application to fluid inclusion observations

    Science.gov (United States)

    Wang, X.; Chou, I-Ming; Hu, W.; Burruss, Robert; Sun, Q.; Song, Y.

    2011-01-01

    Raman spectroscopy is a powerful method for the determination of CO2 densities in fluid inclusions, especially for those with small size and/or low fluid density. The relationship between the CO2 Fermi diad split (Δ, cm−1) and CO2 density (ρ, g/cm3) has been documented by several previous studies. However, significant discrepancies exist among these studies, mainly because of inconsistent calibration procedures and a lack of measurements for CO2 fluids having densities between 0.21 and 0.75 g/cm3, where liquid and vapor phases coexist near room temperature. In this study, a high-pressure optical cell and fused silica capillary capsules were used to prepare pure CO2 samples with densities between 0.0472 and 1.0060 g/cm3. The measured CO2 Fermi diad splits were calibrated with two well established Raman bands of benzonitrile at 1192.6 and 1598.9 cm−1. The relationship between the CO2 Fermi diad split and density can be represented by: ρ = 47513.64243 − 1374.824414 × Δ + 13.25586152 × Δ² − 0.04258891551 × Δ³ (r² = 0.99835, σ = 0.0253 g/cm3). This relationship was tested with synthetic fluid inclusions and natural CO2-rich fluid inclusions. The effects of temperature and the presence of H2O and CH4 on this relationship were also examined.
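
    The reported cubic is straightforward to apply in code. A minimal sketch evaluating the published fit for a measured Fermi diad split (the example split value is illustrative, and the fit is only meaningful inside the documented density range):

```python
import numpy as np

def co2_density(delta_cm1):
    """CO2 density (g/cm3) from the Fermi diad split (cm-1), using the
    cubic calibration quoted in the abstract above."""
    d = np.asarray(delta_cm1, dtype=float)
    return (47513.64243
            - 1374.824414 * d
            + 13.25586152 * d**2
            - 0.04258891551 * d**3)

print(co2_density(103.2))  # ~0.2 g/cm3 for an example split of 103.2 cm-1
```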

  11. Fully in Silico Calibration of Empirical Predictive Models for Environmental Fate Properties of Novel Munitions Compounds

    Science.gov (United States)

    2016-04-01

    constants using several IMC-related aromatic amines. To develop QSARs with these data, many possible descriptors were evaluated, ranging from simple to...converted to approximate half-lives. (Figure: the blue symbols are experimental data, obtained with IMC-related amino compounds)...calibrated "in silico" that predict the hydrolysis behavior of the diverse range of energetic NACs. However, new experimental and computational

  12. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-01-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are −0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions. (paper)

  13. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    Science.gov (United States)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-07-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.
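
    One common way to realize the magnetometer-calibration step described in the two records above is a scalar calibration: per-axis scale factors and offsets are adjusted so that the magnitude of the corrected vector matches the total field from the reference proton magnetometer in every orientation of the rotation platform. The sketch below solves this with synthetic data; the error model and all numbers are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
f_ref = 48000.0                                   # reference total field, nT
true_scale = np.array([1.02, 0.98, 1.01])         # hypothetical sensor errors
true_offset = np.array([120.0, -80.0, 40.0])      # nT

# Synthetic measurements over random orientations of the platform.
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
h_true = f_ref * u
h_meas = h_true / true_scale + true_offset + rng.normal(scale=2.0, size=u.shape)

def residual(params):
    scale, offset = params[:3], params[3:]
    h_cal = scale * (h_meas - offset)             # corrected vector
    return np.linalg.norm(h_cal, axis=1) - f_ref  # magnitude mismatch

fit = least_squares(residual, x0=np.array([1, 1, 1, 0, 0, 0], dtype=float))
print("scale:", fit.x[:3], "offset (nT):", fit.x[3:])
```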

  14. Mercury Continuous Emission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, future mercury emissions regulations will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury, and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  15. [Genotoxic modification of nucleic acid bases and biological consequences of it. Review and prospects of experimental and computational investigations]

    Science.gov (United States)

    Poltev, V. I.; Bruskov, V. I.; Shuliupina, N. V.; Rein, R.; Shibata, M.; Ornstein, R.; Miller, J.

    1993-01-01

    A review is presented of experimental and computational data on the influence of genotoxic modification of bases (deamination, alkylation, oxidation) on the structure and biological functioning of nucleic acids. Pathways are discussed by which modification influences the coding properties of bases, possible errors of nucleic acid biosynthesis, and the configurations of nucleotide mispairs. The atomic structure of nucleic acid fragments with modified bases and the role of base damage in mutagenesis and carcinogenesis are considered.

  16. A comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements.

    Science.gov (United States)

    Abdelgaied, A; Fisher, J; Jennings, L M

    2018-02-01

    A more robust pre-clinical wear simulation framework is required in order to simulate the wider and higher ranges of activities observed in different patient populations, such as younger, more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National_Joint_Registry, 2016). The current study has developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured in experimental studies under realistic test conditions, similar to the loading conditions found in total knee replacements. The wear predictions from the computational wear simulation were validated against direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities: walking, deep squat, and stair-ascending kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than those reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependent on the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in good agreement with the independent full-TKR experimental wear simulation measurements, with a coefficient of determination of 0.94 for the framework. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily
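
    The kind of dependence reported above is typically encoded as a wear coefficient k that varies with contact stress and cross-shear inside an Archard-type update applied element by element over the gait cycle. The sketch below illustrates that structure only; the functional form and all constants are invented for illustration and are not the published model.

```python
def wear_coefficient(stress_mpa, cross_shear):
    """Hypothetical k(stress, cross-shear): increases with cross-shear,
    decreases with contact stress. Units are lumped so depth comes out
    in mm for the inputs used below."""
    return 1.0e-10 * (1.0 + 4.0 * cross_shear) / (1.0 + 0.05 * stress_mpa)

def wear_depth_increment(stress_mpa, cross_shear, sliding_mm, cycles):
    """Archard-type update: dh = k * p * s per cycle, summed over cycles."""
    k = wear_coefficient(stress_mpa, cross_shear)
    return k * stress_mpa * sliding_mm * cycles

# One million walking cycles at a surface element seeing 20 MPa contact
# stress, moderate cross-shear, and 10 mm sliding per cycle:
print(f"{wear_depth_increment(20.0, 0.2, 10.0, 1.0e6):.3f} mm")
```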

  17. Experimental evaluation of multimodal human computer interface for tactical audio applications

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.; Jovanov, E.; Oy, S.

    2002-01-01

    Mission critical and information overwhelming applications require careful design of the human computer interface. Typical applications include night vision or low visibility mission navigation, guidance through a hostile territory, and flight navigation and orientation. Additional channels of

  18. An Integrated Experimental and Computational Study of Heating due to Surface Catalysis under Hypersonic Conditions

    Science.gov (United States)

    2012-08-01

    The computational platform COOLFluiD (Computational Object Oriented Libraries for Fluid Dynamics) [1, 2, 3, 4] is VKI's collaborative software environment for high-performance object-oriented scientific computing [A. Lani et al., "The COOLFluiD framework: Design solutions for high-performance object oriented scientific computing software"; A. Lani, "An Object Oriented and high performance platform for aerothermodynamics simulation," PhD thesis, Université Libre de Bruxelles].

  19. From Interfaces to Bulk: Experimental-Computational Studies Across Time and Length Scales of Multi-Functional Ionic Polymers

    Energy Technology Data Exchange (ETDEWEB)

    Perahia, Dvora [Clemson Univ., SC (United States); Grest, Gary S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-25

    Neutron experiments coupled with computational components have resulted in unprecedented understanding of the factors that impact the behavior of ionic structured polymers. Additionally, new computational tools to study macromolecules were developed. In parallel, this DOE funding has enabled the education of the next generation of materials researchers, who are able to take advantage of what neutron tools offer for the understanding and design of advanced materials. Our research has provided unprecedented insight into one of the major factors that limits the use of ionizable polymers, combining the macroscopic view obtained from the experimental techniques with molecular insight extracted from computational studies, leading to transformative knowledge that will impact the design of nano-structured materials. With its focus on model systems of broad interest to the scientific community and to industry, the research addressed challenges that cut across a large number of polymers, independent of the specific chemical structure or the transported species.

  20. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan

    2009-01-01

    This paper reports the latest results of a comprehensive program of experimental and computational analysis of the strength and reliability of wooden parts of low-cost wind turbines. The possibilities of predicting the strength and reliability of different types of wood are studied in a series of experiments and computational investigations. Low-cost testing machines have been designed and employed for the systematic analysis of different sorts of Nepali wood to be used for wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood are developed, which should provide the basis for microstructure-based correlation of the observable and service properties of wood. Some correlations between the microstructure, strength and service properties of wood have been established.

  1. Rotating-coil calibration in a reference quadrupole, considering roll-angle misalignment and higher-order harmonics

    CERN Document Server

    AUTHOR|(CDS)2075492; Buzio, Marco; Köster, Oliver; Russenschuck, Stephan; Severino, Giordana

    2016-01-01

    A method is proposed for calibrating the radius of a rotating coil sensor by relaxing the metrological constraints on the alignment and field errors of the reference quadrupole. A coil radius calibration considering a roll-angle misalignment of the measurement bench, the magnet, and the motor-drive unit is analyzed. Then, the error arising from higher-order harmonic field imperfections in the reference quadrupole is assessed. The method is validated by numerical field computation for both the higher-order harmonic errors and the roll-angle misalignment. Finally, an experimental proof-of-principle demonstration is carried out in a calibration magnet with a sextupole harmonic.

  2. A Computational-Experimental Study of Plasma Processing of Carbides at High Temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Bronson, Arturo [Univ. of Texas, El Paso, TX (United States); Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2016-02-01

    The effects of plasma on carbides were computationally investigated with the ultimate goal of developing adherent, dense scales, such as Al2O3-TiO2 systems, for oxidation resistance at 1873 K. An alumina scale forms on the carbide because of the significant stability of Al2O3 as the outer scale adjacent to the gas phase compared to rutile, though TiO and Ti2O3 may form as components of an inner layer of a more complicated scale. A sequence of surface reactions starting with the adsorption of oxygen on the surface was applied to experimental data from Donnelly's research group, who reported the adsorption of O2 in a plasma atmosphere as a function of power. The adsorbed oxygen (Oad), in addition to being the rate-determining step, controlled the cascading reaction sequence of the adsorbed species AlO, AlO2 and AlO3, as indicated in the present study. The rate of oxygen adsorption also depends on the ratio of the final to initial adsorbed oxygen as a function of the oxygen potential. In a secondary research thrust, Ti3AlC was synthesized and subsequently oxidized. A 39Ti-14Al-47TiC (in wt%) mixture was equilibrated using a pseudo-isopiestic technique to ultimately form an aggregate of Ti3AlC, Ti2AlC and TiC phases. The aggregate was primarily composed of Ti3AlC with minor amounts of Ti2AlC and TiC, as determined by X-ray diffraction analysis. The Ti3AlC/Ti2AlC/TiC aggregate was subsequently oxidized at 1873 K to form a scale composed of an outer layer of Al2O3-TiO2-Al2TiO5 and an inner layer consisting of TiO-Al2O3-Al4CO3. The measured scale thickness grew according to Wagner's parabolic growth rate, which yields an estimated effective diffusion coefficient of 6 × 10−8 cm2/s. The scale
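
    Wagner's parabolic law used above relates scale thickness to time as x² = 2·kp·t, so a single thickness-time pair fixes the parabolic rate constant (and sets the order of magnitude of the effective diffusivity). A minimal sketch with illustrative numbers, not the paper's measurements:

```python
def parabolic_rate_constant(thickness_cm, time_s):
    """Wagner's parabolic rate law: x^2 = 2 * kp * t, solved for kp."""
    return thickness_cm**2 / (2.0 * time_s)

x = 25e-4            # hypothetical 25 um scale thickness, in cm
t = 5.0 * 3600.0     # hypothetical 5 h oxidation at 1873 K
kp = parabolic_rate_constant(x, t)
print(f"kp ~ {kp:.1e} cm2/s")   # order 1e-10 cm2/s for these numbers
```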

  3. An adaptive approach to metal artifact reduction in helical computed tomography for radiation therapy treatment planning: Experimental and clinical studies

    International Nuclear Information System (INIS)

    Yazdia, Mehran; Gingras, Luc; Beaulieu, Luc

    2005-01-01

    Purpose: In this article, an approach to metal artifact reduction is proposed that is practical for clinical use in radiation therapy. It is based on a new interpolation scheme for the projections associated with metal implants in helical computed tomography (CT) scanners. Methods and Materials: A three-step approach was developed, consisting of an automatic algorithm for metal implant detection, a correction algorithm for helical projections, and a new, efficient algorithm for projection interpolation. The modified raw projection data are transferred back to the CT scanner device, where CT slices are regenerated using the built-in reconstruction operator. The algorithm was tested on a CT calibration phantom in which the densities of the inserted objects are known, and on clinical prostate cases with two hip prostheses. The results are evaluated using the CT number and shape of the objects. Results: The validation on a CT calibration phantom with various inserts of known densities shows that the algorithm improved the overall image quality by restoring the shape and the representative CT number of the objects in the image. For the clinical hip replacement cases, a large fraction of the bladder, rectum, and prostate that was not visible on the original CT slices was recovered using the algorithm. Precise contouring of the target volume was thus feasible. Without this enhancement, physicians would have drawn bigger margins to be sure to include the target and, at the same time, could have prescribed a lower dose to keep the same level of normal tissue toxicity. Conclusions: In both the phantom experiment and the patient studies, the algorithm resulted in significant artifact reduction, with increases in the reliability of the planning procedure for the case of metallic hip prostheses. This algorithm is now used clinically as a preprocessing step before treatment planning for metal artifact reduction
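
    The projection-interpolation step at the heart of such methods can be sketched compactly: detector samples inside the metal trace of each view are replaced by values interpolated from unaffected neighbours. The code below is a generic illustration of that single step (metal segmentation and reconstruction are omitted), not the authors' specific algorithm.

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted detector bins by linear interpolation.

    sinogram, metal_mask: 2-D arrays of shape (views, detector bins).
    """
    corrected = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            corrected[v, bad] = np.interp(bins[bad], bins[~bad],
                                          sinogram[v, ~bad])
    return corrected

# Toy example: smooth sinogram rows with a corrupted band of bins.
sino = np.tile(np.sin(np.linspace(0, np.pi, 64)), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 30:34] = True
print(np.abs(interpolate_metal_trace(sino, mask) - sino).max())  # small residual
```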

  4. Novel Nano-Size Oxide Dispersion Strengthened Steels Development through Computational and Experimental Study

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Shizhong [Southern Univ. and A& M College, Baton Rouge, LA (United States)

    2016-05-30

    This report summarizes our recent work on theoretical modeling and simulation, together with experimental validation of the simulation results, in ferritic oxide dispersion strengthened (ODS) alloy research. Simulations of the stability and thermal dynamics of potential thermally stable candidates were performed, and related ODS samples were synthesized and characterized. The development of the simulation methods and experimental texture-validation techniques, the achievements already reached, course work development, student and postdoc training, and planned future improvements are briefly described.

  5. Observatory Magnetometer In-Situ Calibration

    Directory of Open Access Journals (Sweden)

    A Marusenkov

    2011-07-01

    Full Text Available An experimental validation is presented of the in-situ calibration procedure, which allows estimating parameters of observatory magnetometers (scale factors, sensor misalignment) without interrupting their operation. In order to verify the validity of the procedure, the records provided by two magnetometers calibrated independently in a coil system have been processed. The in-situ estimates of the parameters are in very good agreement with the values provided by the coil-system calibration.

  6. Overview of hypersonic CFD code calibration studies

    Science.gov (United States)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics and at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  7. Uniformity calibration for ICT image

    International Nuclear Information System (INIS)

    Zeng Gang; Liu Li; Que Jiemin; Zhang Yingping; Yin Yin; Wang Yanfang; Yu Zhongqiang; Yan Yonglian

    2004-01-01

    The uniformity of an ICT image is impaired by beam hardening and by inconsistencies in the responses of the detector units. The beam hardening and the nonlinearity of the detectors' output have been analyzed. The correction factors are determined experimentally from the detectors' responses at different absorption lengths. The artifacts in the CT image of a symmetrical aluminium cylinder have been eliminated after calibration. (author)

  8. Micro-Ramp Flow Control for Oblique Shock Interactions: Comparisons of Computational and Experimental Data

    Science.gov (United States)

    Hirt, Stephanie M.; Reich, David B.; O'Connor, Michael B.

    2012-01-01

    Computational fluid dynamics was used to study the effectiveness of micro-ramp vortex generators to control oblique shock boundary layer interactions. Simulations were based on experiments previously conducted in the 15- by 15-cm supersonic wind tunnel at the NASA Glenn Research Center. Four micro-ramp geometries were tested at Mach 2.0, varying the height, chord length, and spanwise spacing between micro-ramps. The overall flow field was examined. Additionally, key parameters such as boundary-layer displacement thickness, momentum thickness and incompressible shape factor were also examined. The computational results predicted the effects of the micro-ramps well, including the trends for the impact that the devices had on the shock boundary layer interaction. However, computing the shock boundary layer interaction itself proved to be problematic, since the calculations predicted more pronounced adverse effects on the boundary layer due to the shock than were seen in the experiment.

  9. RADIOMETRIC CALIBRATION OF MARS HiRISE HIGH RESOLUTION IMAGERY BASED ON FPGA

    Directory of Open Access Journals (Sweden)

    Y. Hou

    2016-06-01

    Full Text Available Due to the large data volume of HiRISE imagery, traditional radiometric calibration methods are not able to meet the fast-processing requirements. To solve this problem, a radiometric calibration system for HiRISE imagery based on a field-programmable gate array (FPGA) is designed. The montage gap between two channels caused by gray-level inconsistency is removed through histogram matching. The calibration system is composed of an FPGA and a DSP, which makes full use of the parallel processing ability of the FPGA and the fast computation as well as flexible control characteristics of the DSP. Experimental results show that the designed system consumes fewer hardware resources and that the real-time processing ability of radiometric calibration of HiRISE imagery is improved.
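
    Histogram matching, used above to remove the inter-channel montage gap, remaps one channel's gray levels so that its cumulative distribution matches a reference channel's. A minimal NumPy sketch with synthetic channel data (the channel statistics are illustrative assumptions):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` so its cumulative histogram matches `reference`."""
    src_vals, src_idx, src_cnt = np.unique(source.ravel(),
                                           return_inverse=True,
                                           return_counts=True)
    ref_vals, ref_cnt = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_cnt) / source.size
    ref_cdf = np.cumsum(ref_cnt) / reference.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

rng = np.random.default_rng(3)
chan_a = rng.normal(100.0, 10.0, size=(256, 256))   # reference channel
chan_b = rng.normal(110.0, 12.0, size=(256, 256))   # offset channel
matched = match_histogram(chan_b, chan_a)
print(matched.mean(), matched.std())  # close to channel A statistics
```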

  10. SoC-Based Edge Computing Gateway in the Context of the Internet of Multimedia Things: Experimental Platform

    Directory of Open Access Journals (Sweden)

    Maher Jridi

    2018-01-01

    Full Text Available This paper presents an algorithm/architecture and hardware/software co-design for implementing a digital edge computing layer on a Zynq platform in the context of the Internet of Multimedia Things (IoMT). Traditional cloud computing is no longer suitable for applications that require image processing, due to cloud latency and privacy concerns. With edge computing, data are processed, analyzed, and encrypted very close to the device, which makes it possible to secure data and act rapidly on connected things. The proposed edge computing system is composed of a reconfigurable module to simultaneously compress and encrypt multiple images, along with wireless image transmission and display functionalities. A lightweight implementation of the proposed design is obtained by approximate computing of the discrete cosine transform (DCT) and by using a simple chaotic generator, which greatly enhances the encryption efficiency. The deployed solution includes four configurations based on HW/SW partitioning in order to handle the compromise between execution time, area, and energy consumption. With the experimental setup it was found that, by moving more components to hardware execution, a timing speedup of more than nine times could be achieved with a negligible amount of energy consumption. The power efficiency was further enhanced by a ratio of 7.7 times.
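
    As a generic illustration of the chaos-based encryption idea mentioned above, the sketch below XORs pixel bytes with a logistic-map keystream. This shows the principle only; it is not the paper's cipher and is not cryptographically secure.

```python
import numpy as np

def logistic_keystream(n, x0=0.631, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_image(img_bytes, key):
    return np.bitwise_xor(img_bytes, key)

img = np.random.default_rng(4).integers(0, 256, 64 * 64, dtype=np.uint8)
ks = logistic_keystream(img.size)
enc = xor_image(img, ks)
dec = xor_image(enc, ks)          # XOR with the same keystream decrypts
print(np.array_equal(dec, img))   # True
```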

  11. Computer simulations of rare earth sites in glass: experimental tests and applications to laser materials

    International Nuclear Information System (INIS)

    Weber, M.J.

    1984-11-01

    Computer simulations of the microscopic structure of BeF2 glasses using molecular dynamics are reviewed and compared with x-ray and neutron diffraction, EXAFS, NMR, and optical measurements. Unique information about the site-to-site variations in the local environments of rare earth ions is obtained using optical selective excitation and laser-induced fluorescence line-narrowing techniques. Applications and limitations of computer simulations to the development of laser glasses and to predictions of other static and dynamic properties of glasses are discussed. 35 references, 2 figures, 2 tables

  12. Experimental detection of nonclassical correlations in mixed-state quantum computation

    International Nuclear Information System (INIS)

    Passante, G.; Moussa, O.; Trottier, D. A.; Laflamme, R.

    2011-01-01

    We report on an experiment to detect nonclassical correlations in a highly mixed state. The correlations are characterized by the quantum discord and are observed using four qubits in a liquid-state nuclear magnetic resonance quantum information processor. The state analyzed is the output of a DQC1 computation, whose input is a single quantum bit accompanied by n maximally mixed qubits. This model of computation outperforms the best known classical algorithms and, although it contains vanishing entanglement, it is known to have quantum correlations characterized by the quantum discord. This experiment detects nonvanishing quantum discord, ensuring the existence of nonclassical correlations as measured by the quantum discord.
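
    For context, the DQC1 ("one clean qubit") circuit estimates the normalized trace of a unitary: a Hadamard on the clean qubit, a controlled-U on the maximally mixed register, then measurement of <X> and <Y> on the clean qubit. The density-matrix sketch below uses a random unitary as a stand-in for the experiment's NMR pulse sequence.

        import numpy as np

        n = 3          # number of maximally mixed qubits
        dim = 2 ** n

        # Hypothetical random unitary standing in for the NMR sequence's unitary.
        rng = np.random.default_rng(2)
        a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        U, _ = np.linalg.qr(a)

        # DQC1 input: one clean qubit after a Hadamard, n maximally mixed qubits.
        plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
        rho = np.kron(plus, np.eye(dim) / dim)

        # Controlled-U conditioned on the clean qubit.
        cU = np.block([[np.eye(dim), np.zeros((dim, dim))],
                       [np.zeros((dim, dim)), U]])
        rho = cU @ rho @ cU.conj().T

        # <X> and <Y> on the clean qubit recover Re and Im of Tr(U)/2^n.
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]])
        ex = np.trace(np.kron(X, np.eye(dim)) @ rho).real
        ey = np.trace(np.kron(Y, np.eye(dim)) @ rho).real
        assert np.isclose(ex + 1j * ey, np.trace(U) / dim)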

  13. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    Science.gov (United States)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of existing soft-computing software by supporting comprehensive multidisciplinary functionalities, from management tools to engineering systems. Furthermore, the built-in features help the user process and analyze information more efficiently through a friendly yet powerful interface, and allow the user to add custom processing modules to the standard configuration of the software environment.

  14. A Comparative Study of Collagen Matrix Density Effect on Endothelial Sprout Formation Using Experimental and Computational Approaches.

    Science.gov (United States)

    Shamloo, Amir; Mohammadaliha, Negar; Heilshorn, Sarah C; Bauer, Amy L

    2016-04-01

    A thorough understanding of the determining factors in angiogenesis is a necessary step toward controlling the development of new blood vessels. Extracellular matrix (ECM) density is known to have a significant influence on cellular behaviors and consequently can regulate vessel formation. Combining experimental platforms with numerical models is a powerful way to explore the mechanisms of new capillary sprout formation. In this study, using an integrative method, the interplay between matrix density and angiogenesis was investigated. Because extracellular matrix density is a global parameter that affects other parameters such as pore size, stiffness, cell-matrix adhesion, and cross-linking, a deeper understanding of which biomechanical or biochemical properties of the ECM most strongly drive changes in sprout morphogenesis is crucial. Here, we implemented both computational and experimental methods to analyze the mechanisms responsible for the influence of ECM density on sprout formation, which would be difficult to investigate comprehensively with either method alone. For this purpose, we first used an innovative approach to quantify the correspondence between the simulated collagen fibril density and the collagen density used experimentally. Comparing the results of the experimental study and the computational model led to several notable outcomes. First, we verified the computational model against the experimental results. Then, we reported parameters, such as the ratio of proliferating to migrating cells, that are difficult to obtain experimentally. Finally, this integrative system yielded insight into the possible mechanisms responsible for the effect of ECM density on angiogenesis. The results showed that stable and long sprouts were observed at intermediate collagen matrix densities of 1.2 and 1.9 mg/ml due to a balance between the number of migrating and proliferating

  15. Vibrational and electronic spectra of 2-nitrobenzanthrone: An experimental and computational study

    Science.gov (United States)

    Onchoke, Kefa K.; Chaudhry, Saad N.; Ojeda, Jorge J.

    2016-01-01

    The environmental pollutant 2-nitrobenzanthrone (2-NBA) poses human health hazards and is formed by atmospheric reactions of NOx gases with atmospheric particulates. Though its mutagenic effects have been studied in biological systems, comprehensive spectroscopic experimental data are scarce. Thus, the vibrational and optical spectra (UV-Vis and fluorescence) of 2-NBA were studied both experimentally and with density functional theory, employing the B3LYP method with the 6-311+G(d,p) basis set. The scaled theoretical vibrational frequencies agree with experiment to within 5 cm-1. On the basis of normal coordinate analysis, complete assignments of the experimental infrared and Raman bands are made. The influence of the nitro group substitution on the benzanthrone structure, the symmetric CH vibrations, and the electronic spectra is noted. This study is useful for the development of spectroscopy-mutagenicity relationships in nitrated polycyclic aromatic hydrocarbons.

  16. Experimental and computational study on molecular structure and vibrational analysis of an antihyperglycemic biomolecule: Gliclazide

    Science.gov (United States)

    Karakaya, Mustafa; Kürekçi, Mehmet; Eskiyurt, Buse; Sert, Yusuf; Çırak, Çağrı

    2015-01-01

    In the present study, the experimental and theoretical harmonic vibrational frequencies of the gliclazide molecule have been investigated. The experimental FT-IR (400-4000 cm-1) and laser-Raman (100-4000 cm-1) spectra of the molecule in the solid phase were recorded. Theoretical vibrational frequencies and geometric parameters (bond lengths and bond angles) have been calculated using ab initio Hartree-Fock (HF) and density functional theory (B3LYP hybrid functional) methods with the 6-311++G(d,p) and 6-31G(d,p) basis sets in the Gaussian 09W program. The assignments of the vibrational frequencies were performed by potential energy distribution (PED) analysis using the VEDA 4 program. Theoretical optimized geometric parameters and vibrational frequencies have been compared with the corresponding experimental data and shown to be in good agreement with each other. Also, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies have been determined.
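
    As a rough illustration of this workflow with open-source tools (not the Gaussian 09W setup used in the study), harmonic frequencies can be computed with PySCF, assuming a recent PySCF release. The water geometry below is a small stand-in, since a full gliclazide input would be lengthy, and the geometry optimization the study performs first is omitted.

        from pyscf import gto, dft
        from pyscf.hessian import thermo

        # Small stand-in molecule; a gliclazide geometry would be supplied the same way.
        mol = gto.M(atom="O 0 0 0; H 0 -0.757 0.587; H 0 0.757 0.587",
                    basis="6-31g**")       # i.e., 6-31G(d,p)

        mf = dft.RKS(mol)
        mf.xc = "b3lyp"                    # hybrid functional used in the study
        mf.kernel()                        # SCF energy

        hess = mf.Hessian().kernel()                # analytic nuclear Hessian
        freq = thermo.harmonic_analysis(mol, hess)  # normal-mode analysis
        print(freq["freq_wavenumber"])              # harmonic frequencies in cm-1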

  17. Computational and experimental prediction of dust production in pebble bed reactors, Part II

    Energy Technology Data Exchange (ETDEWEB)

    Hiruta, Mie; Johnson, Gannon [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Rostamian, Maziar, E-mail: mrostamian@asme.org [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Potirniche, Gabriel P. [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Ougouag, Abderrafi M. [Idaho National Laboratory, 2525 N Fremont Avenue, Idaho Falls, ID 83401 (United States); Bertino, Massimo; Franzel, Louis [Department of Physics, Virginia Commonwealth University, Richmond, VA 23284 (United States); Tokuhiro, Akira [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States)

    2013-10-15

    Highlights: • A custom-built high temperature, high pressure tribometer is designed. • Two different wear phenomena at high temperatures are observed. • Experimental wear results for graphite are presented. • The graphite wear dust production in a typical Pebble Bed Reactor is predicted. -- Abstract: This paper is the continuation of Part I, which described wear tests of graphite-graphite frictional contact in a high temperature, high pressure helium environment. In the present work, the test conditions simulate a Pebble Bed Reactor (PBR) core environment more closely than in Part I. The experimental apparatus, a custom-designed tribometer, is capable of performing wear tests at PBR-relevant temperatures and pressures under a helium environment. This facilitates prediction of the wear mass loss of graphite, as dust particulates, from the pebble bed. The experimental results in the high temperature helium environment are used to predict the amount of wear mass produced in a pebble bed nuclear reactor.

  18. Combined experimental and computational modelling studies of the solubility of nickel in strontium titanate

    NARCIS (Netherlands)

    Beale, A.M.; Paul, M.; Sankar, G.; Oldman, R.J.; Catlow, R.A.; French, S.; Fowles, M.

    2009-01-01

    A combination of X-ray techniques and atomistic computational modelling has been used to study the solubility of Ni in SrTiO3 in relation to the application of this material for the catalytic partial oxidation of methane. The experiments have demonstrated that low temperature, hydrothermal synthesis

  19. Estimating valence from the sound of a word : Computational, experimental, and cross-linguistic evidence

    NARCIS (Netherlands)

    Louwerse, Max; Qu, Zhan

    2017-01-01

    It is assumed linguistic symbols must be grounded in perceptual information to attain meaning, because the sound of a word in a language has an arbitrary relation with its referent. This paper demonstrates that a strong arbitrariness claim should be reconsidered. In a computational study, we showed

  20. High-resolution subject-specific mitral valve imaging and modeling: experimental and computational methods.

    Science.gov (United States)

    Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2016-12-01

    The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (µCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for µCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against µCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.

  1. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    Science.gov (United States)

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold not only has to allow the diffusion of cells, nutrients, and oxygen but also to provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design it, coupled with additive manufacturing to build scaffolds to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering.

  2. Separation of electron ion ring components (computational simulation and experimental results)

    International Nuclear Information System (INIS)

    Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.

    1978-01-01

    The attainable polarization of electron-ion rings during acceleration, and the separation of the ring components at the final stage of acceleration, are studied. Results of computational simulation using the macroparticle method and of experiments on ring acceleration and separation are given, and the calculated results are compared with experiment.

  3. An experimental test of processes underlying self-disclosure in computer-mediated communication

    NARCIS (Netherlands)

    Schouten, A.P.; Valkenburg, P.M.; Peter, J.

    2009-01-01

    A consistent finding in computer-mediated communication (CMC) and Internet research is that, compared to face-to-face communication, CMC results in higher levels of self-disclosure. We identified four possible mediators that may carry the influence of CMC on self-disclosure: self-presentation,

  4. Experimental investigation of surface determination process on multi-material components for dimensional computed tomography

    DEFF Research Database (Denmark)

    Borges de Oliveira, Fabrício; Stolfi, Alessandro; Bartscher, Markus

    2016-01-01

    The possibility of measuring multi-material components while simultaneously assessing inner and outer features makes X-ray computed tomography (CT) the latest evolution in the field of coordinate measurement systems (CMSs). However, the difficulty in selecting suitable scanning parameters and su...

  5. Computational and experimental investigation of local stress fiber orientation in uniaxially and biaxially constrained microtissues

    NARCIS (Netherlands)

    Obbink - Huizer, C.; Foolen, J.; Oomens, C.W.J.; Borochin, M.A.; Chen, C.S.; Bouten, C.V.C.; Baaijens, F.P.T.

    2014-01-01

    The orientation of cells and associated F-actin stress fibers is essential for proper tissue functioning. We have previously developed a computational model that qualitatively describes stress fiber orientation in response to a range of mechanical stimuli. In this paper, the aim is to quantitatively

  6. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m³/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically sized fusion reactor building (approximately 10⁵ m³) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10⁻³ percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant in that its continuous return to the enclosure extends the cleanup time beyond the value predicted in the absence of any soaking mechanisms.

  7. A local computer network for the experimental data acquisition at BESSY

    International Nuclear Information System (INIS)

    Buchholz, W.

    1984-01-01

    For the users of the Berlin dedicated electron storage ring for synchrotron radiation (BESSY), a local computer network has been installed. The system is designed primarily for data acquisition and offers users generous hardware provision combined with maximum software flexibility.

  8. iCFD: Interpreted Computational Fluid Dynamics - Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design - The secondary clarifier.

    Science.gov (United States)

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy

    2015-10-15

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools used, e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and on system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS), as sketched below. The boundary condition sets are imposed on a 2-D axisymmetric CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii
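
    The LHS step referenced above is straightforward to reproduce; a sketch with SciPy's qmc module follows, where the factor names and bounds are purely illustrative placeholders, not the factors or values screened in the study.

        from scipy.stats import qmc

        # Hypothetical design/flow factors retained after screening, with
        # illustrative lower/upper bounds (units omitted).
        factors = ["inlet_flow", "feed_solids", "tank_depth", "weir_radius"]
        l_bounds = [50.0, 1.0, 2.5, 8.0]
        u_bounds = [400.0, 6.0, 5.0, 16.0]

        sampler = qmc.LatinHypercube(d=len(factors), seed=42)
        unit_sample = sampler.random(n=50)              # 50 sets in [0, 1)^d
        boundary_sets = qmc.scale(unit_sample, l_bounds, u_bounds)

        # Each row is one boundary-condition set to impose on a CFD run.
        print(boundary_sets.shape)                      # (50, 4)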

  9. Experimental comparison among the laboratories accredited within the framework of the European Co-operation for Accreditation on the calibration of radiation protection dosimeters in terms of the quantity air kerma

    International Nuclear Information System (INIS)

    Bovi, M.; Toni, M.P.; Tricomi, G.

    2002-01-01

    The European co-operation for Accreditation (EA) formalises the collaboration of the accreditation bodies of the Member States of the European Union and the European Free Trade Association, covering all conformity assessment activities. This collaboration is based on a Memorandum of Understanding dated 27 November 1997 and aims at developing and maintaining Multilateral Agreements (MLAs) within EA and with non-member accreditation bodies. MLA signatories guarantee uniformity of accreditation by continuous and rigorous evaluation. Based on mutual confidence, the MLAs recognise the equivalence of the accreditation systems administered by EA members and of certificates and reports issued by bodies accredited under these systems. A basic element of the programme to establish and maintain mutual confidence among calibration services is the participation of the accredited laboratories in experimental interlaboratory comparisons (ILCs) organised by EA members or other international organisations. The aim of these ILCs is to verify the technical equivalence of calibration services within the EA. The ILC dealt with in the present work was carried out over a period of two years, ending in May 2002. It involved the laboratories accredited in the ionising radiation field for calibration of dosimeters at radiation protection levels in terms of the quantity air kerma (K_air) due to 60Co and 137Cs gamma radiation. The ILC was planned by the EA expert group on ionising radiation and radioactivity and approved by the EA General Assembly in December 1999 with the title Calibration of a Radiation Protection Dosimeter, under the code IR3. The need for this comparison also resulted from an inquiry carried out in 1998 by the expert group among the different accreditation bodies that are members of EA or associated with EA. The organ