WorldWideScience

Sample records for level experimental validation

  1. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
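
    To illustrate how such metrics can be computed, the sketch below (Python) defines a cumulative uncertainty curve and a median relative uncertainty for model predictions compared against measured cross sections. The specific metric definitions are assumptions for illustration only and are not taken from the NUCFRG2/QMSFRG study itself; the cross-section values are invented.

    ```python
    import numpy as np

    def uncertainty_metrics(model, experiment):
        """Illustrative cumulative and median uncertainty metrics for a sparse
        validation database (definitions assumed, not quoted from the paper)."""
        model = np.asarray(model, dtype=float)
        experiment = np.asarray(experiment, dtype=float)
        # Relative deviation of each model prediction from its measured cross section
        rel_dev = np.abs(model - experiment) / np.abs(experiment)
        # Cumulative metric: fraction of points whose deviation stays below a threshold,
        # evaluated over a range of thresholds (an overall-accuracy curve)
        thresholds = np.linspace(0.0, 1.0, 101)
        cumulative = np.array([(rel_dev <= t).mean() for t in thresholds])
        # Median metric: robust central tendency, useful for small or outlier-prone subsets
        median_unc = np.median(rel_dev)
        return thresholds, cumulative, median_unc

    # Usage on a toy subset of cross sections (values are made up)
    t, cum, med = uncertainty_metrics([120.0, 95.0, 40.0], [110.0, 100.0, 55.0])
    print(f"median relative uncertainty: {med:.2f}")
    ```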

  2. 76 FR 81991 - National Spectrum Sharing Research Experimentation, Validation, Verification, Demonstration and...

    Science.gov (United States)

    2011-12-29

    ... NATIONAL SCIENCE FOUNDATION National Spectrum Sharing Research Experimentation, Validation... requirements of national level spectrum research, development, demonstration, and field trial facilities... to determine the optimal way to manage and use the radio spectrum. During Workshop I held at Boulder...

  3. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    Science.gov (United States)

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-24

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  4. Experimental validation of the HARMONIE code

    International Nuclear Information System (INIS)

    Bernard, A.; Dorsselaere, J.P. van

    1984-01-01

    An experimental program of deformation, in air, of different groups of subassemblies (7 to 41 subassemblies) was performed on a full-scale mock-up in the SPX1 geometry, in order to achieve a first experimental validation of the HARMONIE code. The agreement between tests and calculations was satisfactory: qualitatively for all groups, and quantitatively for regular groups of up to 19 subassemblies. The differences arise mainly from friction between pads and, secondarily, from the foot gaps. (author)

  5. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

    From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by the Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution, either individually or as mixtures: {sup 103}Rh, {sup 133}Cs, {sup nat}Nd, {sup 149}Sm, {sup 152}Sm, and {sup 155}Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the potential of these experimental data for improving the validation of full burnup credit calculations. The paper describes the three phases of the experimental program; the results of the preliminary evaluation; and the calculation and sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route of the CRISTAL criticality package for burnup credit applications. (authors)

  6. Preliminary Validation of the MATRA-LMR Code Using Existing Sodium-Cooled Experimental Data

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Kim, Sangji

    2014-01-01

    The main objective of the SFR prototype plant is to verify TRU metal fuel performance, reactor operation, and the transmutation ability of high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during the whole plant operation. The fuel design limit is highly dependent on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, an accurate temperature calculation in each subassembly is highly important to assure safe and reliable operation of the reactor systems. The current core thermal-hydraulic design is mainly performed using the SLTHEN (Steady-State LMR Thermal-Hydraulic Analysis Code Based on ENERGY Model) code, which has already been validated against existing sodium-cooled experimental data. In addition to the SLTHEN code, a detailed analysis is performed using the MATRA-LMR (Multichannel Analyzer for Transient and steady-state in Rod Array-Liquid Metal Reactor) code. In this work, the MATRA-LMR code is validated for single-subassembly evaluation using the same existing sodium-cooled experimental data. The results demonstrate that the design code appropriately predicts the temperature distributions compared with the experimental values. Major differences are observed in the experiments with large pin numbers, owing to differences in radial mixing.

  7. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  8. Design of passive directional acoustic devices using Topology Optimization - from method to experimental validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Fernandez Grande, Efren

    2016-01-01

    emission in two dimensions and is experimentally validated using three dimensional prints of the optimized designs. The emitted fields exhibit a level difference of at least 15 dB on axis relative to the off-axis directions, over frequency bands of approximately an octave. It is demonstrated to be possible...

  9. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Experimental validation of optimum resistance moment of concrete slabs reinforced ... other solutions to combat corrosion problems in steel reinforced concrete. ... Eight specimens of two-way spanning slabs reinforced with CFRP bars were ...

  10. Relationship of otolith strontium-to-calcium ratios and salinity: Experimental validation for juvenile salmonids

    Science.gov (United States)

    Zimmerman, C.E.

    2005-01-01

    Analysis of otolith strontium (Sr) or strontium-to-calcium (Sr:Ca) ratios provides a powerful tool to reconstruct the chronology of migration among salinity environments for diadromous salmonids. Although use of this method has been validated by examination of known individuals and translocation experiments, it has never been validated under controlled experimental conditions. In this study, incorporation of otolith Sr was tested across a range of salinities and resulting levels of ambient Sr and Ca concentrations in juvenile chinook salmon (Oncorhynchus tshawytscha), coho salmon (Oncorhynchus kisutch), sockeye salmon (Oncorhynchus nerka), rainbow trout (Oncorhynchus mykiss), and Arctic char (Salvelinus alpinus). Experimental water was mixed, using stream water and seawater as end members, to create experimental salinities of 0.1, 6.3, 12.7, 18.6, 25.5, and 33.0 psu. Otolith Sr and Sr:Ca ratios were significantly related to salinity for all species (r2 range: 0.80-0.91) but provide only enough predictive resolution to discriminate among fresh water, brackish water, and saltwater residency. These results validate the use of otolith Sr:Ca ratios to broadly discriminate salinity histories encountered by salmonids but highlight the need for further research concerning the influence of osmoregulation and physiological changes associated with smolting on otolith microchemistry.

  11. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    Science.gov (United States)

    Jacobus, Headley Stewart

    As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable way to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies, which frequently lack subsequent validation, and experimental hybrid system performance studies.

  12. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels and for temperature gradients along the flow direction, which are investigated by direct infrared imaging. The imaging shows that such gradients are present in fuel cell operation even at low current, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited by the high pressure drops they induce in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones, with good qualitative and quantitative agreement. The combination of accuracy and low computational time allows for the future use of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  13. Validation of new empirical model for self-leveling behavior of cylindrical particle beds based on experimental database

    International Nuclear Information System (INIS)

    Morita, Koji; Matsumoto, Tatsuya; Taketa, Shohei; Nishi, Shinpei; Cheng, Songbai; Suzuki, Tohru; Tobita, Yoshiharu

    2014-01-01

    During a material relocation phase of core disruptive accidents (CDAs) in sodium-cooled fast reactors (SFRs), debris beds can be formed in the lower plenum region due to rapid quenching and fragmentation of molten core materials. Heat removal from debris beds is crucial to achieve so-called in-vessel retention (IVR) of degraded core materials. Coolant boiling in the beds may lead to leveling of their mound shape, which changes the coolability of the decay-heated beds as well as their neutronic characteristics. To clarify the mechanisms underlying this self-leveling behavior, several series of experiments using simulant materials have been performed in collaboration between the Japan Atomic Energy Agency (JAEA) and Kyushu University in Japan. In the present study, experiments in a cylindrical system were employed to obtain experimental data on the self-leveling process of particle beds. In the experiments, to simulate the coolant boiling due to the decay heat in fuel, nitrogen gas was percolated uniformly through the bottom of a particle bed with a conical mound shape using a gas injection method. Time variations in bed height during the self-leveling process were measured while varying key experimental parameters: particle size, density, sphericity, and gas flow rate. Using a dimensional analysis approach, a new model was proposed to correlate the experimental data on transient bed height with an empirical equation based on a characteristic time of self-leveling development and a terminal equilibrium height of the bed. It was demonstrated that the proposed model predicts the self-leveling development of particle beds with reasonable accuracy in the present ranges of experimental conditions. (author)
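
    The abstract does not give the form of the empirical equation; purely for illustration, the sketch below (Python) fits an assumed exponential relaxation between an initial mound height and a terminal equilibrium height, with the characteristic time as a fit parameter. The data points and parameter values are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bed_height(t, h_inf, h0, tau):
        """Assumed relaxation form: bed height decays from the initial mound
        height h0 toward the equilibrium height h_inf with characteristic time tau."""
        return h_inf + (h0 - h_inf) * np.exp(-t / tau)

    # Hypothetical measurements: time [s] vs. bed height [cm]
    t_data = np.array([0.0, 30.0, 60.0, 120.0, 240.0, 480.0])
    h_data = np.array([20.0, 17.1, 15.0, 12.4, 10.8, 10.2])

    popt, _ = curve_fit(bed_height, t_data, h_data, p0=[10.0, 20.0, 100.0])
    h_inf, h0, tau = popt
    print(f"equilibrium height ~ {h_inf:.1f} cm, characteristic time ~ {tau:.0f} s")
    ```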

  14. Level validity of self-report whole-family measures.

    Science.gov (United States)

    Manders, Willeke A; Cook, William L; Oud, Johan H L; Scholte, Ron H J; Janssens, Jan M A M; De Bruyn, Eric E J

    2007-12-01

    This article introduces an approach to testing the level validity of family assessment instruments (i.e., whether a family instrument measures family functioning at the level of the system it purports to assess). Two parents and 2 adolescents in 69 families rated the warmth in each of their family relationships and in the family as a whole. Family members' ratings of whole-family warmth assessed family functioning not only at the family level (i.e., characteristics of the family as a whole) but also at the individual level of analysis (i.e., characteristics of family members as raters), indicating a lack of level validity. Evidence was provided for the level validity of a latent variable based on family members' ratings of whole-family warmth. The findings underscore the importance of assessing the level validity of individual ratings of whole-family functioning.

  15. Experimental validation of a topology optimized acoustic cavity

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole; Fernandez Grande, Efren

    2015-01-01

    This paper presents the experimental validation of an acoustic cavity designed using topology optimization with the goal of minimizing the sound pressure locally for monochromatic excitation. The presented results show good agreement between simulations and measurements. The effect of damping...

  16. Experimental Validation of a Permeability Model for Enrichment Membranes

    International Nuclear Information System (INIS)

    Orellano, Pablo; Brasnarof, Daniel; Florido, Pablo

    2003-01-01

    An experimental loop with a real-scale diffuser, in a single enrichment-stage configuration, was operated with air at different process conditions in order to characterize the membrane permeability. Using these experimental data, an analytical model based on the membrane geometry and morphology was validated. It is concluded that a new set of independent measurements, i.e. enrichment, is necessary in order to fully characterize diffusers, because their internal parameters are not uniquely determined by permeability data alone.

  17. Validation of the Performance of High-level Waste Disposal System

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Won Jin; Park, J. H.; Lee, J. O. (and others)

    2007-06-15

    Experimental research to validate the integrity and safety of the high-level waste disposal system was carried out. Studies on the construction of KURT and on the characteristics of the site rock were conducted. The thermo-hydro-mechanical behavior of the engineered barrier system was investigated using the engineering-scale test facility. The migration and retardation of radionuclides through rock fractures under anaerobic, reducing conditions were studied. The distribution coefficients of radionuclides onto granite, the rock matrix diffusion coefficients, and the gap and grain-boundary inventories of spent fuel were measured.

  18. The impact of crowd noise on officiating in Muay Thai: achieving external validity in an experimental setting

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2012-09-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the 'crowd noise' intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity, as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring 'home' and 'away' boxers. In each bout, judges were randomised into a 'noise' (live sound) or 'no crowd noise' (noise-cancelling headphones and white noise) condition, resulting in 59 judgements in the 'no crowd noise' and 61 in the 'crowd noise' condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five-round bouts with the 'ten point must' scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  19. The impact of crowd noise on officiating in muay thai: achieving external validity in an experimental setting.

    Science.gov (United States)

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the "crowd noise" intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring "home" and "away" boxers. In each bout, judges were randomized into a "noise" (live sound) or "no crowd noise" (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the "no crowd noise" and 61 in the "crowd noise" condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the "10-point must" scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  20. Development of a quality-assessment tool for experimental bruxism studies: reliability and validity.

    Science.gov (United States)

    Dawson, Andreas; Raphael, Karen G; Glaros, Alan; Axelsson, Susanna; Arima, Taro; Ernberg, Malin; Farella, Mauro; Lobbezoo, Frank; Manfredini, Daniele; Michelotti, Ambra; Svensson, Peter; List, Thomas

    2013-01-01

    To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews. Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity assessment, (4) reliability and discriminative validity assessment, and (5) instrument refinement. The kappa value and phi coefficient were calculated to assess inter-observer reliability and discriminative ability, respectively. Following preliminary decisions and a literature review, a list of 52 items to be considered for inclusion in the tool was compiled. Eleven experts were invited to join a Delphi panel and 10 accepted. Four Delphi rounds reduced the preliminary tool, the Quality-Assessment Tool for Experimental Bruxism Studies (Qu-ATEBS), to 8 items: study aim, study sample, control condition or group, study design, experimental bruxism task, statistics, interpretation of results, and conflict of interest statement. Consensus among the Delphi panelists yielded good face validity. Inter-observer reliability was acceptable (k = 0.77) and discriminative validity was excellent (phi coefficient 1.0). The tool, intended for use in systematic reviews of experimental bruxism studies, thus exhibits face validity, excellent discriminative validity, and acceptable inter-observer reliability. Development of quality-assessment tools for many other topics in the orofacial pain literature is needed and may follow the described procedure.
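
    As a minimal sketch (Python) of the two agreement statistics named above, the functions below compute Cohen's kappa and the phi coefficient from a 2x2 rating table; the counts are invented for illustration.

    ```python
    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa for a square agreement table (rows: rater A, cols: rater B)."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        p_observed = np.trace(table) / n
        p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
        return (p_observed - p_expected) / (1.0 - p_expected)

    def phi_coefficient(table):
        """Phi coefficient for a 2x2 table [[a, b], [c, d]]."""
        (a, b), (c, d) = np.asarray(table, dtype=float)
        return (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

    # Invented ratings of 20 studies by two observers (agreement counts)
    table = [[9, 1],
             [2, 8]]
    print(f"kappa = {cohens_kappa(table):.2f}, phi = {phi_coefficient(table):.2f}")
    ```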

  1. Hypoxia and oxidation levels of DNA and lipids in humans and animal experimental models

    DEFF Research Database (Denmark)

    Møller, Peter; Risom, Lotte; Lundby, Carsten

    2008-01-01

    The objective of this review was to evaluate the association between hypoxia and oxidative damage to DNA and lipids. Evaluation criteria encompassed specificity and validation status of the biomarkers, study design, strength of the association, dose-response relationship, biological plausibility......, analogous exposures, and effect modification by intervention. The collective interpretation indicates persuasive evidence from the studies in humans for an association between hypoxia and elevated levels of oxidative damage to DNA and lipids. The levels of oxidatively generated DNA lesions and lipid...... in subjects at high altitude. Most of the animal experimental models should be interpreted with caution because the assays for assessment of lipid peroxidation products have suboptimal validity....

  2. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat a l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Surete Nucleaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with taking actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% {sup 235}U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The applicability of the HTC data to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  3. Experimental validation of an ultrasonic flowmeter for unsteady flows

    Science.gov (United States)

    Leontidis, V.; Cuvier, C.; Caignaert, G.; Dupont, P.; Roussette, O.; Fammery, S.; Nivet, P.; Dazin, A.

    2018-04-01

    An ultrasonic flowmeter was developed for further applications in cryogenic conditions and for measuring flow rate fluctuations in the range of 0 to 70 Hz. The prototype was installed in a flow test rig and was validated experimentally in both steady and unsteady water flow conditions. A Coriolis flowmeter was used for the calibration under steady-state conditions, whereas in the unsteady case the validation was done simultaneously against two methods: particle image velocimetry (PIV), and pressure transducers installed flush on the wall of the pipe. The results show that the developed flowmeter and the proposed methodology can accurately measure the frequency and amplitude of unsteady fluctuations over the experimental range of 0-9 l s⁻¹ of mean main flow rate and 0-70 Hz of imposed disturbances.

  4. Experimental validation of a new heterogeneous mechanical test design

    Science.gov (United States)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure based on an indicator that evaluates the heterogeneity and the richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a finite element model updating (FEMU) inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
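
    A schematic of an FEMU-style identification loop of this kind is sketched below (Python). The finite-element response is replaced by a toy power-law function and the parameter names are placeholders; a real application would couple the optimizer to the FE model of the butterfly specimen and to the measured DIC strain fields.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_field(params):
        """Placeholder for the FE simulation: given material parameters, return the
        simulated field sampled at the measurement points. A real implementation
        would run the FE model of the specimen; a toy power-law response stands in."""
        k, n = params
        grid = np.linspace(0.0, 0.1, 50)   # hypothetical sampling points
        return k * grid**n

    def residuals(params, measured):
        # FEMU residual: pointwise difference between simulated and measured fields
        return simulate_field(params) - measured

    # Synthetic "measurement" generated from known parameters plus noise
    measured = simulate_field([500.0, 0.2]) + np.random.normal(0, 1e-3, 50)
    fit = least_squares(residuals, x0=[300.0, 0.3], args=(measured,))
    print("identified parameters:", fit.x)
    ```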

  5. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    Energy Technology Data Exchange (ETDEWEB)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated to be accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a "benchmark" database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related

  6. Experimental validation of the containment codes ASTARTE and SEURBNUK

    International Nuclear Information System (INIS)

    Kendall, K.C.; Arnold, L.A.; Broadhouse, B.J.; Jones, A.; Yerkess, A.; Benuzzi, A.

    1979-10-01

    The fast reactor containment codes ASTARTE and SEURBNUK are being validated against data from the COVA series of small scale experiments being performed jointly by the UKAEA and JRC Ispra. The experimental programme is nearly complete, and data are given. (U.K.)

  7. Numerical calibration and experimental validation of a PCM-Air heat exchanger model

    International Nuclear Information System (INIS)

    Stathopoulos, N.; El Mankibi, M.; Santamouris, Mattheos

    2017-01-01

    Highlights: • Development of a PCM-Air heat exchanger experimental unit and its numerical model. • Differential scanning calorimetry for PCM properties. • Inadequacy of DSC-obtained heat capacity curves. • Creation of adequate heat capacity curves depending on heat transfer rates. • Comparison of numerical and experimental results and validation of the model. - Abstract: Ambitious goals have been set at the international, European and French levels for decreasing the energy consumption and greenhouse gas emissions of the building sector. Achieving them requires renewable energy integration, a technology that nevertheless presents an important drawback: intermittent energy production. In response, thermal energy storage (TES) applications have been developed in order to correlate the energy production and consumption of the building. Phase change materials (PCMs) have been widely used in TES applications as they offer a high storage density and an adequate phase change temperature range. It is important to know the thermophysical properties of the PCM accurately, both for experimental (system design) and numerical (correct prediction) purposes. In this paper, the fabrication of a PCM-Air experimental prototype is presented first, along with the development of a numerical model simulating the downstream temperature evolution of the heat exchanger. Particular focus is given to the calibration method and the validation of the model using experimental characterization results. Differential scanning calorimetry (DSC) is used to define the thermal properties of the PCM. Initial numerical results are underestimated compared to experimental ones. Various factors were investigated, pointing to the inadequacy of the heat capacity parameter, as DSC results depend on heating/cooling rates. Adequate heat capacity curves were empirically determined, depending on heat transfer rates and based on DSC results and experimental observations. The results of the proposed model

  8. Experimental validation of calculated atomic charges in ionic liquids

    Science.gov (United States)

    Fogarty, Richard M.; Matthews, Richard P.; Ashworth, Claire R.; Brandt-Talbot, Agnieszka; Palgrave, Robert G.; Bourne, Richard A.; Vander Hoogerstraete, Tom; Hunt, Patricia A.; Lovelock, Kevin R. J.

    2018-05-01

    A combination of X-ray photoelectron spectroscopy and near edge X-ray absorption fine structure spectroscopy has been used to provide an experimental measure of nitrogen atomic charges in nine ionic liquids (ILs). These experimental results are used to validate charges calculated with three computational methods: charges from electrostatic potentials using a grid-based method (ChelpG), natural bond orbital population analysis, and the atoms in molecules approach. By combining these results with those from a previous study on sulfur, we find that ChelpG charges provide the best description of the charge distribution in ILs. However, we find that ChelpG charges can lead to significant conformational dependence and therefore advise that small differences in ChelpG charges (<0.3 e) should be interpreted with care. We use these validated charges to provide physical insight into nitrogen atomic charges for the ILs probed.

  9. DMFC performance and methanol cross-over: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-10-15

    A combined experimental and modelling approach is proposed to analyze methanol cross-over and its effect on DMFC performance. The experimental analysis is performed in order to allow an accurate investigation of methanol cross-over influence on DMFC performance, hence measurements were characterized in terms of uncertainty and reproducibility. The findings suggest that methanol cross-over is mainly determined by diffusion transport and affects cell performance partly via methanol electro-oxidation at the cathode. The modelling analysis is carried out to further investigate methanol cross-over phenomenon. A simple model evaluates the effectiveness of two proposed interpretations regarding methanol cross-over and its effects. The model is validated using the experimental data gathered. Both the experimental analysis and the proposed and validated model allow a substantial step forward in the understanding of the main phenomena associated with methanol cross-over. The findings confirm the possibility to reduce methanol cross-over by optimizing anode feeding. (author)

  10. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria for the validation of experimental and evaluated covariance data are reviewed. In particular, the following are described: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation is considered and four particular points are examined. First, preserving the positive definiteness of covariance matrices under arbitrary transformations of a random vector is considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least-squares method) are studied. Second, an algorithm for the comparison of experimental and estimated 'integral' uncertainties is developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of the 'integral' uncertainty for vectors of experimental and estimated values. Third, a set of statistical invariants, i.e. values which are preserved in statistical processing, is presented. Fourth, an inequality is given that signals correlations between experimental errors that lead to unphysical values. An application is given concerning the cross section of the (n,t) reaction on Li-6 for incident neutron energies between 1 and 100 keV.
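
    A minimal sketch (Python) of two of the checks listed above: positive definiteness together with the square-root-of-determinant 'integral' uncertainty, and propagation of a covariance matrix through a linear transformation of the random vector. The example matrix is invented.

    ```python
    import numpy as np

    def check_covariance(cov, tol=1e-12):
        """Basic validation checks on a covariance matrix: symmetry, positive
        definiteness, and an 'integral' uncertainty taken as sqrt(det(cov))."""
        cov = np.asarray(cov, dtype=float)
        symmetric = bool(np.allclose(cov, cov.T))
        eigvals = np.linalg.eigvalsh(cov)
        positive_definite = bool(np.all(eigvals > tol))
        integral_uncertainty = float(np.sqrt(np.linalg.det(cov)))
        return symmetric, positive_definite, integral_uncertainty

    def transform_covariance(cov, A):
        """Propagate a covariance matrix through a linear transformation y = A x.
        The congruence A @ cov @ A.T preserves positive (semi-)definiteness."""
        return A @ np.asarray(cov, dtype=float) @ A.T

    cov = np.array([[4.0, 1.2], [1.2, 2.0]])          # invented 2x2 covariance
    A = np.array([[1.0, 1.0], [0.0, 1.0]])            # e.g. collapsing two groups
    print(check_covariance(cov))
    print(check_covariance(transform_covariance(cov, A)))
    ```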

  11. Experimental validation of a mathematical model for seabed liquefaction in waves

    DEFF Research Database (Denmark)

    Sumer, B. Mutlu; Kirca, Özgür; Fredsøe, Jørgen

    2011-01-01

    This paper summarizes the results of an experimental study directed towards the validation of a mathematical model for the buildup of pore water pressure and resulting liquefaction of marine soils under progressive waves. Experiments were conducted under controlled conditions with silt (d50 = 0.070 mm) in a wave flume with a soil pit. Waves with wave heights in the range 7.7-18 cm, a water depth of 55 cm and a wave period of 1.6 s enabled us to study pore water pressure buildup in both the liquefaction and no-liquefaction regimes. The experimental data were used to validate the model. A numerical...

  12. Tyre tread-block friction: modelling, simulation and experimental validation

    Science.gov (United States)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motor cycles, cars, buses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which allows effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls to be taken into account, little is known about the friction between tread-block elements and the road. This is particularly obvious when snow, ice, water or a third-body layer is present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  13. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations

    International Nuclear Information System (INIS)

    Streek, Jacco van de; Neumann, Marcus A.

    2010-01-01

    The accuracy of a dispersion-corrected density functional theory method is validated against 241 experimental organic crystal structures from Acta Cryst. Section E. This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect

  14. Validation of NEPTUNE-CFD two-phase flow models using experimental data

    International Nuclear Information System (INIS)

    Perez-Manes, Jorge; Sanchez Espinoza, Victor Hugo; Bottcher, Michael; Stieglitz, Robert; Sergio Chiva Vicent

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BWR BFBT and PSBT Benchmark. Since the two-phase models of CFD codes are extensively being improved, the validation is a key step for the acceptability of such codes. The validation work is performed in the frame of the European NURISP Project and it was focused on the steady state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Due to the coupling of heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is improved significantly. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvements of the NEPTUNE-CFD code were identified, too. (authors)

  15. [Evaluation of Suicide Risk Levels in Hospitals: Validity and Reliability Tests].

    Science.gov (United States)

    Macagnino, Sandro; Steinert, Tilman; Uhlmann, Carmen

    2018-05-01

    Examination of in-hospital suicide risk levels with respect to their validity and reliability. The internal suicide risk levels were evaluated in a cross-sectional study of 163 inpatients. Reliability was checked by determining the interrater reliability among the senior physician, the therapist, and the responsible nurse. Within the scope of the validity check, we conducted analyses of criterion validity and construct validity. For the total sample, an "acceptable" to "good" interrater reliability (Kendall's W = .77) of the suicide risk levels was obtained. Schizophrenic disorders showed the lowest values, while personality disorders showed the highest level of interrater reliability. In the criterion validity check, Item 9 of the BDI-II was substantially correlated with the suicide risk levels (ρ_m = .54). In the construct validity check, affective disorders showed the highest correlation (ρ = .77), compatible with "convergent validity"; schizophrenic disorders showed the least concordance (ρ = .43). Owing to their overall good validity and reliability, in-hospital suicide risk levels may represent an important contribution to the assessment of suicidal behavior in inpatients undergoing psychiatric treatment. © Georg Thieme Verlag KG Stuttgart · New York.
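
    For reference, a minimal sketch (Python) of Kendall's coefficient of concordance W, the interrater statistic reported above; the rank data are invented and the correction for ties is omitted.

    ```python
    import numpy as np

    def kendalls_w(ranks):
        """Kendall's W for a (raters x items) array of ranks, without tie correction."""
        ranks = np.asarray(ranks, dtype=float)
        m, n = ranks.shape                      # m raters, n rated patients/items
        rank_sums = ranks.sum(axis=0)
        s = np.sum((rank_sums - rank_sums.mean()) ** 2)
        return 12.0 * s / (m**2 * (n**3 - n))

    # Invented example: 3 raters each rank-ordering 5 patients by suicide risk
    ranks = np.array([[1, 2, 3, 4, 5],
                      [2, 1, 3, 5, 4],
                      [1, 2, 4, 3, 5]])
    print(f"Kendall's W = {kendalls_w(ranks):.2f}")
    ```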

  16. Topology Optimization for Wave Propagation Problems with Experimental Validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk

    designed using the proposed method is provided. A novel approach for designing meta material slabs with selectively tuned negative refractive behavior is outlined. Numerical examples demonstrating the behavior of a slab under different conditions is provided. Results from an experimental studydemonstrating...... agreement with numerical predictions are presented. Finally an approach for designing acoustic wave shaping devices is treated. Three examples of applications are presented, a directional sound emission device, a wave splitting device and a flat focusing lens. Experimental results for the first two devices......This Thesis treats the development and experimental validation of density-based topology optimization methods for wave propagation problems. Problems in the frequency regime where design dimensions are between approximately one fourth and ten wavelengths are considered. All examples treat problems...

  17. Experimental level densities of atomic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Guttormsen, M.; Bello Garrote, F.L.; Eriksen, T.K.; Giacoppo, F.; Goergen, A.; Hagen, T.W.; Klintefjord, M.; Larsen, A.C.; Nyhus, H.T.; Renstroem, T.; Rose, S.J.; Sahin, E.; Siem, S.; Tornyi, T.G.; Tveten, G.M. [University of Oslo, Department of Physics, Oslo (Norway); Aiche, M.; Ducasse, Q.; Jurado, B. [University of Bordeaux, CENBG, CNRS/IN2P3, B.P. 120, Gradignan (France); Bernstein, L.A.; Bleuel, D.L. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Byun, Y.; Voinov, A. [Ohio University, Department of Physics and Astronomy, Athens, Ohio (United States); Gunsing, F. [CEA Saclay, DSM/Irfu/SPhN, Cedex (France); Lebois, L.; Leniau, B.; Wilson, J. [Institut de Physique Nucleaire d' Orsay, Orsay Cedex (France); Wiedeking, M. [iThemba LABS, P.O. Box 722, Somerset West (South Africa)

    2015-12-15

    It is almost 80 years since Hans Bethe described the level density as a non-interacting gas of protons and neutrons. In all these years, experimental data were interpreted within this picture of a fermionic gas. However, the renewed interest of measuring level density using various techniques calls for a revision of this description. In particular, the wealth of nuclear level densities measured with the Oslo method favors the constant-temperature level density over the Fermi-gas picture. From the basis of experimental data, we demonstrate that nuclei exhibit a constant-temperature level density behavior for all mass regions and at least up to the neutron threshold. (orig.)
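
    For context, the two standard level-density parameterizations contrasted in this record are commonly written as follows (textbook forms, not equations quoted from the paper; T, E_0, a, E_1 and σ are fit parameters):

    ```latex
    % Constant-temperature form
    \rho_{\mathrm{CT}}(E) = \frac{1}{T}\,\exp\!\left(\frac{E - E_0}{T}\right)

    % Back-shifted Fermi-gas form
    \rho_{\mathrm{FG}}(E) = \frac{\exp\!\left(2\sqrt{a\,(E - E_1)}\right)}
                                 {12\sqrt{2}\,\sigma\,a^{1/4}\,(E - E_1)^{5/4}}
    ```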

  18. CVThresh: R Package for Level-Dependent Cross-Validation Thresholding

    Directory of Open Access Journals (Sweden)

    Donghoh Kim

    2006-04-01

    The core of the wavelet approach to nonparametric regression is thresholding of wavelet coefficients. This paper reviews a cross-validation method for the selection of the thresholding value in wavelet shrinkage of Oh, Kim, and Lee (2006), and introduces the R package CVThresh implementing details of the calculations for the procedures. The procedure is implemented by coupling a conventional cross-validation with a fast imputation method, so that it overcomes the data-length limitation of a power of 2. It can easily be applied to classical leave-one-out cross-validation and K-fold cross-validation. Since the procedure is computationally fast, a level-dependent cross-validation can be developed for wavelet shrinkage of data with various sparseness according to levels.
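
    The selection loop can be sketched as follows (Python with the PyWavelets package rather than the R package described above; the neighbour-interpolation imputation is a simplified stand-in for the fast imputation scheme of Oh, Kim, and Lee, and the test signal is invented):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def cv_threshold(y, wavelet="db4", k=4, candidates=None):
        """Choose a soft-threshold for wavelet shrinkage by K-fold cross-validation.
        Held-out samples are replaced by linear interpolation of their neighbours."""
        n = len(y)
        x = np.arange(n)
        if candidates is None:
            candidates = np.linspace(0.0, np.max(np.abs(y)), 25)
        folds = [np.arange(i, n, k) for i in range(k)]
        scores = []
        for thr in candidates:
            err = 0.0
            for hold in folds:
                keep = np.setdiff1d(x, hold)
                y_imp = y.copy()
                y_imp[hold] = np.interp(hold, keep, y[keep])   # simple imputation
                coeffs = pywt.wavedec(y_imp, wavelet)
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                        for c in coeffs[1:]]
                y_hat = pywt.waverec(coeffs, wavelet)[:n]
                err += np.mean((y_hat[hold] - y[hold]) ** 2)
            scores.append(err / k)
        return candidates[int(np.argmin(scores))]

    # Noisy test signal of arbitrary length (no power-of-2 restriction here)
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 300)
    y = np.sin(4 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
    print("selected threshold:", cv_threshold(y))
    ```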

  19. Architectural-level power estimation and experimentation

    Science.gov (United States)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 µm and 0.8 µm technology and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the switching activity on the on-chip instruction cache data buses. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide a valuable insight for

  1. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist the design of Thermal Barrier Coating (TBC) bond coats and top coats. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For the experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  2. Experimental verification of the imposing principle for maximum permissible levels of multicolor laser radiation

    Directory of Open Access Journals (Sweden)

    Ivashin V.A.

    2013-12-01

    Aims: The study presents the results of experimental research to verify the imposing (superposition) principle for maximum permissible levels (MPL) of single exposure of the eyes to multicolor laser radiation. This principle of the independence of the effects of radiation at each wavelength (the imposing principle) was established and generalized to a wide range of exposure conditions. As the analysis of the literature shows, experimental verification of this approach with regard to the impact of laser radiation on the tissue of the fundus of the eye had not previously been carried out. Material and methods: The experiments used lasers generating radiation at the wavelengths λ1 = 0.532 µm, λ2 = 0.556-0.562 µm and λ3 = 0.619-0.621 µm. Experiments were carried out on the eyes of rabbits with an evenly pigmented fundus. Results: Comparison of the processed experimental data with the calculated data shows that these levels are close in their parameters. Conclusions: For the first time in the Russian Federation, experimental studies on the validity of the imposing principle for multicolor laser radiation acting on the organ of vision have been performed. In view of the objective agreement between the experimental data and the calculated data, we can conclude that the mathematical formulas work.

  3. Modeling of surge in free-spool centrifugal compressors : experimental validation

    NARCIS (Netherlands)

    Gravdahl, J.T.; Willems, F.P.T.; Jager, de A.G.; Egeland, O.

    2004-01-01

    The derivation of a compressor characteristic, and the experimental validation of a dynamic model for a variable speed centrifugal compressor using this characteristic, are presented. The dynamic compressor model of Fink et al. is used, and a variable speed compressor characteristic is derived by

  4. Experimental validation of wireless communication with chaos

    International Nuclear Information System (INIS)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S.; Grebogi, Celso

    2016-01-01

    The constraints of a wireless physical media, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the match filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a match filter.

  5. Experimental validation of wireless communication with chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian [Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xian University of Technology, Xian 710048 (China); Baptista, Murilo S.; Grebogi, Celso [Institute for Complex System and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen AB24 3UE (United Kingdom)

    2016-08-15

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a matched filter.

  6. Experimental validation of wireless communication with chaos.

    Science.gov (United States)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso

    2016-08-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a matched filter.

  7. Validation of Experimental whole-body SAR Assessment Method in a Complex Indoor Environment

    DEFF Research Database (Denmark)

    Bamba, Aliou; Joseph, Wout; Vermeeren, Gunter

    2012-01-01

    Experimentally assessing the whole-body specific absorption rate (SARwb) in a complex indoor environment is very challenging. An experimental method based on room electromagnetics theory (accounting for only the Line-Of-Sight component as the specular path) to assess the whole-body SAR is validated by numerical...... of the proposed method is that it avoids the computational burden because it does not use any discretizations. Results show good agreement between measurement and computation at 2.8 GHz, as long as the plane wave assumption is valid, i.e., for large distances from the transmitter. Relative deviations 0...

  8. Physics of subcritical multiplying regions and experimental validation

    International Nuclear Information System (INIS)

    Salvatores, M.

    1996-01-01

    The coupling of a particle accelerator with a spallation target and with a subcritical multiplying region was proposed in the fifties and is called here a hybrid system. This article gives some ideas about the energy balance of such a system. The possibilities of experimental validation of some properties of a subcritical multiplying region using the MASURCA facility at CEA-Cadarache are examined. The results of a preliminary experiment called MUSE are presented. (A.C.)

  9. Role of calibration, validation, and relevance in multi-level uncertainty integration

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Calibration of model parameters is an essential step in predicting the response of a complicated system, but the lack of data at the system level makes it impossible to conduct this quantification directly. In such a situation, system model parameters are estimated using tests at lower levels of complexity which share the same model parameters with the system. For such a multi-level problem, this paper proposes a methodology to quantify the uncertainty in the system level prediction by integrating calibration, validation and sensitivity analysis at different levels. The proposed approach considers the validity of the models used for parameter estimation at lower levels, as well as the relevance of the lower level to the prediction at the system level. The model validity is evaluated using a model reliability metric, and models with multivariate output are considered. The relevance is quantified by comparing Sobol indices at the lower level and system level, thus measuring the extent to which a lower level test represents the characteristics of the system so that the calibration results can be reliably used at the system level. Finally, the results of calibration, validation and relevance analysis are integrated in a roll-up method to predict the system output. - Highlights: • Relevance analysis to quantify the closeness of two models. • Stochastic model reliability metric to integrate multiple validation experiments. • Extend the model reliability metric to deal with multivariate output. • Roll-up formula to integrate calibration, validation, and relevance.
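
    The relevance measure described here compares first-order Sobol indices of the shared parameters at the lower level and at the system level. A minimal Monte Carlo estimator for first-order indices (the pick-freeze, Saltelli-type form) is sketched below with two toy models standing in for the lower-level and system-level responses; the models, parameter count and ranges are illustrative assumptions, not those of the paper.

```python
import numpy as np

# First-order Sobol' indices via the pick-freeze (Saltelli-type) estimator.
# lower_model and system_model are toy stand-ins sharing parameters x1, x2;
# comparing their per-parameter indices mimics the relevance check.

rng = np.random.default_rng(1)
N, d = 20000, 2

def lower_model(x):   # toy lower-level response
    return x[:, 0] + 0.3 * x[:, 1] ** 2

def system_model(x):  # toy system-level response
    return np.sin(x[:, 0]) + 0.9 * x[:, 1]

def first_order_sobol(model, N, d):
    A = rng.uniform(0, 1, size=(N, d))
    B = rng.uniform(0, 1, size=(N, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                      # freeze all columns except i
        S[i] = np.mean(yB * (model(AB) - yA)) / var
    return S

S_lower = first_order_sobol(lower_model, N, d)
S_system = first_order_sobol(system_model, N, d)
print("lower-level indices :", np.round(S_lower, 3))
print("system-level indices:", np.round(S_system, 3))
```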

  10. Experimental Validation of a Wave Energy Converter Array Hydrodynamics Tool

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Ferri, Francesco; Kofoed, Jens Peter

    2017-01-01

    This paper uses experimental data to validate a wave energy converter (WEC) array hydrodynamics tool developed within the context of linearized potential flow theory. To this end, wave forces and power absorption by an array of five-point absorber WECs in monochromatic and panchromatic waves were...

  11. Validation of a Wave-Body Interaction Model by Experimental Tests

    DEFF Research Database (Denmark)

    Ferri, Francesco; Kramer, Morten; Pecher, Arthur

    2013-01-01

    Within the wave energy field, numerical simulation has recently acquired a worldwide consent as being a useful tool, besides physical model testing. The main goal of this work is the validation of a numerical model by experimental results. The numerical model is based on a linear wave-body intera...

  12. Multiphysics modelling and experimental validation of high concentration photovoltaic modules

    International Nuclear Information System (INIS)

    Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.

    2017-01-01

    Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.
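
    Of the three sub-models combined in this multiphysics procedure, the single diode electrical model is the one most easily reduced to a worked example: the cell current satisfies the implicit equation I = I_L − I_0[exp((V + I·R_s)/(n·V_t)) − 1] − (V + I·R_s)/R_sh, which can be solved per voltage point with a scalar root finder. The parameter values below are illustrative placeholders for a generic cell, not the monomodule characterized in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode model of a solar cell: solve the implicit I-V equation for the
# current at each terminal voltage. Parameter values are illustrative only.
IL, I0, n, Rs, Rsh, T = 5.0, 1e-9, 1.3, 0.02, 200.0, 298.15
k, q = 1.380649e-23, 1.602176634e-19
Vt = k * T / q                                   # thermal voltage

def current(V):
    f = lambda I: IL - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) \
                     - (V + I * Rs) / Rsh - I
    return brentq(f, -IL, 2 * IL)                # bracketed scalar root find

voltages = np.linspace(0.0, 0.72, 200)
currents = np.array([current(V) for V in voltages])
power = voltages * currents
print(f"estimated maximum power point: {power.max():.2f} W at {voltages[power.argmax()]:.3f} V")
```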

  13. Experimental Validation of a Dynamic Model for Lightweight Robots

    Directory of Open Access Journals (Sweden)

    Alessandro Gasparetto

    2013-03-01

    Full Text Available Nowadays, one of the main topics in robotics research is dynamic performance improvement by means of a lightening of the overall system structure. The effective motion and control of these lightweight robotic systems relies on suitable motion planning and control processes. In order to do so, model-based approaches can be adopted by exploiting accurate dynamic models that take into account the inertial and elastic terms that are usually neglected in a heavy rigid link configuration. In this paper, an effective method for modelling spatial lightweight industrial robots based on an Equivalent Rigid Link System approach is considered from an experimental validation perspective. A dynamic simulator implementing the formulation is used and an experimental test-bench is set up. Experimental tests are carried out with a benchmark L-shape mechanism.

  14. Dynamic modeling and experimental validation for direct contact membrane distillation (DCMD) process

    KAUST Repository

    Eleiwi, Fadi

    2016-02-01

    This work proposes a mathematical dynamic model for the direct contact membrane distillation (DCMD) process. The model is based on a 2D Advection–Diffusion Equation (ADE), which describes the heat and mass transfer mechanisms that take place inside the DCMD module. The model studies the behavior of the process in the time-varying and steady state phases, contributing to understanding the process performance, especially when it is driven by an intermittent energy supply, such as solar energy. The model is experimentally validated in the steady state phase, where the permeate flux is measured for different feed inlet temperatures and the maximum absolute error recorded is 2.78 °C. Moreover, experimental validation includes the time variation phase, where the feed inlet temperature ranges from 30 °C to 75 °C with a 0.1 °C increment every 2 min. The validation shows a relative error of less than 5%, which indicates a strong correlation between the model predictions and the experiments.
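
    The core of such a model is a 2D advection-diffusion equation of the form ∂T/∂t + u ∂T/∂x = α(∂²T/∂x² + ∂²T/∂y²). A minimal explicit finite-difference update for an equation of this form is sketched below; the grid, velocity, diffusivity and boundary conditions are illustrative assumptions and do not reproduce the DCMD module geometry or the paper's discretization.

```python
import numpy as np

# Explicit finite-difference step for a 2D advection-diffusion equation
#   dT/dt + u dT/dx = alpha * (d2T/dx2 + d2T/dy2)
# Illustrative grid, velocity and diffusivity; not the DCMD geometry or BCs.

nx, ny = 60, 30
dx = dy = 1e-3                               # m
u = 0.05                                     # m/s, streamwise velocity
alpha = 1.5e-7                               # m^2/s, thermal diffusivity
dt = 0.2 * min(dx / u, dx**2 / alpha)        # conservative stable time step

T = np.full((ny, nx), 30.0)                  # deg C, initial field
T[:, 0] = 75.0                               # hot feed inlet (Dirichlet, illustrative)

def step(T):
    Tn = T.copy()
    adv = -u * (T[1:-1, 1:-1] - T[1:-1, :-2]) / dx            # upwind in x (u > 0)
    dif = alpha * ((T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2 +
                   (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2)
    Tn[1:-1, 1:-1] += dt * (adv + dif)
    return Tn                                # edges other than the inlet stay at 30 C

for _ in range(2000):
    T = step(T)
print(f"mid-channel temperature after {2000 * dt:.1f} s: {T[ny // 2, nx // 2]:.1f} C")
```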

  15. Dynamic modeling and experimental validation for direct contact membrane distillation (DCMD) process

    KAUST Repository

    Eleiwi, Fadi; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Francis, Lijo; Laleg-Kirati, Taous-Meriem

    2016-01-01

    This work proposes a mathematical dynamic model for the direct contact membrane distillation (DCMD) process. The model is based on a 2D Advection–Diffusion Equation (ADE), which describes the heat and mass transfer mechanisms that take place inside the DCMD module. The model studies the behavior of the process in the time-varying and steady state phases, contributing to understanding the process performance, especially when it is driven by an intermittent energy supply, such as solar energy. The model is experimentally validated in the steady state phase, where the permeate flux is measured for different feed inlet temperatures and the maximum absolute error recorded is 2.78 °C. Moreover, experimental validation includes the time variation phase, where the feed inlet temperature ranges from 30 °C to 75 °C with a 0.1 °C increment every 2 min. The validation shows a relative error of less than 5%, which indicates a strong correlation between the model predictions and the experiments.

  16. High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM

    Science.gov (United States)

    Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.

    System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.

  17. Validity of combined fibromyalgia (FM) questionnaires to assess physical activity levels in Spanish elderly women: an experimental approach.

    Science.gov (United States)

    Cancela, José María; Varela, Silvia; Alvarez, María José; Molina, Antonio; Ayán, Carlos; Martín, Vicente

    2011-01-01

    Questionnaires designed to assess the level of physical activity among elderly Spanish-speaking women usually have problems of reproducibility and are difficult to administer. This study aims to validate a Spanish combined version of two questionnaires originally designed to assess physical activity levels in fibromyalgia women: the leisure time physical activity instrument (LTPAI) and the physical activity at home and work instrument (PAHWI). Both questionnaires were translated to Spanish using translation/back-translation methodology, and then were administered twice to 44 women aged 60-80, with an interval of 2 weeks. During the first administration, participants also answered the Yale physical activity questionnaire (YPAS) and performed the 6-min walking test (6MWT). Although the Spanish versions of the LTPAI and the PAHWI showed poor test-retest reliability and poor construct validity, the sum of the two questionnaires showed much better associations. The results suggest that the Spanish combined version of the LTPAI and PAHWI would seem to be a useful tool for assessing the level of physical activity among elderly Spanish-speaking women. Nevertheless, such considerations as the cultural adaptation of their content or the link between the intensity of physical activity as perceived and that actually done must be adjusted for greater efficiency. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
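
    Test-retest reliability and construct validity of this kind are typically summarized with simple correlation statistics: the two administrations of the combined score are correlated with each other, and the first administration is correlated with the criterion measures (YPAS score, 6MWT distance). The sketch below uses made-up data and is not the study's dataset or its exact statistical procedure.

```python
import numpy as np
from scipy.stats import pearsonr

# Test-retest reliability and construct validity via Pearson correlations.
# The arrays are made-up placeholders, not the study's data.
rng = np.random.default_rng(2)
n = 44
combined_t1 = rng.normal(50, 10, n)                       # LTPAI + PAHWI, first administration
combined_t2 = combined_t1 + rng.normal(0, 6, n)           # second administration, 2 weeks later
ypas        = 0.6 * combined_t1 + rng.normal(0, 8, n)     # criterion questionnaire score
walk_6mwt   = 4.0 * combined_t1 + rng.normal(0, 60, n)    # 6-min walking distance (m)

r_retest, p_retest = pearsonr(combined_t1, combined_t2)
r_ypas, _ = pearsonr(combined_t1, ypas)
r_walk, _ = pearsonr(combined_t1, walk_6mwt)
print(f"test-retest r = {r_retest:.2f} (p = {p_retest:.3f})")
print(f"construct validity: r(YPAS) = {r_ypas:.2f}, r(6MWT) = {r_walk:.2f}")
```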

  18. Experimental validation of solid rocket motor damping models

    Science.gov (United States)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2017-12-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe
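
    A common way to convert frequency-domain structural (hysteretic) damping into equivalent viscous damping for transient analysis is to match dissipation at a reference frequency, for example a global matrix C = (η/ω_ref)·K, or modal damping ratios ζ_i = η/2 applied at each mode's own frequency. The sketch below illustrates these two textbook conversions on a small spring-mass system with illustrative values; it is not one of the specific equivalent viscous damping methods assessed in the paper, nor the launcher model.

```python
import numpy as np

# Two textbook conversions of structural (hysteretic) damping, loss factor eta,
# into equivalent viscous damping for time-domain analysis:
#   (a) global matrix  C = (eta / w_ref) * K, exact only at w_ref;
#   (b) modal ratios   zeta_i = eta / 2, applied at each mode's own frequency.
# Small 2-DOF example with illustrative values; not the paper's test article.

eta = 0.06
M = np.diag([2.0, 1.0])                         # kg
K = np.array([[3.0e5, -1.0e5],
              [-1.0e5, 1.0e5]])                 # N/m

w2, phi = np.linalg.eig(np.linalg.solve(M, K))  # eigenproblem of M^-1 K
wn = np.sqrt(np.sort(w2.real))                  # natural frequencies (rad/s)

w_ref = wn[0]                                   # match dissipation at the first mode
C_global = (eta / w_ref) * K                    # (a) stiffness-proportional viscous matrix

zeta_modal = np.full(wn.shape, eta / 2.0)       # (b) modal damping ratios
print("natural frequencies [rad/s]:", np.round(wn, 1))
print("equivalent viscous matrix C [N s/m]:\n", np.round(C_global, 2))
print("modal damping ratios:", zeta_modal)
```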

  19. Experimental validation of solid rocket motor damping models

    Science.gov (United States)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2018-06-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe

  20. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations.

    Science.gov (United States)

    van de Streek, Jacco; Neumann, Marcus A

    2010-10-01

    This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
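
    The indicator used here, the r.m.s. Cartesian displacement of non-hydrogen atoms between experimental and energy-minimized structures, is straightforward to compute once the two structures are expressed in the same Cartesian frame. The sketch below shows only that final step on a tiny made-up fragment; the cell mapping/overlay that precedes it in the actual workflow is not shown.

```python
import numpy as np

# r.m.s. Cartesian displacement of non-hydrogen atoms between an experimental
# and an energy-minimized structure, assuming both are already expressed in the
# same Cartesian frame (the required cell mapping/overlay step is not shown).

def rmsd_non_hydrogen(symbols, xyz_exp, xyz_min):
    mask = np.array([s != "H" for s in symbols])
    d = xyz_exp[mask] - xyz_min[mask]
    return np.sqrt(np.mean(np.sum(d * d, axis=1)))

# Tiny illustrative fragment (angstroms), not a real crystal structure.
symbols = ["C", "O", "H", "N"]
xyz_exp = np.array([[0.00, 0.00, 0.00],
                    [1.22, 0.05, 0.01],
                    [0.10, 1.02, 0.03],
                    [2.01, 1.15, 0.02]])
xyz_min = xyz_exp + np.array([[0.03, -0.02, 0.01],
                              [0.05,  0.04, 0.00],
                              [0.30,  0.10, 0.05],   # H atom: excluded from the metric
                              [0.06, -0.05, 0.02]])
print(f"r.m.s. displacement (non-H): {rmsd_non_hydrogen(symbols, xyz_exp, xyz_min):.3f} angstrom")
```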

  1. Dislocation-mediated strain hardening in tungsten: Thermo-mechanical plasticity theory and experimental validation

    Science.gov (United States)

    Terentyev, Dmitry; Xiao, Xiazi; Dubinko, A.; Bakaeva, A.; Duan, Huiling

    2015-12-01

    A self-consistent thermo-mechanical model to study the strain-hardening behavior of polycrystalline tungsten was developed and validated by a dedicated experimental route. Dislocation-dislocation multiplication and storage, as well as dislocation-grain boundary (GB) pinning, were the major mechanisms underlying the evolution of plastic deformation, thus providing a link between the strain hardening behavior and the material's microstructure. The microstructure of the polycrystalline tungsten samples has been thoroughly investigated by scanning and transmission electron microscopy. The model was applied to compute stress-strain loading curves of commercial tungsten grades, in the as-received and as-annealed states, in the temperature range of 500-1000 °C. By fitting the model to independent experimental results obtained using a single crystal and as-received polycrystalline tungsten, the model demonstrated its capability to predict the deformation behavior of as-annealed samples over a wide range of temperature and applied strain. The relevance of the dislocation-mediated plasticity mechanisms used in the model has been validated using transmission electron microscopy examination of samples deformed up to different amounts of strain. On the basis of the experimental validation, the limitations of the model are determined and discussed.
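
    Dislocation multiplication/storage and dynamic recovery of the kind described here are often written in the generic Kocks-Mecking form dρ/dε = k₁√ρ − k₂ρ, with the flow stress following a Taylor relation σ = σ₀ + αMGb√ρ. The sketch below integrates that generic form to produce a hardening curve; the coefficients are illustrative, and the grain-boundary pinning term and temperature dependence of the paper's actual model are not included.

```python
import numpy as np

# Generic Kocks-Mecking dislocation-density evolution with Taylor hardening:
#   d(rho)/d(eps) = k1*sqrt(rho) - k2*rho ,  sigma = sigma0 + alpha*M*G*b*sqrt(rho)
# Coefficients are illustrative; GB pinning and temperature effects are omitted.

k1, k2 = 3.0e8, 12.0          # storage (1/m) and dynamic-recovery coefficients
sigma0 = 400.0e6              # Pa, friction stress
alpha, M = 0.3, 3.06          # Taylor constant, Taylor factor
G, b = 161.0e9, 2.74e-10      # Pa, m (tungsten-like shear modulus, Burgers vector)

rho = 1.0e12                  # 1/m^2, initial dislocation density
deps = 1.0e-4
strain, stress = [0.0], [sigma0 + alpha * M * G * b * np.sqrt(rho)]
for i in range(2000):         # integrate up to 20% plastic strain
    rho += deps * (k1 * np.sqrt(rho) - k2 * rho)
    strain.append((i + 1) * deps)
    stress.append(sigma0 + alpha * M * G * b * np.sqrt(rho))

print(f"flow stress at {strain[-1]:.2f} strain: {stress[-1] / 1e6:.0f} MPa")
```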

  2. Summary: Experimental validation of real-time fault-tolerant systems

    Science.gov (United States)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test. And this process is quite a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows a flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  3. An Experimentally Validated Control Strategy of Maglev Vehicle-Bridge Self-Excited Vibration

    Directory of Open Access Journals (Sweden)

    Lianchun Wang

    2017-01-01

    Full Text Available This study discusses an experimentally validated control strategy of maglev vehicle-bridge vibration, which degrades the stability of the suspension control, deteriorates the ride comfort, and limits the cost of the magnetic levitation system. First, a comparison between the current-loop and magnetic flux feedback is carried out and a minimum model including flexible bridge and electromagnetic levitation system is proposed. Then, advantages and disadvantages of the traditional feedback architecture with the displacement feedback of electromagnet yE and bridge yB in pairs are explored. The results indicate that removing the feedback of the bridge’s displacement yB from the pairs (yE − yB) measured by the eddy-current sensor is beneficial for the passivity of the levitation system and the control of the self-excited vibration. In this situation, the signal acquisition of the electromagnet’s displacement yE is discussed for the engineering application. Finally, to validate the effectiveness of the aforementioned control strategy, numerical validations are carried out and the experimental data are provided and analyzed.

  4. Experimental Validation of a Differential Variational Inequality-Based Approach for Handling Friction and Contact in Vehicle

    Science.gov (United States)

    2015-11-20

    Experimental validation of a differential variational inequality (DVI)-based approach for handling friction and contact in vehicle simulation, with terrain modeled using the discrete element method (DEM). The validation experiments include sinkage and single wheel tests; the report also introduces the modeling of frictional contact via differential variational inequalities.

  5. Numerical simulation and experimental validation of coiled adiabatic capillary tubes

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Valladares, O. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico (UNAM), Apdo. Postal 34, 62580 Temixco, Morelos (Mexico)

    2007-04-15

    The objective of this study is to extend and validate the model developed and presented in previous works [O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part I: mathematical formulation and numerical model, Applied Thermal Engineering 22 (2) (2002) 173-182; O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part II: experimental validation and parametric studies, Applied Thermal Engineering 22 (4) (2002) 379-391] to coiled adiabatic capillary tube expansion devices working with pure and mixed refrigerants. The discretized governing equations are coupled using an implicit step by step method. A special treatment has been implemented in order to consider transitions (subcooled liquid region, metastable liquid region, metastable two-phase region and equilibrium two-phase region). All the flow variables (enthalpies, temperatures, pressures, vapor qualities, velocities, heat fluxes, etc.) together with the thermophysical properties are evaluated at each point of the grid in which the domain is discretized. The numerical model allows analysis of aspects such as geometry, type of fluid (pure substances and mixtures), critical or non-critical flow conditions, metastable regions, and transient aspects. Comparison of the numerical simulation with a wide range of experimental data presented in the technical literature will be shown in the present article in order to validate the model developed. (author)

  6. The International Experimental Thermal Hydraulic Systems database – TIETHYS: A new NEA validation tool

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, Upendra S.

    2018-07-22

    Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered in different locations and in different formats. Some of the data is in danger of being lost. A relational database is being developed to organize the international thermal hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, that data is organized to include separate effect tests and integral effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert developed PIRTs. The database will provide a summary of appropriate data, review of facility information, test description, instrumentation, references for the experimental data and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data and is to be expanded to include references for molten salt reactors. There are place holders for high temperature gas cooled reactors, CANDU and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database and currently resides at Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/

  7. Experimental validation of lead cross sections for scale and MCNP

    International Nuclear Information System (INIS)

    Henrikson, D.J.

    1995-01-01

    Moving spent nuclear fuel between facilities often requires the use of lead-shielded casks. Criticality safety that is based upon calculations requires experimental validation of the fuel matrix and lead cross section libraries. A series of critical experiments using a high-enriched uranium-aluminum fuel element with a variety of reflectors, including lead, has been identified. Twenty-one configurations were evaluated in this study. The fuel element was modelled for KENO V.a and MCNP 4a using various cross section sets. The experiments addressed in this report can be used to validate lead-reflected calculations. Factors influencing the calculated k_eff which require further study include diameters of styrofoam inserts and homogenization
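
    Validation of cross-section libraries for criticality safety typically proceeds by computing k_eff for a suite of critical benchmarks and extracting a calculational bias and its uncertainty, which then feed an upper subcritical limit. The sketch below shows that bookkeeping with made-up benchmark results; it is not the KENO/MCNP evaluation of the 21 configurations in this record, and the administrative margin is only a placeholder.

```python
import numpy as np

# Simple bias/uncertainty bookkeeping for a criticality validation suite:
# bias = mean(k_calc) - 1 for critical benchmarks, then a crude upper
# subcritical limit USL = 1 + bias - bias_uncertainty - admin_margin.
# The k_eff values and the 0.05 administrative margin are placeholders.

k_calc = np.array([0.9962, 0.9987, 1.0013, 0.9941, 0.9978,
                   1.0002, 0.9969, 0.9955, 0.9990, 0.9974])

bias = k_calc.mean() - 1.0
bias_unc = k_calc.std(ddof=1) * 2.0          # ~95% coverage, ignoring MC statistics
usl = 1.0 + bias - bias_unc - 0.05           # placeholder administrative margin

print(f"bias = {bias:+.4f}, bias uncertainty (2s) = {bias_unc:.4f}, USL = {usl:.4f}")
```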

  8. MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.

    Science.gov (United States)

    Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan

    2016-02-01

    A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.
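
    At the signal level, the pulse-echo (two-way) response can be thought of as the excitation pulse convolved with the transducer impulse response on transmit, with the medium reflectivity, and with the impulse response again on receive. The toy convolution chain below illustrates that idea only; it ignores the matching networks, propagation attenuation and front-end electronics that the MATLAB/Simulink models described here capture, and all waveforms are placeholders.

```python
import numpy as np

# Toy pulse-echo chain: excitation * h_transducer * reflectivity * h_transducer.
# Ignores matching networks, attenuation and front-end electronics.
fs = 40e6                                   # sampling rate (Hz)
t = np.arange(0, 2e-6, 1 / fs)
f0, bw = 5e6, 0.6
h = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 0.5e-6) * f0 * bw) ** 2)  # toy transducer IR

excitation = np.zeros(400)
excitation[:8] = 1.0                        # short high-voltage pulse (normalized)

reflectivity = np.zeros(4000)
reflectivity[[1200, 2600]] = [1.0, 0.5]     # two point reflectors at different depths

echo = np.convolve(np.convolve(np.convolve(excitation, h), reflectivity), h)
print(f"echo length: {echo.size} samples, peak amplitude: {np.abs(echo).max():.3f}")
```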

  9. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and on their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbo-machineries or in other energy-related applications, and have been selected for ETDE. (J.S.)

  10. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied the modeling of the adult larynx, but the mechanisms of the newborn's voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent® with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  11. Numerical modeling and experimental validation of thermoplastic composites induction welding

    Science.gov (United States)

    Palmieri, Barbara; Nele, Luigi; Galise, Francesco

    2018-05-01

    In this work, a numerical simulation and an experimental test of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic Polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMag Designer), the influence of the fundamental process parameters such as temperature, current and holding time was investigated. In order to validate the results of the simulations, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with the aid of an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single lap shear tests.

  12. Electromagnetic scattering problems -Numerical issues and new experimental approaches of validation

    Energy Technology Data Exchange (ETDEWEB)

    Geise, Robert; Neubauer, Bjoern; Zimmer, Georg [University of Braunschweig, Institute for Electromagnetic Compatibility, Schleinitzstrasse 23, 38106 Braunschweig (Germany)

    2015-03-10

    Electromagnetic scattering problems, that is, the question of how radiated energy spreads when impinging on an object, are an essential part of wave propagation. Though Maxwell's differential equations, as the starting point, are actually quite simple, the integral formulation of an object's boundary conditions, and hence the solution for the unknown induced currents, can only be obtained numerically in most cases. As a timely topic of practical importance, the scattering of rotating wind turbines is discussed, the numerical description of which is still based on rigorous approximations with yet unspecified accuracy. In this context, the issue of validating numerical solutions is addressed, both with reference simulations and, in particular, with the experimental approach of scaled measurements. For the latter, the idea of an incremental validation is proposed, allowing a step-by-step validation of required new mathematical models in scattering theory.

  13. Experimental validation of calculation methods for structures having shock non-linearity

    International Nuclear Information System (INIS)

    Brochard, D.; Buland, P.

    1987-01-01

    For the seismic analysis of non-linear structures, numerical methods have been developed which need to be validated against experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components will illustrate this presentation.

  14. Three-dimensional shape optimization of a cemented hip stem and experimental validations.

    Science.gov (United States)

    Higa, Masaru; Tanino, Hiromasa; Nishimura, Ikuya; Mitamura, Yoshinori; Matsuno, Takeo; Ito, Hiroshi

    2015-03-01

    This study proposes novel optimized stem geometry with low stress values in the cement using a finite element (FE) analysis combined with an optimization procedure and experimental measurements of cement stress in vitro. We first optimized an existing stem geometry using a three-dimensional FE analysis combined with a shape optimization technique. One of the most important factors in the cemented stem design is to reduce stress in the cement. Hence, in the optimization study, we minimized the largest tensile principal stress in the cement mantle under a physiological loading condition by changing the stem geometry. As the next step, the optimized stem and the existing stem were manufactured to validate the usefulness of the numerical models and the results of the optimization in vitro. In the experimental study, strain gauges were embedded in the cement mantle to measure the strain in the cement mantle adjacent to the stems. The overall trend of the experimental study was in good agreement with the results of the numerical study, and we were able to reduce the largest stress by more than 50% in both shape optimization and strain gauge measurements. Thus, we could validate the usefulness of the numerical models and the results of the optimization using the experimental models. The optimization employed in this study is a useful approach for developing new stem designs.

  15. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools has been launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  16. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
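
    A common first-cut approximation for such partially nested (partially clustered) designs is to size the trial as a standard two-arm comparison and then inflate the clustered (experimental) arm by the design effect 1 + (m − 1)ρ, where m is the group size and ρ the intraclass correlation. The sketch below implements only that rough approximation; it is not the paper's closed-form formulae, which handle the two- and three-level hierarchies explicitly, and the effect size, group size and ρ used are illustrative.

```python
from math import ceil
from scipy.stats import norm

# First-cut sample-size approximation for a partially nested two-arm trial:
# size a standard two-sample comparison, then inflate the clustered
# (experimental) arm by the design effect 1 + (m - 1) * rho.
# A rough approximation, not the paper's derived formulae.

def partially_nested_n(delta, m, rho, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = 2 * (z / delta) ** 2          # unclustered two-sample size per arm
    deff = 1 + (m - 1) * rho                  # design effect for the clustered arm
    n_exp = ceil(n_per_arm * deff)            # experimental (group-based) arm
    n_ctrl = ceil(n_per_arm)                  # control (individual) arm
    return n_exp, n_ctrl

n_exp, n_ctrl = partially_nested_n(delta=0.4, m=8, rho=0.05)
print(f"experimental arm: {n_exp} subjects (groups of 8), control arm: {n_ctrl} subjects")
```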

  17. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    International Nuclear Information System (INIS)

    Terzuoli, F.; Galassi, M.C.; Mazzini, D.; D'Auria, F.

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFDs) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mecanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX), and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. Obtained results suggest the relevance of three-dimensional effects and stress the importance of a suitable interface drag modelling

  18. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    Directory of Open Access Journals (Sweden)

    F. Terzuoli

    2008-01-01

    Full Text Available Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFDs) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX), and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. Obtained results suggest the relevance of three-dimensional effects and stress the importance of a suitable interface drag modelling.

  19. Computational Fluid Dynamics Modeling of the Human Pulmonary Arteries with Experimental Validation.

    Science.gov (United States)

    Bordones, Alifer D; Leroux, Matthew; Kheyfets, Vitaly O; Wu, Yu-An; Chen, Chia-Yuan; Finol, Ender A

    2018-05-21

    Pulmonary hypertension (PH) is a chronic progressive disease characterized by elevated pulmonary arterial pressure, caused by an increase in pulmonary arterial impedance. Computational fluid dynamics (CFD) can be used to identify metrics representative of the stage of PH disease. However, experimental validation of CFD models is often not pursued due to the geometric complexity of the model or uncertainties in the reproduction of the required flow conditions. The goal of this work is to validate experimentally a CFD model of a pulmonary artery phantom using a particle image velocimetry (PIV) technique. Rapid prototyping was used for the construction of the patient-specific pulmonary geometry, derived from chest computed tomography angiography images. CFD simulations were performed with the pulmonary model with a Reynolds number matching those of the experiments. Flow rates, the velocity field, and shear stress distributions obtained with the CFD simulations were compared to their counterparts from the PIV flow visualization experiments. Computationally predicted flow rates were within 1% of the experimental measurements for three of the four branches of the CFD model. The mean velocities in four transversal planes of study were within 5.9 to 13.1% of the experimental mean velocities. Shear stresses were qualitatively similar between the two methods with some discrepancies in the regions of high velocity gradients. The fluid flow differences between the CFD model and the PIV phantom are attributed to experimental inaccuracies and the relative compliance of the phantom. This comparative analysis yielded valuable information on the accuracy of CFD predicted hemodynamics in pulmonary circulation models.

  20. Validation of the CATHARE2 code against experimental data from Brayton-cycle plants

    International Nuclear Information System (INIS)

    Bentivoglio, Fabrice; Tauveron, Nicolas; Geffraye, Genevieve; Gentner, Herve

    2008-01-01

    In recent years the Commissariat a l'Energie Atomique (CEA) has commissioned a wide range of feasibility studies of future advanced nuclear reactors, in particular gas-cooled reactors (GCR). The thermohydraulic behaviour of these systems is a key issue for, among other things, the design of the core, the assessment of thermal stresses, and the design of decay heat removal systems. These studies therefore require efficient and reliable simulation tools capable of modelling the whole reactor, including the core, the core vessel, piping, heat exchangers and turbo-machinery. CATHARE2 is a thermal-hydraulic 1D reference safety code developed and extensively validated for the French pressurized water reactors. It has recently been adapted to deal also with gas-cooled reactor applications. In order to validate CATHARE2 for these new applications, CEA has initiated an ambitious long-term experimental program. The foreseen experimental facilities range from small-scale loops for physical correlations, to component technology and system demonstration loops. In the short-term perspective, CATHARE2 is being validated against existing experimental data, in particular from the German power plants Oberhausen I and II. These facilities have both been operated by the German utility Energie Versorgung Oberhausen (E.V.O.) and their power conversion systems resemble the high-temperature reactor concepts: Oberhausen I is a 13.75-MWe Brayton-cycle air turbine plant, and Oberhausen II is a 50-MWe Brayton-cycle helium turbine plant. The paper presents these two plants, the adopted CATHARE2 modelling and a comparison between experimental data and code results for both steady state and transient cases.

  1. CFD Modeling and Experimental Validation of a Solar Still

    Directory of Open Access Journals (Sweden)

    Mahmood Tahir

    2017-01-01

    Full Text Available Earth is the densest planet of the solar system, with a total area of 510.072 million square km. Over 71.68% of this area is covered with water, leaving a scant 28.32% for humans to inhabit. Fresh water accounts for only 2.5% of the total volume; the rest is brackish water. Presently, the world is facing a major shortage of potable water. This issue can be addressed by converting brackish water into potable water through a solar distillation process, and the solar still is specifically designed for this purpose. The efficiency of a solar still explicitly depends on its design parameters, such as wall material, chamber depth, width and slope of the condensing surface. This study was aimed at investigating the solar still parameters using CFD modeling and experimental validation. The simulation data of ANSYS-FLUENT were compared with actual experimental data. A close agreement between the simulated and experimental results was seen in the presented work. It reveals that ANSYS-FLUENT is a potent tool to analyse the efficiency of new designs of solar distillation systems.

  2. Experimental Testing Procedures and Dynamic Model Validation for Vanadium Redox Flow Battery Storage System

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per Bromand

    2013-01-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing...... efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs...

  3. Impact-friction vibrations of tubular systems. Numerical simulation and experimental validation

    International Nuclear Information System (INIS)

    Jacquart, G.

    1993-05-01

    This note presents a summary on the numerical developments made to simulate impact-friction vibrations of tubular systems, detailing the algorithms used and the expression of impact and friction forces. A synthesis of the experimental results obtained on MASSIF workbench is also presented, as well as their comparison with numerical computations in order to validate the numerical approach. (author). 5 refs

  4. Experimental validation of models for Plasma Focus devices

    International Nuclear Information System (INIS)

    Rodriguez Palomino, Luis; Gonzalez, Jose; Clausse, Alejandro

    2003-01-01

    Plasma Focus (PF) devices are thermonuclear pulsators that produce short pulsed radiation (X-rays, charged particles and neutrons). Since Filippov and Mather, these devices have been used to study plasma properties. Nowadays the interest in PF is focused on technology applications, related to the use of these devices as pulsed neutron sources. For the numerical calculations, the inter-institutional PLADEMA (PLAsmas DEnsos MAgnetizados) network is developing three models. Each one is useful in different engineering stages of the Plasma Focus design. One of the main objectives of this work is a comparative study of the influence of the different parameters involved in each model. To validate these results, several experimental measurements under different geometries and initial conditions were performed. (author)

  5. Experimental validation of the buildings energy performance (PEC) assessment methods with reference to occupied spaces heating

    Directory of Open Access Journals (Sweden)

    Cristian PETCU

    2010-01-01

    Full Text Available This paper is part of a series of pre-standardization research aimed at analyzing the existing methods of calculating the Buildings Energy Performance (PEC) with a view to correcting or completing them. The entire research activity aims to experimentally validate the PEC Calculation Algorithm, as well as to comparatively apply, on the support of several case studies focused on buildings representative of the Romanian building stock, the PEC calculation methodology for buildings equipped with occupied spaces heating systems. The targets of the report are the experimental testing of the calculation models known so far (NP 048-2000, Mc 001-2006, SR EN 13790:2009), on the support provided by the CE INCERC Bucharest experimental building, together with the complex calculation algorithms specific to dynamic modeling, for the evaluation of the occupied spaces heat demand in the cold season, specific to traditional buildings and to modern buildings equipped with solar radiation passive systems of the ventilated solar space type. The schedule of the measurements performed in the 2008-2009 cold season is presented, as well as the primary processing of the measured data and the experimental validation of the monthly heat demand calculation methods, on the support of CE INCERC Bucharest. The calculation error per heating season (153 days of measurements) between the measured heat demand and the calculated one was 0.61%, an exceptional value confirming the phenomenological nature of the INCERC method, NP 048-2006. The mathematical model specific to the hourly thermal balance is recurrent-decisional with alternating paces. The experimental validation of the theoretical model is based on the measurements performed on the CE INCERC Bucharest building within a period of 57 days (06.01-04.03.2009). The measurements performed on the CE INCERC Bucharest building confirm the accuracy of the hourly calculation model by comparison to the values

  6. Experimental Results on the Level Crossing Intervals of the Phase of Sine Wave Plus Noise

    Science.gov (United States)

    Youssef, Neji; Munakata, Tsutomu; Mimaki, Tadashi

    1993-03-01

    Experimental study was made on the level crossing intervals of a phase process of a sine wave plus narrow-band Gaussian noise. Since successive level crossings of phase do not necessarily occur alternately in the upward and downward direction due to the phase jump beyond 2π, the usual definitions of the probability densities of the level crossing intervals for continuous random processes are not applicable in the case of the phase process. Therefore, the probability densities of level crossing intervals of phase process are newly defined. Measurements of these densities were performed for noise having lowpass spectra of Gaussian and 7th order Butterworth types. Results are given for various values of the signal-to-noise power ratio and of the crossing level, and compared with corresponding approximation developed under the assumption of quasi-independence. The validity of the assumption depends on the spectrum shape of the noise.
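
    Operationally, the level crossing intervals of a sampled phase record can be measured by locating sign changes of (φ(t) − level) and differencing the crossing times, with the caveat noted in the abstract that phase jumps of 2π mean the crossings need not alternate in direction. The sketch below performs that measurement on synthetic data; it is not the authors' apparatus, noise spectra or estimator of the newly defined probability densities.

```python
import numpy as np

# Measure level-crossing intervals of a sampled phase record: find sign changes
# of (phase - level) and difference the crossing times. Synthetic data only;
# a real phase record also jumps by 2*pi, so crossings need not alternate.

fs = 1.0e4                                  # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)
# toy "phase" record: slow sinusoid plus a small random drift, wrapped to (-pi, pi]
raw = 0.8 * np.sin(2 * np.pi * 3 * t) + np.cumsum(rng.standard_normal(t.size)) * 2e-3
phase = np.angle(np.exp(1j * raw))

level = 0.5                                 # crossing level (rad)
s = np.sign(phase - level)
idx = np.flatnonzero(np.diff(s) != 0)       # sample indices just before each crossing
intervals = np.diff(t[idx])

print(f"{intervals.size} intervals, mean = {intervals.mean() * 1e3:.2f} ms, "
      f"median = {np.median(intervals) * 1e3:.2f} ms")
```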

  7. An Experimental Simulation to Validate FEM to Predict Transverse Young’s Modulus of FRP Composites

    Directory of Open Access Journals (Sweden)

    V. S. Sai

    2013-01-01

    Full Text Available The finite element method finds application in the analysis of FRP composites due to its versatility in obtaining solutions for complex cases that are not tractable by exact classical analytical approaches. A finite element result is questionable unless it is obtained from a converged mesh and properly validated. In the present work, specimens are prepared with metallic materials so that the arrangement of fibers is close to hexagonal packing in a matrix, since a similar arrangement is difficult to realize with actual FRP because of the small size of the fibers. The transverse Young's moduli of these specimens are determined experimentally. Equivalent FE models are built and the corresponding transverse Young's moduli are compared with the experimental results. It is observed that the FE values are in good agreement with the experimental results, thus validating FEM for predicting the transverse modulus of FRP composites.
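
    As a quick plausibility check on FE or measured values of the transverse modulus, simple micromechanics estimates are often used. The snippet below is an illustrative sketch with assumed fibre and matrix properties (not the materials of the paper), comparing the inverse rule of mixtures with the Halpin-Tsai estimate.

        # Analytical bounds on the transverse modulus E2 of a unidirectional composite.
        E_f, E_m, V_f = 210.0e9, 3.5e9, 0.55   # fibre modulus, matrix modulus, fibre volume fraction (assumed)

        # Inverse rule of mixtures (lower-bound type estimate).
        E2_rom = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)

        # Halpin-Tsai estimate with xi = 2 (a common choice for circular fibres, transverse loading).
        xi = 2.0
        eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
        E2_ht = E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

        print(f"E2 (inverse ROM):  {E2_rom / 1e9:.2f} GPa")
        print(f"E2 (Halpin-Tsai):  {E2_ht / 1e9:.2f} GPa")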

  8. Experimental Study of a Multi Level Overtopping Wave Power Device

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Hald, Tue; Frigaard, Peter Bak

    2002-01-01

    Results of experimental investigations of a floating wave energy device called the Power Pyramid are presented. The Power Pyramid utilizes reservoirs at multiple levels when capturing wave overtopping and converting it into electrical energy. The effect of capturing the overtopping at multiple levels, compared to only one level, has been evaluated experimentally. From the experimental results, and the optimizations performed based on these, it has been found that the efficiency of a wave power device of the overtopping type can be increased by as much as 76 % by using 5 levels instead of 1. However, using 5 levels introduces practical problems and is most probably not economically feasible. It is concluded that it is reasonable to use 2 levels (maybe 3), which can increase the efficiency by 25-40 % compared to using a single level.

  9. Modeling and Experimental Validation for 3D mm-wave Radar Imaging

    Science.gov (United States)

    Ghazi, Galia

    As the problem of identifying suicide bombers wearing explosives concealed under clothing becomes increasingly important, it becomes essential to detect suspicious individuals at a distance. Systems which employ multiple sensors to determine the presence of explosives on people are being developed. Their functions include observing and following individuals with intelligent video, identifying explosives residues or heat signatures on the outer surface of their clothing, and characterizing explosives using penetrating X-rays, terahertz waves, neutron analysis, or nuclear quadrupole resonance. At present, mm-wave radar is the only modality that can both penetrate and sense beneath clothing at a distance of 2 to 50 meters without causing physical harm. Unfortunately, current mm-wave radar systems capable of performing high-resolution, real-time imaging require arrays with a large number of transmitting and receiving modules; as a result, these systems suffer from large size, weight and power consumption, as well as extremely complex hardware architecture. The overarching goal of this thesis is the development and experimental validation of a next-generation, inexpensive, high-resolution radar system that can distinguish security threats hidden on individuals located at a range of 2-10 meters. In pursuit of this goal, this thesis proposes the following contributions: (1) Development and experimental validation of a new current-based, high-frequency computational method to model large scattering problems (hundreds of wavelengths) involving lossy, penetrable and multi-layered dielectric and conductive structures, which is needed for an accurate characterization of the wave-matter interaction and EM scattering in the target region; (2) Development of combined Norm-1, Norm-2 regularized imaging algorithms, which are needed for enhancing the resolution of the images while using a minimum number of transmitting and receiving antennas; (3) Implementation and experimental
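
    The second contribution (combined Norm-1/Norm-2 regularization) can be illustrated with a toy linear inverse problem. The sketch below is a generic proximal-gradient (ISTA-style) solver on a random, assumed forward operator, not the thesis implementation or its mm-wave forward model.

        import numpy as np

        # x* = argmin 0.5*||A x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2  (elastic-net style)
        rng = np.random.default_rng(1)
        m, n = 120, 400                       # measurements x image pixels (assumed sizes)
        A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(m)
        x_true = np.zeros(n, complex)
        x_true[rng.choice(n, 8, replace=False)] = 1.0          # sparse reflectivity scene
        y = A @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

        lam1, lam2 = 0.02, 0.01
        L = np.linalg.norm(A, 2) ** 2 + lam2                   # Lipschitz constant of the smooth part
        x = np.zeros(n, complex)
        for _ in range(300):
            grad = A.conj().T @ (A @ x - y) + lam2 * x          # gradient of the smooth terms
            z = x - grad / L
            shrink = np.maximum(np.abs(z) - lam1 / L, 0.0)      # complex soft-thresholding
            x = shrink * np.exp(1j * np.angle(z))

        print("largest-magnitude pixels:", np.sort(np.argsort(np.abs(x))[-8:]))
        print("true support:            ", np.sort(np.flatnonzero(x_true)))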

  10. Development of a quality-assessment tool for experimental bruxism studies: reliability and validity

    NARCIS (Netherlands)

    Dawson, A.; Raphael, K.G.; Glaros, A.; Axelsson, S.; Arima, T.; Ernberg, M.; Farella, M.; Lobbezoo, F.; Manfredini, D.; Michelotti, A.; Svensson, P.; List, T.

    2013-01-01

    AIMS: To combine empirical evidence and expert opinion in a formal consensus method in order to develop a quality-assessment tool for experimental bruxism studies in systematic reviews. METHODS: Tool development comprised five steps: (1) preliminary decisions, (2) item generation, (3) face-validity

  11. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage for Generation 3+ Nuclear Power Plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the scenario of in-vessel retention, a large experimental program and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the melt accumulation in the lower head using different cooling conditions. Nowadays, a new European computer code, ASTEC, is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel retention of corium is its validation against LIVE-L1 experimental results. Details of the experiment are reported. Results of the application of ASTEC (module DIVA) to the analysis of the test are presented. (author)

  12. Validation of geotechnical software for repository performance assessment

    International Nuclear Information System (INIS)

    LeGore, T.; Hoover, J.D.; Khaleel, R.; Thornton, E.C.; Anantatmula, R.P.; Lanigan, D.C.

    1989-01-01

    An important step in the characterization of a high-level nuclear waste repository is to demonstrate that the geotechnical software used in performance assessment correctly models the physical processes involved; this is the kind of validation addressed here. There is another type of validation, called software validation, which is based on meeting the requirements of specification documents (e.g. IEEE specifications) and does not directly address the correctness of the specifications. The process of comparing physical experimental results with predicted results should incorporate an objective measure of the level of confidence regarding correctness. This paper reports on a methodology that allows the experimental uncertainties to be explicitly included in the comparison process. The methodology also allows objective confidence levels to be associated with the software. In the event of a poor comparison, the method also lays the foundation for improving the software.
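
    A minimal illustration of folding experimental uncertainties into a code-to-experiment comparison is sketched below; the numbers are invented and the reduced chi-square / coverage metrics are generic choices, not necessarily the specific confidence measure developed in the paper.

        import numpy as np

        measured  = np.array([12.1, 33.4, 54.0, 80.2])     # experimental values (assumed units)
        sigma     = np.array([0.8, 1.5, 2.2, 3.0])         # 1-sigma experimental uncertainties
        predicted = np.array([11.5, 35.0, 52.8, 83.0])     # code predictions at the same conditions

        z = (predicted - measured) / sigma                 # uncertainty-normalized residuals
        chi2 = float(np.sum(z ** 2))
        dof = measured.size
        coverage_2sigma = float(np.mean(np.abs(z) <= 2.0)) # fraction of predictions within 2 sigma

        print(f"reduced chi-square:      {chi2 / dof:.2f}")
        print(f"fraction within 2 sigma: {coverage_2sigma:.2f}")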

  13. Experimental Definition and Validation of Protein Coding Transcripts in Chlamydomonas reinhardtii

    Energy Technology Data Exchange (ETDEWEB)

    Kourosh Salehi-Ashtiani; Jason A. Papin

    2012-01-13

    Algal fuel sources promise unsurpassed yields in a carbon neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Using Chlamydomonas reinhardtii as a model, we developed a systems-level methodology bridging metabolic network reconstruction with annotation and experimental verification of enzyme encoding open reading frames. We reconstructed a genome-scale metabolic network for this alga and devised a novel light-modeling approach that enables quantitative growth prediction for a given light source, resolving wavelength and photon flux. We experimentally verified transcripts accounted for in the network and physiologically validated model function through simulation and generation of new experimental growth data, providing high confidence in network contents and predictive applications. The network offers insight into algal metabolism and potential for genetic engineering and efficient light source design, a pioneering resource for studying light-driven metabolism and quantitative systems biology. Our approach to generate a predictive metabolic model integrated with cloned open reading frames, provides a cost-effective platform to generate metabolic engineering resources. While the generated resources are specific to algal systems, the approach that we have developed is not specific to algae and
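
    The optimization at the heart of such genome-scale predictions is a flux balance analysis (FBA) linear program. The sketch below runs FBA on a made-up three-metabolite toy network (not the Chlamydomonas reconstruction) to show the structure of the calculation.

        import numpy as np
        from scipy.optimize import linprog

        # maximize biomass flux v3 subject to S v = 0 and flux bounds.
        # reactions: v0 uptake -> A, v1 A -> B, v2 A -> C, v3 B + C -> biomass
        S = np.array([
            [ 1, -1, -1,  0],   # metabolite A
            [ 0,  1,  0, -1],   # metabolite B
            [ 0,  0,  1, -1],   # metabolite C
        ])
        c = np.array([0, 0, 0, -1.0])                        # linprog minimizes, so negate biomass flux
        bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake limited to 10 units (assumed)
        res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
        print("optimal biomass flux:", -res.fun, "fluxes:", res.x)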

  14. Integral Reactor Containment Condensation Model and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qiao [Oregon State Univ., Corvallis, OR (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States)

    2016-05-02

    This NEUP funded project, NEUP 12-3630, is for experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). Over the three-year investigation, following the original proposal, the planned tasks were completed: (1) Performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations at negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU). (3) Analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accidental conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3 scaled test facility covers a large portion of the laminar film flow regime, leading to a lower average heat transfer coefficient compared to the prototypic value. Although this is conservative for reactor safety analysis, the significant reduction of the heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure

  15. A Validation Approach for Quasistatic Numerical/Experimental Indentation Analysis in Soft Materials Using 3D Digital Image Correlation.

    Science.gov (United States)

    Felipe-Sesé, Luis; López-Alba, Elías; Hannemann, Benedikt; Schmeer, Sebastian; Diaz, Francisco A

    2017-06-28

    A quasistatic indentation numerical analysis of a round-section specimen made of soft material has been performed and validated with a full-field experimental technique, i.e., 3D Digital Image Correlation. The contact experiment consisted of loading a 25 mm diameter rubber cylinder up to a 5 mm indentation and then unloading. Experimental strain fields measured at the surface of the specimen during the experiment were compared with those obtained from two numerical analyses employing two different hyperelastic material models. The comparison was performed using a new image decomposition methodology that makes a direct comparison of full-field data possible, independently of their scale or orientation. Numerical results show a good level of agreement with those measured during the experiments. However, since image decomposition allows the differences to be quantified, it was observed that one of the adopted material models yields smaller differences with respect to the experimental results.

  16. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  17. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    Science.gov (United States)

    Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.

    2007-03-01

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. Following experimental validation, these models were integrated into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  18. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  19. Modeling and experimental validation of a Hybridized Energy Storage System for automotive applications

    Science.gov (United States)

    Fiorenti, Simone; Guanetti, Jacopo; Guezennec, Yann; Onori, Simona

    2013-11-01

    This paper presents the development and experimental validation of a dynamic model of a Hybridized Energy Storage System (HESS) consisting of a parallel connection of a lead acid (PbA) battery and double layer capacitors (DLCs), for automotive applications. The dynamic modeling of both the PbA battery and the DLC has been tackled via an equivalent electric circuit based approach. Experimental tests are designed for identification purposes. Parameters of the PbA battery model are identified as a function of state of charge and current direction, whereas parameters of the DLC model are identified for different temperatures. A physical HESS has been assembled at the Center for Automotive Research at The Ohio State University and used as a test bench to validate the model against a typical current profile generated for Start&Stop applications. The HESS model is then integrated into a vehicle simulator to assess the effects of the battery hybridization on vehicle fuel economy and the mitigation of battery stress.
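
    A heavily simplified version of such an equivalent-circuit HESS model is sketched below with assumed parameter values (not the identified ones from the paper): a battery branch and a DLC branch share a Start&Stop-like current pulse while being held at the same terminal voltage.

        import numpy as np

        dt, t_end = 0.01, 30.0                       # time step and horizon [s]
        ocv, R_b = 12.6, 0.015                       # battery open-circuit voltage [V], resistance [ohm] (assumed)
        C_dlc, R_c = 150.0, 0.008                    # DLC capacitance [F], series resistance [ohm] (assumed)
        v_c = ocv                                    # DLC pre-charged to the battery OCV

        t = np.arange(0.0, t_end, dt)
        i_load = np.where((t > 2) & (t < 4), 200.0, 5.0)   # cranking-like current pulse [A]

        log = []
        for i_l in i_load:
            # same terminal voltage: ocv - R_b*i_b = v_c - R_c*i_c, with i_b + i_c = i_l
            i_c = (v_c - ocv + R_b * i_l) / (R_b + R_c)
            i_b = i_l - i_c
            v_term = ocv - R_b * i_b
            v_c -= i_c * dt / C_dlc                  # capacitor discharges when i_c > 0
            log.append((v_term, i_b, i_c))

        v_term, i_b, i_c = np.array(log).T
        print("minimum terminal voltage during the pulse: %.2f V" % v_term.min())
        print("peak battery current: %.0f A (vs %.0f A load peak)" % (i_b.max(), i_load.max()))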

  20. Experimental validation of prototype high voltage bushing

    Science.gov (United States)

    Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.

    2017-08-01

    The Prototype High Voltage Bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, viz. ceramic and fiber-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary for 50 kV isolation between the grounded and high-voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and allow precise stress calculation, quantitative analysis was performed using Scanning Electron Microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the Finite Element (FE) analysis. FE analysis of the PHVB was performed to find the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. An initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A voltage withstand test at 60 kV DC (20% above the rated voltage) has also been performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB with experimental validation data is presented.

  1. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates...

  2. Validation of a buffet meal design in an experimental restaurant.

    Science.gov (United States)

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing the foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes. Copyright © 2012 Elsevier Ltd. All rights reserved.
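
    For illustration, the intraclass correlation between two sessions can be computed as sketched below on synthetic intake data; a two-way, consistency-type ICC(3,1) formulation is assumed here, which may differ in detail from the one used in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        subjects = 14
        session_a = rng.normal(4000, 800, subjects)            # energy intake, session A [kJ] (synthetic)
        session_b = session_a + rng.normal(0, 300, subjects)   # session B, correlated with A (synthetic)

        data = np.column_stack([session_a, session_b])         # subjects x sessions
        n, k = data.shape
        grand = data.mean()
        ss_subjects = k * np.sum((data.mean(axis=1) - grand) ** 2)
        ss_sessions = n * np.sum((data.mean(axis=0) - grand) ** 2)
        ss_total = np.sum((data - grand) ** 2)
        ss_error = ss_total - ss_subjects - ss_sessions
        ms_subjects = ss_subjects / (n - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))
        icc_31 = (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)
        print(f"ICC(3,1) between sessions A and B: {icc_31:.2f}")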

  3. Experimental validation of a thermodynamic boiler model under steady state and dynamic conditions

    International Nuclear Information System (INIS)

    Carlon, Elisa; Verma, Vijay Kumar; Schwarz, Markus; Golicza, Laszlo; Prada, Alessandro; Baratieri, Marco; Haslinger, Walter; Schmidl, Christoph

    2015-01-01

    Highlights: • Laboratory tests on two commercially available pellet boilers. • Steady state and a dynamic load cycle tests. • Pellet boiler model calibration based on data registered in stationary operation. • Boiler model validation with reference to both stationary and dynamic operation. • Validated model suitable for coupled simulation of building and heating system. - Abstract: Nowadays dynamic building simulation is an essential tool for the design of heating systems for residential buildings. The simulation of buildings heated by biomass systems, first of all needs detailed boiler models, capable of simulating the boiler both as a stand-alone appliance and as a system component. This paper presents the calibration and validation of a boiler model by means of laboratory tests. The chosen model, i.e. TRNSYS “Type 869”, has been validated for two commercially available pellet boilers of 6 and 12 kW nominal capacities. Two test methods have been applied: the first is a steady state test at nominal load and the second is a load cycle test including stationary operation at different loads as well as transient operation. The load cycle test is representative of the boiler operation in the field and characterises the boiler’s stationary and dynamic behaviour. The model had been calibrated based on laboratory data registered during stationary operation at different loads and afterwards it was validated by simulating both the stationary and the dynamic tests. Selected parameters for the validation were the heat transfer rates to water and the water temperature profiles inside the boiler and at the boiler outlet. Modelling results showed better agreement with experimental data during stationary operation rather than during dynamic operation. Heat transfer rates to water were predicted with a maximum deviation of 10% during the stationary operation, and a maximum deviation of 30% during the dynamic load cycle. However, for both operational regimes the

  4. Loss of vacuum accident (LOVA): Comparison of computational fluid dynamics (CFD) flow velocities against experimental data for the model validation

    International Nuclear Information System (INIS)

    Bellecci, C.; Gaudio, P.; Lupelli, I.; Malizia, A.; Porfiri, M.T.; Quaranta, R.; Richetta, M.

    2011-01-01

    A recognized safety issue for future fusion reactors fueled with deuterium and tritium is the generation of sizeable quantities of dust. Several mechanisms resulting from material response to plasma bombardment in normal and off-normal conditions are responsible for generating dust of micron and sub-micron length scales inside the VV (Vacuum Vessel) of experimental fusion facilities. Loss of coolant accidents (LOCA), loss of coolant flow accidents (LOFA) and loss of vacuum accidents (LOVA) are types of accidents, expected in experimental fusion reactors like ITER, that may jeopardize component and plasma vessel integrity and cause dust mobilization that poses risks to workers and the public. The air velocity is the driving parameter for dust resuspension, and its characterization in the very first phase of the accident is critical for assessing the dust release. To study the air velocity trend, a small facility, Small Tank for Aerosol Removal and Dust (STARDUST), was set up at the University of Rome 'Tor Vergata', in collaboration with ENEA Frascati laboratories. It simulates a low pressurization rate (300 Pa/s) LOVA event in ITER due to a small air inlet from two different positions of the leak: at the equatorial port level and at the divertor port level. The velocity magnitude in STARDUST was investigated in order to map the velocity field by means of a point capacitive transducer placed inside STARDUST without obstacles. FLUENT was used to simulate the flow behavior for the same LOVA scenarios used during the experimental tests. The results of these simulations were compared against the experimental data for CFD code validation. For validation purposes, the CFD simulation data were extracted at the same locations as the experimental data were collected, for the first four seconds, because the maximum velocity values (which could cause almost complete dust mobilization) were measured at the beginning of the experiments. In this paper the authors present and discuss the

  5. Experimental and theoretical study of radon levels in a house

    Energy Technology Data Exchange (ETDEWEB)

    Ameon, R.; Dupuis, M.; Marie, L.; Diez, O.; LionS, J. [Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Tymen, G. [LARAAH, Universite de Bretagne Occidentale, Brest (France)

    2006-07-01

    Full text of publication follows: Radon, a radioactive gas of natural origin, is omnipresent at the surface of the earth. It is created by the decay of radium originating from the uranium contained in the earth's crust, more specifically in granitic and volcanic subsoils. Because of dilution by air masses, its concentration in open air is low. On the other hand, radon may accumulate in the confined atmosphere of buildings and reach high concentration levels. Across France, it has been estimated that 300 000 individual dwellings present concentrations higher than the French reference level of 400 Bq.m-3 and that 60 000 others exhibit concentrations above 1 000 Bq.m-3, the French warning threshold. Indoor radon concentration may vary significantly for various reasons, including the design of the building, the radium content and texture of the soil in contact with the building's slab and walls, the underpressure between the inside and the outside, and the fresh air supply rate. These considerations have led the I.R.S.N. to develop a code called R.A.D.O.N. 2 for conducting simple and methodical studies of indoor radon concentrations that take the above-mentioned factors into account. However, achieving an effective diagnosis and risk-management-aiding tool first requires checking the validity of the phenomenological model at the origin of the code. A 3-year experimental follow-up was thus conducted in an unoccupied house built on a uranium-bearing geological formation. After characterization of the subsoil, instrumentation was installed on site to continuously monitor the following parameters: - the radon source term in the building (exhalation rate of 222Rn at the ground/building interface and at the soil surface, radon concentration in the soil and in outdoor air), - the radon penetration by advection (differential pressure in the house basement), - the driving mechanisms for natural ventilation in the house (weather

  6. Experimental and theoretical study of radon levels in a house

    International Nuclear Information System (INIS)

    Ameon, R.; Dupuis, M.; Marie, L.; Diez, O.; LionS, J.; Tymen, G.

    2006-01-01

    Full text of publication follows: Radon, a radioactive gas of natural origin, is omnipresent at the surface of the earth. It is created by the decay of radium originating from the uranium contained in the earth's crust, more specifically in granitic and volcanic subsoils. Because of dilution by air masses, its concentration in open air is low. On the other hand, radon may accumulate in the confined atmosphere of buildings and reach high concentration levels. Across France, it has been estimated that 300 000 individual dwellings present concentrations higher than the French reference level of 400 Bq.m-3 and that 60 000 others exhibit concentrations above 1 000 Bq.m-3, the French warning threshold. Indoor radon concentration may vary significantly for various reasons, including the design of the building, the radium content and texture of the soil in contact with the building's slab and walls, the underpressure between the inside and the outside, and the fresh air supply rate. These considerations have led the I.R.S.N. to develop a code called R.A.D.O.N. 2 for conducting simple and methodical studies of indoor radon concentrations that take the above-mentioned factors into account. However, achieving an effective diagnosis and risk-management-aiding tool first requires checking the validity of the phenomenological model at the origin of the code. A 3-year experimental follow-up was thus conducted in an unoccupied house built on a uranium-bearing geological formation. After characterization of the subsoil, instrumentation was installed on site to continuously monitor the following parameters: - the radon source term in the building (exhalation rate of 222Rn at the ground/building interface and at the soil surface, radon concentration in the soil and in outdoor air), - the radon penetration by advection (differential pressure in the house basement), - the driving mechanisms for natural ventilation in the house (weather conditions, indoor

  7. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Utgikar, Vivek [Univ. of Idaho, Moscow, ID (United States); Sun, Xiaodong [The Ohio State Univ., Columbus, OH (United States); Christensen, Richard [The Ohio State Univ., Columbus, OH (United States); Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchanger system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performance of the two prototype heat exchangers designed and fabricated for the project, at steady state and transient conditions, to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized the two test facilities at The Ohio State University (OSU), including one existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.

  8. A comprehensive collection of experimentally validated primers for Polymerase Chain Reaction quantitation of murine transcript abundance

    Directory of Open Access Journals (Sweden)

    Wang Xiaowei

    2008-12-01

    Full Text Available Abstract Background Quantitative polymerase chain reaction (QPCR is a widely applied analytical method for the accurate determination of transcript abundance. Primers for QPCR have been designed on a genomic scale but non-specific amplification of non-target genes has frequently been a problem. Although several online databases have been created for the storage and retrieval of experimentally validated primers, only a few thousand primer pairs are currently present in existing databases and the primers are not designed for use under a common PCR thermal profile. Results We previously reported the implementation of an algorithm to predict PCR primers for most known human and mouse genes. We now report the use of that resource to identify 17483 pairs of primers that have been experimentally verified to amplify unique sequences corresponding to distinct murine transcripts. The primer pairs have been validated by gel electrophoresis, DNA sequence analysis and thermal denaturation profile. In addition to the validation studies, we have determined the uniformity of amplification using the primers and the technical reproducibility of the QPCR reaction using the popular and inexpensive SYBR Green I detection method. Conclusion We have identified an experimentally validated collection of murine primer pairs for PCR and QPCR which can be used under a common PCR thermal profile, allowing the evaluation of transcript abundance of a large number of genes in parallel. This feature is increasingly attractive for confirming and/or making more precise data trends observed from experiments performed with DNA microarrays.
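
    One small step of such a validation pipeline, checking that the two primers of a pair are compatible with a common thermal profile, is sketched below using a very simple GC-content melting-temperature approximation; the sequences are placeholders, not entries from the database.

        # Basic primer-pair screening sketch (illustrative only).
        def melting_temp(seq: str) -> float:
            """Simple Tm estimate [deg C] for primers longer than ~13 nt: 64.9 + 41*(G+C-16.4)/N."""
            seq = seq.upper()
            gc = seq.count("G") + seq.count("C")
            return 64.9 + 41.0 * (gc - 16.4) / len(seq)

        # Hypothetical primer pair (placeholder sequences, not from the published collection).
        forward = "AGCTGACCTGAAGGCTCATT"
        reverse = "TGGTCACAGGACTTCAGCAA"
        tm_f, tm_r = melting_temp(forward), melting_temp(reverse)
        print(f"Tm forward: {tm_f:.1f} C, Tm reverse: {tm_r:.1f} C, difference: {abs(tm_f - tm_r):.1f} C")
        assert abs(tm_f - tm_r) < 3.0, "primer pair unsuitable for a common thermal profile"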

  9. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with a view to their further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is paid to computational time and to the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  10. Recent experimental results on level densities for compound reaction calculations

    International Nuclear Information System (INIS)

    Voinov, A.V.

    2012-01-01

    There is a problem related to the choice of the level density input for Hauser-Feshbach model calculations. Modern computer codes have several options to choose from, but it is not clear which of them should be used in a particular case. The availability of many options helps to describe existing experimental data, but it creates problems when it comes to predictions. Traditionally, different level density systematics are based on experimental data from neutron resonance spacings, which are available for a limited spin interval and one parity only. On the other hand, reaction cross section calculations use the total level density. This can create large uncertainties when converting the neutron resonance spacing to the total level density, which results in sizable uncertainties in cross section calculations. It is clear now that total level densities need to be studied experimentally in a systematic manner. Such information can be obtained only from spectra of compound nuclear reactions. The question is: do level densities obtained from compound nuclear reactions follow the same regularities as level densities obtained from neutron resonances? Are they consistent? We measured level densities of 59-64Ni isotopes from proton evaporation spectra of 6,7Li-induced reactions. Experimental data are presented. Conclusions about how the level density depends on the neutron number and on the degree of proximity to the closed shell (56Ni) are drawn. The level density parameters have been compared with parameters obtained from the analysis of neutron resonances and from model predictions.
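
    For context, a common parameterization of the total level density used as Hauser-Feshbach input is the back-shifted Fermi gas form, sketched below with illustrative parameter values (not the ones extracted in this experiment).

        import numpy as np

        def bsfg_total_level_density(E, a=6.5, delta=1.2, A=64):
            """Back-shifted Fermi-gas total level density [1/MeV].
            E: excitation energy [MeV]; a: level-density parameter [1/MeV];
            delta: back-shift [MeV]; A: mass number (enters the spin cutoff).  Values are assumed."""
            U = np.asarray(E, dtype=float) - delta
            sigma2 = 0.0888 * A ** (2.0 / 3.0) * np.sqrt(a * U)     # spin cutoff parameter (one common choice)
            return np.exp(2.0 * np.sqrt(a * U)) / (
                12.0 * np.sqrt(2.0) * np.sqrt(sigma2) * a ** 0.25 * U ** 1.25)

        for E in (5.0, 8.0, 12.0):
            print(f"rho({E:.0f} MeV) ~ {bsfg_total_level_density(E):.3e} levels/MeV")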

  11. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent

  12. Method for Determining Volumetric Efficiency and Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ambrozik Andrzej

    2017-12-01

    Full Text Available Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise the detrimental impact they have on the natural environment. This stimulates the development of research on piston internal combustion engines. The research involves experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle, based on the comparison of cycles, it is necessary to know the mean constant value of the cylinder pressure throughout this process. Because of the character of the in-cylinder pressure pattern and the difficulty of determining this pressure experimentally, a novel method for the determination of this quantity is presented in this paper. In the new approach, an iteration method is used. In the method developed for determining the volumetric efficiency, the following equations were employed: the law of conservation of the amount of substance, the first law of thermodynamics for an open system, dependences for changes in the cylinder volume vs. the crankshaft rotation angle, and the state equation. The results of calculations performed with this method were validated by means of experimental investigations carried out for a selected engine at the engine test bench. Satisfactory agreement between computational and experimental results for the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in the paper can be used to investigate the processes taking place in the cylinder of an IC engine.
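
    The flavour of such an iterative determination can be conveyed with the toy calculation below: the volumetric efficiency is the trapped fresh-charge mass over the reference mass at ambient conditions, with a fixed-point iteration on the mean intake pressure. All numbers and the simple pressure-loss law are assumptions for illustration, not the paper's model.

        R_air = 287.0                            # gas constant of air [J/(kg K)]
        p_amb, T_amb = 101325.0, 293.0           # ambient state [Pa, K] (assumed)
        V_disp = 4.5e-4                          # displaced volume per cylinder [m^3] (assumed)
        T_cyl = 330.0                            # mean charge temperature after heating [K] (assumed)
        dp_throttle = 6000.0                     # intake pressure loss at full charge [Pa] (assumed)

        m_ref = p_amb * V_disp / (R_air * T_amb)     # reference mass filling V_disp at ambient state

        # Fixed-point iteration: intake pressure loss assumed to grow with the trapped mass.
        p_cyl = p_amb
        m_trapped = 0.0
        for _ in range(50):
            m_trapped = p_cyl * V_disp / (R_air * T_cyl)        # ideal-gas state equation
            p_new = p_amb - dp_throttle * (m_trapped / m_ref)
            if abs(p_new - p_cyl) < 1.0:
                break
            p_cyl = p_new

        eta_v = m_trapped / m_ref
        print(f"mean intake pressure: {p_cyl / 1e3:.1f} kPa, volumetric efficiency: {eta_v:.3f}")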

  13. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new Thermal barrier coating (TBC) systems experimentally under Integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  14. Experimental validation of a rate-based model for CO2 capture using an AMP solution

    DEFF Research Database (Denmark)

    Gabrielsen, Jostein; Svendsen, H. F.; Michelsen, Michael Locht

    2007-01-01

    Detailed experimental data, including temperature profiles over the absorber, for a carbon dioxide (CO2) absorber with structured packing in an integrated laboratory pilot plant using an aqueous 2-amino-2-methyl-1-propanol (AMP) solution are presented. The experimental gas-liquid material balance... was within an average of 3.5% for the experimental conditions presented. A predictive rate-based steady-state model for CO2 absorption into an AMP solution, using an implicit expression for the enhancement factor, has been validated against the presented pilot plant data. Furthermore, a parameter...

  15. Analytical and Experimental Study for Validation of the Device to Confine BN Reactor Melted Fuel

    International Nuclear Information System (INIS)

    Rogozhkin, S.; Osipov, S.; Sobolev, V.; Shepelev, S.; Kozhaev, A.; Mavrin, M.; Ryabov, A.

    2013-01-01

    To validate the design and confirm the design characteristics of the special retaining device (core catcher) used for protection of the BN reactor vessel in the case of a severe beyond-design-basis accident with core melting, computational and experimental studies were carried out. The Tray test facility, which uses water as coolant, was developed and fabricated by OKBM, and experimental studies were performed. To verify the methodical approach used for the computational study, experimental results obtained in the Tray test facility were compared with numerical simulation results obtained with the STAR-CCM+ CFD code.

  16. Experimental validation of the fluid–structure interaction simulation of a bioprosthetic aortic heart valve

    International Nuclear Information System (INIS)

    Kemp, I.; Dellimore, K.; Rodriguez, R.; Scheffer, C.; Blaine, D.; Weich, H.; Doubell, A.

    2013-01-01

    Experiments performed on a 19 mm diameter bioprosthetic valve were used to successfully validate the fluid–structure interaction (FSI) simulation of an aortic valve at 72 bpm. The FSI simulation was initialized via a novel approach utilizing a Doppler sonogram of the experimentally tested valve. Using this approach very close quantitative agreement (≤12.5 %) between the numerical predictions and experimental values for several key valve performance parameters, including the peak systolic transvalvular pressure gradient, rapid valve opening time and rapid valve closing time, was obtained. The predicted valve leaflet kinematics during opening and closing were also in good agreement with the experimental measurements.

  17. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    Science.gov (United States)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained, in the form of internal-external rotations and anterior-posterior displacements of the patella femoral joint under a standard gait condition, were compared, for a new and an experimentally simulated specimen, with experimental measurements performed on the Leeds ProSim knee simulator. Good overall agreement between the computational prediction and the experimental data was obtained for the patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to a ±5 % change in the kinematic, frictional, force and stiffness coefficients, and insensitive to the time step.

  18. Parametric model of servo-hydraulic actuator coupled with a nonlinear system: Experimental validation

    Science.gov (United States)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-05-01

    Hydraulic actuators play a key role in experimental structural dynamics. In a previous study, a physics-based model for a servo-hydraulic actuator coupled with a nonlinear physical system was developed. Later, this dynamical model was transformed into controllable canonical form for position tracking control purposes. For this study, a nonlinear device is designed and fabricated to exhibit various nonlinear force-displacement profiles depending on the initial condition and the type of materials used as replaceable coupons. Using this nonlinear system, the controllable canonical dynamical model is experimentally validated for a servo-hydraulic actuator coupled with a nonlinear physical system.

  19. ENGINEERING DESIGN OPTIMIZATION OF HEEL TESTING EQUIPMENT IN THE EXPERIMENTAL VALIDATION OF SAFE WALKING

    Directory of Open Access Journals (Sweden)

    Cristiano Fragassa

    2017-06-01

    Full Text Available Experimental test methods for evaluating the resistance of the heels of ladies' shoes to impact loads are fully defined by International Organization for Standardization (ISO) procedures that specify all the experimental conditions. A first standard (ISO 19553) specifies the test method for determining the strength of heels under a single impact; the result offers an evaluation of the liability to fail under sporadic heavy blows. A second standard (ISO 19556) details a method for testing the capability of the heels of women's shoes to survive the repeated small impacts produced by normal walking. These standards strictly define the features of two different testing devices (with specific materials, geometries, weights, etc.) and all the experimental procedures to be followed during the tests. By contrast, this paper describes the technical solutions adopted to design a single experimental device able to perform impact testing of heels under both conditions. Combining the accuracy of mechanical movements with the speed of an electronic control system, a new and flexible piece of equipment for the complete characterization of heels with respect to (single or fatigue) impacts was developed. Moreover, a new level of performance in the experimental validation of heel resistance was introduced by the versatility of the user-defined software control programs, able to encode any complex time-dependent cycle of impact loads. Dynamic simulations made it possible to investigate the impacts on the heel under different testing conditions, optimizing the machine design. The complexity of the real stresses on shoes during ordinary walking and in other common situations (such as going up and down stairs) was considered for proper dimensioning.

  20. Experimental validation of TASS/SMR-S critical flow model for the integral reactor SMART

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Si Won; Ra, In Sik; Kim, Kun Yeup [ACT Co., Daejeon (Korea, Republic of); Chung, Young Jong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    An advanced integral PWR, SMART (System-Integrated Modular Advanced ReacTor), is being developed at KAERI. It has a compact size and a relatively small power rating (330 MWt) compared to a conventional reactor. Because new concepts are applied to SMART, experimental and analytical validation is necessary for the safety evaluation of SMART. The analytical safety validation is being accomplished with a safety analysis code for an integral reactor, TASS/SMR-S, developed by KAERI. TASS/SMR-S uses lumped-parameter, one-dimensional node-and-path modeling for the thermal hydraulic calculation and point kinetics for the reactor power calculation. It has models for general usage, such as a core heat transfer model, a wall heat structure model, a critical flow model and component models, and it also has many SMART-specific models, such as a once-through helically coiled steam generator model and a condensate heat transfer model. To ensure that the TASS/SMR-S code has the calculation capability for the safety evaluation of SMART, the code should be validated for the specific models against separate effect test experimental results. In this study, the TASS/SMR-S critical flow model is evaluated by comparison with the SMD (Super Moby Dick) experiment.

  1. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

    This study reports on experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations for the detection of surface-breaking cracks are compared with experiments in pulse-echo mode on surface-breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate with a thickness of 36 mm, and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies of 1, 2 and 4 MHz and angles of 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able to, within certain limits, perform simulations that are close to experiments. It may be concluded that: For corner echoes the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more for some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ too much. The tip diffraction echoes also agree well in general. For some cases, where the defect is close to the probe, the amplitudes differ by more than 10-15 dB, but for all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  2. Both experimental hypothyroidism and hyperthyroidism increase cardiac irisin levels in rats.

    Science.gov (United States)

    Atici, E; Menevse, E; Baltaci, A K; Mogulkoc, R

    2018-01-01

    Irisin is a newly discovered myokine and adipokine that increases total body energy expenditure. The aim of this study was to determine the effect of experimental hypothyroidism and hyperthyroidism on the levels of irisin in heart tissue in rats. The study was performed on 40 male Sprague-Dawley rats. The experimental groups were: Control, Hypothyroidism, Hypothyroidism+L-Thyroxine, Hyperthyroidism and Hyperthyroidism+PTU. Following a 3-week experimental period, irisin levels were determined in heart tissues. Irisin values in the hypothyroidism group were higher than in the control group, but lower than in the hyperthyroidism group. The hyperthyroidism group had the highest levels of cardiac irisin. The results of the study showed that both experimental hypothyroidism and hyperthyroidism increased heart irisin levels, but the increase in the hyperthyroidism group was much higher than in the hypothyroidism group. However, treatment of hypothyroidism and hyperthyroidism corrected cardiac irisin levels (Fig. 1, Ref. 28).

  3. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Full Text Available Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For bioreactor monitoring and control purposes, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced in direct and cross-validation tests. Furthermore, the model can also be used to predict the optimal medium renewal time and composition.
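
    The nonlinear least-squares step can be illustrated with a much simpler kinetic structure, as sketched below: a single Monod-type growth reaction fitted to synthetic batch data with scipy. This is a stand-in for, not a reproduction of, the reaction scheme identified in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def batch_model(t, y, mu_max, K_S, Y_xs):
            """Biomass X grows on substrate S with Monod kinetics; yield Y_xs is assumed known."""
            X, S = y
            mu = mu_max * S / (K_S + S)
            return [mu * X, -mu * X / Y_xs]

        def simulate(params, t_eval, y0, Y_xs=0.5):
            mu_max, K_S = params
            sol = solve_ivp(batch_model, (t_eval[0], t_eval[-1]), y0,
                            t_eval=t_eval, args=(mu_max, K_S, Y_xs), rtol=1e-8)
            return sol.y

        t_obs = np.linspace(0.0, 60.0, 13)                           # sampling times [h] (synthetic)
        true = simulate([0.05, 0.8], t_obs, [0.2, 10.0])             # "true" parameters generate the data
        rng = np.random.default_rng(3)
        data = true * (1.0 + 0.03 * rng.standard_normal(true.shape)) # 3 % measurement noise

        def residuals(params):
            return (simulate(params, t_obs, [0.2, 10.0]) - data).ravel()

        fit = least_squares(residuals, x0=[0.1, 2.0], bounds=([1e-4, 1e-3], [1.0, 50.0]))
        print("estimated mu_max = %.3f 1/h, K_S = %.2f (model units)" % tuple(fit.x))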

  4. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin.

    Science.gov (United States)

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2015-11-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.

  5. Analysis of residual stresses due to roll-expansion process: Finite element computation and validation by experimental tests

    International Nuclear Information System (INIS)

    Aufaure, M.; Boudot, R.; Zacharie, G.; Proix, J.M.

    1987-01-01

    The steam generator heat exchangers of pressurized water reactors are made of U-shaped tubes, both ends of which are fixed to a plate by roll-expansion. This process consists of increasing the tube cross-section by means of a rotating tool so that its outer side is pressed against the surface of the hole through the plate. As reported by de Keroulas (1986), in-service cracks appeared on these tubes in the transition zone between the expanded and non-expanded portions. We therefore developed a program to compute the residual stresses at the tube surface, which caused the cracking, and to lower their level by acting on certain process parameters. This program was validated by experimental tests. (orig.)

  6. Experimental validation of sound field control with a circular double-layer array of loudspeakers

    DEFF Research Database (Denmark)

    Chang, Jiho; Jacobsen, Finn

    2013-01-01

    This paper is concerned with experimental validation of a recently proposed method of controlling sound fields with a circular double-layer array of loudspeakers [Chang and Jacobsen, J. Acoust. Soc. Am. 131(6), 4518-4525 (2012)]. The double-layer of loudspeakers is realized with 20 pairs of closed...

  7. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    OpenAIRE

    Bustos, Claudio; Herrera, Claudio García; Celentano, Diego; Chen, Daming; Cruchaga, Marcela

    2016-01-01

    Abstract Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via tensile test and, then, used to numerically simulate via finite elements the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an...

  8. Experimental validation of field cooling simulations for linear superconducting magnetic bearings

    Energy Technology Data Exchange (ETDEWEB)

    Dias, D H N; Motta, E S; Sotelo, G G; De Andrade Jr, R, E-mail: ddias@coe.ufrj.b [Laboratorio de aplicacao de Supercondutores (LASUP), Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil)

    2010-07-15

    For practical stability of a superconducting magnetic bearing the refrigeration process must occur with the superconductor in the presence of the magnetic field (a field cooling (FC) process). This paper presents an experimental validation of a method for simulating this system in the FC case. Measured and simulated results for a vertical force between a high temperature superconductor and a permanent magnet rail are compared. The main purpose of this work is to consolidate a simulation tool that can help in future projects on superconducting magnetic bearings for MagLev vehicles.

  9. Neuroinflammatory targets and treatments for epilepsy validated in experimental models.

    Science.gov (United States)

    Aronica, Eleonora; Bauer, Sebastian; Bozzi, Yuri; Caleo, Matteo; Dingledine, Raymond; Gorter, Jan A; Henshall, David C; Kaufer, Daniela; Koh, Sookyong; Löscher, Wolfgang; Louboutin, Jean-Pierre; Mishto, Michele; Norwood, Braxton A; Palma, Eleonora; Poulter, Michael O; Terrone, Gaetano; Vezzani, Annamaria; Kaminski, Rafal M

    2017-07-01

    A large body of evidence that has accumulated over the past decade strongly supports the role of inflammation in the pathophysiology of human epilepsy. Specific inflammatory molecules and pathways have been identified that influence various pathologic outcomes in different experimental models of epilepsy. Most importantly, the same inflammatory pathways have also been found in surgically resected brain tissue from patients with treatment-resistant epilepsy. New antiseizure therapies may be derived from these novel potential targets. An essential and crucial question is whether targeting these molecules and pathways may result in anti-ictogenesis, antiepileptogenesis, and/or disease-modification effects. Therefore, preclinical testing in models mimicking relevant aspects of epileptogenesis is needed to guide integrated experimental and clinical trial designs. We discuss the most recent preclinical proof-of-concept studies validating a number of therapeutic approaches against inflammatory mechanisms in animal models that could represent novel avenues for drug development in epilepsy. Finally, we suggest future directions to accelerate preclinical to clinical translation of these recent discoveries. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  10. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    Directory of Open Access Journals (Sweden)

    Claudio Bustos

    Full Text Available Abstract Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via tensile test and, then, used to numerically simulate via finite elements the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an eyeball under glaucoma conditions.

  11. Developing knowledge level scale of functional foods: Validity and ...

    African Journals Online (AJOL)

    The aim of the study was to develop a scale to determine the knowledge levels of university students on functional foods and to investigate the validity and reliability of the scale. The research on functional foods was conducted with 417 undergraduate students (209 female and 208 male) at Selcuk University.

  12. Time Reversal UWB Communication System: A Novel Modulation Scheme with Experimental Validation

    Directory of Open Access Journals (Sweden)

    Khaleghi A

    2010-01-01

    Full Text Available A new modulation scheme is proposed for a time reversal (TR) ultra wide-band (UWB) communication system. The new scheme uses binary pulse amplitude modulation (BPAM) and adds a new level of modulation to increase the data rate of a TR UWB communication system. Multiple data bits can be transmitted simultaneously at the cost of a small amount of added interference. The bit error rate (BER) performance and the maximum achievable data rate of the new modulation scheme are analyzed theoretically. Two separate measurement campaigns are carried out to analyze the proposed modulation scheme. In the first campaign, the frequency responses of a typical indoor channel are measured and the performance is studied through simulations using the measured frequency responses. The theoretical and simulated performances are in strong agreement. Furthermore, the BER performance of the proposed modulation scheme is compared with that of existing modulation schemes; it is shown that the proposed scheme outperforms QAM and PAM in an AWGN channel. In the second campaign, an experimental validation of the proposed modulation scheme is performed, and the performances from the two measurement campaigns are shown to be in good agreement.
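
    For reference, the textbook bit-error-rate expression for binary antipodal PAM over an AWGN channel, which is the usual baseline such comparisons are made against, is shown below (this is the standard result, not the paper's own derivation):

```latex
% Standard BER of binary antipodal PAM (BPAM) in AWGN -- background only,
% not reproduced from the paper:
P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right),
\qquad
Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt .
```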

  13. Validation of moderator-level reactivity coefficient using station data

    Energy Technology Data Exchange (ETDEWEB)

    Younis, M.; Martchouk, I., E-mail: mohamed.younis@amecfw.com, E-mail: iouri.martchouk@amecfw.com [Amec Foster Wheeler, Toronto, ON (Canada); Buchan, P.D., E-mail: david.buchan@opg.com [Ontario Power Generation, Pickering, ON (Canada)

    2015-07-01

    The reactivity effect due to variations in the moderator level has been recognized as an important reactor-physics phenomenon during normal operation and accident analysis. The moderator-level reactivity coefficient is an important parameter in the safety analysis of CANDU reactors, e.g., during Loss of Moderator Heat Sink, as well as in the simulation of Reactor Regulating System action in CANDU reactors that use the moderator level for reactivity control. This paper presents the results of a validation exercise of the reactor-physics toolset using measurements performed in Pickering Unit 4 in 2003. The capability of the code suite to predict the moderator-level reactivity effect was tested by comparing measured and predicted reactor-physics parameters. (author)

  14. Experimental Validation and Model Verification for a Novel Geometry ICPC Solar Collector

    DEFF Research Database (Denmark)

    Perers, Bengt; Duff, William S.; Daosukho, Jirachote

    A novel geometry ICPC solar collector was developed at the University of Chicago and Colorado State University. A ray tracing model has been designed to investigate the optical performance of both the horizontal and vertical fin versions of this collector. Solar radiation is modeled as discrete...... to the desired incident angle of the sun’s rays, performance of the novel ICPC solar collector at various specified angles along the transverse and longitudinal evacuated tube directions were experimentally determined. To validate the ray tracing model, transverse and longitudinal performance predictions...... at the corresponding specified incident angles are compared to the Sandia results. A 100 m2 336 Novel ICPC evacuated tube solar collector array has been in continuous operation at a demonstration project in Sacramento California since 1998. Data from the initial operation of the array are used to further validate...

  15. Elasto-dynamic analysis of a gear pump-Part III: Experimental validation procedure and model extension to helical gears

    Science.gov (United States)

    Mucchi, E.; Dalpiaz, G.

    2015-01-01

    This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identify system resonances. The validation results are satisfactory globally, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution on the dynamic behaviour of the pump but this is not as important as the pressure phenomena. As a consequence, the original model was modified with the
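
    The comparison of simulated and measured forces described above relies on the time synchronous average; the sketch below shows one common way of computing it, assuming the signal has already been resampled to an integer number of samples per shaft revolution (the function name and synthetic data are illustrative, not from the paper).

```python
# Minimal sketch (assumed, not from the paper): time synchronous averaging (TSA)
# of a vibration signal resampled to an integer number of samples per shaft
# revolution, as used to compare simulated and measured gear-pump signals.
import numpy as np

def time_synchronous_average(signal: np.ndarray, samples_per_rev: int) -> np.ndarray:
    """Average the signal over complete revolutions to suppress asynchronous content."""
    n_rev = len(signal) // samples_per_rev          # number of complete revolutions
    trimmed = signal[: n_rev * samples_per_rev]     # drop the incomplete tail
    return trimmed.reshape(n_rev, samples_per_rev).mean(axis=0)

# Synthetic example: a shaft-synchronous component buried in noise
samples_per_rev = 200
n = 60 * samples_per_rev
x = np.sin(2 * np.pi * np.arange(n) / samples_per_rev) + 0.5 * np.random.randn(n)
tsa = time_synchronous_average(x, samples_per_rev)
print(tsa.shape)  # (200,) -> one averaged revolution
```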

  16. Experimental Validation of Mathematical Framework for Fast Switching Valves used in Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    of 10 kW during switching (mean of approximately 250 W) and a pressure loss below 0.5 bar at 600 l/min. The main goal of this article is validate parts of the mathematical framework based on a series of experiments. Furthermore, this article aims to document the experience gained from the experimental...

  17. Experimental study of self-leveling behavior in debris bed

    International Nuclear Information System (INIS)

    Zhang, Bin; Harada, Tetsushi; Hirahara, Daisuke; Matsumoto, Tatsuya; Morita, Koji; Fukuda, Kenji; Yamano, Hidemasa; Suzuki, Tohru; Tobita, Yoshiharu

    2008-01-01

    After a core disruptive accident in a sodium-cooled fast reactor, core debris may settle as debris beds at locations such as the core-support structure or the lower inlet plenum of the reactor vessel, as a consequence of rapid quenching and fragmentation of core materials in subcooled sodium. Particle beds that are initially of varying depth have been observed to undergo a process of self-leveling when sodium boiling occurs within the beds. The boiling is believed to provide the driving force the debris needs to overcome resisting forces. Self-leveling ability strongly affects the heat-removal capability of debris beds. In the present study, the characteristics of self-leveling behavior were investigated experimentally with simulant materials. Although decay heat from fuel debris drives coolant boiling under reactor accident conditions, the present experiments employed depressurization boiling of water to simulate an axially increasing void distribution in a debris bed consisting of solid particles of alumina or lead with different densities. The particle size (from 0.5 mm to 6 mm in diameter) and shape (spherical or non-spherical) were also taken as experimental parameters. A rough criterion for the occurrence of self-leveling is proposed and compared with the experimental results. The characteristics of the observed self-leveling behavior are analyzed and extrapolated to reactor accident conditions. (author)

  18. Experimental and simulation validation of ABHE for disinfection of Legionella in hot water systems

    International Nuclear Information System (INIS)

    Altorkmany, Lobna; Kharseh, Mohamad; Ljung, Anna-Lena; Staffan Lundström, T.

    2017-01-01

    Highlights: • The ABHE system can supply continuous thermal treatment of water while saving energy. • Mathematical and experimental validation of ABHE performance is presented. • An EES-based model is developed to simulate the ABHE system. • Energy saving by the ABHE is demonstrated for different initial working parameters. - Abstract: The work concerns an innovative system inspired by nature that mimics the thermoregulation system found in animals. This method, called the Anti Bacteria Heat Exchanger (ABHE), is proposed to achieve continuous thermal disinfection of bacteria in hot water systems with high energy efficiency. In particular, this study aims to demonstrate the opportunity to save energy by recovering heat over a plate heat exchanger. Firstly, the thermodynamics of the ABHE is clarified to define the ABHE specification. Secondly, a first prototype of an ABHE is built with a configuration chosen for simplicity of design and construction. Thirdly, an experimental test is carried out. Finally, a computer model is built to simulate the ABHE system and the experimental data are used to validate the model. The experimental results indicate that the performance of the ABHE system is strongly dependent on the flow rate, while the supply temperature has less effect. Experimental and simulation data show that this thermal disinfection method has a large energy-saving potential through heat recovery. To exemplify, when supplying water at a flow rate of 5 kg/min and a temperature of 50 °C, the heat recovery is about 1.5 kW while the required pumping power is 1 W. This means that the pressure drop is very small compared to the energy recovered, and consequently a substantial saving in total cost can be expected.
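
    As a quick plausibility check of the quoted figures (an editorial back-of-the-envelope estimate assuming water with c_p ≈ 4.19 kJ/(kg·K) and steady state; not taken from the paper), the recovered 1.5 kW at 5 kg/min corresponds to a temperature change of roughly

```latex
\Delta T \;=\; \frac{\dot{Q}}{\dot{m}\,c_p}
\;\approx\; \frac{1.5\ \mathrm{kW}}{(5/60\ \mathrm{kg\,s^{-1}})\,(4.19\ \mathrm{kJ\,kg^{-1}K^{-1}})}
\;\approx\; 4.3\ \mathrm{K},
```

    i.e. the exchanger pre-warms the incoming water by a few kelvin for a pumping penalty of only about 1 W, which is consistent with the favourable cost picture described above.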

  19. Experimental validation of a heat transfer model for concentrating photovoltaic system

    International Nuclear Information System (INIS)

    Sendhil Kumar, Natarajan; Matty, Katz; Rita, Ebner; Simon, Weingaertner; Ortrun, Aßländer; Alex, Cole; Roland, Wertz; Tim, Giesen; Tapas Kumar, Mallick

    2012-01-01

    In this paper, a three-dimensional heat transfer model is presented for a novel concentrating photovoltaic design for the Active Solar Panel Initiative System (ASPIS). The concentration ratios of the two systems (early and integrated prototypes) are 5× and 10× respectively, designed for roof-top integrated photovoltaic systems. The ANSYS 12.1 CFX package was used to predict the component temperatures of both ASPIS systems for various boundary conditions. The predicted component temperatures of the early prototype were compared with experimental results for ASPIS obtained at Solecta (Israel) and at the Austrian Institute of Technology (AIT, Austria). The predicted solar cell and lens temperatures show good agreement with the Solecta measurements. Minimum and maximum deviations of 3.8% and 17.9% were observed between the numerical results and the Solecta measurements, and a maximum deviation of 16.9% was observed between the model and the AIT measurements. Thus, the validated thermal model enables prediction of component temperatures for concentrating photovoltaic systems. - Highlights: ► Experimentally validated heat transfer model for a concentrating photovoltaic system developed. ► Predictions of solar cell temperatures for a parallactic-tracking CPV system for roof integration. ► The ASPIS module contains 216 solar cells, 2 mm wide, manufactured based on SATURN technology. ► A solar cell temperature of 44 °C was predicted for a solar radiation intensity of 1000 W/m² and an ambient temperature of 20 °C. ► The average deviation was 6%, enabling prediction of the temperature of any CPV system.

  20. PSpice Modeling Platform for SiC Power MOSFET Modules with Extensive Experimental Validation

    DEFF Research Database (Denmark)

    Ceccarelli, Lorenzo; Iannuzzo, Francesco; Nawaz, Muhammad

    2016-01-01

    to simulate the performance of high current rating (above 100 A), multi-chip SiC MOSFET modules both for static and switching behavior. Therefore, the simulation results have been validated experimentally in a wide range of operating conditions, including high temperatures, gate resistance and stray elements....... The whole process has been repeated for three different modules with voltage rating of 1.2 kV and 1.7 kV, manufactured by three different companies. Lastly, a parallel connection of two modules of the same type has been performed in order to observe the unbalancing and mismatches experimentally......The aim of this work is to present a PSpice implementation for a well-established and compact physics-based SiC MOSFET model, including a fast, experimental-based parameter extraction procedure in a MATLAB GUI environment. The model, originally meant for single-die devices, has been used...

  1. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    Science.gov (United States)

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  2. Visual Servoing Tracking Control of a Ball and Plate System: Design, Implementation and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ming-Tzu Ho

    2013-07-01

    Full Text Available This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP. The validity of the performance of the developed control system is investigated through simulation and experimental studies. Experimental results show that the designed system functions well with reasonable agreement with simulations.

  3. Texas Panhandle soil-crop-beef food chain for uranium: a dynamic model validated by experimental data

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Wallwork-Barber, K.M.; Rodgers, J.C.; Gallegos, A.F.

    1982-01-01

    Long-term simulations of uranium transport in the soil-crop-beef food chain were performed using the BIOTRAN model. Means of experimental data from an extensive Pantex beef cattle study are presented, and these experimental data were used to validate the computer model. Measurements of uranium in air, soil, water, range grasses, feed, and cattle tissues are compared to simulated uranium output values in these matrices, with the BIOTRAN model set to the measured soil and air values. The simulations agreed well with the experimental data, even though metabolic details for ruminants and the chemical form of uranium in the environment remain to be studied.

  4. Chemical looping reforming in packed-bed reactors : modelling, experimental validation and large-scale reactor design

    NARCIS (Netherlands)

    Spallina, V.; Marinello, B.; Gallucci, F.; Romano, M.C.; van Sint Annaland, M.

    This paper addresses the experimental demonstration and model validation of chemical looping reforming in dynamically operated packed-bed reactors for the production of H2 or CH3OH with integrated CO2 capture. This process is a combination of auto-thermal and steam methane reforming and is carried

  5. Predictive validity of examinations at the Secondary Education Certificate (SEC) level

    OpenAIRE

    Farrugia, Josette; Ventura, Frank

    2007-01-01

    This paper presents the predictive validity of results obtained by 16-year-old Maltese students in the May 2004 Secondary Education Certificate (SEC) examinations in Biology, Chemistry, Physics, Mathematics, Computing, English and Maltese for the Advanced level examinations in these subjects taken by the same students two years later. The study checks whether the SEC level is a good foundation for the higher level, the likelihood of obtaining a high grade at A-level from particular SEC result...

  6. Hypertension Knowledge-Level Scale (HK-LS: A Study on Development, Validity and Reliability

    Directory of Open Access Journals (Sweden)

    Cemalettin Kalyoncu

    2012-03-01

    Full Text Available This study was conducted to develop a scale to measure knowledge about hypertension among Turkish adults. The Hypertension Knowledge-Level Scale (HK-LS was generated based on content, face, and construct validity, internal consistency, test re-test reliability, and discriminative validity procedures. The final scale had 22 items with six sub-dimensions. The scale was applied to 457 individuals aged ≥18 years, and 414 of them were re-evaluated for test-retest reliability. The six sub-dimensions encompassed 60.3% of the total variance. Cronbach alpha coefficients were 0.82 for the entire scale and 0.92, 0.59, 0.67, 0.77, 0.72, and 0.76 for the sub-dimensions of definition, medical treatment, drug compliance, lifestyle, diet, and complications, respectively. The scale ensured internal consistency in reliability and construct validity, as well as stability over time. Significant relationships were found between knowledge score and age, gender, educational level, and history of hypertension of the participants. No correlation was found between knowledge score and working at an income-generating job. The present scale, developed to measure the knowledge level of hypertension among Turkish adults, was found to be valid and reliable.

  7. Hypertension Knowledge-Level Scale (HK-LS): a study on development, validity and reliability.

    Science.gov (United States)

    Erkoc, Sultan Baliz; Isikli, Burhanettin; Metintas, Selma; Kalyoncu, Cemalettin

    2012-03-01

    This study was conducted to develop a scale to measure knowledge about hypertension among Turkish adults. The Hypertension Knowledge-Level Scale (HK-LS) was generated based on content, face, and construct validity, internal consistency, test re-test reliability, and discriminative validity procedures. The final scale had 22 items with six sub-dimensions. The scale was applied to 457 individuals aged ≥ 18 years, and 414 of them were re-evaluated for test-retest reliability. The six sub-dimensions encompassed 60.3% of the total variance. Cronbach alpha coefficients were 0.82 for the entire scale and 0.92, 0.59, 0.67, 0.77, 0.72, and 0.76 for the sub-dimensions of definition, medical treatment, drug compliance, lifestyle, diet, and complications, respectively. The scale ensured internal consistency in reliability and construct validity, as well as stability over time. Significant relationships were found between knowledge score and age, gender, educational level, and history of hypertension of the participants. No correlation was found between knowledge score and working at an income-generating job. The present scale, developed to measure the knowledge level of hypertension among Turkish adults, was found to be valid and reliable.
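
    The Cronbach alpha coefficients reported above summarize internal consistency; the sketch below shows one standard way to compute this statistic (an editorial illustration with made-up item responses, not the authors' data or code).

```python
# Minimal sketch (not the authors' code): Cronbach's alpha for a k-item scale,
# the reliability statistic reported for the HK-LS (0.82 for the full scale).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative (made-up) binary responses of 6 people to 4 knowledge items
responses = np.array([[1, 1, 1, 0],
                      [1, 0, 1, 1],
                      [0, 0, 1, 0],
                      [1, 1, 1, 1],
                      [0, 0, 0, 0],
                      [1, 1, 0, 1]])
print(round(cronbach_alpha(responses), 2))
```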

  8. Simulation, experimental validation and kinematic optimization of a Stirling engine using air and helium

    International Nuclear Information System (INIS)

    Bert, Juliette; Chrenko, Daniela; Sophy, Tonino; Le Moyne, Luis; Sirot, Frédéric

    2014-01-01

    A Stirling engine with a nominal output power of 1 kW is tested using air and helium as working gases. The influence of working pressure, engine speed and hot-source temperature is studied by analyzing the instantaneous gas pressure as well as instantaneous and stationary temperatures at different positions to derive the effective power. A zero-dimensional, finite-time thermodynamic, three-zone model of a generic Stirling engine is developed and successfully validated against experimental gas temperature and pressure in each zone, providing the effective power. This validation highlights the interest of different working gases as well as different geometric configurations for different applications. Furthermore, the validated model allows parametric studies of the engine with regard to geometry, working gas and engine kinematics. It is used to optimize the kinematics of a Stirling engine for different working points and gases. - Highlights: • A 1 kW Stirling engine is tested using air and helium as working gases. • Effects of working pressure, speed and temperature on power are studied. • A zero-dimensional, finite-time thermodynamic, three-zone model is validated. • The validated model is used for parametric studies and optimization of the engine.
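
    To illustrate the kind of relation a zero-dimensional Stirling model is built on (a common Schmidt-type isothermal closure, shown here as background and not necessarily the authors' exact finite-time, three-zone formulation), the instantaneous pressure follows from the total gas mass distributed over the compression, dead and expansion spaces:

```latex
p(\theta) \;=\; \frac{m_{\mathrm{tot}}\,R}
{\dfrac{V_c(\theta)}{T_c} + \dfrac{V_d}{T_d} + \dfrac{V_e(\theta)}{T_e}},
```

    where θ is the crank angle and V_c, V_d, V_e are the compression, dead and expansion volumes at temperatures T_c, T_d, T_e.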

  9. Attempted development and cross-validation of predictive models of individual-level and organizational-level turnover of nuclear power operators

    International Nuclear Information System (INIS)

    Vasa-Sideris, S.J.

    1989-01-01

    Nuclear power accounts for 20.9% of the electric power generated in the U.S. by 107 nuclear plants which employ over 8,700 operators. Operator turnover is significant to utilities from the economic point of view, since it costs almost three hundred thousand dollars to train and qualify one operator, and because turnover affects plant operability and therefore plant safety. The study purpose was to develop and cross-validate individual-level and organizational-level models of turnover of nuclear power plant operators. Data were obtained by questionnaires and from published data for 1983 and 1984 on a number of individual, organizational, and environmental predictors. Plants had been in operation for two or more years. Questionnaires were returned by 29 out of 50 plants on over 1600 operators. The objectives were to examine the reliability of the turnover criterion, to determine the classification accuracy of the multivariate predictive models and of categories of predictors (individual, organizational, and environmental), and to determine if a homology existed between the individual-level and organizational-level models. The method was to examine the shrinkage that occurred between foldback design (in which the predictive models were reapplied to the data used to develop them) and cross-validation. The results did not support the hypothesized objectives. Turnover data were accurate but not stable between the two years. No significant differences were detected between the low and high turnover groups at the organizational or individual level in cross-validation. Lack of stability in the criterion, restriction of range, and small sample size at the organizational level were serious limitations of this study. The results did support the methods. Considerable shrinkage occurred between foldback and cross-validation of the models.

  10. Experimental validation of thermal design of top shield for a pool type SFR

    International Nuclear Information System (INIS)

    Aithal, Sriramachandra; Babu, V. Rajan; Balasubramaniyan, V.; Velusamy, K.; Chellapandi, P.

    2016-01-01

    Highlights: • The overall thermal design of the top shield in an SFR is experimentally verified. • Air jet cooling is effective in ensuring the temperature limits for the top shield. • Convection patterns in the narrow annulus are in line with published CFD results. • Wire mesh insulation ensures a gradual thermal gradient in the top portion of the main vessel. • Under a loss-of-cooling scenario, sufficient time is available for corrective action. - Abstract: An Integrated Top Shield Test Facility for the validation of the thermal design of the top shield of a pool-type SFR has been conceived, constructed and commissioned. Detailed experiments were performed in this facility, which has full-scale features. The steady-state temperature distribution within the facility was measured for various heater plate temperatures, in addition to simulating different operating states of the reactor. The following are the important observations: (i) the jet cooling system is effective in regulating the roof slab bottom plate temperature and the thermal gradient across the roof slab, simulating normal operation of the reactor; (ii) the wire mesh insulation provided in the roof slab-main vessel annulus is effective in producing a gradual thermal gradient along the top portion of the main vessel and in inhibiting the onset of cellular convection within the annulus; and (iii) cellular convection with four distinct convective cells sets in within the annular gap between the roof slab and the small rotatable plug, which is approximately 4 m in diameter with a gap width varying from 16 mm to 30 mm. Repeatability of the results was also ensured during all the above tests. The results presented in this paper are expected to provide reference data for the validation of thermal-hydraulic models, in addition to serving as design validation of the jet cooling system for a pool-type SFR.

  11. De novo peptide design and experimental validation of histone methyltransferase inhibitors.

    Directory of Open Access Journals (Sweden)

    James Smadbeck

    Full Text Available Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA–protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2 maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism of action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 mM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and demonstrated competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly inhibit EZH2
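
    To make the sequence-selection stage above concrete, the toy sketch below sets up an integer linear program that picks exactly one amino acid per design position so as to maximize a position-wise stability score; the alphabet, scores and solver choice (PuLP/CBC) are editorial assumptions used only to illustrate the structure of such a model, not the authors' actual formulation.

```python
# Toy sketch (assumptions: made-up scores, tiny alphabet) of the kind of integer
# linear program used in a sequence-selection stage: choose exactly one amino
# acid per position so that a position-wise stability score is maximized.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, value

positions = range(3)                      # 3 design positions (illustrative)
residues = ["A", "K", "Q", "S"]           # reduced alphabet (illustrative)
score = {                                 # hypothetical stability scores
    (0, "A"): 1.2, (0, "K"): 0.4, (0, "Q"): 2.1, (0, "S"): 1.8,
    (1, "A"): 0.9, (1, "K"): 1.7, (1, "Q"): 0.3, (1, "S"): 1.1,
    (2, "A"): 0.5, (2, "K"): 0.8, (2, "Q"): 1.9, (2, "S"): 0.2,
}

x = LpVariable.dicts("x", (positions, residues), cat=LpBinary)
prob = LpProblem("sequence_selection", LpMaximize)
prob += lpSum(score[i, a] * x[i][a] for i in positions for a in residues)
for i in positions:                       # exactly one residue per position
    prob += lpSum(x[i][a] for a in residues) == 1
prob.solve()
print("".join(a for i in positions for a in residues if value(x[i][a]) > 0.5))
```

    In the published method the objective reflects stability of the bound peptide-EZH2 structure rather than an arbitrary per-position score, and solving the ILP repeatedly yields the rank-ordered list of candidate sequences mentioned above.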

  12. De novo peptide design and experimental validation of histone methyltransferase inhibitors.

    Directory of Open Access Journals (Sweden)

    James Smadbeck

    Full Text Available Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA-protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2 maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism of action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 μM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and demonstrated competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly

  13. Site characterization and validation - Head variations during the entire experimental period

    International Nuclear Information System (INIS)

    Haigh, D.; Brightman, M.; Black, J.; Parry, S.

    1992-01-01

    The site characterization and validation project lasted for five years from 1986 to 1991. It consisted of a number of experiments within the region known as the SCV site. During this period of experimentation a monitoring system was established within the mine for the purpose of measuring the variation of head at a number of locations within and around the site. The system installed was based around a set of equipment known as a Piezomac TM system. In this system there is one central pressure transducer and each borehole interval is connected to it in turn. It can measure up to 55 separate points during each measurement 'cycle'. Monitoring points were either complete boreholes or sections of boreholes isolated by packers. In order to produce reasonable file size, data sets were screened. The results show that the SCV site was always responding to some form of hydrogeological disturbance. Many key tests were performed against changing background trends. This was particularly so of the simulated drift experiment and the large scale crosshole tests. However, some estimates of long term equilibrium heads before and after excavation of the validation drift have been made. Contoured plots of heads before and after show significant reduction of steady state heads as a result of drift excavation. Furthermore contouring the estimated long term drawdowns responding to the simulated drift experiment shows the specific influence of the H zone and the A/B zone. Overall the results of the monitoring show that the mine was a very active hydrogeological environment during the experimentation. Additionally it was often very difficult to clearly identify the causes of such disturbances. (au)

  14. Dynamic model with experimental validation of a biogas-fed SOFC plant

    International Nuclear Information System (INIS)

    D'Andrea, G.; Gandiglio, M.; Lanzini, A.; Santarelli, M.

    2017-01-01

    Highlights: • 60% of DIR into the SOFC anode reduces the air blower parasitic losses by 14%. • PID-controlled cathode airflow enables fast thermal regulation of the SOFC. • Stack overheating occurs due to unexpected reductions in the cathode airflow. • Current ramp rates higher than +0.30 A/min lead to excessive stack overheating. - Abstract: The dynamic model of a poly-generation system based on a biogas-fed solid oxide fuel cell (SOFC) plant is presented in this paper. The poly-generation plant was developed in the framework of the FP7 EU-funded project SOFCOM (www.sofcom.eu), which consists of a fuel-cell based polygeneration plant with CO_2 capture and re-use. CO_2 is recovered from the anode exhaust of the SOFC (after oxy-combustion, cooling and water condensation) and the carbon is fixed in the form of micro-algae in a tubular photobioreactor. This work focuses on the dynamic operation of the SOFC module running on steam-reformed biogas. Both steady-state and dynamic operation of the fuel cell stack and the related Balance-of-Plant (BoP) have been modeled in order to simulate the thermal behavior and performance of the system. The model was validated against experimental data gathered during the operation of the SOFCOM proof-of-concept, showing good agreement with the experimental data. The validated model has been used to further investigate the harsh off-design operation of the proof-of-concept. Simulation results provide guidelines for an improved design of the control system of the plant, highlighting the feasible operating region under safe conditions and means to maximize the overall system efficiency.

  15. Experimental validation of a theoretical model for a direct-expansion solar-assisted heat pump applied to heating

    International Nuclear Information System (INIS)

    Moreno-Rodriguez, A.; Garcia-Hernando, N.; González-Gil, A.; Izquierdo, M.

    2013-01-01

    This paper discusses the experimental validation of a theoretical model that determines the operating parameters of a DXSAHP (direct-expansion solar-assisted heat pump) applied to heating. For this application, the model took into account the variable condensing temperature, and it was developed from the following environmental variables: outdoor temperature, solar radiation and wind. The experimental data were obtained from a prototype installed at the University Carlos III, which is located south of Madrid. The prototype uses a solar collector with a total area of 5.6 m², a compressor with a rated capacity of 1100 W, a thermostatic expansion valve and fan-coil units as indoor terminals. The monitoring results were analyzed for several typical days in the climatic zone where the machine was located to understand the equipment's seasonal behavior. The experimental coefficient of performance varies between 1.9 and 2.7, and the equipment's behavior in extreme outdoor conditions was also characterized to determine the thermal demand that can be compensated for. - Highlights: • The study aims to present an experimental validation of a theoretical model. • The experimental COP can vary between 1.9 and 2.7 (max. condensation temperature 59 °C). • A "dragging term" relates condensation and evaporation temperature. • The operating parameters respond to the solar radiation. The COP may increase up to 25%.

  16. The Dynamic Similitude Design Method of Thin Walled Structures and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Zhong Luo

    2016-01-01

    Full Text Available To support the applicability of dynamic similitude models of thin-walled structures such as engine blades, turbine discs, and cylindrical shells, the dynamic similitude design of typical thin-walled structures is investigated. The governing equation of typical thin-walled structures is first unified, which guides the establishment of dynamic scaling laws for typical thin-walled structures. Based on the governing equation, the geometrically complete scaling law of a typical thin-walled structure is derived. In order to determine accurate distorted scaling laws of typical thin-walled structures, three principles are proposed and theoretically proved by combining sensitivity analysis with the governing equation. Taking the thin-walled annular plate as an example, geometrically complete and distorted scaling laws can be obtained based on these principles. Furthermore, accurate distorted scaling laws for the first five modes of thin-walled annular plates are presented and numerically validated. Finally, the effectiveness of the similitude design method is validated by experiments on annular plates.
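
    For orientation (a standard thin-plate example, included editorially and not reproduced from the paper), a geometrically complete scaling law for the natural frequencies of a Kirchhoff plate follows directly from the proportionality of frequency to thickness, in-plane size and material properties:

```latex
\omega \;\propto\; \frac{h}{L^{2}}\sqrt{\frac{E}{\rho\,(1-\nu^{2})}}
\quad\Longrightarrow\quad
\lambda_{\omega} \;=\; \frac{\lambda_{h}}{\lambda_{L}^{2}}\,
\sqrt{\frac{\lambda_{E}}{\lambda_{\rho}}}\,,
```

    where λ_x denotes the model-to-prototype ratio of quantity x and Poisson's ratio is assumed equal in model and prototype; distorted scaling laws relax the requirement that all geometric ratios be identical.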

  17. Modelling of PEM Fuel Cell Performance: Steady-State and Dynamic Experimental Validation

    Directory of Open Access Journals (Sweden)

    Idoia San Martín

    2014-02-01

    Full Text Available This paper reports on the modelling of a commercial 1.2 kW proton exchange membrane fuel cell (PEMFC), based on interrelated electrical and thermal models. The electrical model proposed is based on the integration of the thermodynamic and electrochemical phenomena taking place in the FC, whilst the thermal model is established from the FC thermal energy balance. The combination of both models makes it possible to predict the FC voltage, based on the current demanded and the ambient temperature. Furthermore, an experimental characterization is conducted and the parameters for the models associated with the FC electrical and thermal performance are obtained. The models are implemented in Matlab Simulink and validated in a number of operating environments, for steady-state and dynamic modes alike. In turn, the FC models are validated in an actual microgrid operating environment, through the series connection of 4 PEMFCs. The simulations of the models precisely and accurately reproduce the FC electrical and thermal performance.

  18. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    Science.gov (United States)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time-integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out using purpose-built cure sensors that measure the electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulating the pultrusion of a rod profile has been successfully corrected and finally defined.

  19. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    and identify the likely size of these effects on D-e distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo...

  20. Experimental investigation of stratified two-phase flows in the hot leg of a PWR for CFD validation

    Energy Technology Data Exchange (ETDEWEB)

    Vallee, Christophe; Lucas, Dirk [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany). Inst. of Fluid Dynamics; Tomiyama, Akio [Kobe Univ (Japan). Graduate School of Engineering; Murase, Michio [Institute of Nuclear Safety System, Inc. (INSS), Fukui (Japan)

    2012-12-15

    Stratified 2-phase flows were investigated in 2 different models of the hot leg of a pressurised water reactor (PWR) in order to provide experimental data for the development and validation of computational fluid dynamics (CFD) codes. To this end, the local flow structure was visualised with a high-speed video camera. Moreover, one test section was designed with a rectangular cross-section to achieve optimal observation conditions. The phenomenon of counter-current flow limitation (CCFL) was investigated, which may affect the reflux condenser cooling mode in some accident scenarios. The experiments were conducted with air and water at room temperature and maximum pressures of 3 bar as well as with steam and saturated water at boundary conditions of up to 50 bar and 264 °C. The measured CCFL characteristics were compared with similar experimental data and correlations available in the literature. This shows that the channel height is the characteristic length to be used in the Wallis parameter for channels with rectangular cross-sections. Furthermore, the experimental results confirm that the Wallis similarity is appropriate to scale CCFL in the hot leg of a PWR over a wide range of pressure and temperature conditions. Finally, an image processing algorithm was developed to recognise the stratified interface in the camera frames. Subsequently, the interfacial structure along the hot leg was visualised by the representation of the probability distribution of the water level. (orig.)
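
    For context (standard definition, not repeated from the paper itself), the non-dimensional superficial velocities used in such CCFL correlations are the Wallis parameters; as concluded above, the characteristic length L is taken as the channel height H for rectangular cross-sections:

```latex
J_k^{*} \;=\; j_k \sqrt{\frac{\rho_k}{g\,L\,(\rho_L - \rho_G)}},
\qquad k \in \{L, G\},
```

    where j_k is the superficial velocity of the liquid (L) or gas (G) phase and g is the gravitational acceleration.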

  1. Application of a computational situation assessment model to human system interface design and experimental validation of its effectiveness

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Koh, Kwang-Yong; Seong, Poong-Hyun

    2013-01-01

    Highlights: ► We validate the effectiveness of a proposed procedure through an experiment. ► The proposed procedure addresses the salience coding of the key information. ► It was found that salience coding affects operators’ attention significantly. ► The first observation of the key information quickly led to correct situation awareness. ► The proposed procedure was validated as effective for better situation awareness. - Abstract: To evaluate the effects of human cognitive characteristics on situation awareness, a computational situation assessment model of nuclear power plant operators has been developed, as well as a procedure to apply the developed model to the design of human system interfaces (HSIs). The concept of the proposed procedure is to identify the key information source, which is expected to guarantee fast and accurate diagnosis when operators attend to it. The developed computational model is used to search for the diagnostic paths and the key information source. In this study, an experiment with twelve trained participants was executed to validate the effectiveness of the proposed procedure. Eighteen scenarios covering various accidents were administered twice for each subject, and experimental data were collected and analyzed. As a result of the data analysis, it was validated that the salience level of information sources significantly influences the attention of operators, and that the first observation of the key information sources leads operators to a quick and correct situation assessment. Therefore, we conclude that the proposed procedure for applying the developed model to HSI design is effective.

  2. Validation of CryoSat-2 SAR mode based lake levels

    DEFF Research Database (Denmark)

    Nielsen, Karina; Stenseng, Lars; Andersen, Ole Baltazar

    2015-01-01

    Lake levels serve as an important indicator of the climate, and continuous measurements are therefore essential. Satellite radar altimetry has now been used successfully for more than two decades to measure lake levels as an addition to gauge measurements. The technique has, due to the large footprint... with water levels obtained from Envisat. We find that the along-track precision of the mean based on CryoSat-2 is a few centimeters, even for the small lakes, which is a significant improvement compared to previous missions such as Envisat. When validating against gauge data we find RMS values of differences...
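
    The validation statistic mentioned above is simply the RMS of the differences between altimetry-derived and gauge levels; a minimal sketch with made-up numbers (not the authors' data or code) is:

```python
# Minimal sketch: RMS of the differences between altimetry-derived and gauge
# water levels, the statistic quoted when validating against gauge data.
import numpy as np

def rms_difference(altimetry: np.ndarray, gauge: np.ndarray) -> float:
    d = altimetry - gauge
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative (made-up) lake levels in metres
alt = np.array([371.12, 371.05, 370.98, 371.20])
gau = np.array([371.10, 371.08, 370.95, 371.17])
print(f"RMS = {rms_difference(alt, gau):.3f} m")
```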

  3. Gas-liquid Two Phase Flow Modelling of Incompressible Fluid and Experimental Validation Studies in Vertical Centrifugal Casting

    International Nuclear Information System (INIS)

    Zhou, J X; Shen, X; Yin, Y J; Guo, Z; Wang, H

    2015-01-01

    In this paper, gas-liquid two-phase flow mathematical models of an incompressible fluid are proposed to explore the behaviour of the fluid under centrifugal force in vertical centrifugal casting (VCC). A modified projection-level-set method is introduced to solve the mathematical models. To validate the simulation results, two methods were used in this study. In the first method, the simulation result of a basic VCC flow process was compared with its analytic solution, and the relationship between the numerical solution and the deterministic analytic solution was presented to verify the correctness of the numerical algorithms. In the second method, systematic water-simulation experiments were developed. In this initial experiment, a special vertical centrifugal device and casting shapes were designed to reproduce typical mold-filling processes in VCC. A high-speed camera system and data collection devices were used to capture the flow shape during the mold-filling process. Moreover, the fluid behaviour at different rotation speeds (40 rpm, 60 rpm and 80 rpm) was discussed to provide a comparative resource for the simulation results. Comparison with the simulation results supports the proposed mathematical models, and the experimental design helps to improve the accuracy of the simulation and further studies of VCC. (paper)
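
    As background on the interface-tracking component (the standard level-set transport relation, shown for orientation; the authors' modified projection-level-set scheme adds further steps not reproduced here), the free surface Γ(t) is carried as the zero contour of a scalar field φ advected by the flow:

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi \;=\; 0,
\qquad
\Gamma(t) \;=\; \{\,\mathbf{x} : \phi(\mathbf{x},t)=0\,\}.
```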

  4. Simulation and experimental validation of the performance of an absorption refrigerator

    International Nuclear Information System (INIS)

    Olbricht, Michael; Luke, Andrea

    2015-01-01

    The two biggest obstacles to stronger market penetration of absorption refrigerators are their high cost and the size of the apparatus, both of which stem from inaccurate methods of plant design. In order to contribute to an improved design, a thermodynamic model is presented that describes the performance of an absorption refrigerator with the working pair water/lithium bromide. In this model, the processes in the individual apparatuses are represented and coupled to each other in the overall system context. In this way, the interactions between the apparatuses can be specifically investigated, and the process-limiting component can be identified under the respective conditions. The simulation model and the boundary conditions used are validated against experimental data from the operation of a self-developed absorption refrigerator. In the simulation, the heat transfer surfaces can be specified in accordance with the real system, and the heat transport is taken into account using typical values for the heat transfer in the individual apparatuses. The simulation results show good agreement with the experimental data: the physical relationships and the influence of externally defined operating parameters are correctly reproduced. Due to the conservatively chosen heat transfer coefficients, the cooling capacities calculated by the model are below those measured experimentally. Finally, the possibilities and limitations of the model are discussed and further improvements are suggested. [de]

  5. Verification and validation of the PLTEMP/ANL code for thermal hydraulic analysis of experimental and test reactors

    International Nuclear Information System (INIS)

    Kalimullah, M.; Olson, A.O.; Feldman, E.E.; Hanan, N.; Dionne, B.

    2012-01-01

    The document compiles in a single volume several verification and validation works done for the PLTEMP/ANL code during the years of its development and improvement. Some works that are available in the open literature are simply referenced at the outset, and are not included in the document. PLTEMP has been used in conversion safety analysis reports of several US and foreign research reactors that have been licensed and converted. A list of such reactors is given. Each chapter of the document deals with the verification or validation of a specific model. The model verification is usually done by comparing the code with hand calculation, Microsoft spreadsheet calculation, or Mathematica calculation. The model validation is done by comparing the code with experimental data or a more validated code like the RELAP5 code.

  6. Verification and Validation of the PLTEMP/ANL Code for Thermal-Hydraulic Analysis of Experimental and Test Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kalimullah, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Olson, Arne P. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. E. [Argonne National Lab. (ANL), Argonne, IL (United States); Hanan, N. [Argonne National Lab. (ANL), Argonne, IL (United States); Dionne, B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-04-07

    The document compiles in a single volume several verification and validation works done for the PLTEMP/ANL code during the years of its development and improvement. Some works that are available in the open literature are simply referenced at the outset, and are not included in the document. PLTEMP has been used in conversion safety analysis reports of several US and foreign research reactors that have been licensed and converted. A list of such reactors is given. Each chapter of the document deals with the verification or validation of a specific model. The model verification is usually done by comparing the code with hand calculation, Microsoft spreadsheet calculation, or Mathematica calculation. The model validation is done by comparing the code with experimental data or a more validated code like the RELAP5 code.

  7. Experimental validation of additively manufactured optimized shapes for passive cooling

    DEFF Research Database (Denmark)

    Lazarov, Boyan S.; Sigmund, Ole; Meyer, Knud E.

    2018-01-01

    This article confirms the superior performance of topology optimized heat sinks compared to lattice designs and suggests simpler manufacturable pin-fin design interpretations. The development is driven by the wide adoption of light-emitting-diode (LED) lamps for industrial and residential lighting. The presented heat sink solutions are generated by topology optimization, a computational morphogenesis approach with ultimate design freedom, relying on high-performance computing and simulation. Optimized devices exhibit complex and organic-looking topologies which are realized with the help of additive manufacturing. To reduce manufacturing cost, a simplified interpretation of the optimized design is produced and validated as well. Numerical and experimental results agree well and indicate that the obtained designs outperform lattice geometries by more than 21%, resulting in a doubling of life expectancy …

  8. Experimental Validation of the LHC Helium Relief System Flow Modeling

    CERN Document Server

    Fydrych, J; Riddone, G

    2006-01-01

    In case of simultaneous resistive transitions in a whole sector of magnets in the Large Hadron Collider, the helium would be vented from the cold masses to a dedicated recovery system. During the discharge the cold helium will eventually enter a pipe at room temperature. During the first period of the flow the helium will be heated intensely due to the pipe heat capacity. To study the changes of the helium thermodynamic and flow parameters we have simulated numerically the most critical flow cases. To verify and validate numerical results, a dedicated laboratory test rig representing the helium relief system has been designed and commissioned. Both numerical and experimental results allow us to determine the distributions of the helium parameters along the pipes as well as mechanical strains and stresses.

  9. IVIM: modeling, experimental validation and application to animal models

    International Nuclear Information System (INIS)

    Fournet, Gabrielle

    2016-01-01

    This PhD thesis is centered on the study of the IVIM ('Intravoxel Incoherent Motion') MRI sequence. This sequence allows for the study of the blood microvasculature such as the capillaries, arterioles and venules. To be sensitive only to moving groups of spins, diffusion gradients are added before and after the 180 degrees pulse of a spin echo (SE) sequence. The signal component corresponding to spins diffusing in the tissue can be separated from the one related to spins travelling in the blood vessels which is called the IVIM signal. These two components are weighted by f IVIM which represents the volume fraction of blood inside the tissue. The IVIM signal is usually modelled by a mono-exponential (ME) function and characterized by a pseudo-diffusion coefficient, D*. We propose instead a bi-exponential IVIM model consisting of a slow pool, characterized by F slow and D* slow corresponding to the capillaries as in the ME model, and a fast pool, characterized by F fast and D* fast, related to larger vessels such as medium-size arterioles and venules. This model was validated experimentally and more information was retrieved by comparing the experimental signals to a dictionary of simulated IVIM signals. The influence of the pulse sequence, the repetition time and the diffusion encoding time was also studied. Finally, the IVIM sequence was applied to the study of an animal model of Alzheimer's disease. (author) [fr

  10. Design of JT-60SA magnets and associated experimental validations

    International Nuclear Information System (INIS)

    Zani, L.; Barabaschi, P.; Peyrot, M.; Meunier, L.; Tomarchio, V.; Duglue, D.; Decool, P.; Torre, A.; Marechal, J.L.; Della Corte, A.; Di Zenobio, A.; Muzzi, L.; Cucchiaro, A.; Turtu, S.; Ishida, S.; Yoshida, K.; Tsuchiya, K.; Kizu, K.; Murakami, H.

    2011-01-01

    In the framework of the JT-60SA project, aiming at upgrading the present JT-60U tokamak toward a fully superconducting configuration, the detailed design phase led to the adoption of a brand new design for the three main magnet systems. Europe (EU) will provide Japan (JA) with the entire toroidal field (TF) magnet system, while JA will provide both the equilibrium field (EF) and central solenoid (CS) systems. All magnet designs were optimized through the past years and entered, in parallel, extensive experimentally based phases of concept validation, which came to maturity in 2009 and 2010. To this end, all magnet systems were investigated by means of dedicated samples, i.e. conductor and joint samples designed, manufactured and tested at full scale in ad hoc facilities either in EU or in JA. The present paper, after an overall description of the magnet system layouts, presents the different experimental campaigns dedicated to qualifying the design and manufacturing processes of the coils, conductors and electrical joints. The main results and the associated analyses are shown and the main conclusions presented, especially regarding their contribution to consolidating the launch of magnet mass production. The status of the respective manufacturing stages in EU and JA is also outlined. (authors)

  11. Translation and validation of the Greek version of the hypertension knowledge-level scale.

    Science.gov (United States)

    Chatziefstratiou, Anastasia A; Giakoumidakis, Konstantinos; Fotos, Nikolaos V; Baltopoulos, George; Brokalaki-Pananoudaki, Hero

    2015-12-01

    To translate and validate a Greek version of the Hypertension Knowledge-Level Scale. The major barrier in the management of hypertension is the lack of adherence to medications and lifestyle adjustments. Patients' knowledge of the nature of hypertension and cardiovascular risk factors is a significant factor affecting individuals' adherence. However, few instruments have been developed to assess patients' knowledge level and none had been translated into Greek. This study used a case-control design. Data collection occurred between February 7, 2013 and March 10, 2013. The sample included both hypertensives and non-hypertensives, and both groups completed the translated version of the Hypertension Knowledge-Level Scale. A total of 68 individuals completed the questionnaire. Coefficient alpha was 0·66 for hypertensives and 0·79 for non-hypertensives. The difference in mean scores on the entire scale between the two samples was statistically significant. In addition, significant differences were observed in many sub-dimensions, and no correlation was found between knowledge level and age, gender or education level. The findings provide support for the validity of the Greek version of the Hypertension Knowledge-Level Scale. The translation and validation of an instrument evaluating the level of knowledge of hypertension contribute to assessing the educational interventions provided. A low knowledge level should lead to the development of new methods of education; nurses will thus have the opportunity to amplify their role in patients' education and develop relationships based on honesty and respect. © 2015 John Wiley & Sons Ltd.
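    As a small illustration of the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha from an item-score matrix; the number of respondents, number of items and the responses themselves are synthetic placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic 0/1 responses for illustration only (not the study's data).
rng = np.random.default_rng(0)
ability = rng.normal(size=(68, 1))                # latent knowledge level
responses = (ability + rng.normal(scale=1.2, size=(68, 22)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```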

  12. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiplicity of experimental protocols employed in different laboratories presumably precludes the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified, and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  13. Experimental validation of neutron activation simulation of a varian medical linear accelerator.

    Science.gov (United States)

    Morato, S; Juste, B; Miro, R; Verdu, G; Diez, S

    2016-08-01

    This work presents a Monte Carlo simulation, using the latest version of MCNP (v. 6.1.1), of a Varian Clinac emitting a 15 MeV photon beam. The main objective of the work is to estimate the photoneutron production and the activated products inside the medical linear accelerator head. To that end, the Varian linac head was modelled in detail using the manufacturer's information; the model was generated with CAD software and exported as a mesh to be included in the particle transport simulation. The model includes the transport of photoneutrons generated by primary photons and the (n, γ) reactions which can result in activation products. The study was validated using experimental measurements: activation products were identified by in situ gamma spectroscopy at the jaws exit of the linac shortly after termination of a high-energy photon beam irradiation. Comparison between experimental and simulation results shows good agreement.

  14. Experimental implementation of single-phase, three-level, sinusoidal

    African Journals Online (AJOL)

    ... of many multilevel inverter configurations. This paper presents an experimental report of a simplified topology for a single-phase, SPWM, three-level voltage source inverter with R-L load. To keep the power circuit ... employed in many industrial applications such as variable speed drives and uninterruptible power supplies.

  15. Methodology for experimental validation of a CFD model for predicting noise generation in centrifugal compressors

    International Nuclear Information System (INIS)

    Broatch, A.; Galindo, J.; Navarro, R.; García-Tíscar, J.

    2014-01-01

    Highlights: • A DES of a turbocharger compressor working at the peak pressure point is performed. • In-duct pressure signals are measured in a steady flow rig with 3-sensor arrays. • Pressure spectra comparison is performed as a validation for the numerical model. • A suitable comparison methodology is developed, relying on pressure decomposition. • Whoosh noise at the outlet duct is detected in experimental and numerical spectra. - Abstract: Centrifugal compressors working on the surge side of the map generate a broadband noise in the range of 1–3 kHz, known as whoosh noise. This noise is perceived in strongly downsized engines operating at particular conditions (full load, tip-in and tip-out maneuvers). A 3-dimensional CFD model of a centrifugal compressor is built to analyze the fluid phenomena related to whoosh noise. A detached eddy simulation is performed with the compressor operating at the peak pressure point of 160 krpm. A steady flow rig installed in an anechoic chamber is used to obtain experimental measurements as a means of validation for the numerical model. In-duct pressure signals are obtained in addition to standard averaged global variables. The numerical simulation provides global variables showing excellent agreement with the experimental measurements. A pressure spectra comparison is performed to assess the noise prediction capability of the numerical model. The influence of the type and position of the virtual pressure probes is evaluated. Pressure decomposition is required in the simulations to obtain meaningful spectra, and different techniques for obtaining the pressure components are analyzed. At the simulated conditions, a broadband noise in the 1–3 kHz frequency band is detected in the experimental measurements. This whoosh noise is also captured by the numerical model.
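    As an illustration of the spectral comparison step described above, the sketch below estimates and compares pressure power spectral densities with Welch's method (scipy.signal.welch) and reports the mean level difference in the 1-3 kHz whoosh band; the sampling rate and the two pressure signals are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.signal import welch

fs = 100_000.0                       # sampling rate [Hz] (assumed)
t = np.arange(0, 0.5, 1.0 / fs)
rng = np.random.default_rng(1)

def synthetic_pressure(level):
    """Stand-in pressure signal: a 2 kHz component plus broadband noise."""
    return level * np.sin(2 * np.pi * 2000.0 * t) + rng.normal(size=t.size)

p_exp = synthetic_pressure(0.8)      # "measured" pressure probe signal
p_cfd = synthetic_pressure(1.0)      # "simulated" pressure probe signal

# Welch periodogram: averaged FFTs over windowed segments.
f, psd_exp = welch(p_exp, fs=fs, nperseg=4096)
_, psd_cfd = welch(p_cfd, fs=fs, nperseg=4096)

band = (f >= 1000.0) & (f <= 3000.0)
diff_db = 10.0 * np.log10(psd_cfd[band] / psd_exp[band])
print(f"mean level difference in 1-3 kHz band: {diff_db.mean():.1f} dB")
```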

  16. Validation of lower body negative pressure as an experimental model of hemorrhage

    Science.gov (United States)

    Shade, Robert E.; Muniz, Gary W.; Bauer, Cassondra; Goei, Kathleen A.; Pidcoke, Heather F.; Chung, Kevin K.; Cap, Andrew P.; Convertino, Victor A.

    2013-01-01

    Lower body negative pressure (LBNP), a model of hemorrhage (Hem), shifts blood to the legs and elicits central hypovolemia. This study compared responses to LBNP and actual Hem in sedated baboons. Arterial pressure, pulse pressure (PP), central venous pressure (CVP), heart rate, stroke volume (SV), and +dP/dt were measured. Hem steps were 6.25%, 12.5%, 18.75%, and 25% of total estimated blood volume. Shed blood was returned, and 4 wk after Hem, the same animals were subjected to four LBNP levels which elicited equivalent changes in PP and CVP observed during Hem. Blood gases, hematocrit (Hct), hemoglobin (Hb), plasma renin activity (PRA), vasopressin (AVP), epinephrine (EPI), and norepinephrine (NE) were measured at baseline and maximum Hem or LBNP. LBNP levels matched with 6.25%, 12.5%, 18.75%, and 25% hemorrhage were −22 ± 6, −41 ± 7, −54 ± 10, and −71 ± 7 mmHg, respectively (mean ± SD). Hemodynamic responses to Hem and LBNP were similar. SV decreased linearly such that 25% Hem and matching LBNP caused a 50% reduction in SV. Hem caused a decrease in Hct, Hb, and central venous oxygen saturation (ScvO2). In contrast, LBNP increased Hct and Hb, while ScvO2 remained unchanged. Hem caused greater elevations in AVP and NE than LBNP, while PRA, EPI, and other hematologic indexes did not differ between studies. These results indicate that while LBNP does not elicit the same effect on blood cell loss as Hem, LBNP mimics the integrative cardiovascular response to Hem, and validates the use of LBNP as an experimental model of central hypovolemia associated with Hem. PMID:24356525

  17. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models (the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model), a version without and a version with explicit collective enhancement are considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exist, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exist. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation against experimental discrete level schemes and experimental cross sections and neutron emission spectra for various reaction channels has been performed.
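    As a pointer to what such a phenomenological parameterization looks like in practice, the sketch below evaluates the standard back-shifted Fermi gas total level density; the parameter values (level density parameter a, back-shift, spin cut-off) are illustrative placeholders, not fitted values from this work.

```python
import numpy as np

def bsfg_level_density(E, a, delta, sigma):
    """Total level density [1/MeV] of the back-shifted Fermi gas model.

    E      excitation energy [MeV]
    a      level density parameter [1/MeV]
    delta  back-shift energy [MeV]
    sigma  spin cut-off parameter
    """
    U = E - delta
    if U <= 0.0:
        return 0.0
    return np.exp(2.0 * np.sqrt(a * U)) / (
        12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)

# Placeholder parameters for a mid-mass nucleus (illustrative only).
for E in (5.0, 8.0, 12.0):
    rho = bsfg_level_density(E, a=12.0, delta=0.5, sigma=4.0)
    print(f"E = {E:4.1f} MeV  rho ~ {rho:.3e} /MeV")
```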

  18. Image quality validation of Sentinel 2 Level-1 products: performance status at the beginning of the constellation routine phase

    Science.gov (United States)

    Francesconi, Benjamin; Neveu-VanMalle, Marion; Espesset, Aude; Alhammoud, Bahjat; Bouzinac, Catherine; Clerc, Sébastien; Gascon, Ferran

    2017-09-01

    Sentinel-2 is an Earth Observation mission developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. The mission is based on a constellation of two satellites: Sentinel-2A, launched in June 2015, and Sentinel-2B, launched in March 2017. It offers an unprecedented combination of systematic global coverage of land and coastal areas, a high revisit of five days at the equator and two days at mid-latitudes under the same viewing conditions, high spatial resolution, and a wide field of view for multispectral observations in 13 bands in the visible, near infrared and short wave infrared range of the electromagnetic spectrum. The mission performances are routinely and closely monitored by the S2 Mission Performance Centre (MPC), which includes a consortium of Expert Support Laboratories (ESL). This publication focuses on the Sentinel-2 Level-1 product quality validation activities performed by the MPC. It presents an up-to-date status of the Level-1 mission performances at the beginning of the constellation routine phase. Level-1 performance validations routinely performed cover Level-1 Radiometric Validation (Equalisation Validation, Absolute Radiometry Vicarious Validation, Absolute Radiometry Cross-Mission Validation, Multi-temporal Relative Radiometry Vicarious Validation and SNR Validation) and Level-1 Geometric Validation (Geolocation Uncertainty Validation, Multi-spectral Registration Uncertainty Validation and Multi-temporal Registration Uncertainty Validation). Overall, the Sentinel-2 mission is proving very successful in terms of product quality, thereby fulfilling the promises of the Copernicus program.

  19. Experimental Testing and Model Validation of a Decoupled-Phase On-Load Tap Changer Transformer in an Active Network

    DEFF Research Database (Denmark)

    Zecchino, Antonio; Hu, Junjie; Coppo, Massimiliano

    2016-01-01

    Due to the increasing penetration of single-phase small generation units and electric vehicles connected to distribution grids, system operators are facing challenges related to local unbalanced voltage rise or drop, which may lead to a violation of the allowed voltage band. To address this problem, distribution transformers with on-load tapping capability are under development. This paper presents the model and experimental validation of a 35 kVA three-phase power distribution transformer with independent on-load tap changer control capability on each phase. For the purpose of the investigation, the test setup reproduces the main features of an unbalanced grid. The experimental activities are recreated by carrying out dynamic simulation studies aiming at validating the implemented models of both the transformer and the other grid components. Phase-neutral voltage deviations are limited, proving …

  20. Black liquor devolatilization and swelling - a detailed droplet model and experimental validation

    International Nuclear Information System (INIS)

    Jaervinen, M.; Zevenhoven, R.; Vakkilainen, E.; Forssen, M.

    2003-01-01

    In this paper, we present results from a new detailed physical model for single black liquor droplet pyrolysis and swelling, and validate them against experimental data from a non-oxidizing environment using two different reactor configurations. In the detailed model, we solve for the heat transfer and gas phase mass transfer in the droplet; thereby, the intra-particle gas-char and gas-gas interactions during drying and devolatilization can be studied. In the experimental part, the mass change, the swelling behaviour, and the volume fraction of larger voids, i.e. cenospheres, in the droplets were determined in a non-oxidizing environment. The model gave a good correlation with experimental swelling and mass loss data. Calculations suggest that a considerable amount of the char can be consumed before the entire droplet has experienced the devolatilization and drying stages of combustion. Char formed at the droplet surface layer is generally consumed by gasification with H2O flowing outwards from the droplet interior. The extent of char conversion during devolatilization and the rate of devolatilization are greatly affected by swelling and the formation of larger voids in the particle. The more the particle swells and the more homogeneous the particle structure is, the larger is the conversion of char at the end of devolatilization.

  1. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  2. Achieving external validity in home advantage research: generalizing crowd noise effects.

    Science.gov (United States)

    Myers, Tony D

    2014-01-01

    Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity; decision-making in a laboratory setting bearing little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research. Discussing how such threats can be addressed using representative design by focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment conducted in a real tournament setting offer a level of confirmation of the findings of laboratory studies in the area. Finally directions for future research and the future conduct of crowd noise studies are discussed.

  3. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    Science.gov (United States)

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. © 2014 A. P. Dasgupta et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  4. Achieving external validity in home advantage research: generalizing crowd noise effects

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2014-06-01

    Full Text Available Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials’ decisions. Different types of studies have tested the crowd influence hypothesis including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity; decision-making in a laboratory setting bearing little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research. Discussing how such threats can be addressed using representative design by focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment conducted in a real tournament setting offer some confirmation of the validity of laboratory experimental studies in the area. Finally directions for future research and the future conduct of crowd noise studies are discussed.

  5. Experimental validation of tape springs to be used as thin-walled space structures

    Science.gov (United States)

    Oberst, S.; Tuttle, S. L.; Griffin, D.; Lambert, A.; Boyce, R. R.

    2018-04-01

    With the advent of standardised launch geometries and off-the-shelf payloads, space programs utilising nano-satellite platforms are growing worldwide. Thin-walled, flexible and self-deployable structures are commonly used for antennae, instrument booms or solar panels owing to their light weight, ideal packaging characteristics and near-zero energy consumption. However, their behaviour in space, in particular in Low Earth Orbits with continually changing environmental conditions, raises many questions. Accurate numerical models, which are often not available because of the difficulty of experimental testing under 1g conditions, are needed to answer these questions. In this study, we present on-earth experimental validations, as a starting point, of the response of a tape spring, representative of thin-walled flexible structures, under static and vibrational loading. Material parameters of tape springs in a singly curved (straight, open cylinder) and a doubly curved design are compared by combining finite element calculations with experimental laser vibrometry in single- and multi-stage model updating approaches. While the determination of the Young's modulus is unproblematic, the damping is found to be inversely proportional to the deployment length. With updated material properties, the buckling instability margin is calculated for different slenderness ratios. The results indicate a high sensitivity of thin-walled structures to minuscule perturbations, which makes proper experimental testing a key requirement for stability prediction of thin elastic space structures. The doubly curved tape spring provides closer agreement with the experimental results than the straight tape spring design.

  6. First experimental validation on the core equilibrium code: HARMONIE

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.; Cozzani, M.; Gnuffi, M.

    1981-08-01

    The code HARMONIE calculates the mechanical equilibrium of a fast reactor core. An experimental program of deformation, in air, of groups of subassemblies was performed on a mock-up in the Super Phenix 1 geometry. This program included three kinds of tests, all performed without and then with grease: on groups of 2 or 3 rings of subassemblies subjected to a force acting upon flats or angles; on groups of 35 and 41 subassemblies subjected to a force acting on the first row, then with 1 or 2 empty cells; and on groups with 1 or 2 bowed subassemblies or 1 subassembly enlarged over flats. A preliminary test on the friction coefficient in air between two pads showed some dependence on the pad surface condition, with a scattering factor of 8. Two basic code hypotheses were validated: the rotation of the subassemblies around their axis was negligible after deformation of the group, and the choice of a mean Maxwell coefficient, between those of the 1st and 2nd slope, led to results very similar to the experimental ones. The agreement between the tests and the HARMONIE calculations was suitable, qualitatively for all the groups and quantitatively for regular groups of at most 3 rings. However, the difference increased for the larger groups of 35 or 41 subassemblies: friction between pads, neglected by HARMONIE, seems to be the main reason. Other reasons for these differences are the influence of the loading order on the mock-up and the initial contacts resulting from the gap between foot and diagrid insert and from manufacturing bows.

  7. Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam

    Science.gov (United States)

    Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen

    2012-01-01

    This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.

  8. Assessment of the energy performance of the solar space system attached to the CE – INCERC Bucharest experimental house – experimental validation

    Directory of Open Access Journals (Sweden)

    Dan CONSTANTINESCU

    2010-01-01

    Full Text Available The INCERC Bucharest experimental house is equipped on the Southern façade with a ventilated solar space. The solar space ensures the ventilation of the entire building at a constant rate of 0.60 air changes per hour during the cold season by admitting air pre-heated in the greenhouse space. In the hot season the system provides reversible ventilation of the building: the fresh air rate is ensured by air suction in the building's Northern zone, as a consequence of the natural draught effect created by the solar space. This paper presents the experiments performed in the 2008-2009 season and the experimental validation of the mathematical model used to assess the solar space energy performance in the heating season.

  9. Servo-hydraulic actuator in controllable canonical form: Identification and experimental validation

    Science.gov (United States)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-02-01

    Hydraulic actuators have been widely used to experimentally examine structural behavior at multiple scales. Real-time hybrid simulation (RTHS) is one innovative testing method that largely relies on such servo-hydraulic actuators. In RTHS, interface conditions must be enforced in real time, and controllers are often used to achieve tracking of the desired displacements. Thus, neglecting the dynamics of hydraulic transfer system may result either in system instability or sub-optimal performance. Herein, we propose a nonlinear dynamical model for a servo-hydraulic actuator (a.k.a. hydraulic transfer system) coupled with a nonlinear physical specimen. The nonlinear dynamical model is transformed into controllable canonical form for further tracking control design purposes. Through a number of experiments, the controllable canonical model is validated.
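    A minimal sketch of the transformation into controllable (controller) canonical form for a linear actuator model, using scipy.signal.tf2ss, which returns a companion-form realization; the third-order transfer function and its parameters below are hypothetical, not the identified nonlinear model of the paper.

```python
import numpy as np
from scipy import signal

# Hypothetical linearized actuator transfer function from valve command to
# piston displacement: second-order dynamics plus a first-order valve lag.
wn, zeta, tau = 120.0, 0.4, 0.02           # natural freq [rad/s], damping, valve lag [s]
num = [wn**2 / tau]
den = np.polymul([tau, 1.0], [1.0, 2.0 * zeta * wn, wn**2])

# Companion (controller-canonical) state-space realization.
A, B, C, D = signal.tf2ss(num, den)
print("A =\n", A)
print("B =", B.ravel(), " C =", C.ravel(), " D =", D.ravel())

# Controllability check: the canonical form should be fully controllable.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(A.shape[0])])
print("controllable:", np.linalg.matrix_rank(ctrb) == A.shape[0])
```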

  10. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    Science.gov (United States)

    Parsons, Thomas D.

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869

  11. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    Science.gov (United States)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  12. Predictive Validity of Curriculum-Based Measures for English Learners at Varying English Proficiency Levels

    Science.gov (United States)

    Kim, Jennifer Sun; Vanderwood, Michael L.; Lee, Catherine Y.

    2016-01-01

    This study examined the predictive validity of curriculum-based measures in reading for Spanish-speaking English learners (ELs) at various levels of English proficiency. Third-grade Spanish-speaking EL students were screened during the fall using DIBELS Oral Reading Fluency (DORF) and Daze. Predictive validity was examined in relation to spring…

  13. Preliminary experimentally-validated forced and mixed convection computational simulations of the Rotatable Buoyancy Tunnel

    International Nuclear Information System (INIS)

    Clifford, Corey E.; Kimber, Mark L.

    2015-01-01

    Although computational fluid dynamics (CFD) has not been directly utilized to perform safety analyses of nuclear reactors in the United States, several vendors are considering adopting commercial numerical packages for current and future projects. To ensure the accuracy of these computational models, it is imperative to validate the assumptions and approximations built into commercial CFD codes against physical data from flows analogous to those in modern nuclear reactors. To this end, researchers at Utah State University (USU) have constructed the Rotatable Buoyancy Tunnel (RoBuT) test facility, which is designed to provide flow and thermal validation data for CFD simulations of forced and mixed convection scenarios. In order to evaluate the ability of current CFD codes to capture the complex physics associated with these types of flows, a computational model of the RoBuT test facility is created using the ANSYS Fluent commercial CFD code. The numerical RoBuT model is analyzed at identical conditions to several experimental trials undertaken at USU. Each experiment is reconstructed numerically and evaluated with the second-order Reynolds stress model (RSM). Two different thermal boundary conditions at the heated surface of the RoBuT test section are investigated: constant temperature (isothermal) and constant surface heat flux (isoflux). Additionally, the fluid velocity at the inlet of the test section is varied in an effort to modify the relative importance of natural convection heat transfer from the heated wall of the RoBuT. Mean velocity, both in the streamwise and transverse directions, as well as components of the Reynolds stress tensor at three points downstream of the RoBuT test section inlet are compared to results obtained from experimental trials. Early computational results obtained from this research initiative are in good agreement with experimental data obtained from the RoBuT facility and both the experimental data and numerical method can be used

  14. A comparative study of soft sensor design for lipid estimation of microalgal photobioreactor system with experimental validation.

    Science.gov (United States)

    Yoo, Sung Jin; Jung, Dong Hwi; Kim, Jung Hun; Lee, Jong Min

    2015-03-01

    This study examines the applicability of various nonlinear estimators for online estimation of the lipid concentration in a microalgae cultivation system. Lipid is a useful bio-product with many applications, including biofuels and bioactives. However, improving lipid productivity through real-time monitoring and control with experimental validation is limited because measurement of lipid in microalgae is a difficult and time-consuming task. In this study, estimation of the lipid concentration from other measurable sources, such as biomass or glucose sensors, was investigated. The extended Kalman filter (EKF), unscented Kalman filter (UKF) and particle filter (PF) were compared in various cases for their applicability to photobioreactor systems. Furthermore, simulation studies to identify appropriate types of sensors for estimating lipid were also performed. Based on the case studies, the most effective case was validated with experimental data, and it was found that the UKF and PF with time-varying system noise covariance are effective for the microalgal photobioreactor system. Copyright © 2014 Elsevier Ltd. All rights reserved.
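    A minimal sketch of one of the estimators compared above, an extended Kalman filter that infers an unmeasured lipid state from a biomass measurement; the two-state growth model, its parameters and the noise covariances are illustrative assumptions, not the kinetics or tuning used in the study.

```python
import numpy as np

# Toy 2-state model: x = [biomass X, lipid L]; X grows logistically,
# lipid accumulates proportionally to X (illustrative only).
dt, mu, Xmax, k_lip = 0.1, 0.08, 5.0, 0.05

def f(x):
    X, L = x
    return np.array([X + dt * mu * X * (1.0 - X / Xmax), L + dt * k_lip * X])

def F_jac(x):
    X, _ = x
    return np.array([[1.0 + dt * mu * (1.0 - 2.0 * X / Xmax), 0.0],
                     [dt * k_lip, 1.0]])

H = np.array([[1.0, 0.0]])            # only biomass is measured
Q = np.diag([1e-4, 1e-5])             # process noise (tuning parameter)
R = np.array([[1e-2]])                # biomass sensor noise

rng = np.random.default_rng(2)
x_true = np.array([0.2, 0.0])
x_hat, P = np.array([0.3, 0.0]), np.eye(2) * 0.1

for _ in range(200):
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(scale=np.sqrt(R[0, 0]))
    # EKF prediction
    Fk = F_jac(x_hat)
    x_hat, P = f(x_hat), Fk @ P @ Fk.T + Q
    # EKF update with the biomass measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true lipid = {x_true[1]:.3f}, estimated lipid = {x_hat[1]:.3f}")
```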

  15. Experimental validation for combustion analysis of GOTHIC code in 2-dimensional combustion chamber

    International Nuclear Information System (INIS)

    Lee, J. W.; Yang, S. Y.; Park, K. C.; Jung, S. H.

    2002-01-01

    In this study, the prediction capability of the GOTHIC code for hydrogen combustion phenomena was validated against the results of a two-dimensional premixed hydrogen combustion experiment performed by Seoul National University. The experimental chamber has about 24 liters of free volume (1 × 0.024 × 1 m³) and a two-dimensional rectangular shape. The tests were performed with a 10% hydrogen/air gas mixture and conducted with combinations of two igniter positions (top center, top corner) and two boundary conditions (bottom fully open, bottom right half open). Using the lumped parameter and mechanistic combustion models in the GOTHIC code, the SNU experiments were simulated under the same conditions. The GOTHIC predictions of the hydrogen combustion phenomena did not compare well with the experimental results. In the lumped parameter simulation, the combustion time was predicted appropriately, but no local information related to the combustion phenomena could be obtained. In the mechanistic combustion analysis, the predicted physical combustion behaviour of the gas mixture did not match the experimental one. In the open-boundary cases, GOTHIC predicted a very long combustion time and could not simulate the flame front propagation appropriately. Although GOTHIC showed flame propagation in an adiabatic calculation, the induction time of combustion was still very long compared with the experimental results. It was also found that the combustion model of the GOTHIC code has some weak points in simulating the combustion of low-concentration hydrogen mixtures.

  16. NUMERICAL MODELLING AND EXPERIMENTAL INFLATION VALIDATION OF A BIAS TWO-WHEEL TIRE

    Directory of Open Access Journals (Sweden)

    CHUNG KET THEIN

    2016-02-01

    Full Text Available This paper presents a parametric study on the development of a computational model for a bias two-wheel tire through finite element analysis (FEA). An 80/90-17 bias two-wheel tire was adopted, which is made up of four major layers of rubber compound with different material properties to strengthen the structure. A Mooney-Rivlin hyperelastic model was applied to represent the behaviour of the incompressible rubber compound. A 3D tire model was built for structural static finite element analysis, and the result was validated through an inflation analysis. Structural static finite element analysis is suitable for evaluating the tire design and improving the tire behaviour towards the desired performance. The experimental tire was inflated at various pressures and the geometries of the numerical and experimental tires were compared. There is good agreement between the numerical simulation model and the experimental results, indicating that the simulation model can be applied to bias two-wheel tire design in order to predict the tire behaviour and improve its mechanical characteristics.

  17. Content Validity and Acceptability of a Developed Worktext in Basic Mathematics 2

    Directory of Open Access Journals (Sweden)

    Mae Joy F. Tan-Espinar

    2017-02-01

    Full Text Available Teaching tertiary mathematics entails the use of instructional materials which lead to independent learning. The study evaluated the content validity and level of acceptability of a developed worktext in Basic Mathematics 2 and tested for a significant difference between the evaluations of the expert and student respondents. Likewise, the study examined the difference in pretest and posttest performance between the experimental and control groups and the difference between the posttests of the two groups. The study utilized the descriptive comparative method to determine the validity and acceptability of the developed worktext and the difference between the evaluations of the experts/teachers and the student respondents. A quasi-experimental design was also used to find out whether the worktext is effective in teaching the course, employing the t-test for correlated samples and the t-test for independent samples. The results showed that the worktext was rated very much valid in content and very much acceptable, and that the difference in the posttest between the experimental and control groups was significant. It is concluded that the worktext is effective for use in teaching the course.
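    The two significance tests mentioned above can be reproduced in a few lines; the sketch below runs a paired (correlated-samples) t-test and an independent-samples t-test with scipy.stats on synthetic scores, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic pretest/posttest scores for illustration only (not the study's data).
pre_exp  = rng.normal(20, 4, 30)
post_exp = pre_exp + rng.normal(8, 3, 30)     # experimental group (with worktext)
post_ctl = rng.normal(24, 4, 30)              # control group posttest

# t-test for correlated (paired) samples: pretest vs posttest, same students.
t_paired, p_paired = stats.ttest_rel(pre_exp, post_exp)
# t-test for independent samples: experimental vs control posttest.
t_ind, p_ind = stats.ttest_ind(post_exp, post_ctl, equal_var=False)

print(f"paired:      t = {t_paired:6.2f}, p = {p_paired:.4f}")
print(f"independent: t = {t_ind:6.2f}, p = {p_ind:.4f}")
```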

  18. Validation of a numerical 3-D fluid-structure interaction model for a prosthetic valve based on experimental PIV measurements.

    Science.gov (United States)

    Guivier-Curien, Carine; Deplano, Valérie; Bertrand, Eric

    2009-10-01

    A numerical 3-D fluid-structure interaction (FSI) model of a prosthetic aortic valve was developed, based on a commercial computational fluid dynamics (CFD) software program using an Arbitrary Lagrangian-Eulerian (ALE) formulation. To ensure the validity of this numerical model, an equivalent experimental model accounting for both the geometrical features and the hydrodynamic conditions was also developed. The leaflet and flow behaviours around the bileaflet valve were investigated numerically and experimentally by performing particle image velocimetry (PIV) measurements. Through quantitative and qualitative comparisons, it was shown that the leaflet behaviour and the velocity fields were similar in both models. The present study allows the validation of a fully coupled 3-D FSI numerical model. This promising numerical tool could therefore be used to investigate clinical issues involving the aortic valve.

  19. Active Transportation Demand Management (ATDM) Trajectory Level Validation

    Data.gov (United States)

    Department of Transportation — The ATDM Trajectory Validation project developed a validation framework and a trajectory computational engine to compare and validate simulated and observed vehicle...

  20. Sliding spool design for reducing the actuation forces in direct operated proportional directional valves: Experimental validation

    International Nuclear Information System (INIS)

    Amirante, Riccardo; Distaso, Elia; Tamburrano, Paolo

    2016-01-01

    Highlights: • An innovative procedure to design a commercial proportional directional valve is shown. • Experimental tests are performed to demonstrate the flow force reduction. • The design is improved by means of a previously developed optimization procedure. • A great reduction in the flow forces without reducing the flow rate is demonstrated. - Abstract: This paper presents the experimental validation of a new methodology for the design of the spool surfaces of four-way three-position direct operated proportional directional valves. The proposed methodology is based on the re-design of both the compensation profile (the central conical surface of the spool) and the lateral surfaces of the spool, in order to reduce the flow forces acting on the spool and hence the actuation forces. The aim of this work is to extend the application range of these valves to higher values of pressure and flow rate, thus avoiding the employment of more expensive two-stage configurations under high-pressure and/or high-flow-rate conditions. The paper first presents a theoretical approach and a general strategy for the sliding spool design, applicable to any four-way three-position direct operated proportional directional valve. Then, the proposed approach is experimentally validated on a commercially available valve using a hydraulic circuit capable of measuring the flow rate as well as the actuation force over the entire spool stroke. The experimental tests, performed using both the electronic driver provided by the manufacturer and a manual actuation system, show that the novel spool surface requires remarkably lower actuation forces than the commercial configuration, while maintaining the same flow rate trend as a function of the spool position.
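    For orientation, the sketch below evaluates the classical steady-state axial flow force expression often used for spool metering edges, F = 2·Cd·Cv·A·Δp·cos(θ); the orifice geometry, coefficients and pressure drop are assumed values, and the formula is the textbook approximation, not the optimized-profile model of the paper.

```python
import numpy as np

def steady_flow_force(x, p_drop, w=0.02, Cd=0.7, Cv=0.98, theta_deg=69.0):
    """Classical steady-state axial flow force on a spool metering edge [N].

    x          spool opening [m]
    p_drop     pressure drop across the metering orifice [Pa]
    w          orifice area gradient (slot width) [m]   -- assumed value
    Cd, Cv     discharge and velocity coefficients       -- assumed values
    theta_deg  jet angle, ~69 deg for a sharp-edged orifice
    """
    A = w * x                        # metering area for a rectangular slot
    return 2.0 * Cd * Cv * A * p_drop * np.cos(np.radians(theta_deg))

for x_mm in (0.5, 1.0, 2.0):
    F = steady_flow_force(x_mm * 1e-3, p_drop=20e6)
    print(f"opening {x_mm:.1f} mm -> flow force ~ {F:.0f} N")
```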

  1. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations in a pneumatic system. 2^k experimental designs were applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to the detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and the best irradiation conditions. (author)
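    As a compact illustration of how main and interaction effects are extracted from a 2^k design, the sketch below builds a 2^3 design in coded units and computes the effect contrasts; the factor labels and the eight response values are hypothetical, not the INAA measurements.

```python
import itertools
import numpy as np

# Coded levels (-1/+1) of three studied variables (illustrative labels only).
factors = ["decay_time", "counting_time", "detector_distance"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs of a 2^3 design

# Hypothetical measured mass fractions for the eight runs (mg/kg),
# listed in the same order as the design rows.
y = np.array([10.2, 10.8, 10.1, 10.9, 11.5, 12.3, 11.4, 12.2])

# Main effect of a factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name:>17s}: {effect:+.2f}")

# Two-factor interaction: same contrast applied to the product of coded columns.
ab = design[:, 0] * design[:, 1]
print(f"decay_time x counting_time interaction: "
      f"{y[ab == 1].mean() - y[ab == -1].mean():+.2f}")
```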

  2. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations in a pneumatic system. 2^k experimental designs were applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to the detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and the best irradiation conditions. (author)

  3. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til [Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany) and Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany); Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany); Philips Healthcare, X-Ray Pre-Development, Veenpluis 4-6, 5684PC Best (Netherlands); Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany)

    2010-09-15

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.
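    The time-intensity-curve comparison quoted above (relative root mean square errors below 15%) can be expressed compactly; the sketch below computes a relative RMSE between a measured and a simulated curve, with synthetic curves and a normalization choice (RMS of the measured curve) that is an assumption rather than the authors' exact definition.

```python
import numpy as np

def relative_rmse(measured: np.ndarray, simulated: np.ndarray) -> float:
    """Relative root-mean-square error between two time-intensity curves."""
    return float(np.sqrt(np.mean((simulated - measured) ** 2))
                 / np.sqrt(np.mean(measured ** 2)))

# Synthetic gamma-variate-like curves standing in for angiographic TICs.
t = np.linspace(0.0, 10.0, 200)
measured  = t**2 * np.exp(-t)                       # "acquired" curve
simulated = 1.05 * t**2 * np.exp(-1.02 * t)         # "virtual angiogram" curve

print(f"relative RMSE = {100 * relative_rmse(measured, simulated):.1f}%")
```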

  4. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    International Nuclear Information System (INIS)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til

    2010-01-01

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.

  5. Experimental validation of calculation schemes connected with PWR absorbers and burnable poisons; Validation experimentale des schemas de calcul relatifs aux absorbants et poisons consommables dans les REP

    Energy Technology Data Exchange (ETDEWEB)

    Klenov, P.

    1995-10-01

    In France, 80% of the electricity is produced by PWR reactors. For better exploitation of these reactors, the modular computer code Apollo-II has been developed. This code computes the neutron flux by solving the transport equation with the discrete ordinates method or the collision probability method on extended configurations such as reactor cells, assemblies or small cores. For the validation of this code on mixed-oxide fuel lattices with absorbers, the experimental program Epicure was carried out in the Eole reactor. This thesis is devoted to the validation of the Apollo code against the results of the Epicure program. 43 refs., 65 figs., 1 append.

  6. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  7. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

    The problem of validating estimates of absorbed dose due to photon energy deposition is examined, together with the computational approaches used to estimate that energy deposition. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained.

  8. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
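
For readers unfamiliar with the first of the two reconstruction methods named above, an augmented Kalman filter estimates an unknown input by appending it to the state vector and letting a random-walk model drive it. The sketch below shows the idea on a hypothetical one-degree-of-freedom structure with an assumed strain-proportional measurement; the mass, stiffness, noise levels and the step force are invented values, and the real TBM attachment model is far more detailed.

```python
import numpy as np

# Hypothetical 1-DOF structure: m*x'' + c*x' + k*x = F(t); strain assumed proportional to x
m, c, k = 2.0, 4.0, 1.0e3      # kg, N s/m, N/m (illustrative values)
dt, n = 1e-3, 4000

A = np.array([[1.0, dt, 0.0],
              [-k / m * dt, 1.0 - c / m * dt, dt / m],
              [0.0, 0.0, 1.0]])        # state = [displacement, velocity, force]; force modelled as a random walk
H = np.array([[1.0, 0.0, 0.0]])        # only the (strain-proportional) displacement is measured
Q = np.diag([1e-12, 1e-9, 1e1])        # process noise; the force entry sets how fast the estimate can move
R = np.array([[1e-10]])                # measurement noise variance

rng = np.random.default_rng(0)
x_true = np.zeros(3)
x_est, P = np.zeros(3), np.eye(3)
F_hat = np.zeros(n)
for i in range(n):
    # "Truth": a 100 N step force applied at t = 1 s, plus measurement noise
    x_true[2] = 100.0 if i * dt > 1.0 else 0.0
    x_true = A @ x_true
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))

    # Kalman predict / update
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(3) - K @ H) @ P
    F_hat[i] = x_est[2]

print(f"estimated force near the end of the record: {F_hat[-1]:.1f} N (true value 100 N)")
```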

  9. Out-of-plane buckling of pantographic fabrics in displacement-controlled shear tests: experimental results and model validation

    Science.gov (United States)

    Barchiesi, Emilio; Ganzosch, Gregor; Liebold, Christian; Placidi, Luca; Grygoruk, Roman; Müller, Wolfgang H.

    2018-01-01

    Due to the latest advancements in 3D printing technology and rapid prototyping techniques, the production of materials with complex geometries has become more affordable than ever. Pantographic structures, because of their attractive features, both in dynamics and statics and both in elastic and inelastic deformation regimes, deserve to be thoroughly investigated with experimental and theoretical tools. Herein, experimental results relative to displacement-controlled large deformation shear loading tests of pantographic structures are reported. In particular, five differently sized samples are analyzed up to first rupture. Results show that the deformation behavior is strongly nonlinear, and the structures are capable of undergoing large elastic deformations without reaching complete failure. Finally, a cutting edge model is validated by means of these experimental results.

  10. Hypertension Knowledge-Level Scale (HK-LS): A Study on Development, Validity and Reliability

    OpenAIRE

    Erkoc, Sultan Baliz; Isikli, Burhanettin; Metintas, Selma; Kalyoncu, Cemalettin

    2012-01-01

    This study was conducted to develop a scale to measure knowledge about hypertension among Turkish adults. The Hypertension Knowledge-Level Scale (HK-LS) was generated based on content, face, and construct validity, internal consistency, test re-test reliability, and discriminative validity procedures. The final scale had 22 items with six sub-dimensions. The scale was applied to 457 individuals aged ≥18 years, and 414 of them were re-evaluated for test-retest reliability. The six sub-dimensio...

  11. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  12. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  13. An experimentally validated simulation model for a four-stage spray dryer

    DEFF Research Database (Denmark)

    Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2017-01-01

    The spray dryer is divided into four consecutive stages: a primary spray drying stage, two heated fluid bed stages, and a cooling fluid bed stage. Each of these stages in the model is assumed ideally mixed and the dynamics are described by mass- and energy balances. These balance equations are coupled with constitutive equations such as a thermodynamic model, the water evaporation rate, the heat transfer rates, and an equation for the stickiness of the powder (glass transition temperature). Laboratory data is used to model the equilibrium moisture content and the glass transition temperature of the powder. The resulting mathematical model is an index-1 differential algebraic equation (DAE) model with 12 states, 9 inputs, 8 disturbances, and 30 parameters. The parameters in the model are identified from well-excited experimental data obtained from the industrial-type spray dryer. The simulated outputs of the model are validated …

  14. Modeling and experimental validation of water mass balance in a PEM fuel cell stack

    DEFF Research Database (Denmark)

    Liso, Vincenzo; Araya, Samuel Simon; Olesen, Anders Christian

    2016-01-01

    Polymer electrolyte membrane (PEM) fuel cells require good hydration in order to deliver high performance and ensure long life operation. Water is essential for proton conductivity in the membrane, which increases by nearly six orders of magnitude from dry to fully hydrated. Adequate water management in a PEM fuel cell is crucial in order to avoid an imbalance between water production and water removal from the fuel cell. In the present study, a novel mathematical zero-dimensional model has been formulated for the water mass balance and hydration of a polymer electrolyte membrane. This model is validated against experimental data. The results show that the fuel cell water balance calculated by this model fits the experimental data points better than a model in which only steady-state operation is considered. We conclude that this discrepancy is due to a different rate of water...
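
To make the zero-dimensional water bookkeeping concrete, the sketch below compares the electrochemically produced water (I·N_cells/2F) with the amount the cathode exhaust could carry away as saturated vapour. All operating values are assumptions for illustration; the published model additionally resolves membrane transport and transient terms.

```python
F = 96485.0          # C/mol, Faraday constant

def p_sat_water(T_c):
    """Saturation pressure of water [Pa], Antoine equation (valid roughly 1-100 degC)."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + T_c))
    return p_mmhg * 133.322

# Assumed stack operating point (illustrative, not the values of the cited study)
I, n_cells = 50.0, 20                       # A, number of cells
stoich_air, T_c, p = 2.5, 60.0, 101325.0    # cathode stoichiometry, degC, Pa

n_h2o_prod = I * n_cells / (2 * F)          # mol/s of product water
n_o2_used = I * n_cells / (4 * F)           # mol/s of oxygen consumed
n_air_in = stoich_air * n_o2_used / 0.21    # mol/s of dry air supplied
n_dry_out = n_air_in - n_o2_used            # mol/s of dry gas leaving the cathode

# Maximum water vapour the saturated exhaust can carry
n_h2o_removal = n_dry_out * p_sat_water(T_c) / (p - p_sat_water(T_c))

print(f"water produced     : {n_h2o_prod * 18 * 3600:.1f} g/h")
print(f"removable as vapour: {n_h2o_removal * 18 * 3600:.1f} g/h")
print("net liquid accumulation" if n_h2o_prod > n_h2o_removal else "exhaust can stay under-saturated")
```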

  15. Theoretical model and experimental validation of a direct-expansion solar assisted heat pump for domestic hot water applications

    International Nuclear Information System (INIS)

    Moreno-Rodríguez, A.; González-Gil, A.; Izquierdo, M.; Garcia-Hernando, N.

    2012-01-01

    This paper presents the development of a theoretical model to determine the operating parameters and consumption of a domestic hot water (DHW) installation which uses a direct-expansion solar assisted heat pump (DXSAHP) with refrigerant R-134a, a compressor with a rated capacity of 1.1 kW and collectors with a total area of 5.6 m². The model results have been compared with, and validated against, the experimental results obtained with the equipment installed at the University Carlos III, south of Madrid. The analysis was conducted over the course of a year, and the results are presented as functions of the meteorological and process variables for several representative days. Taking into account the thermal losses of the installation and the dependency on the operating conditions, the experimental coefficient of performance lies between 1.7 and 2.9, while the DHW tank temperature over the course of the study is 51 °C. -- Highlights: ► The study aims to present a new theoretical model and an experimental validation. ► The experimental COP varies between 1.7 and 2.9 (max. condensation temperature 57 °C). ► The operating parameters respond to the solar radiation. The COP may increase up to 50%. ► The useful surface area varies between 50% and 85% of the total surface. ► The system stops if conditions exceed the maximum value of the absorbed heat.

  16. Experimental validation of a simple, low-cost, T-junction droplet generator fabricated through 3D printing

    Science.gov (United States)

    Donvito, Lidia; Galluccio, Laura; Lombardo, Alfio; Morabito, Giacomo; Nicolosi, Alfio; Reno, Marco

    2015-03-01

    Three-dimensional printing has been recently proposed and assessed for continuous flow microfluidic devices. In this paper the focus is on a new application of this rapid and low cost method for microfluidic device prototyping: droplets production through a T-junction generator. The feasibility of this new methodology is assessed by means of an experimental study in which the statistical parameters which characterize the production of droplets are analyzed. Furthermore, this study assesses the validity of previous theoretical and experimental results, obtained for a PDMS T-junction droplet generator, also in the case of a 3D printed Acrylonitrile microfluidic chip. Finally, the feasibility of producing monodisperse droplets by analyzing the polydispersity index of the prepared droplets is demonstrated.
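
Monodispersity in such experiments is usually summarised by a polydispersity index computed from the measured droplet diameters; the coefficient-of-variation definition used below is one common convention, and both the diameters and the 5% threshold are illustrative assumptions.

```python
import numpy as np

def polydispersity_index(diameters):
    """Coefficient of variation of the droplet diameters (one common PDI convention)."""
    d = np.asarray(diameters, dtype=float)
    return d.std(ddof=1) / d.mean()

# Hypothetical droplet diameters measured from frames of the T-junction video [micrometres]
diameters = [212, 208, 215, 210, 209, 213, 211, 207, 214, 210]
pdi = polydispersity_index(diameters)
verdict = "monodisperse" if pdi < 0.05 else "polydisperse"
print(f"PDI = {pdi:.3f}  ->  {verdict} by a commonly quoted 5% criterion")
```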

  17. A mathematical model for hydrogen evolution in an electrochemical cell and experimental validation

    International Nuclear Information System (INIS)

    Mahmut D Mat; Yuksel Kaplan; Beycan Ibrahimoglu; Nejat Veziroglu; Rafig Alibeyli; Sadiq Kuliyev

    2006-01-01

    Electrochemical reactions are widely employed in various industrial areas such as hydrogen production, the chlorate process, electroplating, metal purification, etc. Most of these processes take place with gas evolution on the electrodes. The presence of a gas phase in the liquid phase makes the problem a two-phase flow problem, for which much knowledge is available from heat transfer and fluid mechanics studies. The motivation of this study is to investigate hydrogen release in electrolysis processes from a two-phase flow point of view and to assess the effect of gas release on the electrolysis process. Hydrogen evolution, the flow field and the current density distribution in an electrochemical cell are investigated with a two-phase flow model. The mathematical model involves solutions of transport equations for the variables of each phase, with allowance for interphase transfer of mass and momentum. An experimental set-up is established to collect data to validate and improve the mathematical model. The void fraction is determined from measurement of resistivity changes in the system due to the presence of bubbles. A good agreement is obtained between numerical results and experimental data. (authors)
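
The void-fraction measurement from resistivity changes mentioned above can be illustrated with an effective-medium relation; Maxwell's classical expression for a non-conducting dispersed phase is used below purely as an example, since the calibration actually used by the authors is not given in the abstract.

```python
def void_fraction_maxwell(sigma_mix, sigma_liquid):
    """Gas void fraction from mixture/liquid conductivity via Maxwell's relation
    sigma_mix / sigma_liquid = 2 (1 - alpha) / (2 + alpha)  (non-conducting dispersed phase)."""
    r = sigma_mix / sigma_liquid
    return 2.0 * (1.0 - r) / (2.0 + r)

# Hypothetical readings: conductivity drops as hydrogen bubbles accumulate near the cathode
sigma_liquid = 12.0          # S/m, bubble-free electrolyte
for sigma_mix in (12.0, 11.0, 10.0, 9.0):
    alpha = void_fraction_maxwell(sigma_mix, sigma_liquid)
    print(f"sigma_mix = {sigma_mix:4.1f} S/m  ->  void fraction ~ {alpha:.3f}")
```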

  18. DC microgrid power flow optimization by multi-layer supervision control. Design and experimental validation

    International Nuclear Information System (INIS)

    Sechilariu, Manuela; Wang, Bao Chao; Locment, Fabrice; Jouglet, Antoine

    2014-01-01

    Highlights: • DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control. • Power balancing following power flow optimization while providing interface for smart grid communication. • Optimization under constraints: storage capability, grid power limitations, grid time-of-use pricing. • Experimental validation of DC microgrid power flow optimization by multi-layer supervision control. • DC microgrid able to perform peak shaving, to avoid undesired injection, and to make full use of locally produced energy. - Abstract: Urban areas have great potential for photovoltaic (PV) generation; however, direct PV power injection has limitations at high levels of PV penetration. It places additional burdens on grid power balancing because of the lack of ability to respond to grid issues such as reducing grid peak consumption or avoiding undesired injections. The smart grid implementation, which is designed to meet these requirements, is facilitated by the development of microgrids. This paper presents a DC microgrid (PV array, storage, power grid connection, DC load) with multi-layer supervision control which handles instantaneous power balancing following the power flow optimization while providing an interface for smart grid communication. The optimization takes into account forecasts of PV power production and load power demand, while satisfying constraints such as storage capability, grid power limitations, grid time-of-use pricing and grid peak hours. The optimization, whose efficiency is related to the prediction accuracy, is carried out by mixed integer linear programming. Experimental results show that the proposed microgrid structure is able to control the power flow at near optimum cost and ensures self-correcting capability. It can respond to the issues of performing peak shaving, avoiding undesired injection, and making full use of locally produced energy with respect to rigid element constraints.
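
As an illustration of the optimization layer described above, the sketch below schedules grid import and battery power over a day under a time-of-use tariff, storage limits and a grid power limit. It is a deliberately simplified linear program (surplus PV may spill to the grid at zero revenue, and no integer variables or peak-hour rules are included), whereas the supervision system in the paper solves a richer mixed integer formulation; all profiles and limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly profiles over 24 h (kW); dt = 1 h
T = 24
t = np.arange(T)
pv = np.clip(4.0 * np.sin(np.pi * (t - 6) / 12), 0, None)          # PV production
load = 2.0 + 1.5 * np.exp(-0.5 * ((t - 19) / 2.0) ** 2)            # evening peak demand
price = np.where((t >= 7) & (t < 22), 0.18, 0.10)                  # time-of-use tariff, EUR/kWh

g_max, b_max = 6.0, 3.0                       # grid and battery power limits (kW)
soc0, soc_min, soc_max = 4.0, 1.0, 8.0        # battery energy limits (kWh)

# Decision vector x = [import_0..23, export_0..23, battery_0..23] (battery > 0 = charging)
c = np.concatenate([price, np.zeros(T), np.zeros(T)])               # minimise purchased-energy cost

# Hourly power balance: import - export + pv = load + battery
A_eq = np.hstack([np.eye(T), -np.eye(T), -np.eye(T)])
b_eq = load - pv

# State of charge kept within limits: soc_min <= soc0 + cumsum(battery) <= soc_max
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([np.zeros((T, 2 * T)), L]),
                  np.hstack([np.zeros((T, 2 * T)), -L])])
b_ub = np.concatenate([np.full(T, soc_max - soc0), np.full(T, soc0 - soc_min)])

bounds = [(0, g_max)] * (2 * T) + [(-b_max, b_max)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(f"optimal daily purchase cost: {res.fun:.2f} EUR")
```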

  19. Levels of Personality Functioning Scale Self-Report Validation Journal of Personality Assessment

    OpenAIRE

    Good, Evan; Hopwood, Christopher; Morey, Leslie

    2017-01-01

    Validation of the Levels of Personality Functioning Scale - Self-Report. Results suggest that the measure has a robust single dimension and that it correlates in a very general manner with a wide range of maladaptive personality variables, consistent with its purpose as a measure of non-specific personality pathology.

  20. Electrocatalysis of borohydride oxidation: a review of density functional theory approach combined with experimental validation

    International Nuclear Information System (INIS)

    Sison Escaño, Mary Clare; Arevalo, Ryan Lacdao; Kasai, Hideaki; Gyenge, Elod

    2014-01-01

    The electrocatalysis of borohydride oxidation is a complex, up-to-eight-electron transfer process, which is essential for development of efficient direct borohydride fuel cells. Here we review the progress achieved by density functional theory (DFT) calculations in explaining the adsorption of BH 4 − on various catalyst surfaces, with implications for electrocatalyst screening and selection. Wherever possible, we correlate the theoretical predictions with experimental findings, in order to validate the proposed models and to identify potential directions for further advancements. (topical review)

  1. Electrocatalysis of borohydride oxidation: a review of density functional theory approach combined with experimental validation

    Science.gov (United States)

    Sison Escaño, Mary Clare; Lacdao Arevalo, Ryan; Gyenge, Elod; Kasai, Hideaki

    2014-09-01

    The electrocatalysis of borohydride oxidation is a complex, up-to-eight-electron transfer process, which is essential for development of efficient direct borohydride fuel cells. Here we review the progress achieved by density functional theory (DFT) calculations in explaining the adsorption of BH4- on various catalyst surfaces, with implications for electrocatalyst screening and selection. Wherever possible, we correlate the theoretical predictions with experimental findings, in order to validate the proposed models and to identify potential directions for further advancements.

  2. Content validity and nursing sensitivity of community-level outcomes from the Nursing Outcomes Classification (NOC).

    Science.gov (United States)

    Head, Barbara J; Aquilino, Mary Lober; Johnson, Marion; Reed, David; Maas, Meridean; Moorhead, Sue

    2004-01-01

    To evaluate the content validity and nursing sensitivity of six community-level outcomes from the Nursing Outcomes Classification (NOC; Johnson, Maas, & Moorhead, 2000). A survey research design was used. Questionnaires were mailed to 300 public health nursing experts; 102 nurses responded. Experts evaluated between 11 and 30 indicators for each of the six outcomes for: (a) importance of the indicators for measuring the outcome, and (b) influence of nursing on the indicators. Content validity and nursing sensitivity of the outcomes were estimated with a modified Fehring technique. All outcomes were deemed important; only Community Competence had an outcome content validity score < .80. The outcome sensitivity score for Community Health: Immunity was .80; other outcome scores ranged from .62-.70. Indicator ratios for all 102 indicators met the study criterion for importance, with 87% designated as critical and 13% as supplemental. Sensitivity ratios reflected judgments that 45% of the indicators were sensitive to nursing intervention. The study provided evidence of outcome content validity and nursing sensitivity of the study outcomes; further validation research is recommended, followed by testing of the study outcomes in clinical practice. Community-level nursing-sensitive outcomes will potentially enable study of the efficacy and effectiveness of public health interventions focused on improving health of populations and communities.

  3. Validation of an experimental polyurethane model for biomechanical studies on implant supported prosthesis - tension tests

    Directory of Open Access Journals (Sweden)

    Mariane Miyashiro

    2011-06-01

    OBJECTIVES: The complexity and heterogeneity of human bone, as well as ethical issues, frequently hinder the development of clinical trials. The purpose of this in vitro study was to determine the modulus of elasticity of a polyurethane isotropic experimental model via tension tests, comparing the results to those reported in the literature for mandibular bone, in order to validate the use of such a model in lieu of mandibular bone in biomechanical studies. MATERIAL AND METHODS: Forty-five polyurethane test specimens were divided into 3 groups of 15 specimens each, according to the ratio (A/B) of polyurethane reagents (PU-1: 1/0.5, PU-2: 1/1, PU-3: 1/1.5). RESULTS: Tension tests were performed in each experimental group and the modulus of elasticity values found were 192.98 MPa (SD=57.20) for PU-1, 347.90 MPa (SD=109.54) for PU-2 and 304.64 MPa (SD=25.48) for PU-3. CONCLUSION: The concentration of choice for building the experimental model was 1/1.
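
The modulus of elasticity reported above is simply the slope of the initial linear region of the measured stress-strain curve. A minimal sketch with invented data points from such a region:

```python
import numpy as np

# Hypothetical points from the linear region of one polyurethane tension test
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])      # dimensionless
stress = np.array([0.00, 0.61, 1.22, 1.80, 2.43, 3.01])            # MPa

# Young's modulus = slope of the least-squares line through the linear region
E, intercept = np.polyfit(strain, stress, 1)
print(f"E ~ {E:.0f} MPa")   # comparable in magnitude to the values reported for the 1/1 and 1/1.5 mixes
```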

  4. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Dereverberation is required in various speech processing applications such as handsfree telephony and voice-controlled systems, especially when the signals used were recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  5. Experimental Peptide Identification Repository (EPIR): an integrated peptide-centric platform for validation and mining of tandem mass spectrometry data

    DEFF Research Database (Denmark)

    Kristensen, Dan Bach; Brønd, Jan Christian; Nielsen, Peter Aagaard

    2004-01-01

    LC MS/MS has become an established technology in proteomic studies, and with the maturation of the technology the bottleneck has shifted from data generation to data validation and mining. To address this bottleneck we developed the Experimental Peptide Identification Repository (EPIR), which is an integrated software platform for storage, validation, and mining of LC MS/MS-derived peptide evidence. EPIR is a cumulative data repository where precursor ions are linked to peptide assignments and protein associations returned by a search engine (e.g. Mascot, Sequest, or PepSea). Any number of datasets can …

  6. Principles of validation of diagnostic assays for infectious diseases

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    Assay validation requires a series of inter-related processes. Assay validation is an experimental process: reagents and protocols are optimized by experimentation to detect the analyte with accuracy and precision. Assay validation is a relative process: its diagnostic sensitivity and diagnostic specificity are calculated relative to test results obtained from reference animal populations of known infection/exposure status. Assay validation is a conditional process: classification of animals in the target population as infected or uninfected is conditional upon how well the reference animal population used to validate the assay represents the target population; accurate predictions of the infection status of animals from test results (PV+ and PV-) are conditional upon the estimated prevalence of disease/infection in the target population. Assay validation is an incremental process: confidence in the validity of an assay increases over time when use confirms that it is robust as demonstrated by accurate and precise results; the assay may also achieve increasing levels of validity as it is upgraded and extended by adding reference populations of known infection status. Assay validation is a continuous process: the assay remains valid only insofar as it continues to provide accurate and precise results as proven through statistical verification. Therefore, the work required for validation of diagnostic assays for infectious diseases does not end with a time-limited series of experiments based on a few reference samples; rather, assuring valid test results from an assay requires constant vigilance and maintenance of the assay, along with reassessment of its performance characteristics for each unique population of animals to which it is applied. (author)
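
The conditional character of predictive values described above follows from Bayes' rule: PV+ and PV- depend on the prevalence in the target population as well as on diagnostic sensitivity and specificity. A small worked sketch with assumed reference-population counts and an assumed prevalence:

```python
def test_characteristics(tp, fn, tn, fp, prevalence):
    """Diagnostic sensitivity/specificity from a reference population, and the
    predictive values obtained when the assay is applied at a given prevalence."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))
    npv = sp * (1 - prevalence) / (sp * (1 - prevalence) + (1 - se) * prevalence)
    return se, sp, ppv, npv

# Hypothetical reference panel: 200 infected and 400 uninfected animals
se, sp, ppv, npv = test_characteristics(tp=190, fn=10, tn=388, fp=12, prevalence=0.05)
print(f"Se = {se:.2f}, Sp = {sp:.2f}, PV+ = {ppv:.2f}, PV- = {npv:.3f} at 5% prevalence")
```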

  7. Alteration of 'R7T7' type nuclear glasses: statistical approach, experimental validation, local evolution model

    International Nuclear Information System (INIS)

    Thierry, F.

    2003-02-01

    The aim of this work is to propose an evolution of nuclear (R7T7-type) glass alteration modeling. The first part of this thesis concerns the development and validation of the 'r(t)' model. This model, which predicts the decrease of alteration rates in confined conditions, is based upon a coupling between a first-order dissolution law and a diffusion barrier effect of the alteration gel layer. The values and the uncertainties regarding the main adjustable parameters of the model (α, Dg and C*) have been determined from a systematic study of the available experimental data. A program called INVERSION has been written for this purpose. This work led to the characterization of the validity domain of the 'r(t)' model and to its parametrization. Validation experiments have been undertaken, confirming the validity of the parametrization over 200 days. A new model is proposed in the second part of this thesis. It is based on the inhibition of the glass dissolution reaction by silicon, coupled with a local description of silicon retention in the alteration gel layer. This model predicts the evolutions of boron and silicon concentrations in solution as well as the concentrations and retention profiles in the gel layer. These predictions have been compared to measurements of retention profiles by the secondary ion mass spectrometry (SIMS) method. The model has been validated on gel-layer fractions whose reactivity shows low or moderate disparities. (author)

  8. Experimental vibration level analysis of a Francis turbine

    International Nuclear Information System (INIS)

    Bucur, D M; Dunca, G; Calinoiu, C

    2012-01-01

    In this study the vibration level of a Francis turbine is investigated by experimental work on site. Measurements are carried out for different power output values, in order to highlight the influence of the operation regimes on the turbine behavior. The study focuses on the turbine shaft, to identify the mechanical vibration sources, and on the draft tube, in order to identify the hydraulic vibration sources. Analyzing the vibration results, recommendations regarding the operation of the turbine, at partial load close to minimum values, in the middle of the operating domain or close to maximum values of electric power, can be made in order to keep relatively low levels of vibration. Finally, conclusions are drawn in order to present the real sources of the vibrations.

  9. Experimental validation of a simple, low-cost, T-junction droplet generator fabricated through 3D printing

    International Nuclear Information System (INIS)

    Donvito, Lidia; Galluccio, Laura; Lombardo, Alfio; Morabito, Giacomo; Nicolosi, Alfio; Reno, Marco

    2015-01-01

    Three-dimensional printing has been recently proposed and assessed for continuous flow microfluidic devices. In this paper the focus is on a new application of this rapid and low cost method for microfluidic device prototyping: droplets production through a T-junction generator. The feasibility of this new methodology is assessed by means of an experimental study in which the statistical parameters which characterize the production of droplets are analyzed. Furthermore, this study assesses the validity of previous theoretical and experimental results, obtained for a PDMS T-junction droplet generator, also in the case of a 3D printed Acrylonitrile microfluidic chip. Finally, the feasibility of producing monodisperse droplets by analyzing the polydispersity index of the prepared droplets is demonstrated. (paper)

  10. Three-dimensional deformation response of a NiTi shape memory helical-coil actuator during thermomechanical cycling: experimentally validated numerical model

    Science.gov (United States)

    Dhakal, B.; Nicholson, D. E.; Saleeb, A. F.; Padula, S. A., II; Vaidyanathan, R.

    2016-09-01

    Shape memory alloy (SMA) actuators often operate under a complex state of stress for an extended number of thermomechanical cycles in many aerospace and engineering applications. Hence, it becomes important to account for multi-axial stress states and deformation characteristics (which evolve with thermomechanical cycling) when calibrating any SMA model for implementation in large-scale simulation of actuators. To this end, the present work is focused on the experimental validation of an SMA model calibrated for the transient and cyclic evolutionary behavior of shape memory Ni49.9Ti50.1, for the actuation of axially loaded helical-coil springs. The approach requires both experimental and computational aspects to appropriately assess the thermomechanical response of these multi-dimensional structures. As such, an instrumented and controlled experimental setup was assembled to obtain temperature, torque, degree of twist and extension, while controlling end constraints during heating and cooling of an SMA spring under a constant externally applied axial load. The computational component assesses the capabilities of a general, multi-axial, SMA material-modeling framework, calibrated for Ni49.9Ti50.1 with regard to its usefulness in the simulation of SMA helical-coil spring actuators. Axial extension, being the primary response, was examined on an axially-loaded spring with multiple active coils. Two different conditions of end boundary constraint were investigated in both the numerical simulations as well as the validation experiments: Case (1) where the loading end is restrained against twist (and the resulting torque measured as the secondary response) and Case (2) where the loading end is free to twist (and the degree of twist measured as the secondary response). The present study focuses on the transient and evolutionary response associated with the initial isothermal loading and the subsequent thermal cycles under applied constant axial load. The experimental

  11. Experimental validation of Villain's conjecture about magnetic ordering in quasi-1D helimagnets

    International Nuclear Information System (INIS)

    Cinti, F.; Rettori, A.; Pini, M.G.; Mariani, M.; Micotti, E.; Lascialfari, A.; Papinutto, N.; Amato, A.; Caneschi, A.; Gatteschi, D.; Affronte, M.

    2010-01-01

    Low-temperature magnetic susceptibility, zero-field muon spin resonance and specific heat measurements have been performed on the quasi-one-dimensional (1D) molecular helimagnetic compound Gd(hfac)3NITEt. The specific heat presents two anomalies, at T0 = 2.19(2) K and TN = 1.88(2) K, while susceptibility and zero-field muon spin resonance show anomalies only at TN = 1.88(2) K. The results suggest an experimental validation of Villain's conjecture of a two-step magnetic ordering in quasi-1D XY helimagnets: the paramagnetic and the helical spin solid phases are separated by a chiral spin liquid, where translational invariance is broken without violation of rotational invariance.

  12. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    Science.gov (United States)

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  13. Experimental Validation of an FSW Model with an Enhanced Friction Law: Application to a Threaded Cylindrical Pin Tool

    Directory of Open Access Journals (Sweden)

    Narges Dialami

    2017-11-01

    This work adopts a fast and accurate two-stage computational strategy for the analysis of FSW (friction stir welding) processes using threaded cylindrical pin tools. The coupled thermo-mechanical problem is equipped with an enhanced friction model to include the effect of non-uniform pressure distribution under the pin shoulder. The overall numerical strategy is successfully validated by the experimental measurements provided by the industrial partner (Sapa). The verification of the numerical model using the experimental evidence is not only accomplished in terms of temperature evolution but also in terms of torque, longitudinal, transversal and vertical forces.

  14. Development and Validation of the Persian Version of the Acceptable Noise Level (ANL) Test in Normal Children Aged 5-8 Years

    Directory of Open Access Journals (Sweden)

    Abdollah Moossavi

    2016-06-01

    Background: The goal of the present study was to develop and validate the Persian version of the Acceptable Noise Level (ANL) test in normal, Persian-speaking children aged 5-8 years. Methods: This tool-making and non-experimental research was conducted in two stages. In the first stage the proper story was selected and recorded after evaluation of its content validity. In the second stage this test material was administered to a total of 181 normal children (97 girls and 84 boys), randomly chosen from the population of preschool and primary school children of Tehran (District 5), aged 5-8 years, in four age groups, in order to evaluate the reliability of the test, develop the Persian version of the ANL test and assess its changes during growth. Lawshe’s method and Cronbach’s alpha coefficient were used to assess the content validity and reliability of the test, respectively. The Mann-Whitney U test was used to examine gender differences, and the Kruskal-Wallis test to examine age differences. Results: A test-retest correlation of 0.74 indicated acceptable reliability of the test. Significant differences were found between most of the age groups for the ANL mean scores (P < 0.05). Conclusion: The study results indicated good validity and reliability of the Persian version of the ANL test in children. Therefore this test can be useful in designing classrooms suitable for 5-8 year-old children of both genders.
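
Lawshe's method, cited above for content validity, reduces each item to a content validity ratio CVR = (n_e - N/2)/(N/2), where n_e is the number of panellists rating the item as essential out of N. A minimal sketch with invented panel counts:

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Hypothetical ratings from a 10-member expert panel for three candidate items
for item, n_e in {"item 1": 9, "item 2": 7, "item 3": 4}.items():
    print(f"{item}: CVR = {content_validity_ratio(n_e, 10):+.2f}")
```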

  15. Experimental validation of GADRAS's coupled neutron-photon inverse radiation transport solver

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Harding, Lee T.

    2010-01-01

    Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
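
The nonlinear-regression idea behind the inverse solver can be illustrated on a toy one-parameter problem: adjust a shield thickness in a simple exponential-attenuation forward model until it reproduces noisy "measured" count rates. This sketch uses invented coefficients and is in no way the coupled neutron-photon transport model of the study.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: count rates after exponential attenuation in a shield of thickness x [cm]
mu, c0 = 0.55, 1.0e4                     # assumed attenuation coefficient [1/cm] and bare rate [1/s]
weights = np.array([1.0, 0.6, 0.3])      # assumed relative line intensities

def forward(x):
    return c0 * weights * np.exp(-mu * x)

# "Measured" rates generated with x_true = 3.2 cm plus 2% noise
rng = np.random.default_rng(1)
measured = forward(3.2) * (1 + 0.02 * rng.standard_normal(3))

# Nonlinear regression: adjust the thickness until the model reproduces the measurements
fit = least_squares(lambda x: forward(x[0]) - measured, x0=[1.0])
print(f"inferred thickness: {fit.x[0]:.2f} cm (true value 3.2 cm)")
```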

  16. Experimental and computational validation of BDTPS using a heterogeneous boron phantom

    CERN Document Server

    Daquino, G G; Mazzini, M; Moss, R L; Muzi, L

    2004-01-01

    The idea to couple the treatment planning system (TPS) to the information on the real boron distribution in the patient acquired by positron emission tomography (PET) is the main added value of the new methodology set-up at DIMNP (Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione) of University of Pisa, in collaboration with the JRC (Joint Research Centre) at Petten (NL). This methodology has been implemented in a new TPS, called Boron Distribution Treatment Planning System (BDTPS), which takes into account the actual boron distribution in the patient's organ, as opposed to other TPSs used in BNCT that assume an ideal uniform boron distribution. BDTPS is based on the Monte Carlo technique and has been experimentally validated comparing the computed main parameters (thermal neutron flux, boron dose, etc.) to those measured during the irradiation of an ad hoc designed phantom (HEterogeneous BOron phanto M, HEBOM). The results are also in good agreement with those obtained by the standard TPS SER...

  17. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examinations of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater-level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not been previously reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  18. Zero-G experimental validation of a robotics-based inertia identification algorithm

    Science.gov (United States)

    Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou

    2010-04-01

    The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties [1]. Previous methods require firing known thrusts and then measuring the thrust and the resulting velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite. Thus, it requires no fuel usage and no force or acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However, its experimental validation is challenging because a 6-degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit will be discussed in addition to the flight data. This paper also introduces the design and development of an air-bearing based test used to partially validate the method, in addition to the approach used to obtain reference values for the test system's inertia parameters that can be used for comparison with the algorithm results.
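
The momentum-conservation principle behind the method can be cast as a linear least-squares problem: with no external torques, I·ω + h_robot stays constant, so samples of the body rate ω together with the known internal momentum h_robot of the robotic device constrain the six independent inertia components and the constant total momentum. The sketch below demonstrates this on synthetic data and is not the authors' implementation.

```python
import numpy as np

def omega_matrix(w):
    """Rows of I_sym @ w written as M(w) @ [Ixx, Iyy, Izz, Ixy, Ixz, Iyz]."""
    wx, wy, wz = w
    return np.array([[wx, 0., 0., wy, wz, 0.],
                     [0., wy, 0., wx, 0., wz],
                     [0., 0., wz, 0., wx, wy]])

# Synthetic "truth": spacecraft inertia and a constant total angular momentum
I_true = np.array([[12.0, 0.4, -0.2],
                   [0.4, 15.0, 0.6],
                   [-0.2, 0.6, 18.0]])      # kg m^2 (invented)
L_total = np.array([1.5, -0.8, 2.2])        # N m s  (invented)

rng = np.random.default_rng(2)
rows, rhs = [], []
for _ in range(20):
    h_robot = rng.uniform(-0.5, 0.5, 3)                  # known internal momentum of the robot pose
    w = np.linalg.solve(I_true, L_total - h_robot)       # resulting body rate (simulated here)
    w += 1e-4 * rng.standard_normal(3)                   # rate-gyro noise
    # Unknowns x = [Ixx, Iyy, Izz, Ixy, Ixz, Iyz, Lx, Ly, Lz]:  M(w) @ x_I - L = -h_robot
    rows.append(np.hstack([omega_matrix(w), -np.eye(3)]))
    rhs.append(-h_robot)

x, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
I_est = np.array([[x[0], x[3], x[4]],
                  [x[3], x[1], x[5]],
                  [x[4], x[5], x[2]]])
print("estimated inertia matrix:\n", np.round(I_est, 2))
```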

  19. Conception and validation software tools for the level 0 muon trigger of LHCb

    International Nuclear Information System (INIS)

    Aslanides, E.; Cachemiche, J. P.; Cogan, J.; Duval, P. Y.; Le Gac, R.; Hachon, F.; Leroy, O.; Liotard, P. L.; Marin, F.; Tsaregorodtsev, A.

    2009-01-01

    The Level-0 muon trigger processor of the LHCb experiment looks for straight particles crossing the muon detector and measures their transverse momentum. It processes 40×10^6 proton-proton collisions per second. The tracking uses a road algorithm relying on the projectivity of the muon detector (the logical layout in the 5 muon stations is projective in y to the interaction point, and it is also projective in x when the bending in the horizontal direction introduced by the magnetic field is ignored). The architecture of the Level-0 muon trigger is complex, with a dense network of data interconnections. The design and validation of such an intricate system has only been possible with intense use of software tools for the detector simulation, the modelling of the hardware components' behaviour and the validation. A database describing the data-flow is the cornerstone between the software and hardware components. (authors)

  20. Validation and reliability of the scale Self-efficacy and their child's level of asthma control

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Araújo Gomes

    Objective: To evaluate the psychometric properties, in terms of validity and reliability, of the scale Self-efficacy and their child's level of asthma control: Brazilian version. Method: Methodological study in which 216 parents/guardians of children with asthma participated. A construct validation (factor analysis and hypothesis testing by comparison of contrasted groups) and an analysis of reliability in terms of homogeneity (Cronbach's alpha) and stability (test-retest) were carried out. Results: Exploratory factor analysis proved suitable for the Brazilian version of the scale (Kaiser-Meyer-Olkin index of 0.879 and Bartlett's sphericity with p < 0.001). The correlation matrix in the factor analysis suggested the removal of item 7 from the scale. Cronbach's alpha of the final scale, with 16 items, was 0.92. Conclusion: The Brazilian version of Self-efficacy and their child's level of asthma control presented psychometric properties that confirmed its validity and reliability.
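
Cronbach's alpha, used above to assess homogeneity, is computed from the item-score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on invented Likert responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical answers of 6 parents to 4 of the scale items (1-5 Likert)
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4],
          [3, 2, 3, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```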

  1. Experimental validation for combustion analysis of GOTHIC 6.1b code in 2-dimensional premixed combustion experiments

    International Nuclear Information System (INIS)

    Lee, J. Y.; Lee, J. J.; Park, K. C.

    2003-01-01

    In this study, the prediction capability of the GOTHIC code for hydrogen combustion phenomena was validated against the results of a two-dimensional premixed hydrogen combustion experiment performed at Seoul National University. The experimental results confirmed the propagation characteristics of the hydrogen flame, such as the buoyancy effect and the flame front shape. The combustion time in the tests was about 0.1 s. In the GOTHIC analyses, the code could predict the overall hydrogen flame propagation characteristics, but the buoyancy effect and flame shape did not compare well with the experimental results. In particular, when the flame propagated toward the dead-end, GOTHIC predicted that the flame was not affected by the flow, which led to flame propagation quite different from that observed experimentally. Moreover, the combustion time in the analyses was about 1 s, ten times longer than in the experiment. To obtain more reasonable analysis results, the combustion model parameters in the GOTHIC code need to be applied appropriately, and the hydrogen flame characteristics need to be reflected in the governing equations.

  2. Neutronics experimental validation of the Jules Horowitz reactor fuel by interpretation of the VALMONT experimental program-transposition of the uncertainties on the reactivity of JHR with JEF2.2 and JEFF3.1.1

    International Nuclear Information System (INIS)

    Leray, O.; Hudelot, J.P.; Doederlein, C.; Vaglio-Gaudard, C.; Antony, M.; Santamarina, A.; Bernard, D.

    2012-01-01

    The new European material testing Jules Horowitz Reactor (JHR), currently under construction in the Cadarache center (CEA, France), will use LEU (20% enrichment in 235U) fuels (U3Si2 for the start-up and UMoAl in the future) which are quite different from the industrial oxide fuel, for which an extensive neutronics experimental validation database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-numerical validation-experimental validation methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has been performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility, using the oscillation technique, provides accurate measurements of the reactivity effect of samples. The VALMONT program includes oscillations of samples of UAl∞/Al and UMo/Al with enrichments ranging from 0.2% to 20% and uranium densities from 2.2 to 8 g/cm3. The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness of the neutron spectrum expected for the JHR. By comparing the effect of the sample with that of a known fuel specimen, the reactivity effect can be measured in absolute terms and compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The calculational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries and energy meshes that were tested is presented. The interpretation of the VALMONT experimental program allowed the experimental validation of JHR fuel UMoAl8 (with an enrichment of 19.75% 235U) by the Minerve …

  3. Computational Prediction and Rationalization, and Experimental Validation of Handedness Induction in Helical Aromatic Oligoamide Foldamers.

    Science.gov (United States)

    Liu, Zhiwei; Hu, Xiaobo; Abramyan, Ara M; Mészáros, Ádám; Csékei, Márton; Kotschy, András; Huc, Ivan; Pophristic, Vojislava

    2017-03-13

    Metadynamics simulations were used to describe the conformational energy landscapes of several helically folded aromatic quinoline carboxamide oligomers bearing a single chiral group at either the C or N terminus. The calculations allowed the prediction of whether a helix handedness bias occurs under the influence of the chiral group and gave insight into the interactions (sterics, electrostatics, hydrogen bonds) responsible for a particular helix sense preference. In the case of camphanyl-based and morpholine-based chiral groups, experimental data confirming the validity of the calculations were already available. New chiral groups with a proline residue were also investigated and were predicted to induce handedness. This prediction was verified experimentally through the synthesis of proline-containing monomers, their incorporation into an oligoamide sequence by solid phase synthesis and the investigation of handedness induction by NMR spectroscopy and circular dichroism. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Experimental Validation of UTDefect: Scattering in Anisotropic Media and Near-field Behavior

    International Nuclear Information System (INIS)

    Pecorari, Claudio

    2002-11-01

    Theoretical models that simulate measurements of ultrasonic waves undergoing scattering by material defects have been developed by Prof. Bostroem and co-workers at Chalmers Univ. of Tech. for a variety of experimental configurations and defects. A software program named UTDefect has been developed at the same time, which gathers the theoretical results obtained so far in a single package. A discussion of the motivations behind such an effort and details concerning UTDefect can be found in articles by Bostroem. Following an initial effort to validate some of the theoretical predictions available at the time, the present project has been conceived as a support to the on-going theoretical work. In fact, the goal of the project described in this report has been the experimental validation of two aspects of the above theory that have not yet been tested: the scattering of a finite ultrasonic beam by a surface-breaking crack in an anisotropic medium, and an improved model of the behaviour of a finite ultrasonic beam in the near-field region of the source. In the latter case, the supporting medium is assumed to be isotropic. To carry out the first task, a single-crystal silicon sample was employed. A surface-breaking notch with a depth of approximately 1.8 mm was introduced by means of a wire-cutting saw to simulate a scattering defect. Two kinds of measurements were performed on this sample. The first one considered the signal amplitude as a function of the transducer position. To this end, three wedges generating beams propagating in different directions were used. The second series of measurements concerned the frequency content of the backscattered signals at the position where the amplitude was maximum. All three wedges mentioned above were used also in this part of the work. The experimental results were compared to the values of the physical quantities of interest as predicted by UTDefect, with the only difference that UTDefect was run for a sub-surface rectangular …

  5. Experimental Study of the Twin Turbulent Water Jets Using Laser Doppler Anemometry for Validating Numerical Models

    International Nuclear Information System (INIS)

    Wang Huhu; Lee Saya; Hassan, Yassin A.; Ruggles, Arthur E.

    2014-01-01

    The design of next generation (Gen. IV) high-temperature nuclear reactors, including gas-cooled and sodium-cooled ones, involves extensive numerical work, especially Computational Fluid Dynamics (CFD) simulations. The high cost of large-scale experiments and the inherent uncertainties in the turbulence models and wall functions of any CFD code solving the Reynolds-averaged Navier-Stokes (RANS) equations necessitate high-spatial-resolution experimental data sets for benchmarking the simulation results. In Gen. IV conceptual reactors, the high-temperature flows mix in the upper plenum before entering the secondary cooling system. The mixing condition should be accurately estimated and fully understood, as it is related to the thermal stresses induced in the upper plenum and the magnitudes of output power oscillations due to any changes of primary coolant temperature. The purpose of this study is to use the Laser Doppler Anemometry (LDA) technique to measure the flow field of two submerged parallel jets issuing from two rectangular channels. The LDA data sets can be used to validate the corresponding simulation results. The jets studied in this work were at room temperature. The turbulence characteristics, including the distributions of mean velocities, turbulence intensities and Reynolds stresses, were studied. Uncertainty analysis was also performed to study the errors involved in this experiment. The experimental results in this work are valid for benchmarking any steady-state numerical simulations using turbulence models to solve the RANS equations. (author)
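
The turbulence quantities listed above reduce to sample statistics of the LDA velocity records: mean velocities, fluctuation RMS (turbulence intensity when normalised by the mean) and the Reynolds shear stress as the covariance of the fluctuating components. The sketch below uses synthetic samples; real LDA processing would typically also apply transit-time weighting to correct velocity bias, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
u = 1.20 + 0.12 * rng.standard_normal(n)                          # streamwise velocity samples [m/s]
v = 0.05 - 0.4 * (u - 1.20) + 0.06 * rng.standard_normal(n)       # correlated cross-stream samples [m/s]

u_mean, v_mean = u.mean(), v.mean()
u_fluct, v_fluct = u - u_mean, v - v_mean

turb_intensity = u_fluct.std(ddof=1) / u_mean        # streamwise turbulence intensity
reynolds_shear = -np.mean(u_fluct * v_fluct)         # -<u'v'>, kinematic Reynolds shear stress

print(f"U = {u_mean:.3f} m/s, Tu = {100 * turb_intensity:.1f} %, -<u'v'> = {reynolds_shear:.2e} m^2/s^2")
```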

  6. Experimental and numerical study of light gas dispersion in a ventilated room

    Energy Technology Data Exchange (ETDEWEB)

    Gelain, Thomas, E-mail: thomas.gelain@irsn.fr; Prévost, Corinne

    2015-11-15

    Highlights: • Presentation of many experimental local data for different configurations. • Highlight of the influence of numerical parameters used in the CFD code. • Validation of the CFD code ANSYS CFX on the basis of experimental data. - Abstract: The objective of this study is to validate the ANSYS CFX version 12 computational code on the basis of light gas dispersion tests performed in two ventilated rooms. It follows an initial study on heavy gas dispersion carried out by Ricciardi et al. (2008). First, a study of sensitivity to various numerical parameters allows a set of reference data to be developed and the influence of the numerical scheme of advection to be revealed. Second, two helium (simulating hydrogen) dispersion test grids are simulated for the two rooms studied, and the results of the calculations are compared with experimental results. The very good agreement between these results allows the code and its dataset to be validated for this application. In future, a study with higher levels of helium (on the order of 4% vol at equilibrium) is envisaged in the context of safety analyses related to the hydrogen risk, these levels representing the lower explosive limit (LEL) of hydrogen.

  7. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
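
    The Monte Carlo construction of the comparison-error covariance matrix described above can be illustrated in a few lines. The sketch below (Python/NumPy) propagates assumed random (uncorrelated) and systematic (fully correlated) uncertainties for three quantities of interest, estimates the covariance matrix from the samples, and performs a 95% chi-square consistency check; the uncertainty magnitudes are invented and the procedure is only a schematic of the general idea, not the authors' exact method.

import numpy as np

rng = np.random.default_rng(0)
n_mc = 20000

# Nominal comparison error E = D - S for three quantities of interest (assumed values)
E_nom = np.array([0.8, -0.3, 0.5])

# Assumed 1-sigma random (uncorrelated) and systematic (fully correlated) uncertainties
sig_rand = np.array([0.4, 0.3, 0.5])
sig_syst = np.array([0.6, 0.6, 0.6])

samples = np.empty((n_mc, 3))
for i in range(n_mc):
    rand = rng.normal(0.0, sig_rand)          # independent draw per quantity
    syst = rng.normal(0.0, 1.0) * sig_syst    # one draw shared by all quantities
    samples[i] = E_nom + rand + syst

cov = np.cov(samples, rowvar=False)            # covariance matrix of the comparison error

# 95% check: is zero comparison error inside the constant-probability contour?
d2 = E_nom @ np.linalg.inv(cov) @ E_nom        # Mahalanobis distance of the origin
chi2_95_3dof = 7.815                           # chi-square 95% quantile, 3 degrees of freedom
print("covariance matrix:\n", cov)
print("model consistent with data at 95%:", d2 <= chi2_95_3dof)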

  8. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  9. Systematic validation of predicted microRNAs for cyclin D1

    International Nuclear Information System (INIS)

    Jiang, Qiong; Feng, Ming-Guang; Mo, Yin-Yuan

    2009-01-01

    MicroRNAs are endogenous small non-coding RNA molecules capable of silencing protein-coding genes at the posttranscriptional level. Based on computer-aided predictions, a single microRNA could have over a hundred targets. On the other hand, a single protein-coding gene could be targeted by many potential microRNAs. However, only a relatively small number of these predicted microRNA/mRNA interactions have been experimentally validated, and no systematic validation has been carried out using a reporter system. In this study, we used luciferase reporter assays to validate microRNAs that can silence cyclin D1 (CCND1), because CCND1 is a well-known proto-oncogene implicated in a variety of cancers. We chose miRanda (http://www.microRNA.org) as a primary prediction method. We then cloned 51 of 58 predicted microRNA precursors into pCDH-CMV-MCS-EF1-copGFP and tested their effect on the luciferase reporter carrying the 3'-untranslated region (UTR) of the CCND1 gene. Real-time PCR revealed that 45 of 51 cloned microRNA precursors expressed a relatively high level of the exogenous microRNAs, and these were used in our validation experiments. Using an arbitrary cutoff of 35% reduction, we identified 7 microRNAs that were able to suppress Luc-CCND1-UTR activity. Among them, 4 were previously validated targets and the remaining 3 were newly validated in this study. Of interest, we found that miR-503 not only suppressed the luciferase activity but also suppressed endogenous CCND1 at both the protein and mRNA levels. Furthermore, we showed that miR-503 was able to reduce the S-phase cell population and caused cell growth inhibition, suggesting that miR-503 may be a putative tumor suppressor. This study provides a more comprehensive picture of microRNA/CCND1 interactions and further demonstrates the importance of experimental target validation.
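
    A minimal sketch of the screening criterion described above (Python, with placeholder numbers only): predicted microRNAs are retained when they reduce the normalised Luc-CCND1-UTR reporter activity by at least the 35% cutoff.

# Hypothetical normalised reporter activities (empty-vector control = 1.0);
# the values below are illustrative placeholders, not data from the study.
reporter_activity = {
    "miR-503": 0.42,
    "miR-19a": 0.61,
    "miR-155": 0.95,
}

CUTOFF = 0.35  # the arbitrary 35% reduction threshold mentioned in the abstract

validated = [mir for mir, act in reporter_activity.items() if (1.0 - act) >= CUTOFF]
print("microRNAs passing the cutoff:", validated)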

  10. Development and experimental validation of a tool to determine out-of-field dose in radiotherapy

    International Nuclear Information System (INIS)

    Bessieres, I.

    2013-01-01

    Over the last two decades, many technical developments have been achieved in intensity modulated radiotherapy (IMRT); they allow better conformation of the dose to the tumor and consequently increase the success of cancer treatments. These techniques often reduce the dose to organs at risk close to the target volume; nevertheless, they increase peripheral dose levels. In this situation, the rising survival rate also increases the probability that secondary effects caused by peripheral dose deposition (second cancers, for instance) will be expressed. Nowadays, the peripheral dose is not taken into account during treatment planning and no reliable prediction tool exists. However, it becomes crucial to consider the peripheral dose during planning, especially for pediatric cases. Many steps in the development of an accurate and fast Monte Carlo out-of-field dose prediction tool based on the PENELOPE code have been achieved during this PhD work. To this end, we demonstrated the ability of the PENELOPE code to estimate the peripheral dose by comparing its results with reference measurements performed on two experimental configurations (metrological and pre-clinical). During this experimental work, we defined a protocol for low-dose measurements with OSL dosimeters. In parallel, we highlighted the slow convergence of the code for clinical use. Consequently, we accelerated the code by implementing a new variance reduction technique called pseudo-deterministic transport, which is specifically aimed at improving calculations in areas far away from the beam. This step improved the efficiency of the peripheral dose estimation in both validation configurations (by a factor of 20), bringing the computing times closer to what is reasonable for clinical application. Further optimization work must be carried out to improve the convergence of the tool before final clinical use. (author) [fr]

  11. Validation and Comparison of a Model of the Effect of Sea-Level Rise on Coastal Wetlands.

    Science.gov (United States)

    Mogensen, Laura A; Rogers, Kerrylee

    2018-01-22

    Models are used to project coastal wetland distribution under future sea-level rise scenarios to assist decision-making. Model validation and comparison were used to investigate error and uncertainty in the Sea Level Affecting Marshes Model, a readily available model with minimal validation, particularly for wetlands beyond North America. Accurate parameterisation is required to improve the performance of the model, and indeed of any spatial model. Consideration of tidal attenuation further enhances model performance, particularly for coastal wetlands located within estuaries along wave-dominated coastlines. The model does not simulate vegetation changes that are known to occur, particularly when sedimentation exceeds the rate of sea-level rise, resulting in shoreline progradation. Model performance was reasonable over decadal timescales, decreasing as the time-scale of retrospection increased due to compounding of errors. Comparison with other deterministic models showed reasonable agreement by 2100. However, given the uncertainty of the future and the unpredictable nature of coastal wetlands, it is difficult to ascertain which model could be realistic enough to meet its intended purpose. Model validation and comparison are useful for assessing model efficacy and parameterisation, and should be applied before application of any spatially explicit model of coastal wetland response to sea-level rise.

  12. Numerical simulation and experimental validation of internal heat exchanger influence on CO{sub 2} trans-critical cycle performance

    Energy Technology Data Exchange (ETDEWEB)

    Rigola, Joaquim; Ablanque, Nicolas; Perez-Segarra, Carlos D.; Oliva, Assensi [Centre Tecnologic de Transferencia de Calor (CTTC), Universitat Politecnica de Catalunya (UPC), ETSEIAT, C. Colom 11, 08222 Terrassa (Barcelona) (Spain)

    2010-06-15

    The present paper is a numerical and experimental comparative study of the whole vapour compression refrigerating cycle in general, and of reciprocating compressors in particular, with the aim of showing the possibilities that CO{sub 2} offers for commercial refrigeration, considering a single-stage trans-critical cycle using semi-hermetic reciprocating compressors in small cooling capacity systems. The present work is focused on the influence of an internal heat exchanger (IHX) on the cycle performance under real working conditions. In order to validate the numerical results, an experimental unit specially designed to analyze trans-critical refrigerating equipment with an IHX has been built. The numerical results and experimental data show reasonably good agreement, and the comparison of global values confirms the improvement in cooling capacity and COP when an IHX is included in the CO{sub 2} trans-critical cycle. (author)

  13. Experimental Validation of Stratified Flow Phenomena, Graphite Oxidation, and Mitigation Strategies of Air Ingress Accidents

    Energy Technology Data Exchange (ETDEWEB)

    Chang Ho Oh; Eung Soo Kim; Hee Cheon No; Nam Zin Cho

    2008-12-01

    The US Department of Energy is performing research and development (R&D) that focuses on key phenomena that are important during challenging scenarios that may occur in the Next Generation Nuclear Plant (NGNP) Program / GEN-IV Very High Temperature Reactor (VHTR). Phenomena identification and ranking studies (PIRT) to date have identified the air ingress event, following on the heels of a VHTR depressurization, as very important (Schultz et al., 2006). Consequently, the development of advanced air ingress-related models and verification and validation (V&V) are very high priorities for the NGNP program. Following a loss of coolant and system depressurization, air will enter the core through the break. Air ingress leads to oxidation of the in-core graphite structure and fuel. The oxidation will accelerate heat-up of the bottom reflector and the reactor core and will eventually cause the release of fission products. The potential collapse of the bottom reflector because of burn-off and the release of CO lead to serious safety problems. To estimate the proper safety margin we need experimental data and tools, including accurate multi-dimensional thermal-hydraulic and reactor physics models, a burn-off model, and a fracture model. We also need to develop effective strategies to mitigate the effects of oxidation. The results from this research will provide crucial inputs to the INL NGNP/VHTR Methods R&D project. This project is focused on (a) analytical and experimental study of air ingress caused by density-driven, stratified, countercurrent flow, (b) advanced graphite oxidation experiments, (c) experimental study of burn-off in the bottom reflector, (d) structural tests of the burnt-off bottom reflector, (e) implementation of advanced models developed during the previous tasks into the GAMMA code, (f) full air ingress and oxidation mitigation analyses, (g) development of core neutronic models, (h) coupling of the core neutronic and thermal hydraulic models, and (i

  14. Numerical Validation of a Vortex Model against Experimental Data on a Straight-Bladed Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Eduard Dyachuk

    2015-10-01

    Cyclic blade motion during operation of vertical axis wind turbines (VAWTs) imposes challenges on simulation models of VAWT aerodynamics. A two-dimensional vortex model is validated against new experimental data from a 12-kW straight-bladed VAWT operated at an open site. The results for the normal force on one blade are analyzed. The model is assessed against the measured data over a wide range of tip speed ratios, from 1.8 to 4.6. The predicted results within one revolution have a shape and magnitude similar to the measured data, although the model does not reproduce every detail of the experimental data. The present model can be used when dimensioning the turbine for maximum loads.

  15. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    International Nuclear Information System (INIS)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-01-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 give, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including the efficiency of the heat interceptions.
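
    For context, converting a measured helium vaporisation rate into a heat load is a one-line energy balance, as in the hedged sketch below (Python); the latent heat, boil-off rate and correction term are assumed round numbers, not the String 2 measurements.

# Rough boil-off calorimetry sketch; all numbers are assumptions, not String 2 data.
latent_heat = 23.0e3       # J/kg, approximate latent heat of vaporisation of helium near 2 K
boiloff_rate = 5.0 / 3600  # kg/s, assumed measured vaporisation rate (5 kg/h)
supplementary = 10.0       # W, assumed supplementary heat loads and end effects to subtract

heat_load_cell = boiloff_rate * latent_heat - supplementary
print(f"estimated thermal load per 107 m cell: {heat_load_cell:.1f} W")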

  16. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    Science.gov (United States)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-06-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 give, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including the efficiency of the heat interceptions.

  17. Absorber and regenerator models for liquid desiccant air conditioning systems. Validation and comparison using experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Krause, M.; Heinzen, R.; Jordan, U.; Vajen, K. [Kassel Univ., Inst. of Thermal Engineering, Kassel (Germany); Saman, W.; Halawa, E. [Sustainable Energy Centre, Univ. of South Australia, Mawson Lakes, Adelaide (Australia)

    2008-07-01

    Solar assisted air conditioning systems using liquid desiccants represent a promising option to decrease the high summer energy demand caused by electrically driven vapor compression machines. The main components of liquid desiccant systems are absorbers for dehumidifying and cooling of supply air and regenerators for concentrating the desiccant. However, highly efficient, validated and reliable components are required, and the design and operation have to be adjusted to each building design, location, and user demand. Simulation tools can help to optimize component and system design. The present paper presents newly developed numerical models for absorbers and regenerators, as well as experimental data from a regenerator prototype. The models have been compared with a finite-difference model as well as experimental data, obtained from the regenerator prototype presented here and from an absorber described in the literature. (orig.)

  18. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

    This work analyses the overall stress/strain characteristics of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of the multilayer thin-film stack under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed and validated against related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plane, which is regarded as a key design parameter for minimizing the stress impact on the OLED devices of concern, is obtained using the present methodology. The results point out that both the thickness and the mechanical properties of the cover plate help determine the NA location. In addition, several concave and convex bending radii are applied to examine the reliable mechanical tolerance and to provide insight into the estimated reliability of foldable OLED encapsulations. (paper)
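
    The neutral-axis position referred to above follows, in classical laminate bending theory, from the modulus-weighted centroid of the layer stack. The sketch below (Python) evaluates that textbook relation for an invented cover-plate/thin-film/substrate stack; the layer thicknesses and moduli are placeholders, not the values used in the paper.

# Layers listed from bottom (substrate) to top (cover plate); thickness t in um, modulus E in GPa.
# All values are illustrative placeholders.
layers = [
    ("plastic substrate", 125.0, 4.0),
    ("OLED thin-film stack", 2.0, 80.0),
    ("cover plate", 100.0, 70.0),
]

z = 0.0
num = den = 0.0
for name, t, E in layers:
    z_mid = z + t / 2.0   # mid-plane height of this layer above the stack bottom
    num += E * t * z_mid  # modulus-weighted first moment (per unit width)
    den += E * t          # modulus-weighted area (per unit width)
    z += t

z_na = num / den
print(f"neutral axis at {z_na:.1f} um above the substrate bottom (total stack {z:.1f} um)")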

  19. Experimentally Manipulating Items Informs on the (Limited) Construct and Criterion Validity of the Humor Styles Questionnaire

    Directory of Open Access Journals (Sweden)

    Willibald Ruch

    2017-04-01

    How strongly does humor (i.e., the construct-relevant content) in the Humor Styles Questionnaire (HSQ; Martin et al., 2003) determine the responses to this measure (i.e., construct validity)? Also, how much does humor influence the relationships of the four HSQ scales, namely affiliative, self-enhancing, aggressive, and self-defeating, with personality traits and subjective well-being (i.e., criterion validity)? The present paper answers these two questions by experimentally manipulating the 32 items of the HSQ to only (or mostly) contain humor (i.e., construct-relevant content) or to substitute the humor content with non-humorous alternatives (i.e., only assessing construct-irrelevant context). Study 1 (N = 187) showed that the HSQ affiliative scale was mainly determined by humor, self-enhancing and aggressive were determined by both humor and non-humorous context, and self-defeating was primarily determined by the context. This suggests that humor is not the primary source of the variance in three of the HSQ scales, thereby limiting their construct validity. Study 2 (N = 261) showed that the relationships of the HSQ scales to the Big Five personality traits and subjective well-being (positive affect, negative affect, and life satisfaction) were consistently reduced (personality) or vanished (subjective well-being) when the non-humorous contexts in the HSQ items were controlled for. For the HSQ self-defeating scale, the pattern of relationships to personality was also altered, supporting a positive rather than a negative view of the humor in this humor style. The present findings thus call for a reevaluation of the role that humor plays in the HSQ (construct validity) and in the relationships to personality and well-being (criterion validity).

  20. Experimentally Manipulating Items Informs on the (Limited) Construct and Criterion Validity of the Humor Styles Questionnaire.

    Science.gov (United States)

    Ruch, Willibald; Heintz, Sonja

    2017-01-01

    How strongly does humor (i.e., the construct-relevant content) in the Humor Styles Questionnaire (HSQ; Martin et al., 2003) determine the responses to this measure (i.e., construct validity)? Also, how much does humor influence the relationships of the four HSQ scales, namely affiliative, self-enhancing, aggressive, and self-defeating, with personality traits and subjective well-being (i.e., criterion validity)? The present paper answers these two questions by experimentally manipulating the 32 items of the HSQ to only (or mostly) contain humor (i.e., construct-relevant content) or to substitute the humor content with non-humorous alternatives (i.e., only assessing construct-irrelevant context). Study 1 (N = 187) showed that the HSQ affiliative scale was mainly determined by humor, self-enhancing and aggressive were determined by both humor and non-humorous context, and self-defeating was primarily determined by the context. This suggests that humor is not the primary source of the variance in three of the HSQ scales, thereby limiting their construct validity. Study 2 (N = 261) showed that the relationships of the HSQ scales to the Big Five personality traits and subjective well-being (positive affect, negative affect, and life satisfaction) were consistently reduced (personality) or vanished (subjective well-being) when the non-humorous contexts in the HSQ items were controlled for. For the HSQ self-defeating scale, the pattern of relationships to personality was also altered, supporting a positive rather than a negative view of the humor in this humor style. The present findings thus call for a reevaluation of the role that humor plays in the HSQ (construct validity) and in the relationships to personality and well-being (criterion validity).

  1. Interaction of 1.319 μm laser with skin: an optical-thermal-damage model and experimental validation

    Science.gov (United States)

    Jiao, Luguang; Yang, Zaifu; Wang, Jiarui

    2014-09-01

    With the widespread use of high-power laser systems operating in the wavelength region of approximately 1.3 to 1.4 μm, it has become necessary to refine the laser safety guidelines setting the exposure limits for the eye and skin. In this paper, an optical-thermal-damage model was developed to simulate laser propagation, energy deposition, heat transfer and thermal damage in the skin for 1.319 μm laser irradiation. Meanwhile, an in vitro experiment was conducted to measure the temperature history of a porcine skin specimen irradiated by a 1.319 μm laser. Predictions from the model included the light distribution in the skin, the temperature response and the thermal damage level of the tissue. It was shown that the light distribution region was much larger than that of the incident laser beam at the wavelength of 1.319 μm, and that the maximum fluence rate was located in the interior of the skin, not on the surface. The calculated temperature curve agreed well with the experimentally recorded temperature data, which validated the numerical model. The model also indicated that the damage integral changed little while the temperature of the skin tissue was below about 55 °C; above that, the integral increased rapidly and denaturation of the tissue would occur. Based on this model, we can further explore the damage mechanisms and trends for the skin and eye within the wavelength region of 1.3 to 1.4 μm, in combination with in vivo experimental investigations.
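
    The 'damage integral' mentioned above is conventionally the Arrhenius integral Omega = integral of A*exp(-Ea/(R*T(t))) dt. The sketch below (Python) evaluates it for an assumed temperature history using generic Henriques-type coefficients for skin; neither the coefficients nor the temperature trace are taken from this paper.

import numpy as np

# Assumed Arrhenius coefficients for thermal damage of skin (illustrative literature-style values)
A = 3.1e98    # 1/s, frequency factor
Ea = 6.28e5   # J/mol, activation energy
R = 8.314     # J/(mol K)

# Assumed temperature history: ramp from 37 C to 60 C over 5 s, then hold for 5 s
t = np.linspace(0.0, 10.0, 2001)
T = 310.15 + np.where(t < 5.0, (333.15 - 310.15) * t / 5.0, 333.15 - 310.15)

# Arrhenius damage integral; Omega >= 1 is commonly taken as irreversible damage
omega = np.trapz(A * np.exp(-Ea / (R * T)), t)
print(f"damage integral Omega = {omega:.2f}")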

  2. Validations and improvements of airfoil trailing-edge noise prediction models using detailed experimental data

    DEFF Research Database (Denmark)

    Kamruzzaman, M.; Lutz, Th.; Würz, W.

    2012-01-01

    This paper describes an extensive assessment and a step-by-step validation of different turbulent boundary-layer trailing-edge noise prediction schemes developed within the European Union funded wind energy project UpWind. To validate the prediction models, measurements of turbulent boundary-layer properties such as two-point turbulent velocity correlations, the spectra of the associated wall pressure fluctuations and the emitted trailing-edge far-field noise were performed in the laminar wind tunnel of the Institute of Aerodynamics and Gas Dynamics, University of Stuttgart. The measurements were carried out for a NACA 643-418 airfoil, at Re = 2.5 × 10^6 and angles of attack of −6° to 6°. Numerical results of the different prediction schemes are extensively validated and discussed. The investigations on the TNO-Blake noise prediction model show that the numerical wall pressure fluctuation ... with measurements in the frequency region higher than 1 kHz, whereas they over-predict the sound pressure level in the low-frequency region. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Experimental Equipment Validation for Methane (CH4) and Carbon Dioxide (CO2) Hydrates

    Science.gov (United States)

    Saad Khan, Muhammad; Yaqub, Sana; Manner, Naathiya; Ani Karthwathi, Nur; Qasim, Ali; Mellon, Nurhayati Binti; Lal, Bhajan

    2018-04-01

    Clathrate hydrates are well-known structures regarded as a threat to the gas and oil industry because of their propensity to plug subsea pipelines. In natural gas transmission and processing, gas hydrate formation is one of the main flow assurance problems and has led researchers to conduct fresh and meticulous studies on various aspects of gas hydrates. This paper presents a thermodynamic analysis of pure CH4 and CO2 gas hydrates performed on custom-fabricated equipment (a sapphire-cell hydrate reactor) for experimental validation. CO2 gas hydrate formed at a lower pressure (41 bar) than CH4 gas hydrate (70 bar), and a comparison of the thermodynamic properties of CH4 and CO2 hydrates is also presented in this study. This preliminary study could provide pathways for the search for potent hydrate inhibitors.

  4. Experimental Validation of Surrogate Models for Predicting the Draping of Physical Interpolating Surfaces

    DEFF Research Database (Denmark)

    Christensen, Esben Toke; Lund, Erik; Lindgaard, Esben

    2018-01-01

    This paper concerns the experimental validation of two surrogate models through a benchmark study involving two different variable shape mould prototype systems. The surrogate models in question are different methods based on kriging and proper orthogonal decomposition (POD), which were developed ... hypercube approach. This sampling method allows for generating a space-filling and high-quality sample plan that respects mechanical constraints of the variable shape mould systems. Through the benchmark study, it is found that mechanical freeplay in the modeled system is severely detrimental to the performance of the studied surrogate models. By comparing surrogate model performance for the two variable shape mould systems, and through a numerical study involving simple finite element models, the underlying cause of this effect is explained. It is concluded that for a variable shape mould prototype ...
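
    The constrained space-filling sampling mentioned above (presumably a Latin-hypercube-type plan) can be illustrated with SciPy's quasi-Monte Carlo module. The sketch below generates a plan in an assumed two-actuator design space and simply rejects points violating a made-up mechanical constraint; this is a much cruder scheme than the one developed in the paper and is shown only to fix ideas.

import numpy as np
from scipy.stats import qmc

# Assumed design space: two actuator strokes in millimetres
lower, upper = [0.0, 0.0], [50.0, 50.0]

sampler = qmc.LatinHypercube(d=2, seed=1)
unit_samples = sampler.random(n=200)          # space-filling samples in the unit square
samples = qmc.scale(unit_samples, lower, upper)

# Made-up mechanical constraint: adjacent actuators may differ by at most 20 mm
feasible = samples[np.abs(samples[:, 0] - samples[:, 1]) <= 20.0]
print(f"{len(feasible)} of {len(samples)} sample points satisfy the constraint")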

  5. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    International Nuclear Information System (INIS)

    Testa, M.; Schümann, J.; Lu, H.-M.; Paganetti, H.; Shin, J.; Faddegon, B.; Perl, J.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program of our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness, symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse-square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS’ capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the

  6. Experimental study of vascular embolization with homemade second-level Copper coil

    International Nuclear Information System (INIS)

    Jiang Hua; Wang Jiaping; Li Yingchun; Tong Yuyun; Yang Qing; Yan Dong; Ding Lili; Yuan Shuguang

    2013-01-01

    Objective: To evaluate the embolic effect of a homemade copper coil in rabbits. Methods: Seventeen New Zealand big-ear rabbits were included in this study. After conventional anesthesia, one common carotid artery or subclavian artery was embolized with second-level copper-coated platinum micro-coils (experimental group) through a 3F catheter, and the other common carotid artery or subclavian artery was embolized with second-level platinum micro-coils (control group) as a control. Angiography was performed to observe the extent of vascular occlusion 10 min, 30 min, 3 d, 1 w, 2 w, 4 w, 6 w, and 12 w after embolization. The rabbits were sacrificed to observe thrombosis and pathological changes of the embolized artery 3 d, 1 w, 2 w, 4 w, 6 w and 12 w after embolization. Vascular occlusion and thrombosis were compared between the experimental group and the control group using the exact probability method and the rank sum test for statistical analysis. Results: The embolization experiment was successfully implemented in 15 of 17 rabbits. Twenty-one second-level copper-coated platinum micro-coils were used in the experimental group, while 19 second-level platinum micro-coils were used in the control group. Ten min and 30 min after embolization, angiography showed that the vascular embolization effect was not significantly different between the two groups. The vascular embolization effect of the experimental group was superior to that of the control group 3 d, 1, 2, 4, 6 and 12 w after embolization (P < 0.05). Pathological examination showed that there were many blood clots around the copper coil and in the proximal and distal arterial lumen, whereas only a small amount of blood clot was found around the platinum coil in the control group. At every observation time point, thrombosis was more severe in the experimental group than in the control group (P < 0.05). Conclusion: The second-level copper-coated coil can be released through a 4F catheter to embolize the vessel, showing good physical

  7. Optimization of instrumental neutron activation analysis method by means of 2k experimental design technique aiming the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to the detector. The multi-element standard concentration (comparator standard), the mass of the sample and the irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods at the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). The optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
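
    As a reminder of how main and interaction effects are extracted from a 2^k design, the sketch below (Python) evaluates a full 2^3 factorial for the three variables mentioned above (decay time, counting time, sample-to-detector distance) with invented mass-fraction results; the numbers are placeholders, not the laboratory's data.

import itertools
import numpy as np

# Coded levels (-1/+1) for decay time, counting time, distance to detector, in standard order
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
labels = ["decay", "count", "dist"]

# Hypothetical measured mass fractions (mg/kg) for the 8 runs
y = np.array([10.2, 10.6, 10.1, 10.8, 9.9, 10.5, 10.3, 10.9])

# Main effects: mean response at the high level minus mean response at the low level
for j, name in enumerate(labels):
    effect = y[runs[:, j] > 0].mean() - y[runs[:, j] < 0].mean()
    print(f"main effect of {name:5s}: {effect:+.3f}")

# Two-factor interaction effects from the product of the coded columns
for a, b in itertools.combinations(range(3), 2):
    sign = runs[:, a] * runs[:, b]
    effect = y[sign > 0].mean() - y[sign < 0].mean()
    print(f"interaction {labels[a]} x {labels[b]}: {effect:+.3f}")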

  8. Validation of the Predicted Circumferential and Radial Mode Sound Power Levels in the Inlet and Exhaust Ducts of a Fan Ingesting Distorted Inflow

    Science.gov (United States)

    Koch, L. Danielle

    2012-01-01

    Fan inflow distortion tone noise has been studied computationally and experimentally. Data from two experiments in the NASA Glenn Advanced Noise Control Fan rig have been used to validate acoustic predictions. The inflow to the fan was distorted by cylindrical rods inserted radially into the inlet duct one rotor chord length upstream of the fan. The rods were arranged in both symmetric and asymmetric circumferential patterns. In-duct and far-field sound pressure level measurements were recorded. It was found that for positive circumferential modes, the measured circumferential mode sound power levels in the exhaust duct were greater than those in the inlet duct, while for negative circumferential modes, the measured total circumferential mode sound power levels in the exhaust were less than those in the inlet. Predicted trends in overall sound power level proved useful in identifying circumferentially asymmetric distortion patterns that reduce overall inlet distortion tone noise compared to symmetric arrangements of rods. Detailed comparisons between the measured and predicted radial mode sound power in the inlet and exhaust ducts indicate limitations of the theory.

  9. The effect of preferred music genre selection versus preferred song selection on experimentally induced anxiety levels.

    Science.gov (United States)

    Walworth, Darcy DeLoach

    2003-01-01

    The purpose of this study was to investigate differences in experimentally induced anxiety levels among subjects listening to no music (n = 30), subjects listening to music selected by the experimenter from the subject's preferred genre or artist listed as relaxing (n = 30), and subjects listening to a specific song they listed as relaxing (n = 30). Subjects were 90 individuals, male and female, randomly assigned to one of the three groups mentioned above. Subjects in either music group filled out a questionnaire prior to participating in the study indicating their preference of music used for relaxation purposes. Subjects in Experimental Group 1 marked their preferred genres and/or artists, and subjects in Experimental Group 2 marked specific songs used for relaxation purposes. While the experimenter hypothesized that subjects in Experimental Group 2 would show less anxiety than both the control group and Experimental Group 1, no significant differences were found between the two music groups in the anxiety levels reached. However, there was a statistically significant difference between the no-music control group and both music groups in the anxiety level reached by subjects. Subjects listening to music, whether songs chosen by the experimenter or subject-selected songs, showed significantly less anxiety than subjects not listening to music.

  10. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process and the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE to testing two fault-tolerant systems with very dissimilar features, and the use of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author) [fr]

  11. Analysis of progressive distorsion. Validation of the method based on effective primary stress. Discussion of Anderson's experimental data

    International Nuclear Information System (INIS)

    Moulin, Didier.

    1981-02-01

    An empirical rule usable for design by analysis against progressive distortion has been set up from experiments conducted at C.E.N. Saclay. This rule is checked against experimental data obtained by W.F. Anderson; this experiment is sufficiently different from the Saclay one to evaluate the merits of the rule. The satisfactory results achieved are a further validation of the efficiency diagram on which the method is based [fr

  12. CFD simulation of a burner for syngas characterization and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Fantozzi, Francesco; Desideri, Umberto [University of Perugia (Italy). Dept. of Industrial Engineering], Emails: fanto@unipg.it, umberto.desideri@unipg.it; D' Amico, Michele [University of Perugia (Italy). Dept. of Energetic Engineering], E-mail: damico@crbnet.it

    2009-07-01

    Biomass and waste are distributed and renewable energy sources that may contribute effectively to sustainability if used on a small and micro scale. This requires their transformation, through efficient technologies (gasification, pyrolysis and anaerobic digestion), into a suitable gaseous fuel for use in small internal combustion engines and gas turbines. The characterization of biomass-derived syngas during combustion is therefore a key issue for improving the performance of small-scale integrated plants, because synthesis gas shows significant differences with respect to natural gas (it is a mixture of gases with a low calorific value and a non-negligible hydrogen, tar and particulate content) that may lead to ignition problems, combustion instabilities, difficulties in emission control and fouling. To this aim, a burner for syngas combustion and LHV measurement through mass and energy balances was realized and connected to the rotary-kiln laboratory-scale pyrolyzer at the Department of Industrial Engineering of the University of Perugia. A computational fluid dynamics (CFD) simulation of the burner was carried out considering the combustion of propane, to investigate the temperature and pressure distribution, the heat transmission and the distribution of the combustion products and by-products. The simulation was carried out using the CFD program Star-CD. Before the simulation, a geometrical model of the burner was built and its volume was subdivided into cells. A sensitivity analysis of the mesh was carried out to estimate the approximation level of the model. Experimental combustion emission data were acquired with propane combustion in the burner, and the comparison between numerical results and experimental data was used to validate the simulation for future work on the combustion of treated or raw syngas (syngas with tar) obtained from the pyrolysis process. (author)

  13. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers.

    Science.gov (United States)

    White, M J; Nellis, G F; Kelin, S A; Zhu, W; Gianchandani, Y

    2010-11-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid.

  14. Retrieval of Droplet size Density Distribution from Multiple field of view Cross polarized Lidar Signals: Theory and Experimental Validation

    Science.gov (United States)

    2016-06-02

    Gilles Roy, Luc Bissonnette, Christian Bastille, and Gilles Vallee. Multiple-field-of-view (MFOV) secondary-polarization lidar signals are used to ... use secondary polarization. A mathematical relation among the PSD, the lidar fields of view, the scattering angles, and the angular depolarization

  15. An experimentally validated model for geometrically nonlinear plucking-based frequency up-conversion in energy harvesting

    Science.gov (United States)

    Kathpalia, B.; Tan, D.; Stern, I.; Erturk, A.

    2018-01-01

    It is well known that plucking-based frequency up-conversion can enhance the power output in piezoelectric energy harvesting by enabling cyclic free vibration at the fundamental bending mode of the harvester even for very low excitation frequencies. In this work, we present a geometrically nonlinear plucking-based framework for frequency up-conversion in piezoelectric energy harvesting under quasistatic excitations associated with low-frequency stimuli such as walking and similar rigid body motions. Axial shortening of the plectrum is essential to enable plucking excitation, which requires a nonlinear framework relating the plectrum parameters (e.g. overlap length between the plectrum and harvester) to the overall electrical power output. Von Kármán-type geometrically nonlinear deformation of the flexible plectrum cantilever is employed to relate the overlap length between the flexible (nonlinear) plectrum and the stiff (linear) harvester to the transverse quasistatic tip displacement of the plectrum, and thereby the tip load on the linear harvester in each plucking cycle. By combining the nonlinear plectrum mechanics and linear harvester dynamics with two-way electromechanical coupling, the electrical power output is obtained directly in terms of the overlap length. Experimental case studies and validations are presented for various overlap lengths and a set of electrical load resistance values. Further analysis results are reported regarding the combined effects of plectrum thickness and overlap length on the plucking force and harvested power output. The experimentally validated nonlinear plectrum-linear harvester framework proposed herein can be employed to design and optimize frequency up-conversion by properly choosing the plectrum parameters (geometry, material, overlap length, etc) as well as the harvester parameters.
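
    A crude first estimate of the quantities discussed above can be obtained from two textbook relations: the second-order foreshortening of a tip-loaded cantilever, Delta ~ 3*delta^2/(5*L), which links the overlap length to the plectrum tip deflection, and the linear tip stiffness 3EI/L^3, which gives the corresponding plucking force. The sketch below (Python) combines them with assumed plectrum dimensions; it is not the paper's von Karman model, only an order-of-magnitude illustration.

import numpy as np

# Assumed plectrum geometry and material (placeholders, not the paper's parameters)
L = 30e-3             # m, plectrum free length
b, h = 10e-3, 0.2e-3  # m, plectrum width and thickness
E = 3.0e9             # Pa, assumed plectrum modulus (plastic)
I = b * h**3 / 12.0   # m^4, second moment of area

overlap = 1.0e-3      # m, overlap length between plectrum and harvester tip

# Classical second-order foreshortening of a tip-loaded cantilever: Delta ~ 3*delta^2/(5*L)
delta = np.sqrt(5.0 * L * overlap / 3.0)   # transverse tip deflection at release
F_peak = 3.0 * E * I * delta / L**3        # linear cantilever tip force at that deflection

print(f"tip deflection at release ~ {delta*1e3:.2f} mm, peak plucking force ~ {F_peak:.3f} N")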

  16. Commissioning and validation of the ATLAS Level-1 topological trigger

    CERN Document Server

    The ATLAS collaboration; Hong, Tae Min

    2017-01-01

    The ATLAS experiment has recently commissioned a new hardware component of its first-level trigger: the topological processor (L1Topo). This innovative system, using state-of-the-art FPGA processors, selects events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Since the first-level trigger is a synchronous pipelined system, such requirements are applied within a latency of 200 ns. We will present the first results from data recorded using the L1Topo trigger; these demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-$P_{T}$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize the performance of this new trigger system. To reach ...

  17. Experimental Validation of the Electrokinetic Theory and Development of Seismoelectric Interferometry by Cross-Correlation

    Directory of Open Access Journals (Sweden)

    F. C. Schoemaker

    2012-01-01

    We experimentally validate a relatively recent electrokinetic formulation of the streaming potential (SP) coefficient as developed by Pride (1994). The start of our investigation focuses on the streaming potential coefficient, which gives rise to the coupling of mechanical and electromagnetic fields. It is found that the theoretical amplitude values of this dynamic SP coefficient are in good agreement with the normalized experimental results over a wide frequency range, assuming no frequency dependence of the bulk conductivity. By adopting the full set of electrokinetic equations, a full-waveform wave propagation model is formulated. We compare the model predictions, neglecting the interface response and modeling only the coseismic fields, with laboratory measurements of a seismic wave of frequency 500 kHz that generates electromagnetic signals. Agreement is observed between measurement and electrokinetic theory regarding the coseismic electric field. The governing equations are subsequently adopted to study the applicability of seismoelectric interferometry. It is shown that seismic sources at a single boundary location are sufficient to retrieve the 1D seismoelectric responses, both for the coseismic and interface components, in a layered model.

  18. Experimental and Numerical Investigations on Feasibility and Validity of Prismatic Rock Specimen in SHPB

    Directory of Open Access Journals (Sweden)

    Xibing Li

    2016-01-01

    The paper presents experimental and numerical studies on the feasibility and validity of using prismatic rock specimens in the split Hopkinson pressure bar (SHPB) test. First, experimental tests are conducted to evaluate the stress and strain uniformity in the prismatic specimens during impact loading. The stress analysis at the ends of the specimen shows that stress equilibrium can be achieved after about three wave reflections in the specimen, and that the balance can be well maintained for a certain time after peak stress. The strain analysis reveals that the prismatic specimen deforms uniformly during the dynamic loading period. Second, numerical simulation is performed to further verify the stress and strain uniformity in the prismatic specimen in the SHPB test. It indicates that stress equilibrium can be achieved in the prismatic specimen despite a certain degree of stress concentration at the corners. The comparative experiments demonstrate that the change of specimen shape has no significant effect on the dynamic responses and failure patterns of the specimen. Finally, a dynamic crack propagation test is presented to show the application of the present work in studying fracturing mechanisms under dynamic loading.
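
    The face-stress comparison described above is normally done with the standard one-dimensional SHPB bookkeeping: the front-face force follows from the incident plus reflected strains and the back-face force from the transmitted strain. The sketch below (Python, with synthetic strain pulses and assumed bar properties) computes the two forces and a simple equilibrium ratio; it illustrates the bookkeeping only, not the authors' data processing.

import numpy as np

# Assumed bar properties and synthetic strain histories (illustrative only)
E_bar = 200e9                 # Pa, bar Young's modulus
A_bar = np.pi * (0.025)**2    # m^2, bar cross-sectional area (50 mm diameter)

t = np.linspace(0.0, 200e-6, 400)
eps_i = -4.0e-4 * np.sin(np.pi * t / 200e-6)   # incident pulse (compression negative)
eps_r = 1.5e-4 * np.sin(np.pi * t / 200e-6)    # reflected pulse
eps_t = -2.5e-4 * np.sin(np.pi * t / 200e-6)   # transmitted pulse

# One-dimensional wave analysis: forces at the specimen front and back faces
F_front = E_bar * A_bar * (eps_i + eps_r)
F_back = E_bar * A_bar * eps_t

# Equilibrium ratio (close to 1 once the specimen is in dynamic stress equilibrium)
mask = np.abs(F_back) > 0.1 * np.abs(F_back).max()
ratio = F_front[mask] / F_back[mask]
print(f"mean front/back force ratio over the loading pulse: {ratio.mean():.2f}")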

  19. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding ... to compare and validate different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data are compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated ...

  20. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    Science.gov (United States)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.
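
    A Fault Signature Matrix of the kind mentioned above can be illustrated in a few lines: each column holds the binary symptom pattern expected for one fault, and isolation amounts to matching the observed symptom vector against the columns. In the sketch below (Python) the fault and symptom names are invented for illustration and are not taken from the paper.

import numpy as np

# Rows: monitored symptoms; columns: candidate faults (all names are hypothetical)
symptoms = ["stack voltage low", "air blower speed high", "reformer T high", "exhaust O2 low"]
faults = ["air leakage", "fuel starvation", "blower degradation"]

# Fault Signature Matrix: 1 = symptom expected when the fault is present
FSM = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
])

observed = np.array([1, 0, 1, 1])  # hypothetical on-line symptom vector

# Isolate the fault(s) whose signature matches the observation exactly
matches = [faults[j] for j in range(FSM.shape[1]) if np.array_equal(FSM[:, j], observed)]
print("isolated fault(s):", matches or "no unique match, refine the FSM")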

  1. Experimental validation of a kinetic multi-component mechanism in a wide HCCI engine operating range for mixtures of n-heptane, iso-octane and toluene: Influence of EGR parameters

    International Nuclear Information System (INIS)

    Machrafi, Hatim

    2008-01-01

    The parameters that are present in exhaust gas recirculation (EGR) are believed to make an important contribution to controlling the auto-ignition process of homogeneous charge compression ignition (HCCI) in an engine. To investigate the behaviour of the auto-ignition process, a kinetic multi-component mechanism containing 62 reactions and 49 species for mixtures of n-heptane, iso-octane and toluene was developed in former work. This paper presents an experimental validation of this mechanism, comparing the calculated pressure, heat release, ignition delays and CO 2 emissions with experimental data obtained on an HCCI engine. The validation is performed in a broad range of EGR parameters by varying the dilution by N 2 and CO 2 from 0 to 46 vol.%, changing the EGR temperature from 30 to 120 deg. C, altering the addition of CO and NO from 0 to 170 ppmv and varying the addition of CH 2 O from 0 to 1400 ppmv. These validations were performed respecting the HCCI conditions for the inlet temperature and the equivalence ratio. The results show that the mechanism is validated experimentally for dilution levels of up to 21-30 vol.%, depending on the diluting species, and over the whole range of EGR temperature. The mechanism is validated over the whole range of CO and CH 2 O addition. As for the addition of NO, the mechanism is validated quantitatively up to 50 ppmv and qualitatively up to 170 ppmv

  2. Development, Implementation and Experimental Validations of Activation Products Models for Water Pool Reactors

    International Nuclear Information System (INIS)

    Petriw, S.N.

    2001-01-01

    Some parameters were obtained by both calculations and experiments in order to determine the source of the main activation products in water pool reactors. In this case, the study was done in the RA-6 reactor (Centro Atomico Bariloche - Argentina). In normal operation, the neutron flux in the core activates the aluminium plates. The activity in the coolant water comes from the activation of its impurities and mainly from a quantity of aluminium that, once activated, leaves the cladding and is transported by the water cooling system. This quantity depends on the 'recoil range' of each activation reaction. The 'staying time' in the pool (the time that nuclides spend circulating in the reactor pool) is another characteristic parameter of the system. The stationary-state activity of some nuclides depends on this time. Several theoretical models of activation in the coolant water system are also presented, together with their experimental validations

  3. Innovative alpha radioactivity monitor for clearance level inspection based on ionized air transport technology (2). CFD-simulated and experimental ion transport efficiencies for uranium-attached pipes

    International Nuclear Information System (INIS)

    Hirata, Yosuke; Nakahara, Katsuhiko; Sano, Akira; Sato, Mitsuyoshi; Aoyama, Yoshio; Miyamoto, Yasuaki; Yamaguchi, Hiromi; Nanbu, Kenichi; Takahashi, Hiroyuki; Oda, Akinori

    2007-01-01

    An innovative alpha radioactivity monitor for clearance level inspection has been developed. This apparatus measures an ion current resulting from air ionization by alpha particles. Ions generated in the measurement chamber of about 1 m 3 in volume are transported by airflow to a sensor and measured. This paper presents computational estimation of ion transport efficiencies for two pipes with different lengths, the inner surfaces of which were covered with a thin layer of uranium. These ion transport efficiencies were compared with those experimentally obtained for the purpose of our model validation. Good agreement was observed between transport efficiencies from simulations and those experimentally estimated. Dependence of the transport efficiencies on the region of uranium coating was also examined, based on which anticipated errors arising from unclear positions of contamination are also discussed. (author)

  4. Evaluation of a performance assessment methodology for low-level radioactive waste disposal facilities: Validation needs. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, M.W.; Olague, N.E. [Sandia National Labs., Albuquerque, NM (United States)

    1995-02-01

    In this report, concepts on how validation fits into the scheme of developing confidence in performance assessments are introduced. A general framework for validation and confidence building in regulatory decision making is provided. It is found that traditional validation studies have a very limited role in developing site-specific confidence in performance assessments. Indeed, validation studies are shown to have a role only in the context that their results can narrow the scope of initial investigations that should be considered in a performance assessment. In addition, validation needs for performance assessment of low-level waste disposal facilities are discussed, and potential approaches to address those needs are suggested. These areas of topical research are ranked in order of importance based on relevance to a performance assessment and likelihood of success.

  5. Evaluation of a performance assessment methodology for low-level radioactive waste disposal facilities: Validation needs. Volume 2

    International Nuclear Information System (INIS)

    Kozak, M.W.; Olague, N.E.

    1995-02-01

    In this report, concepts on how validation fits into the scheme of developing confidence in performance assessments are introduced. A general framework for validation and confidence building in regulatory decision making is provided. It is found that traditional validation studies have a very limited role in developing site-specific confidence in performance assessments. Indeed, validation studies are shown to have a role only in the context that their results can narrow the scope of initial investigations that should be considered in a performance assessment. In addition, validation needs for performance assessment of low-level waste disposal facilities are discussed, and potential approaches to address those needs are suggested. These areas of topical research are ranked in order of importance based on relevance to a performance assessment and likelihood of success

  6. CPV cells cooling system based on submerged jet impingement: CFD modeling and experimental validation

    Science.gov (United States)

    Montorfano, Davide; Gaetano, Antonio; Barbato, Maurizio C.; Ambrosetti, Gianluca; Pedretti, Andrea

    2014-09-01

    Concentrating photovoltaic (CPV) cells offer higher efficiencies with regard to the PV ones and allow to strongly reduce the overall solar cell area. However, to operate correctly and exploit their advantages, their temperature has to be kept low and as uniform as possible and the cooling circuit pressure drops need to be limited. In this work an impingement water jet cooling system specifically designed for an industrial HCPV receiver is studied. Through the literature and by means of accurate computational fluid dynamics (CFD) simulations, the nozzle to plate distance, the number of jets and the nozzle pitch, i.e. the distance between adjacent jets, were optimized. Afterwards, extensive experimental tests were performed to validate pressure drops and cooling power simulation results.

  7. Experimental investigation of channel avulsion frequency on river deltas under rising sea levels

    Science.gov (United States)

    Silvestre, J.; Chadwick, A. J.; Steele, S.; Lamb, M. P.

    2017-12-01

    River deltas are low-relief landscapes that are socioeconomically important; they are home to over half a billion people worldwide. Many deltas are built by cycles of lobe growth punctuated by abrupt channel shifts, or avulsions, which often reoccur at a similar location and with a regular frequency. Previous experimental work has investigated the effect of hydrodynamic backwater in controlling channel avulsion location and timing on deltas under constant sea level conditions, but it is unclear how sea-level rise impacts avulsion dynamics. We present results from a flume experiment designed to isolate the role of relative sea-level rise on the evolution of a backwater-influenced delta. The experiment was conducted in the river-ocean facility at Caltech, where a 7m long, 14cm wide alluvial river drains into a 6m by 3m "ocean" basin. The experimental delta grew under subcritical flow, a persistent backwater zone, and a range of sea level rise rates. Without sea level rise, lobe progradation produced in-channel aggradation and periodic avulsions every 3.6 ± 0.9 hours, which corresponded to when channels aggraded to approximately one-half of their flow depth. With a modest rate of sea-level rise (0.25 mm/hr), we observed enhanced aggradation in the backwater zone, causing channels to aggrade more quickly and avulse more frequently (every 2.1 ± 0.6 hours). In future work, we expect further increases in the rate of relative sea-level rise to cause avulsion frequency to decrease as the delta drowns and the backwater zone retreats upstream. Experimental results can serve as tests of numerical models that are needed for hazard mitigation and coastal sustainability efforts on drowning deltas.
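
    The half-depth avulsion criterion reported above implies a simple timescale estimate: the expected time between avulsions is roughly half the channel flow depth divided by the in-channel aggradation rate. The sketch below works through that arithmetic with illustrative numbers that are not the actual experimental parameters.

    # Back-of-envelope avulsion timescale implied by the half-depth criterion above:
    # an avulsion occurs once in-channel aggradation reaches about half the flow depth.
    # The values below are hypothetical, not the experiment's measured parameters.
    flow_depth_mm = 10.0          # assumed channel flow depth
    aggradation_mm_per_hr = 1.4   # assumed in-channel aggradation rate

    avulsion_period_hr = (flow_depth_mm / 2.0) / aggradation_mm_per_hr
    print(f"expected avulsion period ~ {avulsion_period_hr:.1f} h")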

  8. Multiphysics modelling and experimental validation of an air-coupled array of PMUTs with residual stresses

    Science.gov (United States)

    Massimino, G.; Colombo, A.; D'Alessandro, L.; Procopio, F.; Ardito, R.; Ferrera, M.; Corigliano, A.

    2018-05-01

    In this paper a complete multiphysics modelling via the finite element method (FEM) of an air-coupled array of piezoelectric micromachined ultrasonic transducers (PMUT) and its experimental validation are presented. Two numerical models are described for the single transducer, axisymmetric and 3D, with the following features: the presence of fabrication-induced residual stresses, which determine a non-linear initial deformed configuration of the diaphragm and a substantial fundamental mode frequency shift; the multiple coupling between different physics, namely electro-mechanical coupling for the piezo-electric model, thermo-acoustic-structural interaction and thermo-acoustic-pressure interaction for the wave propagation in the surrounding fluid. The model for the single transducer is enhanced considering the full set of PMUTs belonging to the silicon die in a 4 × 4 array configuration. The results of the numerical multiphysics models are compared with experimental ones in terms of the initial static pre-deflection, of the diaphragm central point spectrum and of the sound intensity at 3.5 cm in the vertical direction along the axis of the diaphragm.

  9. Numerical modelling and experimental validation of hydrodynamics of an emulsion in an extraction column

    International Nuclear Information System (INIS)

    Paisant, Jean-Francois

    2014-01-01

    a second approach, an experimental device was sized in order to establish an extensional flow in order to characterize and validate the physical model by data acquisition. These series of experiments were conducted by coupling particle image velocimetry with laser induced fluorescence (FIL). Continuous phases velocity was obtained by PIV and a drop detecting and tracking algorithm has been developed to estimate dispersed and continuous phases velocities and the volume fraction of the dispersed phase. These results, such as velocities and strain rate tensor, have been used in a first validation of the model. (author) [fr

  10. Single-Molecule Force Spectroscopy Trajectories of a Single Protein and Its Polyproteins Are Equivalent: A Direct Experimental Validation Based on A Small Protein NuG2.

    Science.gov (United States)

    Lei, Hai; He, Chengzhi; Hu, Chunguang; Li, Jinliang; Hu, Xiaodong; Hu, Xiaotang; Li, Hongbin

    2017-05-22

    Single-molecule force spectroscopy (SMFS) has become a powerful tool in investigating the mechanical unfolding/folding of proteins at the single-molecule level. Polyproteins made of tandem identical repeats have been widely used in atomic force microscopy (AFM)-based SMFS studies, where polyproteins not only serve as fingerprints to identify single-molecule stretching events, but may also improve statistics of data collection. However, the inherent assumption of such experiments is that all the domains in the polyprotein are equivalent and one SMFS trajectory of stretching a polyprotein made of n domains is equivalent to n trajectories of stretching a single domain. Such an assumption has not been validated experimentally. Using a small protein NuG2 and its polyprotein (NuG2) 4 as model systems, here we use optical trapping (OT) to directly validate this assumption. Our results show that OT experiments on NuG2 and (NuG2) 4 lead to identical parameters describing the unfolding and folding kinetics of NuG2, demonstrating that indeed stretching a polyprotein of NuG2 is equivalent to stretching single NuG2 in force spectroscopy experiments and thus validating the use of polyproteins in SMFS experiments. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Mixture level models in Toshiba and General Electric blowdown experimental analysis

    International Nuclear Information System (INIS)

    Gebrim, A.N.

    1993-01-01

    Three different mixture level tracking methods for vertical flow channels were tested against two blowdown experiments. The aim of the tests was to assess the computational efficiency of the methods and the agreement of their results with the experimental data. The first method has been used in the system code ATHLET. The second one has been used in the system code developed at BNL. The third one is described in a report, but there is no indication that it has been tested. The results show that the first and the third methods produce good agreement with the experimental data. The third method needs a fine nodalization to yield good results. (C.M.)

  12. Optimization of instrumental neutron activation analysis method by means of 2{sup k} experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2{sup k} experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure in the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations it was determined the optimized experimental conditions for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effect and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
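
    As a rough illustration of the 2^k factorial analysis mentioned above, the sketch below codes a two-level full factorial design for the three variables considered (decay time, counting time and sample-to-detector distance) and estimates main and interaction effects in the standard way. The response values are invented placeholders, not measured mass fractions.

    # Minimal sketch of a 2**k (k = 3) factorial analysis for the three variables above.
    # The response vector is a dummy placeholder; the effect formulas are the standard
    # contrasts for a two-level full factorial design.
    from itertools import product

    factors = ["decay_time", "counting_time", "detector_distance"]
    runs = list(product([-1, +1], repeat=3))                      # 8 coded treatment combinations
    response = [10.2, 10.5, 10.1, 10.8, 10.3, 10.6, 10.2, 10.9]   # hypothetical results

    def effect(signs):
        """Average response at the +1 level minus average response at the -1 level."""
        plus = [y for s, y in zip(signs, response) if s > 0]
        minus = [y for s, y in zip(signs, response) if s < 0]
        return sum(plus) / len(plus) - sum(minus) / len(minus)

    # Main effects
    for i, name in enumerate(factors):
        print(name, effect([run[i] for run in runs]))
    # Two-factor interaction example: decay time x counting time
    print("decay x counting", effect([run[0] * run[1] for run in runs]))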

  13. Optimal Control of Diesel Engines: Numerical Methods, Applications, and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Jonas Asprion

    2014-01-01

    become complex systems. The exploitation of any leftover potential during transient operation is crucial. However, even an experienced calibration engineer cannot conceive all the dynamic cross couplings between the many actuators. Therefore, a highly iterative procedure is required to obtain a single engine calibration, which in turn causes a high demand for test-bench time. Physics-based mathematical models and a dynamic optimisation are the tools to alleviate this dilemma. This paper presents the methods required to implement such an approach. The optimisation-oriented modelling of diesel engines is summarised, and the numerical methods required to solve the corresponding large-scale optimal control problems are presented. The resulting optimal control input trajectories over long driving profiles are shown to provide enough information to allow conclusions to be drawn for causal control strategies. Ways of utilising this data are illustrated, which indicate that a fully automated dynamic calibration of the engine control unit is conceivable. An experimental validation demonstrates the meaningfulness of these results. The measurement results show that the optimisation predicts the reduction of the fuel consumption and the cumulative pollutant emissions with a relative error of around 10% on highly transient driving cycles.

  14. Validity of Level of Supervision Scales for Assessing Pediatric Fellows on the Common Pediatric Subspecialty Entrustable Professional Activities.

    Science.gov (United States)

    Mink, Richard B; Schwartz, Alan; Herman, Bruce E; Turner, David A; Curran, Megan L; Myers, Angela; Hsu, Deborah C; Kesselheim, Jennifer C; Carraccio, Carol L

    2018-02-01

    Entrustable professional activities (EPAs) represent the routine and essential activities that physicians perform in practice. Although some level of supervision scales have been proposed, they have not been validated. In this study, the investigators created level of supervision scales for EPAs common to the pediatric subspecialties and then examined their validity in a study conducted by the Subspecialty Pediatrics Investigator Network (SPIN). SPIN Steering Committee members used a modified Delphi process to develop unique scales for six of the seven common EPAs. The investigators sought validity evidence in a multisubspecialty study in which pediatric fellowship program directors and Clinical Competency Committees used the scales to evaluate fellows in fall 2014 and spring 2015. Separate scales for the six EPAs, each with five levels of progressive entrustment, were created. In both fall and spring, more than 300 fellows in each year of training from over 200 programs were assessed. In both periods and for each EPA, there was a progressive increase in entrustment levels, with second-year fellows rated higher than first-year fellows (P < .001) and third-year fellows rated higher than second-year fellows (P < .001). For each EPA, spring ratings were higher (P < .001) than those in the fall. Interrater reliability was high (Janson and Olsson's iota = 0.73). The supervision scales developed for these six common pediatric subspecialty EPAs demonstrated strong validity evidence for use in EPA-based assessment of pediatric fellows. They may also inform the development of scales in other specialties.

  15. Experimental validation of a model for diffusion-controlled absorption of organic compounds in the trachea

    Energy Technology Data Exchange (ETDEWEB)

    Gerde, P. [National Inst. for Working Life, Solna (Sweden); Muggenburg, B.A.; Thornton-Manning, J.R. [and others

    1995-12-01

    Most chemically induced lung cancer originates in the epithelial cells in the airways. Common conceptions are that chemicals deposited on the airway surface are rapidly absorbed through mucous membranes, limited primarily by the rate of blood perfusion in the mucosa. It is also commonly thought that for chemicals to induce toxicity at the site of entry, they must be either rapidly reactive, readily metabolizable, or especially toxic to the tissues at the site of entry. For highly lipophilic toxicants, there is a third option. Our mathematical model predicts that as lipophilicity increases, chemicals partition more readily into the cellular lipid membranes and diffuse more slowly through the tissues. Therefore, absorption of very lipophilic compounds will be almost entirely limited by the rate of diffusion through the epithelium rather than by perfusion of the capillary bed in the subepithelium. We have reported on a preliminary model for absorption through mucous membranes of any substance with a lipid/aqueous partition coefficient larger than one. The purpose of this work was to experimentally validate the model in Beagle dogs. This validated model on toxicant absorption in the airway mucosa will improve risk assessment of inhaled

  16. Energy performance of a ventilated façade by simulation with experimental validation

    International Nuclear Information System (INIS)

    Aparicio-Fernández, Carolina; Vivancos, José-Luis; Ferrer-Gisbert, Pablo; Royo-Pastor, Rafael

    2014-01-01

    A model for a building with ventilated façade was created using the software tool TRNSYS, version 17, and airflow parameters were simulated using TRNFlow. The results obtained with the model are compared and validated with experimental data. The temperature distribution along the air cavity was analysed and a chimney effect was observed, which produced the highest temperature gradient on the first floor. The heat flux of the external wall was analysed, and greater temperatures were observed on the external layer and inside the cavity. The model allows to calculate the energy demand of the building façade proposing and evaluating passive strategies. The corresponding office building for computer laboratories located in Valencia (Spain), was monitored for a year. The thermal behaviour of the floating external sheet was analysed using an electronic panel designed for the reading and storage of data. A feasibility study of the recovery of hot air inside the façade into the building was performed. The results obtained showed a lower heating demand when hot air is introduced inside the building, increasing the efficiency of heat recovery equipment. - Highlights: •An existing office building was monitored for a year. •A model of a ventilated façade by TRNSYS simulation tool was validated. •Air flow parameters inside the ventilated façade were identified. •Recovery of the hot air inside the façade for input into the building was studied

  17. Method of administration of PROMIS scales did not significantly impact score level, reliability, or validity

    DEFF Research Database (Denmark)

    Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara

    2014-01-01

    OBJECTIVES: To test the impact of the method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). STUDY DESIGN AND SETTING: Two nonoverlapping parallel forms each containing eight items from...... questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICCs), and convergent/discriminant validity. RESULTS: In difference score analyses, no significant mode differences were found and all confidence intervals were within the prespecified minimal important difference of 0.2 standard deviation. Parallel-forms reliabilities were very high (ICC = 0.85-0.93). Only one across-mode ICC was significantly lower than the same-mode ICC. Tests of validity......

  18. The development and experimental validation of a reduced ternary kinetic mechanism for the auto-ignition at HCCI conditions, proposing a global reaction path for ternary gasoline surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Machrafi, Hatim; Cavadias, Simeon; Amouroux, Jacques [UPMC Universite Paris 06, LGPPTS, Ecole Nationale Superieure de Chimie de Paris, 11, rue de Pierre et Marie Curie, 75005 Paris (France)

    2009-02-15

    To acquire a high amount of information of the behaviour of the Homogeneous Charge Compression Ignition (HCCI) auto-ignition process, a reduced surrogate mechanism has been composed out of reduced n-heptane, iso-octane and toluene mechanisms, containing 62 reactions and 49 species. This mechanism has been validated numerically in a 0D HCCI engine code against more detailed mechanisms (inlet temperature varying from 290 to 500 K, the equivalence ratio from 0.2 to 0.7 and the compression ratio from 8 to 18) and experimentally against experimental shock tube and rapid compression machine data from the literature at pressures between 9 and 55 bar and temperatures between 700 and 1400 K for several fuels: the pure compounds n-heptane, iso-octane and toluene as well as binary and ternary mixtures of these compounds. For this validation, stoichiometric mixtures and mixtures with an equivalence ratio of 0.5 are used. The experimental validation is extended by comparing the surrogate mechanism to experimental data from an HCCI engine. A global reaction pathway is proposed for the auto-ignition of a surrogate gasoline, using the surrogate mechanism, in order to show the interactions that the three compounds can have with one another during the auto-ignition of a ternary mixture. (author)

  19. Experimental validation of calculated capture rate for nucleus involved in fuel cycle

    International Nuclear Information System (INIS)

    Benslimane-Bouland, A.

    1997-09-01

    The framework of this study was the evaluation of the nuclear data requirements for Actinides and Fission Products applied to current nuclear reactors as well as future applications. This last item includes extended irradiation campaigns, 100 % Mixed Oxide fuel, transmutation or even incineration. The first part of this study presents different types of integral measurements which are available for capture rate measurements, as well as the methods used for reactor core calculation route design and nuclear data library validation. The second section concerns the analysis of three specific irradiation experiments. The results have shown the extent of the current knowledge on nuclear data as well as the associated uncertainties. The third and last section shows both the coherency between all the results, and the statistical method applied for nuclear data library adjustment. A relevant application of this method has demonstrated that only specifically chosen integral experiments can be of use for the validation of nuclear data libraries. The conclusion is reached that even if co-ordinated efforts between reactor and nuclear physicists have made possible a huge improvement in the knowledge of capture cross sections of the main nuclei such as uranium and plutonium, some improvements are currently necessary for the minor actinides (Np, Am and Cm). Both integral and differential measurements are recommended to improve the knowledge of minor actinide cross sections. As far as integral experiments are concerned, a set of criteria to be followed during the experimental conception have been defined in order to both reduce the number of required calculation approximations, and to increase as much as possible the maximum amount of extracted information. (author)

  20. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues.

    Science.gov (United States)

    Mourya, Devendra T; Yadav, Pragya D; Khare, Ajay; Khan, Anwar H

    2017-10-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being setup in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  1. ELISA validation and determination of cut-off level for chloramphenicol residues in honey

    Directory of Open Access Journals (Sweden)

    Biernacki Bogumił

    2015-09-01

    An analytical validation of a screening ELISA for detection of chloramphenicol (CAP) in honey was conducted according to Commission Decision 2002/657/EC and the Guidelines for the Validation of Screening Methods for Residues of Veterinary Medicines. The analyte was extracted from honey with a water and ethyl acetate mixture, and CAP concentrations were measured photometrically at 450 nm. The recovery rate of the analyte from spiked samples was 79%. The cut-off level of CAP in honey was established as the minimum recovery (0.17 units). The detection capability (CCβ) was fixed at 0.25 μg kg−1. No relevant interferences from matrix effects or from structurally related substances, including florfenicol and thiamphenicol, were observed. The ELISA method should be useful for determination of CAP residues in honey monitoring.

  2. Experimental validation of a kinetic multi-component mechanism in a wide HCCI engine operating range for mixtures of n-heptane, iso-octane and toluene: Influence of EGR parameters

    Energy Technology Data Exchange (ETDEWEB)

    Machrafi, Hatim [LGPPTS, Ecole Nationale Superieure de Chimie de Paris/ Universite Pierre et Marie Curie (Paris 6), 11, rue de Pierre et Marie Curie, 75231 Paris Cedex 05 (France)

    2008-11-15

    The parameters that are present in exhaust gas recirculation (EGR) are believed to provide an important contribution to control the auto-ignition process of the homogeneous charge compression ignition (HCCI) in an engine. For the investigation of the behaviour of the auto-ignition process, a kinetic multi-component mechanism has been developed in former work, containing 62 reactions and 49 species for mixtures of n-heptane, iso-octane and toluene. This paper presents an experimental validation of this mechanism, comparing the calculated pressure, heat release, ignition delays and CO{sub 2} emissions with experimental data performed on a HCCI engine. The validation is performed in a broad range of EGR parameters by varying the dilution by N{sub 2} and CO{sub 2} from 0 to 46 vol.%, changing the EGR temperature from 30 to 120 C, altering the addition of CO and NO from 0 to 170 ppmv and varying the addition of CH{sub 2}O from 0 to 1400 ppmv. These validations were performed respecting the HCCI conditions for the inlet temperature and the equivalence ratio. The results showed that the mechanism is validated experimentally in dilution ranges going up to 21-30 vol.%, depending on the species of dilution and over the whole range of the EGR temperature. The mechanism is validated over the whole range of CO and CH{sub 2}O addition. As for the addition of NO, the mechanism is validated quantitatively up to 50 ppmv and qualitatively up to 170 ppmv. (author)

  3. FY-09 Report: Experimental Validation of Stratified Flow Phenomena, Graphite Oxidation, and Mitigation Strategies of Air Ingress Accidents

    Energy Technology Data Exchange (ETDEWEB)

    Chang H. Oh; Eung S. Kim

    2009-12-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in the Next Generation Nuclear Plant (NGNP)/Gen-IV very high temperature reactor (VHTR). Phenomena Identification and Ranking Studies to date have identified that an air ingress event following on the heels of a VHTR depressurization is a very important incident. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority for the NGNP Project. Following a loss of coolant and system depressurization incident, air will enter the core through the break, leading to oxidation of the in-core graphite structure and fuel. If this accident occurs, the oxidation will accelerate heat-up of the bottom reflector and the reactor core and will eventually cause the release of fission products. The potential collapse of the core bottom structures causing the release of CO and fission products is one of the concerns. Therefore, experimental validation with the analytical model and computational fluid dynamic (CFD) model developed in this study is very important. Estimating the proper safety margin will require experimental data and tools, including accurate multidimensional thermal-hydraulic and reactor physics models, a burn-off model, and a fracture model. It will also require effective strategies to mitigate the effects of oxidation. The results from this research will provide crucial inputs to the INL NGNP/VHTR Methods Research and Development project. The second year of this three-year project (FY-08 to FY-10) was focused on (a) the analytical, CFD, and experimental study of air ingress caused by density-driven, stratified, countercurrent flow; (b) advanced graphite oxidation experiments and modeling; (c) experimental study of burn-off in the core bottom structures, (d) implementation of advanced

  4. Assessment of leaf carotenoids content with a new carotenoid index: Development and validation on experimental and model data

    Science.gov (United States)

    Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele

    2017-05-01

    Leaf carotenoids content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight into early detection of stress in vegetation. With spectroscopy techniques, a semi-empirical approach based on spectral indices was extensively used for carotenoids content estimation. However, established spectral indices for carotenoids that generally rely on limited measured data, might lack predictive accuracy for carotenoids estimation in various species and at different growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that CARI was the index most linearly correlated with carotenoids content at the leaf level using a synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2), compared with published spectral indices. Cross-validation results with CARI using ANGERS data achieved quite an accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), though the RBRI performed as the best index (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf level field survey data, though PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and other assessed spectral indices showed a good performance for a given dataset, overall their estimation accuracy was not consistent across all datasets used in this study. Conversely CARI was more robust showing good results in all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI might not be very sensitive to LCar changes at low leaf area index (LAI) value, and in these conditions soil moisture
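
    The index evaluation procedure described above comes down to fitting a linear relation between a candidate spectral index and measured carotenoid content and scoring it with R2 and RMSE. The sketch below shows that scoring step only; the index values and carotenoid contents are made-up placeholders, and the actual CARI band formulation is not reproduced here.

    # Sketch of scoring a candidate spectral index against measured leaf carotenoid
    # content with a linear fit, R2 and RMSE. All values are hypothetical placeholders.
    import numpy as np

    index = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.47])   # candidate index values
    lcar = np.array([4.1, 5.6, 7.9, 9.4, 12.0, 13.8])        # measured content, ug/cm2

    slope, intercept = np.polyfit(index, lcar, 1)             # linear calibration
    pred = slope * index + intercept
    ss_res = np.sum((lcar - pred) ** 2)
    ss_tot = np.sum((lcar - lcar.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((lcar - pred) ** 2))
    print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f} ug/cm2")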

  5. CFD Simulation and Experimental Validation of Fluid Flow and Particle Transport in a Model of Alveolated Airways.

    Science.gov (United States)

    Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal

    2009-05-01

    Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD-predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy.

  6. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    Directory of Open Access Journals (Sweden)

    Devendra T Mourya

    2017-01-01

    Full Text Available With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being setup in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  7. Experimental investigation of stratified two-phase flows in the hot leg of a PWR for CFD validation

    Energy Technology Data Exchange (ETDEWEB)

    Vallee, Christophe; Lucas, Dirk [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany). Inst. of Fluid Dynamics; Tomiyama, Akio [Kobe Univ. (Japan). Graduate School of Engineering; Murase, Michio [Institute of Nuclear Safety System Inc. (INSS), Fukui (Japan)

    2012-07-01

    Stratified two-phase flows were investigated in two different models of the hot leg of a pressurised water reactor (PWR) in order to provide experimental data for the development and validation of computational fluid dynamics (CFD) codes. Therefore, the local flow structure was visualised with a high-speed video camera. Moreover, one test section was designed with a rectangular cross-section to achieve optimum observation conditions. The phenomenon of counter-current flow limitation (CCFL) was investigated, which may affect the reflux condenser cooling mode in some accident scenarios. (orig.)

  8. Validation of an extraction paper chromatography (EPC) technique for estimation of trace levels of 90Sr in 90Y solutions obtained from 90Sr/90Y generator systems

    International Nuclear Information System (INIS)

    Usha Pandey; Yogendra Kumar; Ashutosh Dash

    2014-01-01

    While the extraction paper chromatography (EPC) technique constitutes a novel paradigm for the determination of few Becquerels of 90 Sr in MBq quantities of 90 Y obtained from 90 Sr/ 90 Y generator, validation of the technique is essential to ensure its usefulness as a real time analytical tool. With a view to explore the relevance and applicability of EPC technique as a real time quality control (QC) technique for the routine estimation of 90 Sr content in generator produced 90 Y, a systematic validation study was carried out diligently not only to establish its worthiness but also to broaden its horizon. The ability of the EPC technique to separate trace amounts of Sr 2+ in the presence of large amounts of Y 3+ was verified. The specificity of the technique for Y 3+ was demonstrated with 90 Y obtained by neutron irradiation. The method was validated under real experimental conditions and compared with a QC method described in US Pharmacopeia for detection of 90 Sr levels in 90 Y radiopharmaceuticals. (author)

  9. Control of a Vanadium Redox Battery and supercapacitor using a Three-Level Neutral Point Clamped converter

    Science.gov (United States)

    Etxeberria, A.; Vechiu, I.; Baudoin, S.; Camblong, H.; Kreckelbergh, S.

    2014-02-01

    The increasing use of distributed generators, which are mainly based on renewable sources, can create several issues in the operation of the electric grid. The microgrid is being analysed as a solution to the integration in the grid of the renewable sources at a high penetration level in a controlled way. The storage systems play a vital role in order to keep the energy and power balance of the microgrid. Due to the technical limitations of the currently available storage systems, it is necessary to use more than one storage technology to satisfy the requirements of the microgrid application. This work validates in simulations and experimentally the use of a Three-Level Neutral Point Clamped converter to control the power flow of a hybrid storage system formed by a SuperCapacitor and a Vanadium Redox Battery. The operation of the system is validated in two case studies in the experimental platform installed in ESTIA. The experimental results prove the validity of the proposed system as well as the designed control algorithm. The good agreement among experimental and simulation results also validates the simulation model, that can therefore be used to analyse the operation of the system in different case studies.

  10. Experimental validation of the influence of white matter anisotropy on the intracranial EEG forward solution.

    Science.gov (United States)

    Bangera, Nitin B; Schomer, Donald L; Dehghani, Nima; Ulbert, Istvan; Cash, Sydney; Papavasiliou, Steve; Eisenberg, Solomon R; Dale, Anders M; Halgren, Eric

    2010-12-01

    Forward solutions with different levels of complexity are employed for localization of current generators, which are responsible for the electric and magnetic fields measured from the human brain. The influence of brain anisotropy on the forward solution is poorly understood. The goal of this study is to validate an anisotropic model for the intracranial electric forward solution by comparing with the directly measured 'gold standard'. Dipolar sources are created at known locations in the brain and intracranial electroencephalogram (EEG) is recorded simultaneously. Isotropic models with increasing level of complexity are generated along with anisotropic models based on Diffusion tensor imaging (DTI). A Finite Element Method based forward solution is calculated and validated using the measured data. Major findings are (1) An anisotropic model with a linear scaling between the eigenvalues of the electrical conductivity tensor and water self-diffusion tensor in brain tissue is validated. The greatest improvement was obtained when the stimulation site is close to a region of high anisotropy. The model with a global anisotropic ratio of 10:1 between the eigenvalues (parallel: tangential to the fiber direction) has the worst performance of all the anisotropic models. (2) Inclusion of cerebrospinal fluid as well as brain anisotropy in the forward model is necessary for an accurate description of the electric field inside the skull. The results indicate that an anisotropic model based on the DTI can be constructed non-invasively and shows an improved performance when compared to the isotropic models for the calculation of the intracranial EEG forward solution.
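
    The validated anisotropy model above assumes that the conductivity tensor shares its eigenvectors with the water self-diffusion tensor and that the eigenvalues are related by a linear scaling. A minimal sketch of that construction is given below; the diffusion tensor values and the scaling constant are illustrative assumptions, not values from the study.

    # Minimal sketch of building an anisotropic conductivity tensor from a DTI
    # diffusion tensor by linear eigenvalue scaling. Numbers are illustrative only.
    import numpy as np

    D = np.array([[1.7, 0.1, 0.0],   # hypothetical diffusion tensor (arbitrary units)
                  [0.1, 0.4, 0.0],
                  [0.0, 0.0, 0.4]])
    k = 0.7                           # assumed linear scaling factor between eigenvalues

    eigvals, eigvecs = np.linalg.eigh(D)                  # D is symmetric
    sigma = eigvecs @ np.diag(k * eigvals) @ eigvecs.T    # conductivity tensor, same eigenvectors
    print(sigma)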

  11. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    Directory of Open Access Journals (Sweden)

    N. M. Velpuri

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004–2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1–2 m. The lake level fluctuated in the range up to 4 m between the years 1998–2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated

  12. A multi-source satellite data approach for modelling Lake Turkana water level: Calibration and validation using satellite altimetry data

    Science.gov (United States)

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1-2 m. The lake level fluctuated in the range up to 4 m between the years 1998-2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance
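
    The modelling and validation steps described in the two records above amount to a storage bookkeeping of inflow, over-the-lake rainfall and evaporation, followed by a comparison to altimetry levels using Pearson's correlation and the Nash-Sutcliffe coefficient. The sketch below illustrates those two steps with invented numbers; it is not the actual Lake Turkana model.

    # Toy water-balance update and validation statistics (Pearson's r and NSCE).
    # All inputs and "observed" altimetry levels are made-up placeholders.
    import numpy as np

    inflow = np.array([0.30, 0.55, 0.20, 0.10])   # level-equivalent inflow, m per step
    rain = np.array([0.05, 0.08, 0.02, 0.01])     # over-the-lake rainfall, m per step
    evap = np.array([0.20, 0.22, 0.21, 0.20])     # over-the-lake evaporation, m per step
    level0 = 360.0                                 # assumed starting level, m a.s.l.

    modelled = level0 + np.cumsum(inflow + rain - evap)
    altimetry = np.array([360.14, 360.52, 360.55, 360.44])   # hypothetical observations

    r = np.corrcoef(modelled, altimetry)[0, 1]
    nsce = 1.0 - np.sum((altimetry - modelled) ** 2) / np.sum((altimetry - altimetry.mean()) ** 2)
    print(f"Pearson r = {r:.2f}, NSCE = {nsce:.2f}")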

  13. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  14. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at the Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. For estimating the concentration, the experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using a theoretical approach to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focussed on the efficiencies for 41 Ar and 133 Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  15. Heat transfer to sub- and supercritical water flowing upward in a vertical tube at low mass fluxes: numerical analysis and experimental validation

    NARCIS (Netherlands)

    Odu, Samuel Obarinu; Koster, P.; van der Ham, Aloysius G.J.; van der Hoef, Martin Anton; Kersten, Sascha R.A.

    2016-01-01

    Heat transfer to supercritical water (SCW) flowing upward in a vertical heated tube at low mass fluxes (G ≤ 20 kg/m2 s) has been numerically investigated in COMSOL Multiphysics and validated with experimental data. The turbulence models, essential to describing local turbulence, in COMSOL have been

  16. Experimental evaluation of inner-vacancy level energies for comparison with theory

    International Nuclear Information System (INIS)

    Deslattes, R.D.; Kessler, E.G.

    1985-01-01

    This chapter deals with progress on the theoretical side in calculations of atomic inner-shell energy levels. In reaching what the authors consider to be the best available body of experimental data about inner-shell energy-level differences, three types of input are used: those lines which have been directly measured with high-resolution double-diffraction instruments; those obtained with high-resolution curved-crystal optics relative to gamma-ray standards, and those (low-energy) lines whose wavelength ratios with respect to directly measured X-ray lines have been taken from a very restricted set of earlier measurements. Application of X-ray absorption spectroscopy (XAS), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), appearance-potential spectroscopy (APS), and X-ray emission spectroscopy (XES) to the problem of energy-level difference determination and single-vacancy energy level determination are described

  17. Signal Validation: A Survey of Theoretical and Experimental Studies at the KFKI Atomic Energy Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.

    1996-07-01

    The aim of this survey paper is to collect the results of the theoretical and experimental work that has been done on early failure and change detection, signal/detector validation, parameter estimation and system identification problems in the Applied Reactor Physics Department of the KFKI-AEI. The present paper reports different applications of the theoretical methods using real and computer simulated data. The final goal is two-sided: 1) to better understand the mathematical/physical background of the applied methods and 2) to integrate the useful algorithms into a large, complex diagnostic software system. The software is under development, a preliminary version (called JEDI) has already been accomplished. (author)

  18. Health status in routine clinical practice: validity of the clinical COPD questionnaire at the individual patient level

    Directory of Open Access Journals (Sweden)

    de Vos Barbara

    2010-11-01

    Background: There is a growing interest in using health status or disease control questionnaires in routine clinical practice. However, the validity of most questionnaires is established using techniques developed for group-level validation. This study examines a new method, using patient interviews, to validate a short health status questionnaire, the Clinical COPD Questionnaire (CCQ), at the individual patient level. Methods: Patients with COPD who visited an outpatient clinic completed the CCQ before the consultation, and the specialist physician completed it after the consultation. After the consultation all patients had a semi-structured in-depth interview. The patients' CCQ scores were compared with those of the treating clinician, and with mean scores from 5 clinicians from a pool of 20 who scored the CCQ after reading the transcript of the in-depth interviews only. Agreement was assessed using Lin's concordance correlation coefficient (CCC) and Bland and Altman plots. Interviews with patients with low agreement were reviewed for possible explanations. Results: A total of 44 COPD patients (32 male, mean age 66 years, FEV1 45% of predicted) participated. Agreement between the patients' CCQ scores and those of the treating clinicians (CCC = 0.87) and the mean score of the reviewing clinicians (CCC = 0.86) was very high. No systematic error was detected. No explanation for individuals with low agreement was found. Conclusion: The validity of the CCQ at the individual patient level, as assessed by these methods, is good. Individual health status assessment with the CCQ is therefore sufficiently accurate to be used in routine clinical practice.
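
    Agreement in the study above is quantified with Lin's concordance correlation coefficient (CCC), which penalizes both poor correlation and systematic offset between two raters. A minimal sketch of the calculation is given below; the paired CCQ scores are invented for illustration.

    # Lin's concordance correlation coefficient (CCC) for two paired score series.
    # The patient and clinician CCQ totals below are hypothetical examples.
    import numpy as np

    patient = np.array([1.2, 2.4, 3.1, 0.8, 4.0, 2.9])     # patient CCQ totals
    clinician = np.array([1.4, 2.2, 3.3, 1.0, 3.8, 2.7])   # clinician CCQ totals

    mx, my = patient.mean(), clinician.mean()
    sx2, sy2 = patient.var(), clinician.var()              # population variances
    sxy = np.mean((patient - mx) * (clinician - my))       # covariance (same normalisation)
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
    print(f"CCC = {ccc:.3f}")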

  19. Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based

    Science.gov (United States)

    Anderson, Kevin

    2012-01-01

    Pulsars are a form of variable celestial source that have shown to be usable as aids for autonomous, deep space navigation. Particularly those sources emitting in the X-ray band are ideal for navigation due to smaller detector sizes. In this paper X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed frequency signal is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is used. The outputs of this phase estimation process were then looped through a digital phase-locked loop (DPLL) in order to reduce the errors and produce estimates of the doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons being emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used in order to generate test scenarios. Three different simulated detector trajectories were used to be tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one-dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time varying frequency. All of the constant frequency cases were processed using the MLE, and it was shown that they tracked the initial phase within 0.15% for the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in
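
    The constant-frequency case described above can be sketched compactly: photon arrival times follow a non-homogeneous Poisson process whose rate traces the pulse profile, and the initial phase is recovered by maximizing the Poisson log-likelihood over a phase grid. The profile shape, rates and frequency below are illustrative placeholders, not the Crab or B1821-24 parameters.

    # Toy phase estimation for a constant-frequency pulsar signal.
    # Photons are drawn from a non-homogeneous Poisson process by thinning,
    # and the initial phase is recovered by a grid maximum-likelihood search.
    import numpy as np

    rng = np.random.default_rng(0)
    f = 2.0                       # assumed pulse frequency, Hz
    lam_b, lam_s = 50.0, 200.0    # assumed background and source rate amplitudes, ph/s
    phi0 = 0.3                    # true initial phase to be recovered
    T = 50.0                      # observation length, s (integer number of cycles)

    def rate(t, phi):
        # simple raised-cosine pulse profile
        return lam_b + lam_s * 0.5 * (1.0 + np.cos(2 * np.pi * (f * t + phi)))

    # Thinning: draw candidates at the peak rate, keep each with probability rate/peak.
    lam_max = lam_b + lam_s
    n = rng.poisson(lam_max * T)
    cand = rng.uniform(0.0, T, n)
    toa = cand[rng.uniform(0.0, lam_max, n) < rate(cand, phi0)]

    # Grid maximum-likelihood estimate of the initial phase (the exposure term is
    # phase-independent here because T spans a whole number of pulse periods).
    grid = np.linspace(0.0, 1.0, 1000, endpoint=False)
    loglik = [np.sum(np.log(rate(toa, p))) for p in grid]
    print("estimated phase:", grid[int(np.argmax(loglik))])   # close to phi0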

  20. FINAL REPORT on Experimental Validation of Stratified Flow Phenomena, Graphite Oxidation, and Mitigation Strategies of Air Ingress Accidents

    Energy Technology Data Exchange (ETDEWEB)

    Chang H. Oh; Eung S. Kim; Hee C. NO; Nam Z. Cho

    2011-01-01

    The U.S. Department of Energy is performing research and development that focuses on key phenomena that are important during challenging scenarios that may occur in the Next Generation Nuclear Plant (NGNP)/Generation IV very high temperature reactor (VHTR). Phenomena Identification and Ranking studies to date have identified the air ingress event, following on the heels of a VHTR depressurization, as very important. Consequently, the development of advanced air ingress-related models and verification & validation are of very high priority for the NGNP Project. Following a loss of coolant and system depressurization incident, air ingress will occur through the break, leading to oxidation of the in-core graphite structure and fuel. This study indicates that depending on the location and the size of the pipe break, the air ingress phenomena are different. In an effort to estimate the proper safety margin, experimental data and tools, including accurate multidimensional thermal-hydraulic and reactor physics models, a burn-off model, and a fracture model are required. It will also require effective strategies to mitigate the effects of oxidation, eventually. This 3-year project (FY 2008–FY 2010) is focused on various issues related to the VHTR air-ingress accident, including (a) analytical and experimental study of air ingress caused by density-driven, stratified, countercurrent flow, (b) advanced graphite oxidation experiments, (c) experimental study of burn-off in the core bottom structures, (d) structural tests of the oxidized core bottom structures, (e) implementation of advanced models developed during the previous tasks into the GAMMA code, (f) full air ingress and oxidation mitigation analyses, (g) development of core neutronic models, (h) coupling of the core neutronic and thermal hydraulic models, and (i) verification and validation of the coupled models.

  1. Preliminary validation of RELAP5/Mod4.0 code for LBE cooled NACIE facility

    Energy Technology Data Exchange (ETDEWEB)

    Kumari, Indu; Khanna, Ashok, E-mail: akhanna@iitk.ac.in

    2017-04-01

    Highlights: • Detailed discussion of the thermo-physical properties of Lead Bismuth Eutectic incorporated in the code RELAP5/Mod4.0. • Benchmarking of LBE properties in RELAP5/Mod4.0 against literature. • NACIE facility for three different power levels (10.8, 21.7 and 32.5 kW) under natural circulation considered for benchmarking. • Preliminary validation of the LBE properties against experimental data. • NACIE facility for a power level of 22.5 kW considered for validation. - Abstract: The one-dimensional thermal hydraulic computer code RELAP5 was developed for thermal hydraulic studies of light water reactors as well as nuclear research reactors. The purpose of this work is to evaluate the code RELAP5/Mod4.0 for analysis of research reactors. This paper consists of three major sections. The first section presents detailed discussions of the thermo-physical properties of Lead Bismuth Eutectic (LBE) incorporated in the RELAP5/Mod4.0 code. In the second section, benchmarking of RELAP5/Mod4.0 has been done with the Natural Circulation Experimental (NACIE) facility in comparison with Barone’s simulations using RELAP5/Mod3.3. Three different power levels (10.8 kW, 21.7 kW and 32.5 kW) under natural circulation conditions are considered. Results obtained for LBE temperatures, temperature difference across the heat section, pin surface temperatures, mass flow rates and heat transfer coefficients in the heat section and heat exchanger agree with Barone’s simulation results within 7% average relative error. The third section presents validation of RELAP5/Mod4.0 against the experimental data of the NACIE facility obtained by Tarantino et al. (test number 21) at a power of 22.5 kW, comparing the profiles of temperatures, mass flow rate and velocity of LBE. Simulation and experimental results agree within 7% average relative error.
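
    The agreement figure quoted above can be reproduced with a one-line metric. The sketch below shows one plausible way to compute an average relative error between code predictions and measured values; the temperature values are purely illustrative and are not taken from the NACIE test data.

      import numpy as np

      def average_relative_error(simulated, measured):
          """Mean of |sim - exp| / |exp| over the compared points, in percent."""
          simulated = np.asarray(simulated, dtype=float)
          measured = np.asarray(measured, dtype=float)
          return 100.0 * np.mean(np.abs(simulated - measured) / np.abs(measured))

      # Hypothetical LBE temperatures (K): code prediction vs. facility data
      print(f"{average_relative_error([523.1, 541.8, 560.4], [519.0, 538.5, 565.2]):.1f}%")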

  2. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    Science.gov (United States)

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  3. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    Directory of Open Access Journals (Sweden)

    Guohai Liu

    2014-01-01

    Full Text Available A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  4. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend the code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. It is coded in Fortran 90 and has user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation of CHF predictions. In addition, it can potentially be used for best-estimate prediction of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady state and transient conditions were provided within the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle within the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar uncertainties but did not account for the nonlinear effects on the
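
    As a rough illustration of the Monte-Carlo route to a validation standard uncertainty, the sketch below propagates assumed input uncertainties through a stand-in response surface and combines the result with assumed numerical and measurement terms in the spirit of the ASME V&V 20 convention. The surrogate model, the uncertainty magnitudes and the variable names are all assumptions for the example and do not reproduce the MATRA calculation.

      import numpy as np

      rng = np.random.default_rng(42)

      def void_fraction_model(g, q, p):
          """Stand-in for a subchannel-code run: a hypothetical response surface
          in normalized mass flux g, heat flux q and pressure p."""
          return 0.45 + 0.02 * (q - 1.0) - 0.03 * (g - 1.0) - 0.01 * (p - 1.0)

      # Assumed relative standard uncertainties of the three inputs (nominal = 1.0)
      u_in = np.array([0.02, 0.03, 0.01])
      samples = 1.0 + u_in * rng.standard_normal((10_000, 3))
      outputs = void_fraction_model(samples[:, 0], samples[:, 1], samples[:, 2])

      u_input = outputs.std(ddof=1)          # contribution of propagated input uncertainty
      u_num, u_data = 0.005, 0.010           # assumed numerical and measurement terms
      u_val = np.sqrt(u_num**2 + u_input**2 + u_data**2)
      print(f"validation standard uncertainty ~ {100.0 * u_val:.1f}% void fraction")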

  5. Experimental validation of a three-dimensional linear system model for breast tomosynthesis

    International Nuclear Information System (INIS)

    Zhao Bo; Zhou Jun; Hu Yuehoung; Mertelmeier, Thomas; Ludwig, Jasmina; Zhao Wei

    2009-01-01

    A three-dimensional (3D) linear model for digital breast tomosynthesis (DBT) was developed to investigate the effects of different imaging system parameters on the reconstructed image quality. In the present work, experimental validation of the model was performed on a prototype DBT system equipped with an amorphous selenium (a-Se) digital mammography detector and filtered backprojection (FBP) reconstruction methods. The detector can be operated in either full resolution with 85 μm pixel size or 2x1 pixel binning mode to reduce acquisition time. Twenty-five projection images were acquired with a nominal angular range of ±20 deg. The images were reconstructed using a slice thickness of 1 mm with 0.085x0.085 mm in-plane pixel dimension. The imaging performance was characterized by spatial frequency-dependent parameters including a 3D noise power spectrum (NPS) and in-plane modulation transfer function (MTF). Scatter-free uniform x-ray images were acquired at four different exposure levels for noise analysis. An aluminum (Al) edge phantom with 0.2 mm thickness was imaged to measure the in-plane presampling MTF. The measured in-plane MTF and 3D NPS were both in good agreement with the model. The dependence of DBT image quality on reconstruction filters was investigated. It was found that the slice thickness (ST) filter, a Hanning window to limit the high-frequency components in the slice thickness direction, reduces noise aliasing and improves 3D DQE. An ACR phantom was imaged to investigate the effects of angular range and detector operational modes on reconstructed image quality. It was found that increasing the angular range improves the MTF at low frequencies, resulting in better detection of large-area, low-contrast mass lesions in the phantom. There is a trade-off between noise and resolution for pixel binning and full resolution modes, and the choice of detector mode will depend on radiation dose and the targeted lesion.
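
    The in-plane presampling MTF measurement described above follows the usual edge method: the edge-spread function is differentiated into a line-spread function whose Fourier magnitude, normalized at zero frequency, gives the MTF. The sketch below outlines that generic chain; the oversampled edge profile, the window choice and the sampling pitch are assumptions for illustration rather than the authors' exact procedure.

      import numpy as np

      def presampling_mtf(esf, sample_pitch_mm):
          """MTF estimate from an oversampled edge-spread function (ESF).

          esf             : 1D edge profile across the Al edge (arbitrary units);
                            in practice obtained by reprojecting a slightly
                            angled edge onto a sub-pixel grid
          sample_pitch_mm : bin width of the oversampled ESF in mm
          """
          lsf = np.gradient(np.asarray(esf, dtype=float))   # line-spread function
          lsf *= np.hanning(lsf.size)                       # damp noise at the tails
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]                                     # normalize to 1 at f = 0
          freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)  # cycles/mm
          return freqs, mtf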

  6. Experimental validation of a three-dimensional linear system model for breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Bo; Zhou Jun; Hu Yuehoung; Mertelmeier, Thomas; Ludwig, Jasmina; Zhao Wei [Department of Radiology, State University of New York at Stony Brook, L-4 120 Health Sciences Center, Stony Brook, New York 11794-8460 (United States); Siemens AG Healthcare, Henkestrasse 127, D-91052 Erlangen (Germany); Department of Radiology, State University of New York at Stony Brook, L-4 120 Health Sciences Center, Stony Brook, New York 11794-8460 (United States)

    2009-01-15

    A three-dimensional (3D) linear model for digital breast tomosynthesis (DBT) was developed to investigate the effects of different imaging system parameters on the reconstructed image quality. In the present work, experimental validation of the model was performed on a prototype DBT system equipped with an amorphous selenium (a-Se) digital mammography detector and filtered backprojection (FBP) reconstruction methods. The detector can be operated in either full resolution with 85 μm pixel size or 2x1 pixel binning mode to reduce acquisition time. Twenty-five projection images were acquired with a nominal angular range of ±20 deg. The images were reconstructed using a slice thickness of 1 mm with 0.085x0.085 mm in-plane pixel dimension. The imaging performance was characterized by spatial frequency-dependent parameters including a 3D noise power spectrum (NPS) and in-plane modulation transfer function (MTF). Scatter-free uniform x-ray images were acquired at four different exposure levels for noise analysis. An aluminum (Al) edge phantom with 0.2 mm thickness was imaged to measure the in-plane presampling MTF. The measured in-plane MTF and 3D NPS were both in good agreement with the model. The dependence of DBT image quality on reconstruction filters was investigated. It was found that the slice thickness (ST) filter, a Hanning window to limit the high-frequency components in the slice thickness direction, reduces noise aliasing and improves 3D DQE. An ACR phantom was imaged to investigate the effects of angular range and detector operational modes on reconstructed image quality. It was found that increasing the angular range improves the MTF at low frequencies, resulting in better detection of large-area, low-contrast mass lesions in the phantom. There is a trade-off between noise and resolution for pixel binning and full resolution modes, and the choice of detector mode will depend on radiation dose and the targeted lesion.

  7. Experimental benchmark and code validation for airfoils equipped with passive vortex generators

    International Nuclear Information System (INIS)

    Baldacchino, D; Ferreira, C; Florentie, L; Timmer, N; Van Zuijlen, A; Manolesos, M; Chaviaropoulos, T; Diakakis, K; Papadakis, G; Voutsinas, S; González Salcedo, Á; Aparicio, M; García, N R.; Sørensen, N N.; Troldborg, N

    2016-01-01

    Experimental results and complementary computations for airfoils with vortex generators are compared in this paper, as part of an effort within the AVATAR project to develop tools for wind turbine blade control devices. Measurements from two airfoils equipped with passive vortex generators, a 30% thick DU97W300 and an 18% thick NTUA T18, have been used for benchmarking several simulation tools. These tools span low-to-high complexity, ranging from engineering-level integral boundary layer tools to fully-resolved computational fluid dynamics codes. Results indicate that with appropriate calibration, engineering-type tools can capture the effects of vortex generators and outperform more complex tools. Fully resolved CFD comes at a much higher computational cost and does not necessarily capture the increased lift due to the VGs. However, given the limited experimental data available for calibration, high fidelity tools are still required for assessing the effect of vortex generators on airfoil performance. (paper)

  8. The Mitochondrial Protein Atlas: A Database of Experimentally Verified Information on the Human Mitochondrial Proteome.

    Science.gov (United States)

    Godin, Noa; Eichler, Jerry

    2017-09-01

    Given its central role in various biological systems, as well as its involvement in numerous pathologies, the mitochondrion is one of the best-studied organelles. However, although the mitochondrial genome has been extensively investigated, protein-level information remains partial, and in many cases, hypothetical. The Mitochondrial Protein Atlas (MPA; URL: lifeserv.bgu.ac.il/wb/jeichler/MPA ) is a database that provides a complete, manually curated inventory of only experimentally validated human mitochondrial proteins. The MPA presently contains 911 unique protein entries, each of which is associated with at least one experimentally validated and referenced mitochondrial localization. The MPA also contains experimentally validated and referenced information defining function, structure, involvement in pathologies, interactions with other MPA proteins, as well as the method(s) of analysis used in each instance. Connections to relevant external data sources are offered for each entry, including links to NCBI Gene, PubMed, and Protein Data Bank. The MPA offers a prototype for other information sources that allow for a distinction between what has been confirmed and what remains to be verified experimentally.

  9. CFD simulation and experimental validation of a GM type double inlet pulse tube refrigerator

    Science.gov (United States)

    Banjare, Y. P.; Sahoo, R. K.; Sarangi, S. K.

    2010-04-01

    Pulse tube refrigerators have the advantages of long life and low vibration over conventional cryocoolers, such as GM and Stirling coolers, because of the absence of moving parts at low temperature. This paper performs a three-dimensional computational fluid dynamic (CFD) simulation of a vertically aligned GM type double inlet pulse tube refrigerator (DIPTR) operating under a variety of thermal boundary conditions. A commercial computational fluid dynamics (CFD) software package, Fluent 6.1, is used to model the oscillating flow inside a pulse tube refrigerator. The simulation represents a fully coupled system operating in steady-periodic mode. The externally imposed boundary conditions are a sinusoidal pressure inlet, prescribed by a user defined function at one end of the tube, and constant temperature or heat flux boundaries at the external walls of the cold-end heat exchangers. The experimental method to evaluate the optimum parameters of a DIPTR is difficult. On the other hand, developing a computer code for CFD analysis is equally complex. The objectives of the present investigations are to ascertain the suitability of the CFD based commercial package Fluent for the study of energy and fluid flow in a DIPTR and to validate the CFD simulation results with available experimental data. The general results, such as the cool-down behaviour of the system, the phase relation between mass flow rate and pressure at the cold end, the temperature profile along the wall of the cooler and the refrigeration load, are presented for different boundary conditions of the system. The results confirm that CFD based Fluent simulations are capable of elucidating complex periodic processes in a DIPTR. The results also show that there is an excellent agreement between CFD simulation results and experimental results.

  10. Experimental validation of Villain's conjecture about magnetic ordering in quasi-1D helimagnets

    Energy Technology Data Exchange (ETDEWEB)

    Cinti, F., E-mail: fabio.cinti@fi.infn.i [CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Rettori, A. [CNISM and Department of Physics, University of Florence, 50019 Sesto Fiorentino (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Pini, M.G. [ISC-CNR, Via Madonna del Piano 10, I-50019 Sesto Fiorentino (Italy); Mariani, M.; Micotti, E. [Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Lascialfari, A. [Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Institute of General Physiology and Biological Chemistry, University of Milano, Via Trentacoste 2, I-20134 Milano (Italy); CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Papinutto, N. [CIMeC, University of Trento, Via delle Regole, 101 38060 Mattarello (Italy); Department of Physics A. Volta and CNR-INFM, University of Pavia, Via Bassi 6, I-27100 Pavia (Italy); Amato, A. [Paul Scherrer Institute, CH-5232 Villingen PSI (Switzerland); Caneschi, A.; Gatteschi, D. [INSTM R.U. Firenze and Department of Chemistry, University of Florence, Via della Lastruccia 3, I-50019 Sesto Fiorentino (Italy); Affronte, M. [CNR-INFM S3 National Research Center, I-41100 Modena (Italy); Department of Physics, University of Modena and Reggio Emilia Via Campi 213/A, I-41100 Modena (Italy)

    2010-05-15

    Low-temperature magnetic susceptibility, zero-field muon spin resonance and specific heat measurements have been performed in the quasi-one-dimensional (1D) molecular helimagnetic compound Gd(hfac)3NITEt. The specific heat presents two anomalies at T0 = 2.19(2) K and TN = 1.88(2) K, while susceptibility and zero-field muon spin resonance show anomalies only at TN = 1.88(2) K. The results suggest an experimental validation of Villain's conjecture of a two-step magnetic ordering in quasi-1D XY helimagnets: the paramagnetic phase and the helical spin solid phases are separated by a chiral spin liquid, where translational invariance is broken without violation of rotational invariance.

  11. Model development and experimental validation of capnophilic lactic fermentation and hydrogen synthesis by Thermotoga neapolitana.

    Science.gov (United States)

    Pradhan, Nirakar; Dipasquale, Laura; d'Ippolito, Giuliana; Fontana, Angelo; Panico, Antonio; Pirozzi, Francesco; Lens, Piet N L; Esposito, Giovanni

    2016-08-01

    The aim of the present study was to develop a kinetic model for a recently proposed unique and novel metabolic process called the capnophilic (CO2-requiring) lactic fermentation (CLF) pathway in Thermotoga neapolitana. The model was based on Monod kinetics and the mathematical expressions were developed to enable the simulation of biomass growth, substrate consumption and product formation. The calibrated kinetic parameters, namely the maximum specific uptake rate (k), semi-saturation constant (kS), biomass yield coefficient (Y) and endogenous decay rate (kd), were 1.30 h⁻¹, 1.42 g/L, 0.1195 and 0.0205 h⁻¹, respectively. A high correlation (>0.98) was obtained between the experimental data and model predictions for both model validation and cross validation processes. An increase of the lactate production in the range of 40-80% was obtained through the CLF pathway compared to the classic dark fermentation model. The proposed kinetic model is the first mechanistically based model for the CLF pathway. This model provides useful information to improve the knowledge about how acetate and CO2 are recycled back by Thermotoga neapolitana to produce lactate without compromising the overall hydrogen yield. Copyright © 2016 Elsevier Ltd. All rights reserved.
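
    Using the calibrated parameter values quoted above, a minimal Monod growth-and-uptake model can be integrated directly. The sketch below is one common way to write such a model (biomass and substrate only); the initial conditions are invented for illustration, and the paper's full model additionally tracks lactate, acetate, hydrogen and CO2, which are omitted here.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Calibrated parameters reported in the abstract
      k, K_S, Y, k_d = 1.30, 1.42, 0.1195, 0.0205   # h^-1, g/L, -, h^-1

      def monod(t, y):
          """y = [X, S]: biomass and substrate concentrations in g/L."""
          X, S = y
          uptake = k * S / (K_S + S) * X      # specific uptake rate times biomass
          dX = Y * uptake - k_d * X           # growth minus endogenous decay
          dS = -uptake
          return [dX, dS]

      # Hypothetical initial conditions: 0.05 g/L biomass, 5.0 g/L substrate, 48 h run
      sol = solve_ivp(monod, (0.0, 48.0), [0.05, 5.0], max_step=0.1)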

  12. ATLAS Level-1 Topological Trigger : Commissioning and Validation in Run 2

    CERN Document Server

    AUTHOR|(SzGeCERN)788741; The ATLAS collaboration; Hong, Tae Min

    2017-01-01

    The ATLAS experiment has recently commissioned a new hardware component of its first-level trigger: the topological processor (L1Topo). This innovative system, using state-of-the-art FPGA processors, selects events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Since the first-level trigger is a synchronous pipelined system, such requirements are applied within a latency of 200ns. We will present the first results from data recorded using the L1Topo trigger; these demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-$P_{T}$ leptons, including $H\\to{}\\tau{}\\tau{}$ and $J/\\Psi\\to{}\\mu{}\\mu{}$. In addition, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize the performance of this new trigger system. To reach ...

  13. Use of experimental design for the purge-and-trap-gas chromatography-mass spectrometry determination of methyl tert.-butyl ether, tert.-butyl alcohol and BTEX in groundwater at trace level.

    Science.gov (United States)

    Bianchi, F; Careri, M; Marengo, E; Musci, M

    2002-10-25

    An efficient method for the simultaneous determination of methyl tert.-butyl ether, tert.-butyl alcohol, benzene, toluene, ethylbenzene and xylene isomers in groundwater by purge-and-trap-gas chromatography-mass spectrometry was developed and validated. Experimental design was used to investigate the effects of temperature of extraction, time of extraction and percentage of salt added to the water samples. Regression models and desirability functions were applied to find the experimental conditions providing the highest global extraction yield. Validation was carried out in terms of limits of detection (LOD), limits of quantitation (LOQ), linearity and precision. LOD values ranging from 2.6 to 23 ng l⁻¹ were achieved, whereas linearity was statistically verified over two orders of magnitude for each compound. Precision was evaluated by testing two concentration levels. Good results were obtained both in terms of intra-day repeatability and intermediate precision: RSD% lower than 4.5% at the highest concentration and lower than 13% at the lowest one were calculated for intra-day repeatability. A groundwater sample suspected of contamination by leaking underground petroleum storage tanks was analysed and some of the analytes were detected and quantitated.
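
    The optimization step described above (regression models plus desirability functions over extraction temperature, extraction time and salt content) can be sketched generically. The snippet below evaluates hypothetical quadratic response surfaces on a grid of coded factor levels and picks the setting with the best overall Derringer-type desirability; the coefficients and desirability limits are invented for illustration and are not the fitted models of the paper.

      import numpy as np
      from itertools import product

      def desirability(y, low, high):
          """Larger-is-better desirability, rising linearly from low to high."""
          return float(np.clip((y - low) / (high - low), 0.0, 1.0))

      def response(x, b):
          """Hypothetical quadratic response surface in the coded factors
          x = (extraction temperature, extraction time, % salt)."""
          x, b = np.asarray(x, float), np.asarray(b, float)
          return b[0] + b[1:4] @ x + b[4:7] @ x**2

      models = {"MTBE": [80, 5, 3, 6, -4, -2, -1], "benzene": [75, 6, 2, 4, -5, -1, -2]}

      grid = product(np.linspace(-1, 1, 11), repeat=3)
      best = max(grid, key=lambda x: np.prod(
          [desirability(response(x, b), 50, 100) for b in models.values()]) ** (1 / len(models)))
      print("coded optimum (T, t, %salt):", best)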

  14. DMFC anode polarization: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energetica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-01-03

    Anode two-phase flow has an important influence on DMFC performance and methanol crossover. In order to elucidate the influence of two-phase flow on anode performance, anode polarization is investigated in this work by combining experimental and modelling approaches. A systematic experimental analysis of the influence of operating conditions on anode polarization is presented. Hysteresis due to operating conditions is observed; experimental results suggest that it arises from methanol accumulation and has to be considered in evaluating DMFC performance and measurement reproducibility. A model of DMFC anode polarization is presented and utilised as a tool to investigate anode two-phase flow. The proposed analysis allows a confident interpretation of the main phenomena involved. In particular, it confirms that methanol electro-oxidation kinetics is weakly dependent on methanol concentration and that methanol transport in the gas phase makes an important contribution to anode feeding. Moreover, it emphasises the possibility of optimising the anode flow rate in order to improve DMFC performance and reduce methanol crossover. (author)

  15. Experimental validation of waveform relaxation technique for power ...

    Indian Academy of Sciences (India)

    Two systems are considered: a HVDC controller tested with a detailed model of the converters, and a TCSC based damping controller tested with a low frequency model of a power system. The results are validated with those obtained using simulated models of the controllers. We also present results of an experiment in ...

  16. Upgrade of the Gas Flow Control System of the Resistive Current Leads of the LHC Inner Triplet Magnets: Simulation and Experimental Validation

    CERN Document Server

    Perin, A; Casas-Cubillos, J; Pezzetti, M

    2014-01-01

    The 600 A and 120 A circuits of the inner triplet magnets of the Large Hadron Collider are powered by resistive gas cooled current leads. The current solution for controlling the gas flow of these leads has shown severe operability limitations. In order to allow a more precise and more reliable control of the cooling gas flow, new flowmeters will be installed during the first long shutdown of the LHC. Because of the high level of radiation in the area next to the current leads, the flowmeters will be installed in shielded areas located up to 50 m away from the current leads. Since the control valves are located next to the current leads, this configuration results in long piping between the valves and the flowmeters. In order to determine its dynamic behaviour, the proposed system was simulated with a numerical model and validated against experimental measurements performed on a dedicated test bench.

  17. Upgrade of the gas flow control system of the resistive current leads of the LHC inner triplet magnets: Simulation and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Perin, A.; Casas-Cubillos, J.; Pezzetti, M. [CERN, CH-1211 Geneva 23 (Switzerland); Almeida, M. [Universidade Federal de Minas Gerais, 31270-901 Belo Horizonte (Brazil)

    2014-01-29

    The 600 A and 120 A circuits of the inner triplet magnets of the Large Hadron Collider are powered by resistive gas cooled current leads. The current solution for controlling the gas flow of these leads has shown severe operability limitations. In order to allow a more precise and more reliable control of the cooling gas flow, new flowmeters will be installed during the first long shutdown of the LHC. Because of the high level of radiation in the area next to the current leads, the flowmeters will be installed in shielded areas located up to 50 m away from the current leads. Since the control valves are located next to the current leads, this configuration results in long piping between the valves and the flowmeters. In order to determine its dynamic behaviour, the proposed system was simulated with a numerical model and validated against experimental measurements performed on a dedicated test bench.

  18. A heat pump driven and hollow fiber membrane-based liquid desiccant air dehumidification system: Modeling and experimental validation

    International Nuclear Information System (INIS)

    Zhang, Li-Zhi; Zhang, Ning

    2014-01-01

    A compression heat pump driven and membrane-based liquid desiccant air dehumidification system is presented. The dehumidifier and the regenerator are made of two hollow fiber membrane bundles packed in two shells. Water vapor can permeate through these membranes effectively, while the liquid desiccant droplets are prevented from cross-over. Simultaneous heating and cooling of the salt solution are realized with a heat pump system to improve energy efficiency. In this research, the system is built and a complete model of it is developed. Heat and mass transfer processes in the membrane modules, as well as in the evaporator, the condenser, and other key components, are modeled in detail. The whole model is validated by experiment. The performances in terms of SDP (specific dehumidification power), dehumidification efficiency, EER (energy efficiency ratio) of the heat pump, and COP (coefficient of performance) of the system are investigated numerically and experimentally. The results show that the model can predict the system accurately. The dehumidification capabilities and the energy efficiencies of the system are high. Further, it performs well even under the harsh hot and humid South China weather conditions. - Highlights: • A membrane-based and heat pump driven air dehumidification system is proposed. • A real experimental set-up is built and used to validate the model for the whole system. • Performance under design and varying operation conditions is investigated. • The system performs well even under harsh hot and humid conditions
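
    The performance indices named above can be illustrated with generic definitions. The sketch below uses common textbook formulations (moisture removal rate, an approach-type dehumidification efficiency, and a latent COP); the exact definitions in the paper may differ, and the operating values in the example call are invented.

      H_FG = 2.45e6   # approximate latent heat of vaporization of water, J/kg

      def dehumidifier_metrics(m_air, w_in, w_out, w_eq, p_el):
          """Generic indices for a liquid-desiccant dehumidifier.

          m_air : dry-air mass flow rate, kg/s
          w_in, w_out : inlet/outlet air humidity ratios, kg/kg
          w_eq  : humidity ratio in equilibrium with the desiccant, kg/kg
          p_el  : electric power drawn by the system, W
          """
          removal = m_air * (w_in - w_out)             # kg/s of water removed
          sdp = removal * 3600.0 / (p_el / 1000.0)     # kg of water per hour per kW
          efficiency = (w_in - w_out) / (w_in - w_eq)  # approach to equilibrium
          cop_latent = removal * H_FG / p_el           # latent cooling per unit power
          return sdp, efficiency, cop_latent

      print(dehumidifier_metrics(0.12, 0.020, 0.012, 0.008, 450.0))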

  19. Experimental validation of plant peroxisomal targeting prediction algorithms by systematic comparison of in vivo import efficiency and in vitro PTS1 binding affinity.

    Science.gov (United States)

    Skoulding, Nicola S; Chowdhary, Gopal; Deus, Mara J; Baker, Alison; Reumann, Sigrun; Warriner, Stuart L

    2015-03-13

    Most peroxisomal matrix proteins possess a C-terminal targeting signal type 1 (PTS1). Accurate prediction of functional PTS1 sequences and their relative strength by computational methods is essential for determination of peroxisomal proteomes in silico but has proved challenging due to high levels of sequence variability of non-canonical targeting signals, particularly in higher plants, and low levels of availability of experimentally validated non-canonical examples. In this study, in silico predictions were compared with in vivo targeting analyses and in vitro thermodynamic binding of mutated variants within the context of one model targeting sequence. There was broad agreement between the methods for entire PTS1 domains and position-specific single amino acid residues, including residues upstream of the PTS1 tripeptide. The hierarchy Leu>Met>Ile>Val at the C-terminal position was determined for all methods but both experimental approaches suggest that Tyr is underweighted in the prediction algorithm due to the absence of this residue in the positive training dataset. A combination of methods better defines the score range that discriminates a functional PTS1. In vitro binding to the PEX5 receptor could discriminate among strong targeting signals while in vivo targeting assays were more sensitive, allowing detection of weak functional import signals that were below the limit of detection in the binding assay. Together, the data provide a comprehensive assessment of the factors driving PTS1 efficacy and provide a framework for the more quantitative assessment of the protein import pathway in higher plants. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Magnetic Decoupling Design and Experimental Validation of a Radial-Radial Flux Compound-Structure Permanent-Magnet Synchronous Machine for HEVs

    Directory of Open Access Journals (Sweden)

    Zhiyi Song

    2012-10-01

    Full Text Available The radial-radial flux compound-structure permanent-magnet synchronous machine (CS-PMSM), integrated by two concentrically arranged permanent-magnet electric machines, is an electromagnetic power-splitting device for hybrid electric vehicles (HEVs). As the two electric machines share a rotor as a common structural and magnetic part, their magnetic paths are coupled, leading to possible mutual magnetic-field interference and complex control. In this paper, a design method to ensure magnetic decoupling with minimum yoke thickness of the common rotor is investigated. A prototype machine is designed based on the proposed method, and the feasibility of magnetic decoupling and independent control is validated by experimental tests of mutual influence. The CS-PMSM is tested with a designed driving cycle, and its functions as starter motor, as generator and as a means to help the internal combustion engine (ICE) operate at optimum efficiency are validated.

  1. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    International Nuclear Information System (INIS)

    Hernandez Gonzalez, Francisco Alberto

    2016-01-01

    -situ effective thermal conductivity measurements of the pebble bed at room temperature by hot wire method. Steady state runs at 5 heating power levels encompassing the highest heat generation to be expected in the BU and relevant transient pulses have been performed. The 2D thermal map of the pebble bed at any power level has revealed a mostly symmetric distribution and no significant differences could be observed between the temperature read on the top and bottom surfaces at the interface layer between the pebble bed boundary and the test box of PREMUX. As a provision of modeling tools, two complementary approaches have been developed, aiming at giving a comprehensive modeling tool for prediction and validation purposes. The first is a deterministic, simplified thermo-mechanical model implemented in the commercial finite element code ANSYS. This model represents basic phenomena in the pebble beds, namely nonlinear elasticity, Drucker-Prager Cap plasticity, a non-associative flow rule and an isotropic hardening law. Preliminary validation of the model with the available literature on uniaxial compression tests comparing the axial compression stress against pebble bed strain at different temperatures has shown a good agreement (root mean square errors <10%). The application of the model to PREMUX has shown a good general agreement as well with the temperature distribution dataset obtained during the experimental campaign with PREMUX. The predicted peak hydrostatic pressures are about 2.1 MPa and are located around the central heaters and thermocouples, while the maximum values for the bulk of the pebble bed are about 1.4 MPa. The second modeling approach is based on a probabilistic finite element method, which takes into account the inherent uncertainties of the model's input parameters and permits running a stochastic sensitivity analysis to obtain statistical information about the model outputs. This approach has been applied to a thermal model of PREMUX developed with

  2. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Gonzalez, Francisco Alberto

    2016-10-14

    low intrusion has been confirmed by in-situ effective thermal conductivity measurements of the pebble bed at room temperature by hot wire method. Steady state runs at 5 heating power levels encompassing the highest heat generation to be expected in the BU and relevant transient pulses have been performed. The 2D thermal map of the pebble bed at any power level has revealed a mostly symmetric distribution and no significant differences could be observed between the temperature read on the top and bottom surfaces at the interface layer between the pebble bed boundary and the test box of PREMUX. As a provision of modeling tools, two complementary approaches have been developed, aiming at giving a comprehensive modeling tool for prediction and validation purposes. The first is a deterministic, simplified thermo-mechanical model implemented in the commercial finite element code ANSYS. This model represents basic phenomena in the pebble beds, namely nonlinear elasticity, Drucker-Prager Cap plasticity, a non-associative flow rule and an isotropic hardening law. Preliminary validation of the model with the available literature on uniaxial compression tests comparing the axial compression stress against pebble bed strain at different temperatures has shown a good agreement (root mean square errors <10%). The application of the model to PREMUX has shown a good general agreement as well with the temperature distribution dataset obtained during the experimental campaign with PREMUX. The predicted peak hydrostatic pressures are about 2.1 MPa and are located around the central heaters and thermocouples, while the maximum values for the bulk of the pebble bed are about 1.4 MPa. The second modeling approach is based on a probabilistic finite element method, which takes into account the inherent uncertainties of the model's input parameters and permits running a stochastic sensitivity analysis to obtain statistical information about the model outputs. This approach has been

  3. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.-M.

    2008-01-01

    CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The paper reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments on a pipe with a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility, in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of other non

  4. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.M.

    2007-01-01

    CFD code validation requires experimental data that characterize distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The presentation reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments on a pipe with a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility, in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of

  5. Experimental tooth clenching. A model for studying mechanisms of muscle pain.

    Science.gov (United States)

    Dawson, Andreas

    2013-01-01

    The overall goal of this thesis was to broaden knowledge of pain mechanisms in myofascial temporomandibular disorders (M-TMD). The specific aims were to: Develop a quality assessment tool for experimental bruxism studies (study I). Investigate proprioceptive allodynia after experimental tooth clenching exercises (study II). Evaluate the release of serotonin (5-HT), glutamate, pyruvate, and lactate in healthy subjects (study III) and in patients with M-TMD (study IV), after experimental tooth clenching exercises. In (I), tool development comprised 5 steps: (i) preliminary decisions, (ii) item generation, (iii) face-validity assessment, (iv) reliability and discriminative validity testing, and (v) instrument refinement. After preliminary decisions and a literature review, a list of 52 items to be considered for inclusion in the tool was generated. Eleven experts were invited to participate on the Delphi panel, of which 10 agreed. After four Delphi rounds, 8 items remained and were included in the Quality Assessment Tool for Experimental Bruxism Studies (Qu-ATEBS). Inter-observer reliability was acceptable (k = 0.77), and discriminative validity high (phi coefficient 0.79; P < 0.01). During refinement, 1 item was removed; the final tool comprised 7 items. In (II), 16 healthy females participated in three 60-min sessions, each with 24- and 48-h follow-ups. Participants were randomly assigned to a repetitive experimental tooth clenching task with a clenching level of 10%, 20%, or 40% of maximal voluntary clenching force (MVCF). Pain intensity, fatigue, perceived intensity of vibration (PIV), perceived discomfort (PD), and pressure pain threshold (PPT) were measured throughout. A significant increase in pain intensity and fatigue but not in PD was observed over time. A significant increase in PIV was only observed at 40 min, and PPT decreased significantly over time at 50 and 60 min compared to baseline. In (III), 30 healthy subjects (16 females, and 14 males
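
    The two agreement statistics quoted for Qu-ATEBS (kappa for inter-observer reliability and the phi coefficient for discriminative validity) are standard quantities that can be computed from small contingency tables. The sketch below shows generic implementations; the counts in the example call are invented for illustration and are not the study's rating data.

      import numpy as np

      def cohens_kappa(table):
          """Cohen's kappa for a square inter-rater agreement table of counts."""
          table = np.asarray(table, dtype=float)
          n = table.sum()
          p_obs = np.trace(table) / n                                     # observed agreement
          p_exp = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2    # chance agreement
          return (p_obs - p_exp) / (1.0 - p_exp)

      def phi_coefficient(a, b, c, d):
          """Phi (mean square contingency) for a 2x2 table [[a, b], [c, d]]."""
          return (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

      # Hypothetical counts only
      print(cohens_kappa([[40, 5], [4, 31]]), phi_coefficient(18, 2, 3, 17))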

  6. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)]; Camilla, S. [Departamento de Física, Universidad Tecnológica Metropolitana (Chile)]

    2016-07-07

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing ¹³⁷Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 and 40 mm in height and 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg as of July 2006. The results obtained were better for the sources with 29 mm radius, showing relative bias lower than 5%, and for the sources with 10 mm height, showing relative bias lower than 6%. In comparison with the results obtained in the original work presenting the method, the majority of these results show excellent agreement.

  7. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    Energy Technology Data Exchange (ETDEWEB)

    Wingefors, S.; Andersson, J.; Norrby, S. [Swedish Nuclear Power lnspectorate, Stockholm (Sweden). Office of Nuclear Waste Safety; Eisenberg, N.A.; Lee, M.P.; Federline, M.V. [U.S. Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Material Safety and Safeguards; Sagar, B.; Wittmeyer, G.W. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and long time periods for which the models will make estimates of performance, the usual avenue for model validation- that is, comparison of model estimates with actual data at the space-time scales of interest- is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white Paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate-SKI. This document should not be viewed as, and is not intended to be formal guidance or as a staff position on this matter. Rather, based on a review of the literature and previous

  8. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    International Nuclear Information System (INIS)

    Wingefors, S.; Andersson, J.; Norrby, S.

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and long time periods for which the models will make estimates of performance, the usual avenue for model validation- that is, comparison of model estimates with actual data at the space-time scales of interest- is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white Paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate-SKI. This document should not be viewed as, and is not intended to be formal guidance or as a staff position on this matter. Rather, based on a review of the literature and previous

  9. Computational study of a low head draft tube and validation with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Henau, V De; Payette, F A; Sabourin, M [Alstom Power Systems, Hydro 1350 chemin Saint-Roch, Sorel-Tracy (Quebec), J3R 5P9 (Canada); Deschenes, C; Gagnon, J M; Gouin, P, E-mail: vincent.dehenau@power.alstom.co [Hydraulic Machinery Laboratory, Laval University 1065 ave. de la Medecine, Quebec (Canada)

    2010-08-15

    The objective of this paper is to investigate methodologies to improve the reliability of CFD analysis of low head turbine draft tubes. When only the draft tube performance is investigated, the study indicates that draft tube only simulations with an adequate treatment of the inlet boundary conditions for velocity and turbulence are a good alternative to rotor/stator (stage) simulations. The definition of the inlet velocity in the near wall regions is critical to get an agreement between the stage and draft tube only solutions. An average turbulent kinetic energy intensity level and average turbulent kinetic energy dissipation length scale are sufficient as turbulence inlet conditions as long as these averages are coherent with the stage solution. Comparisons of the rotor/stator simulation results to the experimental data highlight some discrepancies between the predicted draft tube flow and the experimental observations.
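
    The inlet turbulence quantities mentioned above are commonly converted into k and epsilon values using textbook relations (k from the turbulence intensity and mean velocity, epsilon from k and the dissipation length scale). The sketch below shows that generic conversion; it is not the paper's averaging procedure, and the velocity, intensity and length-scale values in the example are invented.

      C_MU = 0.09   # standard k-epsilon model constant

      def k_epsilon_inlet(mean_velocity, intensity, length_scale):
          """Convert a turbulence intensity and dissipation length scale into
          k (m^2/s^2) and epsilon (m^2/s^3) inlet values for a RANS run."""
          k = 1.5 * (intensity * mean_velocity) ** 2
          eps = C_MU ** 0.75 * k ** 1.5 / length_scale
          return k, eps

      # Hypothetical draft-tube inlet: 5 m/s bulk velocity, 8% intensity, 0.05 m scale
      print(k_epsilon_inlet(5.0, 0.08, 0.05))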

  10. Development and Validation of LC Method for the Determination of Famciclovir in Pharmaceutical Formulation Using an Experimental Design

    Directory of Open Access Journals (Sweden)

    Srinivas Vishnumulaka

    2008-01-01

    Full Text Available A rapid and sensitive RP-HPLC method with UV detection (242 nm) for routine analysis of famciclovir in pharmaceutical formulations was developed. Chromatography was performed with a mobile phase containing a mixture of methanol and phosphate buffer (50:50, v/v) at a flow rate of 1.0 mL min−1. Quantitation was accomplished with the internal standard method. The procedure was validated for linearity (correlation coefficient = 0.9999), accuracy, robustness and intermediate precision. Experimental design was used for validation of robustness and intermediate precision. To test robustness, three factors were considered: percentage (v/v) of methanol in the mobile phase, flow rate and pH; the flow rate, the percentage of organic modifier and the pH all have a considerable effect on the response. For the intermediate precision measure the variables considered were: analyst, equipment and number of days. The RSD value (0.86%, n=24) indicated an acceptable precision of the analytical method. The proposed method was simple, sensitive, precise, accurate and quick, and is useful for routine quality control.

  11. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    Science.gov (United States)

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affect hydrodynamics and hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. Sludge settling and rheology were found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).
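
    The bulk-mixing quantity referred to above, the residence time distribution, is typically summarized by its first two moments. The sketch below computes the mean residence time and the dimensionless variance from a pulse-tracer response; the tracer curve in the example is an invented exponential decay (an ideal well-mixed tank), not the validation data of the study.

      import numpy as np

      def rtd_moments(t, c):
          """Mean residence time and dimensionless variance from a tracer curve c(t)."""
          t, c = np.asarray(t, float), np.asarray(c, float)
          e = c / np.trapz(c, t)                      # residence time distribution E(t)
          t_mean = np.trapz(t * e, t)
          variance = np.trapz((t - t_mean) ** 2 * e, t)
          return t_mean, variance / t_mean**2         # -> 1.0 for an ideal stirred tank

      # Hypothetical tracer response sampled every minute over two hours
      t = np.linspace(0.0, 7200.0, 121)
      print(rtd_moments(t, np.exp(-t / 1800.0)))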

  12. LES Modeling with Experimental Validation of a Compound Channel having Converging Floodplain

    Science.gov (United States)

    Mohanta, Abinash; Patra, K. C.

    2018-04-01

    Computational fluid dynamics (CFD) is often used to predict flow structures in developing areas of a flow field for the determination of the velocity field, pressure, shear stresses, the effect of turbulence and other quantities. A two-phase, three-dimensional CFD model together with the large eddy simulation (LES) model is used to model the turbulence. This study aims to validate CFD simulations of free surface (open channel) flow using the volume of fluid method by comparing them with data observed in the hydraulics laboratory of the National Institute of Technology, Rourkela. The finite volume method with a dynamic sub-grid scale model was used for a constant aspect ratio and convergence condition. The results show that secondary flow and centrifugal force influence the flow pattern and show good agreement with experimental data. Within this paper over-bank flows have been numerically simulated using LES in order to accurately predict open channel flow behavior. The LES results are shown to accurately predict the flow features, specifically the distribution of secondary circulations, both for in-bank channels and over-bank channels at varying depth and width ratios in symmetrically converging flood plain compound sections.

  13. External gear pumps operating with non-Newtonian fluids: Modelling and experimental validation

    Science.gov (United States)

    Rituraj, Fnu; Vacca, Andrea

    2018-06-01

    External Gear Pumps are used in various industries to pump non-Newtonian viscoelastic fluids like plastics, paints, inks, etc. For both design and analysis purposes, it is often of interest to understand the features of the displacing action realized by the meshing of the gears and to describe the behavior of the leakages for this kind of pump. However, very limited work can be found in the literature about methodologies suitable for modelling such phenomena. This article describes the technique of modelling external gear pumps that operate with non-Newtonian fluids. In particular, it explains how the displacing action of the unit can be modelled using a lumped parameter approach which involves dividing the fluid domain into several control volumes and internal flow connections. This work is built upon the HYGESim simulation tool, conceived by the authors’ research team in the last decade, which is extended here for the first time to the simulation of non-Newtonian fluids. The article also describes several comparisons between simulation results and experimental data obtained from numerous experiments performed for validation of the presented methodology. Finally, operation of the external gear pump with fluids having different viscosity characteristics is discussed.

  14. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; and evaluation of the level of achievement of data quality objectives, based in part on PARCC parameter analysis and the expected applications of the data. A program utilizing a matrix association of required levels of validation effort and analytical levels versus applications of these environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here.

  15. Development and experimental validation of a thermoelectric test bench for laboratory lessons

    Directory of Open Access Journals (Sweden)

    Antonio Rodríguez

    2013-12-01

    Full Text Available The refrigeration process reduces the temperature of a space or a given volume, while the power generation process employs a source of thermal energy to generate electrical power. Because of the importance of these two processes, training of engineers in this area is of great interest. Engineering courses normally cover vapor compression and absorption refrigeration, and power generation systems such as gas turbines and steam turbines. Another type of cooling and generation, less studied within the engineering curriculum but of great interest, is cooling and thermal generation based on the Peltier and Seebeck effects. The theoretical concepts are useful, but students have difficulty understanding the physical meaning of their possible applications. Providing students with tools to test and apply the theory in real applications leads to a better understanding of the subject. Engineers must have strong theoretical, computational and also experimental skills. A prototype test bench has been built and experimentally validated to perform practical lessons on thermoelectric generation and refrigeration. Using this prototype, students learn the most effective ways of cooling and thermal power generation as well as basic concepts associated with thermoelectricity. It has been shown that students learn the process of data acquisition and the technology used in thermoelectric devices. These practical lessons are implemented for a group of 60 students in the subject of Thermodynamics included in the Degree in Engineering in Industrial Technologies of the Public University of Navarra.

  Experimental Studies for the Evaluation of Non-Ionizing Radiation Levels

    International Nuclear Information System (INIS)

    Nasr, A.; Ashour, M.

    2008-01-01

    This article concerns characteristic studies of non-ionizing (microwave) radiation. The power density levels, frequency ranges, modulation types, and the Fast Fourier Transform (FFT) are discussed. The experimental data were collected from the Egyptian Atomic Energy Authority (EAEA) locations in Nasr City and Anshas. The study was carried out with a spectrum analyzer (SA) system, which comprised a radio-frequency coaxial cable and a horn antenna with height holders. The horn antenna was adjusted to scan all directions in order to investigate the signal strength. From this study, we obtain two main non-ionizing signals at center frequencies of 900 and 1800 MHz, which are used by mobile communications networks. During the silence state, the measured maximum power density levels for the two frequencies are 0.553 μW/cm² and 0.0191 μW/cm², respectively, while during the alarm (ringing) state the measured maximum power densities are 98.67 μW/cm² and 2.961 μW/cm², respectively. One can notice that the power densities are multiplied by 178 and 155 times for the same frequencies, in that order. Moreover, these non-ionizing signals are analyzed theoretically and experimentally by utilizing FFT functions to clarify the amplitude modulation (AM) ratios and voltage strengths of these signals. Furthermore, the occupied bandwidth (OBW) ratio and the deviation from the center frequency of the channel (δFc) are clarified.
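
    The FFT-based quantities mentioned above (AM ratio, occupied bandwidth) can be illustrated with a generic spectral calculation. The following sketch estimates the occupied bandwidth of a sampled, amplitude-modulated test signal from its FFT power spectrum; it is an illustration only, and the signal, sample rate and 99% power fraction are assumptions, not values from the EAEA measurements.

```python
import numpy as np

def occupied_bandwidth(x, fs, fraction=0.99):
    """Frequency span containing `fraction` of the total power of signal x."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    csum = np.cumsum(spec) / np.sum(spec)
    lo = freqs[np.searchsorted(csum, (1.0 - fraction) / 2.0)]
    hi = freqs[np.searchsorted(csum, 1.0 - (1.0 - fraction) / 2.0)]
    return lo, hi

# Invented AM test signal: 1 kHz carrier, 50 Hz modulation, 30% modulation depth
fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = (1.0 + 0.3 * np.sin(2 * np.pi * 50.0 * t)) * np.sin(2 * np.pi * 1_000.0 * t)
print(occupied_bandwidth(x, fs))   # ~ (950.0, 1050.0) Hz
```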

  16. Evaluation of thermophysical properties of Al–Sn–Si alloys based on computational thermodynamics and validation by numerical and experimental simulation of solidification

    International Nuclear Information System (INIS)

    Bertelli, Felipe; Cheung, Noé; Ferreira, Ivaldo L.; Garcia, Amauri

    2016-01-01

    Highlights: • A numerical routine coupled to computational thermodynamics software is proposed to calculate thermophysical properties. • The approach encompasses numerical and experimental simulation of solidification. • Al–Sn–Si alloy thermophysical properties are validated by experimental/numerical cooling rate results. - Abstract: Modelling of manufacturing processes of multicomponent Al-based alloy products, such as casting, requires thermophysical properties that are rarely found in the literature. It is extremely important to use reliable values of such properties, as they can critically influence the simulated results. In the present study, a numerical routine is developed and connected in real runtime execution to computational thermodynamic software, permitting thermophysical properties such as latent heats, specific heats, temperatures and heats of transformation, phase fractions and composition, and density of Al–Sn–Si alloys to be determined as a function of temperature. A numerical solidification model is used to run solidification simulations of ternary Al-based alloys using the appropriate calculated thermophysical properties. Directional solidification experiments are carried out with two Al–Sn–Si alloy compositions to provide experimental cooling rate profiles along the length of the castings, which are compared with numerical simulations in order to validate the calculated thermophysical data. For both cases good agreement is observed, indicating the applicability of the proposed approach.

  17. Development and Experimental Validation of a TRNSYS Dynamic Tool for Design and Energy Optimization of Ground Source Heat Pump Systems

    Directory of Open Access Journals (Sweden)

    Félix Ruiz-Calvo

    2017-09-01

    Full Text Available Ground source heat pump (GSHP) systems are an efficient technology for renewable heating and cooling in buildings. To optimize not only the design but also the operation of the system, a complete dynamic model becomes a highly useful tool, since it allows testing any design modifications and different optimization strategies without actually implementing them at the experimental facility. Usually, this type of system presents strongly dynamic operating conditions. Therefore, the model should be able to predict not only the steady-state behavior of the system but also the short-term response. This paper presents a complete GSHP system model based on an experimental facility located at Universitat Politècnica de València. The installation was constructed within the framework of a European collaborative project entitled GeoCool. The model, developed in TRNSYS, has been validated against experimental data, and it accurately predicts both the short- and long-term behavior of the system.

  18. Experimental validation of plugging during drop formation in a T-junction.

    Science.gov (United States)

    Abate, Adam R; Mary, Pascaline; van Steijn, Volkert; Weitz, David A

    2012-04-21

    At low capillary number, drop formation in a T-junction is dominated by interfacial effects: as the dispersed fluid flows into the drop maker nozzle, it blocks the path of the continuous fluid; this leads to a pressure rise in the continuous fluid that, in turn, squeezes on the dispersed fluid, inducing pinch-off of a drop. While the resulting drop volume predicted by this "squeezing" mechanism has been validated for a range of systems, as of yet, the pressure rise responsible for the actual pinch-off has not been observed experimentally. This is due to the challenge of measuring the pressures in a T-junction with the requisite speed, accuracy, and localization. Here, we present an empirical study of the pressures in a T-junction during drop formation. Using Laplace sensors, pressure probes we have developed, we confirm the central ideas of the squeezing mechanism; however, we also uncover other findings, including that the pressure of the dispersed fluid is not constant but rather oscillates in anti-phase with that of the continuous fluid. In addition, even at the highest capillary number for which monodisperse drops can be formed, pressure oscillations persist, indicating that drop formation in confined geometries does not transition to an entirely shear-driven mechanism, but to a mechanism combining squeezing and shearing.

  19. Revealing the Effects of the Herbal Pair of Euphorbia kansui and Glycyrrhiza on Hepatocellular Carcinoma Ascites with Integrating Network Target Analysis and Experimental Validation.

    Science.gov (United States)

    Zhang, Yanqiong; Lin, Ya; Zhao, Haiyu; Guo, Qiuyan; Yan, Chen; Lin, Na

    2016-01-01

    Although the herbal pair of Euphorbia kansui (GS) and Glycyrrhiza (GC) is one of the so-called "eighteen antagonistic medicaments" in the Chinese medicinal literature, it is prescribed in a classic Traditional Chinese Medicine (TCM) formula, Gansui-Banxia-Tang, for cancerous ascites, suggesting that GS and GC may exhibit synergistic or antagonistic effects in different combination designs. Here, we modeled the effects of the GS/GC combination with a target interaction network and clarified the associations between the network topologies involving the drug targets and the drug combination effects. Moreover, the "edge-betweenness" values, defined as the frequency with which an edge lies on the shortest paths between all pairs of modules in the network, were calculated, and the ADRB1-PIK3CG interaction exhibited the greatest edge-betweenness value, suggesting its crucial role in connecting the other edges in the network. Because ADRB1 and PIK3CG were putative targets of GS and GC, respectively, and both had functional interactions with AVPR2, an approved therapeutic target for ascites, we proposed that the ADRB1-PIK3CG-AVPR2 signal axis might be involved in the effects of the GS-GC combination on ascites. This proposal was further experimentally validated in an H22 hepatocellular carcinoma (HCC) ascites model. Collectively, this systems-level investigation integrated drug target prediction and network analysis to reveal the combination principles of the herbal pair of GS and GC. Experimental validation in an in vivo system provided convincing evidence that different combination designs of GS and GC might result in synergistic or antagonistic effects on HCC ascites, which might be partially related to their regulation of the ADRB1-PIK3CG-AVPR2 signal axis.
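
    Edge betweenness, as used above, ranks interactions by how often they lie on shortest paths in the network. A minimal sketch with the networkx package is shown below on a toy graph; the node names other than ADRB1, PIK3CG and AVPR2 are placeholders, and the standard node-pair definition of edge betweenness is used rather than the module-based variant described in the record.

```python
import networkx as nx

# Toy target-interaction network; nodes other than ADRB1/PIK3CG/AVPR2 are placeholders
G = nx.Graph()
G.add_edges_from([
    ("ADRB1", "PIK3CG"), ("PIK3CG", "AVPR2"), ("ADRB1", "T1"),
    ("T1", "T2"), ("T2", "PIK3CG"), ("AVPR2", "T3"),
])

# Fraction of all-pairs shortest paths passing through each edge
eb = nx.edge_betweenness_centrality(G)
for edge, score in sorted(eb.items(), key=lambda kv: -kv[1]):
    print(edge, round(score, 3))
```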

  1. Stochastic modeling of oligodendrocyte generation in cell culture: model validation with time-lapse data

    Directory of Open Access Journals (Sweden)

    Noble Mark

    2006-05-01

    Full Text Available Abstract Background The purpose of this paper is two-fold. The first objective is to validate the assumptions behind a stochastic model developed earlier by these authors to describe oligodendrocyte generation in cell culture. The second is to generate time-lapse data that may help biomathematicians to build stochastic models of cell proliferation and differentiation under other experimental scenarios. Results Using time-lapse video recording it is possible to follow the individual evolutions of different cells within each clone. This experimental technique is very laborious and cannot replace model-based quantitative inference from clonal data. However, it is unrivalled in validating the structure of a stochastic model intended to describe cell proliferation and differentiation at the clonal level. In this paper, such data are reported and analyzed for oligodendrocyte precursor cells cultured in vitro. Conclusion The results strongly support the validity of the most basic assumptions underpinning the previously proposed model of oligodendrocyte development in cell culture. However, there are some discrepancies; the most important is that the contribution of progenitor cell death to cell kinetics in this experimental system has been underestimated.

  2. Soft Sensing of Non-Newtonian Fluid Flow in Open Venturi Channel Using an Array of Ultrasonic Level Sensors—AI Models and Their Validations

    Science.gov (United States)

    Viumdal, Håkon; Mylvaganam, Saba

    2017-01-01

    In oil and gas and geothermal installations, open channels followed by sieves for removal of drill cuttings, are used to monitor the quality and quantity of the drilling fluids. Drilling fluid flow rate is difficult to measure due to the varying flow conditions (e.g., wavy, turbulent and irregular) and the presence of drilling cuttings and gas bubbles. Inclusion of a Venturi section in the open channel and an array of ultrasonic level sensors above it at locations in the vicinity of and above the Venturi constriction gives the varying levels of the drilling fluid in the channel. The time series of the levels from this array of ultrasonic level sensors are used to estimate the drilling fluid flow rate, which is compared with Coriolis meter measurements. Fuzzy logic, neural networks and support vector regression algorithms applied to the data from temporal and spatial ultrasonic level measurements of the drilling fluid in the open channel give estimates of its flow rate with sufficient reliability, repeatability and uncertainty, providing a novel soft sensing of an important process variable. Simulations, cross-validations and experimental results show that feedforward neural networks with the Bayesian regularization learning algorithm provide the best flow rate estimates. Finally, the benefits of using this soft sensing technique combined with Venturi constriction in open channels are discussed. PMID:29072595
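
    As a rough illustration of the soft-sensing idea described above, the sketch below fits a feedforward neural network to map an array of level readings to a reference flow rate. It uses scikit-learn's MLPRegressor with plain L2 weight decay rather than the Bayesian regularization training used in the study, and the data are synthetic stand-ins, not measurements from the Venturi channel.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: three ultrasonic level readings per sample (mm)
# and a reference flow rate (kg/min), e.g. from a Coriolis meter.
levels = rng.uniform(20.0, 80.0, size=(500, 3))
flow = 4.0 * levels[:, 0] + 2.5 * levels[:, 1] ** 0.8 + rng.normal(0.0, 5.0, 500)

# Feedforward network; `alpha` is ordinary L2 weight decay, not the Bayesian
# regularization training algorithm used in the cited study.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000, random_state=0),
)
print(cross_val_score(model, levels, flow, cv=5, scoring="r2"))
```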

  3. Hypersonic nozzle/afterbody CFD code validation. I - Experimental measurements

    Science.gov (United States)

    Spaid, Frank W.; Keener, Earl R.

    1993-01-01

    This study was conducted to obtain a detailed experimental description of the flow field created by the interaction of a single-expansion-ramp-nozzle flow with a hypersonic external stream. Data were obtained from a generic nozzle/afterbody model in the 3.5-Foot Hypersonic Wind Tunnel of the NASA Ames Research Center in a cooperative experimental program involving Ames and the McDonnell Douglas Research Laboratories. This paper presents experimental results consisting primarily of surveys obtained with a five-hole total-pressure/flow-direction probe and a total-temperature probe. These surveys were obtained in the flow field created by the interaction between the underexpanded jet plume and the external flow.

  4. Material characterization and non destructive testing by ultrasounds; modelling, simulation and experimental validation

    International Nuclear Information System (INIS)

    Noroy-Nadal, M.H.

    2002-06-01

    This thesis presents research concerning the characterization of materials and non-destructive testing (NDT) by ultrasonics. Each topic involves three steps: modelling, computation and experimental validation. The materials studied are mainly metals. The thesis is divided into four parts. The first concerns the characterization of materials as a function of temperature; the determination of the shear modulus G(T) is studied in particular over a large temperature range and around the melting point. The second part is devoted to studies with photothermal devices, essentially focused on the modelling of the mechanical displacement and the stress field in coated materials. In this particular field of interest, applications concern the mechanical characterization of the coating, defect detection in the structure and the evaluation of the coating adhesion. The third section is dedicated to microstructural characterization using acoustic microscopy; the evaluation of crystallographic texture is addressed in particular, for metallic objects obtained by forming. Before concluding and pointing out some perspectives of this work, the last section concerns the introduction of optimization techniques applied to material characterization by acoustic microscopy. (author)

  5. A validation methodology for fault-tolerant clock synchronization

    Science.gov (United States)

    Johnson, S. C.; Butler, R. W.

    1984-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating an experimental implementation of the Software Implemented Fault Tolerance (SIFT) clock synchronization algorithm. The design proof of the algorithm defines the maximum skew between any two nonfaulty clocks in the system in terms of theoretical upper bounds on certain system parameters. The quantile to which each parameter must be estimated is determined by a combinatorial analysis of the system reliability. The parameters are measured by direct and indirect means, and upper bounds are estimated. A nonparametric method based on an asymptotic property of the tail of a distribution is used to estimate the upper bound of a critical system parameter. Although the proof process is very costly, it is extremely valuable when validating the crucial synchronization subsystem.

  6. Experimental and numerical investigation of a linear Fresnel solar collector with flat plate receiver

    International Nuclear Information System (INIS)

    Bellos, Evangelos; Mathioulakis, Emmanouil; Tzivanidis, Christos; Belessiotis, Vassilis; Antonopoulos, Kimon A.

    2016-01-01

    Highlights: • A linear Fresnel solar collector with flat plate receiver is investigated. • The collector is investigated experimentally in energetic and exergetic terms. • The developed numerical model is validated with the experimental results. • The operation with thermal oil is also examined with the developed model. • The final results prove satisfying performance for medium temperature levels. - Abstract: In this study a linear Fresnel solar collector with flat plate receiver is investigated experimentally and numerically with Solidworks Flow Simulation. The developed model combines optical, thermal and flow analysis, an innovative and demanding combination which leads to accurate results. The main objective of this study is to determine the thermal, optical and exergetic performance of this collector in various operating conditions. For these reasons, the developed model is validated against the respective experimental data and, after this step, the solar collector model is examined parametrically for various fluid temperature levels and solar incident angles. The use of thermal oil is also analyzed with the simulation tool in order to examine the collector performance at medium temperature levels. The experiments are performed with water as the working fluid and for low temperature levels up to 100 °C. The final results proved that this solar collector is able to produce about 8.5 kW of useful heat in summer, 5.3 kW in spring and 2.9 kW in winter. Moreover, the operation of this collector with thermal oil can lead to satisfying results up to 250 °C.

  7. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema

    DEFF Research Database (Denmark)

    Lassere, Marissa N; Johnson, Kent R; Boers, Maarten

    2007-01-01

    endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. RESULTS: The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation...... of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. CONCLUSION: Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery...... are then applied if there is serious counterevidence. A total score (0 to 15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. It was proposed that the term "surrogate" be restricted to markers attaining Levels 1 or 2 only. Most stakeholders agreed that this operationalization...

  8. Validation of the generalized model of two-phase thermosyphon loop based on experimental measurements of volumetric flow rate

    Science.gov (United States)

    Bieliński, Henryk

    2016-09-01

    The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and the falling tube. The theoretical analysis and the experimental data have been obtained for a new designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. The one-dimensional homogeneous and separated two-phase flow models were used in calculations. The latest minichannel heat transfer correlations available in literature were applied. A numerical analysis of the volumetric flow rate in the steady-state has been done. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as a working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady-state.

  9. VIRmiRNA: a comprehensive resource for experimentally validated viral miRNAs and their targets.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Monga, Isha; Thakur, Anamika; Kumar, Manoj

    2014-01-01

    Viral microRNAs (miRNAs) regulate gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host-virus interactions and the pathogenesis of viral diseases. Lately, miRNAs have also shown potential as important targets for the development of novel antiviral therapeutics. Although several miRNA and miRNA-target repositories are available for human and other organisms in the literature, a dedicated resource on viral miRNAs and their targets has been lacking. Therefore, we have developed a comprehensive viral miRNA resource harboring information on 9133 entries in three subdatabases. This includes 1308 experimentally validated miRNA sequences with their isomiRs, encoded by 44 viruses, in the viral miRNA subdatabase 'VIRmiRNA' and 7283 of their target genes in 'VIRmiRTar'. Additionally, there is information on 542 antiviral miRNAs encoded by the host against 24 viruses in the antiviral miRNA subdatabase 'AVIRmir'. The web interface was developed using the Linux-Apache-MySQL-PHP (LAMP) software bundle. User-friendly browse, search, advanced search and useful analysis tools are also provided on the web interface. VIRmiRNA is the first specialized resource of experimentally proven virus-encoded miRNAs and their associated targets. This database would enhance the understanding of viral/host gene regulation and may also prove beneficial in the development of antiviral therapeutics. Database URL: http://crdd.osdd.net/servers/virmirna. © The Author(s) 2014. Published by Oxford University Press.

  10. Experimental validation of alternate integral-formulation method for predicting acoustic radiation based on particle velocity measurements.

    Science.gov (United States)

    Ni, Zhi; Wu, Sean F

    2010-09-01

    This paper presents experimental validation of an alternate integral-formulation method (AIM) for predicting acoustic radiation from an arbitrary structure based on the particle velocities specified on a hypothetical surface enclosing the target source. Both the normal and tangential components of the particle velocity on this hypothetical surface are measured and taken as the input to AIM codes to predict the acoustic pressures in both exterior and interior regions. The results obtained are compared with the benchmark values measured by microphones at the same locations. To gain some insight into practical applications of AIM, laser Doppler anemometer (LDA) and double hotwire sensor (DHS) are used as measurement devices to collect the particle velocities in the air. Measurement limitations of using LDA and DHS are discussed.

  11. Model validation using CFD-grade experimental database for NGNP Reactor Cavity Cooling Systems with water and air

    Energy Technology Data Exchange (ETDEWEB)

    Manera, Annalisa [Univ. of Michigan, Ann Arbor, MI (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States); Petrov, Victor [Univ. of Michigan, Ann Arbor, MI (United States); Anderson, Mark [Univ. of Wisconsin, Madison, WI (United States); Tompkins, Casey [Univ. of Wisconsin, Madison, WI (United States); Nunez, Daniel [Univ. of Michigan, Ann Arbor, MI (United States)

    2018-02-13

    This project has been focused on the experimental and numerical investigation of water-cooled and air-cooled Reactor Cavity Cooling System (RCCS) designs. To this aim, we have leveraged an existing experimental facility at the University of Wisconsin-Madison (UW), and we have designed and built a separate-effect test facility at the University of Michigan. The experimental facility at UW has undergone several upgrades, including the installation of advanced instrumentation (i.e. wire-mesh sensors) built at the University of Michigan. These provide high-resolution, time-resolved measurements of the void-fraction distribution in the risers of the water-cooled RCCS facility. A phenomenological model has been developed to assess the water-cooled RCCS system stability and determine the root cause of the oscillatory behavior that occurs under normal two-phase operation. Tests under various perturbations to the water-cooled RCCS facility have resulted in changes in the stability of the integral system. In particular, the effects of inlet orifices, water tank volume and system pressure on stability have been investigated. MELCOR was used as a predictive tool when performing inlet orificing tests and was able to capture the Density Wave Oscillations (DWOs) that occurred upon reaching saturation in the risers. The experimental and numerical results have then been used to provide RCCS design recommendations. The experimental facility built at the University of Michigan was aimed at the investigation of mixing in the upper plenum of the air-cooled RCCS design. The facility has been equipped with state-of-the-art high-resolution instrumentation to achieve so-called CFD-grade experiments, which can be used for the validation of Computational Fluid Dynamics (CFD) models, both RANS (Reynolds-Averaged) and LES (Large Eddy Simulation). The effect of riser penetrations in the upper plenum has been investigated as well.

  12. Experimental and numerical validation of a two-region-designed pebble bed reactor with dynamic core

    International Nuclear Information System (INIS)

    Jiang, S.Y.; Yang, X.T.; Tang, Z.W.; Wang, W.J.; Tu, J.Y.; Liu, Z.Y.; Li, J.

    2012-01-01

    Highlights: ► An experimental installation has been built to investigate the pebble flow. ► The feasibility of a two-region pebble bed reactor has been verified. ► The pebble flow is more uniform in a taller vessel than in a shorter one. ► A larger base cone angle decreases the scale of the stagnant zone. - Abstract: The pebble flow is the principal issue for the design of the pebble bed reactor. In order to verify the feasibility of a two-region-designed pebble bed reactor, an experimental installation with a taller vessel, proportional to the real pebble bed reactor, has been built. With the aid of the experimental installation, the stable establishment and maintenance of the two-region arrangement has been verified; at the same time, the applicability of the DEM program has also been validated. Research results show: (1) The pebbles' bouncing on the free surface is an important factor for the mixing of the differently colored pebbles. (2) Through guide plates installed at the top of the pebble packing, the size of the mixing zone can be reduced from 6–7 times to 3–4 times the pebble diameter. (3) The relationship between the width of the central region and the ratio of loading pebbles is approximately linear in the taller vessel. (4) The heightened part of the pebble packing improves the uniformity of the flow in the lower part. (5) Increasing the base cone angle decreases the scale of the stagnant zone. All of these conclusions are meaningful for the design of the real pebble bed reactor.

  13. Discovery of potent, novel, non-toxic anti-malarial compounds via quantum modelling, virtual screening and in vitro experimental validation

    Directory of Open Access Journals (Sweden)

    Kaludov Nikola

    2011-09-01

    Full Text Available Abstract Background Developing resistance towards existing anti-malarial therapies emphasizes the urgent need for new therapeutic options. Additionally, many malaria drugs in use today have high toxicity and low therapeutic indices. Gradient Biomodeling, LLC has developed a quantum-model search technology that uses quantum similarity and does not depend explicitly on chemical structure, as molecules are rigorously described in fundamental quantum attributes related to individual pharmacological properties. Therapeutic activity, as well as toxicity and other essential properties, can be analysed and optimized simultaneously, independently of one another. Such a methodology is suitable for a search for novel, non-toxic, active anti-malarial compounds. Methods A set of innovative algorithms is used for the fast calculation and interpretation of electron-density attributes of molecular structures at the quantum level for rapid discovery of prospective pharmaceuticals. Potency and efficacy, as well as additional physicochemical, metabolic, pharmacokinetic, safety, permeability and other properties, were characterized by the procedure. Once quantum models are developed and experimentally validated, the methodology provides a straightforward implementation for lead discovery, compound optimization and de novo molecular design. Results Starting with a diverse training set of 26 well-known anti-malarial agents combined with 1730 moderately active and inactive molecules, novel compounds that have strong anti-malarial activity, low cytotoxicity and structural dissimilarity from the training set were discovered and experimentally validated. Twelve compounds were identified in silico and tested in vitro; eight of them showed anti-malarial activity (IC50 ≤ 10 μM), with six being very effective (IC50 ≤ 1 μM) and four exhibiting low nanomolar potency. The most active compounds were also tested for mammalian cytotoxicity and found to be non-toxic, with a

  14. Design and validation of the INICIARE instrument, for the assessment of dependency level in acutely ill hospitalised patients.

    Science.gov (United States)

    Morales-Asencio, José Miguel; Porcel-Gálvez, Ana María; Oliveros-Valenzuela, Rosa; Rodríguez-Gómez, Susana; Sánchez-Extremera, Lucrecia; Serrano-López, Francisco Andrés; Aranda-Gallardo, Marta; Canca-Sánchez, José Carlos; Barrientos-Trigo, Sergio

    2015-03-01

    The aim of this study was to establish the validity and reliability of an instrument (Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería) used to assess the dependency level in acutely hospitalised patients. This instrument is novel, and it is based on the Nursing Outcomes Classification. Multiple existing instruments for needs assessment have been poorly validated and are based predominantly on interventions. Standardised Nursing Languages offer an ideal framework to develop nursing-sensitive instruments. A cross-sectional validation study was conducted in two acute care hospitals in Spain. This study was implemented in two phases. First, the research team developed the instrument to be validated. In the second phase, the validation process was performed by experts, and the data analysis was conducted to establish the psychometric properties of the instrument. Seven hundred and sixty-one patient ratings performed by nurses were collected during the course of the research study. Data analysis yielded a Cronbach's alpha of 0.91. An exploratory factor analysis identified three factors (Physiological, Instrumental and Cognitive-behavioural), which explained 74% of the variance. Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería was demonstrated to be a valid and reliable instrument, based on its use in acutely hospitalised patients, to assess the level of dependency. Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería can be used as an assessment tool in hospitalised patients during the nursing process throughout the entire hospitalisation period. It contributes information to support decisions on nursing diagnoses, interventions and outcomes. It also enables data codification in large databases. © 2014 John Wiley & Sons Ltd.

  15. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
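
    The QUBO problems mentioned above ask for a binary vector x minimizing x^T Q x. The sketch below (in Python, for consistency with the other sketches in this listing, and deliberately not using the QSPIN Java API, whose classes are not reproduced here) evaluates and brute-forces a tiny illustrative QUBO; the matrix values are invented, and exhaustive search is only feasible for very small instances.

```python
import itertools
import numpy as np

# Tiny illustrative QUBO: minimise x^T Q x over binary x (matrix values invented)
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search works only for very small problems; real solvers use
# annealing hardware, simulated annealing or tabu search instead.
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))   # (1, 0, 1) with energy -2.0
```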

  16. Experimental Investigation of Coolant Mixing in WWER and PWR Reactor Fuel Bundles by Laser Optical Techniques for CFD Validation

    International Nuclear Information System (INIS)

    Tar, D.; Baranyai, V; Ezsoel, Gy.; Toth, I.

    2010-01-01

    Non-intrusive laser optical measurements have been carried out to investigate the coolant mixing in a model of the head part of a fuel assembly of a WWER reactor. The goal of this research was to investigate the coolant flow around the point-based in-core thermocouple, and also to provide an experimental database as a validation tool for computational fluid dynamics calculations. The experiments have been carried out on a full-scale model of the head part of a WWER-440/213 fuel assembly. In this paper, the previous results of the research project are first summarised, where full-field velocity vectors and temperature were obtained by particle image velocimetry and planar laser induced fluorescence, respectively. Then, preliminary results of the investigation of the influence of the flow in the central tube are reported by presenting velocity measurement results. In order to obtain a well measurable effect, extreme flow rates were set in the central tube by applying an inner tube with controlled flow rates. Despite the extreme conditions, the influence of the central tube on the velocity field proved to be significant. Further measurements will be done to investigate the effect of the gaps at the spacer fixings by displacing the inner tube vertically, and the temperature distribution will also be determined for similar geometries by laser induced fluorescence. The aim of the measurements was to establish an experimental database, as well as the validation of computational fluid dynamics calculations. (Authors)

  17. Defect level characterization of silicon nanowire arrays: Towards novel experimental paradigms

    Energy Technology Data Exchange (ETDEWEB)

    Carapezzi, Stefania; Castaldini, Antonio; Cavallini, Anna [Department of Physics and Astronomy, University of Bologna, V.le Berti Pichat 6/2, Bologna (Italy); Irrera, Alessia [IPCF CNR, Viale Stagno D' Alcontres n. 37-98158, Messina, Italy and MATIS IMM CNR, Viale Santa Sofia n. 64, 95123 Catania (Italy)

    2014-02-21

    The huge body of knowledge and infrastructure brought by silicon (Si) technology makes Si nanowires (NWs) an ideal choice for nano-electronic Si-based devices. This, in turn, challenges scientific research to adapt the technical and theoretical paradigms at the base of established experimental techniques in order to probe the properties of these systems. Metal-assisted wet-Chemical Etching (MaCE) [1, 2] is a promising fast, easy and cheap method to grow high aspect-ratio aligned Si NWs. Further, contrary to other fabrication methods, this method avoids the possible detrimental effects related to Au diffusion into NWs. We investigated the bandgap level diagram of phosphorus-doped MaCE Si NW arrays by means of Deep Level Transient Spectroscopy. The presence of both shallow and deep levels has been detected. The results have been examined in the light of the specificity of MaCE growth. The study of the electronic levels in Si NWs is, of course, of capital importance in view of the integration of Si NW arrays as active layers in actual devices.

  18. Experimental observation of dynamic ductile damage development under various triaxiality conditions - description of the principle

    Directory of Open Access Journals (Sweden)

    Pillon L.

    2012-08-01

    Full Text Available The Gurson model has been extended by Perrin to describe damage evolution in ductile viscoplastic materials. The so-called Gurson-Perrin model allows damage development to be represented with respect to strain-rate conditions. In order to fill a gap in current experimental procedures, we propose an experimental programme able to test and validate the Gurson-Perrin model under various dynamic conditions and for different stress triaxiality levels.

  19. Impact of anti-charge sharing on the zero-frequency detective quantum efficiency of CdTe-based photon counting detector system: cascaded systems analysis and experimental validation

    Science.gov (United States)

    Ji, Xu; Zhang, Ran; Chen, Guang-Hong; Li, Ke

    2018-05-01

    Inter-pixel communication and anti-charge sharing (ACS) technologies have been introduced to photon counting detector (PCD) systems to address the undesirable charge sharing problem. In addition to improving the energy resolution of PCD, ACS may also influence other aspects of PCD performance such as detector multiplicity (i.e. the number of pixels triggered by each interacted photon) and detective quantum efficiency (DQE). In this work, a theoretical model was developed to address how ACS impacts the multiplicity and zero-frequency DQE [DQE(0)] of PCD systems. The work focused on cadmium telluride (CdTe)-based PCD that often involves the generation and transport of K-fluorescence photons. Under the parallel cascaded systems analysis framework, the theory takes both photoelectric and scattering effects into account, and it also considers both the reabsorption and escape of photons. In a new theoretical treatment of ACS, it was considered as a modified version of the conventional single pixel (i.e. non-ACS) mode, but with reduced charge spreading distance and K-fluorescence travel distance. The proposed theoretical model does not require prior knowledge of the detailed ACS implementation method for each specific PCD, and its parameters can be experimentally determined using a radioisotope without invoking any Monte-Carlo simulation. After determining the model parameters, independent validation experiments were performed using a diagnostic x-ray tube and four different polychromatic beams (from 50 to 120 kVp). Both the theoretical and experimental results demonstrate that ACS increased the first and second moments of multiplicity for a majority of the x-ray energy and threshold levels tested, except when the threshold level was much lower than the x-ray energy level. However, ACS always improved DQE(0) at all energy and threshold levels tested.

  20. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  1. Investigation of a two-phase nozzle flow and validation of several computer codes by the experimental data

    International Nuclear Information System (INIS)

    Kedziur, F.

    1980-03-01

    Stationary experiments with a convergent nozzle are performed in order to validate advanced two-phase computer codes, which find application in the blowdown phase of a loss-of-coolant accident (LOCA). The steam/water flow presents a broad variety of initial conditions: the pressure varies between 2 and 13 MPa, the void fraction between 0 (subcooled) and about 80%, and a great number of subcritical as well as critical experiments with different flow patterns is investigated. Additional air/water experiments serve for the separation of phase transition effects. The transient acceleration of the fluid in the LOCA case is simulated by a local acceleration in the experiments. The layout of the nozzle and the applied measurement technique allow for a separate testing of physical models and the determination of empirical model parameters, respectively: in the four codes DUESE, DRIX-2D, RELAP4/MOD6 and STRUYA the models - if they exist - for slip between the phases, thermodynamic non-equilibrium, pipe friction and critical mass flow rate are validated and criticised in comparison with the experimental data, and the corresponding model parameters are determined. The parameters are essentially a function of the void fraction. (orig.) [de

  2. Experimental study on intermediate level radioactive waste processing

    International Nuclear Information System (INIS)

    Nagakura, Tadashi; Abe, Hirotoshi; Okazawa, Takao; Hattori, Seiichi; Maki, Yasuro

    1977-01-01

    In the disposal of intermediate level radioactive wastes, a multilayer package will be adopted. The multilayer package consists of cement-solidified waste and a container such as a drum can with a concrete liner or a concrete container. Experimental studies were therefore carried out on waste to be cement-solidified in such containers, covering: (1) the cement-solidification method; (2) the mechanical behaviour of cement-solidified waste. The mechanical behaviour of the containers was studied by the finite element method and by experiment, and the function of pressure-balancing valves was also studied. The following data on processing intermediate level radioactive wastes were obtained. (1) For cement-solidified waste, the data needed to select a suitable solidifying material and the standard mixing proportion were determined. (2) Basic data were obtained concerning the uniaxial compressive strength of cement-solidified waste, the mechanical behaviour of cement-solidified waste packed in a drum under high hydrostatic pressure, the shock response of cement-solidified waste at the time of falling, and so on. (3) The pressure-balancing valves worked at a pressure difference of about 0.5 kg/cm² between the inside and outside of a container, and the deformation of a drum cover was 10 to 13 mm. For pressure differences of less than 0.5 kg/cm², the valves shut, and water flow did occur. (auth.)

  3. Underwater behaviour of bitumen coated radioactive wastes: experimental validation of the Colonbo degradation model

    International Nuclear Information System (INIS)

    Gwinner, B.

    2004-03-01

    In the release scenario considered for a geologic repository, water is thought to be the main aggressive agent with regard to bituminized radioactive waste (composed in general of 60 weight % bitumen, 40% soluble/insoluble salts and a few ppm of radionuclides). Since liquid water can diffuse in pure bitumen, leaching of bituminized waste results in the dissolution of the most soluble salts and leads to the development of a more or less concentrated saline-solution-filled pore structure (called the permeable layer). As a consequence of the generation of a porous layer in the bituminized waste, leaching of salts and radionuclides can then take place. Research performed at the Atomic Energy Commission (CEA) therefore aims at understanding the consequences of ground-water immersion on the transport properties and radionuclide leaching of bituminized waste materials. To this end, a constitutive model (called COLONBO) which mathematically describes the leaching of bituminized waste has been developed. The COLONBO model is based on the following assumptions: 1. Water and dissolved salts migrate in the permeable layer according to Fick's first law. The diffusion of water and salts is quantified by effective diffusion coefficients which are unknown. 2. The mechanical properties of the bitumen matrix are not considered during leaching (free swelling). Up to now, the COLONBO model has been used only to model experimental water uptake and salt leach curves, leading to (theoretical) estimates of the effective diffusion coefficients of water and salts in the permeable layer. The aim of this work was to validate experimentally the numerical results obtained with the COLONBO model. First, the correspondence between experimental and simulated water uptake and salt leach rates obtained on various bituminized waste materials is checked, leading to estimates of the effective diffusion coefficients of water and salts in the permeable layer. Second, the evolution of the thickness and of the
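
    The transport law assumed in point 1 above (Fickian diffusion with an unknown effective diffusion coefficient) can be illustrated with a generic one-dimensional diffusion calculation. The explicit finite-difference sketch below is such an illustration only; it is not the COLONBO code, and the layer thickness, diffusion coefficient and boundary conditions are hypothetical.

```python
import numpy as np

# Generic 1D diffusion (Fick's law) into a slab, explicit finite differences.
# Illustration of the assumed transport law only; not the CEA COLONBO code,
# and every parameter value below is hypothetical.
D = 1.0e-13                 # m^2/s, effective diffusion coefficient
L = 1.0e-3                  # m, thickness of the permeable layer considered
nx, nt = 101, 1000
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D        # within the explicit stability limit (<= 0.5)

c = np.zeros(nx)            # relative water concentration across the layer
c[0] = 1.0                  # leachant-side boundary kept saturated
for _ in range(nt):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[-1] = c[-2]           # zero-flux condition towards the intact bitumen

depth = dx * np.argmax(c < 0.01)    # depth where concentration falls below 1%
print(f"after {nt * dt / 86400:.1f} days the 1% front is at {depth:.2e} m")
```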

  4. Thermodynamic properties of 9-fluorenone: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Kazakov, Andrei F.; Steele, William V.

    2012-01-01

    Highlights: ► Heat capacities were measured for the temperature range 5 K to 520 K. ► Vapor pressures were measured for the temperature range 368 K to 668 K. ► The enthalpy of combustion was measured and the enthalpy of formation was derived. ► Calculated and derived properties for the ideal gas are in excellent accord. ► Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Measurements leading to the calculation of thermodynamic properties for 9-fluorenone (IUPAC name 9H-fluoren-9-one and Chemical Abstracts registry number [486-25-9]) in the ideal-gas state are reported. Experimental methods were adiabatic heat-capacity calorimetry, inclined-piston manometry, comparative ebulliometry, and combustion calorimetry. Critical properties were estimated. Molar entropies for the ideal-gas state were derived from the experimental studies at selected temperatures between T = 298.15 K and T = 600 K, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. Values derived with the independent methods are shown to be in excellent accord with a scaling factor of 0.975 applied to the calculated frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in recent articles by this research group. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous.

  5. Correlation of Perfusion MRI and 18F-FDG PET Imaging Biomarkers for Monitoring Regorafenib Therapy in Experimental Colon Carcinomas with Immunohistochemical Validation

    Science.gov (United States)

    Eschbach, Ralf S.; Fendler, Wolfgang P.; Kazmierczak, Philipp M.; Hacker, Marcus; Rominger, Axel; Carlsen, Janette; Hirner-Eppeneder, Heidrun; Schuster, Jessica; Moser, Matthias; Havla, Lukas; Schneider, Moritz J.; Ingrisch, Michael; Spaeth, Lukas; Reiser, Maximilian F.; Nikolaou, Konstantin; Cyran, Clemens C.

    2015-01-01

    Objectives To investigate a multimodal, multiparametric perfusion MRI / 18F-fluoro-deoxyglucose-(18F-FDG)-PET imaging protocol for monitoring regorafenib therapy effects on experimental colorectal adenocarcinomas in rats with immunohistochemical validation. Materials and Methods Human colorectal adenocarcinoma xenografts (HT-29) were implanted subcutaneously in n = 17 (n = 10 therapy group; n = 7 control group) female athymic nude rats (Hsd:RH-Foxn1rnu). Animals were imaged at baseline and after a one-week daily treatment protocol with regorafenib (10 mg/kg bodyweight) using a multimodal, multiparametric perfusion MRI/18F-FDG-PET imaging protocol. In perfusion MRI, quantitative parameters of plasma flow (PF, mL/100 mL/min), plasma volume (PV, %) and endothelial permeability-surface area product (PS, mL/100 mL/min) were calculated. In 18F-FDG-PET, tumor-to-background-ratio (TTB) was calculated. Perfusion MRI parameters were correlated with TTB and immunohistochemical assessments of tumor microvascular density (CD-31) and cell proliferation (Ki-67). Results Regorafenib significantly (pregorafenib therapy effects on experimental colorectal adenocarcinomas in vivo with significant correlations between perfusion MRI parameters and 18F-FDG-PET validated by immunohistochemistry. PMID:25668193

  6. Experimental Validation of the Reverberation Effect in Room Electromagnetics

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri

    2015-01-01

    . This tail can be characterized with Sabine's or Eyring's reverberation models, which were initially developed in acoustics. So far, these models were only fitted to data collected from radio measurements, but no thorough validation of their prediction ability in electromagnetics has been performed yet...

  7. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, C., E-mail: hansec@uw.edu [PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Columbia University, New York, New York 10027 (United States); Victor, B.; Morgan, K.; Hossack, A.; Sutherland, D. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); Jarboe, T.; Nelson, B. A. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Marklin, G. [PSI-Center, University of Washington, Seattle, Washington 98195 (United States)

    2015-05-15

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
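
    In practice, the biorthogonal decomposition of an (n_times × n_probes) data matrix is a singular value decomposition whose right singular vectors give the spatial mode structures. The sketch below builds BD modes for two synthetic 192-probe data sets and evaluates one simple overlap-style metric between their leading spatial modes; the probe count and injector frequency follow the record, but the data and the particular metric are illustrative assumptions rather than the three metrics defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bd_modes(data, n_modes=3):
    """Biorthogonal decomposition of an (n_times x n_probes) matrix via SVD;
    the rows of vt are the spatial mode structures ('topos')."""
    _, s, vt = np.linalg.svd(data, full_matrices=False)
    return s[:n_modes], vt[:n_modes]

# Synthetic surface-probe data standing in for experiment and simulation
n_t, n_probes = 2000, 192
phase = np.linspace(0.0, 2.0 * np.pi, n_probes, endpoint=False)
t = np.linspace(0.0, 1.0e-3, n_t)[:, None]
exp_data = np.sin(2 * np.pi * 14.5e3 * t + phase) + 0.1 * rng.standard_normal((n_t, n_probes))
sim_data = np.sin(2 * np.pi * 14.5e3 * t + phase + 0.2)   # slightly shifted "simulation"

_, exp_topos = bd_modes(exp_data)
_, sim_topos = bd_modes(sim_data)

# One possible scalar metric: absolute overlap of the leading spatial modes
print(f"leading-mode overlap: {abs(np.dot(exp_topos[0], sim_topos[0])):.3f}")
```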

  8. IUPAC critical evaluation of the rotational–vibrational spectra of water vapor, Part III: Energy levels and transition wavenumbers for H₂¹⁶O

    International Nuclear Information System (INIS)

    Tennyson, Jonathan; Bernath, Peter F.; Brown, Linda R.; Campargue, Alain; Császár, Attila G.; Daumont, Ludovic; Gamache, Robert R.; Hodges, Joseph T.; Naumenko, Olga V.; Polyansky, Oleg L.; Rothman, Laurence S.; Vandaele, Ann Carine; Zobov, Nikolai F.; Al Derzi, Afaf R.; Fábri, Csaba; Fazliev, Alexander Z.; Furtenbacher, Tibor

    2013-01-01

    This is the third of a series of articles reporting critically evaluated rotational–vibrational line positions, transition intensities, and energy levels, with associated critically reviewed labels and uncertainties, for all the main isotopologues of water. This paper presents experimental line positions, experimental-quality energy levels, and validated labels for rotational–vibrational transitions of the most abundant isotopologue of water, H₂¹⁶O. The latest version of the MARVEL (Measured Active Rotational–Vibrational Energy Levels) line-inversion procedure is used to determine the rovibrational energy levels of the electronic ground state of H₂¹⁶O from experimentally measured lines, together with their self-consistent uncertainties, for the spectral region up to the first dissociation limit. The spectroscopic network of H₂¹⁶O contains two components, an ortho (o) and a para (p) one. For o-H₂¹⁶O and p-H₂¹⁶O, experimentally measured, assigned, and labeled transitions were analyzed from more than 100 sources. The measured lines come from one-photon spectra recorded at room temperature in absorption, from hot samples with temperatures up to 3000 K recorded in emission, and from multiresonance excitation spectra which sample levels up to dissociation. The total number of transitions considered is 184 667, of which 182 156 are validated: 68 027 between para states and 114 129 ortho ones. These transitions give rise to 18 486 validated energy levels, of which 10 446 and 8040 belong to o-H₂¹⁶O and p-H₂¹⁶O, respectively. The energy levels, including their labeling with approximate normal-mode and rigid-rotor quantum numbers, have been checked against ones determined from accurate variational nuclear motion computations employing exact kinetic energy operators, as well as against previous compilations of energy levels. The extensive list of MARVEL lines and levels obtained is deposited in the supplementary data of this paper, as well as in a

  9. Reliability and Validity of a Survey of Cat Caregivers on Their Cats’ Socialization Level in the Cat’s Normal Environment

    Directory of Open Access Journals (Sweden)

    Margaret Slater

    2013-12-01

    Full Text Available Stray cats routinely enter animal welfare organizations each year and shelters are challenged with determining the level of human socialization these cats may possess as quickly as possible. However, there is currently no standard process to guide this determination. This study describes the development and validation of a caregiver survey designed to be filled out by a cat’s caregiver so it accurately describes a cat’s personality, background, and full range of behavior with people when in its normal environment. The results from this survey provided the basis for a socialization score that ranged from unsocialized to well socialized with people. The quality of the survey was evaluated based on inter-rater and test-retest reliability, internal consistency, and estimates of construct and criterion validity. In general, our results showed moderate to high levels of inter-rater (median of 0.803, range 0.211–0.957) and test-retest agreement (median 0.92, range 0.211–0.999). Cronbach’s alpha showed high internal consistency (0.962). Estimates of validity did not highlight any major shortcomings. This survey will be used to develop and validate an effective assessment process that accurately differentiates cats by their socialization levels towards humans based on direct observation of cats’ behavior in an animal shelter.
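    The internal-consistency figure quoted above is Cronbach's alpha, computed from the respondents-by-items score matrix. A minimal sketch with hypothetical caregiver responses (not the study's data) illustrates the calculation:

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for a (respondents x items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)      # per-item variance
          total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical responses to five socialization items scored 0-3.
      scores = np.array([
          [3, 3, 2, 3, 3],
          [0, 1, 0, 0, 1],
          [2, 2, 3, 2, 2],
          [1, 0, 1, 1, 0],
          [3, 2, 3, 3, 3],
      ])
      print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")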

  10. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro

    2001-01-01

    Validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear data bases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in algorithms for both the calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, prediction accuracy of decay heat within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the 92 Mo(n, 2n) 91g Mo reaction in FENDL, and a lack of activation cross section data, e.g., the 138 Ba(n, 2n) 137m Ba reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in decay data. (author)

  11. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    Validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear data bases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in algorithms for both the calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, prediction accuracy of decay heat within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the 92 Mo(n, 2n) 91g Mo reaction in FENDL, and a lack of activation cross section data, e.g., the 138 Ba(n, 2n) 137m Ba reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in decay data. (author)

  12. Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care

    NARCIS (Netherlands)

    Smirnova, A.; Lombarts, K.; Arah, O.A.; Vleuten, C.P.M. van der

    2017-01-01

    BACKGROUND: Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. OBJECTIVE: To validate

  13. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    computed counterparts, resulting in an agreement within 2.5%, 5%, and 8% in solid water, bone, and lung, respectively. Conclusions: The proposed virtual point source model and characterization method can be used to compute absorbed dose in both the homogeneous and heterogeneous block phantoms within 2%–8% of measured values, depending on the phantom and the beam quality. The authors’ results also provide experimental validation for their kV dose computation software, kVDoseCalc.

  14. Current Controller for Multi-level Front-end Converter and Its Digital Implementation Considerations on Three-level Flying Capacitor Topology

    Science.gov (United States)

    Tekwani, P. N.; Shah, M. T.

    2017-10-01

    This paper presents the behaviour analysis and digital implementation of a current-error space-phasor-based hysteresis controller applied to a three-phase three-level flying capacitor converter used as a front-end topology. The controller is self-adaptive in nature and takes the converter from three-level to two-level mode of operation and vice versa, following various trajectories of sector change as the reference dc-link voltage demanded by the load changes. It keeps the current error space phasor within the prescribed hexagonal boundary. During contingencies, the proposed controller takes the converter into overmodulation mode to meet the load demand and, once the need is satisfied, brings the converter back into the normal operating range. Simulation results are presented to validate the behaviour of the controller under these contingencies. Unity power factor is assured by the proposed controller with low current harmonic distortion, satisfying the limits prescribed in IEEE 519-2014. The proposed controller is implemented using a TMS320LF2407 16-bit fixed-point digital signal processor. A detailed analysis of the numerical format used to avoid overflow of the sensed variables in the processor and of the per-unit model implementation in software is discussed, and hardware results are presented at various stages of signal conditioning to validate the experimental setup. The control logic for the generation of reference currents is implemented in the TMS320LF2407A using assembly language, and experimental results are also presented for the same.
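    On a 16-bit fixed-point processor, per-unit scaling is typically combined with a Q-format representation so that sensed quantities cannot overflow the word length. The sketch below is a simplified illustration of that idea, not the paper's implementation; the Q15 format, base values, and signal levels are assumptions:

      # Minimal sketch of per-unit scaling in a Q15 fixed-point format, as might be
      # used on a 16-bit fixed-point DSP.  Base values and signal levels are
      # illustrative assumptions, not taken from the paper.

      Q15 = 1 << 15  # 32768 counts represent 1.0 per-unit

      def to_q15(value, base):
          """Convert a physical quantity to a per-unit Q15 integer, saturating so an
          overload cannot overflow the 16-bit word."""
          pu = value / base                       # per-unit value in floating point
          counts = int(round(pu * Q15))
          return max(-Q15, min(Q15 - 1, counts))  # saturate to the int16 range

      def from_q15(counts, base):
          """Recover the physical quantity from its Q15 per-unit representation."""
          return (counts / Q15) * base

      I_BASE = 20.0  # ampere, assumed current base of the converter
      sensed = 13.7  # ampere, an example sensed phase current
      q = to_q15(sensed, I_BASE)
      print(q, from_q15(q, I_BASE))  # e.g. 22446 counts and about 13.7 A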

  15. Experimental Verification Of Hyper-V Performance Isolation Level

    Directory of Open Access Journals (Sweden)

    Krzysztof Rzecki

    2014-01-01

    Full Text Available The need for cost optimization, broadly understood, underlies the operation of every enterprise. For the IT infrastructure that is present in almost every field of activity these days, one of the most commonly applied technologies for achieving a good cost-to-profit ratio is virtualization. It consists of hosting several operating systems, together with their IT systems, on a single server. For such optimization to be carried out correctly, it has to be strictly controlled by allocating access to resources, which is known as performance isolation. Modern virtualizers allow this allocation to be configured in quantitative terms (the number of processors, size of RAM, or disc space). It appears, however, that in qualitative terms (processor time, RAM or hard disc bandwidth) the actual allocation of resources does not always correspond to this configuration. This paper provides an experimental presentation of the achievable level of performance isolation of the Hyper-V virtualizer.

  16. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    OpenAIRE

    Devendra T Mourya; Pragya D Yadav; Ajay Khare; Anwar H Khan

    2017-01-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being setup in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no ac...

  17. SU-F-J-41: Experimental Validation of a Cascaded Linear System Model for MVCBCT with a Multi-Layer EPID

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y; Rottmann, J; Myronakis, M; Berbeco, R [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana Farber Cancer Institute and Harvard Medical School, Boston, MA. (United States); Fueglistaller, R; Morf, D [Varian Medical Systems, Dattwil, Aargau (Switzerland); Wang, A; Shedlock, D; Star-Lack, J [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves the SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate the implementation of image binning for two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
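    A projection-space noise power spectrum of the kind used as model input is commonly estimated by averaging squared Fourier transforms of detrended flat-field regions of interest. The sketch below is a generic illustration of that procedure with synthetic images; normalization conventions and ROI sizes are assumptions, not the study's settings:

      import numpy as np

      def nps_2d(flat_fields, roi=64):
          """Estimate a 2D noise power spectrum from a stack of flat-field images
          by averaging the squared FFT of mean-subtracted ROIs."""
          spectra = []
          for img in flat_fields:
              roi_img = img[:roi, :roi].astype(float)
              roi_img -= roi_img.mean()                      # remove the DC term
              f = np.fft.fftshift(np.fft.fft2(roi_img))
              spectra.append(np.abs(f) ** 2 / roi**2)
          return np.mean(spectra, axis=0)

      # Synthetic stand-in for projection flat fields (white noise on a uniform signal).
      rng = np.random.default_rng(1)
      flats = rng.normal(1000.0, 10.0, size=(8, 128, 128))
      nps = nps_2d(flats)
      print(nps.shape, nps.mean())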

  18. Multi-Year Leaf-Level Response to Sub-Ambient and Elevated Experimental CO2 in Betula nana.

    Directory of Open Access Journals (Sweden)

    Alexandra J C Hincke

    Full Text Available The strong link between stomatal frequency and CO2 in woody plants is key for understanding past CO2 dynamics, predicting future change, and evaluating the significant role of vegetation in the hydrological cycle. Experimental validation is required to evaluate the long-term adaptive leaf response of C3 plants to CO2 conditions; however, studies to date have only focused on short-term single-season experiments and may not capture (1) the full ontogeny of leaves under experimental CO2 exposure or (2) the true adjustment of structural stomatal properties to CO2, which we postulate is likely to occur over several growing seasons. We conducted controlled growth chamber experiments at 150 ppmv, 450 ppmv and 800 ppmv CO2 with the woody C3 shrub Betula nana (dwarf birch) over two successive annual growing seasons and evaluated the structural stomatal response to atmospheric CO2 conditions. We find that while some adjustment of leaf morphological and stomatal parameters occurred in the first growing season in which plants were exposed to experimental CO2 conditions, amplified adjustment of non-plastic stomatal properties such as stomatal conductance occurred in the second year of experimental CO2 exposure. We postulate that the species response limit to CO2 of B. nana may occur around 400-450 ppmv. Our findings strongly support the necessity for multi-annual experiments in C3 perennials in order to evaluate the effects of environmental conditions and provide a likely explanation of the contradictory results between historical and palaeobotanical records and experimental data.

  19. Simulation and Validation of the ATLAS Level-1 Topological Trigger

    CERN Document Server

    Bakker, Pepijn Johannes; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment has recently commissioned a new component of its first-level trigger: the L1 topological trigger. This system, using state-of-the-art FPGA processors, makes it possible to reject events by applying topological requirements, such as kinematic criteria involving clusters, jets, muons, and total transverse energy. The data recorded using the L1 topological trigger demonstrate that this innovative trigger strategy allows for an improved rejection rate without efficiency loss. This improvement has been shown for several relevant physics processes leading to low-$p_T$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, an accurate simulation of the L1 topological trigger is used to validate and optimize the performance of this trigger. To reach such an accuracy, this simulation must take into account the fact that the firmware algorithms are executed on an FPGA architecture, while the simulation is executed on a floating-point architecture.

  20. Effects of treatments for experimental bone tumor on prostaglandin E level and bone scintigrams

    Energy Technology Data Exchange (ETDEWEB)

    Otsuka, Nobuaki; Ito, Yasuhiko; Yoneda, Masaya; Muranaka, Akira; Nishishita, Soichi; Morita, Rikushi [Kawasaki Medical School, Kurashiki, Okayama (Japan)

    1983-10-01

    The role of Prostaglandin E (PgE) level was studied experimentally as follows: 1) intrahepatic implantation of VX-2, 2) intravenous injection of VX-2, 3) effect of treatments on intramedullary implanted VX-2. The levels of PgE in intrahepatic and intravenous transplantation were not higher than that of intramedullary transplantation. Mitomycin C (MMC) did not reduce the PgE level and appearance time of bone scan abnormality was the same as that of untreated animals. A combination of indomethacin and MMC caused a delay in appearance time of bone scan abnormalities.

  1. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

    This work is carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1: PMATCH, A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions, and their use in the program is discussed. The problems arising when there are not sufficient thermodynamic data available to describe a species' behaviour under all conceivable conditions are thoroughly discussed, and such data are handled by approximating expressions. Part 2: The Experimental Validation of Geochemical Computer Models presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well and that its database was of sufficient validity. However, it was observed that experimental difficulties could hardly be avoided when, as here, a gaseous component took part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, abnormal effects appeared when manganese and calcium carbonates were mixed, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months, whereby the tracer became more strongly adsorbed onto calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the 'species' NaEu(CO 3 ) 2

  2. Development of boiling transition analysis code TCAPE-INS/B based on mechanistic methods for BWR fuel bundles. Models and validations with boiling transition experimental data

    International Nuclear Information System (INIS)

    Ishida, Naoyuki; Utsuno, Hideaki; Kasahara, Fumio

    2003-01-01

    The boiling transition (BT) analysis code TCAPE-INS/B, based on mechanistic methods coupled with subchannel analysis, has been developed for evaluating the integrity of Boiling Water Reactor (BWR) fuel rod bundles under abnormal operations. The objective of the development is to evaluate BT without using the empirical BT and rewetting correlations that the current analysis methods require for each bundle design. TCAPE-INS/B consists mainly of the drift-flux model, the film flow model, the cross-flow model, the thermal conductivity model and the heat transfer correlations. These models were validated systematically against the experimental data. The accuracy of the predictions of the steady-state Critical Heat Flux (CHF) and of the transient fuel rod surface temperature after the occurrence of BT was evaluated in the validations. Calculations of single-tube and bundle experiments were carried out to validate the models incorporated in the code. The results showed that the steady-state CHF was predicted within about 6% average error. In the transient calculations, the BT timing and the fuel rod surface temperature gradient agreed well with the experimental results, but rewetting was predicted late. Therefore, the modeling of heat transfer phenomena during post-BT is under modification. (author)

  3. Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care

    NARCIS (Netherlands)

    Smirnova, Alina; Lombarts, Kiki M. J. M. H.; Arah, Onyebuchi A.; van der Vleuten, Cees P. M.

    2017-01-01

    Background: Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. Objective: To validate the

  4. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  5. Testing the Validity of Local Flux Laws in an Experimental Eroding Landscape

    Science.gov (United States)

    Sweeney, K. E.; Roering, J. J.; Ellis, C.

    2015-12-01

    Linking sediment transport to landscape evolution is fundamental to interpreting climate and tectonic signals from topography and sedimentary deposits. Most geomorphic process laws consist of simple continuum relationships between sediment flux and local topography. However, recent work has shown that nonlocal formulations, whereby sediment flux depends on upslope conditions, are more accurate descriptions of sediment motion, particularly in steep topography. Discriminating between local and nonlocal processes in natural landscapes is complicated by the scarcity of high-resolution topographic data and by the difficulty of measuring sediment flux. To test the validity of local formulations of sediment transport, we use an experimental erosive landscape that combines disturbance-driven, diffusive sediment transport and surface runoff. We conducted our experiments in the eXperimental Landscape Model at St. Anthony Falls Laboratory, a 0.5 x 0.5 m test flume filled with crystalline silica (D50 = 30 μm) mixed with water to increase cohesion and preclude surface infiltration. Topography is measured with a sheet laser scanner; total sediment flux is tracked with a series of load cells. We simulate uplift (relative baselevel fall) by dropping two parallel weirs at the edges of the experiment. Diffusive sediment transport in our experiments is driven by rainsplash from a constant-head drip tank fitted with 625 blunt needles of fixed diameter; sediment is mobilized both through drop impact and the subsequent runoff of the drops. To drive advective transport, we produce surface runoff via a ring of misters that produce droplets that are too small to disturb the sediment surface on impact. Using the results from five experiments that systematically vary the time of drip box rainfall relative to misting rainfall, we calculate local erosion in our experiments by differencing successive time-slices of topography and test whether these patterns are related to local topographic
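    Differencing successive topography scans to obtain local erosion is conceptually simple: subtract the later surface from the earlier one and sum the lowering over the grid. The sketch below illustrates the bookkeeping on a synthetic surface; the grid size, surface shape, and lowering pattern are invented for illustration, not the experiment's data:

      import numpy as np

      def local_erosion(dem_before, dem_after, cell_area):
          """Local erosion depth (positive where material was removed) and total
          eroded volume from two successive topography scans on the same grid."""
          dz = dem_before - dem_after          # lowering of the surface
          eroded = np.clip(dz, 0.0, None)      # ignore cells that aggraded
          return eroded, float(eroded.sum() * cell_area)

      # Synthetic 0.5 m x 0.5 m surface on a 1 mm grid, lowered near one edge.
      x = np.linspace(0.0, 0.5, 500)
      before = np.tile(0.1 + 0.02 * x, (500, 1))
      after = before - 0.001 * np.exp(-((x - 0.0) / 0.05) ** 2)  # baselevel-fall signal
      depth, volume = local_erosion(before, after, cell_area=0.001**2)
      print(f"max erosion {depth.max()*1000:.2f} mm, volume {volume*1e6:.1f} cm^3")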

  6. Mini-channel flow experiments and CFD validation analyses with the IFMIF Thermo- Hydraulic Experimental facility (ITHEX)

    International Nuclear Information System (INIS)

    Arbeiter, F.; Heinzel, V.; Leichtle, D.; Stratmanns, E.; Gordeev, S.

    2006-01-01

    The design of the IFMIF High Flux Test Module (HFTM) is based on predictions for the heat transfer in narrow channels conducting helium flow with a 50 °C inlet temperature at 0.3 MPa. The resulting helium flow conditions are in the transition regime from laminar to turbulent flow. The rectangular cooling channels are too short for the full development of the coolant flow. Relaminarization along the cooling passage is expected. At the shorter sides of the channels secondary flow occurs, which may have an impact on the temperature field inside the irradiation specimen stack. As these conditions are not covered by available experimental data, the dedicated gas loop ITHEX has been constructed to operate up to a pressure of 0.42 MPa and temperatures of 200 °C. Its objective is to conduct experiments for the validation of the STAR-CD CFD code used for the design of the HFTM. As a first stage, two annular test sections with a hydraulic diameter of 1.2 mm have been used, with the experiments varied with respect to gas species (N 2 , He), inlet pressure, dimensionless heating span and Reynolds number, encompassing the range of operational parameters of the HFTM. Local friction factors and Nusselt numbers have been obtained, giving evidence that the transition regime extends up to Reynolds numbers of about 10,000. For heating rates comparable to the HFTM filled with RAFM steels, local heat transfer coefficients are consistent with the measured friction data. To validate local velocity profiles, the ITHEX facility was further equipped with a flat rectangular test section and a Laser Doppler Anemometry (LDA) system. An appropriate optical system has been developed and tested for the tiny observation volume of 40 μm diameter. Velocity profiles induced by the transition from a wide inlet plenum to the flat mini-channels have been measured. Whereas the CFD models were able to reproduce the patterns far away from the nozzle, they show some disagreement for the conditions at the

  7. Optimization and design of an aircraft's morphing wing-tip demonstrator for drag reduction at low speeds, Part II - Experimental validation using Infra-Red transition measurement from Wind Tunnel tests

    Directory of Open Access Journals (Sweden)

    Andreea Koreanschi

    2017-02-01

    Full Text Available In the present paper, an ‘in-house’ genetic algorithm was numerically and experimentally validated. The genetic algorithm was applied to an optimization problem for improving the aerodynamic performance of an aircraft wing tip through upper-surface morphing. The optimization was performed for 16 flight cases expressed in terms of various combinations of speeds, angles of attack and aileron deflections. The displacements resulting from the optimization were used during the wind tunnel tests of the wing-tip demonstrator to control the actuators that change the upper-surface shape of the wing. The results of the optimization of the flow behavior for the airfoil upper-surface morphing problem were validated against wind tunnel experimental transition results obtained with infra-red thermography on the wing-tip demonstrator. The validation proved that the 2D numerical optimization using the ‘in-house’ genetic algorithm is an appropriate tool for improving various aspects of a wing’s aerodynamic performance.

  8. Combined Heat Transfer in High-Porosity High-Temperature Fibrous Insulations: Theory and Experimental Validation

    Science.gov (United States)

    Daryabeigi, Kamran; Cunnington, George R.; Miller, Steve D.; Knutson, Jeffry R.

    2010-01-01

    Combined radiation and conduction heat transfer through various high-temperature, high-porosity, unbonded (loose) fibrous insulations was modeled based on first principles. The diffusion approximation was used for modeling the radiation component of heat transfer in the optically thick insulations. The relevant parameters needed for the heat transfer model were derived from experimental data. Semi-empirical formulations were used to model the solid conduction contribution of heat transfer in fibrous insulations with the relevant parameters inferred from thermal conductivity measurements at cryogenic temperatures in a vacuum. The specific extinction coefficient for radiation heat transfer was obtained from high-temperature steady-state thermal measurements with large temperature gradients maintained across the sample thickness in a vacuum. Standard gas conduction modeling was used in the heat transfer formulation. This heat transfer modeling methodology was applied to silica, two types of alumina, and a zirconia-based fibrous insulation, and to a variation of opacified fibrous insulation (OFI). OFI is a class of insulations manufactured by embedding efficient ceramic opacifiers in various unbonded fibrous insulations to significantly attenuate the radiation component of heat transfer. The heat transfer modeling methodology was validated by comparison with more rigorous analytical solutions and with standard thermal conductivity measurements. The validated heat transfer model is applicable to various densities of these high-porosity insulations as long as the fiber properties are the same (index of refraction, size distribution, orientation, and length). Furthermore, the heat transfer data for these insulations can be obtained at any static pressure in any working gas environment without the need to perform tests in various gases at various pressures.
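    Under the optically thick (diffusion) approximation referred to above, radiation is commonly folded into an effective conductivity of the standard form 16·n²·σ·T³/(3·e·ρ), added to the solid and gas conduction contributions, where e is the specific extinction coefficient and ρ the insulation density. The sketch below illustrates that combination; the property values are assumptions chosen for illustration, not the fitted parameters of the study:

      SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def effective_conductivity(T, k_solid, k_gas, spec_extinction, density, n_index=1.0):
          """Effective conductivity of an optically thick fibrous insulation:
          solid conduction + gas conduction + a radiative term from the
          diffusion approximation, k_rad = 16 n^2 sigma T^3 / (3 e rho)."""
          k_rad = 16.0 * n_index**2 * SIGMA * T**3 / (3.0 * spec_extinction * density)
          return k_solid + k_gas + k_rad

      # Example: a 24 kg/m^3 insulation with an assumed specific extinction of 30 m^2/kg.
      for T in (300.0, 800.0, 1300.0):
          k = effective_conductivity(T, k_solid=0.003, k_gas=0.02,
                                     spec_extinction=30.0, density=24.0)
          print(f"T = {T:5.0f} K  k_eff = {k:.4f} W/m-K")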

  9. Development of Safety Analysis Codes and Experimental Validation for a Very High Temperature Gas-Cooled Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Chang, H. Oh, PhD; Cliff Davis; Richard Moore

    2004-11-01

    The very high temperature gas-cooled reactors (VHTGRs) are those concepts that have average coolant temperatures above 900 degrees C or operational fuel temperatures above 1250 degrees C. These concepts provide the potential for increased energy conversion efficiency and for high-temperature process heat application in addition to power generation and nuclear hydrogen generation. While all the High Temperature Gas Cooled Reactor (HTGR) concepts have sufficiently high temperatures to support process heat applications, such as desalination and cogeneration, the VHTGR's higher temperatures are suitable for particular applications such as thermochemical hydrogen production. However, the high temperature operation can be detrimental to safety following a loss-of-coolant accident (LOCA) initiated by pipe breaks caused by seismic or other events. Following the loss of coolant through the break and coolant depressurization, air from the containment will enter the core by molecular diffusion and ultimately by natural convection, leading to oxidation of the in-core graphite structures and fuel. The oxidation will release heat and accelerate the heatup of the reactor core. Thus, without any effective countermeasures, a pipe break may lead to significant fuel damage and fission product release. The Idaho National Engineering and Environmental Laboratory (INEEL) has investigated this event for the past three years for the HTGR. However, neither the computer codes used nor, in fact, any of the world's computer codes have been sufficiently developed and validated to reliably predict this event. New code development, improvement of the existing codes, and experimental validation are imperative to narrow the uncertainty in the predictions of this type of accident. The objectives of this Korean/United States collaboration are to develop advanced computational methods for VHTGR safety analysis codes and to validate these computer codes.

  10. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility, characterized by the presence of a low-voltage network with a 15 kW-190 kWh Vanadium Redox Flow battery system and an 11 kW wind turbine. The generation set is connected to the local network and is fully controllable by the SCADA system. The control strategies, implemented on a local PC interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present the desired power or energy profiles at the point of common coupling.
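    The basic control idea, commanding the battery to absorb or release the difference between the renewable in-feed and a desired profile at the point of common coupling, can be sketched as below. This is a simplified illustration, not the SYSLAB implementation; the target profile, time step, and lossless state-of-charge update are assumptions, while the 15 kW / 190 kWh ratings follow the unit described above:

      def battery_setpoint(p_wind, p_pcc_target, soc, dt, p_max=15.0, e_max=190.0):
          """Charge/discharge command (kW, positive = charging) that makes the
          point of common coupling follow a target profile, limited by the
          battery's rated power and its state of charge (kWh)."""
          p_cmd = p_wind - p_pcc_target                   # surplus is stored
          p_cmd = max(-p_max, min(p_max, p_cmd))          # respect the power rating
          headroom = (e_max - soc) / dt                   # kW it can still absorb
          available = soc / dt                            # kW it can still deliver
          p_cmd = max(-available, min(headroom, p_cmd))   # respect the energy limits
          soc_next = soc + p_cmd * dt                     # ideal, lossless update
          return p_cmd, soc_next

      # One hour with a fluctuating wind in-feed and a flat 5 kW export target.
      soc = 95.0  # kWh
      for p_wind in (2.0, 8.0, 11.0, 4.0):
          cmd, soc = battery_setpoint(p_wind, p_pcc_target=5.0, soc=soc, dt=0.25)
          print(f"wind {p_wind:4.1f} kW -> battery {cmd:+5.2f} kW, SoC {soc:6.1f} kWh")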

  11. A critical experimental study of the classical tactile threshold theory

    Directory of Open Access Journals (Sweden)

    Medina Leonel E

    2010-06-01

    Full Text Available Background: The tactile sense is being used in a variety of applications involving tactile human-machine interfaces. In a significant number of publications the classical threshold concept plays a central role in modelling and explaining psychophysical experimental results such as stochastic resonance (SR) phenomena. In SR, noise enhances detection of sub-threshold stimuli, and the phenomenon is explained by stating that the amplitude required to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. We designed an experiment to test the validity of the classical vibrotactile threshold. Using a second-choice experiment, we show that individuals can order sensorial events below the level known as the classical threshold. If the observer's sensorial system were not activated by stimuli below the threshold, then a second choice could not be above the chance level. Nevertheless, our experimental results are above that chance level, contradicting the definition of the classical tactile threshold. Results: We performed a three-alternative forced-choice detection experiment on 6 subjects, asking them for first and second choices. In each trial, only one of the intervals contained a stimulus and the others contained only noise. According to the classical threshold assumptions, a correct second-choice response corresponds to a guess attempt with a statistical frequency of 50%. The results show an average of 67.35% (STD = 1.41%) for the second-choice responses, which is not explained by the classical threshold definition. Additionally, for low stimulus amplitudes, second-choice correct detection is above chance level for any detectability level. Conclusions: Using a second-choice experiment, we show that individuals can order sensorial events below the level known as a classical threshold. If the observer's sensorial system is not activated by stimuli below the threshold, then a second choice could not be above the chance
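    The comparison against the 50% guessing rate predicted by the classical threshold model can be made with an exact one-sided binomial test on the second-choice counts. The sketch below uses hypothetical counts (roughly matching the reported 67% rate), not the study's raw data:

      from math import comb

      def binomial_p_above_chance(correct, trials, chance=0.5):
          """One-sided exact binomial p-value for observing at least `correct`
          successes in `trials` attempts if the true rate were `chance`."""
          return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                     for k in range(correct, trials + 1))

      # Hypothetical counts: 200 second-choice trials, 135 correct (~67.5%),
      # tested against the 50% rate predicted by the classical threshold model.
      p = binomial_p_above_chance(135, 200)
      print(f"p = {p:.2e}")  # far below 0.05 -> second choices are above chance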

  12. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    Science.gov (United States)

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
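    For orientation, the classical Higuchi model that this work extends predicts release proportional to the square root of time, Q(t) = √(D·cs·(2·c0 − cs)·t), valid while the drug loading c0 is well above the solubility cs. A minimal sketch with illustrative parameter values, not the study's fitted ones:

      from math import sqrt

      def higuchi_release(D, c0, cs, t):
          """Cumulative amount released per unit area from the classical Higuchi
          model: Q = sqrt(D * (2*c0 - cs) * cs * t)."""
          return sqrt(D * (2.0 * c0 - cs) * cs * t)

      D = 1.0e-7   # cm^2/s, assumed diffusion coefficient in the matrix
      c0 = 50.0    # mg/cm^3, assumed drug loading
      cs = 2.0     # mg/cm^3, assumed drug solubility in the matrix
      for hours in (1, 4, 24):
          q = higuchi_release(D, c0, cs, hours * 3600.0)
          print(f"{hours:2d} h  Q = {q:.3f} mg/cm^2")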

  13. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are

  14. Experimental investigation of atomic lifetimes for the 2p53l levels in Ne-like sulphur

    International Nuclear Information System (INIS)

    Kirm, M.; Bengtsson, P.; Engstroem, L.

    1996-01-01

    This paper reports an experimental investigation of lifetimes of the 2p 5 3l levels in S VII, using the beam-foil method. Results are also given for some levels belonging to the 2p 5 4p, 4f and 5g configurations. All 3l lifetimes are obtained after extensive cascade corrections utilizing the non-linear ANDC technique along the decay chain 2p 6 -2p 5 3s-3p-3d-4f-5g. This work is the first investigation in the Ne-sequence to incorporate cascade corrections also for the 3d levels, and this is found to reduce the lifetimes by about 20% compared to previous experimental studies. For the very rapid 3s 1 P 1 decay, which is measured using the resonance transition at 72 A, we find that subtraction of the foil-position dependent background is important for a proper analysis and that this correction leads to a reduction in the evaluated lifetime by about 15%. With these experimental improvements all 3l lifetimes obtained are in good general agreement with recent theoretical predictions. (orig.)

  15. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. This way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  16. Validations of calibration-free measurements of electron temperature using double-pass Thomson scattering diagnostics from theoretical and experimental aspects

    Energy Technology Data Exchange (ETDEWEB)

    Tojo, H., E-mail: tojo.hiroshi@qst.go.jp; Hiratsuka, J.; Yatsuka, E.; Hatae, T.; Itami, K. [National Institutes for Quantum and Radiological Science and Technology, 801-1 Mukoyama, Naka 311-0193 (Japan); Yamada, I.; Yasuhara, R.; Funaba, H.; Hayashi, H. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Ejiri, A.; Togashi, H.; Takase, Y. [Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa 277-8561 (Japan)

    2016-09-15

    This paper evaluates the accuracy of electron temperature measurements and relative transmissivities of double-pass Thomson scattering diagnostics. The electron temperature (T e ) is obtained from the ratio of signals from a double-pass scattering system, and the relative transmissivities are then calculated from the measured T e and the intensity of the signals. The accuracy of these values depends on the electron temperature (T e ) and the scattering angle (θ), and it was therefore evaluated experimentally using the Large Helical Device (LHD) and the Tokyo spherical tokamak-2 (TST-2). Analysis of the data from the TST-2 indicates that a high T e and a large scattering angle (θ) yield accurate values. Indeed, the errors for scattering angle θ = 135° are approximately half of those for θ = 115°. The method of determining T e over a wide range spanning two orders of magnitude (0.01–1.5 keV) was validated using the experimental results of the LHD and TST-2. A simple method to provide relative transmissivities, which include inputs from the collection optics, vacuum window, optical fibers, and polychromators, is also presented. The relative errors were less than approximately 10%. Numerical simulations also indicate that the T e measurements are valid under harsh radiation conditions. This method of obtaining T e can be considered for the design of Thomson scattering systems where high-performance plasma generates harsh radiation environments.

  17. Experimental study of the natural circulation phenomena

    International Nuclear Information System (INIS)

    Sabundjian, Gaiane; Andrade, Delvonei Alves de; Umbehaun, Pedro E.; Torres, Walmir M.; Castro, Alfredo Jose Alvim de; Belchior Junior, Antonio; Rocha, Ricardo Takeshi Vieira da; Damy, Osvaldo Luiz de Almeida; Torres, Eduardo

    2006-01-01

    The objective of this paper is to study natural circulation in experimental loops and to extend the results to nuclear facilities. A new generation of compact nuclear power plants uses natural circulation as the cooling and residual heat removal system in case of accidents or shutdown. Interest in this phenomenon within the scientific community has increased lately. The experimental loop described in this paper was assembled at Escola Politecnica - USP at the Chemical Engineering Department. Its goal is to generate information to help with the understanding of the one- and two-phase natural circulation phenomena. Some experiments were performed with different levels of heat power and different flows of cooling water in the secondary circuit. The data generated from these experiments will be used to validate computational thermal-hydraulic codes. Experimental results for one- and two-phase regimes are presented, as well as the proposed model for simulating the flow regimes with the RELAP5 code. (author)

  18. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology will guide us in detecting the failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be made with a DSA, differential sensitivity analysis, and with a MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed. Residual analysis, which has been made in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings, Esp., studying the behavior of building components in a Test Cell of LECE of CIEMAT, is presented. (Author) 17 refs

  19. Calculations for Adjusting Endogenous Biomarker Levels During Analytical Recovery Assessments for Ligand-Binding Assay Bioanalytical Method Validation.

    Science.gov (United States)

    Marcelletti, John F; Evans, Cindy L; Saxena, Manju; Lopez, Adriana E

    2015-07-01

    It is often necessary to adjust for detectable endogenous biomarker levels in spiked validation samples (VS) and in selectivity determinations during bioanalytical method validation for ligand-binding assays (LBA) with a matrix like normal human serum (NHS). Described herein are case studies of biomarker analyses using multiplex LBA which highlight the challenges associated with such adjustments when calculating percent analytical recovery (%AR). The LBA test methods were the Meso Scale Discovery V-PLEX® proinflammatory and cytokine panels with NHS as test matrix. The NHS matrix blank exhibited varied endogenous content of the 20 individual cytokines before spiking, ranging from undetectable to readily quantifiable. Addition and subtraction methods for adjusting endogenous cytokine levels in %AR calculations are both used in the bioanalytical field. The two methods were compared in %AR calculations following spiking and analysis of VS for cytokines having detectable endogenous levels in NHS. Calculations for %AR obtained by subtracting quantifiable endogenous biomarker concentrations from the respective total analytical VS values yielded reproducible and credible conclusions. The addition method, in contrast, yielded %AR conclusions that were frequently unreliable and discordant with values obtained with the subtraction adjustment method. It is shown that subtraction of assay signal attributable to matrix is a feasible alternative when endogenous biomarkers levels are below the limit of quantitation, but above the limit of detection. These analyses confirm that the subtraction method is preferable over that using addition to adjust for detectable endogenous biomarker levels when calculating %AR for biomarker LBA.
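    The two adjustment approaches contrasted above differ only in where the detectable endogenous level enters the arithmetic: subtracted from the measured total, or added to the expected concentration. A minimal sketch with hypothetical concentrations (not values from the study):

      def percent_ar_subtraction(measured_total, endogenous, nominal_spike):
          """%AR with the endogenous level subtracted from the measured total."""
          return 100.0 * (measured_total - endogenous) / nominal_spike

      def percent_ar_addition(measured_total, endogenous, nominal_spike):
          """%AR with the endogenous level added to the expected concentration."""
          return 100.0 * measured_total / (nominal_spike + endogenous)

      # Hypothetical cytokine VS: 25 pg/mL endogenous, 100 pg/mL spiked,
      # 120 pg/mL measured in the spiked sample.
      endogenous, spike, measured = 25.0, 100.0, 120.0
      print(f"subtraction method: {percent_ar_subtraction(measured, endogenous, spike):.1f}%")
      print(f"addition method:    {percent_ar_addition(measured, endogenous, spike):.1f}%")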

  20. Serum levels of cytokines in water buffaloes experimentally infected with Fasciola gigantica.

    Science.gov (United States)

    Zhang, Fu-Kai; Guo, Ai-Jiang; Hou, Jun-Ling; Sun, Miao-Miao; Sheng, Zhao-An; Zhang, Xiao-Xuan; Huang, Wei-Yi; Elsheikha, Hany M; Zhu, Xing-Quan

    2017-09-15

    Fasciola gigantica infection in water buffaloes causes significant economic losses especially in developing countries. Although modulation of the host immune response by cytokine neutralization or vaccination is a promising approach to control infection with this parasite, our understanding of cytokine's dynamic during F. gigantica infection is limited. To address this, we quantified the levels of serum cytokines produced in water buffaloes following experimental infection with F. gigantica. Five buffaloes were infected via oral gavage with 500 viable F. gigantica metacercariae and blood samples were collected from buffaloes one week before infection and for 13 consecutive weeks thereafter. The levels of 10 cytokines in serum samples were simultaneously determined using ELISA. F. gigantica failed to elicit the production of various pro-inflammatory cytokines, including interleukin-1β (IL-1β), IL-2, IL-6, IL-12, and IFN-γ. On the other hand, evidence of a Th2 type response was detected, but only early in the course of parasite colonization and included modest increase in the levels of IL-10 and IL-13. The results also revealed suppression of the immune responses as a feature of chronic F. gigantica infection in buffaloes. Taken together, F. gigantica seems to elicit a modest Th2 response at early stage of infection in order to downregulate harmful Th1- and Th17-type inflammatory responses in experimentally infected buffaloes. The full extent of anti-F. gigantica immune response and its relation to pathogenesis requires further study. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Real time risk analysis of kick detection: Testing and validation

    International Nuclear Information System (INIS)

    Islam, Rakibul; Khan, Faisal; Venkatesan, Ramchandran

    2017-01-01

    Oil and gas development is moving into harsh and remote locations where the highest level of safety is required. A blowout is one of the most feared accidents in oil and gas development projects. The main objective of this paper is to test and validate the kick detection component of a blowout risk assessment model using uniquely developed experimental results. Kick detection is a major part of the blowout risk assessment model. The accuracy and timeliness of kick detection depend on the monitoring of multiple downhole parameters such as downhole pressure, fluid density, fluid conductivity and mass flow rate. In the present study these four parameters are considered in different logical combinations to assess the occurrence of a kick and the associated blowout risk. The assessed results are compared against the experimental observations. It is observed that simultaneous monitoring of the mass flow rate combined with any one of the other three parameters provides the most reliable detection of a kick and of the potential blowout likelihood. The current work presents the framework for a dynamic risk assessment and management model. Upon successful testing of this approach at the pilot and field levels, it could provide a paradigm shift in drilling safety. - Highlights: • A novel dynamic risk model of kick detection and blowout prediction. • Testing and validation of the risk model. • Application of the dynamic risk model.
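    The monitoring logic described above, mass flow rate combined with any one of the other downhole parameters, can be illustrated with a simple rule-based sketch. The thresholds and hard true/false decision are placeholder assumptions for illustration; the paper's model works with dynamic probabilities rather than fixed limits:

      def kick_alarm(d_flow, d_pressure, d_density, d_conductivity,
                     flow_thr=0.5, pressure_thr=0.3, density_thr=0.02, cond_thr=0.05):
          """Raise a kick alarm when the mass-flow-rate deviation exceeds its
          threshold together with at least one of the other three indicators."""
          flow_flag = abs(d_flow) > flow_thr
          others = [abs(d_pressure) > pressure_thr,
                    abs(d_density) > density_thr,
                    abs(d_conductivity) > cond_thr]
          return flow_flag and any(others)

      # Example deviations (normalized changes from the expected drilling baseline).
      print(kick_alarm(d_flow=0.8, d_pressure=0.4, d_density=0.0, d_conductivity=0.0))  # True
      print(kick_alarm(d_flow=0.8, d_pressure=0.1, d_density=0.0, d_conductivity=0.0))  # False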

  2. Validation of the REL2005 code package on Gd-poisoned PWR type assemblies through the CAMELEON experimental program

    International Nuclear Information System (INIS)

    Blaise, Patrick; Vidal, Jean-Francois; Santamarina, Alain

    2009-01-01

    This paper details the validation of Gd-poisoned 17x17 PWR lattices, through several configurations of the CAMELEON experimental program, using the newly qualified REL2005 French code package. After a general presentation of the CAMELEON program, which took place in the EOLE critical facility at Cadarache, the new REL2005 code package is described; it relies on the deterministic transport code APOLLO2.8, based on the method of characteristics (MOC), and on its new CEA2005 library based on the latest JEFF-3.1.1 nuclear data evaluation. For critical masses, the average Calculation-to-Experiment C/E's on the k eff are (136 ± 80) pcm and (300 ± 76) pcm for the reference 281-group MOC and optimized 26-group MOC schemes, respectively. These values also include a significant improvement of about 250 pcm due to the change in the library from JEF2.2 to JEFF3.1. For pin-by-pin radial power distributions, the reference and REL2005 results are very close, with maximum discrepancies of the order of 2%, i.e., within the experimental uncertainty limits. The optimized REL2005 code package predicts the reactivity worth of the Gd clusters (averaged over 9 experimental configurations) as C/E Δρ(Gd clusters) = +1.3% ± 2.3%. (author)

  3. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
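    As a concrete illustration of that first, simple interpretation of calibration, the sketch below adjusts the parameters of a toy model so that its agreement with a set of synthetic "experimental" data is maximized in the least-squares sense. The model form, parameter values, and data are invented for illustration and are not tied to any particular code:

      import numpy as np
      from scipy.optimize import curve_fit

      def model(x, a, b):
          """Toy computational model whose parameters a and b are to be calibrated."""
          return a * np.exp(-b * x)

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, 2.0, 40)
      y_exp = 3.0 * np.exp(-1.5 * x) + 0.02 * rng.standard_normal(x.size)  # "experiment"

      # Calibration: adjust (a, b) to maximize agreement with the data.
      p_opt, p_cov = curve_fit(model, x, y_exp, p0=[1.0, 1.0])
      print("calibrated parameters:", p_opt)              # close to [3.0, 1.5]
      print("parameter uncertainties:", np.sqrt(np.diag(p_cov)))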

  4. Experimental nuclear level densities and γ-ray strength functions in Sc and V isotopes

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Ingebretsen, F.; Messelt, S.; Rekstad, J.; Siem, S.; Syed, N. U. H.; Chankova, R.; Loennroth, T.; Schiller, A.; Voinov, A.

    2008-01-01

    The nuclear physics group at the Oslo Cyclotron Laboratory has developed a method to extract the nuclear level density and γ-ray strength function from first-generation γ-ray spectra. In this work the method is applied to the nuclei 44,45 Sc and 50,51 V. The experimental level densities of 44,45 Sc are compared to calculated level densities using a microscopic model based on BCS quasiparticles within the Nilsson level scheme. The γ-ray strength functions are also compared to theoretical expectations, showing an unexpected enhancement of the γ-ray strength for low γ energies (E γ ≤3 MeV) in all the isotopes studied here. The physical origin of this enhancement is not yet understood.

  5. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance.

    Science.gov (United States)

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Design, Manufacturing and Experimental Validation of Optical Fiber Sensors Based Devices for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Angela CORICCIATI

    2016-06-01

    Full Text Available The use of optical fiber sensors is a promising and rising technique for Structural Health Monitoring (SHM), because it permits continuous monitoring of the strain and the temperature of the structure where the sensors are applied. In the present paper three different types of smart devices, that is, composite materials with an optical fiber sensor embedded inside them during the manufacturing process, are described: the Smart Patch, the Smart Rebar and the Smart Textile, which are respectively a plate for local exterior intervention, a rod for interior shear and flexural reinforcement, and a textile for whole-surface external application. In addition to the monitoring aim, a possible additional function of these devices is the reinforcement of the structures where they are applied. In the present work, after a description of the manufacturing technology, the experimental laboratory characterization of each device is discussed. Finally, the application of the smart devices to medium-scale masonry walls and their validation by mechanical tests are described.

  7. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on radioactive high-level waste disposal and is generally understood to be related to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible, but this goal is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculations. The paper proposes either abandoning the use of this term or agreeing on a definition which would be common to all. (J.S.). 29 refs

  8. Experimental study of rectenna coupling at low power level

    International Nuclear Information System (INIS)

    Douyère, A; Alicalapa, F; Lan Sun Luk, J-D; Rivière, S

    2013-01-01

    The experimental results presented in this paper focus on the performance of a rectenna array by studying the effect of mutual coupling between two rectennas. Measurements in several planes of the space are investigated and used to define the minimum distance for future rectenna arrays that can be used at a low power density level. The single element chosen for the array is composed of a rectifier circuit and a CSPA (Circular Slot Patch Antenna). This study shows that, at a distance greater than 6 cm (λ/2) between two rectennas in reception, the received DC voltage is constant in the Y plane, while in the X plane the received DC voltage remains constant whatever the distance. We deduce that these rectennas are uncoupled in this case, and each rectenna can be considered an independent system.

  9. Validation of hindi translation of DSM-5 level 1 cross-cutting symptom measure.

    Science.gov (United States)

    Goel, Ankit; Kataria, Dinesh

    2018-04-01

    The DSM-5 Level 1 Cross-Cutting Symptom Measure is a self- or informant-rated measure that assesses mental health domains which are important across psychiatric diagnoses. The absence of this self- or informant-administered instrument in Hindi, which is a major language in India, is an important limitation in using this scale. To translate the English version of the DSM-5 Level 1 Cross-Cutting Symptom Measure to Hindi and evaluate its psychometric properties. The study was conducted at a tertiary care hospital in Delhi. The DSM-5 Level 1 Cross-Cutting Symptom Measure was translated into Hindi using the World Health Organization's translation methodology. Mean and standard deviation were evaluated for continuous variables, while for categorical variables frequency and percentages were calculated. The translated version was evaluated for cross-language equivalence, test-retest reliability, internal consistency, and split-half reliability. The Hindi version was found to have good cross-language equivalence and test-retest reliability at the level of items and domains. Twenty-two of the 23 items and all the 23 items had a significant correlation (ρ ...). The DSM-5 Level 1 Cross-Cutting Symptom Measure as translated in this study is a valid instrument. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Energetics and stability of azulene: From experimental thermochemistry to high-level quantum chemical calculations

    International Nuclear Information System (INIS)

    Sousa, Clara C.S.; Matos, M. Agostinha R.; Morais, Victor M.F.

    2014-01-01

    Highlights:
    • Experimental standard molar enthalpies of formation and sublimation of azulene.
    • Mini-bomb combustion calorimetry and sublimation Calvet microcalorimetry.
    • High-level composite ab initio calculations.
    • Computational estimate of the enthalpy of formation of azulene.
    • Discussion of the stability and aromaticity of azulene.
    Abstract: The standard (p° = 0.1 MPa) molar enthalpy of formation for crystalline azulene was derived from the standard molar enthalpy of combustion, in oxygen, at T = 298.15 K, measured in a mini-bomb combustion calorimeter (aneroid isoperibol calorimeter), and the standard molar enthalpy of sublimation, at T = 298.15 K, measured by Calvet microcalorimetry. From these experiments, the standard molar enthalpy of formation of azulene in the gaseous phase at T = 298.15 K was calculated. In addition, very accurate quantum chemical calculations at the G3 and G4 composite levels were conducted in order to corroborate our experimental findings and further clarify and establish the definitive standard enthalpy of formation of this interesting non-benzenoid hydrocarbon.
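
    The route from the two measured quantities to the gas-phase value is the usual thermodynamic cycle; written out in generic notation (the symbols and the combustion stoichiometry of azulene, C10H8, are standard, and no numerical values from the paper are implied):

      \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{cr},\,298.15\ \mathrm{K}) =
          10\,\Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{CO_{2},\,g})
        + 4\,\Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{H_{2}O,\,l})
        - \Delta_{\mathrm{c}}H^{\circ}_{\mathrm{m}}(\mathrm{cr})

      \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{g},\,298.15\ \mathrm{K}) =
          \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{cr},\,298.15\ \mathrm{K})
        + \Delta^{\mathrm{g}}_{\mathrm{cr}}H^{\circ}_{\mathrm{m}}(298.15\ \mathrm{K})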

  11. Experimental validation of flexible multibody dynamics beam formulations

    Energy Technology Data Exchange (ETDEWEB)

    Bauchau, Olivier A., E-mail: olivier.bauchau@sjtu.edu.cn; Han, Shilei [University of Michigan-Shanghai Jiao Tong University Joint Institute (China); Mikkola, Aki; Matikainen, Marko K. [Lappeenranta University of Technology, Department of Mechanical Engineering (Finland); Gruber, Peter [Austrian Center of Competence in Mechatronics GmbH (Austria)

    2015-08-15

    In this paper, the accuracies of the geometrically exact beam and absolute nodal coordinate formulations are studied by comparing their predictions against an experimental data set referred to as the “Princeton beam experiment.” The experiment deals with a cantilevered beam experiencing coupled flap, lag, and twist deformations. In the absolute nodal coordinate formulation, two different beam elements are used. The first is based on a shear deformable approach in which the element kinematics is described using two nodes. The second is based on a recently proposed approach featuring three nodes. The numerical results for the geometrically exact beam formulation and the recently proposed three-node absolute nodal coordinate formulation agree well with the experimental data. The two-node beam element predictions are similar to those of linear beam theory. This study suggests that a careful and thorough evaluation of beam elements must be carried out to assess their ability to deal with the three-dimensional deformations typically found in flexible multibody systems.

  12. Nonisothermal hydrologic transport experimental plan

    International Nuclear Information System (INIS)

    Rasmussen, T.C.; Evans, D.D.

    1992-09-01

    A field heater experimental plan is presented for investigating hydrologic transport processes in unsaturated fractured rock related to the disposal of high-level radioactive waste (HLW) in an underground repository. The experimental plan provides a methodology for obtaining the data required for evaluating conceptual and computer models related to HLW isolation in an environment where significant heat energy is produced. Coupled-process models are currently limited by the lack of validation data appropriate for field scales that incorporate the relevant transport processes. Presented in this document is a discussion of previous nonisothermal experiments. Processes expected to dominate heat-driven liquid, vapor, gas, and solute flow during the experiment are explained, and the conceptual model for nonisothermal flow and transport in unsaturated, fractured rock is described. Of particular concern is the ability to confirm the hypothesized conceptual model, specifically the establishment of higher water saturation zones within the host rock around the heat source and the establishment of countercurrent flow conditions within the host rock near the heat source. Field experimental plans are presented using the Apache Leap Tuff Site to illustrate the implementation of the proposed methodology. Both small-scale preliminary experiments and a long-term experiment are described.

  13. Experimental validation of a true-scale morphing flap for large civil aircraft applications

    Science.gov (United States)

    Pecora, R.; Amoroso, F.; Arena, M.; Noviello, M. C.; Rea, F.

    2017-04-01

    systems were duly analyzed and experimentally validated, thus proving the overall device compliance with industrial standards and applicable airworthiness requirements.

  14. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema.

    Science.gov (United States)

    Lassere, Marissa N; Johnson, Kent R; Boers, Maarten; Tugwell, Peter; Brooks, Peter; Simon, Lee; Strand, Vibeke; Conaghan, Philip G; Ostergaard, Mikkel; Maksymowych, Walter P; Landewe, Robert; Bresnihan, Barry; Tak, Paul-Peter; Wakefield, Richard; Mease, Philip; Bingham, Clifton O; Hughes, Michael; Altman, Doug; Buyse, Marc; Galbraith, Sally; Wells, George

    2007-03-01

    There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity and systematic methods to evaluate these aspects hinder their efficient application. Our objective was to review the literature on biomarkers and surrogates to develop a hierarchical schema that systematically evaluates and ranks the surrogacy status of biomarkers and surrogates, and to obtain feedback from stakeholders. After a systematic search of Medline and Embase on biomarkers, surrogate (outcomes, endpoints, markers, indicators), intermediate endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation level of evidence schema that evaluates biomarkers along 4 domains: Target, Study Design, Statistical Strength, and Penalties. Scores derived from 3 domains, namely the Target that the marker is being substituted for, the Design of the (best) evidence, and the Statistical Strength, are additive. Penalties are then applied if there is serious counterevidence. A total score (0 to 15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. It was proposed that the term "surrogate" be restricted to markers attaining Levels 1 or 2 only. Most stakeholders agreed that this operationalization of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery, development, and approval.
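
    The additive logic of the schema (domain scores summed, penalties subtracted, total 0 to 15 mapped to Levels 1 through 5) can be sketched in a few lines; the domain values and the level cut-points below are hypothetical placeholders, since the abstract does not give the published cut-points.

      # Hypothetical illustration of the additive scoring logic described in the
      # abstract; the ranges and level cut-points below are invented for the
      # example and are NOT the published schema.
      def surrogacy_level(target, design, statistical, penalty):
          """Combine domain scores (higher = stronger) and subtract penalties."""
          total = target + design + statistical - penalty
          total = max(0, min(15, total))          # schema total lies in 0..15
          # Map the total to Levels 1 (strongest) .. 5 (weakest); cut-points assumed.
          cutoffs = [(13, 1), (10, 2), (7, 3), (4, 4)]
          for threshold, level in cutoffs:
              if total >= threshold:
                  return total, level
          return total, 5

      print(surrogacy_level(target=5, design=5, statistical=4, penalty=1))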

  15. Development of an Experimental Measurement System for Human Error Characteristics and a Pilot Test

    International Nuclear Information System (INIS)

    Jang, Tong-Il; Lee, Hyun-Chul; Moon, Kwangsu

    2017-01-01

    Selected items from the individual and team characteristics were used in a pilot test to measure and evaluate them with the experimental measurement system of human error characteristics. This is one of the processes that produce input data for the Eco-DBMS. Through the pilot test, methods to measure and acquire the physiological data were also examined, and a data format and quantification methods for the database were developed. In this study, a pilot test measuring the stress and tension level and team cognitive characteristics, out of the human error characteristics, was performed using the human error characteristics measurement and experimental evaluation system. In an experiment measuring the stress level, physiological characteristics were measured using EEG in a simulated unexpected situation. The results show that, although this was a pilot experiment, relevant results can be obtained for evaluating the human error coping effects of workers' FFD management guidelines and of responses to unexpected situations against the guidelines. In following research, additional experiments including other human error characteristics will be conducted. Furthermore, the human error characteristics measurement and experimental evaluation system will be utilized to validate various human error coping solutions such as human factors criteria, design, and guidelines, as well as to supplement the human error characteristics database.

  16. Internal Validity: A Must in Research Designs

    Science.gov (United States)

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to the extent to which researchers can conclude that changes in the dependent variable (i.e. the outcome) are caused by manipulations of the independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  17. Experimentally validated structural vibration frequencies’ prediction from frictional temperature signatures using numerical simulation: A case of laced cantilever beam-like structures

    Directory of Open Access Journals (Sweden)

    Stephen M Talai

    2016-12-01

    Full Text Available This article pertains to the prediction of structural vibration frequencies from frictional temperature evolution through numerical simulation. To achieve this, a finite element analysis was carried out on AISI 304 steel cantilever beam-like structures coupled with a lacing wire using the commercial software ABAQUS/CAE. The coupled temperature–displacement transient analysis simulated the frictional heat generation. Furthermore, an experimental analysis was carried out with infrared cameras capturing the interfacial thermal images while the beams were subjected to forced excitation, thus validating the finite element analysis results. The vibration frequencies analysed using a MATLAB fast Fourier transform algorithm confirmed that they can be predicted from the frictional temperature time-domain waveform. This finding is of great significance to the mechanical and aerospace engineering communities for the effective online structural health monitoring of dynamic structures using infrared thermography, thus reducing downtime and maintenance cost and leading to increased efficiency.
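
    The last step, recovering a vibration frequency from a temperature time history, amounts to a discrete Fourier transform of the detrended waveform. Below is a minimal sketch with a synthetic signal; the 12 Hz component and the sampling rate are assumptions for illustration, not values from the experiment.

      import numpy as np

      # Synthetic stand-in for a frictional-temperature time-domain waveform;
      # the 12 Hz component plays the role of the structural vibration frequency.
      fs = 1000.0                                   # sampling rate, Hz (assumed)
      t = np.arange(0.0, 5.0, 1.0 / fs)
      temperature = 25.0 + 0.05 * t + 0.2 * np.sin(2 * np.pi * 12.0 * t)

      # Remove the slow heating trend, window, and transform.
      signal = temperature - np.polyval(np.polyfit(t, temperature, 1), t)
      spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
      freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

      dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
      print(f"dominant frequency: {dominant:.2f} Hz")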

  18. Validation of a method to measure plutonium levels in marine sediments in Cuba

    International Nuclear Information System (INIS)

    Sibello Hernández, Rita Y.; Cartas Aguila, Héctor A.; Cozzella, María Letizia

    2008-01-01

    The main objective of this research was to develop and validate a method of radiochemical separation of plutonium that is suitable, from an economic and practical point of view, for the conditions in Cuba. This method allowed plutonium activity levels to be determined in marine sediments from Cienfuegos Bay. The selected method of radiochemical separation was anionic chromatography, and the measurement technique was quadrupole inductively coupled plasma mass spectrometry. The method was applied to a certified reference material; six repetitions were carried out and a good correspondence between the average measured value and the certified value of plutonium was achieved, so the trueness of the method was demonstrated. The precision of the method was also proven, since a coefficient of variation of 11% was obtained at the 95% confidence level. The obtained results show that the presence of plutonium in the analyzed marine sediment samples is due only to global radioactive fallout. (author)

  19. Monitoring Cell Death in Regorafenib-Treated Experimental Colon Carcinomas Using Annexin-Based Optical Fluorescence Imaging Validated by Perfusion MRI.

    Directory of Open Access Journals (Sweden)

    Philipp M Kazmierczak

    Full Text Available To investigate annexin-based optical fluorescence imaging (OI) for monitoring regorafenib-induced early cell death in experimental colon carcinomas in rats, validated by perfusion MRI and multiparametric immunohistochemistry. Subcutaneous human colon carcinomas (HT-29) in athymic rats (n = 16) were imaged before and after a one-week therapy with regorafenib (n = 8) or placebo (n = 8) using annexin-based OI and perfusion MRI at 3 Tesla. Optical signal-to-noise ratio (SNR) and MRI tumor perfusion parameters (plasma flow PF, mL/100mL/min; plasma volume PV, %) were assessed. On day 7, tumors underwent immunohistochemical analysis for tumor cell apoptosis (TUNEL), proliferation (Ki-67), and microvascular density (CD31). Apoptosis-targeted OI demonstrated a tumor-specific probe accumulation with a significant increase of tumor SNR under therapy (mean Δ +7.78±2.95, control: -0.80±2.48, p = 0.021). MRI detected a significant reduction of tumor perfusion in the therapy group (mean ΔPF -8.17±2.32 mL/100 mL/min, control -0.11±3.36 mL/100 mL/min, p = 0.036). Immunohistochemistry showed significantly more apoptosis (TUNEL; 11392±1486 vs. 2921±334, p = 0.001), significantly less proliferation (Ki-67; 1754±184 vs. 2883±323, p = 0.012), and significantly lower microvascular density (CD31; 107±10 vs. 182±22, p = 0.006) in the therapy group. Annexin-based OI allowed for the non-invasive monitoring of regorafenib-induced early cell death in experimental colon carcinomas, validated by perfusion MRI and multiparametric immunohistochemistry.

  20. Validated methodology for quantifying infestation levels of dreissenid mussels in environmental DNA (eDNA) samples

    OpenAIRE

    Peñarrubia Lozano, Luis; Alcaraz Cazorla, Carles; Vaate, Abraham bij de; Sanz Ball-llosera, Núria; Pla Zanuy, Carles; Vidal Fàbrega, Oriol; Viñas de Puig, Jordi

    2016-01-01

    The zebra mussel (Dreissena polymorpha Pallas, 1771) and the quagga mussel (D. rostriformis Deshayes, 1838) are successful invasive bivalves with substantial ecological and economic impacts in freshwater systems once they become established. Since their eradication is extremely difficult, their detection at an early stage is crucial to prevent spread. In this study, we optimized and validated a qPCR detection method based on the histone H2B gene to quantify combined infestation levels of zebr...

  1. Experimental rhinovirus infection in volunteers.

    Science.gov (United States)

    Bardin, P G; Sanderson, G; Robinson, B S; Holgate, S T; Tyrrell, D A

    1996-11-01

    Experimental viral disease studies in volunteers have clarified many aspects of the pathogenesis of human viral disease. Recently, interest has focused on rhinovirus-associated asthma exacerbations, and new volunteer studies have suggested that airway responsiveness (AR) is enhanced during a cold. For scientific, ethical and safety reasons, it is important to use validated methods for the preparation of a virus inoculum and that the particular virological characteristics and host responses should not be altered. We have prepared a new human rhinovirus (HRV) inoculum using recent guidelines and assessed whether disease characteristics (for example, severity of colds or changes in AR) were retained. Studies were conducted in 25 clinically healthy volunteers using a validated HRV inoculum in the first 17 and a new inoculum in the subsequent eight subjects. Severity of cold symptoms, nasal wash albumin levels and airway responsiveness were measured, and the new inoculum was prepared from nasal washes obtained during the cold. The new inoculum was tested using standard virological and serological techniques, as well as a polymerase chain reaction for Mycoplasma pneumoniae. No contaminating viruses or organisms were detected and the methods suggested were workable. Good clinical colds developed in 20 of the 25 subjects and median symptom scores were similar in the validated and new inoculum groups (18 and 17.5, respectively; p=0.19). All subjects shed virus, and there were no differences noted in viral culture scores, nasal wash albumin and rates of seroconversion in the two groups. Although airway responsiveness increased in both groups (p=0.02 and p=0.05), the degree of change was similar. We have performed experimental rhinovirus infection studies and demonstrated similar clinical disease in two inoculum groups. Amplified airway responsiveness was induced; continuing studies will define the mechanisms and suggest modes of treatment.

  2. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    Science.gov (United States)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years to explore a continuous power source for sensor networks and low-power electronics. Torsional vibration is widespread in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system comprised of a shaft and a shear-mode piezoelectric transducer. The piezoelectric transducer position on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. The piezoelectric transducer can work in the d15 mode (pure shear mode), in the coupled mode of d31 and d33, or in the coupled mode of d33, d31 and d15, when attached at different angles. Approximate expressions for voltage and power are derived from the theoretical model and give predictions in good agreement with analytical solutions. A physical interpretation of the implicit relationship between the power output and the position parameters of the piezoelectric transducer is given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined, in which case the transducer works in the coupled mode of d15, d31 and d33.

  3. Validation of Sea levels from coastal altimetry waveform retracking expert system: a case study around the Prince William Sound in Alaska

    Science.gov (United States)

    Idris, N. H.; Deng, X.; Idris, N. H.

    2017-05-01

    This paper presents the validation of the Coastal Altimetry Waveform Retracking Expert System (CAWRES), a novel method to optimize Jason satellite altimetric sea levels from multiple retracking solutions. The validation is conducted over the region of Prince William Sound in Alaska, USA, where altimetric waveforms are perturbed by emerged land and sea states. Validation is performed in two ways: first, comparison with existing retrackers (i.e. MLE4 and Ice) from the Sensor Geophysical Data Records (SGDR), and second, comparison with in-situ tide gauge data. In the first assessment, CAWRES generally outperforms the MLE4 and Ice retrackers. In 4 out of 6 cases, the improvement percentage is higher (and the standard deviation of the differences lower) than those of the SGDR retrackers. CAWRES also presents the best performance in producing valid observations and has the lowest noise when compared to the SGDR retrackers. In the second assessment, CAWRES retracked sea level anomalies (SLAs) are consistent with those of the tide gauge. The accuracy of CAWRES retracked SLAs is slightly better than that of MLE4. However, the performance of the Ice retracker is better than those of CAWRES and MLE4, suggesting that the empirically based retracker is more effective. The results demonstrate that CAWRES has the potential to be applied to coastal regions elsewhere.
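
    The two quantities used in such a comparison, the standard deviation of the altimeter-minus-gauge differences and the improvement percentage of one retracker over another, can be computed as in the sketch below. The arrays are invented, and the improvement formula is a common convention, not necessarily the exact definition used in the paper.

      import numpy as np

      # Hypothetical co-located sea level anomalies (metres); not data from the study.
      tide_gauge = np.array([0.12, 0.08, -0.03, 0.15, 0.05, -0.10, 0.02, 0.09])
      sla_new    = np.array([0.10, 0.07, -0.05, 0.17, 0.03, -0.12, 0.04, 0.08])
      sla_ref    = np.array([0.05, 0.12, -0.10, 0.22, -0.02, -0.18, 0.09, 0.03])

      def std_of_difference(retracked, reference):
          # Standard deviation of the altimeter-minus-gauge differences.
          return np.std(retracked - reference, ddof=1)

      std_new = std_of_difference(sla_new, tide_gauge)
      std_ref = std_of_difference(sla_ref, tide_gauge)
      # Improvement percentage of the new retracker relative to the reference one.
      improvement = 100.0 * (std_ref - std_new) / std_ref
      print(f"STD new: {std_new:.3f} m, STD reference: {std_ref:.3f} m, "
            f"improvement: {improvement:.1f}%")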

  4. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
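
    As a rough illustration of a confidence-interval-based comparison (a generic construction, not the authors' specific metric), one can form the mean model-minus-experiment difference together with an interval on that mean. The data are invented, paired observations at common input values are assumed, and a normal quantile is used where a t quantile would be more rigorous for so few samples.

      import numpy as np
      from statistics import NormalDist

      # Hypothetical paired values of a system response quantity at the same inputs.
      experiment = np.array([10.2, 11.8, 13.1, 14.9, 16.4])
      simulation = np.array([10.6, 12.1, 13.8, 15.0, 17.2])

      diff = simulation - experiment                 # model-minus-experiment error
      mean_err = diff.mean()
      sem = diff.std(ddof=1) / np.sqrt(diff.size)    # standard error of the mean error

      z = NormalDist().inv_cdf(0.975)                # ~1.96; a t quantile is stricter
      ci = (mean_err - z * sem, mean_err + z * sem)
      print(f"mean disagreement: {mean_err:.2f}  95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
      # An interval that excludes zero flags a statistically discernible model bias.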

  5. Highlighting the Need for Systems-level Experimental Characterization of Plant Metabolic Enzymes

    Directory of Open Access Journals (Sweden)

    Martin Karl Magnus Engqvist

    2016-07-01

    Full Text Available The biology of living organisms is determined by the action and interaction of a large number of individual gene products, each with specific functions. Discovering and annotating the function of gene products is key to our understanding of these organisms. Controlled experiments and bioinformatic predictions both contribute to functional gene annotation. For most species it is difficult to gain an overview of what portion of gene annotations are based on experiments and what portion represent predictions. Here, I survey the current state of experimental knowledge of enzymes and metabolism in Arabidopsis thaliana as well as eleven economically important crops and forestry trees – with a particular focus on reactions involving organic acids in central metabolism. I illustrate the limited availability of experimental data for functional annotation of enzymes in most of these species. Many enzymes involved in metabolism of citrate, malate, fumarate, lactate, and glycolate in crops and forestry trees have not been characterized. Furthermore, enzymes involved in key biosynthetic pathways which shape important traits in crops and forestry trees have not been characterized. I argue for the development of novel high-throughput platforms with which limited functional characterization of gene products can be performed quickly and relatively cheaply. I refer to this approach as systems-level experimental characterization. The data collected from such platforms would form a layer intermediate between bioinformatic gene function predictions and in-depth experimental studies of these functions. Such a data layer would greatly aid in the pursuit of understanding a multiplicity of biological processes in living organisms.

  6. Durability, mechanical, and thermal properties of experimental glass-ceramic forms for immobilizing ICPP high level waste

    International Nuclear Information System (INIS)

    Vinjamuri, K.

    1990-01-01

    The high-level liquid waste generated at the Idaho Chemical Processing Plant (ICPP) is routinely solidified into granular calcined high-level waste (HLW) and stored onsite. Research is being conducted at the ICPP on methods of immobilizing the HLW, including developing a durable glass-ceramic form which has the potential to significantly reduce the final waste volume by up to 60% compared to a glass form. Simulated, pilot plant, non-radioactive, calcines similar to the composition of the calcined HLW and glass forming additives are used to produce experimental glass-ceramic forms. The objective of the research reported in this paper is to study the impact of ground calcine particle size on durability and mechanical and thermal properties of experimental glass-ceramic forms

  7. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    Science.gov (United States)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow averts low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To investigate the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  8. Applications of the parity space technique to the validation of the water level measurement of pressurizer for steady state and transients

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Bath, L.

    1983-01-01

    During the design of disturbance analysis and surveillance systems, safety parameter display systems, computerized operator support systems or advanced control rooms, sensor signal validation is commonly considered the first task to be performed. After an introduction to the anticipated benefits of signal validation techniques and a brief survey of the methods in current practice, a signal validation technique based upon the parity space methodology is presented. The efficiency of the method, applied to the detection and identification of five types of failures, is illustrated with two examples in which three water level measurements of a pressurizer of a nuclear plant are redundant. In the first example, the use of the analytical redundancy technique is presented for the case where only two identical sensors are available; a detailed description of the dynamic model of the pressurizer is given. In the second example, the case of identical water level sensors is considered. The performance of the software, developed on a DEC PDP 11 computer, is finally given.
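
    For triple-redundant measurements of a single quantity, the parity space construction reduces to projecting the measurement vector onto the left null space of the measurement matrix. The sketch below is a textbook-style illustration with invented readings and an invented detection threshold, not the code described in the paper.

      import numpy as np

      # Minimal parity-space check for three redundant level sensors measuring
      # the same scalar quantity: m = H*x + e with H = [1, 1, 1]^T.
      H = np.ones((3, 1))

      # Rows of V span the left null space of H (V @ H = 0) and are orthonormal.
      U, _, _ = np.linalg.svd(H)
      V = U[:, 1:].T                            # 2 x 3 parity projection matrix

      def parity_check(measurements, threshold=0.5):
          p = V @ np.asarray(measurements, dtype=float)    # parity vector
          if np.linalg.norm(p) < threshold:
              return "consistent", None
          # Isolate the sensor whose fault signature (column of V) best matches p.
          signatures = V / np.linalg.norm(V, axis=0)
          faulty = int(np.argmax(np.abs(signatures.T @ p)))  # 0-based sensor index
          return "inconsistent", faulty

      print(parity_check([5.02, 4.98, 5.01]))   # healthy triplet
      print(parity_check([5.02, 4.98, 6.70]))   # third sensor drifted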

  9. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated the potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs at a single platform. HIPdb is a manually curated database of experimentally verified HIV inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration, reverse transcription, etc. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC(50), assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. HIPdb is freely available at http://crdd.osdd.net/servers/hipdb. The comprehensive information in this database will be helpful in selecting/designing effective anti-HIV peptides; thus it may prove a useful resource to researchers for peptide-based therapeutics development.

  10. Solar power plant performance evaluation: simulation and experimental validation

    Science.gov (United States)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. A perturb and observe (P&O) algorithm is used for maximizing the generated power through a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect any other factors that may degrade the performance of the PV panels, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
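
    The perturb and observe logic itself is short: perturb the operating voltage, observe the change in power, and reverse the perturbation direction whenever the power drops. The sketch below uses a toy power-voltage curve in place of the plant model developed in the paper; the step size and iteration count are arbitrary.

      # Minimal perturb-and-observe (P&O) loop with an invented PV curve.
      def pv_power(v):
          # Concave toy power-voltage curve with a maximum near 30 V.
          return max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)

      def perturb_and_observe(v0=20.0, step=0.5, iterations=60):
          v, p_prev = v0, pv_power(v0)
          direction = +1.0
          for _ in range(iterations):
              v += direction * step              # perturb the operating voltage
              p = pv_power(v)
              if p < p_prev:                     # power dropped: reverse direction
                  direction = -direction
              p_prev = p
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print(f"operating point after P&O: {v_mpp:.1f} V, {p_mpp:.1f} W")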

  11. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists of deforming only a small region of the workpiece through a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  12. On-chip gradient generation in 256 microfluidic cell cultures: simulation and experimental validation.

    Science.gov (United States)

    Somaweera, Himali; Haputhanthri, Shehan O; Ibraguimov, Akif; Pappas, Dimitri

    2015-08-07

    A microfluidic diffusion diluter was used to create a stable concentration gradient for dose-response studies. The microfluidic diffusion diluter used in this study consisted of 128 culture chambers on each side of the main fluidic channel. A calibration method was used to find unknown concentrations with 12% error. Flow rate dependent studies showed that changing the flow rates generated different gradient patterns. Mathematical simulations using COMSOL Multiphysics were performed to validate the experimental data, and the experimental data obtained for the flow rate studies agreed with the simulation results. Cells could be loaded into the culture chambers using vacuum actuation and cultured for long times under low shear stress. Decreasing the size of the culture chambers resulted in faster gradient formation (20 min). Mass transport into the side channels of the microfluidic diffusion diluter used in this study is an important factor in creating the gradient by diffusional mixing as a function of the distance. To demonstrate the device's utility, an H2O2 gradient was generated while culturing Ramos cells. Cell viability was assayed in the 256 culture chambers, each at a discrete H2O2 concentration. As expected, the cell viability for the high concentration side channels increased (by injecting H2O2) whereas the cell viability in the low concentration side channels decreased along the chip due to diffusional mixing as a function of distance. COMSOL simulations were used to identify the effective concentration of H2O2 for cell viability in each side chamber at 45 min. The gradient effects were confirmed using traditional H2O2 culture experiments. Viability of cells in the microfluidic device under gradient conditions showed a linear relationship with the viability in the traditional culture experiment. The microfluidic device developed in this study could be used to study hundreds of concentrations of a compound in a single experiment.
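
    The role of diffusional mixing can be estimated with the classical semi-infinite 1-D diffusion solution, C(y, t) = C0 · erfc(y / (2√(Dt))). The diffusivity and depths below are generic assumptions rather than parameters of the device, but they show why gradient formation on the tens-of-minutes scale is plausible.

      from math import erfc, sqrt

      # Semi-infinite 1-D diffusion estimate: concentration reached at depth y
      # into a side chamber after diffusion time t from a channel held at C0.
      D = 1.0e-9            # m^2/s, assumed small-molecule diffusion coefficient
      C0 = 100.0            # concentration in the main channel, arbitrary units

      def concentration(y_m, t_s):
          return C0 * erfc(y_m / (2.0 * sqrt(D * t_s)))

      for t in (60.0, 300.0, 1200.0):          # 1, 5 and 20 minutes
          print(f"t = {t/60:>4.0f} min:",
                [round(concentration(y * 1e-6, t), 1) for y in (50, 100, 200)])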

  13. In silico analysis and experimental validation of azelastine hydrochloride (N4) targeting sodium taurocholate co-transporting polypeptide (NTCP) in HBV therapy.

    Science.gov (United States)

    Fu, L-L; Liu, J; Chen, Y; Wang, F-T; Wen, X; Liu, H-Q; Wang, M-Y; Ouyang, L; Huang, J; Bao, J-K; Wei, Y-Q

    2014-08-01

    The aim of this study was to explore how sodium taurocholate co-transporting polypeptide (NTCP) exerts its function with hepatitis B virus (HBV) and to identify candidate compounds targeting it for HBV therapy. Identification of NTCP as a novel HBV target and screening of candidate small molecules were performed using phylogenetic analysis, network construction, molecular modelling, molecular docking and molecular dynamics (MD) simulation. In vitro virological examination, q-PCR, western blotting and cytotoxicity studies were used for validating the efficacy of the candidate compound. We carried out the phylogenetic analysis of NTCP and constructed its protein-protein network. We also screened compounds from DrugBank and ZINC, among which five were validated in HepG2.2.15 cells. We then selected compound N4 (azelastine hydrochloride) as the most potent of them. It showed good inhibitory activity against HBsAg (IC50 = 7.5 μM) and HBeAg (IC50 = 3.7 μM), as well as a high SI value (SI = 4.68). Further MD simulation results supported a good interaction between compound N4 and NTCP. In silico analysis and experimental validation together demonstrated that compound N4 can target NTCP in HepG2.2.15 cells, which may shed light on exploring it as a potential anti-HBV drug. © 2014 John Wiley & Sons Ltd.

  14. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
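
    The sampling-based propagation of variability mentioned above can be sketched in a few lines: draw random inputs, push them through the model, and summarize the spread of the outputs. The toy model and the input distributions below are assumptions for illustration only.

      import numpy as np

      # Sketch of sampling-based propagation of variability through a model:
      # inputs are random, the output spread quantifies the induced uncertainty.
      rng = np.random.default_rng(seed=0)

      def model(k, area):
          # Placeholder response, e.g. a heat flux proportional to k and area.
          return k * area * 15.0

      k_samples = rng.normal(loc=0.5, scale=0.05, size=10_000)     # material property
      area_samples = rng.uniform(low=0.9, high=1.1, size=10_000)   # geometric tolerance

      outputs = model(k_samples, area_samples)
      print(f"mean = {outputs.mean():.2f}, std = {outputs.std(ddof=1):.2f}, "
            f"95% interval = [{np.percentile(outputs, 2.5):.2f}, "
            f"{np.percentile(outputs, 97.5):.2f}]")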

  15. Development and experimental validation of a monte carlo modeling of the neutron emission from a d-t generator

    Science.gov (United States)

    Remetti, Romolo; Lepore, Luigi; Cherubini, Nadia

    2017-01-01

    An extensive use of Monte Carlo simulations led to the identification of an MCNPX input deck for the Thermo Scientific MP320 neutron generator. This input deck is currently utilized at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular for explosives and drugs detection by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the potentialities offered by the MCNPX code. Validation of the model was obtained by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to those results, suggesting a procedure that might be extended to different models of neutron generators.

  16. Modeling of the behavior of radon and its decay products in dwelling, and experimental validation of the model

    International Nuclear Information System (INIS)

    Gouronnec, A.M.; Robe, M.C.; Montassier, N.; Boulaud, D.

    1993-01-01

    A model of the type introduced by Jacobi is adapted to indoor air to describe the behavior of radon and its decay products within a dwelling, and is then extended to a system of several stories. To start the validation of the model, computed data are compared with field measurements. The first observation is that the model is consistent with the data available so far. It remains important, however, to develop an exhaustive set of experimental data and to obtain as faithful a representation of the mean situation as possible; this especially concerns the ventilation rate of the enclosure and the rate of attachment to airborne particles. Further work should also be done to model deposition on surfaces. (orig.). (6 refs., 4 tabs.)
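
    A Jacobi-type room model is built from balance equations of the kind sketched below, here for the number concentration of an unattached short-lived progeny: production by decay of the parent, removal by its own decay, ventilation, attachment to aerosols and deposition on surfaces. The ventilation, attachment and deposition rates and the radon level are illustrative assumptions, not the parameterization used in the paper.

      # Explicit Euler integration of one Jacobi-type balance equation.
      lam_rn = 7.55e-3      # 1/h, Rn-222 decay constant
      lam_po = 13.6         # 1/h, Po-218 decay constant
      lam_vent = 0.5        # 1/h, air exchange rate (assumed)
      lam_attach = 50.0     # 1/h, attachment rate to airborne particles (assumed)
      lam_deposit = 20.0    # 1/h, unattached-fraction deposition rate (assumed)

      def simulate(n_rn=2.0e7, hours=1.0, dt=1e-4):
          """Unattached Po-218 number concentration (atoms/m^3) in a single room."""
          n_po = 0.0
          removal = lam_po + lam_vent + lam_attach + lam_deposit
          for _ in range(int(hours / dt)):
              n_po += dt * (lam_rn * n_rn - removal * n_po)
          return n_po

      print(f"quasi-steady unattached Po-218 concentration: {simulate():.1f} atoms/m^3")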

  17. Theoretical model with experimental validation of a regenerative blower for hydrogen recirculation in a PEM fuel cell system

    Energy Technology Data Exchange (ETDEWEB)

    Badami, M.; Mura, M. [Dipartimento di Energetica, Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino (Italy)

    2010-03-15

    A theoretical model of a regenerative blower used for the hydrogen recirculation of a Proton Exchange Membrane (PEM) fuel cell (FC) for automotive applications has been implemented and validated by means of experimental data. A momentum exchange theory was used to determine the head-flow rate curves, whereas the circulatory flow rate was determined through a theory based on the consideration of the centrifugal force field in the side channel and in the impeller vane grooves. The model allows a good forecast to be made of the blower behaviour, and only needs its main geometrical characteristics and some fluid-dynamic data as input. For this reason, the model could be very interesting, especially during the first sizing and the design activity of the blower. (author)

  18. Mutant mice: experimental organisms as materialised models in biomedicine.

    Science.gov (United States)

    Huber, Lara; Keuck, Lara K

    2013-09-01

    Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models (acknowledging their status as living beings and as epistemological tools) necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mouse models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Experimental validation of a numerical 3-D finite model applied to wind turbines design under vibration constraints: TREVISE platform

    Science.gov (United States)

    Sellami, Takwa; Jelassi, Sana; Darcherif, Abdel Moumen; Berriri, Hanen; Mimouni, Med Faouzi

    2018-04-01

    With the advancement of wind turbines towards complex structures, the requirement for trustworthy structural models has become more apparent. Hence, the vibration characteristics of the wind turbine components, like the blades and the tower, have to be extracted under vibration constraints. Although extracting the modal properties of blades is a simple task, calculating precise modal data for the whole wind turbine coupled to its tower/foundation is still a challenging task. In this framework, this paper focuses on the structural modeling approach for modern commercial micro-turbines. Thus, the structural model of a wind turbine with a complex design, the Rutland 504, is established based on both experimental and numerical methods. A three-dimensional (3-D) numerical model of the structure was set up based on the finite volume method (FVM) using the academic finite element analysis software ANSYS. To validate the created model, experimental vibration tests were carried out using the vibration test system of the TREVISE platform at ECAM-EPMI. The tests were based on the experimental modal analysis (EMA) technique, which is one of the most efficient techniques for identifying structural parameters. Indeed, the poles and residues of the frequency response functions (FRF) between input and output spectra were calculated to extract the mode shapes and the natural frequencies of the structure. Based on the obtained modal parameters, the numerical model was updated.
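
    The FRF-based step can be illustrated with the standard H1 estimator, the averaged cross-spectrum between input and output divided by the averaged input auto-spectrum. The synthetic signals below stand in for the measured excitation and response and are not TREVISE data; the toy structure has a single resonance near 10 Hz.

      import numpy as np

      fs = 512.0
      rng = np.random.default_rng(1)
      force = rng.standard_normal(8192)                    # broadband excitation

      # Toy structure: a decaying 10 Hz impulse response plus measurement noise.
      t_h = np.arange(0.0, 1.0, 1.0 / fs)
      impulse_response = np.exp(-5.0 * t_h) * np.sin(2 * np.pi * 10.0 * t_h)
      response = np.convolve(force, impulse_response)[: force.size]
      response += 0.05 * rng.standard_normal(force.size)

      def spectra(x, y, nseg=8):
          # Averaged (Welch-style) auto- and cross-spectra over segments.
          n = x.size // nseg
          win = np.hanning(n)
          sxx = np.zeros(n // 2 + 1)
          sxy = np.zeros(n // 2 + 1, dtype=complex)
          for k in range(nseg):
              X = np.fft.rfft(win * x[k * n:(k + 1) * n])
              Y = np.fft.rfft(win * y[k * n:(k + 1) * n])
              sxx += (np.conj(X) * X).real
              sxy += np.conj(X) * Y
          return sxx / nseg, sxy / nseg

      sxx, sxy = spectra(force, response)
      h1 = sxy / sxx                                        # H1 FRF estimate
      freqs = np.fft.rfftfreq(force.size // 8, d=1.0 / fs)
      print(f"peak of |H1| near {freqs[np.argmax(np.abs(h1))]:.1f} Hz")  # ~10 Hz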

  20. Irrigant flow in the root canal: experimental validation of an unsteady Computational Fluid Dynamics model using high-speed imaging.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Versluis, M; Kastrinakis, E; van der Sluis, L W M

    2010-05-01

    To compare the results of a Computational Fluid Dynamics (CFD) simulation of the irrigant flow within a prepared root canal, during final irrigation with a syringe and a needle, with experimental high-speed visualizations and theoretical calculations of an identical geometry and to evaluate the effect of off-centre positioning of the needle inside the root canal. A CFD model was created to simulate irrigant flow from a side-vented needle inside a prepared root canal. Calculations were carried out for four different positions of the needle inside a prepared root canal. An identical root canal model was made from poly-dimethyl-siloxane (PDMS). High-speed imaging of the flow seeded with particles and Particle Image Velocimetry (PIV) were combined to obtain the velocity field inside the root canal experimentally. Computational, theoretical and experimental results were compared to assess the validity of the computational model. Comparison between CFD computations and experiments revealed good agreement in the velocity magnitude and vortex location and size. Small lateral displacements of the needle inside the canal had a limited effect on the flow field. High-speed imaging experiments together with PIV of the flow inside a simulated root canal showed a good agreement with the CFD model, even though the flow was unsteady. Therefore, the CFD model is able to predict reliably the flow in similar domains.

  1. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    Inhomogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development work on experimental as well as computational methods for quantifying this streaming radiation is still continuing. The Monte Carlo based radiation transport code MCNP is usually the tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated will be analysed by simulating the experimental set-up with the MCNP code, and optimized input parameters for the code for finding solutions to similar radiation streaming problems will be formulated. Comparison of experimental data obtained from radiation streaming experiments through ducts will give a set of rules of thumb and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study throws light on the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made. Studies on the spectral comparison of streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, the results of this kind of radiation streaming simulations and experiments will be very useful in shield structure optimization without compromising radiation safety.

  2. ULiAS 4 - Experimental validation of a software that models ultrasonic wave propagation through an anisotropic weld

    International Nuclear Information System (INIS)

    Wirdelius, Haakan; Persson, Gert; Hamberg, Kenneth; Hoegberg, Kjell

    2008-01-01

    New and stronger demands on the reliability of the NDE/NDT procedures and methods in use have evolved in Europe during the last decade. In order to elaborate these procedures, efforts have to be directed towards the development of mathematical models of the applied NDT methods. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies, and the qualification of procedures and personnel. An important issue regarding all models is validation, i.e. securing that the results of the model and the corresponding computer programs are correct. This can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. In this study a numerical model and experimental results are compared; the work has been performed in collaboration with SQC Kvalificeringscentrum AB. Four different welds have been investigated to give basic data to a mathematical model that describes the ultrasonic wave paths through the welds in these materials. The welds are made in austenitic stainless steel (type 18-8) and in Inconel 182. Two cut-outs are made in each weld, one longitudinal and one transversal cut across the weld, in order to determine the material orientation. In the numerical model the incident field, described by rays, is given by a P-wave probe model. The ray tracing technique is based on geometrical optics and a 2D algorithm has been developed. The model of the weld is based on a relatively primitive assumption of the grain structure for a V-butt weld. The columnar structure of austenitic welds is here modelled as a weld where each sub-region corresponds to a grain group. The response of the receiver is calculated according to Auld's reciprocity principle. UT data collection was performed by SQC according to guidelines given by Chalmers. The purpose of collecting data from real inspection objects with known material structure is to compare experimental data with theoretically calculated

  3. ULiAS 4 - Experimental validation of a software that models ultrasonic wave propagation through an anisotropic weld

    Energy Technology Data Exchange (ETDEWEB)

    Wirdelius, Haakan; Persson, Gert; Hamberg, Kenneth (SCeNDT, Chalmers Univ. Of Tech., SE-412 96 Goeteborg (SE)); Hoegberg, Kjell (SQC Kvalificeringscentrum AB, SE-183 25 Taeby (SE))

    2008-07-01

    New and stronger demands on the reliability of the NDE/NDT procedures and methods in use have evolved in Europe during the last decade. In order to elaborate these procedures, efforts have to be directed towards the development of mathematical models of the applied NDT methods. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies, and the qualification of procedures and personnel. An important issue regarding all models is validation, i.e. securing that the results of the model and the corresponding computer programs are correct. This can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. In this study a numerical model and experimental results are compared; the work has been performed in collaboration with SQC Kvalificeringscentrum AB. Four different welds have been investigated to give basic data to a mathematical model that describes the ultrasonic wave paths through the welds in these materials. The welds are made in austenitic stainless steel (type 18-8) and in Inconel 182. Two cut-outs are made in each weld, one longitudinal and one transversal cut across the weld, in order to determine the material orientation. In the numerical model the incident field, described by rays, is given by a P-wave probe model. The ray tracing technique is based on geometrical optics and a 2D algorithm has been developed. The model of the weld is based on a relatively primitive assumption of the grain structure for a V-butt weld. The columnar structure of austenitic welds is here modelled as a weld where each sub-region corresponds to a grain group. The response of the receiver is calculated according to Auld's reciprocity principle. UT data collection was performed by SQC according to guidelines given by Chalmers. The purpose of collecting data from real inspection objects with known material structure is to compare experimental data with theoretically

  4. Concurrent Validation of Experimental Army Enlisted Personnel Selection and Classification Measures

    National Research Council Canada - National Science Library

    Knapp, Deirdre J; Tremble, Trueman R

    2007-01-01

    .... This report documents the method and results of the criterion-related validation. The predictor set includes measures of cognitive ability, temperament, psychomotor skills, values, expectations...

  5. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  6. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    A comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions of cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  7. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    A comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions of cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  8. ATHLET. Mod 3.0 Cycle A. Validation

    Energy Technology Data Exchange (ETDEWEB)

    Lerchl, G.; Austregesilo, H.; Glaeser, H.; Hrubisko, M.; Luther, W.

    2012-09-15

    ATHLET is an advanced best-estimate code which was initially developed for the simulation of design basis and beyond design basis accidents (without core degradation) in light water reactors, including VVER and RBMK reactors. Furthermore, this program version enables the simulation of further working fluids such as helium and liquid metals. The one-dimensional, two-phase fluid-dynamic models are based on a five-equation model supplemented by a full-range drift-flux model, including a dynamic mixture-level tracking capability. Moreover, a two-fluid model based on six conservation equations is provided. The heat conduction and heat transfer module allows a flexible simulation of fuel rods and structures. The nuclear heat generation is calculated by a point-kinetics or by a one-dimensional kinetics model. A general control simulation module is provided for flexible modelling of BOP and auxiliary plant systems. Systematic code validation is performed by GRS and independent organizations. This Validation Manual is the fourth volume of the ATHLET Code Documentation, which comprises four volumes. This manual presents an overview of the complete ATHLET validation effort undertaken to date. In addition, the results of five test cases simulated with the present ATHLET program version are compared with the experimental data.

  9. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  10. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
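
    To make the calibration loop concrete, the sketch below shows a minimal real-coded genetic algorithm that searches bounded input parameters so as to minimize a weighted, normalized discrepancy over several SRQs. The simulator call, parameter bounds, weights and reference values are placeholders; this is a generic illustration, not the RELAP5 workflow described in the paper.

        import random

        def discrepancy(params, simulate, srq_exp, weights):
            """Weighted, normalized mismatch between simulated and experimental SRQs."""
            srq_sim = simulate(params)                # placeholder for a system-code run
            return sum(w * abs(s - e) / abs(e)
                       for w, s, e in zip(weights, srq_sim, srq_exp))

        def calibrate(simulate, bounds, srq_exp, weights,
                      pop_size=40, generations=50, mut_rate=0.2):
            """Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda p: discrepancy(p, simulate, srq_exp, weights))
                parents = pop[:pop_size // 2]
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    child = [(x + y) / 2.0 for x, y in zip(a, b)]        # blend crossover
                    for i, (lo, hi) in enumerate(bounds):                # bounded Gaussian mutation
                        if random.random() < mut_rate:
                            child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 0.1 * (hi - lo))))
                    children.append(child)
                pop = parents + children
            return min(pop, key=lambda p: discrepancy(p, simulate, srq_exp, weights))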

  11. An experimental program for testing the validity of flow and transport models in unsaturated tuff: The Yucca Mountain Project

    International Nuclear Information System (INIS)

    Shephard, L.E.; Glass, R.J.; Siegel, M.D.; Tidwell, V.C.

    1990-01-01

    Groundwater flow and contaminant transport through the unsaturated zone are receiving increased attention as options for waste disposal in saturated media continue to be considered as a potential means for resolving the nation's waste management concerns. An experimental program is being developed to test the validity of conceptual flow and transport models that are being formulated to predict the long-term performance at Yucca Mountain. This program is in the developmental stage and will continue to evolve as information is acquired and knowledge is improved with reference to flow and transport in unsaturated fractured media. The general approach for directing the validation effort entails identifying those processes which may cause the site to fail relative to imposed regulatory requirements, evaluating the key assumptions underlying the conceptual models used or developed to describe these processes, and developing new conceptual models as needed. Emphasis is currently being placed in four general areas: flow and transport in unsaturated fractures; fracture-matrix interactions; infiltration flow instability; and evaluation of scale effects in heterogeneous fractured media. Preliminary results and plans for each of these areas, for both the laboratory and field investigation components, will be presented in the manuscript. 1 ref

  12. Nicotinamides: Evaluation of thermochemical experimental properties

    International Nuclear Information System (INIS)

    Zhabina, Aleksandra A.; Nagrimanov, Ruslan N.; Emel’yanenko, Vladimir N.; Solomonov, Boris N.; Verevkin, Sergey P.

    2016-01-01

    Highlights: • Vapor pressures measured by the transpiration method. • Enthalpies of solution measured using high-precision solution calorimetry. • Enthalpies of fusion measured by DSC. • Sublimation enthalpies derived from transpiration and solution calorimetry are in agreement. • Experimental results evaluated and compared with G4 calculations. - Abstract: Vapor pressures of the isomeric 2-, 3-, and 4-pyridinecarboxamides were measured using the transpiration method. The enthalpies of sublimation/vaporization of these compounds at 298.15 K were derived from the temperature dependences of the vapor pressures. The enthalpies of solution of the isomeric pyridinecarboxamides were measured by high-precision solution calorimetry. The enthalpies of sublimation of 3- and 4-pyridinecarboxamides were independently derived with the help of the solution-calorimetry-based procedure. The enthalpies of fusion of the pyridinecarboxamides were measured by DSC. Thermochemical data for the isomeric pyridinecarboxamides were collected, evaluated, and tested for internal consistency. The high-level G4 quantum-chemical method was successfully used for mutual validation of the experimental and theoretical gas-phase enthalpies of formation.
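
    For orientation, the internal-consistency test mentioned above commonly rests on the standard relation between the phase-change enthalpies at the reference temperature, together with the Clausius-Clapeyron link between vapor pressure and vaporization/sublimation enthalpy (generic relations, not numerical results from the paper):

        \Delta_{\mathrm{sub}}H_{m}^{\circ}(298.15\,\mathrm{K}) \approx \Delta_{\mathrm{fus}}H_{m}^{\circ}(298.15\,\mathrm{K}) + \Delta_{\mathrm{vap}}H_{m}^{\circ}(298.15\,\mathrm{K}), \qquad
        \Delta_{\mathrm{vap/sub}}H_{m}^{\circ} \approx -R\,\frac{\mathrm{d}\ln p}{\mathrm{d}(1/T)}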

  13. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    Science.gov (United States)

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
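
    For reference, the Tait equation of state commonly used for water in shock-wave computations has the form

        p = (p_{0} + B)\left(\frac{\rho}{\rho_{0}}\right)^{\gamma} - B,

    where values around \gamma \approx 7 and B \approx 3\times 10^{8}\,\mathrm{Pa} are typically quoted for water; the specific constants used in the paper are not reproduced here.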

  14. Comprehensive distributed-parameters modeling and experimental validation of microcantilever-based biosensors with an application to ultrasmall biological species detection

    International Nuclear Information System (INIS)

    Faegh, Samira; Jalili, Nader

    2013-01-01

    Nanotechnological advancements have made a great contribution to developing label-free and highly sensitive biosensors. The detection of ultrasmall adsorbed masses has been enabled by such sensors, which transduce molecular interaction into detectable physical quantities. More specifically, microcantilever-based biosensors have caught widespread attention for offering a label-free, highly sensitive and inexpensive platform for biodetection. Although many studies have investigated microcantilever-based sensors and their biological applications, comprehensive mathematical modeling and experimental validation of such devices, providing a closed-form mathematical framework, are still lacking. In almost all of the studies, a simple lumped-parameters model has been proposed. However, in order to have a precise biomechanical sensor, a comprehensive model is required that is capable of describing all phenomena and dynamics of the biosensor. Therefore, in this study, an extensive distributed-parameters modeling framework is proposed for the piezoelectric microcantilever-based biosensor, using different methodologies, for the purpose of detecting an ultrasmall mass adsorbed on the microcantilever surface. An optimum modeling methodology is concluded and verified with the experiment. This study includes three main parts. In the first part, the Euler–Bernoulli beam theory is used to model the nonuniform piezoelectric microcantilever. Simulation results are obtained and presented. The same system is then modeled as a nonuniform rectangular plate. The simulation results are presented, describing the model's capability in the detection of an ultrasmall mass. Finally, the last part presents the experimental validation verifying the modeling results. It was shown that plate modeling predicts the real situation with a degree of precision of 99.57%, whereas modeling the system as an Euler–Bernoulli beam provides a 94.45% degree of precision. The detection of ultrasmall
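
    As a point of comparison for the lumped-parameters description that the study argues is insufficient, such models usually infer the adsorbed mass from the resonance-frequency shift via the generic textbook relation below (not the distributed-parameters result of the paper):

        f_{0} = \frac{1}{2\pi}\sqrt{\frac{k}{m_{\mathrm{eff}}}}, \qquad
        \Delta m \simeq \frac{k}{4\pi^{2}}\left(\frac{1}{f_{1}^{2}} - \frac{1}{f_{0}^{2}}\right),

    where k is the effective stiffness and f_{0}, f_{1} are the resonance frequencies before and after adsorption.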

  15. Investigation and experimental validation of the contribution of optical interconnects in the SYMPHONIE massively parallel computer

    International Nuclear Information System (INIS)

    Scheer, Patrick

    1998-01-01

    Progress in microelectronics has led to electronic circuits that are increasingly integrated, with operating frequencies and input/output counts larger than those supported by printed circuit board and back-plane technologies. As a result, distributed systems with several boards cannot fully exploit the performance of integrated circuits. In synchronous parallel computers, the situation is worse, since the overall system performance relies on the efficiency of the electrical interconnects between the integrated circuits which include the processing elements (PE). The study of a real parallel computer named SYMPHONIE shows, for instance, that the system operating frequency is far smaller than the capabilities of the microelectronics technology used for the PE implementation. Optical interconnections may remove these limitations by providing more efficient connections between the PEs. In particular, free-space optical interconnections based on vertical-cavity surface-emitting lasers (VCSEL), micro-lenses and PIN photodiodes are compatible with the required features of the PE communications. Zero-bias modulation of VCSELs with CMOS-compatible digital signals is studied and experimentally demonstrated. A model of the propagation of truncated Gaussian beams through micro-lenses is developed. It is then used to optimise the geometry of the detection areas. A dedicated mechanical system is also proposed and implemented for integrating free-space optical interconnects in a standard electronic environment, representative of that of parallel computer systems. A specially designed demonstrator provides the experimental validation of the above physical concepts. (author) [fr

  16. Oxygenation level and hemoglobin concentration in experimental tumor estimated by diffuse optical spectroscopy

    Science.gov (United States)

    Orlova, A. G.; Kirillin, M. Yu.; Volovetsky, A. B.; Shilyagina, N. Yu.; Sergeeva, E. A.; Golubiatnikov, G. Yu.; Turchin, I. V.

    2017-07-01

    Using diffuse optical spectroscopy, the oxygenation level and hemoglobin concentration in an experimental tumor have been studied in comparison with normal muscle tissue of mice. Subcutaneously growing SKBR-3 was used as the tumor model. A continuous-wave fiber-probe diffuse optical spectroscopy system was employed. The optical-properties extraction approach was based on the diffusion approximation. A decreased blood oxygen saturation level and an increased total hemoglobin content were demonstrated in the neoplasm. The main reason for these differences between tumor and normal tissue was a significant elevation of the deoxyhemoglobin concentration in SKBR-3. The method can be useful for the diagnosis of tumors as well as for the study of blood flow parameters of tumor models with different angiogenic properties.
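
    A common way to obtain these quantities from continuous-wave diffuse optical data is to decompose the retrieved absorption coefficient into the contributions of the two hemoglobin forms (standard relations, not the specific processing chain of the paper):

        \mu_{a}(\lambda) = \ln(10)\left[\varepsilon_{\mathrm{HbO_{2}}}(\lambda)\,C_{\mathrm{HbO_{2}}} + \varepsilon_{\mathrm{Hb}}(\lambda)\,C_{\mathrm{Hb}}\right], \qquad
        \mathrm{StO_{2}} = \frac{C_{\mathrm{HbO_{2}}}}{C_{\mathrm{HbO_{2}}} + C_{\mathrm{Hb}}}, \qquad
        \mathrm{THC} = C_{\mathrm{HbO_{2}}} + C_{\mathrm{Hb}}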

  17. Construct validity test of evaluation tool for professional behaviors of entry-level occupational therapy students in the United States

    Directory of Open Access Journals (Sweden)

    Hon K. Yuen

    2016-06-01

    Purpose: This study aimed to test the construct validity of an instrument to measure student professional behaviors in entry-level occupational therapy (OT) students in the academic setting. Methods: A total of 718 students from 37 OT programs across the United States answered a self-assessment survey of professional behavior that we developed. The survey consisted of ranking 28 attributes, each on a 5-point Likert scale. A split-sample approach was used for exploratory and then confirmatory factor analysis. Results: A three-factor solution with nine items was extracted using exploratory factor analysis [EFA] (n=430, 60%). The factors were ‘Commitment to Learning’ (2 items), ‘Skills for Learning’ (4 items), and ‘Cultural Competence’ (3 items). Confirmatory factor analysis (CFA) on the validation split (n=288, 40%) indicated fair fit for this three-factor model (fit indices: CFI=0.96, RMSEA=0.06, and SRMR=0.05). Internal consistency reliability estimates of each factor and the instrument ranged from 0.63 to 0.79. Conclusion: Results of the CFA in a separate validation dataset provided robust measures of goodness-of-fit for the three-factor solution developed in the EFA, and indicated that the three-factor model fitted the data well enough. Therefore, we can conclude that this student professional behavior evaluation instrument is a structurally validated tool to measure professional behaviors reported by entry-level OT students. The internal consistency reliability of each individual factor and the whole instrument was considered to be adequate to good.
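
    For readers unfamiliar with the reliability statistic quoted above, Cronbach's alpha can be computed from an item-score matrix as sketched below; the data are made up for illustration and are not the study's dataset.

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
            return k / (k - 1) * (1.0 - item_var / total_var)

        # Toy example: 5 respondents rating a 3-item factor on a 5-point scale
        toy = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
        print(round(cronbach_alpha(toy), 2))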

  18. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    Science.gov (United States)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
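
    For reference, the analytical solution mentioned above is the Lucas-Washburn description of capillary rise in a tube of radius r (standard form, neglecting gravity and inertia):

        l(t) = \sqrt{\frac{\sigma\,r\cos\theta}{2\mu}\,t},

    where \sigma is the interfacial tension, \theta the contact angle and \mu the dynamic viscosity.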

  19. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    Science.gov (United States)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  20. Modeling and Experimental Validation of a Volumetric Expander Suitable for Waste Heat Recovery from an Automotive Internal Combustion Engine Using an Organic Rankine Cycle with Ethanol

    Directory of Open Access Journals (Sweden)

    José Galindo

    2016-04-01

    Waste heat recovery (WHR) from the exhaust gas flow of automotive engines has proved to be a useful path to increase the overall efficiency of internal combustion engines (ICE). Recovery potentials of up to 7% are shown in several works in the literature. However, most of them are theoretical estimations. Some present results from prototypes fed by steady flows generated in an auxiliary gas tank and not with actual engine exhaust gases. This paper deals with the modeling and experimental validation of an organic Rankine cycle (ORC) with a swash-plate expander integrated in a 2 L turbocharged petrol engine using ethanol as the working fluid. A global simulation model of the ORC was developed and validated against experimental results, with a maximum difference of 5%. Considering the swash-plate expander as the main limiting factor, an additional specific submodel was implemented to model the physical phenomena in this element. This model allows simulating the fluid-dynamic behavior of the swash-plate expander using a 0D model (Amesim). Differences of up to 10.5% between tests and model results were found.

  1. Solar power plant performance evaluation: simulation and experimental validation

    International Nuclear Information System (INIS)

    Natsheh, E M; Albarbar, A

    2012-01-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. A perturb and observe (P and O) algorithm is used for maximizing the generated power based on a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of degraded PV panel performance, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
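
    The perturb and observe logic mentioned above can be summarized as follows; this is a textbook formulation with an assumed perturbation step, not the authors' MATLAB/SIMULINK implementation.

        def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
            """One P&O iteration: return the next reference voltage for the MPPT.
            v, p            -- present PV voltage and power measurements
            v_prev, p_prev  -- measurements from the previous iteration
            step            -- perturbation size in volts (an assumed value)."""
            dp = p - p_prev
            dv = v - v_prev
            if dp == 0:
                return v              # already at (or oscillating around) the MPP
            if (dp > 0) == (dv > 0):
                return v + step       # power rose with the last move: keep going
            return v - step           # power fell: reverse the perturbation direction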

  2. Online Energy Management Systems for Microgrids: Experimental Validation and Assessment Framework

    DEFF Research Database (Denmark)

    Hernández, Adriana Carolina Luna; Meng, Lexuan; Aldana, Nelson Leonardo Diaz

    2018-01-01

    operating costs and load disconnections. The whole energy management system has been tested experimentally in a test bench under both grid-connected and islanded mode. Also, its performance has been proved considering severe mismatches in forecast generation and load. Several experimental results have...

  3. Validation of the actuator disc and actuator line techniques for yawed rotor flows using the New Mexico experimental data

    DEFF Research Database (Denmark)

    Breton, S. P.; Shen, Wen Zhong; Ivanell, S.

    2017-01-01

    Experimental data acquired in the New Mexico experiment on a yawed 4.5m diameter rotor model turbine are used here to validate the actuator line (AL) and actuator disc (AD) models implemented in the Large Eddy Simulation code EllipSys3D in terms of loading and velocity field. Even without modelling...... the AD model can reproduce the averaged features of the flow. The importance of using high quality airfoil data (including 3D corrections) as well as a fine grid resolution is highlighted by the results obtained. Overall, it is found that both models can satisfactorily predict the 3D velocity field...... and blade loading of the New Mexico rotor under yawed inflow....

  4. [Experimental intervention study of safe injection in basic-level hospitals in Hunan by medical staff].

    Science.gov (United States)

    Li, Li; Li, Yinglan; Long, Yanfang; Zhou, Yang; Lu, Jingmei; Wu, Ying

    2013-07-01

    To experimentally intervene in the safe injection practices of medical staff in basic-level hospitals, to observe the short-term and long-term effects after the intervention, and to provide practical measures to improve safe injection. We used random sampling methods to set up groups in county hospitals and township hospitals of Hunan Province, and offered lectures, delivered a safe injection guide and brochure, and provided on-site guidance in the experimental group. We surveyed the 2 groups 1 month and 6 months after the intervention to compare unsafe injection behaviors and safe injection behaviors. One month after the intervention, the unsafe injection rate in the experimental group decreased from 27.8% to 21.7%, while in the control group the unsafe injection rate rose from 26.0% to 27.9%, a significant difference (P<0.05). Six months after the intervention, the unsafe injection rate in the experimental group declined to 18.4% while the unsafe injection rate in the control group also dropped to 22.4%, a significant difference (P<0.05). The unsafe injection rate was decreased in the experimental group at the different intervention time points, with significant differences (P<0.05). The safe injection behavior scores in the experimental group were higher than those in the control group after the 1-month and 6-month interventions (P<0.05). Lectures on safe injection, distribution of the safe injection guide, and a comprehensive intervention model can significantly change primary care practitioners' unsafe injection behaviors, and this approach is worth promoting.

  5. Analysis and experimental validation of through-thickness cracked large-scale biaxial fracture tests

    International Nuclear Information System (INIS)

    Wiesner, C.S.; Goldthorpe, M.R.; Andrews, R.M.; Garwood, S.J.

    1999-01-01

    Since 1984 TWI has been involved in an extensive series of tests investigating the effects of biaxial loading on the fracture behaviour of A533B steel. Testing conditions have ranged from the lower to upper shelf regions of the transition curve and covered a range of biaxiality ratios. In an attempt to elucidate the trends underlying the experimental results, finite element-based mechanistic models were used to analyse the effects of biaxial loading. For ductile fracture, a modified Gurson model was used and important effects on tearing behaviour were found for through-thickness cracked wide plates, as observed in upper shelf tests. For cleavage fracture, both simple T-stress methods and the Anderson-Dodds and Beremin models were used. Whilst the effect of biaxiality on surface-cracked plates was small, a marked effect of biaxial loading was found for the through-thickness crack. To further validate the numerical predictions for cleavage fracture, TWI performed an additional series of lower shelf through-thickness cracked biaxial wide plate fracture tests. These tests were performed using various biaxial loading conditions, varying from simple uniaxial loading, through equibiaxial loading, to a biaxiality ratio equivalent to a circumferential crack in a pressure vessel. These tests confirmed the prediction that there is a significant effect of biaxial loading on the cleavage fracture of through-thickness cracked plates. (orig.)

  6. Thermal fluid-solid interaction model and experimental validation for hydrostatic mechanical face seals

    Science.gov (United States)

    Huang, Weifeng; Liao, Chuanjun; Liu, Xiangfeng; Suo, Shuangfu; Liu, Ying; Wang, Yuming

    2014-09-01

    Hydrostatic mechanical face seals for reactor coolant pumps are very important for the safety and reliability of pressurized-water reactor power plants. More accurate models of the operating mechanism of the seals are needed to help improve their performance. The thermal fluid-solid interaction (TFSI) mechanism of the hydrostatic seal is investigated in this study. Numerical models of the flow field and seal assembly are developed. Based on the continuity conditions of the physical quantities at the fluid-solid interface, an on-line numerical TFSI model for the hydrostatic mechanical seal is proposed using an iterative coupling method. Dynamic mesh technology is adopted to adapt to the changing boundary shape. Experiments were performed on a test rig using a full-size test seal to obtain the leakage rate as a function of the differential pressure. The effectiveness and accuracy of the TFSI model were verified by comparing the simulation results and experimental data. Using the TFSI model, the behavior of the seal is presented, including mechanical and thermal deformation and the temperature field. The influences of the rotating speed and differential pressure of the sealing device on the temperature field, which vary widely in actual use of the seal, are studied. This research proposes an on-line, assembly-based TFSI model for hydrostatic mechanical face seals, and the model is validated by full-size experiments.
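
    The iterative coupling strategy described above can be pictured as a partitioned fixed-point loop between the fluid and solid solvers. The sketch below is a heavily simplified, generic illustration; the solver calls, relaxation factor and convergence tolerance are placeholders rather than the authors' code.

        def coupled_tfsi_step(solve_fluid, solve_solid, deform0,
                              max_iter=50, tol=1e-6, relax=0.5):
            """Partitioned thermal fluid-solid interaction loop for one operating point.
            solve_fluid(deform)              -> (pressure, heat_flux) on the deformed gap
            solve_solid(pressure, heat_flux) -> new face deformation (mechanical + thermal)
            Under-relaxed fixed-point iteration until the interface deformation converges."""
            deform = list(deform0)
            for _ in range(max_iter):
                pressure, heat_flux = solve_fluid(deform)
                new_deform = solve_solid(pressure, heat_flux)
                residual = max(abs(a - b) for a, b in zip(new_deform, deform))
                deform = [d + relax * (nd - d) for d, nd in zip(deform, new_deform)]
                if residual < tol:
                    break
            return deform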

  7. Students' views about the nature of experimental physics

    Science.gov (United States)

    Wilcox, Bethany R.; Lewandowski, H. J.

    2017-12-01

    The physics community explores and explains the physical world through a blend of theoretical and experimental studies. The future of physics as a discipline depends on training of students in both the theoretical and experimental aspects of the field. However, while student learning within lecture courses has been the subject of extensive research, lab courses remain relatively under-studied. In particular, there is little, if any, data available that address the effectiveness of physics lab courses at encouraging students to recognize the nature and importance of experimental physics within the discipline as a whole. To address this gap, we present the first large-scale, national study (N_institutions = 75 and N_students = 7167) of undergraduate physics lab courses through analysis of students' responses to a research-validated assessment designed to investigate students' beliefs about the nature of experimental physics. We find that students often enter and leave physics lab courses with ideas about experimental physics as practiced in their courses that are inconsistent with the views of practicing experimental physicists, and this trend holds at both the introductory and upper-division levels. Despite this inconsistency, we find that both introductory and upper-division students are able to accurately predict the expertlike response even in cases where their views about experimentation in their lab courses disagree. These findings have implications for the recruitment, retention, and adequate preparation of students in physics.

  8. A tool for assessing continuity of care across care levels: an extended psychometric validation of the CCAENA questionnaire

    Directory of Open Access Journals (Sweden)

    Marta Beatriz Aller

    2013-12-01

    Background: The CCAENA questionnaire was developed to assess care continuity across levels from the patients’ perspective. The aim is to provide additional evidence on the psychometric properties of the scales of this questionnaire. Methods: Cross-sectional study by means of a survey of a random sample of 1500 patients attended in primary and secondary care in three healthcare areas of the Catalan healthcare system. Data were collected in 2010 using the CCAENA questionnaire. To assess psychometric properties, an exploratory factor analysis was performed (construct validity) and the item-rest correlations and Cronbach’s alpha were calculated (internal consistency). Spearman correlation coefficients were calculated (multidimensionality) and the ability to discriminate between groups was tested. Results: The factor analysis resulted in 21 items grouped into three factors: patient-primary care provider relationship, patient-secondary care provider relationship and continuity across care levels. Cronbach’s alpha indicated good internal consistency (0.97, 0.93, 0.80) and the correlation coefficients indicated that the dimensions can be interpreted as separate scales. Scales discriminated patients according to healthcare area, age and educational level. Conclusion: The CCAENA questionnaire has proved to be a valid and reliable tool for measuring patients’ perceptions of continuity. Providers and researchers could apply the questionnaire to identify areas for healthcare improvement.

  9. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    Science.gov (United States)

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases have been established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared to all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  10. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  11. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since, commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
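
    One simple aggregate metric of the kind discussed above is the Mahalanobis distance between the predicted and measured response vectors, which accounts for correlation among the quantities. The sketch below is a generic illustration and not the Bayesian formulation developed in the paper.

        import numpy as np

        def mahalanobis_metric(pred, meas, cov):
            """Squared Mahalanobis distance between predicted and measured response vectors.
            pred, meas -- length-m vectors of model predictions and experimental means
            cov        -- m x m covariance matrix combining experimental and model uncertainty.
            Under normality the statistic is approximately chi-square with m degrees of
            freedom, so it can be compared with a chi-square critical value for an
            aggregate accept/reject check."""
            diff = np.asarray(pred, dtype=float) - np.asarray(meas, dtype=float)
            return float(diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff))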

  12. Method validation for uranium content analysis using a potentiometer T-90

    International Nuclear Information System (INIS)

    Torowati; Ngatijo; Rahmiati

    2016-01-01

    An experimental method validation has been conducted for uranium content analysis using a Potentiometer T-90. The method validation experiment was performed in the quality control laboratory of the Experimental Fuel Element Installation, PTBBN - BATAN. The objective is to determine the precision and accuracy of analytical results for uranium analysis with reference to the latest American Standard Test Method (ASTM), ASTM C1267-11, which is a modified reference method in which reagent consumption is reduced by 10% of the amount used in the original method. ASTM C1267-11 is a new standard that substitutes for the older ASTM C799, Vol. 12.01, 2003. It is, therefore, necessary to validate the renewed method. The tool used for the analysis of uranium was the Potentiometer T-90 and the material used was a standard uranium oxide powder CRM (Certified Reference Material). Validation of the method was done by analyzing the standard uranium powder with seven weighings and seven analyses. The analysis results were used to determine the accuracy, precision, Relative Standard Deviation (RSD), Horwitz coefficient of variation (CV Horwitz), and the limits of detection and quantitation. The average uranium content obtained in this method validation is 84.36% with a Standard Deviation (SD) of 0.12%, a Relative Standard Deviation (RSD) of 0.14% and a 2/3 Horwitz coefficient of variation (CV Horwitz) of 2.05%. The results show that the RSD value is smaller than the (2/3) CV Horwitz value, which means that this method has high precision. The accuracy value obtained is 0.48%, and since the acceptance limit for a high level of accuracy is an accuracy value of <2.00%, this method is regarded as having a high degree of accuracy [1]. The limit of detection (LOD) and the limit of quantitation (LOQ) are 0.0145 g/L and 0.0446 g/L, respectively. It is concluded that the ASTM C1267-11 reference method is valid for use. (author)
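
    The precision criterion described above compares the observed relative standard deviation with two-thirds of the Horwitz coefficient of variation. A minimal sketch of that comparison is given below; the replicate values are illustrative only and are not the laboratory's raw data.

        import math
        import statistics

        def horwitz_cv(mass_fraction):
            """Horwitz coefficient of variation (%) for a concentration given as a mass fraction."""
            return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

        # Seven replicate uranium contents in percent (illustrative numbers only)
        replicates = [84.30, 84.45, 84.25, 84.40, 84.50, 84.28, 84.35]
        mean = statistics.mean(replicates)
        rsd = 100.0 * statistics.stdev(replicates) / mean      # relative standard deviation, %
        limit = (2.0 / 3.0) * horwitz_cv(mean / 100.0)         # 2/3 of the Horwitz CV
        print(f"RSD = {rsd:.2f}%  vs  (2/3) CV(Horwitz) = {limit:.2f}%  ->  precise: {rsd < limit}")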

  13. Modeling and experimental validation of the solar loop for absorption solar cooling system using double-glazed collectors

    International Nuclear Information System (INIS)

    Marc, Olivier; Praene, Jean-Philippe; Bastide, Alain; Lucas, Franck

    2011-01-01

    Solar cooling applied to buildings is without a doubt an interesting alternative for reducing the energy consumption of traditional mechanical vapor-compression air conditioning systems. The study of these systems calls for more than a purely fundamental approach, including the development of numerical models in order to predict the overall installation performance. The final objective is to estimate the cooling capacity, power consumption, and overall installation performance in relation to outside factors (solar irradiation, outside temperature...). The first stage in this work consists of estimating the primary energy produced by the solar collector field. The estimation of this primary energy is crucial for evaluating the cooling capacity and therefore the cooling distribution and thermal comfort in the building. Indeed, the absorption chiller performance is directly related to its heat source. This study presents dynamic models for double-glazed solar collectors and compares the results of the simulation with experimental results taken from our test bench (two collectors). In the second part, we present an extended collector field model (36 collectors) of our solar cooling installation at the University Institute of Technology in St Pierre, Reunion Island, as well as our stratified storage tank model. A comparison of the simulation results with full-scale experimental data taken from our installation enables validation of the double-glazed solar collector and stratified tank dynamic models.

  14. Experimentally induced states of mind determine abstinent smokers' level of craving in reaction to smoking-cues

    Directory of Open Access Journals (Sweden)

    Arie Dijkstra

    2015-06-01

    Conclusions: The present studies provide experimental evidence that levels of craving can be determined by momentary states of mind. This theoretical perspective can be integrated in existing conditioning and social cognitive learning perspectives on craving and substance use.

  15. On the selection of shape and orientation of a greenhouse. Thermal modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Sethi, V.P. [Department of Mechanical Engineering, Punjab Agricultural University, Ludhiana 141 004, Punjab (India)

    2009-01-15

    In this study, the five most commonly used single-span greenhouse shapes, viz. even-span, uneven-span, vinery, modified arch and quonset type, have been selected for comparison. The length, width and height (at the center) are kept the same for all the selected shapes. A mathematical model for computing the transmitted total solar radiation (beam, diffuse and ground-reflected) at each hour, for each month and at any latitude for the selected greenhouse geometries (through each wall, inclined surface and roof) is developed for both east-west and north-south orientations. The computed transmitted solar radiation is then introduced into a transient thermal model developed to compute the hourly inside air temperature for each shape and orientation. Experimental validation of both models is carried out against the measured total solar radiation and inside air temperature for an east-west oriented, even-span greenhouse (for a typical day in summer) at Ludhiana (31°N and 77°E), Punjab, India. During the experimentation, a capsicum crop was grown inside the greenhouse. The predicted and measured values are in close agreement. Results show that the uneven-span shape greenhouse receives the maximum and the quonset shape receives the minimum solar radiation during each month of the year at all latitudes. The east-west orientation is best suited for year-round greenhouse applications at all latitudes, as this orientation receives more total radiation in winter and less in summer, except near the equator. Results also show that the inside air temperature rise depends upon the shape of the greenhouse, and this variation from the uneven-span shape to the quonset shape is 4.6 °C (maximum) and 3.5 °C (daily average) at 31°N latitude. (author)

  16. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale-up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  17. Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: meta-regression of six UK longitudinal studies

    Science.gov (United States)

    2013-01-01

    Background Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of true predictor-outcome correlation across the range of applicant abilities. Methods Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry in the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation. Results Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations, were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (mean = .245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs/O-levels
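
    For orientation, the construct-level values quoted above rest on correcting observed correlations for restriction of range in the selected group (and for outcome unreliability); the standard direct range-restriction (Thorndike Case II) correction has the form

        r_{c} = \frac{U\,r}{\sqrt{1 + (U^{2} - 1)\,r^{2}}}, \qquad U = \frac{s_{\mathrm{applicants}}}{s_{\mathrm{selected}}},

    where r is the correlation observed in the selected group and U is the ratio of predictor standard deviations in the applicant pool and in the selected group. This is given only as orientation to the method of Hunter, Schmidt and Le; the paper's adaptation for right-censored outcomes is not reproduced here.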

  18. Quantitative comparison of PZT and CMUT probes for photoacoustic imaging: Experimental validation.

    Science.gov (United States)

    Vallet, Maëva; Varray, François; Boutet, Jérôme; Dinten, Jean-Marc; Caliano, Giosuè; Savoia, Alessandro Stuart; Vray, Didier

    2017-12-01

    Photoacoustic (PA) signals are short ultrasound (US) pulses typically characterized by a single-cycle shape, often referred to as N-shape. The spectral content of such wideband signals ranges from a few hundred kilohertz to several tens of megahertz. Typical reception frequency responses of classical piezoelectric US imaging transducers, based on PZT technology, are not sufficiently broadband to fully preserve the entire information contained in PA signals, which are then filtered, thus limiting PA imaging performance. Capacitive micromachined ultrasonic transducers (CMUT) are rapidly emerging as a valid alternative to conventional PZT transducers in several medical ultrasound imaging applications. As compared to PZT transducers, CMUTs exhibit both higher sensitivity and significantly broader frequency response in reception, making their use attractive in PA imaging applications. This paper explores the advantages of the CMUT larger bandwidth in PA imaging by carrying out an experimental comparative study using various CMUT and PZT probes from different research laboratories and manufacturers. PA acquisitions are performed on a suture wire and on several home-made bimodal phantoms with both PZT and CMUT probes. Three criteria, based on the evaluation of pure receive impulse response, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) respectively, have been used for a quantitative comparison of imaging results. The measured fractional bandwidths of the CMUT arrays are larger compared to PZT probes. Moreover, both SNR and CNR are enhanced by at least 6 dB with CMUT technology. This work highlights the potential of CMUT technology for PA imaging through qualitative and quantitative parameters.

  19. Vivaldi: Visualization and validation of biomacromolecular NMR structures from the PDB

    Science.gov (United States)

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-01-01

    We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. © Proteins 2013. © 2012 Wiley Periodicals, Inc. PMID:23180575

  20. Experimental investigation and CFD validation of Horizontal Air/Water slug flow

    International Nuclear Information System (INIS)

    Vallee, Christophe; Hoehne, Thomas

    2007-01-01

    For the investigation of co-current two-phase flows at atmospheric pressure and room temperature, the Horizontal Air/Water Channel (HAWAC) was built at Forschungszentrum Dresden-Rossendorf (FZD). At the channel inlet, a special device provides adjustable and well-defined inlet boundary conditions and therefore very good CFD validation possibilities. The HAWAC facility is designed for the application of optical measurement techniques, which deliver the high resolution required for CFD validation. Therefore, the 8 m long acrylic glass test section with rectangular cross-section provides good observation possibilities. High-speed video observation was applied during slug flow. The camera images show the generation of slug flow from the inlet of the test section. In parallel to the experiments, CFD calculations were carried out. The aim of the numerical simulations is to validate the prediction of slug flow with the existing multiphase flow models built into the commercial code ANSYS CFX. The Euler-Euler two-fluid model with the free surface option was applied to a grid of 600,000 control volumes. The turbulence was modelled separately for each phase using the k-ω based shear stress transport (SST) turbulence model. The results compare well in terms of slug formation and breaking. The qualitative agreement between calculation and experiment is encouraging, while quantitative comparisons show that further model improvement is needed. (author)