WorldWideScience

Sample records for experimental data

  1. HIRENASD Experimental Data

    Data.gov (United States)

    National Aeronautics and Space Administration — Experimental data for the HIRENASD configuration. The zip files below contain the experimental data for the pressure coefficients, both the static and the forced oscillation...

  2. Experimental quantum data locking

    Science.gov (United States)

    Liu, Yang; Cao, Zhu; Wu, Cheng; Fukuda, Daiji; You, Lixing; Zhong, Jiaqiang; Numata, Takayuki; Chen, Sijing; Zhang, Weijun; Shi, Sheng-Cai; Lu, Chao-Yang; Wang, Zhen; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2016-08-01

    Classical correlation can be locked via quantum means: quantum data locking. With a short secret key, one can lock an exponentially large amount of information in order to make it inaccessible to unauthorized users without the key. Quantum data locking presents a resource-efficient alternative to one-time pad encryption which requires a key no shorter than the message. We report experimental demonstrations of a quantum data locking scheme originally proposed by D. P. DiVincenzo et al. [Phys. Rev. Lett. 92, 067902 (2004), 10.1103/PhysRevLett.92.067902] and a loss-tolerant scheme developed by O. Fawzi et al. [J. ACM 60, 44 (2013), 10.1145/2518131]. We observe that the unlocked amount of information is larger than the key size in both experiments, exhibiting strong violation of the incremental proportionality property of classical information theory. As an application example, we show the successful transmission of a photo over a lossy channel with quantum data (un)locking and error correction.

  3. HIRENASD Experimental Data - matlab format

    Data.gov (United States)

    National Aeronautics and Space Administration — This resource contains the experimental data that was included in the tecplot input files, but in matlab format. dba1_cp has all the results and is dimensioned (7,2); first...

  4. Interpreting physicochemical experimental data sets.

    Science.gov (United States)

    Colclough, Nicola; Wenlock, Mark C

    2015-09-01

    With the wealth of experimental physicochemical data available to chemoinformaticians from the literature, commercial, and company databases, an increasing challenge is the interpretation of such datasets. Subtle differences in the experimental methodology used to generate these datasets can give rise to variations in physicochemical property values. Such methodology nuances will be apparent to an expert experimentalist but not necessarily to the data analyst and modeller. This paper describes the differences between common methodologies for measuring the four most important physicochemical properties, namely aqueous solubility, octan-1-ol/water distribution coefficient, pK(a), and plasma protein binding, highlighting key factors that can lead to systematic differences. Insight is given into how to identify datasets suitable for combining.

  5. Auditory presentation of experimental data

    Science.gov (United States)

    Lunney, David; Morrison, Robert C.

    1990-08-01

    Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
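
    The basic pitch-mapping idea described in this record can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual system: the frequency range and the log-scale mapping are assumptions, chosen so that equal data steps give equal musical intervals.

```python
# Sketch of mapping data values to tones of varying pitch. Each value is
# placed on a log-spaced frequency axis between f_min and f_max, so a
# fixed increment in the data corresponds to a fixed musical interval.
# Frequencies and the toy "spectrum" below are arbitrary illustrations.

def to_frequency(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map value in [v_min, v_max] to a frequency in Hz on a log scale."""
    frac = (value - v_min) / (v_max - v_min)
    return f_min * (f_max / f_min) ** frac

# A toy "spectrum": absorbance values become a sequence of tones.
spectrum = [0.1, 0.8, 0.3, 1.0, 0.5]
tones = [round(to_frequency(v, 0.0, 1.0), 1) for v in spectrum]
print(tones)
```

    Two visibly different spectra then yield audibly different tone sequences, which is the property the record reports for infrared "musical patterns".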

  6. Quantum Experimental Data in Psychology and Economics

    Science.gov (United States)

    Aerts, Diederik; D'Hooghe, Bart; Haven, Emmanuel

    2010-12-01

    We prove a theorem which shows that a collection of experimental data of probabilistic weights related to decisions with respect to situations and their disjunction cannot be modeled within a classical probabilistic weight structure in case the experimental data contain the effect referred to as the ‘disjunction effect’ in psychology. We identify different experimental situations in psychology, more specifically in concept theory and in decision theory, and in economics (namely situations where Savage’s Sure-Thing Principle is violated) where the disjunction effect appears, and we point out the common nature of the effect. We analyze how our theorem constitutes a no-go theorem for classical probabilistic weight structures for common experimental data when the disjunction effect affects the values of these data. We put forward a simple geometric criterion that reveals the non-classicality of the considered probabilistic weights, and we illustrate this criterion by means of experimentally measured membership weights of items with respect to pairs of concepts and their disjunctions. The violation of the classical probabilistic weight structure is closely analogous to the violation of the well-known Bell inequalities studied in quantum mechanics. The no-go theorem we prove here has a status analogous to the well-known no-go theorems for hidden variable theories in quantum mechanics with respect to experimental data obtained in quantum laboratories. Our analysis puts forward a strong argument in favor of using the quantum formalism for modeling the considered psychological experimental data.
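
    A criterion of the kind this record describes can be sketched as an inequality check. The bounds below are the standard classical (Kolmogorovian) conditions for a membership-weight triple, stated here as an illustrative sketch; the weights in the example are invented, not the experimentally measured ones.

```python
# Sketch of a classicality check for membership weights mu(A), mu(B),
# and mu(A or B). A single classical probability space requires
#   max(mu_a, mu_b) <= mu_ab <= min(1.0, mu_a + mu_b).
# A triple violating either bound cannot be modeled classically -- the
# signature of the "disjunction effect".

def is_classical(mu_a: float, mu_b: float, mu_ab: float) -> bool:
    """Return True if the weight triple admits a classical model."""
    return max(mu_a, mu_b) <= mu_ab <= min(1.0, mu_a + mu_b)

# Illustrative (made-up) weights for an item vs. two concepts and their
# disjunction: the first triple is classical, the second is not, because
# the disjunction weight falls below max(mu_a, mu_b).
print(is_classical(0.5, 0.4, 0.6))
print(is_classical(0.7, 0.6, 0.5))
```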

  7. Data Analysis in Experimental Biomedical Research

    DEFF Research Database (Denmark)

    Markovich, Dmitriy

    This thesis covers two unrelated topics in experimental biomedical research: data analysis in thrombin generation experiments (collaboration with Novo Nordisk A/S), and analysis of images and physiological signals in the context of neurovascular signalling and blood flow regulation in the brain (collaboration with University of Copenhagen). The ongoing progress in experimental methods of thrombin generation has made it possible to introduce ready-to-use commercial assays for thrombin measurement. On the other hand, commercial assays use “black box” data analysis, which makes it nearly impossible for researchers to critically assess and compare obtained results. We reverse engineered the data analysis performed by CAT, a de facto standard assay in the field. This revealed a number of possibilities to improve its methods of data analysis. We found that experimental calibration data is described well with textbook...

  8. Developing Phenomena Models from Experimental Data

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A systematic approach for developing phenomena models from experimental data is presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression, and it is shown how these techniques can be used to uncover unknown functionality behind various phenomena in first engineering principles models using experimental data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes. To illustrate the performance of the approach, a case study is presented, which shows how an appropriate phenomena model for the growth rate of biomass in a fed-batch bioreactor can be inferred from data.

  10. Processing Contexts for Experimental HEP Data

    Energy Technology Data Exchange (ETDEWEB)

    Paterno, Marc [Fermilab; Green, Chris [Fermilab

    2017-02-06

    This document provides, for those not closely associated with the experimental High Energy Physics (HEP) community, an introduction to data input and output requirements for a variety of data processing tasks. Examples in it are drawn from the art event processing framework, and from experiments and projects using art, most notably the LArSoft and NuTools projects.

  11. Modeling of Experimental Adsorption Isotherm Data

    Directory of Open Access Journals (Sweden)

    Xunjun Chen

    2015-01-01

    Adsorption is considered to be one of the most effective technologies widely used in global environmental protection. Modeling of experimental adsorption isotherm data is an essential way of predicting the mechanisms of adsorption, which will lead to an improvement in the area of adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data, which were obtained by batch tests in the lab. The linearized and non-linearized isotherm models were compared and discussed. To determine the best-fit isotherm model, the correlation coefficient (r2) and standard errors (S.E.) for each parameter were used to evaluate the data. The modeling results showed that the non-linear Langmuir model fit the data better than the others, with relatively higher r2 values and smaller S.E. The linear Langmuir model had the highest value of r2; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
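
    The linearized Langmuir fit this record compares can be sketched as follows. The equilibrium data below are synthetic, generated from known parameters so the fit can be checked; variable names and values are illustrative only.

```python
# Sketch of fitting the linearized Langmuir isotherm to batch data.
# Model: qe = qmax * KL * Ce / (1 + KL * Ce); the linearized form is
#   Ce/qe = Ce/qmax + 1/(qmax*KL),
# a straight line in Ce with slope 1/qmax and intercept 1/(qmax*KL).

def linfit(x, y):
    """Ordinary least squares y = a*x + b; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g) generated from
# qmax = 10, KL = 0.5 -- the fit should recover these parameters.
qmax_true, kl_true = 10.0, 0.5
ce = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
qe = [qmax_true * kl_true * c / (1 + kl_true * c) for c in ce]

slope, intercept, r2 = linfit(ce, [c / q for c, q in zip(ce, qe)])
qmax, kl = 1.0 / slope, slope / intercept
print(round(qmax, 3), round(kl, 3), round(r2, 6))
```

    With noisy real data, the record's point is that a high r2 on this linearized form can coexist with a biased qmax, which is why the non-linear fit is preferred.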

  12. Nonlinear hierarchical modeling of experimental infection data.

    Science.gov (United States)

    Singleton, Michael D; Breheny, Patrick J

    2016-08-01

    In this paper, we propose a nonlinear hierarchical model (NLHM) for analyzing longitudinal experimental infection (EI) data. The NLHM offers several improvements over commonly used alternatives such as repeated measures analysis of variance (RM-ANOVA) and the linear mixed model (LMM). It enables comparison of relevant biological properties of the course of infection including peak intensity, duration and time to peak, rather than simply comparing mean responses at each observation time. We illustrate the practical benefits of this model and the insights it yields using data from experimental infection studies on equine arteritis virus. Finally, we demonstrate via simulation studies that the NLHM substantially reduces bias and improves the power to detect differences in relevant features of the infection response between two populations. For example, to detect a 20% difference in response duration between two groups (n=15) in which the peak time and peak intensity were identical, the RM-ANOVA test had a power of just 11%, and LMM a power of just 12%. By comparison, the nonlinear model we propose had a power of 58% in the same scenario, while controlling the Type I error rate better than the other two methods. Copyright © 2016 Elsevier B.V. All rights reserved.
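
    The curve features the NLHM compares (peak intensity, time to peak, duration) can be illustrated with a short sketch. The log-normal-shaped response curve below is an assumed stand-in for an infection-intensity trajectory, not the authors' actual model.

```python
# Sketch of extracting the three biologically relevant features from a
# unimodal infection-response curve: peak intensity, time to peak, and
# duration (time spent above half of peak). The curve parameters are
# illustrative assumptions.
import math

def response(t, peak=8.0, t_peak=5.0, width=0.4):
    """Unimodal intensity curve peaking at t_peak with height peak."""
    return peak * math.exp(-((math.log(t) - math.log(t_peak)) ** 2)
                           / (2 * width ** 2))

# Evaluate on a fine grid and extract the three features numerically.
ts = [0.1 * k for k in range(1, 301)]          # t in (0, 30]
ys = [response(t) for t in ts]
peak_y = max(ys)
peak_t = ts[ys.index(peak_y)]
above_half = [t for t, y in zip(ts, ys) if y >= peak_y / 2]
duration = above_half[-1] - above_half[0]
print(round(peak_y, 2), round(peak_t, 2), round(duration, 2))
```

    Fitting such a curve per subject and comparing these derived features between groups is the kind of comparison the NLHM enables, in contrast to timepoint-wise mean comparisons.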

  13. Comparison of Blade Element Momentum Theory to Experimental Data Using Experimental Lift, Drag, and Power Data

    Science.gov (United States)

    Nealon, Tara; Miller, Mark; Kiefer, Janik; Hultmark, Marcus

    2016-11-01

    Blade Element Momentum (BEM) codes have often been used to simulate the power output and loads on wind turbine blades without performing CFD. When computing the lift and drag forces on the blades, the coefficients of lift and drag are normally calculated by interpolating values from standard airfoil data based on the angle of attack. However, several empirical corrections are needed, and due to a lack of empirical data to compare against, the accuracy of these corrections, and of BEM in general, is still not well known. For this presentation, results from an in-house BEM code computed using experimental lift and drag coefficient data for the airfoils of the V27 wind turbine will be presented. The data were gathered in Princeton University's High Reynolds Number Testing Facility (HRTF) at full-scale Reynolds numbers and over a large range of angles of attack. The BEM results are compared to experimental data for the same wind turbine, also obtained in the HRTF at full-scale Reynolds number and TSR. Conclusions will be drawn about the accuracy of the BEM code and its corrections when using standard airfoil data versus the experimental data, as well as about future applications to potentially improve large-eddy simulations of wind turbines in a similar manner.
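
    The core iteration a BEM code performs for each blade section can be sketched as follows. This is a minimal illustration with an invented geometry and a toy airfoil polar standing in for measured lift and drag data; no tip-loss or high-induction corrections are applied, so it shows the fixed-point iteration only, not a validated model.

```python
# Minimal blade-element-momentum sketch for one blade section. The axial
# (a) and tangential (ap) induction factors are iterated to convergence
# using the standard momentum/blade-element balance.
import math

def cl_cd(alpha):
    """Toy airfoil polar: thin-airfoil lift slope, quadratic drag."""
    cl = 2 * math.pi * alpha
    cd = 0.01 + 0.02 * alpha ** 2
    return cl, cd

def bem_section(v_wind, omega, r, chord, twist, n_blades=3, tol=1e-8):
    """Iterate induction factors for one annulus; returns (a, ap, alpha)."""
    sigma = n_blades * chord / (2 * math.pi * r)   # local solidity
    a, ap = 0.0, 0.0
    for _ in range(500):
        phi = math.atan2((1 - a) * v_wind, (1 + ap) * omega * r)
        alpha = phi - twist                        # local angle of attack
        cl, cd = cl_cd(alpha)
        cn = cl * math.cos(phi) + cd * math.sin(phi)
        ct = cl * math.sin(phi) - cd * math.cos(phi)
        a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new, alpha
        a, ap = a_new, ap_new
    raise RuntimeError("BEM iteration did not converge")

# Invented operating point: 8 m/s wind, 3 rad/s rotor, section at r = 10 m.
a, ap, alpha = bem_section(v_wind=8.0, omega=3.0, r=10.0, chord=1.2, twist=0.05)
print(round(a, 4), round(ap, 4), round(alpha, 4))
```

    Replacing `cl_cd` with interpolation of measured polars, as the record describes, is exactly where the experimental HRTF data would enter.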

  14. Status of experimental data for neutron induced reactions

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Mamoru [Tohoku Univ., Sendai (Japan)

    1998-11-01

    A short review is presented on the status of experimental data for neutron induced reactions above 20 MeV, based on the EXFOR data base and journals. Experimental data which were obtained in a systematic manner and/or by multiple authors are surveyed and tabulated for nuclear data evaluation and for benchmark testing of the evaluated data. (author). 61 refs.

  15. Neridronate: From Experimental Data to Clinical Use

    Directory of Open Access Journals (Sweden)

    Addolorata Corrado

    2017-09-01

    Neridronate is an amino-bisphosphonate that has been officially approved as a treatment for osteogenesis imperfecta, Paget’s disease of bone and type I complex regional pain syndrome in Italy. Neridronate is administered either intravenously or intramuscularly; thus, it represents a valid option for both cases with contraindications to the use of oral bisphosphonates and cases with contraindications or an inability to receive an intravenous administration of these drugs. Furthermore, although the official authorized use of neridronate is limited to only 3 bone diseases, many experimental and clinical studies support the rationale for its use and provide evidence of its effectiveness in other pathologic bone conditions that are characterized by altered bone remodelling.

  16. Experimental PCR data on soil DNA extracts

    Science.gov (United States)

    Griffin, Dale W.

    2016-01-01

    Bacillus species and B. anthracis presence/absence data were determined in 4,770 soil samples collected across the contiguous United States in collaboration with the USEPA. PCR data for Bacillus species and B. anthracis rpoB gene amplicon detection were reported as non-detect (n), low (l), medium (m), or high (h). Results for both the pag and lef genes of the pX01 plasmid were reported by the University of South Florida's Center for Biological Defense. These data were recorded as negative or positive for each of the genes and included the following combinations: neg/neg, pos/neg, neg/pos, and pos/pos. Data for the pX02 plasmid were recorded as negative (blank) or positive (Y).

  17. Experimental PCR Data on Soil DNA Extracts

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bacillus species and B. anthracis presence/absence data were determined in 4,770 soil samples collected across the contiguous United States, in cooperation with the...

  18. ALGORITHM OF PRIMARY STATISTICAL ANALYSIS OF ARRAYS OF EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    LAUKHIN D. V.

    2017-02-01

    Purpose: construction of an algorithm for preliminary (primary) estimation of arrays of experimental data, for further obtaining a mathematical model of the process under study. Methodology: use of the main regularities of the theory of processing arrays of experimental values in the initial analysis of the data. Originality: an algorithm for performing a primary statistical analysis of arrays of experimental data is given. Practical value: development of methods for revealing statistically unreliable values in arrays of experimental data, for their subsequent detailed analysis and for construction of a mathematical model of the studied processes.

  19. 16 CFR 1702.9 - Relevant experimental data.

    Science.gov (United States)

    2010-01-01

    ... 1702.9 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION PROCEDURES AND REQUIREMENTS § 1702.9 Relevant experimental data. Experimental data are generated in both animals...

  20. EXPERIMENTAL EVALUATION OF LIDAR DATA VISUALIZATION SCHEMES

    Directory of Open Access Journals (Sweden)

    S. Ghosh

    2012-07-01

    LiDAR (Light Detection and Ranging) has attained the status of an industry-standard method of data collection for gathering three-dimensional topographic information. Datasets captured through LiDAR are dense, redundant and perceivable from multiple directions, unlike other geospatial datasets collected through conventional methods. This three-dimensional information has triggered an interest in the scientific community to develop methods for visualizing LiDAR datasets and value-added products. Elementary schemes of visualization use point clouds with intensity or colour, and triangulation- and tetrahedralization-based terrain models draped with texture. Newer methods use feature extraction, either through classification or segmentation. In this paper, the authors conducted a visualization experience survey in which 60 participants responded to a questionnaire. The questionnaire poses six different questions on the qualities of feature and depth perception for 12 visualization schemes, answered on a scale of 1 to 10. Results are analysed using the non-parametric Friedman test, with post-hoc analysis for hypothetically ranking the visualization schemes based on the ratings received, and the rankings are finally confirmed through the Page trend test. Results show that a heuristic-based visualization scheme developed by Ghosh and Lohani (2011) performs best in terms of feature and depth perception.
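
    The Friedman test used to rank the schemes can be sketched directly from its rank-sum definition. The ratings below are invented, not the survey's actual responses.

```python
# Sketch of the Friedman rank test: n participants each rate k schemes,
# ratings are ranked within each participant (ties get average ranks),
# and the statistic is
#   Q = 12 / (n*k*(k+1)) * sum_j R_j^2  -  3*n*(k+1),
# where R_j is the rank sum of scheme j. Under H0, Q is approximately
# chi-squared with k-1 degrees of freedom.

def friedman_q(ratings):
    """ratings: list of rows, one row of k scores per participant."""
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the run of tied scores
            avg = (i + j) / 2 + 1           # mean of the tied rank positions
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

# Three schemes rated 1-10 by four participants; scheme 3 is always best.
ratings = [[3, 5, 9], [4, 6, 8], [2, 5, 10], [3, 4, 9]]
print(round(friedman_q(ratings), 3))
```

    Here every participant produces the same ranking, so Q reaches its maximum for n=4, k=3 and exceeds the chi-squared critical value at df=2.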

  1. HIRENASD Experimental Data, Static Cp Plots and Data files

    Data.gov (United States)

    National Aeronautics and Space Administration — Tecplot (ascii) and matlab files are posted here for the Static pressure coefficient data sets. To download all of the data in either tecplot format or matlab...

  2. Dynamic vehicle model for handling performance using experimental data

    Directory of Open Access Journals (Sweden)

    SangDo Na

    2015-11-01

    An analytical vehicle model is essential for the development of vehicle design and performance. Various vehicle models have different complexities, assumptions and limitations depending on the type of vehicle analysis. An accurate full vehicle model is essential in representing the behaviour of the vehicle in order to estimate vehicle dynamic system performance such as ride comfort and handling. An experimental vehicle model is developed in this article, which employs experimental kinematic and compliance data measured between the wheel and chassis. From these data, a vehicle model, which includes dynamic effects due to vehicle geometry changes, has been developed. The experimental vehicle model was validated using an instrumented experimental vehicle and data such as a step-change steering input. This article shows a process to develop and validate an experimental vehicle model to enhance the accuracy of handling performance, which derives from a precise suspension model built from experimentally measured vehicle data. The experimental force data obtained from a suspension parameter measuring device are employed for precise modelling of the steering and handling response. The steering system is modelled by a lumped model, with stiffness coefficients identified by comparison against the measured steering stiffness. The outputs, specifically the yaw rate and lateral acceleration of the vehicle, are verified against experimental results.

  3. Steam as turbine blade coolant: Experimental data generation

    Energy Technology Data Exchange (ETDEWEB)

    Wilmsen, B.; Engeda, A.; Lloyd, J.R. [Michigan State Univ., East Lansing, MI (United States)

    1995-10-01

    Steam as a coolant is a possible option to cool blades in high temperature gas turbines. However, practically no experimental data exist to quantify steam as a coolant. This work deals with an attempt to generate such data and with the design of the experimental setup used for that purpose. In order to guide the direction of the experiments, a preliminary theoretical and empirical prediction of the expected experimental data is performed and presented here. This initial analysis also compares the coolant properties of steam and air.

  4. Improving plant bioaccumulation science through consistent reporting of experimental data

    DEFF Research Database (Denmark)

    Fantke, Peter; Arnot, Jon A.; Doucette, William J.

    2016-01-01

    Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent...

  5. Detection of outliers in a gas centrifuge experimental data

    Directory of Open Access Journals (Sweden)

    M. C. V. Andrade

    2005-09-01

    Isotope separation with a gas centrifuge is a very complex process. Development and optimization of a gas centrifuge requires experimentation. These data contain experimental errors, and like other experimental data, there may be some gross errors, also known as outliers. The detection of outliers in gas centrifuge experimental data is quite complicated because there is not enough repetition for precise statistical determination, and the physical equations may be applied only to control of the mass flow. Moreover, the concentrations are poorly predicted by phenomenological models. This paper presents the application of a three-layer feed-forward neural network to the detection of outliers in the analysis of a very extensive experiment.

  6. The experimental uncertainty of heterogeneous public K(i) data.

    Science.gov (United States)

    Kramer, Christian; Kalliokoski, Tuomo; Gedeck, Peter; Vulpetti, Anna

    2012-06-14

    The maximum achievable accuracy of in silico models depends on the quality of the experimental data. Consequently, experimental uncertainty defines a natural upper limit to the predictive performance possible. Models that yield errors smaller than the experimental uncertainty are necessarily overtrained. A reliable estimate of the experimental uncertainty is therefore of high importance to all originators and users of in silico models. The data deposited in ChEMBL was analyzed for reproducibility, i.e., the experimental uncertainty of independent measurements. Careful filtering of the data was required because ChEMBL contains unit-transcription errors, undifferentiated stereoisomers, and repeated citations of single measurements (90% of all pairs). The experimental uncertainty is estimated to yield a mean error of 0.44 pK(i) units, a standard deviation of 0.54 pK(i) units, and a median error of 0.34 pK(i) units. The maximum possible squared Pearson correlation coefficient (R(2)) on large data sets is estimated to be 0.81.
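
    The replicate-pair analysis behind these numbers can be sketched as follows. The pKi pairs are invented, and the noise-ceiling formula for the maximum R(2) is a standard illustrative form, not necessarily the paper's exact derivation.

```python
# Sketch of estimating experimental uncertainty from pairs of
# independent pKi measurements of the same compound/target, and the
# ceiling that uncertainty puts on model correlation via the
# noise-ceiling bound  R2_max ~ 1 - sigma_err^2 / sigma_data^2.
import statistics

# Made-up replicate pairs (two independent pKi measurements each).
pairs = [(6.1, 6.5), (7.3, 7.2), (5.0, 5.9), (8.2, 8.0),
         (6.8, 6.3), (7.9, 7.5), (4.9, 5.2), (6.0, 6.1)]

diffs = [abs(a - b) for a, b in pairs]
mean_err = statistics.mean(diffs)
median_err = statistics.median(diffs)

# Uncertainty of a single measurement: sd of the paired differences
# divided by sqrt(2), since each difference carries two error terms.
sd_single = statistics.stdev(a - b for a, b in pairs) / 2 ** 0.5

all_values = [v for p in pairs for v in p]
r2_max = 1 - sd_single ** 2 / statistics.variance(all_values)
print(round(mean_err, 3), round(median_err, 3), round(r2_max, 3))
```

    On the full filtered ChEMBL pairs the record reports a mean error of 0.44 pK(i), a median of 0.34 pK(i), and a ceiling of R(2) ≈ 0.81; the sketch only shows the shape of that calculation.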

  7. Show and tell: disclosure and data sharing in experimental pathology

    Directory of Open Access Journals (Sweden)

    Paul N. Schofield

    2016-06-01

    Reproducibility of data from experimental investigations using animal models is increasingly under scrutiny because of the potentially negative impact of poor reproducibility on the translation of basic research. Histopathology is a key tool in biomedical research, in particular for the phenotyping of animal models to provide insights into the pathobiology of diseases. Failure to disclose and share crucial histopathological experimental details compromises the validity of the review process and reliability of the conclusions. We discuss factors that affect the interpretation and validation of histopathology data in publications and the importance of making these data accessible to promote replicability in research.

  8. Analysis of experimental data sets for local scour depth around ...

    African Journals Online (AJOL)

    2010-08-17

    This study sought to answer the following questions: Firstly, can data collected ... At the first stage of this study, 7 experimental data sets ...

  9. Experimental Data and Geometric Analysis Repository-EDGAR

    NARCIS (Netherlands)

    Aras, K.; Good, W.; Tate, J.; Burton, B.; Brooks, D.; Coll-Font, J.; Doessel, O.; Schulze, W.; Potyagaylo, D.; Wang, L.; Dam, P.M. van; MacLeod, R.

    2015-01-01

    INTRODUCTION: The "Experimental Data and Geometric Analysis Repository", or EDGAR, is an Internet-based archive of curated data that are freely distributed to the international research community for the application and validation of electrocardiographic imaging (ECGI) techniques. The EDGAR project...

  10. Advanced Crystallographic Data Collection Protocols for Experimental Phasing.

    Science.gov (United States)

    Finke, Aaron D; Panepucci, Ezequiel; Vonrhein, Clemens; Wang, Meitian; Bricogne, Gérard; Oliéric, Vincent

    2016-01-01

    Experimental phasing by single- or multi-wavelength anomalous dispersion (SAD or MAD) has become the most popular method of de novo macromolecular structure determination. Continuous advances at third-generation synchrotron sources have enabled the deployment of rapid data collection protocols that are capable of recording SAD or MAD data sets. However, procedural simplifications driven by the pursuit of high throughput have led to a loss of sophistication in data collection strategies, adversely affecting measurement accuracy from the viewpoint of anomalous phasing. In this chapter, we detail optimized strategies for collecting high-quality data for experimental phasing, with particular emphasis on minimizing errors from radiation damage as well as from the instrument. This chapter also emphasizes data processing for "on-the-fly" decision-making during data collection, a critical process when data quality depends directly on information gathered while at the synchrotron.

  11. Search for periodicities in experimental data using an autoregression data model

    CERN Document Server

    Belashev, B Z

    2001-01-01

    To process data obtained during interference experiments in high-energy physics, methods of spectral analysis are employed. Methods of spectral analysis in which an autoregression model of the experimental data is used, such as the maximum entropy technique and the Pisarenko and Prony methods, are described. To show the potential of the methods, experimental and simulated data are discussed as an example.

  12. Experimental water droplet impingement data on modern aircraft surfaces

    Science.gov (United States)

    Papadakis, Michael; Breer, Marlin D.; Craig, Neil C.; Bidwell, Colin S.

    1991-01-01

    An experimental method has been developed to determine the water droplet impingement characteristics on two- and three-dimensional aircraft surfaces. The experimental water droplet impingement data are used to validate particle trajectory analysis codes that are used in aircraft icing analyses and engine inlet particle separator analyses. The aircraft surface is covered with thin strips of blotter paper in areas of interest. The surface is then exposed to an airstream that contains a dyed-water spray cloud. The water droplet impingement data are extracted from the dyed blotter paper strips by measuring the optical reflectance of each strip with an automated reflectometer. Preliminary experimental and analytical impingement efficiency data are presented for an NLF(1)-0414F airfoil, a swept MS(1)-0317 airfoil, a swept NACA 0012 wingtip, and a Boeing 737-300 engine inlet model.

  13. A Comparison of Experimental EPMA Data and Monte Carlo Simulations

    Science.gov (United States)

    Carpenter, P. K.

    2004-01-01

    Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.

  14. Analysis of experimental data sets for local scour depth around ...

    African Journals Online (AJOL)

    The performance of soft computing techniques to analyse and interpret the experimental data of local scour depth around bridge abutment, measured at different laboratory conditions and environment, is presented. The scour around bridge piers and abutments is, in the majority of cases, the main reason for bridge failures.

  15. An experimental method for ripple minimization in transmission data ...

    Indian Academy of Sciences (India)

    An experimental method for ripple minimization in transmission data for industrial X-ray computed tomography imaging system ... This emanates from the fact that in a computed tomographic imaging system, statistical variation inherent in the penetrating radiation used to probe the specimen, electronic noise generated in ...

  16. AeroValve Experimental Test Data Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Noakes, Mark W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-09-01

    This report documents the collection of experimental test data and presents performance characteristics for the AeroValve brand prototype pneumatic bidirectional solenoid valves tested at the Oak Ridge National Laboratory (ORNL) in July/August 2014 as part of a validation of AeroValve energy efficiency claims. The test stand and control programs were provided by AeroValve. All raw data and processing are included in the report attachments.

  17. Principles of Experimental Design for Big Data Analysis.

    Science.gov (United States)

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  18. Principles of Experimental Design for Big Data Analysis

    Science.gov (United States)

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  19. Optimization of Regression Models of Experimental Data Using Confirmation Points

    Science.gov (United States)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance are used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
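    The two-subset metric described above can be sketched as follows. This is a minimal illustration assuming an ordinary linear least-squares model; the function and variable names are mine, not the paper's.

```python
import numpy as np

def confirmation_search_metric(X_fit, y_fit, X_conf, y_conf):
    """Greater of (a) the standard deviation of the PRESS residuals of
    the fit points and (b) the standard deviation of the response
    residuals of the withheld confirmation points. A plain linear
    least-squares model is assumed; names are illustrative."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)

    # PRESS (leave-one-out) residuals via the hat matrix: e_i / (1 - h_ii)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T
    press = (y_fit - X_fit @ beta) / (1.0 - np.diag(H))

    # Ordinary response residuals at the confirmation points
    conf_resid = y_conf - X_conf @ beta

    return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))
```

    Because the larger of the two standard deviations is returned, both are guaranteed to be less than or equal to the value minimized during the search.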

  20. Simulation of PVT and swelling experimental data: a systematic evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, Paulo S.M.V. [PETROBRAS S.A., Salvador, BA (Brazil). Unidade de Negocios da Bahia]. E-mail: psrocha@petrobras.com.br; Alves, Danilo C.R.; Sacramento, Vinicio S.; Costa, Gloria M.N. [Universidade Salvador (UNIFACS), Salvador, BA (Brazil). Centro de Estudos em Petroleo e Gas Natural (CEPGN)]. E-mail: gloria.costa@unifacs.br

    2004-07-01

    Accurate data on the phase behavior of oil and gas mixtures are needed, for example, for the design of process plants and for reservoir simulation studies. Often, experimental PVT data are available, but in practice only a few PVT measurements are carried out for a given mixture. Therefore, it is necessary to use a thermodynamic model when planning production strategies for a given petroleum reservoir. This raises the question of what accuracy can be obtained using a cubic equation of state for phase equilibrium calculations, for example at conditions in which oil and gas are being produced. The only way to improve the agreement between measured and calculated results is to adjust the equation of state parameters. Currently, there is no clear methodology for making these modifications. The objective of this study is to investigate the best tuning to describe the PVT experimental data: differential liberation, constant composition expansion and the swelling test. The following programs were used: SPECS and MI-PVT (Technical University of Denmark) and WinProp (Windows version of CMGPROP). The Soave-Redlich-Kwong equation of state was used. Experimental data for six oil samples from the Reconcavo Basin (Bahia, Brazil) were obtained at the CEPGN (Study Center on Oil and Natural Gas at UNIFACS) and used in the tuning. (author)

  1. BirdsEyeView (BEV): graphical overviews of experimental data

    Directory of Open Access Journals (Sweden)

    Zhang Lifeng

    2012-09-01

    Background: Analyzing global experimental data can be tedious and time-consuming. Thus, helping biologists see results as quickly and easily as possible can facilitate biological research, and this is the purpose of the software we describe. Results: We present BirdsEyeView, a software system for visualizing experimental transcriptomic data using different views that users can switch among and compare. BirdsEyeView graphically maps data to three views: Cellular Map (currently a plant cell), Pathway Tree with dynamic mapping, and Gene Ontology (http://www.geneontology.org) Biological Processes and Molecular Functions. By displaying color-coded values for transcript levels across different views, BirdsEyeView can assist users in developing hypotheses about their experimental results. Conclusions: BirdsEyeView is a software system available as a Java Webstart package for visualizing transcriptomic data in the context of different biological views to assist biologists in investigating experimental results. BirdsEyeView can be obtained from http://metnetdb.org/MetNet_BirdsEyeView.htm.

  2. Considerations Related to Interpolation of Experimental Data Using Piecewise Functions

    Directory of Open Access Journals (Sweden)

    Stelian Alaci

    2016-12-01

    The paper presents a method for interpolating experimental data by means of a piecewise function, in which the points where the form of the function changes are found simultaneously with the other parameters used in an optimization criterion. The optimization process is based on defining the interpolation function as a single expression built on the Heaviside function, and on regarding the optimization function as a generalised, infinitely derivable function. The methodology is exemplified via a tangible example.
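    A minimal sketch of such a Heaviside-based piecewise fit, with the breakpoint treated as a free parameter found simultaneously with the other coefficients. SciPy's generic least-squares fitter stands in for the paper's optimization criterion; the synthetic data and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def piecewise(x, a, b, c, d):
    # One single expression built on the Heaviside step, in the spirit of
    # the paper: the slope changes by d at the fitted breakpoint c.
    return a + b * x + d * (x - c) * np.heaviside(x - c, 0.5)

# Synthetic data (illustrative): slope changes from 1 to 3 at x = 2.
x = np.linspace(0.0, 4.0, 41)
y = np.where(x < 2.0, x, 2.0 + 3.0 * (x - 2.0))

# The breakpoint c is fitted simultaneously with a, b, and d.
popt, _ = curve_fit(piecewise, x, y, p0=[0.1, 1.2, 1.8, 1.5])
```

    Writing the function as one expression keeps the breakpoint differentiable-in-practice for the optimizer, rather than switching between separate branch formulas.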

  3. Systematic integration of experimental data and models in systems biology

    Directory of Open Access Journals (Sweden)

    Simeonidis Evangelos

    2010-11-01

    Background: The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models is dependent upon data integration processes involving the interoperation of data and analytical resources. Results: Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories: the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Conclusions: Distributed information about metabolic reactions that has been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.

  4. An Experimental Metagenome Data Management and Analysis System

    Energy Technology Data Exchange (ETDEWEB)

    Markowitz, Victor M.; Korzeniewski, Frank; Palaniappan, Krishna; Szeto, Ernest; Ivanova, Natalia N.; Kyrpides, Nikos C.; Hugenholtz, Philip

    2006-03-01

    The application of shotgun sequencing to environmental samples has revealed a new universe of microbial community genomes (metagenomes) involving previously uncultured organisms. Metagenome analysis, which is expected to provide a comprehensive picture of the gene functions and metabolic capacity of a microbial community, needs to be conducted in the context of a comprehensive data management and analysis system. We present in this paper IMG/M, an experimental metagenome data management and analysis system that is based on the Integrated Microbial Genomes (IMG) system. IMG/M provides tools and viewers for analyzing both metagenomes and isolate genomes individually or in a comparative context.

  5. Testability of evolutionary game dynamics based on experimental economics data

    Science.gov (United States)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experimental data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, and is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
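    A toy illustration of one of the measurement variables mentioned above, the mean angular momentum of an observed state trajectory. The 2-D projection of the strategy simplex and all names are my assumptions, not the authors' code.

```python
import numpy as np

def mean_angular_momentum(traj, center=None):
    """Mean angular momentum of a 2-D state trajectory about a reference
    point, L = <(s_t - c) x (s_{t+1} - s_t)>; a persistently nonzero L
    signals cycling around the center. Names are illustrative."""
    traj = np.asarray(traj, float)
    c = traj.mean(axis=0) if center is None else np.asarray(center, float)
    r = traj[:-1] - c                  # position relative to the center
    v = np.diff(traj, axis=0)          # per-step displacement (velocity)
    # z-component of the 2-D cross product r x v, averaged over steps
    return float(np.mean(r[:, 0] * v[:, 1] - r[:, 1] * v[:, 0]))
```

    For a trajectory cycling counter-clockwise the statistic is positive, for clockwise cycling negative, and for pattern-free noise it averages toward zero, which is what makes it usable as a goodness-of-fit variable.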

  6. Study on analyses of experimental data at DCA

    Energy Technology Data Exchange (ETDEWEB)

    Min, Byung Joo; Suk, Ho Chun; Hazama, T. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-11-01

    In this report, the lattice characteristics of DCA are calculated to validate the WIMS-AECL code for the lattice analysis of the CANDU core, using experimental data from DCA and JNC. These results are compared with those of the WIMS-ATR code. Analytical studies of some critical experiments had been performed to analyze the effects of fuel composition and lattice pitch. Different items of reactor physics such as the local power peaking factor (LPF), effective multiplication factor (Keff) and coolant void reactivity were calculated for two coolant void fractions (0% and 100%). LPFs calculated by the WIMS-ATR code are in close agreement with the experimental results. LPFs calculated by the WIMS-AECL code with the WINFRITH and ENDF/B-V libraries have similar values for both libraries, but the differences between the experimental data and the results of the WIMS-AECL code are larger than those of the WIMS-ATR code. The maximum difference between the LPF values calculated by WIMS-ATR and the experimental values is within 1.3%. The LPF of the Pu fuel cluster is found to be higher than that of the uranium fuel cluster. The coupled code systems WIMS-ATR and CITATION used in this analysis predict Keff within 1% ΔK and coolant void reactivity within 4% ΔK/K in all cases. The coolant void reactivity of uranium fuel is found to be positive for two lattice pitches (25.0 and 28.3 cm). The presence of plutonium fuel makes it more negative compared to uranium fuel. To validate the WIMS-AECL code, the core characteristics of DCA shall be calculated by the WIMS-AECL and CITATION codes in the future. 8 refs., 8 figs., 12 tabs. (Author)

  7. Comparison of ATHENA/RELAP results against ice experimental data

    CERN Document Server

    Moore, Richard L.

    2002-01-01

    In order to demonstrate the adequacy of the International Thermonuclear Experimental Reactor design from a safety standpoint, as well as to investigate the behavior of two-phase flow phenomena during an ingress-of-coolant event, an integrated ICE test facility was constructed in Japan. The data generated from the ICE facility offer a valuable means to validate computer codes such as ATHENA/RELAP5, which is one of the codes used at the Idaho National Engineering and Environmental Laboratory (INEEL) to evaluate the safety of various fusion reactor concepts. In this paper we compare numerical results generated by the ATHENA code with corresponding test data from the ICE facility. Overall we found good agreement between the test data and the predicted results.

  8. Comparison of ATHENA/RELAP results against ice experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Richard L. E-mail: rmm@inel.gov; Merrill, Brad J

    2002-12-01

    In order to demonstrate the adequacy of the International Thermonuclear Experimental Reactor design from a safety standpoint, as well as to investigate the behavior of two-phase flow phenomena during an ingress-of-coolant event, an integrated ICE test facility was constructed in Japan. The data generated from the ICE facility offer a valuable means to validate computer codes such as ATHENA/RELAP5, which is one of the codes used at the Idaho National Engineering and Environmental Laboratory (INEEL) to evaluate the safety of various fusion reactor concepts. In this paper we compare numerical results generated by the ATHENA code with corresponding test data from the ICE facility. Overall we found good agreement between the test data and the predicted results.

  9. Regression Model Optimization for the Analysis of Experimental Data

    Science.gov (United States)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain-gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
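    The search loop described above might be sketched as follows. This is a deliberately simplified exhaustive search: the hierarchy rule and the insignificant-term constraint are omitted, and the conditioning threshold and all names are illustrative, not the paper's implementation.

```python
import itertools
import numpy as np

def press_std(X, y):
    """Standard deviation of the PRESS residuals of a least-squares fit,
    computed from the hat matrix: e_i / (1 - h_ii)."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    e = y - H @ y
    return np.std(e / (1.0 - np.diag(H)), ddof=1)

def search_model(terms, y, max_cond=1e8):
    """Pick the term subset minimizing the PRESS-residual standard
    deviation; subsets whose design matrix is near-singular (checked
    via SVD) are rejected, echoing the abstract."""
    best, best_metric = None, np.inf
    names = list(terms)
    for k in range(1, len(names) + 1):
        for combo in itertools.combinations(names, k):
            X = np.column_stack([terms[n] for n in combo])
            s = np.linalg.svd(X, compute_uv=False)
            if s[0] / max(s[-1], 1e-300) > max_cond:
                continue  # singular or near-linear dependency: reject
            metric = press_std(X, y)
            if metric < best_metric:
                best, best_metric = combo, metric
    return best, best_metric
```

    Using the PRESS (leave-one-out) residuals rather than the ordinary residuals penalizes overfitted term combinations, which is why it works as a predictive-capability metric.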

  10. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.

  11. Radiation environment at LEO orbits: MC simulation and experimental data.

    Science.gov (United States)

    Zanini, Alba; Borla, Oscar; Damasso, Mario; Falzetta, Giuseppe

    The evaluation of the different components of the radiation environment in spacecraft, both in LEO orbits and in deep space, is of great importance because the biological effect on humans and the risk for instrumentation strongly depend on the kind of radiation (high or low LET). This is especially important in view of long-term manned or unmanned space missions (missions to Mars, solar system exploration). The study of the space radiation field is extremely complex and not completely solved to date. Given the complexity of the radiation field, an accurate dose evaluation should be considered an indispensable part of any space mission. Two simulation codes (MCNPX and GEANT4) have been used to assess the secondary radiation inside the FOTON M3 satellite and the ISS. The energy spectra of primary radiation at LEO orbits have been modelled by using various tools (SPENVIS, OMERE, CREME96), considering separately Van Allen protons, GCR protons and GCR alpha particles. These data are used as input for the two MC codes and transported inside the spacecraft. The results of the two calculation methods have been compared. Moreover, some experimental results previously obtained on the FOTON M3 satellite by using TLD, bubble dosimeters and a LIULIN detector are considered to check the performance of the two codes. Finally, the same experimental devices are at present collecting data on the ISS (ASI experiment BIOKIS-nDOSE) and at the end of the mission the results will be compared with the calculations.

  12. Prediction of sonic boom from experimental near-field overpressure data. Volume 2: Data base construction

    Science.gov (United States)

    Glatt, C. R.; Reiners, S. J.; Hague, D. S.

    1975-01-01

    A computerized method for storing, updating and augmenting experimentally determined overpressure signatures has been developed. A data base of pressure signatures for a shuttle type vehicle has been stored. The data base has been used for the prediction of sonic boom with the program described in Volume I.

  13. Evaluation of Algebraic Reynolds Stress Model Assumptions Using Experimental Data

    Science.gov (United States)

    Jyoti, B.; Ewing, D.; Matovic, D.

    1996-11-01

    The accuracy of Rodi's ASM assumption is examined by evaluating the terms in the Reynolds stress transport equation and their modelled counterparts. The basic model assumption,

    Dτ_ij/Dt + ∂T_ijl/∂x_l = (τ_ij/k)(Dk/Dt + ∂T_l/∂x_l)

    (Rodi W., ZAMM 56, pp. 219-221, 1976), can also be broken into two stronger assumptions: (1) Da_ij/Dt = 0 and (2) ∂T_ijl/∂x_l = (τ_ij/k)(∂T_l/∂x_l) (e.g. Taulbee D. B., Phys. of Fluids 4(11), pp. 2555-2561, 1992). Fu et al (Fu S., Huang P.G., Launder B.E. & Leschziner M.A., J. Fluid Eng. 110(2), pp. 216-221, 1988) examined the accuracy of Rodi's assumption using the results of RSM calculations of axisymmetric jets. Since the RSM results did not accurately predict the experimental results either, it may be useful to examine the basic ASM model assumptions using experimental data. The database of Hussein, Capp and George (J.F.M. 258, pp. 31-75, 1994) is sufficiently detailed to evaluate the terms of the Reynolds stress transport equations individually, thus allowing both Rodi's and the stronger assumptions to be tested. For this flow, the first assumption is well satisfied for all the components (including the shear stress \overline{uv}); however, the second assumption does not seem as well satisfied.
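    The stronger assumption Da_ij/Dt = 0 concerns the Reynolds-stress anisotropy tensor a_ij = τ_ij/k - (2/3)δ_ij. A minimal helper for evaluating it from measured Reynolds stresses (an illustrative sketch, not the authors' analysis code):

```python
import numpy as np

def anisotropy(tau):
    """Reynolds-stress anisotropy a_ij = tau_ij / k - (2/3) delta_ij,
    the quantity whose material derivative the stronger ASM assumption
    sets to zero. tau is the symmetric 3x3 Reynolds stress tensor."""
    k = 0.5 * np.trace(tau)                 # turbulent kinetic energy
    return np.asarray(tau, float) / k - (2.0 / 3.0) * np.eye(3)
```

    By construction a_ij is trace-free, so only its deviatoric components carry information about departure from isotropy.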

  14. Using Experimental Data to Test and Improve SUSY Theories

    CERN Document Server

    Wang, T

    2004-01-01

    There are several pieces of evidence that our world is described by a supersymmetric extension of the Standard Model. In this thesis, I assume this is the case and study how to use experimental data to test and improve supersymmetric standard models. Several experimental signatures and their implications are covered in this thesis: the result for the branching ratio of b → sγ is used to put constraints on SUSY models; the measured time-dependent CP asymmetry in the B → φKS process is used to test unification-scale models; the excess of positrons from cosmic rays helps us to test the properties of the Lightest Supersymmetric Particle and the Cold Dark Matter production mechanisms; the LEP Higgs search results are used to classify SUSY models; SUSY signatures at the Tevatron are used to distinguish different unification-scale models; by considering the μ problem, SUSY theories are improved. Due to the large unknown parameter space, all of the above inputs should be used ...

  15. Experimental data for groundwave propagation over cylindrical surfaces

    DEFF Research Database (Denmark)

    King, Ray J.; Cho, Se.; Jaggard, D.

    1974-01-01

    Experimental data for the fields of EM groundwaves propagating over cylindrical homogeneous paths and two-section mixed paths were obtained by microwave (4.765 GHz) modeling. The cylindrical surfaces, which have a radius of 20 λ0, closely approximate spherical surfaces insofar as groundwave propagation is concerned. The model is a curved tank which was constructed as a stratified combination of Plexiglas over distilled water, giving a predictable highly inductive surface impedance. Aluminum foil laid on the Plexiglas produced a nearly perfectly conducting surface wherever needed for the mixed... the boundary where the residue series converges poorly. It is concluded that if the constitutive electrical parameters of the earth are precisely known and constant, the theory can be reliably applied to LF and VLF groundwave propagation over the earth where the constraints are even less severe.

  16. Theoretical interpretation of experimental data from direct dark matter detection

    Energy Technology Data Exchange (ETDEWEB)

    Shan Chung-Lin

    2007-10-15

    I derive expressions that allow one to reconstruct the normalized one-dimensional velocity distribution function of halo WIMPs and to determine its moments from the recoil energy spectrum as well as from experimental data directly. The reconstruction of the velocity distribution function is further extended to take into account the annual modulation of the event rate. All these expressions are independent of the as yet unknown WIMP density near the Earth as well as of the WIMP-nucleus cross section. The only information about the nature of halo WIMPs which one needs is the WIMP mass. I also present a method for the determination of the WIMP mass by combining two (or more) experiments with different detector materials. This method is independent not only of the model of the Galactic halo but also of that of the WIMPs. (orig.)

  17. Universal Implicatures and Free Choice Effects: Experimental Data

    Directory of Open Access Journals (Sweden)

    Emmanuel Chemla

    2009-05-01

    Universal inferences like (i) have been taken as evidence for a local/syntactic treatment of scalar implicatures (i.e. theories where the enrichment of "some" into "some but not all" can happen sub-sententially): (i) Everybody read some of the books --> Everybody read [some but not all the books]. In this paper, I provide experimental evidence which casts doubt on this argument. The counter-argument relies on a new set of data involving free choice inferences (a sub-species of scalar implicatures) and negative counterparts of (i), namely sentences with the quantifier "no" instead of "every". The results show that the globalist account of scalar implicatures is incomplete (mainly because of free choice inferences) but that the distribution of universal inferences made available by the localist move remains incomplete as well (mainly because of the negative cases). doi:10.3765/sp.2.2

  18. Experimental data from an oxygen plant for Mars

    Science.gov (United States)

    Schallhorn, P. A.; Colvin, J.; Sridhar, K. R.; Ramohalli, Kumar

    1991-01-01

    Experimental data are presented on various aspects of the operation of a plant intended to produce oxygen from atmospheric carbon dioxide at any Martian site. A solid electrolytic cell is used in the tubular geometry. Anaerobic carbon dioxide is procured from a gas vendor and is used at a pressure of 1 bar (entrance to the plant). The variables include the voltage applied to the cell, the temperature of the cell, the current density, the flow rate and the duty cycle. The oxygen production rates have been consistent with the specifications of the cell manufacturer. It is projected that a plant weighing between 145 and 197 kg can be built to produce 10 kg of oxygen per day. Various operational characteristics, such as cell poisoning, carbon formation, nitrogen embrittlement, local current densities exceeding the breakdown potential for the cell material, reliability, and risk, are all being quantitatively assessed.

  19. Status of experimental data related to Be in ITER materials R and D data bank

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Shigeru [ITER Joint Central Team, Muenchen (Germany)

    1998-01-01

    To keep traceability of the many valuable raw data that were experimentally obtained in the ITER Technology R and D Tasks related to materials for in-vessel components (divertor, first wall, blanket, vacuum vessel, etc.), and to easily make the best use of these data in the ITER design activities, the "ITER Materials R and D Data Bank" has been built up with the use of Excel™ spreadsheets. The paper describes the status of experimental data collected in this data bank on thermo-mechanical properties of unirradiated and neutron-irradiated Be, on plasma-material interactions of Be, on mechanical properties of various kinds of Be/Cu joints (including plasma-sprayed Be), and on thermal fatigue tests of Be/Cu mock-ups. (author)

  20. Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis

    Science.gov (United States)

    Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.

    2017-06-01

    Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. Processing of the R&T spectra to extract the absorption spectra was conducted by two approximate methods (single pass analysis, SPA, and double pass analysis, DPA), followed by Eg evaluation through a linear fit of Tauc or Cody plots. Direct fitting of the R&T spectra through a Tauc-Lorentz oscillator model is used as comparison. Methods and data are discussed also in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach joined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
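    As an illustration of the Tauc-plot step mentioned above: for an indirect-gap material, (αhν)^(1/2) is linear in photon energy above Eg, and the extrapolated x-intercept of a linear fit gives Eg. The fit-window choice and all names are my assumptions, not the paper's procedure.

```python
import numpy as np

def tauc_bandgap(E, alpha, fit_window):
    """Eg from a Tauc plot for an indirect-gap material: fit a line to
    (alpha * E)**0.5 vs photon energy E inside fit_window (eV, chosen in
    the linear region) and return its x-intercept as Eg."""
    E = np.asarray(E, float)
    y = np.sqrt(np.asarray(alpha, float) * E)   # Tauc ordinate
    lo, hi = fit_window
    sel = (E >= lo) & (E <= hi)
    slope, intercept = np.polyfit(E[sel], y[sel], 1)
    return -intercept / slope                    # x-intercept = Eg (eV)
```

    The main practical sensitivity is the choice of the fit window: taking points too close to the absorption onset (or into the high-energy saturation) bends the "linear" region and shifts the extracted Eg.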

  1. Status of experimental data of proton-induced reactions for intermediate-energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Yukinobu; Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Yamano, Naoki; Fukahori, Tokio

    1998-11-01

    The present status of experimental data of proton-induced reactions is reviewed, with particular attention to total reaction cross section, elastic and inelastic scattering cross section, double-differential particle production cross section, isotope production cross section, and activation cross section. (author)

  2. Predicting subsurface uranium transport: Mechanistic modeling constrained by experimental data

    Science.gov (United States)

    Ottman, Michael; Schenkeveld, Walter D. C.; Kraemer, Stephan

    2017-04-01

    Depleted uranium (DU) munitions and their widespread use throughout conflict zones around the world pose a persistent health threat to the inhabitants of those areas long after the conclusion of active combat. However, little emphasis has been put on developing a comprehensive, quantitative tool for use in remediation and hazard-avoidance planning in a wide range of environments. In this context, we report experimental data on U interaction with soils and sediments. Here, we strive to improve existing risk assessment modeling paradigms by incorporating a variety of experimental data into a mechanistic U transport model for subsurface environments. Twenty different soils and sediments from a variety of environments were chosen to represent a range of geochemical parameters that are relevant to U transport. The parameters included pH, organic matter content, CaCO3, Fe content and speciation, and clay content. pH ranged from 3 to 10, organic matter content from 6 to 120 g kg-1, CaCO3 from 0 to 700 g kg-1, amorphous Fe content from 0.3 to 6 g kg-1 and clay content from 4 to 580 g kg-1. Sorption experiments were then performed, and linear isotherms were constructed. The sorption experiment results show that, among separate sets of sediments and soils, there is an inverse correlation of both soil pH and CaCO3 concentration with U sorptive affinity. The geological materials with the highest and lowest sorptive affinities for U differed in CaCO3 and organic matter concentrations, as well as in clay content and pH. In a further step, we are testing whether transport behavior in saturated porous media can be predicted based on adsorption isotherms and generic geochemical parameters, and comparing these modeling predictions with the results from column experiments. The comparison of these two data sets will examine whether U transport can be effectively predicted from reactive transport modeling that incorporates the generic geochemical parameters. This work will serve to show
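    A linear sorption isotherm like those mentioned above reduces to a single distribution coefficient Kd. A minimal fit sketch (the units and all names are illustrative, not from the study):

```python
import numpy as np

def fit_kd(c_aq, q_sorbed):
    """Distribution coefficient Kd from a linear sorption isotherm
    q = Kd * c, fitted by least squares through the origin. With c_aq in
    mg/L and q_sorbed in mg/kg, Kd comes out in L/kg."""
    c = np.asarray(c_aq, float)
    q = np.asarray(q_sorbed, float)
    return float(c @ q / (c @ c))
```

    In a reactive transport model, Kd then sets the retardation of U relative to the groundwater velocity, which is why a per-soil Kd regression against generic geochemical parameters (pH, CaCO3, etc.) is the natural bridge to prediction.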

  3. LHC experimental data from today's data challenges to the promise of tomorrow

    CERN Multimedia

    CERN. Geneva; Panzer-Steindel, Bernd; Rademakers, Fons

    2003-01-01

    The LHC experiments constitute a challenge in several disciplines of both High Energy Physics and Information Technologies. This is definitely the case for data acquisition, processing and analysis. This challenge has been addressed by many years of R&D activity during which prototypes of components or subsystems have been developed. This prototyping phase is now culminating in an evaluation of the prototypes in large-scale tests (the so-called "Data Challenges"). In a period of restricted funding, the expectation is to realize the LHC data acquisition and computing infrastructures by making extensive use of standard and commodity components. The lectures will start with a brief overview of the requirements of the LHC experiments in terms of data acquisition and computing. The different tasks of the experimental data chain will also be explained: data acquisition, selection, storage, processing and analysis. The major trends of the computing and networking industries will then be indicated with pa...

  4. An open-source data storage and visualization back end for experimental data.

    Science.gov (United States)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib

    2014-04-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.

  5. Drying kinetic of industrial cassava flour: Experimental data in view.

    Science.gov (United States)

    Odetunmibi, Oluwole A; Adejumo, Oluyemisi A; Oguntunde, Pelumi E; Okagbue, Hilary I; Adejumo, Adebowale O; Suleiman, Esivue A

    2017-12-01

    In this data article, laboratory experimental investigation results on drying kinetic properties are reported. The factors studied were drying temperature (T), drying air velocity (V) and dewatering time (Te); each factor had five levels, the experiment was replicated three times, and two outputs, drying rate and drying time, were observed. The experiment was conducted at the National Centre for Agricultural Mechanization (NCAM) over a period of eight months in 2014. Analysis of variance was carried out using a randomized complete block design with a factorial experiment on each of the outputs: drying rate and drying time of the industrial cassava flour. A clear picture of each of these outputs is provided separately using tables and figures. It was observed that all the main factors, as well as the two- and three-way interactions, are significant at the 5% level for both drying time and drying rate. This implies that the rate of drying grated unfermented cassava mash to produce industrial cassava flour depends on the dewatering time (which determines the initial moisture content), the drying temperature, the drying air velocity, and the combinations of these factors. It was also found that all the levels of each of these factors are significantly different from one another. In summary, drying time is a function of the dewatering time, which was responsible for the initial moisture content: the higher the initial moisture content, the longer the drying time, and vice versa. Likewise, the higher the drying temperature, the shorter the drying time, and vice versa. The effect of air velocity on the drying process was also significant: as velocity increases, the drying rate also increases, and vice versa. Finally, it can be deduced that the drying kinetics are influenced by these processing factors.
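    The qualitative relationships summarized above (higher temperature, shorter drying time) are often captured with a first-order thin-layer drying model, MR(t) = exp(-k t). The sketch below is illustrative only; the rate constants are hypothetical and not fitted to the article's data.

```python
import math

# First-order thin-layer drying model: moisture ratio MR(t) = exp(-k t).
def drying_time(k, target_mr=0.05):
    # Time to dry down to a target moisture ratio: t = -ln(MR) / k.
    return -math.log(target_mr) / k

# A higher drying-air temperature corresponds to a larger rate constant k,
# hence a shorter drying time (hypothetical k values).
t_low_temp = drying_time(k=0.2)
t_high_temp = drying_time(k=0.5)
print(t_high_temp < t_low_temp)  # True
```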

  6. Drying kinetic of industrial cassava flour: Experimental data in view

    Directory of Open Access Journals (Sweden)

    Oluwole A. Odetunmibi

    2017-12-01

    Full Text Available In this data article, laboratory experimental investigation results on drying kinetic properties are reported. The factors studied were drying temperature (T), drying air velocity (V) and dewatering time (Te); each factor had five levels, the experiment was replicated three times, and two outputs, drying rate and drying time, were observed. The experiment was conducted at the National Centre for Agricultural Mechanization (NCAM) over a period of eight months in 2014. Analysis of variance was carried out using a randomized complete block design with a factorial experiment on each of the outputs: drying rate and drying time of the industrial cassava flour. A clear picture of each of these outputs is provided separately using tables and figures. It was observed that all the main factors, as well as the two- and three-way interactions, are significant at the 5% level for both drying time and drying rate. This implies that the rate of drying grated unfermented cassava mash to produce industrial cassava flour depends on the dewatering time (which determines the initial moisture content), the drying temperature, the drying air velocity, and the combinations of these factors. It was also found that all the levels of each of these factors are significantly different from one another. In summary, drying time is a function of the dewatering time, which was responsible for the initial moisture content: the higher the initial moisture content, the longer the drying time, and vice versa. Likewise, the higher the drying temperature, the shorter the drying time, and vice versa. The effect of air velocity on the drying process was also significant: as velocity increases, the drying rate also increases, and vice versa. Finally, it can be deduced that the drying kinetics are influenced by these processing factors. Keywords: Drying rate, Drying time, Drying kinetic, Industrial cassava flour, Temperature, Velocity, Dewatering, Moisture

  7. Comparison of Laboratory Experimental Data to XBeach Numerical Model Output

    Science.gov (United States)

    Demirci, Ebru; Baykal, Cuneyt; Guler, Isikhan; Sogut, Erdinc

    2016-04-01

    generating data sets for testing and validation of sediment transport relationships for sand transport in the presence of waves and currents. In these series, there is no structure in the basin. The second and third series of experiments were designed to generate data sets for the development of tombolos in the lee of a detached 4-m-long rubble-mound breakwater placed 4 m from the initial shoreline. The fourth series of experiments was conducted to investigate tombolo development in the lee of a 4-m-long T-head groin with the head section in the same location as in the second and third tests. The fifth series of experiments was used to investigate tombolo development in the lee of a 3-m-long rubble-mound breakwater positioned 1.5 m offshore of the initial shoreline. In this study, the data collected from the above-mentioned five experiments are used to compare the experimental data with XBeach numerical model results, both for the "no-structure" and "with-structure" cases, regarding sediment transport relationships in the presence of waves and currents as well as the shoreline changes around the detached breakwater and the T-groin. The main purpose is to investigate the similarities and differences between the laboratory experimental data and the XBeach numerical model outputs for these five cases. References: Baykal, C., Sogut, E., Ergin, A., Guler, I., Ozyurt, G.T., Guler, G., and Dogan, G.G. (2015). Modelling Long Term Morphological Changes with XBeach: Case Study of Kızılırmak River Mouth, Turkey, European Geosciences Union, General Assembly 2015, Vienna, Austria, 12-17 April 2015. Gravens, M.B. and Wang, P. (2007). "Data report: Laboratory testing of longshore sand transport by waves and currents; morphology change behind headland structures." Technical Report, ERDC/CHL TR-07-8, Coastal and Hydraulics Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS. 
Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de

  8. Methods of experimental settlement of contradicting data in evaluated nuclear data libraries

    Directory of Open Access Journals (Sweden)

    V. A. Libman

    2016-12-01

    Full Text Available The latest versions of the evaluated nuclear data libraries (ENDLs) contain contradictory data on neutron cross sections. To resolve these contradictions, we propose a method of experimental verification based on the use of filtered neutron beams and subsequent measurement of appropriate samples. The basic idea of the method is to modify a suitable filtered neutron beam so that the differences between the neutron cross sections given by different ENDLs become measurable. The method is demonstrated with the example of cerium, whose total neutron cross section differs significantly among the latest versions of four ENDLs.

  9. An experimental data set for benchmarking 1-D, transient heat and moisture transfer models of hygroscopic building materials. Part II: Experimental, numerical and analytical data

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Prabal [Department of Mechanical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016 (India); Osanyintola, Olalekan F. [XXL Engineering Ltd., 101-807 Manning Road NE, Calgary, AB (Canada); Olutimayin, Stephen O.; Simonson, Carey J. [Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK (Canada)

    2007-12-15

    This paper presents the experimental results on spruce plywood and cellulose insulation using the transient moisture transfer (TMT) facility presented in Part I [P. Talukdar, S.O. Olutmayin, O.F. Osanyintola, C.J. Simonson, An experimental data set for benchmarking 1-D, transient heat and moisture transfer models of hygroscopic building materials-Part-I: experimental facility and property data, Int. J. Heat Mass Transfer, in press, doi:10.1016/j.ijheatmasstransfer.2007.03.026] of this paper. The temperature, relative humidity and moisture accumulation distributions within both materials are presented following different and repeated step changes in air humidity and different airflow Reynolds numbers above the materials. The experimental data are compared with numerical data, numerical sensitivity studies and analytical solutions to increase the confidence in the experimental data set. (author)

  10. Application of covariance analysis to feed/ ration experimental data ...

    African Journals Online (AJOL)

    Correlation and regression analyses were used to adjust for the covariate, the initial weight of the experimental birds. Fisher's F statistic for the straightforward analysis of variance (ANOVA) showed significant differences among the rations. With the ANOVA, the calculated F statistic was 4.025, with a probability of 0.0149.
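    Covariate adjustment of this kind can be sketched numerically: estimate the pooled within-group slope of the response on the covariate, then shift each group mean to the grand covariate mean. All numbers below are hypothetical and not the article's data.

```python
import numpy as np

# Hypothetical ANCOVA-style adjustment: final weights y, covariate x
# (initial weight), for birds on two rations (group 0 and group 1).
x = np.array([40., 42., 45., 41., 44., 46.])
y = np.array([90., 95., 99., 99., 104., 109.])
group = np.array([0, 0, 0, 1, 1, 1])  # ration labels

# Pooled within-group slope of y on x.
num = sum(((x[group == g] - x[group == g].mean())
           * (y[group == g] - y[group == g].mean())).sum() for g in (0, 1))
den = sum(((x[group == g] - x[group == g].mean()) ** 2).sum() for g in (0, 1))
b = num / den

# Covariate-adjusted group means: mean(y) - b * (mean(x) - grand mean of x).
adj = [y[group == g].mean() - b * (x[group == g].mean() - x.mean())
       for g in (0, 1)]
print([round(a, 1) for a in adj])  # [95.9, 102.8]
```

    Note how the adjusted difference between rations is smaller than the raw difference, because part of the raw gap is explained by unequal initial weights.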

  11. Towards Coordination Patterns for Complex Experimentations in Data Mining

    NARCIS (Netherlands)

    F. Arbab (Farhad); C. Diamantini (Claudia); D. Potena (Domenico); E. Storti (Emanuele)

    2010-01-01

    In order to support the management of experimental activities in a networked scientific community, the exploitation of the service-oriented paradigm and technologies is a hot research topic in E-science. In particular, scientific workflows can be modeled by resorting to the notion of process.

  12. Review of nuclear physics experimental data for space radiation.

    Science.gov (United States)

    Norbury, John W; Miller, Jack

    2012-11-01

    The available nuclear fragmentation data relevant to space radiation studies are reviewed. It is found that there are serious gaps in the data. Helium data are missing in the intervals 280 MeV/n to 3 GeV/n and >15 GeV/n. Carbon data are missing >15 GeV/n. Iron projectile data are missing at all energies except in the interval 280 MeV/n to 3 GeV/n.

  13. Management, Analysis, and Visualization of Experimental and Observational Data -- The Convergence of Data and Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Greenwald, Martin; Kleese van Dam, Kersten; Parashar, Manish; Wild, Stefan, M.; Wiley, H. Steven

    2016-10-27

    Scientific user facilities---particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more---operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever-increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time, there is a growing community of experimentalists who require real-time data analysis feedback to steer their complex experimental instruments toward optimized scientific outcomes and new discoveries. Recent efforts in DOE-SC have focused on articulating the data-centric challenges and opportunities facing these science communities. Key challenges include difficulties coping with data size, rate, and complexity in the context of both real-time and post-experiment data analysis and interpretation. Solutions will require algorithmic and mathematical advances, as well as hardware and software infrastructures that adequately support data-intensive scientific workloads. This paper presents the summary findings of a workshop held by DOE-SC in September 2015, convened to identify the major challenges and the research needed to meet those challenges.

  14. Experimental QR code optical encryption: noise-free data recovering.

    Science.gov (United States)

    Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto

    2014-05-15

    We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure is applied to encrypt the QR code of the original information. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed using only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome far more attractive, making the adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.

  15. HIRENASD Experimental Data, Magnitude & Phase of Oscillatory Cp/displacement

    Data.gov (United States)

    National Aeronautics and Space Administration — OUTDATED information. This data was replaced April 2012. The originally posted HIRENASD reduced data contained numerous errors. Tecplot (ascii) and matlab files of...

  16. Modeling Aerobic Carbon Source Degradation Processes using Titrimetric Data and Combined Respirometric-Titrimetric Data: Experimental Data and Model Structure

    DEFF Research Database (Denmark)

    Gernaey, Krist; Petersen, B.; Nopens, I.

    2002-01-01

    Experimental data are presented that resulted from aerobic batch degradation experiments in activated sludge with simple carbon sources (acetate and dextrose) as substrates. Data collection was done using combined respirometric-titrimetric measurements. The respirometer consists of an open aerated ... (...: 1.33 meq/mmol) during substrate degradation. A model taking into account substrate uptake, CO2 production, and NH3 uptake for biomass growth is proposed to describe the aerobic degradation of a CxHyO2-type carbon source. Theoretical evaluation of this model for reference parameters showed ... that the proton effect due to aerobic substrate degradation is a function of the pH of the liquid phase. The proposed model could describe the experimental observations with both carbon sources. (C) 2002 Wiley Periodicals, Inc.

  17. Data collection and evaluation for experimental computer science research

    Science.gov (United States)

    Zelkowitz, Marvin V.

    1983-01-01

    The Software Engineering Laboratory has been monitoring software development at NASA Goddard Space Flight Center since 1976. The data collection activities of the Laboratory and some of the difficulties of obtaining reliable data are described. In addition, the application of this data collection process to a current prototyping experiment is reviewed.

  18. Difficulty Factors and Preprocessing in Imbalanced Data Sets: An Experimental Study on Artificial Data

    Directory of Open Access Journals (Sweden)

    Wojciechowski Szymon

    2017-06-01

    Full Text Available In this paper we describe the results of an experimental study in which we checked the impact of various difficulty factors in imbalanced data sets on the performance of selected classifiers, applied alone or combined with several preprocessing methods. In the study we used artificial data sets in order to systematically check factors such as dimensionality, class imbalance ratio, and the distribution of specific types of examples (safe, borderline, rare and outliers) in the minority class. The results revealed that the latter factor was the most critical one and that it exacerbated other factors (in particular class imbalance). The best classification performance was demonstrated by non-symbolic classifiers, particularly by k-NN classifiers (with 1 or 3 neighbors, i.e. 1NN and 3NN) and by SVM. Moreover, they benefited from different preprocessing methods: SVM and 1NN worked best with undersampling, while oversampling was more beneficial for 3NN.
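    One of the best-performing settings reported (1NN combined with undersampling) can be sketched in a few lines of NumPy; the two-class Gaussian data below are artificial stand-ins, not the study's data sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial imbalanced 2-D data: 100 majority points, 10 minority points.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(2.5, 1.0, size=(10, 2))])
y = np.array([0] * 100 + [1] * 10)

# Random undersampling: keep all minority examples and sample an equal
# number of majority examples, restoring a balanced training set.
maj_idx = rng.choice(np.flatnonzero(y == 0), size=(y == 1).sum(),
                     replace=False)
keep = np.concatenate([maj_idx, np.flatnonzero(y == 1)])
Xb, yb = X[keep], y[keep]

def knn1(train_X, train_y, q):
    # 1-NN: return the label of the closest training point to query q.
    d = ((train_X - q) ** 2).sum(axis=1)
    return train_y[np.argmin(d)]

print(knn1(Xb, yb, np.array([2.5, 2.5])))  # a query near the minority center
```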

  19. ANOVA parameters influence in LCF experimental data and simulation results

    Science.gov (United States)

    Delprete, C.; Sesanaa, R.; Vercelli, A.

    2010-06-01

    The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which account for the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy to apply and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. 
The stress, strain, thermal results from the thermo structural FEM

  20. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which account for the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy to apply and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. 
The stress, strain, thermal results from the thermo

  1. Increasing process understanding by analyzing complex interactions in experimental data

    DEFF Research Database (Denmark)

    Naelapaa, Kaisa; Allesø, Morten; Kristensen, Henning Gjelstrup

    2009-01-01

    of experimental results. In this study, experiments based on a mixed factorial design of a coating process were performed. Drug release was analyzed by traditional analysis of variance (ANOVA) and generalized multiplicative ANOVA (GEMANOVA). GEMANOVA modeling is introduced in this study as a new tool for increased ... understanding of a coating process. It was possible to model the response, that is, the amount of drug released, using both mentioned techniques. However, the ANOVA model was difficult to interpret as several interactions between process parameters existed. In contrast to ANOVA, GEMANOVA is especially suited

  2. Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data

    DEFF Research Database (Denmark)

    Zugal, Stefan; Pinggera, Jakob; Neurauter, Manuel

    2017-01-01

    –tracking devices led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP–Web, an open source platform for providing state-of-the-art data...

  3. Heart rate and sentiment experimental data with common timeline.

    Science.gov (United States)

    Salamon, Jaromír; Mouček, Roman

    2017-12-01

    Sentiment extraction and analysis using spoken utterances or written corpora, as well as the collection and analysis of human heart rate data using sensors, are commonly used techniques and methods. However, they have not yet been combined. The combined data can be used, for example, to investigate the mutual dependence of human physical and emotional activity. The paper describes the procedure of parallel acquisition of heart rate sensor data and tweets expressing sentiment, and the difficulties related to this procedure. The obtained datasets are described in detail and further discussed to provide as much information as possible for subsequent analyses and conclusions; analyses and conclusions themselves are not included in this paper. The presented experiment and the provided datasets serve as a first basis for further studies in which all four presented data sources can be used independently, combined in a reasonable way, or used all together. For instance, when the data are used together, studies comparing human sensor data, acquired noninvasively from the surface of the human body and considered more objective, with human written data expressing sentiment, which is at least partly cognitively interpreted and thus considered more subjective, could be beneficial.
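    Placing the two streams on a common timeline, as the record describes, amounts to a nearest-timestamp join. A minimal sketch using pandas, with invented sample values rather than the published dataset:

```python
import pandas as pd

# Hypothetical heart-rate samples (5 s cadence) and sentiment-scored tweets.
hr = pd.DataFrame({
    "t": pd.to_datetime(["2017-05-01 10:00:00", "2017-05-01 10:00:05",
                         "2017-05-01 10:00:10"]),
    "bpm": [72, 75, 74],
})
tweets = pd.DataFrame({
    "t": pd.to_datetime(["2017-05-01 10:00:04", "2017-05-01 10:00:09"]),
    "sentiment": [0.8, -0.3],
})

# Join each tweet with the nearest heart-rate sample within 3 seconds.
merged = pd.merge_asof(tweets, hr, on="t", direction="nearest",
                       tolerance=pd.Timedelta("3s"))
print(merged["bpm"].tolist())  # [75, 74]
```

    Both frames must be sorted by the join key; the tolerance keeps tweets without a nearby sensor reading from being matched to a stale value.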

  4. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, was investigated during the period 22-28 June 1993 from both experimental and modeling points of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes for estimating mixing layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed against radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the mixed layer depth predictions were compared with the values estimated by a simple box model, whose input requires hourly measurements of air concentrations and ground flux of {sup 222}Rn. (LN)

  5. Experimental Design Plant and Soil Measurement Data, Colorado Plateau, 2011

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These plant and soil data were collected by Timothy M. Wertin and Sasha C. Reed in the spring, summer, and fall of 2011 at a climate manipulation experiment site...

  6. Experimental study of digital image processing techniques for LANDSAT data

    Science.gov (United States)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
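    Of the two resampling techniques evaluated, nearest neighbor is the simplest to sketch; cubic convolution differs by interpolating over a neighborhood of pixels instead of picking the closest one. The implementation below is a generic illustration, not the processing code used in the study.

```python
import numpy as np

def resample_nearest(img, out_shape):
    # Map each output pixel to the closest input pixel along each axis.
    rows = np.round(np.linspace(0, img.shape[0] - 1, out_shape[0])).astype(int)
    cols = np.round(np.linspace(0, img.shape[1] - 1, out_shape[1])).astype(int)
    return img[np.ix_(rows, cols)]

img = np.arange(16).reshape(4, 4)   # toy 4x4 "scene"
out = resample_nearest(img, (2, 2))
print(out.tolist())  # [[0, 3], [12, 15]]
```

    Nearest neighbor preserves original radiometric values exactly (useful for classification), at the cost of blockier geometry than convolution-based kernels.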

  7. 40 CFR 158.270 - Experimental use permit data requirements for residue chemistry.

    Science.gov (United States)

    2010-07-01

    ... requirements for residue chemistry. 158.270 Section 158.270 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for residue chemistry. All residue chemistry data, as described in... section 408(r) is sought. Residue chemistry data are not required for an experimental use permit issued on...

  8. 40 CFR 158.210 - Experimental use permit data requirements for product chemistry.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit data requirements for product chemistry. 158.210 Section 158.210 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for product chemistry. All product chemistry data, as described in...

  9. 40 CFR 158.230 - Experimental use permit data requirements for toxicology.

    Science.gov (United States)

    2010-07-01

    ... requirements for toxicology. 158.230 Section 158.230 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Experimental Use Permits § 158.230 Experimental use permit data requirements for toxicology. All toxicology data, as described in paragraph (c) of...

  10. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
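    For the simplest of the statistics discussed, the mean, the interval bounds follow directly from endpoint-wise means; statistics such as variance bounds are substantially harder in general and require the algorithms the report describes. A minimal sketch with hypothetical interval-valued measurements:

```python
# Hypothetical interval-valued measurements [lo, hi]; descriptive
# statistics over such data are themselves intervals.
data = [(1.0, 2.0), (1.5, 2.5), (0.8, 3.0), (2.0, 2.2)]

# The interval mean: average the lower endpoints and the upper endpoints.
n = len(data)
mean_lo = sum(lo for lo, _ in data) / n
mean_hi = sum(hi for _, hi in data) / n
print((mean_lo, mean_hi))  # (1.325, 2.425)
```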

  11. Numerical differentiation of experimental data: local versus global methods

    Science.gov (United States)

    Ahnert, Karsten; Abel, Markus

    2007-11-01

    In the analysis of measured data, one is often faced with the task of differentiating data numerically. Typically, this occurs when measured data are processed or when data are evaluated numerically during the evolution of partial or ordinary differential equations. Usually, little care is taken over the accuracy of the resulting derivative estimates, because modern computers are assumed to be accurate to many digits. But measurements carry intrinsic errors that are often far less accurate than the precision of the machine used, and there is the effect of "loss of significance", well known in numerical mathematics and computational physics. The problem arises primarily in numerical subtraction, and the estimation of derivatives clearly involves the approximation of differences. In this article, we discuss several techniques for the estimation of derivatives. As a novel aspect, we divide them into local and global methods and explain their respective shortcomings. We have developed a general scheme for global methods, and illustrate our ideas with spline smoothing and spectral smoothing. The results from these less well-known techniques are compared with those from local methods; as typical representatives of the latter, we chose Savitzky-Golay filtering and finite differences. Two basic quantities are used to characterize the results: the variance of the difference between the true derivative and its estimate, and, as an important new characteristic, the smoothness of the estimate. We apply the different techniques to numerically generated data and demonstrate the application to data from an aeroacoustic experiment. As a result, we find that global methods are generally preferable when a smooth process is considered; for rough estimates, local methods work acceptably well.
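
    The local-versus-global distinction can be sketched as follows (illustrative code, not the authors' implementation; it assumes NumPy and SciPy are available). A Savitzky-Golay filter stands in for the local methods and a smoothing spline for the global ones, both differentiating noisy samples of sin(x):

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

# Noisy samples of sin(x); the true derivative is cos(x).
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.01, x.size)
dx = x[1] - x[0]

# Local method: Savitzky-Golay filter returning the first derivative.
d_local = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=dx)

# Global method: smoothing spline, differentiated analytically.
# s ~ m * sigma^2 is a standard heuristic for the smoothing parameter.
spline = UnivariateSpline(x, y, k=4, s=len(x) * 0.01**2)
d_global = spline.derivative()(x)

truth = np.cos(x)
for name, est in [("local", d_local), ("global", d_global)]:
    print(name, "RMS error:", np.sqrt(np.mean((est - truth) ** 2)))
```

    Both estimates track cos(x) closely here; the practical differences the article analyzes (smoothness of the estimate, behavior on rough signals) show up once the noise level or signal roughness grows.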

  12. Behavioral effects of lead: commonalities between experimental and epidemiologic data.

    Science.gov (United States)

    Rice, D C

    1996-04-01

    Enormous effort has been focused over the last decade and a half on characterizing the behavioral effects of lead in the developing organism. While age-appropriate standardized measures of intelligence (IQ) have been the dependent variable most often used to assess lead-induced cognitive impairment in epidemiologic studies, researchers have also used a variety of other methods designed to assess specific behavioral processes sensitive to lead. Increased reaction time and poorer performance on vigilance tasks associated with increased lead body burden suggest increased distractibility and short attention span. Assessment of behavior on teachers' rating scales identified increased distractibility, impulsivity, nonpersistence, inability to follow sequences of directions, and inappropriate approaches to problems as hallmarks of lead exposure. Robust deficits in learned skills such as reading, spelling, math, and word recognition have also been found. Spatial organizational perception and abilities seem particularly sensitive to lead-induced impairment. Assessment of complex learning and memory tasks in both rats and monkeys has revealed overall deficits in function over a variety of behavioral tasks. Exploration of the behavioral mechanisms responsible for these deficits identified increased distractibility, perseveration, inability to inhibit inappropriate responding, and inability to change response strategy as underlying deficits. Thus, there is remarkable congruence between the epidemiologic and experimental literatures with regard to the behavioral processes underlying the deficits inflicted by developmental lead exposure. However, careful behavioral analysis by researchers in both fields was required for such understanding to emerge.

  13. Systematic integration of experimental data and models in systems biology.

    NARCIS (Netherlands)

    Li, P.; Dada, J.O.; Jameson, D.; Spasic, I.; Swainston, N.; Carroll, K.; Dunn, W.; Khan, F.; Messiha, E.; Simeonides, E.; Weichart, D.; Winder, C.; Broomhead, D.S.; Goble, C.A.; Gaskell, S.J.; Kell, D.B.; Westerhoff, H.V.; Mendes, P.; Paton, N.W.

    2010-01-01

    Background: The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components and their biochemical reactions, and corresponding parameters. Automating

  14. AKK update. Improvements from new theoretical input and experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Albino, S.; Kniehl, B.A.; Kramer, G. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    2008-06-15

    We perform a number of improvements to the previous AKK extraction of fragmentation functions for π±, K±, p/p̄, K⁰S and Λ/Λ̄ particles at next-to-leading order. Inclusive hadron production measurements from pp(p̄) reactions at BRAHMS, CDF, PHENIX and STAR are added to the data sample. We use the charge-sign asymmetry of the produced hadrons in pp reactions to constrain the valence-quark fragmentations. Data from e⁺e⁻ reactions in regions of smaller x and lower √s are added. Hadron mass effects are treated for all observables and, for each particle, the hadron mass used for the description of the e⁺e⁻ reaction is fitted. The baryons' fitted masses are found to be only around 1% above their true masses, while the values of the mesons' fitted masses have the correct order of magnitude. Large-x resummation is applied in the coefficient functions of the e⁺e⁻ reactions, and also in the evolution of the fragmentation functions, which in most cases results in a significant reduction of the minimized χ². To further exploit the data, all published normalization errors are incorporated via a correlation matrix. (orig.)

  15. Dismal: a spreadsheet for sequential data analysis and HCI experimentation.

    Science.gov (United States)

    Ritter, Frank E; Wood, Alexander B

    2005-02-01

    Dismal is a spreadsheet that works within GNU Emacs, a widely available programmable editor. Dismal has three features of particular interest to those who study behavior: (1) the ability to manipulate and align sequential data, (2) an open architecture that allows users to expand it to meet their particular needs, and (3) an instrumented and accessible interface for studies of human-computer interaction (HCI). Example uses of each of these capabilities are provided, including cognitive models whose sequential behavior has been aligned with subjects' protocols, extensions useful for teaching and doing HCI design, and studies in which keystroke logs from the timing package in Dismal have been used.

  16. Manually Curated Database of Rice Proteins (MCDRP), a database of digitized experimental data on rice

    Directory of Open Access Journals (Sweden)

    Saurabh Raghuvanshi

    2016-11-01

    MCDRP, or 'Manually Curated Database of Rice Proteins', is a database of digitized experimental datasets on rice proteins. Every aspect of the experimental data published in peer-reviewed research articles on rice biology has been digitized with the help of novel data-curation models. These models use a semantic and structured arrangement of alphanumeric notation, including several well-known ontologies, to represent various aspects of the data. As a result, data from more than 15,000 different experiments pertaining to about 2,400 rice proteins have been digitized from over 540 published, peer-reviewed research articles. The database portal provides access to the digitized experimental data via search or browse functions. In essence, one can instantly access even a single data point from a collection of thousands of experimental datasets, or easily access the digitized experimental data from multiple research articles on a given rice protein. Based on the analysis and integration of the digitized experimental data, more than 800 different traits (molecular, biochemical or phenotypic) have been precisely mapped onto the rice proteins along with the underlying experimental evidence. Similarly, over 4,370 associations, based on experimental evidence, have been established between the rice proteins and various gene ontology terms. The database is being continuously updated and is freely available at www.genomeindia.org.in/biocuration.

  17. Comparing Simulated and Experimental Data from UCN τ

    Science.gov (United States)

    Howard, Dezrick; Holley, Adam

    2017-09-01

    The UCN τ experiment is designed to measure the average lifetime of a free neutron (τn) by trapping ultracold neutrons (UCN) in a magneto-gravitational trap and allowing them to β-decay, with the ultimate goal of reducing the uncertainty to approximately 0.01% (0.1 s). Understanding the systematics of the experiment at the level necessary to reach this high precision may help to resolve the disparity between measurements from cold-neutron-beam and UCN bottle experiments (τn ≈ 888 s and τn ≈ 878 s, respectively). To assist in evaluating systematics that might conceivably contribute at this level, a neutron spin-tracking Monte Carlo simulation, which models a UCN population's behavior throughout a run, is currently under development. The simulation will utilize an empirical map of the magnetic field in the trap (see poster by K. Hoffman) by interpolating the field between measured points (see poster by J. Felkins) in order to model the depolarization mechanism with high fidelity. As a preliminary step, I have checked that the Monte Carlo model can reasonably reproduce the observed behavior of the experiment. In particular, I will present a comparison between simulated data and data acquired from the 2016-2017 UCN τ run cycle.

  18. Germanium Isotopic Fractionation in Iron Meteorites : Comparison with Experimental Data

    Science.gov (United States)

    Luais, B.; Toplis, M.; Tissandier, L.; Roskosz, M.

    2009-05-01

    Magmatic and non-magmatic iron meteorites are thought to be formed, respectively, by processes of metal-silicate segregation and by complex impacts on undifferentiated parent bodies. These processes are inferred from variations in siderophile element concentrations, such as Ge, Ni and Ir. Germanium is moderately siderophile, with metal-silicate partition coefficients that depend on oxygen fugacity. Germanium is also moderately volatile, and fractionation would be expected during high-temperature processes. In order to investigate the extent of elemental and isotopic fractionation of germanium during metal-silicate equilibria and impact processes, we use a double approach including (1) Ge isotopic measurements of iron meteorites from non-magmatic and magmatic groups [1], and (2) experimental investigations of the isotopic fractionation associated with germanium transfer from an oxidized silicate liquid to a metallic phase under various fO2 conditions. Experiments were performed in a 1 atm vertical drop-quench furnace, with starting materials corresponding to a glass of 1 bar An-Di eutectic composition doped with ~4,000 ppm reference Ge standard, and pure Ni capsules as the metal phase. The assembly was heated at 1355°C for t = 2 to 60 hrs over a range of fO2 from 4 log units below to 2.5 log units above the IW buffer. Metal and silicate phases were then mechanically separated. For isotopic measurements, the metal phase of these experiments and the selected iron meteorites were dissolved in high-purity dilute nitric acid. Chemical purification of Ge and isotopic measurements using the Isoprobe MC-ICPMS follow Luais (2007). Germanium isotopic measurements of Fe meteorites show that δ74Ge values of magmatic irons are constant (δ74Ge = +1.77±0.22‰, 2σ) but heavier than those of non-magmatic irons (IAB: +1.15±0.2‰; IIE: −0.27 to +1.40±0.2‰). Time-series experiments at the IW buffer show a clear, continuous increase in δ74Ge in the metal as a function of time.

  19. PARALLEL ITERATIVE RECONSTRUCTION OF PHANTOM CATPHAN ON EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    M. A. Mirzavand

    2016-01-01

    The principles of fast parallel iterative algorithms based on the use of graphics accelerators and the OpenGL library are considered in the paper. The proposed approach provides simultaneous minimization of the residuals of the desired solution and of the total variation of the reconstructed three-dimensional image. The number of necessary input data, i.e. conical X-ray projections, can be reduced several times, which makes it possible to reduce the radiation exposure to the patient by a corresponding factor while maintaining the necessary contrast and spatial resolution of the three-dimensional image. The heuristic iterative algorithm can be used as an alternative to the well-known three-dimensional Feldkamp algorithm.
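
    The joint minimisation of residual and total variation (TV) can be illustrated on a toy 1-D problem (this is only a sketch of the general idea, not the paper's GPU cone-beam implementation; the operator, data and step sizes are invented). Each iteration takes a Landweber data-fidelity step followed by a small TV-reducing subgradient step:

```python
import numpy as np

# Toy 1-D analogue of iterative reconstruction with TV regularisation.
rng = np.random.default_rng(1)
n = 50
x_true = np.zeros(n)
x_true[15:35] = 1.0                          # piecewise-constant "image"
A = rng.normal(size=(80, n)) / np.sqrt(80)   # stand-in projection operator
b = A @ x_true + rng.normal(0, 0.01, 80)     # noisy "projections"

x = np.zeros(n)
lam, tv_step = 0.4, 0.002
for _ in range(500):
    x += lam * A.T @ (b - A @ x)             # Landweber data-fidelity step
    s = np.sign(np.diff(x))                  # subgradient of sum |x[i+1]-x[i]|
    g = np.zeros(n)
    g[:-1] -= s                              # d|x[i+1]-x[i]| / dx[i]
    g[1:] += s                               # d|x[i+1]-x[i]| / dx[i+1]
    x -= tv_step * g                         # TV-reducing step

print("mean abs error:", np.mean(np.abs(x - x_true)))
```

    The TV step rewards piecewise-constant reconstructions, which is what allows the projection count to be reduced in the real cone-beam setting.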

  20. Software for an Experimental Air-Ground Data Link : Volume 2. System Operation Manual

    Science.gov (United States)

    1975-10-01

    This report documents the complete software system developed for the Experimental Data Link System which was implemented for flight test during the Air-Ground Data Link Development Program (FAA-TSC- Project Number FA-13). The software development is ...

  1. 40 CFR 158.2084 - Experimental use permit biochemical pesticides nontarget organisms and environmental fate data...

    Science.gov (United States)

    2010-07-01

    ... pesticides nontarget organisms and environmental fate data requirements table. 158.2084 Section 158.2084 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Biochemical Pesticides § 158.2084 Experimental use permit biochemical pesticides...

  2. 40 CFR 158.2171 - Experimental use permit microbial pesticides product analysis data requirements table.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit microbial pesticides product analysis data requirements table. 158.2171 Section 158.2171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158.2171 Experimental use...

  3. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  4. Reduced carbonic fluid at magmatic PT conditions: new experimental data.

    Science.gov (United States)

    Simakin, Alexander; Salova, Tamara; Gabitov, Rinat; Isaenko, Sergey

    2017-04-01

    -55-12040. References. Simakin AG, Salova TP, Gabitov RI and Isaenko SI. Dry CO2-CO fluid as an important potential deep Earth solvent. Geofluids (2016, online). Simakin AG (2014) Peculiarities of the fluid composition in the dry C-O-S system at PT parameters of the low crust by the data of the thermodynamic modeling. Petrology, 22, 50-59.

  5. Experimental data and boundary conditions for a Double - Skin Facade building in preheating mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    Frequent discussions of double-skin façade energy performance have started a dialogue about the methods, models and tools for simulation of double-façade systems and the reliability of their results. Their reliability will increase with empirical validation of the software. Detailed experimental work … with all information about the experimental data and measurements necessary to complete an independent empirical validation of any simulation tool. The article includes detailed information about the experimental apparatus, experimental principles and the experimental full-scale test facility 'The Cube' …

  6. Experimental data and boundary conditions for a Double - Skin Facade building in transparent insulation mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    Frequent discussions of double-skin façade energy performance have started a dialogue about the methods, models and tools for simulation of double-façade systems and the reliability of their results. Their reliability will increase with empirical validation of the software. Detailed experimental work … with all information about the experimental data and measurements necessary to complete an independent empirical validation of any simulation tool. The article includes detailed information about the experimental apparatus, experimental principles and the experimental full-scale test facility 'The Cube' …

  7. Strategies for comparing LES and experimental data of urban turbulence

    Science.gov (United States)

    Hertwig, D.; Nguyen van yen, R.; Patnaik, G.; Leitl, B.

    2012-12-01

    Unsteady flow within and above built environments is an important example of the complex nature of near-surface atmospheric turbulence. Typically, obstacle-resolving micro-scale meteorological models based on the Reynolds-averaged conservation equations are adopted to investigate and predict the mean-states of urban flow phenomena. The rapid advancements in computer capacities, however, fostered the use of time-resolved approaches like large-eddy simulation (LES) for applications on the urban micro-scale. LES has the potential to provide a realistic picture of the spatio-temporal behavior of turbulent flows within and above the urban canopy layer, which cannot be easily achieved with classic in-situ micro-meteorological measurements. The further success of eddy-resolving techniques, however, is coupled to the critical assessment of the model performance in terms of a rigorous validation against suitable reference data. This task is particularly challenging with regard to the time-dependent nature of the problem and the need to verify whether the model predicts turbulence structures in a realistic way. In this study, a hierarchy of validation strategies for urban LES flow fields is formulated and systematically applied. The test case is neutrally stratified turbulent flow in the inner city of Hamburg, Germany. The LES computations were conducted by the U.S. Naval Research Laboratory on the basis of a monotone integrated LES methodology. Reference experiments in terms of single-point, high resolution time-series measurements were carried out in the boundary-layer wind-tunnel facility at the University of Hamburg. The wind-tunnel model was built on a scale of 1:350 and included building structures with a full-scale spatial resolution of 0.5 m. Benchmark parameters for the congruent representation of atmospheric inflow conditions in the physical and the numerical model were obtained from long-term sonic anemometer measurements at a suburban meteorological field site

  8. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    Science.gov (United States)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means of plasma heating. Experimental Data Publishing Software (EDPS) has been developed to publish experimental data so that the NBI system can be monitored remotely. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is based on the Browser/Server (B/S) model and works under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  9. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on the results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.
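
    The suggested analysis, a randomised complete block ANOVA, can be sketched as follows (hypothetical numbers and effect sizes, not the trial's data; it assumes NumPy and SciPy are available). Burn treatments are the factor of interest and plots are grouped into blocks:

```python
import numpy as np
from scipy import stats

# Randomised complete block ANOVA on invented data:
# t treatments (fire regimes) observed once in each of b blocks.
rng = np.random.default_rng(7)
t, b = 4, 5
treat_eff = np.array([0.0, 2.0, 4.0, 6.0])    # hypothetical treatment effects
block_eff = rng.normal(0, 1.0, b)             # hypothetical block effects
y = treat_eff[:, None] + block_eff[None, :] + rng.normal(0, 0.5, (t, b))

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
ss_treat = b * ((y.mean(axis=1) - grand) ** 2).sum()   # between treatments
ss_block = t * ((y.mean(axis=0) - grand) ** 2).sum()   # between blocks
ss_err = ss_total - ss_treat - ss_block                # residual

df_treat, df_err = t - 1, (t - 1) * (b - 1)
F = (ss_treat / df_treat) / (ss_err / df_err)
p = stats.f.sf(F, df_treat, df_err)
print(f"F({df_treat},{df_err}) = {F:.2f}, p = {p:.4f}")
```

    Removing the block sum of squares from the error term is exactly what blocking buys: treatment effects are tested against within-block variation only.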

  10. Stereochemical analysis of menthol and menthylamine isomers using calculated and experimental optical rotation data

    Science.gov (United States)

    Reinscheid, F.; Reinscheid, U. M.

    2016-01-01

    The complete series of menthol isomers and their corresponding amino derivatives (base and protonated/HCl forms) was investigated using experimental and theoretical data. Our study focused on conformational and configurational analysis, and revealed that experimental data should be used in combination with calculated data. Furthermore, even in the case of the most highly studied member, menthol, discrepancies were found among previously published literature values. We show that the correct determination of the population mix is a must for the correct prediction of the absolute configuration (AC) of neoisomenthol. The neoiso forms are of special interest since a number of structural inconsistencies can be found in the literature. We present a stringent proof of the AC of neoisomenthol based on literature information. To the best of our knowledge, the AC of neoisomenthylamine is shown here for the first time using experimental and calculated optical rotation data. A correction of a series of publications containing an important error in the assignment of (+)-menthylamine (correct: (+)-neomenthylamine) is presented. With 26 data pairs (experimental versus calculated) of optical rotation values, a regression is performed. The AC of all 12 compounds, even the most difficult neoiso forms, could be predicted correctly using experimental low-temperature NMR data. Furthermore, if only experimental data with an optical rotation outside the range of −10 to +10 are used, all 12 compounds would have been correctly assigned without low-temperature NMR data as restraints.
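
    The regression step can be sketched as follows (the six data pairs below are invented for illustration, not the paper's 26 pairs): experimental rotations are regressed on calculated ones, and an AC assignment is trusted only when the rotation lies outside roughly the −10 to +10 window where sign errors are likely.

```python
import numpy as np

# Hypothetical calculated vs experimental optical rotations (degrees).
calc = np.array([-48.2, -20.5, -3.1, 12.7, 25.9, 59.4])
expt = np.array([-50.1, -18.9, -2.2, 11.0, 27.3, 61.8])

slope, intercept = np.polyfit(calc, expt, 1)
r = np.corrcoef(calc, expt)[0, 1]
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.3f}")

# Assign the AC from the sign of the calculated rotation, but only for
# compounds whose rotation lies outside the unreliable [-10, +10] window.
reliable = np.abs(calc) > 10.0
agree = np.sign(calc[reliable]) == np.sign(expt[reliable])
print("sign agreement for |OR| > 10:", bool(agree.all()))
```

    A slope near one with a high correlation coefficient supports the quality of the calculated values, which is what makes the sign-based AC assignment defensible.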

  11. Procedure for statistical analysis of one-parameter discrepant experimental data.

    Science.gov (United States)

    Badikov, Sergey A; Chechev, Valery P

    2012-09-01

    A new, Mandel-Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
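
    The classic Mandel-Paule adjustment on which such procedures build can be sketched as follows (a generic textbook version, not the authors' exact procedure; the measurement values are invented). An extra variance y, representing unrecognized experimental errors, is increased until the weighted chi-square of the data equals its expected value N − 1:

```python
import numpy as np

def mandel_paule(x, u, tol=1e-10):
    """Weighted mean of values x with stated uncertainties u, inflating
    an extra between-measurement variance y (Mandel-Paule style) until
    the weighted chi-square equals N - 1."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    n = x.size

    def chi2(y):
        w = 1.0 / (u**2 + y)
        xbar = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - xbar) ** 2)

    if chi2(0.0) <= n - 1:            # data already consistent
        y = 0.0
    else:                             # chi2 decreases in y: bisect for root
        lo, hi = 0.0, np.var(x) + u.max() ** 2
        while chi2(hi) > n - 1:
            hi *= 2.0
        while hi - lo > tol * (1.0 + hi):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if chi2(mid) > n - 1 else (lo, mid)
        y = 0.5 * (lo + hi)
    w = 1.0 / (u**2 + y)
    mean = np.sum(w * x) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w)), np.sqrt(y)

# Discrepant half-life measurements (hypothetical values, in days):
mean, unc, s = mandel_paule([432.1, 433.9, 430.2, 435.5], [0.4, 0.5, 0.3, 0.6])
print(f"mean = {mean:.2f}, uncertainty = {unc:.2f}, extra s = {s:.2f}")
```

    For consistent data the extra variance collapses to zero and the result is the ordinary weighted average with its internal uncertainty, matching the limiting behaviour described in the abstract.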

  12. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    Science.gov (United States)

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.

  13. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the...

  14. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the Federal...

  15. 40 CFR 158.2083 - Experimental use permit biochemical pesticides human health assessment data requirements table.

    Science.gov (United States)

    2010-07-01

    ... pesticides human health assessment data requirements table. 158.2083 Section 158.2083 Protection of... Biochemical Pesticides § 158.2083 Experimental use permit biochemical pesticides human health assessment data... determine the human health assessment data requirements for a particular biochemical pesticide product. (2...

  16. 40 CFR 158.2173 - Experimental use permit microbial pesticides toxicology data requirements table.

    Science.gov (United States)

    2010-07-01

    ... pesticides toxicology data requirements table. 158.2173 Section 158.2173 Protection of Environment... Pesticides § 158.2173 Experimental use permit microbial pesticides toxicology data requirements table. (a...: (c) Table. The following table shows the data requirements for microbial pesticide toxicology. The...

  17. How to determine a boundary condition for diffusion at a thin membrane from experimental data

    Science.gov (United States)

    Kosztołowicz, Tadeusz; Wąsik, Sławomir; Lewandowska, Katarzyna D.

    2017-07-01

    We present a method of deriving a boundary condition for diffusion at a thin membrane from experimental data. Based on experimental results obtained for normal diffusion of ethanol in water, we show that the derived boundary condition at a membrane contains a term with a Riemann-Liouville fractional time derivative of order 1/2. Such a form of the boundary condition shows that the transfer of particles through a thin membrane is a "long-memory process." The presented method illustrates that an important part of the mathematical model of a physical process may be derived directly from experimental data.
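
    The fractional operator involved has the standard Riemann-Liouville definition; the membrane condition written below it is only a schematic illustration of where such a term can appear (the coefficients λ₁, λ₂ are placeholders, not the values derived in the paper):

```latex
% Riemann-Liouville fractional time derivative of order 1/2 (standard definition):
\frac{\partial^{1/2} f(t)}{\partial t^{1/2}}
  = \frac{1}{\Gamma(1/2)}\,\frac{\mathrm{d}}{\mathrm{d}t}
    \int_0^t \frac{f(t')}{\sqrt{t-t'}}\,\mathrm{d}t'

% Schematic membrane boundary condition containing such a term
% (lambda_1, lambda_2 are placeholder membrane parameters):
C(0^{+},t) = \lambda_1\, C(0^{-},t)
           + \lambda_2\, \frac{\partial^{1/2}}{\partial t^{1/2}}\, C(0^{-},t)
```

    The kernel (t − t′)^{−1/2} weights the entire history of the concentration, which is precisely why the abstract describes transfer through the membrane as a "long-memory process."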

  18. The importance of the accuracy of the experimental data for the prediction of solubility

    Directory of Open Access Journals (Sweden)

    SLAVICA ERIĆ

    2010-04-01

    Aqueous solubility is an important factor influencing several aspects of the pharmacokinetic profile of a drug. Numerous publications present different methodologies for the development of reliable computational models for the prediction of solubility from structure. The quality of such models can be significantly affected by the accuracy of the experimental solubility data employed. In this work, the importance of the accuracy of the experimental solubility data used for model training was investigated. Three data sets were used as training sets: data set 1, containing solubility data collected from various literature sources using a few criteria (n = 319); data set 2, created by substituting 28 values from data set 1 with uniformly determined experimental data from one laboratory (n = 319); and data set 3, created by adding to data set 2 a further 56 compounds for which the solubility was also determined under uniform conditions in the same laboratory (n = 375). The selection of the most significant descriptors was performed by the heuristic method, using one-parameter and multi-parameter analysis. The correlations between the most significant descriptors and solubility were established using multi-linear regression analysis (MLR) for all three investigated data sets. Notable differences were observed between the equations corresponding to the different data sets, suggesting that models updated with new experimental data need to be additionally optimized. It was successfully shown that the inclusion of uniform experimental data consistently leads to an improvement in the correlation coefficients. These findings contribute to an emerging consensus that improving the reliability of solubility prediction requires the inclusion in the data set of many diverse compounds for which solubility was measured under standardized conditions.

  19. BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Jones, Peter L

    2012-04-15

    Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model, implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators both to visualize data provenance and to explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases, such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.

  20. Preservation of thermalhydraulic and severe accident experimental data produced by the European Commission

    Energy Technology Data Exchange (ETDEWEB)

    Pla, Patricia; Ammirabile, Luca; Pascal, Ghislain; Annunziato, Alessandro [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; European Commission Joint Research Centre, Ispra (Italy). Inst. for the Protection and Security of the Citizen

    2014-07-15

    The experimental data recorded in Integral Effect Test Facilities (ITFs) are traditionally used to validate Best Estimate (BE) system codes and to investigate the behaviour of Nuclear Power Plants (NPPs) under accident scenarios. In the same way, facilities dedicated to specific thermalhydraulic (TH) Severe Accident (SA) phenomena are used for the development and improvement of the analytical models and codes used in SA analysis for Light Water Reactors (LWRs). The extent to which the existing reactor safety experimental databases are preserved has been frequently debated and questioned in the nuclear community. The Joint Research Centre (JRC) of the European Commission (EC) has been deeply involved in several projects for experimental data production and preservation. This paper presents these large EC initiatives on the production of experimental data and its storage in the JRC STRESA node. The objective is to further disseminate and promote the usage of the database containing these experimental data and to demonstrate the long-term importance of well-maintained experimental databases. At present, the Nuclear Reactor Safety Assessment Unit (NRSA) of the JRC Institute for Energy and Transport in Petten is engaged in the development of a new STRESA tool to provide secure EU storage for SA experimental data and calculations. The aim is to keep the main features of the existing STRESA structure while using the informatics technologies that are now available and providing new capabilities. The development of this new STRESA tool should be completed by the end of 2014. (orig.)

  1. EXFOR – a global experimental nuclear reaction data repository: Status and new developments

    Directory of Open Access Journals (Sweden)

    Semkova Valentina

    2017-01-01

    Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data assembled and incorporated within the EXFOR database. Recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.

  2. Application of the M6T Tracker to Simulated and Experimental Multistatic Sonar Data

    NARCIS (Netherlands)

    Theije, P.A.M. de; Kester, L.J.H.M.; Bergmans, J.

    2006-01-01

    This paper describes the first results of applying a multi-sensor multi-hypothesis tracker, called M6T, to simulated and experimental sonar data sets. The simulated data have been generated in the context of the Multistatic Tracking Working Group (MSTWG). For a number of cases (number of sensors and

  3. 40 CFR 158.2174 - Experimental use permit microbial pesticides nontarget organisms and environmental fate data...

    Science.gov (United States)

    2010-07-01

    ... pesticides nontarget organisms and environmental fate data requirements table. 158.2174 Section 158.2174 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158.2174 Experimental use permit microbial pesticides nontarget...

  4. Experimental data from a full-scale facility investigating radiant and convective terminals

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Jensen, Rasmus Lund

    The objective of this technical report is to provide information on the accuracy of the experiments performed in “the Cube” (part I, II and III). Moreover, this report lists the experimental data, which have been monitored in the test facility (part IV). These data are available online and can be...

  5. Determination of the angle of attack on the mexico rotor using experimental data

    DEFF Research Database (Denmark)

    Yang, Hua; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2010-01-01

    characteristics from experimental data on the MEXICO (Model Experiments in controlled Conditions) rotor. Detailed surface pressure and Particle Image Velocimetry (PIV) flow field at different rotor azimuth positions were examined for determining the sectional airfoil data. It is worthwhile noting that the present...

  6. 40 CFR 158.2080 - Experimental use permit data requirements-biochemical pesticides.

    Science.gov (United States)

    2010-07-01

    ... requirements-biochemical pesticides. 158.2080 Section 158.2080 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Biochemical Pesticides § 158.2080 Experimental use permit data requirements—biochemical pesticides. (a) Sections 158.2081...

  7. 40 CFR 158.2170 - Experimental use permit data requirements-microbial pesticides.

    Science.gov (United States)

    2010-07-01

    ... requirements-microbial pesticides. 158.2170 Section 158.2170 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158.2170 Experimental use permit data requirements—microbial pesticides. (a) For all microbial pesticides. (1) The...

  8. GeneLab Phase 2: Integrated Search Data Federation of Space Biology Experimental Data

    Science.gov (United States)

    Tran, P. B.; Berrios, D. C.; Gurram, M. M.; Hashim, J. C. M.; Raghunandan, S.; Lin, S. Y.; Le, T. Q.; Heher, D. M.; Thai, H. T.; Welch, J. D.; hide

    2016-01-01

    The GeneLab project is a science initiative to maximize the scientific return of omics data collected from spaceflight and from ground simulations of microgravity and radiation experiments, supported by a data system for a public bioinformatics repository and collaborative analysis tools for these data. The mission of GeneLab is to maximize the utilization of the valuable biological research resources aboard the ISS by collecting genomic, transcriptomic, proteomic and metabolomic (so-called omics) data to enable the exploration of the molecular network responses of terrestrial biology to space environments using a systems biology approach. All GeneLab data are made available to a worldwide network of researchers through its open-access data system. GeneLab is currently being developed by NASA to support Open Science biomedical research in order to enable the human exploration of space and improve life on Earth. Open access to Phase 1 of the GeneLab Data Systems (GLDS) was implemented in April 2015. Download volumes have grown steadily, mirroring the growth in curated space biology research data sets (61 as of June 2016), now exceeding 10 TB/month, with over 10,000 file downloads since the start of Phase 1. For the period April 2015 to May 2016, the most frequently downloaded were data from studies of Mus musculus (39%), followed closely by Arabidopsis thaliana (30%), with the remaining downloads roughly equally split across 12 other organisms (each <10% of total downloads). GLDS Phase 2 is focusing on interoperability, supporting data federation, including integrated search capabilities, of GLDS-housed data sets with external data sources, such as gene expression data from NIH/NCBI's Gene Expression Omnibus (GEO), proteomic data from EBI's PRIDE system, and metagenomic data from Argonne National Laboratory's MG-RAST. GEO and MG-RAST employ specifications for investigation metadata that are different from those used by the GLDS and PRIDE (e.g., ISA-Tab).
The GLDS Phase 2 system

  9. Radiation-induced heart disease: review of experimental data on dose response and pathogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Schultz-Hector, S. (Institut fuer Strahlenbiologie, Neuherberg (Germany))

    1992-02-01

    Clinical and experimental heart irradiation can cause a variety of sequelae. A single dose of ≥15 Gy leads to a reversible exudative pericarditis, occurring in dogs, rabbits or rats at around 100 days. Its time-course is very similar in all species investigated, but there are considerable species and strain differences in severity and incidence. After longer, dose-dependent latency times, chronic congestive myocardial failure develops. The paper reviews experimental data concerning dose response and pathogenesis. (author).

  10. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for a single-phase natural convection test in an experimental sodium loop. The test was conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility, an engineering-scale high temperature sodium loop at the Oak Ridge National Laboratory (ORNL) used for thermal-hydraulic testing of simulated Liquid Metal Fast Breeder Reactor (LMFBR) subassemblies at normal and off-normal operating conditions. Heat generation in the 19 pin assembly during the test was typical of decay heat levels. The test chosen for analysis in this paper was one of seven natural convection runs conducted in the facility using a variety of initial conditions and testing parameters. Specifically, in this test the bypass line was open to simulate a parallel heated assembly and the test was begun with a pump coastdown from a small initial forced flow. The computer program used to analyze the test, LONAC (LOw flow and NAtural Convection) is an ORNL-developed, fast-running, one-dimensional, single-phase, finite-difference model used for simulating forced and free convection transients in the THORS loop.

  11. Curation of Laboratory Experimental Data as Part of the Overall Data Lifecycle

    Directory of Open Access Journals (Sweden)

    Jeremy Frey

    2008-08-01

    The explosion in the production of scientific data in recent years is placing strains upon conventional systems supporting the integration, analysis, interpretation and dissemination of data, and is thus constraining the whole scientific process. Support for handling large quantities of diverse information can be provided by e-Science methodologies and the cyber-infrastructure that enables collaborative handling of such data. The whole process involved in scientific discovery needs to be taken into account, including the requirements of the users and consumers further down the information chain and what they might ideally prefer to impose on the generators of those data. As the degree of digital capture in the laboratory increases, it is possible to improve the automatic acquisition of the ‘context of the data’ as well as the data themselves. This process provides an opportunity for the data creators to ensure that many of the problems they often encounter in later stages are avoided. We wish to elevate curation to an operation to be considered by the laboratory scientist as part of good laboratory practice, not a procedure of concern merely to the few specialising in archival processes. Designing curation into experiments is an effective solution to the provision of high-quality metadata that leads to better, more re-usable data and to better science.

  12. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)]

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
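A minimal sketch of the convergence idea, assuming the area metric is taken as the area between the empirical CDFs of successive sample sizes (the paper's exact formulation may differ; the distribution, tolerance, and step size below are illustrative):

```python
import numpy as np

def ecdf(x, grid):
    # empirical CDF of sample x evaluated on a fixed grid
    return np.searchsorted(np.sort(x), grid, side="right") / len(x)

def area_metric(a, b, grid):
    # area enclosed between two empirical CDFs, via a Riemann sum on the grid
    diff = np.abs(ecdf(a, grid) - ecdf(b, grid))
    return diff.sum() * (grid[1] - grid[0])

def required_sample_size(data, tol=0.05, step=5):
    # grow the sample until adding `step` points no longer moves the CDF
    grid = np.linspace(data.min(), data.max(), 2001)
    n = step
    while n + step <= len(data):
        if area_metric(data[:n + step], data[:n], grid) < tol:
            return n + step
        n += step
    return len(data)

# synthetic stand-in for repeated measurements of a material property
rng = np.random.default_rng(0)
data = rng.normal(50.0, 5.0, 500)
n_needed = required_sample_size(data)
```

The returned `n_needed` plays the role of the "necessary and sufficient" sample size the abstract refers to.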

  13. A comparative study of processing simulated and experimental data in elastic laser light scattering.

    Science.gov (United States)

    Popovici, M A; Mincu, N; Popovici, A

    1999-03-15

    The intensity of the laser light scattered by a suspension of biological particles undergoing Brownian motion contains information about their size distribution function and optical properties. We used several methods (implemented in MathCAD programs), including a new one, to invert the Fredholm integral equation of the first kind, which represents the angular dependence of the elastic scattering of light. The algorithms were first tested on different sets of simulated data. Experimental data were obtained using biological samples and an experimental arrangement which are briefly described. We study the stability of the inversion procedures relative to the noise levels, and compute the first two moments of the retrieved size distribution function. A comparison of the results corresponding to simulated and experimental data is done, to select the best processing algorithm.
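As an illustration of the inversion problem this record addresses, here is a minimal Tikhonov-regularised inversion of a discretised Fredholm equation of the first kind; this is a generic textbook scheme, not the paper's own algorithm, and the kernel, grids, and regularisation parameter are all assumptions:

```python
import numpy as np

# Discretised Fredholm equation of the first kind: g = K f, where g is the
# angular scattering intensity and f the particle size distribution.
angles = np.linspace(0.1, np.pi, 60)
sizes = np.linspace(0.1, 5.0, 40)
K = np.exp(-np.outer(angles, sizes))            # smooth stand-in kernel
f_true = np.exp(-((sizes - 2.0) ** 2) / 0.5)    # assumed size distribution
g = K @ f_true                                  # noise-free "measurement"

# Tikhonov regularisation: minimise ||K f - g||^2 + lam * ||f||^2
lam = 1e-4
f_est = np.linalg.solve(K.T @ K + lam * np.eye(len(sizes)), K.T @ g)

# first moment of the retrieved distribution, as computed in the paper
mean_size = np.sum(sizes * f_est) / np.sum(f_est)
```

The stability study the abstract mentions amounts to repeating this inversion with noise added to `g` and varying `lam`.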

  14. The modENCODE Data Coordination Center: lessons in harvesting comprehensive experimental details

    OpenAIRE

    Washington, Nicole L; Stinson, E. O.; Perry, Marc D.; Ruzanov, Peter; Contrino, Sergio; Smith, Richard; Zha, Zheng; Lyne, Rachel; Carr, Adrian; Lloyd, Paul; Kephart, Ellen; McKay, Sheldon J.; Micklem, Gos; Stein, Lincoln D; Lewis, Suzanna E.

    2011-01-01

    The model organism Encyclopedia of DNA Elements (modENCODE) project is a National Human Genome Research Institute (NHGRI) initiative designed to characterize the genomes of Drosophila melanogaster and Caenorhabditis elegans. A Data Coordination Center (DCC) was created to collect, store and catalog modENCODE data. An effective DCC must gather, organize and provide all primary, interpreted and analyzed data, and ensure the community is supplied with the knowledge of the experimental conditions...

  15. APPLICATION OF LABVIEW DURING THE PROCESSING OF EXPERIMENTAL DATA BY STATISTICAL METHODS

    Directory of Open Access Journals (Sweden)

    Halyna V. Lutsenko

    2013-05-01

    The peculiarities that arise in training physics and engineering students in statistical methods and their practical use have been analyzed. The main types of problems that appear in the statistical processing of experimental data have been investigated. The use of LabVIEW to design a program module for estimating the statistical parameters of experimentally acquired data has been considered. The structure of the developed software, the set of main LabVIEW elements used, and the procedure of its creation with LabVIEW are described.

  16. Benchmark models and experimental data for a U(20) polyethylene-moderated critical system

    Energy Technology Data Exchange (ETDEWEB)

    Wetzel, Larry [Babcock & Wilcox Nuclear Operations Group Inc.]; Busch, Robert D. [University of New Mexico, Albuquerque]; Bowen, Douglas G. [ORNL]

    2015-01-01

    This work involves the analysis of recent experiments performed on the Aerojet General Nucleonics (AGN)-201M (AGN) polyethylene-moderated research reactor at the University of New Mexico (UNM). The experiments include 36 delayed critical (DC) configurations and 11 positive-period and rod-drop measurements (transient sequences). The Even Parity Neutron Transport (EVENT) radiation transport code was chosen to analyze these steady-state and time-dependent experimental configurations. The UNM AGN specifications provided in a benchmark calculation report (2007) were used to initiate AGN EVENT model development and to test the EVENT AGN calculation methodology. The results of the EVENT DC experimental analyses compared well with the experimental data; the average AGN EVENT calculation bias in keff is –0.0048% for the Legendre flux expansion order 11 (P11) cases and +0.0119% for the P13 cases. The EVENT transient analysis also compared well with the AGN experimental data with respect to predicting the reactor period and control rod worth values. This paper discusses the benchmark models used, the recent experimental configurations, and the EVENT experimental analysis.

  17. An Open-Source Data Storage and Visualization Back End for Experimental Data

    DEFF Research Database (Denmark)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert

    2014-01-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component f...

  18. The modENCODE Data Coordination Center: lessons in harvesting comprehensive experimental details.

    Science.gov (United States)

    Washington, Nicole L; Stinson, E O; Perry, Marc D; Ruzanov, Peter; Contrino, Sergio; Smith, Richard; Zha, Zheng; Lyne, Rachel; Carr, Adrian; Lloyd, Paul; Kephart, Ellen; McKay, Sheldon J; Micklem, Gos; Stein, Lincoln D; Lewis, Suzanna E

    2011-01-01

    The model organism Encyclopedia of DNA Elements (modENCODE) project is a National Human Genome Research Institute (NHGRI) initiative designed to characterize the genomes of Drosophila melanogaster and Caenorhabditis elegans. A Data Coordination Center (DCC) was created to collect, store and catalog modENCODE data. An effective DCC must gather, organize and provide all primary, interpreted and analyzed data, and ensure the community is supplied with the knowledge of the experimental conditions, protocols and verification checks used to generate each primary data set. We present here the design principles of the modENCODE DCC, and describe the ramifications of collecting thorough and deep metadata for describing experiments, including the use of a wiki for capturing protocol and reagent information, and the BIR-TAB specification for linking biological samples to experimental results. modENCODE data can be found at http://www.modencode.org.
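The sample-to-result linkage that BIR-TAB provides can be illustrated with a toy provenance check; the class and field names below are invented for illustration and do not reflect the actual BIR-TAB specification:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    sample_id: str
    organism: str
    protocol: str       # e.g. the wiki-captured protocol name

@dataclass
class Result:
    file_name: str
    sample_ids: list    # samples this data file was derived from

def unlinked_results(results, samples):
    # flag result files that do not trace back to a registered sample
    known = {s.sample_id for s in samples}
    return [r.file_name for r in results if not set(r.sample_ids) <= known]

samples = [Sample("S1", "D. melanogaster", "ChIP-seq v2")]
results = [Result("peaks.bed", ["S1"]), Result("orphan.bed", ["S9"])]
print(unlinked_results(results, samples))  # prints ['orphan.bed']
```

A check of this kind is one of the "verification checks" a DCC can run before accepting a submission.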

  19. Damage detection algorithms applied to experimental modal data from the I-40 Bridge

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, C.; Jauregui, D.

    1996-01-01

    Because the I-40 Bridges over the Rio Grande were to be razed during the summer of 1993, the investigators were able to introduce damage into the structure in order to test various damage identification methods. To support this research effort, New Mexico State University (NMSU) contracted Los Alamos National Laboratory (LANL) to perform experimental modal analyses, and to develop experimentally verified numerical models of the bridge. Previous reports (LA-12767-MS and LA-12979-MS) summarize the results of the experimental modal analyses and the results obtained from numerical modal analyses conducted with finite element models. This report summarizes the application of five damage identification algorithms reported in the technical literature to the previously reported experimental and numerical modal data.

  20. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization.

    Science.gov (United States)

    Morrell, William C; Birkel, Garrett W; Forrer, Mark; Lopez, Teresa; Backman, Tyler W H; Dussault, Michael; Petzold, Christopher J; Baidoo, Edward E K; Costello, Zak; Ando, David; Alonso-Gutierrez, Jorge; George, Kevin W; Mukhopadhyay, Aindrila; Vaino, Ian; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Garcia Martin, Hector

    2017-12-15

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  1. Using historical and experimental data to reveal warming effects on ant assemblages.

    Directory of Open Access Journals (Sweden)

    Julian Resasco

    Historical records of species are compared with current records to elucidate effects of recent climate change. However, confounding variables such as succession, land-use change, and species invasions make it difficult to demonstrate a causal link between changes in biota and changes in climate. Experiments that manipulate temperature can overcome this issue of attribution, but long-term impacts of warming are difficult to test directly. Here we combine historical and experimental data to explore effects of warming on ant assemblages in the southeastern US. Observational data span a 35-year period (1976-2011), during which mean annual temperatures had an increasing trend. Mean summer temperatures in 2010-2011 were ∼2.7 °C warmer than in 1976. Experimental data come from an ongoing study in the same region, for which temperatures have been increased ∼1.5-5.5 °C above ambient from 2010 to 2012. Ant species richness and evenness decreased under natural but not experimental warming. These discrepancies could have resulted from differences in timescales of warming, abiotic or biotic factors, or initial species pools. Species turnover tended to increase with temperature in both the observational and experimental datasets. At the species level, the observational and experimental datasets had four species in common, two of which exhibited consistent patterns between datasets. With natural and experimental warming, collections of the numerically dominant, thermophilic species Crematogaster lineolata increased roughly two-fold. Myrmecina americana, a relatively heat-intolerant species, decreased with temperature under natural and experimental warming. In contrast, species in the Solenopsis molesta group did not show consistent responses to warming, and Temnothorax pergandei was rare across temperatures. Our results highlight the difficulty of interpreting community responses to warming based on historical records or experiments alone. Because some

  2. Brain Radiation Information Data Exchange (BRIDE): integration of experimental data from low-dose ionising radiation research for pathway discovery.

    Science.gov (United States)

    Karapiperis, Christos; Kempf, Stefan J; Quintens, Roel; Azimzadeh, Omid; Vidal, Victoria Linares; Pazzaglia, Simonetta; Bazyka, Dimitry; Mastroberardino, Pier G; Scouras, Zacharias G; Tapio, Soile; Benotmane, Mohammed Abderrafi; Ouzounis, Christos A

    2016-05-11

    The underlying molecular processes representing stress responses to low-dose ionising radiation (LDIR) in mammals are just beginning to be understood. In particular, LDIR effects on the brain and their possible association with neurodegenerative disease are currently being explored using omics technologies. We describe a light-weight approach for the storage, analysis and distribution of relevant LDIR omics datasets. The data integration platform, called BRIDE, contains information from the literature as well as experimental information from transcriptomics and proteomics studies. It deploys a hybrid, distributed solution using both local storage and cloud technology. BRIDE can act as a knowledge broker for LDIR researchers, to facilitate molecular research on the systems biology of LDIR response in mammals. Its flexible design can capture a range of experimental information for genomics, epigenomics, transcriptomics, and proteomics. The data collection is available at: .

  3. Correction of Magnetic Optics and Beam Trajectory Using LOCO Based Algorithm with Expanded Experimental Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.

    2017-03-28

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.

  4. Evaluation of experimental data for wax and diamondoids solubility in gaseous systems

    DEFF Research Database (Denmark)

    Mohammadi, Amir H.; Gharagheizi, Farhad; Eslamimanesh, Ali

    2012-01-01

    of the residuals of two selected correlations results. In addition, the applicability domains of the investigated correlations and quality of the existing experimental data are examined accompanied by outlier diagnostics. Two previously applied Chrastil-type correlations including the original Chrastil and Mèndez...

  5. The Synthesis of Single-Subject Experimental Data: Extensions of the Basic Multilevel Model

    Science.gov (United States)

    Van den Noortgate, Wim; Moeyaert, Mariola; Ugille, Maaike; Beretvas, Tasha; Ferron, John

    2014-01-01

    Due to an increasing interest in the use of single-subject experimental designs (SSEDs), appropriate techniques are needed to analyze this type of data. The purpose of this paper proposal is to present four studies (Beretvas, Hembry, Van den Noortgate, & Ferron, 2013; Bunuan, Hembry & Beretvas, 2013; Moeyaert, Ugille, Ferron, Beretvas,…

  6. Experimental hydrogen-fueled automotive engine design data-base project. Volume 2. Main technical report

    Energy Technology Data Exchange (ETDEWEB)

    Swain, M.R.; Adt, R.R. Jr.; Pappas, J.M.

    1983-05-01

    Operational performance and emissions characteristics of hydrogen-fueled engines are reviewed. The project activities are reviewed including descriptions of the test engine and its components, the test apparatus, experimental techniques, experiments performed and the results obtained. Analyses of other hydrogen engine project data are also presented and compared with the results of the present effort.

  7. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, Christian; Gehrchen, P Martin; Kiaer, Thomas

    2008-01-01

    A finite element analysis-based bone remodeling study in human was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws....

  8. Experimental forests and ranges as a network for for long-term data

    Science.gov (United States)

    Martin Vavra; John Mitchell

    2010-01-01

    In the new millennium, national leaders and policymakers are facing profound issues regarding people and the environment. Experimental Forests and Ranges (EFRs), managed by the Forest Service, U.S. Department of Agriculture (USDA), form a network of locations amenable to the development of long-term data collection across many major ecosystems of the continental United...

  9. Upgrade of U.S. EPA's Experimental Stream Facility Supervisory Control and Data Acquisition System

    Science.gov (United States)

    The Supervisory control and data acquisition (SCADA) system for the U.S. EPA’s Experimental Stream Facility (ESF) was upgraded using Camile hardware and software in 2015. The upgrade added additional hardwired connections, new wireless capabilities, and included a complete rewrit...

  10. Optimal experimental design to estimate statistically significant periods of oscillations in time course data.

    Directory of Open Access Journals (Sweden)

    Márcio Mourão

    We investigated commonly used methods (Autocorrelation, Enright, and Discrete Fourier Transform) to estimate the periodicity of oscillatory data and determine which method most accurately estimated periods while being least vulnerable to the presence of noise. Both simulated and experimental data were used in the analysis performed. We determined the significance of calculated periods by applying these methods to several random permutations of the data and then calculating the probability of obtaining the period's peak in the corresponding periodograms. Our analysis suggests that the Enright method is the most accurate for estimating the period of oscillatory data. We further show that to accurately estimate the period of oscillatory data, it is necessary that at least five cycles of data are sampled, using at least four data points per cycle. These results suggest that the Enright method should be more widely applied in order to improve the analysis of oscillatory data.
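The permutation test described in this record can be sketched as follows; this uses a discrete Fourier transform periodogram (one of the three methods compared) rather than the Enright method the paper favours, and the signal, noise level, and permutation count are illustrative:

```python
import numpy as np

def dft_period(x, dt=1.0):
    # dominant period and peak power from the periodogram (DC term excluded)
    freqs = np.fft.rfftfreq(len(x), dt)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    k = np.argmax(power[1:]) + 1
    return 1.0 / freqs[k], power[k]

def permutation_pvalue(x, n_perm=200, seed=0):
    # probability of a spectral peak this large arising in shuffled data
    rng = np.random.default_rng(seed)
    _, obs = dft_period(x)
    exceed = sum(dft_period(rng.permutation(x))[1] >= obs
                 for _ in range(n_perm))
    return (1 + exceed) / (n_perm + 1)

# five cycles at eight points per cycle, matching the paper's recommendation
t = np.arange(40)
x = np.sin(2 * np.pi * t / 8) + 0.1 * np.random.default_rng(1).normal(size=40)
period, _ = dft_period(x)
p_value = permutation_pvalue(x)
```

Shuffling destroys the temporal structure while preserving the value distribution, so a peak that survives few or no permutations indicates a statistically significant period.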

  11. Summary Report of the Workshop on The Experimental Nuclear Reaction Data Database

    Energy Technology Data Exchange (ETDEWEB)

    Semkova, V. [IAEA Nuclear Data Section, Vienna (Austria)]; Pritychenko, B. [Brookhaven National Lab. (BNL), Upton, NY (United States)]

    2014-12-01

    The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here.

  12. Experimental Data Does Not Violate Bell's Inequality for "Right Kolmogorov Space''

    DEFF Research Database (Denmark)

    Fischer, Paul; Avis, David; Hilbert, Astrid

    2008-01-01

    of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent...... probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory....

  13. Chemometrics in analytical chemistry-part I: history, experimental design and data analysis tools.

    Science.gov (United States)

    Brereton, Richard G; Jansen, Jeroen; Lopes, João; Marini, Federico; Pomerantsev, Alexey; Rodionova, Oxana; Roger, Jean Michel; Walczak, Beata; Tauler, Romà

    2017-10-01

    Chemometrics has achieved major recognition and progress in the analytical chemistry field. In the first part of this tutorial, major achievements and contributions of chemometrics to some of the more important stages of the analytical process, like experimental design, sampling, and data analysis (including data pretreatment and fusion), are summarised. The tutorial is intended to give a general updated overview of the chemometrics field to further contribute to its dissemination and promotion in analytical chemistry.

  14. A stochastic method for asphaltene structure formulation from experimental data: avoidance of implausible structures.

    Science.gov (United States)

    De León, Jennifer; Velásquez, Ana M; Hoyos, Bibian A

    2017-04-12

This work presents a stochastic procedure designed to formulate a discrete set of molecular structures that, as a whole, properly fits experimental asphaltene data. This algorithm incorporates the pentane-effect concept and Clar's sextet rule into the formulation process. The set of viable structures was constructed based on probability distribution functions obtained from experimental information and an isomer database containing all plausible configurations for a given number of rings, avoiding high-energy structures. This procedure was applied to a collection of experimental data from the literature. Ten sets, consisting of 5000 structures each, were obtained. Each set was then optimized. For the most accurate representation, four molecules were sufficient to properly reproduce the experimental input. The asphaltene system obtained is consistent with the reported molecular weight, number of aromatic rings and heteroatom content. Molecular dynamics simulations showed that the asphaltene representation adequately reproduced asphaltene aggregation behavior in toluene and n-heptane. In toluene, a single three-molecule aggregate was observed, and the majority of asphaltene molecules remained in a monomeric state. In n-heptane, aggregates containing up to four molecules were observed; both porous and compact aggregates were found. The asphaltene molecular representation obtained, which allows researchers to avoid inappropriate torsions in the molecule, is able to reproduce interplanar distances between aromatic cores of 4 Å or less for the aggregation state, as supported by experimental results.
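The selection step, choosing a few candidate structures whose ensemble averages reproduce experimental targets, can be caricatured as a random search. The candidate pool, properties, targets, and function names below are fabricated for illustration; the authors' actual algorithm (pentane effect, Clar's sextet rule, isomer database) is not reproduced.

```python
import random

# hypothetical candidate pool: (molecular weight, aromatic ring count)
random.seed(0)
pool = [(random.gauss(750, 150), random.randint(4, 10)) for _ in range(5000)]

# invented experimental targets for the ensemble averages
TARGET_MW, TARGET_RINGS = 740.0, 7.0

def mismatch(subset):
    """Relative distance between subset averages and the targets."""
    n = len(subset)
    mw = sum(s[0] for s in subset) / n
    rings = sum(s[1] for s in subset) / n
    return abs(mw - TARGET_MW) / TARGET_MW + abs(rings - TARGET_RINGS) / TARGET_RINGS

def random_search(pool, size=4, trials=2000):
    """Pick the best small subset found over random draws."""
    best, best_err = None, float("inf")
    for _ in range(trials):
        cand = random.sample(pool, size)
        err = mismatch(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

rep, err = random_search(pool)   # four-molecule representation, as in the abstract
```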

  15. Intuitive web-based experimental design for high-throughput biomedical data.

    Science.gov (United States)

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.

  16. Intuitive Web-Based Experimental Design for High-Throughput Biomedical Data

    Directory of Open Access Journals (Sweden)

    Andreas Friedrich

    2015-01-01

Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.

  17. From experimental zoology to big data: Observation and integration in the study of animal development.

    Science.gov (United States)

    Bolker, Jessica; Brauckmann, Sabine

    2015-06-01

    The founding of the Journal of Experimental Zoology in 1904 was inspired by a widespread turn toward experimental biology in the 19th century. The founding editors sought to promote experimental, laboratory-based approaches, particularly in developmental biology. This agenda raised key practical and epistemological questions about how and where to study development: Does the environment matter? How do we know that a cell or embryo isolated to facilitate observation reveals normal developmental processes? How can we integrate descriptive and experimental data? R.G. Harrison, the journal's first editor, grappled with these questions in justifying his use of cell culture to study neural patterning. Others confronted them in different contexts: for example, F.B. Sumner insisted on the primacy of fieldwork in his studies on adaptation, but also performed breeding experiments using wild-collected animals. The work of Harrison, Sumner, and other early contributors exemplified both the power of new techniques, and the meticulous explanation of practice and epistemology that was marshaled to promote experimental approaches. A century later, experimentation is widely viewed as the standard way to study development; yet at the same time, cutting-edge "big data" projects are essentially descriptive, closer to natural history than to the approaches championed by Harrison et al. Thus, the original questions about how and where we can best learn about development are still with us. Examining their history can inform current efforts to incorporate data from experiment and description, lab and field, and a broad range of organisms and disciplines, into an integrated understanding of animal development. © 2015 Wiley Periodicals, Inc.

  18. PEDRo: A database for storing, searching and disseminating experimental proteomics data

    Directory of Open Access Journals (Sweden)

    Garwood Kevin

    2004-09-01

Abstract Background Proteomics is rapidly evolving into a high-throughput technology, in which substantial and systematic studies are conducted on samples from a wide range of physiological, developmental, or pathological conditions. Reference maps from 2D gels are widely circulated. However, there is, as yet, no formally accepted standard representation to support the sharing of proteomics data, and little systematic dissemination of comprehensive proteomic data sets. Results This paper describes the design, implementation and use of a Proteome Experimental Data Repository (PEDRo), which makes comprehensive proteomics data sets available for browsing, searching and downloading. It also serves to extend the debate on the level of detail at which proteomics data should be captured, the sorts of facilities that should be provided by proteome data management systems, and the techniques by which such facilities can be made available. Conclusions The PEDRo database provides access to a collection of comprehensive descriptions of experimental data sets in proteomics. Not only are these data sets interesting in and of themselves, they also provide a useful early validation of the PEDRo data model, which has served as a starting point for the ongoing standardisation activity through the Proteome Standards Initiative of the Human Proteome Organisation.

  19. The essential value of long-term experimental data for hydrology and water management

    Science.gov (United States)

    Tetzlaff, Doerthe; Carey, Sean K.; McNamara, James P.; Laudon, Hjalmar; Soulsby, Chris

    2017-04-01

    Observations and data from long-term experimental watersheds are the foundation of hydrology as a geoscience. They allow us to benchmark process understanding, observe trends and natural cycles, and are prerequisites for testing predictive models. Long-term experimental watersheds also are places where new measurement technologies are developed. These studies offer a crucial evidence base for understanding and managing the provision of clean water supplies, predicting and mitigating the effects of floods, and protecting ecosystem services provided by rivers and wetlands. They also show how to manage land and water in an integrated, sustainable way that reduces environmental and economic costs.

  20. Comparison of various structural damage tracking techniques based on experimental data

    Science.gov (United States)

    Huang, Hongwei; Yang, Jann N.; Zhou, Li

    2008-03-01

Damage identification is an important task of a structural health monitoring system. The ability to detect damage online, or nearly online, helps ensure the reliability and safety of structures. Analysis methodologies for structural damage identification based on measured vibration data have received considerable attention, including least-squares estimation (LSE), the extended Kalman filter (EKF), etc. Recently, new analysis methods, referred to as sequential non-linear least-squares estimation (SNLSE) and quadratic sum-squares error (QSSE), have been proposed for damage tracking of structures. In this paper, these newly proposed methods are compared with the LSE and EKF approaches, in terms of accuracy, convergence and efficiency, for damage identification of structures based on experimental data. A series of experimental tests using a small-scale 3-story building model has been conducted. In these tests, white-noise excitations were applied to the model, and different damage scenarios were simulated and tested. Here, the capability of the adaptive LSE, EKF, SNLSE and QSSE approaches in tracking structural damage is demonstrated using experimental data. The tracking results for the stiffness of all stories, based on each approach, are compared with the stiffness predicted by the finite-element method. The advantages and drawbacks of each damage tracking approach are evaluated in terms of accuracy, efficiency and practicality.
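To illustrate the least-squares idea behind these damage trackers, the sketch below estimates damping and stiffness of a hypothetical single-degree-of-freedom system over consecutive data windows and picks up a simulated stiffness drop. The system, its parameter values, and the function names are invented; the adaptive LSE, EKF, SNLSE and QSSE algorithms of the paper are not reproduced.

```python
import numpy as np

def windowed_lse_stiffness(t, x, v, a, f, m, window):
    """Least-squares estimates of damping c and stiffness k in
    m*a + c*v + k*x = f over consecutive windows of the response."""
    ks = []
    for s in range(0, len(t) - window + 1, window):
        sl = slice(s, s + window)
        A = np.column_stack([v[sl], x[sl]])   # unknowns: [c, k]
        b = f[sl] - m * a[sl]
        (c_hat, k_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
        ks.append(k_hat)
    return np.array(ks)

# simulated 1-DOF response with a stiffness drop at t = 5 s
m, c = 1.0, 2.0
t = np.linspace(0.0, 10.0, 2001)
k_true = np.where(t < 5.0, 1000.0, 700.0)
x = np.sin(2.0 * np.pi * t)                   # prescribed displacement
v = 2.0 * np.pi * np.cos(2.0 * np.pi * t)     # velocity
a = -(2.0 * np.pi) ** 2 * np.sin(2.0 * np.pi * t)  # acceleration
f = m * a + c * v + k_true * x                # consistent force record
ks = windowed_lse_stiffness(t, x, v, a, f, m, window=200)
```

The window-by-window stiffness track drops from 1000 to 700 across the damage event, which is the behavior the compared trackers aim to resolve in real time.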

  1. The evaluation of experimental data in the fast range for n + 56Fe(n,inl)

    Directory of Open Access Journals (Sweden)

    Qian Jing

    2017-01-01

Iron is one of the five materials selected for evaluation within the pilot international evaluation project CIELO. Analysis of experimental data for the n+56Fe reaction is the basis for constraining theoretical calculations and the eventual creation of the evaluated file. A detailed analysis was performed for inelastic cross sections of neutron-induced reactions with 56Fe in the fast range up to 20 MeV, where there are significant differences among the main evaluated libraries, caused mainly by the different inelastic scattering cross section measurements. Gamma-ray production cross sections provide a way to gain experimental information about the inelastic cross section. Large discrepancies between experimental data for the 847-keV gamma ray produced in the 56Fe(n,n1'γ) reaction were analyzed. In addition, experimental data for the elastic scattering cross section between 9.41 and 11 MeV were used to deduce the inelastic cross section from the unitarity constraint.

  2. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    Science.gov (United States)

    Carpenter, P. K.

    2005-01-01

Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental data and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors in the CuAu binary, using the certified compositions relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
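For reference, the binary alpha-factor relation is commonly written in the hyperbolic Ziebold-Ogilvie form C/k = α + (1 − α)C, linking weight fraction C to the measured intensity k-ratio. The sketch below inverts this relation with made-up (C, k) values, not the SRM 481/482 measurements, and the specific alpha-factor convention used by the author should be checked against the paper.

```python
def alpha_factor(C, k):
    """Binary alpha factor from weight fraction C and measured k-ratio,
    assuming the hyperbolic relation C/k = alpha + (1 - alpha)*C."""
    return (C / k - C) / (1.0 - C)

def k_ratio(C, alpha):
    """Invert the relation: predicted k-ratio for a given composition."""
    return C / (alpha + (1.0 - alpha) * C)

# made-up binary point: C = 0.4 with alpha = 1.2, then a round-trip check
k = k_ratio(0.4, 1.2)
a = alpha_factor(0.4, k)
```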

  3. Experimental design data for the biosynthesis of citric acid using Central Composite Design method.

    Science.gov (United States)

    Kola, Anand Kishore; Mekala, Mallaiah; Goli, Venkat Reddy

    2017-06-01

In the present investigation, we report the statistical design and optimization of significant variables for the microbial production of citric acid from sucrose in the presence of the filamentous fungus A. niger NCIM 705. Various combinations of experiments were designed with the Central Composite Design (CCD) of Response Surface Methodology (RSM) for the production of citric acid as a function of six variables: initial sucrose concentration, initial pH of the medium, fermentation temperature, incubation time, stirrer rotational speed, and oxygen flow rate. From the experimental data, a statistical model for this process has been developed. The optimum conditions reported in the present article are an initial sucrose concentration of 163.6 g/L, initial medium pH of 5.26, stirrer rotational speed of 247.78 rpm, incubation time of 8.18 days, fermentation temperature of 30.06 °C and oxygen flow rate of 1.35 lpm. Under optimum conditions the predicted maximum citric acid is 86.42 g/L. Experimental validation carried out under the optimal values gave a citric acid concentration of 82.0 g/L. The model is able to represent the experimental data, and the agreement between the model and the experimental data is good.

  4. Experimental design data for the biosynthesis of citric acid using Central Composite Design method

    Directory of Open Access Journals (Sweden)

    Anand Kishore Kola

    2017-06-01

In the present investigation, we report the statistical design and optimization of significant variables for the microbial production of citric acid from sucrose in the presence of the filamentous fungus A. niger NCIM 705. Various combinations of experiments were designed with the Central Composite Design (CCD) of Response Surface Methodology (RSM) for the production of citric acid as a function of six variables: initial sucrose concentration, initial pH of the medium, fermentation temperature, incubation time, stirrer rotational speed, and oxygen flow rate. From the experimental data, a statistical model for this process has been developed. The optimum conditions reported in the present article are an initial sucrose concentration of 163.6 g/L, initial medium pH of 5.26, stirrer rotational speed of 247.78 rpm, incubation time of 8.18 days, fermentation temperature of 30.06 °C and oxygen flow rate of 1.35 lpm. Under optimum conditions the predicted maximum citric acid is 86.42 g/L. Experimental validation carried out under the optimal values gave a citric acid concentration of 82.0 g/L. The model is able to represent the experimental data, and the agreement between the model and the experimental data is good.
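A minimal sketch of the CCD/RSM workflow, assuming two coded factors for brevity: fit a second-order polynomial to responses at the design points, then solve for the stationary point. The design responses and their optimum are invented for illustration; they are not the six-variable fermentation data of the article.

```python
import numpy as np

# central composite design for two coded factors:
# 2^2 factorial + axial points at alpha = sqrt(2) + a center point
alpha = np.sqrt(2.0)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
              [0, 0]], dtype=float)

# hypothetical response surface with its maximum at (0.4, -0.2) in coded units
y = 80.0 - 5.0 * (X[:, 0] - 0.4) ** 2 - 3.0 * (X[:, 1] + 0.2) ** 2

def fit_quadratic(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad = 0 for the fitted second-order surface."""
    _, b1, b2, b11, b22, b12 = beta
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(H, -np.array([b1, b2]))

beta = fit_quadratic(X, y)
opt = stationary_point(beta)   # recovers the coded optimum (0.4, -0.2)
```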

  5. STRAIN-CONTROLLED BIAXIAL TENSION OF NATURAL RUBBER: NEW EXPERIMENTAL DATA

    KAUST Repository

    Pancheri, Francesco Q.

    2014-03-01

We present a new experimental method and provide data showing the response of 40A natural rubber in uniaxial, pure shear, and biaxial tension. Real-time biaxial strain control allows for independent and automatic variation of the velocity of extension and retraction of each actuator to maintain the preselected deformation rate within the gage area of the specimen. We also focus on the Valanis-Landel hypothesis, which is used to verify and validate the consistency of the data. We use a three-term Ogden model to derive stress-stretch relations to validate the experimental data. The material model parameters are determined using the primary loading path in uniaxial and equibiaxial tension. Excellent agreement is found when the model is used to predict the response in biaxial tension for different maximum in-plane stretches. The application of the Valanis-Landel hypothesis also results in excellent agreement with the theoretical prediction.
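Under the common incompressible Ogden form W = Σ (μi/αi)(λ1^αi + λ2^αi + λ3^αi − 3), the uniaxial nominal stress works out to P(λ) = Σ μi (λ^(αi−1) − λ^(−αi/2−1)). The sketch below evaluates this relation with often-quoted illustrative rubber parameters, not the values identified in the paper, and other sign/normalization conventions of the Ogden model exist.

```python
import numpy as np

def ogden_uniaxial_nominal_stress(lam, mu, alpha):
    """Nominal (engineering) stress for incompressible uniaxial tension,
    principal stretches (lam, lam**-0.5, lam**-0.5)."""
    lam = np.asarray(lam, dtype=float)
    P = np.zeros_like(lam)
    for mu_i, a_i in zip(mu, alpha):
        P += mu_i * (lam ** (a_i - 1.0) - lam ** (-a_i / 2.0 - 1.0))
    return P

# illustrative three-term parameters for rubber (MPa); not the paper's fit
mu = [0.63, 0.0012, -0.01]
alpha = [1.3, 5.0, -2.0]
lam = np.linspace(1.0, 3.0, 9)
P = ogden_uniaxial_nominal_stress(lam, mu, alpha)
```

As a sanity check, the stress vanishes at λ = 1 and rises monotonically in tension for these parameters.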

  6. Automatic Revision of Metabolic Networks through Logical Analysis of Experimental Data

    Science.gov (United States)

    Ray, Oliver; Whelan, Ken; King, Ross

    This paper presents a nonmonotonic ILP approach for the automatic revision of metabolic networks through the logical analysis of experimental data. The method extends previous work in two respects: by suggesting revisions that involve both the addition and removal of information; and by suggesting revisions that involve combinations of gene functions, enzyme inhibitions, and metabolic reactions. Our proposal is based on a new declarative model of metabolism expressed in a nonmonotonic logic programming formalism. With respect to this model, a mixture of abductive and inductive inference is used to compute a set of minimal revisions needed to make a given network consistent with some observed data. In this way, we describe how a reasoning system called XHAIL was able to correctly revise a state-of-the-art metabolic pathway in the light of real-world experimental data acquired by an autonomous laboratory platform called the Robot Scientist.

  7. iLAP: a workflow-driven software for experimental protocol development, data acquisition and analysis

    Directory of Open Access Journals (Sweden)

    McNally James

    2009-01-01

Abstract Background In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way, and has developed laboratory information management systems (LIMS) for both raw and processed data. On the other hand, electronic notebooks were developed to record and manage scientific data and facilitate data sharing. Software that enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques. Results We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/. Conclusion iLAP is a flexible and versatile information management system which has the potential to close the gap between electronic notebooks and LIMS, and can therefore be of great value for a broad scientific community.

  8. Material response mechanisms are needed to obtain highly accurate experimental shock wave data

    Science.gov (United States)

    Forbes, Jerry W.

    2017-01-01

The field of shock wave compression of matter has provided a simple set of equations relating thermodynamic and kinematic parameters that describe the conservation of mass, momentum and energy across a steady plane shock wave with one-dimensional flow. Well-known condensed matter shock wave experimental results are reviewed to see whether the assumptions required for deriving these simple Rankine-Hugoniot (R-H) equations are satisfied. Note that a material compression model is not required for deriving the 1-D conservation flow equations across a steady plane shock front. However, this statement is misleading from a practical experimental viewpoint, since obtaining small systematic errors in shock wave measured parameters requires the material compression and release mechanisms to be known. A review is presented of errors in shock wave data from common experimental techniques for elastic-plastic solids. Issues related to the time scales of experiments, steady waves with long rise times, and detonations are also discussed.
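The steady planar jump conditions referred to above follow from the three conservation laws and can be evaluated directly from measured shock and particle velocities. The input numbers below are illustrative (roughly aluminum-like), not data from the paper.

```python
def rankine_hugoniot(rho0, P0, Us, up):
    """State behind a steady planar shock from the 1-D conservation laws.
    Returns (rho1, P1, delta_e), where delta_e is the jump in specific
    internal energy e1 - e0."""
    rho1 = rho0 * Us / (Us - up)                            # mass
    P1 = P0 + rho0 * Us * up                                # momentum
    delta_e = 0.5 * (P1 + P0) * (1.0 / rho0 - 1.0 / rho1)   # energy
    return rho1, P1, delta_e

# illustrative, roughly aluminum-like inputs in SI units
rho1, P1, de = rankine_hugoniot(rho0=2700.0, P0=0.0, Us=6000.0, up=500.0)
```

With P0 = 0 the energy jump reduces to up²/2, a handy internal consistency check on the three relations.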

  9. Some experimental observations of crack-tip mechanics with displacement data

    Directory of Open Access Journals (Sweden)

    M. Mokhtari

    2015-07-01

In the past two decades, crack-tip mechanics has been increasingly studied with full-field techniques. Among these techniques, Digital Image Correlation (DIC) has been most widely used, owing to its many advantages, to extract important crack-tip information, including the Stress Intensity Factor (SIF), Crack Opening Displacement, J-integral, T-stress, closure level, plastic zone size, etc. However, little information is given in the literature about the experimental setup that provides the best estimates of the different parameters. The current work aims at understanding how the experimental conditions used in DIC influence the crack-tip information extracted experimentally. The influence of parameters such as magnification factor, size of the images, position of the images with respect to the crack tip, and size of the subset used in the correlation is studied. The influence is studied in terms of SIF and T-stress by using Williams' model. The concept of determining the K-dominance zone from experimental data has also been explored. In this regard, cyclic loading was applied to a fatigue crack in a compact tension (CT) specimen, made of aluminium 2024-T351 alloy, and the surface deformation ahead of the crack tip was examined. The comparison between theoretical and experimental values of KI showed that the effect of subset size on the measured KI is negligible compared to the effect of the size of the image.
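To illustrate how a SIF can be extracted from full-field displacements with Williams' model, the sketch below fits KI to synthetic mode I vertical displacements using only the first Williams term under plane stress. The material constants, sampling grid, and noise level are invented, and the multi-parameter fit (including T-stress) used in the paper is not reproduced.

```python
import numpy as np

def williams_uy_factor(r, theta, E, nu):
    """Geometry factor g such that u_y = K_I * g for the first Williams
    term, mode I, plane stress (kappa = (3 - nu)/(1 + nu))."""
    mu = E / (2.0 * (1.0 + nu))
    kappa = (3.0 - nu) / (1.0 + nu)
    return (np.sqrt(r / (2.0 * np.pi)) / (2.0 * mu)
            * np.sin(theta / 2.0)
            * (kappa + 1.0 - 2.0 * np.cos(theta / 2.0) ** 2))

def fit_KI(r, theta, uy, E, nu):
    """Linear least-squares fit of K_I from vertical displacements."""
    g = williams_uy_factor(r, theta, E, nu)
    return float(g @ uy / (g @ g))

# synthetic near-tip field; Al-like constants, K_I in MPa*sqrt(m) (invented)
E, nu = 73000.0, 0.33          # MPa
K_true = 20.0                  # MPa*sqrt(m)
rng = np.random.default_rng(0)
r = rng.uniform(0.001, 0.01, 500)            # m
theta = rng.uniform(-np.pi, np.pi, 500)
uy = K_true * williams_uy_factor(r, theta, E, nu)
uy = uy + rng.normal(0.0, 1e-7, uy.size)     # measurement noise (m)
K_fit = fit_KI(r, theta, uy, E, nu)
```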

  10. Velo and REXAN - Integrated Data Management and High Speed Analysis for Experimental Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin; Carson, James P.; Corrigan, Abigail L.; Einstein, Daniel R.; Guillen, Zoe C.; Heath, Brandi S.; Kuprat, Andrew P.; Lanekoff, Ingela T.; Lansing, Carina S.; Laskin, Julia; Li, Dongsheng; Liu, Yan; Marshall, Matthew J.; Miller, Erin A.; Orr, Galya; Pinheiro da Silva, Paulo; Ryu, Seun; Szymanski, Craig J.; Thomas, Mathew

    2013-01-10

The Chemical Imaging Initiative at the Pacific Northwest National Laboratory (PNNL) is creating a ‘Rapid Experimental Analysis’ (REXAN) framework based on the concept of reusable component libraries. REXAN allows developers to quickly compose and customize high-throughput analysis pipelines for a range of experiments, as well as supporting the creation of multi-modal analysis pipelines. In addition, PNNL has coupled REXAN with its collaborative data management and analysis environment, Velo, to create easy-to-use data management and analysis environments for experimental facilities. This paper will discuss the benefits of Velo and REXAN in the context of three examples:
- PNNL high-resolution mass spectrometry: reducing analysis times from hours to seconds, while enabling the analysis of much larger data samples (100 KB to 40 GB)
- ALS X-ray tomography: reducing analysis times of combined STXM and EM data collected at the ALS from weeks to minutes, decreasing manual work and increasing the data volumes that can be analysed in a single step
- Multi-modal nano-scale analysis of STXM and TEM data: providing a semi-automated process for particle detection
The creation of REXAN has significantly shortened the development time for these analysis pipelines. The integration of Velo and REXAN has significantly increased the scientific productivity of the instruments and their users by creating easy-to-use data management and analysis environments with greatly reduced analysis times and improved analysis capabilities.

  11. THE ART OF COLLECTING EXPERIMENTAL DATA INTERNATIONALLY: EXFOR, CINDA AND THE NRDC NETWORK.

    Energy Technology Data Exchange (ETDEWEB)

    HENRIKSSON,H.; SCHWERER, O.; ROCHMAN, D.; MIKHAYLYUKOVA, M.V.; OTUKA, N.

    2007-04-22

The world-wide network of nuclear reaction data centers (NRDC) has, for about 40 years, provided data services to the scientific community. The network covers all types of nuclear reaction data, including neutron-induced, charged-particle-induced, and photonuclear data, used in a wide range of applications such as fission reactors, accelerator-driven systems, fusion facilities, nuclear medicine, materials analysis, environmental monitoring, and basic research. The 13 nuclear data centers that now make up the NRDC divide the work of compilation and distribution by reaction type and/or geographic region. A central activity of the network is the collection and compilation of experimental nuclear reaction data and the related bibliographic information in the EXFOR and CINDA databases. Many of the individual data centers also distribute other types of nuclear data information, including evaluated data libraries, nuclear structure and decay data, and nuclear data reports. The network today ensures the world-wide transfer of information and the coordinated evolution of an important source of nuclear data for current and future nuclear applications.

  12. Estimation of boundary heat flux using experimental temperature data in turbulent forced convection flow

    Science.gov (United States)

    Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P. M. V.

    2015-03-01

Heat flux at the boundary of a duct is estimated using an inverse technique based on the conjugate gradient method (CGM) with an adjoint equation. A two-dimensional inverse problem in hydrodynamically fully developed turbulent forced convection flow is considered. The simulations are performed with temperature data measured in an experimental test conducted in a wind tunnel. The results show that the present numerical model with CGM is robust and accurate enough to estimate the strength and position of the boundary heat flux.
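The conjugate gradient machinery can be sketched for a linear, discrete stand-in for such an inverse problem, in which measured temperatures T relate to the unknown boundary flux q through a sensitivity matrix S. The matrix and data below are random placeholders; the adjoint-equation formulation and the turbulent flow model of the paper are not reproduced.

```python
import numpy as np

def cgls(S, T, iters=50, tol=1e-12):
    """Conjugate gradient applied to the normal equations of
    min_q ||S q - T||^2 (the CGLS/CGNR variant)."""
    q = np.zeros(S.shape[1])
    r = S.T @ (T - S @ q)      # residual of the normal equations
    p = r.copy()
    rho = r @ r
    for _ in range(iters):
        if rho < tol:          # converged; avoid dividing by ~0
            break
        Sp = S @ p
        a = rho / (Sp @ Sp)
        q += a * p
        r -= a * (S.T @ Sp)
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return q

# synthetic sensitivity problem: recover a known flux vector exactly
rng = np.random.default_rng(0)
S = rng.normal(size=(30, 6))   # stand-in sensitivity matrix
q_true = rng.normal(size=6)    # "true" boundary flux parameters
T = S @ q_true                 # noise-free "measurements"
q_est = cgls(S, T)
```

In a real inverse heat conduction problem the iteration would be stopped early (the discrepancy principle) to regularize against measurement noise.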

  13. VALIDATION OF CFD PREDICTIONS OF FLOW IN A 3D ALVEOLATED BEND WITH EXPERIMENTAL DATA

    Science.gov (United States)

    VAN ERTBRUGGEN, C.; CORIERI, P.; THEUNISSEN, R.; RIETHMULLER, M.L.; DARQUENNE, C.

    2008-01-01

Verifying numerical predictions with experimental data is an important aspect of any modeling study. In the case of the lung, the absence of direct in-vivo flow measurements makes such verification almost impossible. We performed computational fluid dynamics (CFD) simulations in a 3D scaled-up model of an alveolated bend with rigid walls that incorporated essential geometrical characteristics of human alveolar structures, and compared numerical predictions with experimental flow measurements made in the same model by Particle Image Velocimetry (PIV). Flow in both models was representative of acinar flow during normal breathing (0.82 ml/s). The experimental model was built in silicone, and silicone oil was used as the carrier fluid. Flow measurements were obtained by an ensemble-averaging procedure. The CFD simulation was performed with STAR-CCM+ (CD-Adapco) using a polyhedral unstructured mesh. Velocity profiles in the central duct were parabolic, and no bulk convection existed between the central duct and the alveoli. Velocities inside the alveoli were ∼2 orders of magnitude smaller than the mean velocity in the central duct. CFD data agreed well with those obtained by PIV. In the central duct, data agreed within 1%. The maximum simulated velocity along the centerline of the model was 0.5% larger than measured experimentally. In the alveolar cavities, data agreed within 15% on average. This suggests that CFD techniques can satisfactorily predict acinar-type flow. Such a validation ensures a high degree of confidence in the accuracy of predictions made in more complex models of the alveolar region of the lung using similar CFD techniques. PMID:17915225

  14. Preliminary analysing of experimental data for the development of high Cr Alloy Creep damage Constitutive Equations

    OpenAIRE

    An, Lili; Xu, Qiang; Xu, Donglai; Lu, Zhongyu

    2012-01-01

This conference paper presents current research on the preliminary analysis of experimental data for the development of high-Cr alloy creep damage constitutive equations (for example, for P91 alloy). Firstly, it briefly introduces the background of general creep deformation, rupture and continuum damage mechanics. Secondly, it illustrates the constitutive equations used for P91 alloy or its weldment, especially the form and deficiencies of the two most widely used typical creep damage constitutive equations.

  15. SWAP Modeling Results of Monitored Soil Water Moisture Data of Irrigation Experimental Study

    Science.gov (United States)

    Zeiliger, A.; Garsia-Orenes, F.; van den Elsen, E.; Mataix-Solera, J.; Semenov, V.

    2009-04-01

In arid and semiarid zones of the Mediterranean region, a shortage of fresh water resources at times constitutes a dramatic problem. In these regions, with growing populations and scarce, temporally irregular rainfall during the growing season, efficient use of irrigation water has become a main challenge for future extensive agricultural development. Within the FP6 Water-Reuse project 516731, a dedicated field experiment was carried out in the Alicante region of Spain (location UTM X: 693.809, Y: 4.279.922, Z: 626) on a sandy Typic Xerofluvent (Soil Survey Staff, 1999), Calcaric Fluvisol (WRB, FAO, 1989), with the aim of investigating the water regime in water-repellent soils under irrigation of the vine Vitis labrusca. During the field experiment, from 2006 till 2008, the same regime of irrigation water application was maintained on 9 plots; weather parameters were monitored by an automatic meteorological station, and soil water moisture was monitored by a set of data loggers and TDR soil moisture sensors (ECO-2) installed at different depths. The SWAP model was used to simulate the water regime of the irrigated plots. Empirical coefficients of the van Genuchten-Mualem equations were calculated by pedotransfer functions derived from the HYPRES database using measured values of bulk density, organic matter content and soil texture. The validity of the estimated curves was tested by comparison with unsaturated soil hydraulic properties (water retention and hydraulic conductivity) measured in vitro by Wind's method on soil samples. The SWAP model was calibrated for each plot against the measured soil moisture data of irrigation events by adjusting the value of the saturated hydraulic conductivity. Verification of the SWAP model was done against the full range of experimental data. Similarities and differences in the water regime at the experimental plots, as well as the results of the SWAP model verification, are analyzed.

  16. Methods for obtaining and reducing experimental droplet impingement data on arbitrary bodies

    Science.gov (United States)

    Papadakis, Michael; Elangovan, R.; Freund, George A., Jr.; Breer, Marlin D.

    1991-01-01

    Experimental water droplet impingement data are used to validate particle trajectory computer codes used in the analysis and certification of aircraft de-icing/anti-icing systems. Water droplet impingement characteristics of aerodynamic surfaces are usually obtained from wind-tunnel dye tracer experiments. This paper presents a dye tracer method for measuring water droplet impingement characteristics on arbitrary geometries and a new data reduction method, based on laser reflectance measurements, for extracting impingement data. Extraction of impingement data has been a very time-consuming process in the past. The new data reduction method developed is at least an order of magnitude more efficient than the method previously used. The accuracy of the method is discussed and results obtained are presented.

  17. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Bhandarkar, Manisha, E-mail: manisha@ipr.res.in; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-11-15

    Highlights: • SAN (Storage Area Network) based centralized data storage system of SST-1 has envisaged to address the need of centrally availability of SST-1 storage system to archive/retrieve experimental data for the authenticated users for 24 × 7. • The SAN based data storage system has been designed/configured with 3-tiered architecture and GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is a modular, robust, and allows future expandability. • Important considerations has been taken like, Handling of varied Data writing speed from different subsystems to central storage, Simultaneous read access of the bulk experimental and as well as essential diagnostic data, The life expectancy of data, How often data will be retrieved and how fast it will be needed, How much historical data should be maintained at storage. - Abstract: SAN (Storage Area Network, a high-speed, block level storage device) based centralized data storage system of SST-1 (Steady State superconducting Tokamak) has envisaged to address the need of availability of SST-1 operation & experimental data centrally for archival as well as retrieval [2]. Considering the initial data volume requirement, ∼10 TB (Terabytes) capacity of SAN based data storage system has configured/installed with optical fiber backbone with compatibility considerations of existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with 3-tiered architecture and GFS (Global File System) cluster file system with multipath support. Tier-1 is of ∼3 TB (frequent access and low data storage capacity) comprises of Fiber channel (FC) based hard disks for optimum throughput. Tier-2 is of ∼6 TB (less frequent access and high data storage capacity) comprises of SATA based hard disks. Tier-3 will be planned later to store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  18. Crystallographic anomalous diffraction data for the experimental phasing of two myelin proteins, gliomedin and periaxin

    Directory of Open Access Journals (Sweden)

    Huijong Han

    2017-04-01

    Full Text Available We present datasets that can be used for the experimental phasing of crystal structures of two myelin proteins. The structures were recently described in the articles “Periaxin and AHNAK nucleoprotein 2 form intertwined homodimers through domain swapping” (H. Han, P. Kursula, 2014 [1] and “The olfactomedin domain from gliomedin is a β-propeller with unique structural properties” (H. Han, P. Kursula, 2015 [2]. Crystals of periaxin were derivatized with tungsten and xenon prior to data collection, and diffraction data for these crystals are reported at 3 and 1 wavelengths, respectively. Crystallographic data for two different pressurizing times for xenon are provided. Gliomedin was derivatized with platinum, and data for single-wavelength anomalous dispersion are included. The data can be used to repeat the phasing experiments, to analyze heavy atom binding sites in proteins, as well as to optimize future derivatization experiments of protein crystals with these and other heavy-atom compounds.

  19. Experimental vapor-liquid equilibria data for binary mixtures of xylene isomers

    Directory of Open Access Journals (Sweden)

    W.L. Rodrigues

    2005-09-01

    Full Text Available Separation of aromatic C8 compounds by distillation is a difficult task due to the low relative volatilities of the compounds and to the high degree of purity required of the final commercial products. For rigorous simulation and optimization of this separation, the use of a model capable of describing vapor-liquid equilibria accurately is necessary. Nevertheless, experimental data are not available for all binaries at atmospheric pressure. Vapor-liquid equilibria data for binary mixtures were isobarically obtained with a modified Fischer cell at 100.65 kPa. The vapor and liquid phase compositions were analyzed with a gas chromatograph. The methodology was initially tested for cyclo-hexane+n-heptane data; results obtained are similar to other data in the literature. Data for xylene binary mixtures were then obtained, and after testing, were considered to be thermodynamically consistent. Experimental data were regressed with Aspen Plus® 10.1 and binary interaction parameters were reported for the most frequently used activity coefficient models and for the classic mixing rules of two cubic equations of state.
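
As a minimal illustration of how VLE data of this kind are commonly reduced, the sketch below computes Raoult's-law-referenced activity coefficients from a single T-x-y point. The compositions and saturation pressures are hypothetical placeholders, not data from the paper:

```python
def activity_coefficients(x, y, P, P_sat):
    """Activity coefficients gamma_i = y_i * P / (x_i * P_i_sat) from one
    isobaric T-x-y data point, neglecting vapor-phase nonideality
    (a reasonable simplification near atmospheric pressure)."""
    return [yi * P / (xi * psat) for xi, yi, psat in zip(x, y, P_sat)]

# Hypothetical data point for a nearly ideal xylene-like pair at 100.65 kPa;
# P_sat values would normally come from Antoine correlations at the measured T.
gammas = activity_coefficients(x=[0.45, 0.55], y=[0.48, 0.52],
                               P=100.65, P_sat=[105.0, 95.0])
```

For a nearly ideal mixture such as the xylene isomers, both coefficients come out close to unity; larger deviations would feed the regression of the activity-coefficient model parameters.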

  20. Archival and Dissemination of the U.S. and Canadian Experimental Nuclear Reaction Data (EXFOR Project)

    Science.gov (United States)

    Pritychenko, Boris; Hlavac, Stanislav; Schwerer, Otto; Zerkin, Viktor

    2017-09-01

    The Exchange Format (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to a wealth of low- and intermediate-energy nuclear reaction physics data. This resource includes numerical data sets and bibliographical information for more than 22,000 experiments since the beginning of nuclear science. Analysis, recovery, and archiving of the experimental data sets will be discussed. Examples of recent developments in data renormalization, uploads and inverse reaction calculations for nuclear science and technology applications will be presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development and research activities. It is publicly available at the National Nuclear Data Center website http://www.nndc.bnl.gov/exfor and the International Atomic Energy Agency mirror site http://www-nds.iaea.org/exfor. This work was sponsored in part by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.

  1. Simulation of Ultrasound Propagation Through Three-Dimensional Trabecular Bone Structures: Comparison with Experimental Data

    Science.gov (United States)

    Padilla, Frederic; Bossy, Emmanuel; Laugier, Pascal

    2006-08-01

    We present a direct comparison between numerical simulations of wave propagation, performed through 28 volumes of trabecular bone, and the corresponding experimental data obtained on the same specimens. The volumes were reconstructed from high-resolution synchrotron microtomography experiments and were used as the input geometry in a three-dimensional (3D) finite-difference simulation tool developed in our laboratory. The version of the simulation algorithm that was used accounts for propagation in both the saturating fluid and bone, and does not take absorption into account. This algorithm has been validated in a previous paper [Bossy et al.: Phys. Med. Biol. 50 (2005) 5545] for simulation of wave propagation through trabecular bone. Two quantitative ultrasound parameters were studied at 1 MHz for both simulated and experimental signals: the normalized slope of the frequency-dependent attenuation coefficient (also called normalized broadband ultrasound attenuation (nBUA) in the medical field), and the phase velocity at the center frequency. We show that the simulated and experimental nBUA are in close agreement, especially for the high-porosity specimens. For specimens with a low porosity (or a high solid volume fraction), the simulation systematically underestimates the experimentally observed nBUA. This result suggests that the relative contributions of scattering and absorption to nBUA may vary with the bone volume fraction. A linear relationship is found between experimental and simulated phase velocity. The simulated phase velocity is found to be slightly higher than the experimental one, but this may be explained by the choice of material properties used for the simulation.
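
A minimal sketch of how nBUA is commonly extracted, assuming the frequency-dependent attenuation has already been measured; the frequency band and synthetic data below are illustrative, not the study's protocol:

```python
import numpy as np

def nbua(freq_mhz, atten_db, thickness_cm, fmin=0.2, fmax=1.0):
    """Normalized broadband ultrasound attenuation: slope of a linear fit of
    attenuation (dB) vs frequency (MHz) over a band, normalized by sample
    thickness, giving dB/cm/MHz."""
    band = (freq_mhz >= fmin) & (freq_mhz <= fmax)
    slope, _intercept = np.polyfit(freq_mhz[band], atten_db[band], 1)
    return slope / thickness_cm

# Synthetic check: 5 dB/MHz of attenuation through a 0.5 cm sample -> 10 dB/cm/MHz.
f = np.linspace(0.1, 1.0, 50)
a = 5.0 * f + 1.0
value = nbua(f, a, thickness_cm=0.5)
```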

  2. Comparison of repulsive interatomic potentials calculated with an all-electron DFT approach with experimental data

    Science.gov (United States)

    Zinoviev, A. N.; Nordlund, K.

    2017-09-01

    The interatomic potential determines the nuclear stopping power in materials. Most ion irradiation simulation models are based on the universal Ziegler-Biersack-Littmark (ZBL) potential (Ziegler et al., 1983), which, however, is an average and hence may not describe the stopping of all ion-material combinations well. Here we consider pair-specific interatomic potentials determined experimentally and by density-functional theory simulations with the DMol approach (DMol software, 1997), used to choose the basis wave functions. The interatomic potentials calculated using the DMol approach demonstrate unexpectedly good agreement with experimental data. Differences are mainly observed for heavy-atom systems, which suggests they can be improved by extending the basis set and treating relativistic effects more accurately. Experimental data prove that the approach of determining interatomic potentials from quasielastic scattering can be successfully used for modeling collision cascades in ion-solid collisions. The data obtained clearly indicate that the use of any universal potential is limited to internuclear distances R < 7 af (af is the Firsov length).
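
For reference, the universal ZBL potential discussed above has a standard closed form (constants from Ziegler et al.); a minimal sketch:

```python
import math

A0 = 0.529177   # Bohr radius, Angstrom
E2 = 14.3996    # e^2 / (4*pi*eps0), eV * Angstrom

def zbl_potential(r, z1, z2):
    """Universal ZBL screened Coulomb pair potential V(r) in eV, r in Angstrom:
    the bare Coulomb term times the four-exponential universal screening
    function phi(r/a)."""
    a = 0.8854 * A0 / (z1 ** 0.23 + z2 ** 0.23)   # universal screening length
    x = r / a
    phi = (0.18175 * math.exp(-3.19980 * x)
           + 0.50986 * math.exp(-0.94229 * x)
           + 0.28022 * math.exp(-0.40290 * x)
           + 0.02817 * math.exp(-0.20162 * x))
    return z1 * z2 * E2 / r * phi

v = zbl_potential(1.0, 14, 14)   # e.g. Si-Si at 1 Angstrom
```

Pair-specific potentials of the kind computed in the paper replace this average screening function with one fitted to the particular ion-target combination.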

  3. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, C.; Gehrchen, P.M.; Kiaer, T.

    2008-01-01

    : The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies...... operated on with pedicle screws between L4 and L5. The stress shielding effect was also examined. The bone remodeling results were compared with prospective bone mineral content measurements of 4 patients. They were measured after surgery, 3-, 6- and 12-months postoperatively. RESULTS: After 1 year...

  4. Pion Transition Form Factor at the Two-Loop Level VIS-À-VIS Experimental Data

    Science.gov (United States)

    Mikhailov, S. V.; Stefanis, N. G.

    We use light-cone QCD sum rules to calculate the pion-photon transition form factor, taking into account radiative corrections up to the next-to-next-to-leading order of perturbation theory. We compare the obtained predictions with all available experimental data from the CELLO, CLEO, and the BaBar Collaborations. We point out that the BaBar data are incompatible with the convolution scheme of QCD, on which our predictions are based, and can possibly be explained only with a violation of the factorization theorem. We pull together recent theoretical results and comment on their significance.

  5. The Risa R/Bioconductor package: integrative data analysis from experimental metadata and back again.

    Science.gov (United States)

    González-Beltrán, Alejandra; Neumann, Steffen; Maguire, Eamonn; Sansone, Susanna-Assunta; Rocca-Serra, Philippe

    2014-01-01

    The ISA-Tab format and software suite have been developed to break the silo effect induced by technology-specific formats for a variety of data types and to better support experimental metadata tracking. Experimentalists seldom use a single technique to monitor biological signals. Providing a multi-purpose, pragmatic and accessible format that abstracts away common constructs for describing Investigations, Studies and Assays, ISA is increasingly popular. To attract further interest towards the format and extend support to ensure reproducible research and reusable data, we present the Risa package, which delivers a central component to support the ISA format by enabling effortless integration with R, the popular, open-source data-crunching environment. The Risa package bridges the gap between metadata collection and curation in an ISA-compliant way and data analysis using the widely used statistical computing environment R. The package offers functionality for: i) parsing ISA-Tab datasets into R objects; ii) augmenting annotation with extra metadata not explicitly stated in the ISA syntax; iii) interfacing with domain-specific R packages; iv) suggesting potentially useful R packages available in Bioconductor for subsequent processing of the experimental data described in the ISA format; and finally v) saving back to ISA-Tab files augmented with analysis-specific metadata from R. We demonstrate these features by presenting use cases for mass spectrometry data and DNA microarray data. The Risa package is open source (with LGPL license) and freely available through Bioconductor. By making Risa available, we aim to facilitate the task of processing experimental data, encouraging a uniform representation of experimental information and results while delivering tools for ensuring traceability and provenance tracking.
The Risa package has been available since Bioconductor 2.11 (version 1.0.0), and version 1.2.1 appeared in Bioconductor 2.12, both along with documentation

  6. CDApps: integrated software for experimental planning and data processing at beamline B23, Diamond Light Source.

    Science.gov (United States)

    Hussain, Rohanah; Benning, Kristian; Javorfi, Tamas; Longo, Edoardo; Rudd, Timothy R; Pulford, Bill; Siligardi, Giuliano

    2015-03-01

    The B23 Circular Dichroism beamline at Diamond Light Source has been operational since 2009 and has seen visits from more than 200 user groups, who have generated large amounts of data. Based on the experience of overseeing the users' progress at B23, four key areas requiring the most assistance are identified: planning of experiments and note-keeping; designing titration experiments; processing and analysis of the collected data; and production of experimental reports. To streamline these processes, an integrated software package has been developed and made available to the users. The article summarizes the main features of the software.

  7. The upgrade of the J-TEXT experimental data access and management system

    Energy Technology Data Exchange (ETDEWEB)

    Yang, C., E-mail: yangchao_353@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Zhang, M. [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Zheng, W., E-mail: zhengwei@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Liu, R.; Zhuang, G. [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2014-05-15

    Highlights: • The J-TEXT DAMS is developed based on the B/S model, which makes the system convenient to access. • JWeb-Scope adopts a segment strategy for reading data that improves the data-reading speed. • The DAMS integrates data management and JWeb-Scope, giving visitors an easy way to access the experimental data. • JWeb-Scope can be accessed from all over the world to plot experimental data and zoom in or out smoothly. - Abstract: The experimental data of the J-TEXT tokamak are stored in an MDSplus database. The old J-TEXT data access system is based on the tools provided by MDSplus. Since the number of signals is huge, data retrieval for an experiment is difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS), based on MDSplus, has been developed. The DAMS leaves the old MDSplus system unchanged while providing new tools that help users handle all signals and retrieve the signals they need according to their information requirements. The DAMS also offers users a way to create their own jScope configuration files, which can be downloaded to the local computer. In addition, the DAMS provides a JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their own choice and zoom in and out smoothly. The whole system is based on the B/S model, so that users only need a browser to access the DAMS. The DAMS has been tested and provides a better user experience. It will be integrated into the J-TEXT remote participation system later.
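
The abstract does not describe the segment strategy in detail; one common way to read and plot massive signals efficiently is per-segment min-max downsampling, sketched below purely as an assumption of what such a strategy could look like:

```python
def minmax_decimate(samples, n_segments):
    """Reduce a long signal to per-segment (min, max) pairs so that a plot
    drawn from the reduced data preserves the visual envelope of the
    original while transferring far fewer points to the browser."""
    seg_len = max(1, len(samples) // n_segments)
    reduced = []
    for start in range(0, len(samples), seg_len):
        seg = samples[start:start + seg_len]
        reduced.append((min(seg), max(seg)))
    return reduced

# 1000 samples collapse to 10 (min, max) pairs; zooming in would re-request
# a narrower window at finer resolution.
points = minmax_decimate(list(range(1000)), 10)
```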

  8. Generalized sample size determination formulas for experimental research with hierarchical data.

    Science.gov (United States)

    Usami, Satoshi

    2014-06-01

    Hierarchical data sets arise when the data for lower units (e.g., individuals such as students, clients, and citizens) are nested within higher units (e.g., groups such as classes, hospitals, and regions). In data collection for experimental research, estimating the required sample size beforehand is a fundamental question for obtaining sufficient statistical power and precision of the focused parameters. The present research extends previous research from Heo and Leon (2008) and Usami (2011b), by deriving closed-form formulas for determining the required sample size to test effects in experimental research with hierarchical data, and by focusing on both multisite-randomized trials (MRTs) and cluster-randomized trials (CRTs). These formulas consider both statistical power and the width of the confidence interval of a standardized effect size, on the basis of estimates from a random-intercept model for three-level data that considers both balanced and unbalanced designs. These formulas also address some important results, such as the lower bounds of the needed units at the highest levels.
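
The paper's closed-form three-level formulas are not reproduced in the abstract; for orientation, the sketch below shows the simpler textbook power calculation for a two-arm cluster-randomized trial, inflating the individual-randomization sample size by the design effect 1 + (m - 1)·ICC (two levels, power only, unlike the paper's formulas, which also control confidence-interval width):

```python
from math import ceil
from statistics import NormalDist

def crt_sample_size(effect_size, icc, cluster_size, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-arm cluster-randomized trial:
    simple-randomization sample size times the design effect 1 + (m-1)*ICC.
    Returns (subjects per arm, clusters per arm)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    n_simple = 2 * (z_a + z_b) ** 2 / effect_size ** 2   # per arm, no clustering
    n = n_simple * (1 + (cluster_size - 1) * icc)        # inflate by design effect
    return ceil(n), ceil(n / cluster_size)

n_per_arm, clusters_per_arm = crt_sample_size(effect_size=0.5, icc=0.05, cluster_size=20)
```

Even a modest ICC of 0.05 with clusters of 20 nearly doubles the required sample relative to simple randomization, which is why clustering cannot be ignored at the planning stage.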

  9. Review of experimental data for modelling LWR fuel cladding behaviour under loss of coolant accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Massih, Ali R. [Quantum Technologies AB, Uppsala Science Park (Sweden)

    2007-02-15

    An extensive range of experiments has been conducted in the past to quantitatively identify and understand the behaviour of fuel rods under loss-of-coolant accident (LOCA) conditions in light water reactors (LWRs). The experimental data obtained provide the basis for the current emergency core cooling system acceptance criteria under LOCA conditions for LWRs. The results of recent experiments indicate that the cladding alloy composition and high-burnup effects influence LOCA acceptance criteria margins. In this report, we review some important past and recent experimental results. We first discuss the background to the acceptance criteria for LOCA, namely, clad embrittlement phenomenology, clad embrittlement criteria (limitations on maximum clad oxidation and peak clad temperature) and the experimental bases for the criteria. Two broad kinds of test have been carried out under LOCA conditions: (i) separate-effect tests to study clad oxidation, clad deformation and rupture, and zirconium alloy allotropic phase transition during LOCA; and (ii) integral LOCA tests, in which the entire LOCA sequence is simulated on a single rod or a multi-rod array in a fuel bundle, in the laboratory or in a test reactor. The tests and results are discussed, and empirical correlations deduced from these tests, as well as quantitative models, are considered. In particular, the impact of niobium in zirconium-based cladding and of the hydrogen content of the cladding on the allotropic phase transformation during LOCA, and also on the burst stress, is discussed. We review some recent LOCA integral test results with emphasis on thermal shock tests. Finally, suggestions for modelling and further evaluation of certain experimental results are made.

  10. New experimental data for validation of nanoparticle dry deposition velocity models

    Science.gov (United States)

    Damay, P. E.; Maro, D.; Coppalle, A.; Lamaud, E.; Connan, O.; Herbert, D.; Talbaut, M.; Irvine, M.

    2009-04-01

    In order to evaluate the impact of aerosol pollution on ecosystems, we have to study the transfer functions of particles over vegetated canopies. One of them is dry deposition, which is characterized by the deposition velocity (Vd): the ratio between the particle surface flux and the atmospheric aerosol concentration near the surface. This deposition velocity depends on many parameters: for example, the ground topography, the substrate, the micrometeorological conditions (turbulence), the aerosol characteristics (size, electric charge) and external fields (gravity, electric). Nowadays, several models of aerosol dry deposition consider the effects of turbulence and of particle size over a large range of diameters (a few nm to 100 µm). In the case of nanoparticles, there are not enough reliable experimental data to allow a comparison with the dry deposition models. For operative models, the scatter of experimental Vd data for nanoparticles in a rural environment creates uncertainties larger than one order of magnitude. The study of aerosol dry deposition velocity has remained an international challenge since the sixties and calls for an in situ experimental approach, in order to take into account the local particularities (substrate, turbulence, vegetated canopies, etc.). The main aim of this study is to obtain experimental data on aerosol dry deposition velocities over rural areas. We have therefore developed a direct eddy covariance method. The use of an Electrical Low Pressure Impactor (Outdoor ELPI, Dekati Inc.) for this method enables dry deposition velocities to be calculated for atmospheric aerosols ranging from 7 nm to 2 µm. We present our results and discuss the impact of micrometeorological parameters and particle size on the dry deposition velocity, as well as the possibility of applying this method in other environments.
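
The eddy covariance method described above estimates Vd as minus the covariance of vertical wind and concentration fluctuations, divided by the mean concentration; a minimal sketch with synthetic (not measured) data:

```python
import numpy as np

def deposition_velocity(w, c):
    """Dry deposition velocity from eddy covariance: Vd = -<w'c'> / <c>,
    where w is vertical wind speed (m/s) and c is aerosol concentration
    sampled simultaneously at high frequency."""
    w_fluct = w - w.mean()
    c_fluct = c - c.mean()
    flux = np.mean(w_fluct * c_fluct)   # turbulent flux <w'c'>
    return -flux / c.mean()             # positive for net downward deposition

# Synthetic series: updrafts carry slightly depleted air, so w and c anti-correlate.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 100_000)
c = 1e9 - 1e8 * w + rng.normal(0.0, 1e7, 100_000)
vd = deposition_velocity(w, c)   # close to 1e8 * var(w) / 1e9 = 0.009 m/s
```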

  11. Theoretical and experimental comparison of an ultra-high-speed laser data transmission system

    Science.gov (United States)

    Tycz, M.

    1973-01-01

    The performance of a digital optical data transmission system is specified by the probability that the system erroneously decides whether a signal has or has not been transmitted. Two factors which induce signal fading and thereby decrease system performance are atmospheric scintillation and transmitter pointing inaccuracy. A channel simulator was developed that is capable of producing the effects of both atmospheric scintillation and the transmitter pointing problem for a neodymium-YAG optical data transmission system. Comparison of data taken from the modulated intensity of a beam transmitted through the channel simulator with experimental data from GEOS-B argon laser transmission through the atmosphere to a low-earth-orbiting satellite indicates that the modulated signal intensity is log-normal to the degree of the measured atmospheric scintillation.
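
The log-normal intensity statistics mentioned above can be illustrated with a small simulation of fade probability; the threshold and scintillation parameters below are arbitrary, not values from the paper:

```python
import math
import random

def fade_probability(threshold, mu, sigma, n=200_000, seed=1):
    """For log-normally distributed received intensity I = exp(N(mu, sigma^2)),
    estimate P(I < threshold) by Monte Carlo and compare with the exact
    value from the normal CDF."""
    rng = random.Random(seed)
    hits = sum(rng.lognormvariate(mu, sigma) < threshold for _ in range(n))
    exact = 0.5 * (1.0 + math.erf((math.log(threshold) - mu) / (sigma * math.sqrt(2.0))))
    return hits / n, exact

# Probability the intensity fades below half its median, for moderate scintillation.
mc, exact = fade_probability(threshold=0.5, mu=0.0, sigma=0.4)
```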

  12. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Full Text Available Sitting-acquired deep tissue injuries (SADTI) are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI a new approach is under development: a musculo-skeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for different chair postures well. The correlation coefficients between experiment and model for the seat angle, backrest angle and footrest height were 0.93, 0.96, and 0.95, respectively. The study shows good agreement between experimental data and model predictions of forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.

  13. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    Science.gov (United States)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M

  14. The Correlation of Coupled Heat and Mass Transfer Experimental Data for Vertical Falling Film Absorption

    Energy Technology Data Exchange (ETDEWEB)

    Keyhani, M; Miller, W A

    1999-11-14

    Absorption chillers are gaining global acceptance as quality comfort cooling systems. These machines are the central chilling plants that supply comfort cooling for many large commercial buildings. Virtually all absorption chillers use lithium bromide (LiBr) and water as the absorption fluids; water is the refrigerant. Research has shown LiBr to be one of the best absorption working fluids because it has a high affinity for water, releases water vapor at relatively low temperatures, and has a boiling point much higher than that of water. The heart of the chiller is the absorber, where a process of simultaneous heat and mass transfer occurs as the refrigerant water vapor is absorbed into a falling film of aqueous LiBr. The more water vapor absorbed into the falling film, the larger the chiller's capacity for supporting comfort cooling. Improving the performance of the absorber leads directly to efficiency gains for the chiller. The design of an absorber is very empirical and requires experimental data, yet design data and correlations are sparse in the open literature. The experimental data available to date have been derived at LiBr concentrations ranging from 0.30 to 0.60 mass fraction. No literature data are readily available for the design operating conditions of 0.62 and 0.64 mass fraction of LiBr and absorber pressures of 0.7 and 1.0 kPa.

  15. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
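
The NASA search algorithm itself is not given in the abstract; as a generic illustration of regression-model search, here is a plain greedy forward-selection sketch (hypothetical variable names, residual-sum-of-squares criterion, no statistical-quality constraints):

```python
import numpy as np

def forward_select(X, y, names, max_terms=3):
    """Greedy forward selection of regressors by residual sum of squares.
    A generic sketch of candidate-term search, not the NASA algorithm.
    An intercept column is always included."""
    n = len(y)
    chosen, cols = [], [np.ones(n)]
    remaining = list(range(X.shape[1]))
    while remaining and len(chosen) < max_terms:
        def rss_with(j):
            A = np.column_stack(cols + [X[:, j]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ coef
            return r @ r
        best = min(remaining, key=rss_with)   # term giving the largest RSS drop
        chosen.append(names[best])
        cols.append(X[:, best])
        remaining.remove(best)
    return chosen

# Synthetic data: y depends only on x1 and x3, so those two are selected.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 1] - 3.0 * X[:, 3] + rng.normal(0.0, 0.1, 200)
selected = forward_select(X, y, ["x0", "x1", "x2", "x3"], max_terms=2)
```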

  16. Needle-tissue interaction forces--a survey of experimental data.

    Science.gov (United States)

    van Gerwen, Dennis J; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-07-01

    The development of needles, needle-insertion simulators, and needle-wielding robots for use in a clinical environment depends on a thorough understanding of the mechanics of needle-tissue interaction. It stands to reason that the forces arising from this interaction are influenced by numerous factors, such as needle type, insertion speed, and tissue characteristics. However, exactly how these factors influence the force is not clear. For this reason, the influence of various factors on needle insertion force was investigated by searching the literature for experimental data. This resulted in a comprehensive overview of experimental insertion-force data available in the literature, grouped by factor for quick reference. In total, 99 papers presenting such force data were found, with typical peak forces in the order of 1-10N. The data suggest, for example, that higher velocity tends to decrease puncture force and increase friction. Furthermore, increased needle diameter was found to increase peak forces, and conical needles were found to create higher peak forces than beveled needles. However, many questions remain open for investigation, especially those concerning the influence of tissue characteristics. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Deep-trap model for thermoluminescence: emptying stage calculation and comparison with experimental data

    CERN Document Server

    Faien, J; Pilleyre, T; Miallier, D; Montret, M

    1999-01-01

    In a former paper (), a general TL model based on deep-trap competition was proposed, but only a preliminary qualitative approach was made for the heating stage. The purpose of the present paper is to present more accurate calculations and their comparison with experimental data. For the sake of simplicity, only the one-TL-trap model is considered here. During heating, the following occurrences are taken into consideration after the thermal release of an electron: (i) retrapping on a TL trap; (ii) capture in a deep trap; (iii) recombination with a spatially correlated hole; (iv) recombination with a non-correlated hole centre. Using the 'quasi-equilibrium approximation', a set of two differential equations governing the kinetics is proposed. Their integration allows for the prediction of the shapes of glow curves and growth curves. Satisfactory fits were obtained with a large range of experimental results.
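    The paper's two-equation deep-trap kinetics cannot be reconstructed from the abstract alone, but the general glow-curve idea (thermally stimulated release from a trap, integrated over a linear heating ramp) can be sketched with the simpler first-order Randall-Wilkins model. All parameter values below are illustrative, not the authors':

```python
import math

def glow_curve(E=1.0, s=1e12, beta=1.0, n0=1.0, T0=300.0, T1=600.0, dT=0.1):
    """First-order (Randall-Wilkins) glow curve for a linear heating ramp:
    dn/dT = -(s/beta) * n * exp(-E/kT), TL intensity I = n * s * exp(-E/kT).
    E in eV, s in 1/s, beta in K/s. Returns temperature and intensity lists."""
    k = 8.617e-5                          # Boltzmann constant, eV/K
    n, T = n0, T0
    Ts, Is = [], []
    while T <= T1:
        p = s * math.exp(-E / (k * T))    # escape probability per unit time
        Ts.append(T)
        Is.append(n * p)                  # TL intensity ~ detrapping rate
        n = max(0.0, n - (p / beta) * n * dT)   # explicit Euler step in T
        T += dT
    return Ts, Is
```

    The full model of the paper would add the competing deep-trap and correlated-hole recombination channels as a second coupled equation; this sketch only shows why the glow curve rises (faster detrapping) and then falls (trap depletion).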

  18. Evidence of Experimental Bias in the Life Sciences: Why We Need Blind Data Recording.

    Directory of Open Access Journals (Sweden)

    Luke Holman

    2015-07-01

    Observer bias and other "experimenter effects" occur when researchers' expectations influence study outcome. These biases are strongest when researchers expect a particular result, are measuring subjective variables, and have an incentive to produce data that confirm predictions. To minimize bias, it is good practice to work "blind," meaning that experimenters are unaware of the identity or treatment group of their subjects while conducting research. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.

  19. Evidence of Experimental Bias in the Life Sciences: Why We Need Blind Data Recording.

    Science.gov (United States)

    Holman, Luke; Head, Megan L; Lanfear, Robert; Jennions, Michael D

    2015-07-01

    Observer bias and other "experimenter effects" occur when researchers' expectations influence study outcome. These biases are strongest when researchers expect a particular result, are measuring subjective variables, and have an incentive to produce data that confirm predictions. To minimize bias, it is good practice to work "blind," meaning that experimenters are unaware of the identity or treatment group of their subjects while conducting research. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.

  20. Experimental data of deformation and cracking behaviour of concrete ties reinforced with multiple bars.

    Science.gov (United States)

    Rimkus, Arvydas; Gribniak, Viktor

    2017-08-01

    The data presented in this article are related to the research article entitled "Experimental Investigation of cracking and deformations of concrete ties reinforced with multiple bars" (Rimkus and Gribniak, 2017) [1]. The article provides data on the deformation and cracking behaviour of 22 concrete ties reinforced with multiple bars. The number and diameter of the steel bars vary from 4 to 16 and from 5 mm to 14 mm, respectively. Two different covers (30 mm and 50 mm) are considered as well. The test recordings include average strains of the reinforcement and the concrete surface, the mean and maximum crack spacing, final crack patterns, and crack development schemes obtained using a digital image correlation (DIC) system. The reported original data set is made publicly available to enable critical or extended analyses.

  1. The τ leptons theory and experimental data: Monte Carlo, fits, software and systematic errors

    Science.gov (United States)

    Was, Z.

    2015-03-01

    The status of the τ lepton decay Monte Carlo generator TAUOLA is reviewed. Recent efforts on the development of new hadronic currents are presented. A multitude of new channels for anomalous τ decay modes and a parameterization based on defaults used by the BaBar collaboration are introduced. A parameterization based on theoretical considerations is also presented as an alternative. Lessons from comparisons and fits to the BaBar and Belle data are recalled. It was found that, as in the past, in particular at the time of comparisons with CLEO and ALEPH data, proper fitting to as detailed a representation of the experimental data as possible is essential for the appropriate development of models of τ decays. In the later part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z and H decays at the LHC is addressed. Some new results relevant for QED bremsstrahlung in such decays are presented as well.

  2. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    Science.gov (United States)

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

    The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. Bucketization did not cause cells to overlap

  3. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer allude to where the data come from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  4. Invited review: Experimental design, data reporting, and sharing in support of animal systems modeling research.

    Science.gov (United States)

    McNamara, J P; Hanigan, M D; White, R R

    2016-12-01

    The National Animal Nutrition Program "National Research Support Project 9" supports efforts in livestock nutrition, including the National Research Council's committees on the nutrient requirements of animals. Our objective was to review the status of experimentation and data reporting in the animal nutrition literature and to provide suggestions for the advancement of animal nutrition research and the ongoing improvement of field-applied nutrient requirement models. Improved data reporting consistency and completeness represent a substantial opportunity to improve nutrition-related mathematical models. We reviewed a body of nutrition research; recorded common phrases used to describe diets, animals, housing, and environmental conditions; and proposed equivalent numerical data that could be reported. With the increasing availability of online supplementary material sections in journals, we developed a comprehensive checklist of data that should be included in publications. To continue to improve our research effectiveness, studies utilizing multiple research methodologies to address complex systems and measure multiple variables will be necessary. From the current body of animal nutrition literature, we identified a series of opportunities to integrate research focuses (nutrition, reproduction, and genetics) to advance the development of nutrient requirement models. From our survey of current experimentation and data reporting in animal nutrition, we identified 4 key opportunities to advance animal nutrition knowledge: (1) coordinated experiments should be designed to employ multiple research methodologies; (2) systems-oriented research approaches should be encouraged and supported; (3) publication guidelines should be updated to encourage and support sharing of more complete data sets; and (4) new experiments should be more rapidly integrated into our knowledge bases, research programs, and practical applications. Copyright © 2016 American Dairy Science Association.

  5. Robust Informatics Infrastructure Required For ICME: Combining Virtual and Experimental Data

    Science.gov (United States)

    Arnold, Steven M.; Holland, Frederic A. Jr.; Bednarcyk, Brett A.

    2014-01-01

    With the increased emphasis on reducing the cost and time to market of new materials, the need for robust automated materials information management system(s) enabling sophisticated data mining tools is increasing, as evidenced by the emphasis on Integrated Computational Materials Engineering (ICME) and the recent establishment of the Materials Genome Initiative (MGI). This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Further, the use of increasingly sophisticated nonlinear, anisotropic and/or multi-scale models requires both the processing of large volumes of test data and complex materials data necessary to establish processing-microstructure-property-performance relationships. Fortunately, material information management systems have kept pace with the growing user demands and evolved to enable: (i) the capture of both pointwise data and full spectra of raw data curves; (ii) data management functions such as access, version, and quality controls; (iii) a wide range of data import, export and analysis capabilities; (iv) data pedigree traceability mechanisms; (v) data searching, reporting and viewing tools; and (vi) access to the information via a wide range of interfaces. This paper discusses key principles for the development of a robust materials information management system to enable the connections at various length scales to be made between experimental data and corresponding multiscale modeling toolsets to enable ICME. In particular, it describes NASA Glenn's efforts towards establishing such a database for capturing constitutive modeling behavior for both monolithic and composite materials.

  6. Bridging the Gap Between Experimental Data and Model Parameterization for Chikungunya Virus Transmission Predictions.

    Science.gov (United States)

    Christofferson, Rebecca C; Mores, Christopher N; Wearing, Helen J

    2016-12-15

    Chikungunya virus (CHIKV) has experienced 2 major expansion events in the last decade. The most recently emerged sublineage (ECSA-V) was shown to have increased efficiency in a historically secondary vector, Aedes albopictus, leading to speculation that this was a major factor in expansion. Subsequently, a number of experimental studies focused on the vector competence of CHIKV, as well as transmission modeling efforts. Mathematical models have used these data to inform their own investigations, but some have incorrectly parameterized the extrinsic incubation period (EIP) of the mosquitoes, using vector competence data. Vector competence and EIP are part of the same process but are not often correctly reported together. Thus, the way these metrics are used for model parameterization can be problematic. We offer suggestions for bridging this gap for the purpose of standardization of reporting and to promote appropriate use of experimental data in modeling efforts. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  7. Experimental Comparison of 56 Gbit/s PAM-4 and DMT for Data Center Interconnect Applications

    DEFF Research Database (Denmark)

    Eiselt, Nicklas; Dochhan, Annika; Griesser, Helmut

    2016-01-01

    Four-level pulse amplitude modulation (PAM-4) and discrete multi-tone transmission (DMT) in combination with intensity modulation and direct-detection are two promising approaches for a low-power and low-cost solution for the next generation of data center interconnect applications. We experimentally investigate and compare both modulation formats at a data rate of 56 Gb/s and a transmission wavelength of 1544 nm using the same experimental setup. We show that PAM-4 outperforms double sideband DMT and also vestigial sideband DMT for the optical back-to-back (b2b) case as well as for a transmission distance of 80 km SSMF in terms of required OSNR at a FEC threshold of 3.8e-3. However, it is also pointed out that both versions of DMT do not require any optical dispersion compensation to transmit over 80 km SSMF, while this is essential for PAM-4. Thus, implementation effort and cost may ...

  8. Gauging Students' Untutored Ability in Argumentation about Experimental Data: A South African case study

    Science.gov (United States)

    Lubben, Fred; Sadeck, Melanie; Scholtz, Zena; Braund, Martin

    2010-11-01

    This paper reports on a study into the untutored ability of Grade 10 students to engage in argumentation about the interpretation of experimental data, and the effect of unsupported small group discussions on this ability. In this study, argumentation is used to promote critical thinking. The sample includes 266 students from five South African schools across the resource spectrum, forming 70 friendship groups. Students are required to provide written interpretations of experimental data, and justify these interpretations based on the evidence and concepts of measurement. Individual responses form the basis of small group discussions, after which students again provide written justified interpretations of the readings. The data show an initial low level of argumentation, with significant variation according to school resource levels. Considerable improvement in the level of argumentation occurs merely through small group discussions unsupported by the teacher. The findings suggest several factors influencing argumentation ability, such as experience with practical work, perceptions of the purpose of small group discussions, the language ability to articulate ideas, and cultural influences. Methodological issues arising from the study and implications for teaching and assessment are discussed.

  9. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    A comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  10. Hot Water Distribution System Program Documentation and Comparison to Experimental Data

    Energy Technology Data Exchange (ETDEWEB)

    Baskin, Evelyn [GE Infrastructure Energy; Craddick, William G [ORNL; Lenarduzzi, Roberto [ORNL; Wendt, Robert L [ORNL; Woodbury, Professor Keith A. [University of Alabama, Tuscaloosa

    2007-09-01

    In 2003, the California Energy Commission's (CEC's) Public Interest Energy Research (PIER) program funded Oak Ridge National Laboratory (ORNL) to create a computer program to analyze hot water distribution systems for single family residences, and to perform such analyses for a selection of houses. This effort and its results were documented in a report provided to CEC in March, 2004 [1]. The principal objective of the effort was to compare the water and energy wasted between various possible hot water distribution systems for various different house designs. It was presumed that water being provided to a user would be considered suitably warm when it reached 105 F. Therefore, what was needed was a tool which could compute the time it takes for water reaching the draw point to reach 105 F, and the energy wasted during this wait. The computer program used to perform the analyses was a combination of a calculational core, produced by Dr. Keith A. Woodbury, Professor of Mechanical Engineering and Director, Alabama Industrial Assessment Center, University of Alabama, and a user interface based on LabVIEW, created by Dr. Roberto Lenarduzzi of ORNL. At that time, the computer program was in a relatively rough and undocumented form, adequate to perform the contracted work but not in a condition where it could be readily used by those not involved in its generation. Subsequently, the CEC provided funding through Lawrence Berkeley National Laboratory (LBNL) to improve the program's documentation and user interface to facilitate use by others, and to compare the program's results to experimental data generated by Dr. Carl Hiller. This report describes the program and provides user guidance. It also summarizes the comparisons made to experimental data, along with options built into the program specifically to allow these comparisons. These options were necessitated by the fact that some of the experimental data required options and features not originally included in the program.

  11. Parameters for Viability Check on Gravitational Theories Regarding the Experimental Data

    Directory of Open Access Journals (Sweden)

    Celakoska E. G.

    2009-01-01

    Parameterized post-Newtonian formalism requires the existence of a symmetric metric in a gravitational theory in order to perform a viability check regarding the experimental data. The requirement of a symmetric metric is a strong constraint satisfied by a very narrow class of theories. In this letter we propose a viability check of a theory using the corresponding theory's equations of motion. It is sufficient that a connection exists, not necessarily a metrical one. The method is based on an analysis of the Lorentz invariant terms in the equations of motion. An example of the method is presented on the Einstein-Infeld-Hoffmann equations.

  12. Kinetic energy in the collective quadrupole Hamiltonian from the experimental data

    Directory of Open Access Journals (Sweden)

    R.V. Jolos

    2017-06-01

    The dependence of the kinetic energy term of the collective nuclear Hamiltonian on collective momentum is considered. It is shown that the fourth-order term in collective momentum of the collective quadrupole Hamiltonian generates a sizable effect on the excitation energies and the matrix elements of the quadrupole moment operator. It is demonstrated that the results of the calculation are sensitive to the values of some matrix elements of the quadrupole moment. This stresses the importance, for a given nucleus, of having experimental data for the reduced matrix elements of the quadrupole moment operator taken between all low-lying states with angular momenta not exceeding 4.

  13. A note on the analysis of germination data from complex experimental designs

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Andreasen, Christian; Streibig, Jens Carl

    2017-01-01

    In recent years germination experiments have become more and more complex. Typically, they are replicated in time as independent runs and at each time point they involve hierarchical, often factorial experimental designs, which are now commonly analysed by means of linear mixed models. However ... from event-time models fitted separately to data from each germination test by means of meta-analytic random effects models. We show that this approach provides a more appropriate appreciation of the sources of variation in hierarchically structured germination experiments as both between- and within- ...
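    The meta-analytic random-effects pooling of per-run estimates that this note advocates can be sketched with the classic DerSimonian-Laird estimator, which splits variation into within-run (sampling) and between-run (heterogeneity) components. This is a generic sketch of the technique, not the authors' exact implementation:

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of independent per-run estimates.
    estimates: per-run effect estimates (e.g., from separate event-time fits);
    variances: their squared standard errors. Returns (pooled, se, tau2),
    where tau2 is the DerSimonian-Laird between-run variance."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C)                        # heterogeneity variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

    When the runs agree (Q below its degrees of freedom), tau2 is zero and the pooling reduces to a fixed-effect average; when runs disagree, tau2 grows and the pooled standard error correctly widens to reflect between-run variation.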

  14. Reaction cross-section calculations using new experimental and theoretical level structure data for deformed nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Hoff, R.W.; Gardner, D.G.; Gardner, M.A.

    1985-05-01

    A technique for modeling level structures of odd-odd nuclei has been used to construct sets of discrete states with energies in the range 0 to 1.5 MeV for several nuclei in the rare-earth and actinide regions. The accuracy of the modeling technique was determined by comparison with experimental data. Examination was made of what effect the use of these new, more complete sets of discrete states has on the calculation of level densities, total reaction cross sections, and isomer ratios. 9 refs.

  15. CFD and experimental data of closed-loop wind tunnel flow

    Directory of Open Access Journals (Sweden)

    John Kaiser Calautit

    2016-06-01

    The data presented in this article were the basis for the study reported in the research articles entitled ‘A validated design methodology for a closed loop subsonic wind tunnel' (Calautit et al., 2014) [1], which presented a systematic investigation into the design, simulation and analysis of flow parameters in a wind tunnel using Computational Fluid Dynamics (CFD). The authors evaluated the accuracy of replicating the flow characteristics for which the wind tunnel was designed using numerical simulation. Here, we detail the numerical and experimental set-up for the analysis of the closed-loop subsonic wind tunnel with an empty test section.

  16. An Experimental Seismic Data and Parameter Exchange System for Tsunami Warning Systems

    Science.gov (United States)

    Hoffmann, T. L.; Hanka, W.; Saul, J.; Weber, B.; Becker, J.; Heinloo, A.; Hoffmann, M.

    2009-12-01

    For several years, GFZ Potsdam has been operating a global earthquake monitoring system. Since the beginning of 2008, this system has also been used as an experimental seismic background data center for two different regional Tsunami Warning Systems (TWS), the IOTWS (Indian Ocean) and the interim NEAMTWS (NE Atlantic and Mediterranean). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project and capable of acquiring, archiving and processing real-time data feeds, was extended for export and import of individual processing results within the two clusters of connected SC3 systems. Therefore not only real-time waveform data are routed to the attached warning centers through GFZ but also processing results. While the current experimental NEAMTWS cluster consists of SC3 systems in six designated national warning centers in Europe, the IOTWS cluster presently includes seven centers, with another three likely to join in 2009/10. For NEAMTWS purposes, the GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) in Europe was substantially extended by adding many stations from Western European countries, optimizing the station distribution. In parallel to the data collection over the Internet, a GFZ VSAT hub for secured data collection of the EuroMED GEOFON and NEAMTWS backbone network stations became operational and first data links were established through this backbone. For the Southeast Asia region, a VSAT hub was established in Jakarta already in 2006, with some other partner networks connecting to this backbone via the Internet. Since its establishment, the experimental system has had the opportunity to prove its performance in a number of relevant earthquakes. Reliable solutions derived from a minimum of 25 stations were very promising in terms of speed. For important events, automatic alerts were released and disseminated by emails and SMS. Manually verified solutions are added as soon as they become available.

  17. Comparison of experimental data with results of some drying models for regularly shaped products

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Ahmet [Aksaray University, Department of Mechanical Engineering, Aksaray (Turkey); Aydin, Orhan [Karadeniz Technical University, Department of Mechanical Engineering, Trabzon (Turkey); Dincer, Ibrahim [University of Ontario Institute of Technology, Faculty of Engineering and Applied Science, Oshawa, ON (Canada)

    2010-05-15

    This paper presents an experimental and theoretical investigation of the drying of moist slab, cylindrical and spherical products to study dimensionless moisture content distributions and their comparisons. The experimental study includes the measurement of the moisture content distributions of slab and cylindrical carrot, slab and cylindrical pumpkin, and spherical blueberry during drying at various temperatures (e.g., 30, 40, 50 and 60 °C) at a constant velocity (U = 1 m/s) and relative humidity φ = 30%. In the theoretical analysis, two moisture transfer models are used to determine drying process parameters (e.g., drying coefficient and lag factor) and moisture transfer parameters (e.g., moisture diffusivity and moisture transfer coefficient), and to calculate the dimensionless moisture content distributions. The calculated results are then compared with the experimental moisture data. A considerably high agreement is obtained between the calculations and experimental measurements for the cases considered. The effective diffusivity values were evaluated between 0.741 × 10⁻⁵ and 5.981 × 10⁻⁵ m²/h for slab products, 0.818 × 10⁻⁵ and 6.287 × 10⁻⁵ m²/h for cylindrical products, and 1.213 × 10⁻⁷ and 7.589 × 10⁻⁷ m²/h for spherical products using model I, and between 0.316 × 10⁻⁵ and 5.072 × 10⁻⁵ m²/h for slab products, 0.580 × 10⁻⁵ and 9.587 × 10⁻⁵ m²/h for cylindrical products, and 1.408 × 10⁻⁷ and 13.913 × 10⁻⁷ m²/h for spherical products using model II. (orig.)
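    Effective diffusivities like those reported above are typically extracted from the measured drying curve via the first term of Crank's series solution; for a slab of half-thickness L this gives MR = (8/π²)·exp(-π²·D·t/(4L²)), so the drying coefficient S (the slope of -ln(MR) versus t) yields D = 4L²S/π². A minimal sketch of that extraction (the test data below are synthetic, not the paper's measurements):

```python
import math

def effective_diffusivity_slab(times, moisture_ratios, half_thickness):
    """Estimate D_eff from the falling-rate period of a slab drying curve.
    First term of Crank's series: MR = (8/pi^2) * exp(-pi^2 * D * t / (4 L^2)),
    so -ln(MR) is linear in t with slope S, and D = 4 L^2 S / pi^2.
    times in h, half_thickness in m -> D in m^2/h."""
    xs = times
    ys = [-math.log(mr) for mr in moisture_ratios]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # least-squares slope of -ln(MR) vs t (the drying coefficient S)
    S = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return 4.0 * half_thickness ** 2 * S / math.pi ** 2
```

    Analogous first-term solutions with different geometric constants apply for the cylindrical and spherical products of the study.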

  18. The Particle Physics Playground website: tutorials and activities using real experimental data

    Science.gov (United States)

    Bellis, Matthew; CMS Collaboration

    2016-03-01

    The CERN Open Data Portal provides access to data from the LHC experiments to anyone with the time and inclination to learn the analysis procedures. The CMS experiment has made a significant amount of data available in basically the same format the collaboration itself uses, along with software tools and a virtual environment in which to run those tools. These same data have also been mined for educational exercises that range from very simple .csv files that can be analyzed in a spreadsheet to more sophisticated formats that use ROOT, a dominant software package in experimental particle physics but not used as much in the general computing community. This talk will present the Particle Physics Playground website (http://particle-physics-playground.github.io/), a project that uses data from the CMS experiment, as well as the older CLEO experiment, in tutorials and exercises aimed at high school and undergraduate students and other science enthusiasts. The data are stored as text files and the users are provided with starter Python/Jupyter notebook programs and accessor functions which can be modified to perform fairly high-level analyses. The status of the project, success stories, and future plans for the website will be presented. This work was supported in part by NSF Grant PHY-1307562.

  19. Modelling experimental image formation for likelihood-based classification of electron microscopy data

    Science.gov (United States)

    Scheres, Sjors H. W.; Núñez-Ramírez, Rafael; Gómez-Llorente, Yacob; Martín, Carmen San; Eggermont, Paul P. B.; Carazo, José María

    2007-01-01

    The coexistence of multiple distinct structural states often obstructs the application of three-dimensional cryo-electron microscopy to large macromolecular complexes. Maximum likelihood approaches are emerging as robust tools for solving the image classification problems that are posed by such samples. Here, we propose a statistical data model that allows for a description of the experimental image formation within the formulation of 2D and 3D maximum likelihood refinement. The proposed approach comprises a formulation of the probability calculations in Fourier space, including a spatial frequency-dependent noise model and a description of defocus-dependent imaging effects. The Expectation-Maximization-like algorithms presented are generally applicable to the alignment and classification of structurally heterogeneous projection data. Their effectiveness is demonstrated with various examples, including 2D classification of top views of the archaeal helicase MCM, and 3D classification of 70S E. coli ribosome and Simian Virus 40 large T-antigen projections. PMID:17937907
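    The full Fourier-space image model is far beyond a snippet, but the core expectation-maximization alternation behind such maximum-likelihood classification can be shown at toy scale for a 1-D two-component Gaussian mixture with fixed unit variance. This illustrates only the E-step/M-step loop (soft assignment, then re-estimation), not the paper's noise or defocus model:

```python
import math

def em_two_gaussians(xs, iters=50):
    """Tiny EM for a two-component 1-D Gaussian mixture with fixed unit
    variance. E-step: soft-assign each point to the components; M-step:
    re-estimate component means and mixing weights from the assignments."""
    mu = [min(xs), max(xs)]          # crude initialization at the data extremes
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:                 # E-step: responsibilities per point
            p = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        for k in range(2):           # M-step: weighted means and weights
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            pi[k] = nk / len(xs)
    return mu, pi
```

    In the cryo-EM setting the "points" are projection images, the "components" are structural classes (with unknown orientations), and the likelihood is evaluated in Fourier space, but the alternation is the same.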

  20. Quarkonium production at the LHC: A data-driven analysis of remarkably simple experimental patterns

    Science.gov (United States)

    Faccioli, Pietro; Lourenço, Carlos; Araújo, Mariana; Knünz, Valentin; Krätschmer, Ilse; Seixas, João

    2017-10-01

    The LHC quarkonium production data reveal a startling observation: the J/ψ, ψ(2S), χc1, χc2 and ϒ(nS) pT-differential cross sections in the central rapidity region are compatible with one universal momentum scaling pattern. Considering also the absence of strong polarizations of directly and indirectly produced S-wave mesons, we conclude that there is currently no evidence of a dependence of the partonic production mechanisms on the quantum numbers and mass of the final state. The experimental observations supporting this universal production scenario are remarkably significant, as shown by a new analysis approach, unbiased by specific theoretical calculations of partonic cross sections, which are only considered a posteriori, in comparisons with the data-driven results.

  1. Experimental Design and Data Analysis of In Vivo Fluorescence Imaging Studies.

    Science.gov (United States)

    Ding, Ying; Lin, Hui-Min

    2016-01-01

    The objective of this chapter is to provide researchers who conduct in vivo fluorescence imaging studies with guidance on the statistical aspects of the experimental design and data analysis of such studies. In the first half of this chapter, we introduce the key statistical components for designing a sound in vivo experiment. Particular emphasis is placed on the issues and designs that pertain to fluorescence imaging studies. Examples representing several popular types of fluorescence imaging experiments are provided as case studies to demonstrate how to appropriately design such studies. In the second half of this chapter, we explain the fundamental statistical concepts and methods used in the data analysis of typical in vivo experiments. We also provide specific examples from in vivo imaging studies to illustrate the key steps of the analysis procedure.
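    One of the basic design calculations such guidance covers is sample size. A minimal sketch under a normal approximation follows; the effect size and variability are hypothetical, not values from the chapter.

```python
import math
from statistics import NormalDist

def sample_size_two_group(delta, sigma, alpha=0.05, power=0.8):
    """Approximate per-group n for comparing two means (normal
    approximation; an exact t-based calculation adds a few subjects)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g. detect a 0.30 mean difference in a normalized fluorescence signal
# with a between-animal SD of 0.25 (both values hypothetical):
print(sample_size_two_group(delta=0.30, sigma=0.25))
```

    Larger variability or smaller effects raise n quadratically, which is why the chapter's emphasis on variance reduction in the design matters.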

  2. New experimental data on the influence of extranuclear factors on the probability of radioactive decay

    CERN Document Server

    Bondarevskij, S I; Skorobogatov, G A

    2002-01-01

    New experimental data on the influence of various extranuclear factors on the probability (λ) of radioactive decay are presented. During redox processes in solutions containing {sup 139}Ce, the relative change in λ measured by the ΔI/I method was [I(Ce{sup IV}) - I(Ce{sup III})]/I{sub mean} = +(1.4±0.6)×10{sup -4}. Using a modification of the method based on displacement of the secular radioactive equilibrium, when a MgO({sup 121m}Te) source was cooled to 78 K, a growth of λ of the tellurium nuclear isomer by (0.04±0.02)% was detected. New experimental data on the increase in gamma-radioactivity of a Be({sup 123m}Te) sample due to a low-temperature induced reaction, i.e. collective nuclear superluminescence, are also provided.

  3. Validation of NEPTUNE-CFD Two-Phase Flow Models Using Experimental Data

    Directory of Open Access Journals (Sweden)

    Jorge Pérez Mañes

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BFBT (BWR) and PSBT (PWR) benchmarks. Since the two-phase flow models of CFD codes are being extensively improved, validation is a key step towards the acceptability of such codes. The validation work was performed in the frame of the European NURISP Project and focused on the steady-state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Thanks to the coupling of the heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is significantly improved. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvement of the NEPTUNE-CFD code were also identified.

  4. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    Energy Technology Data Exchange (ETDEWEB)

    Ould-Dada, Z. [Food Standards Agency, Radiological Protection and Research Management Division, Aviation House, 125 Kingsway, Room 715B, London WC2B 6NH (United Kingdom)]. E-mail: Zitouni.ould-dada@defra.gsi.gov.uk; Carini, F. [Universita Cattolica del Sacro Cuore, Faculty of Agricultural Sciences, Institute of Agricultural and Environmental Chemistry, Via Emilia Parmense, 84, I-29100 Piacenza (Italy); Eged, K. [Department of Radiochemistry, University of Veszprem, P.O. Box 158 H-8201, H-8200 Veszprem (Hungary); Kis, Z. [Department of Radiochemistry, University of Veszprem, P.O. Box 158 H-8201, H-8200 Veszprem (Hungary); Linkov, I. [ICF Consulting, Inc., 33 Hayden Ave, Lexington, MA 02421 (United States); Mitchell, N.G. [Mouchel Consulting Ltd., West Hall, Parvis Road, West Byfleet, Surrey, KT14 6EZ (United Kingdom); Mourlon, C. [Institute for Radiological Protection and Nuclear Safety (IRSN)/Environment and Emergency Operations Division (DEI), Laboratory of Environmental Modelling - LME, CE/Cadarache, 13 108 St Paul-lez-Durance Cedex (France); Robles, B. [CIEMAT, Dept. de Impacto Ambiental (DIAE), Edif. 3A, Avenida Complutense 22, E-28040 Madrid (Spain); Sweeck, L. [SCK-CEN, Boeretang 200, 2400 Mol (Belgium); Venter, A. [Enviros Consulting Ltd, Telegraphic House, Waterfront Quay, Salford Quays, Greater Manchester, M50 3XW (United Kingdom)

    2006-08-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models of various structure and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements on the transfer of {sup 134}Cs and {sup 85}Sr via leaf-to-fruit and soil-to-fruit in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for {sup 134}Cs, whereas differences for {sup 85}Sr are lower. Performance of models against experimental data is discussed together with the lessons learned from this exercise.

  5. Radionuclides in fruit systems. Model prediction-experimental data intercomparison study

    Energy Technology Data Exchange (ETDEWEB)

    Ould-Dada, Z. [Food Standards Agency, Radiological Protection and Research Management Division, Aviation House, 125 Kingsway, Room 715B, London WC2B 6NH (United Kingdom); Carini, F. [Universita Cattolica del Sacro Cuore, Faculty of Agricultural Sciences, Institute of Agricultural and Environmental Chemistry, Via Emilia Parmense, 84, I-29100 Piacenza (Italy); Eged, K.; Kis, Z. [Department of Radiochemistry, University of Veszprem, P.O. Box 158 H-8201, H-8200 Veszprem (Hungary); Linkov, I. [ICF Consulting, Inc., 33 Hayden Ave, Lexington, MA 02421 (United States); Mitchell, N.G. [Mouchel Consulting Ltd., West Hall, Parvis Road, West Byfleet, Surrey, KT14 6EZ (United Kingdom); Mourlon, C. [Institute for Radiological Protection and Nuclear Safety (IRSN)/Environment and Emergency Operations Division (DEI), Laboratory of Environmental Modelling LME, CE/Cadarache, 13 108 St Paul-lez-Durance Cedex (France); Robles, B. [CIEMAT, Dept. de Impacto Ambiental (DIAE), Edif. 3A, Avenida Complutense 22, E-28040 Madrid (Spain); Sweeck, L. [SCK/CEN, Boeretang 200, 2400 Mol (Belgium); Venter, A. [Enviros Consulting Ltd, Telegraphic House, Waterfront Quay, Salford Quays, Greater Manchester, M50 3XW (United Kingdom)

    2006-08-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models of various structure and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements on the transfer of {sup 134}Cs and {sup 85}Sr via leaf-to-fruit and soil-to-fruit in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for {sup 134}Cs, whereas differences for {sup 85}Sr are lower. Performance of models against experimental data is discussed together with the lessons learned from this exercise. (author)

  6. A statistical method for evaluation of the experimental phase equilibrium data of simple clathrate hydrates

    DEFF Research Database (Denmark)

    Eslamimanesh, Ali; Gharagheizi, Farhad; Mohammadi, Amir H.

    2012-01-01

    We, herein, present a statistical method for diagnostics of the outliers in phase equilibrium data (dissociation data) of simple clathrate hydrates. The applied algorithm is performed on the basis of the Leverage mathematical approach, in which the statistical Hat matrix, the Williams plot, and the residuals of a selected correlation lead to the definition of the probable outliers. This method not only contributes to outlier diagnostics but also identifies the range of applicability of the applied model and the quality of the existing experimental data. An available correlation from the literature in exponential form is used to represent/predict the hydrate dissociation pressures for three-phase equilibrium conditions (liquid water/ice-vapor-hydrate). The investigated hydrate formers are methane, ethane, propane, carbon dioxide, nitrogen, and hydrogen sulfide. It is interpreted from the obtained results
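    The Leverage approach can be sketched compactly for a one-predictor fit, where the Hat-matrix diagonal has a closed form. The data below are synthetic with one deliberately corrupted point, and the ±2 standardized-residual cut is one common Williams-plot convention (±3 is also used).

```python
import math

# Synthetic dissociation-style data: y roughly linear in x, with one
# deliberately corrupted point (index 5). All numbers are made up.
x = [0.00340, 0.00345, 0.00350, 0.00355, 0.00360, 0.00365, 0.00370]
y = [1.90, 1.70, 1.52, 1.33, 1.15, 2.40, 0.78]

n, p = len(x), 1  # n data points, p predictors
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
intercept = sum(y) / n - slope * xbar

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
mse = sum(r * r for r in resid) / (n - p - 1)

# Hat (leverage) values: for one predictor the diagonal has a closed form.
h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
# Standardized residuals, the y-axis of the Williams plot.
std_resid = [r / math.sqrt(mse * (1 - hi)) for r, hi in zip(resid, h)]

h_star = 3 * (p + 1) / n  # common leverage warning limit
outliers = [i for i, (hi, ri) in enumerate(zip(h, std_resid))
            if hi > h_star or abs(ri) > 2]
print(outliers)
```

    Plotting std_resid against h with the h_star and residual limits drawn in reproduces the Williams plot the record describes.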

  7. Meteorological and snow distribution data in the Izas Experimental Catchment (Spanish Pyrenees) from 2011 to 2017

    Science.gov (United States)

    Revuelto, Jesús; Azorin-Molina, Cesar; Alonso-González, Esteban; Sanmiguel-Vallelado, Alba; Navarro-Serrano, Francisco; Rico, Ibai; López-Moreno, Juan Ignacio

    2017-12-01

    This work describes the snow and meteorological data set available for the Izas Experimental Catchment in the Central Spanish Pyrenees, from the 2011 to 2017 snow seasons. The experimental site is located on the southern side of the Pyrenees between 2000 and 2300 m above sea level, covering an area of 55 ha. The site is a good example of a subalpine environment in which the evolution of snow accumulation and melt are of major importance in many mountain processes. The climatic data set consists of (i) continuous meteorological variables acquired from an automatic weather station (AWS), (ii) detailed information on snow depth distribution collected with a terrestrial laser scanner (TLS, lidar technology) for certain dates across the snow season (between three and six TLS surveys per snow season) and (iii) time-lapse images showing the evolution of the snow-covered area (SCA). The meteorological variables acquired at the AWS are precipitation, air temperature, incoming and reflected solar radiation, infrared surface temperature, relative humidity, wind speed and direction, atmospheric air pressure, surface temperature (snow or soil surface), and soil temperature; all were taken at 10 min intervals. Snow depth distribution was measured during 23 field campaigns using a TLS, and daily information on the SCA was also retrieved from time-lapse photography. The data set (https://doi.org/10.5281/zenodo.848277) is valuable since it provides high-spatial-resolution information on the snow depth and snow cover, which is particularly useful when combined with meteorological variables to simulate snow energy and mass balance. This information has already been analyzed in various scientific studies on snowpack dynamics and its interaction with the local climatology or topographical characteristics. However, the database generated has great potential for understanding other environmental processes from a hydrometeorological

  8. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Germina [ORNL; Gauld, Ian C [ORNL

    2011-01-01

    This report is one of several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, in order to establish the uncertainty and bias associated with the code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % {sup 235}U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  9. Data Communications Using Guided Elastic Waves by Time Reversal Pulse Position Modulation: Experimental Study

    Directory of Open Access Journals (Sweden)

    Yujie Ying

    2013-07-01

    In this paper, we present and demonstrate a low-complexity elastic wave signaling and reception method to achieve high-data-rate communication on dispersive solid elastic media, such as metal pipes, using PZT (lead zirconate titanate) piezoelectric transducers. Data communication is realized using pulse position modulation (PPM) as the signaling method and the elastic medium as the communication channel. The communication system first transmits a small number of training pulses to probe the dispersive medium. The time-reversed probe signals are then utilized as the information-carrying waveforms. Rapid timing acquisition of transmitted waveforms for demodulation over the elastic medium is made possible by exploiting the reciprocity property of guided elastic waves. The experimental tests were conducted using a National Instruments PXI system for waveform excitation and data acquisition. Data telemetry bit rates of 10 kbps, 20 kbps, 50 kbps and 100 kbps, with average bit error rates of 0, 5.75 × 10−4, 1.09 × 10−2 and 5.01 × 10−2, respectively, out of a total of 40,000 transmitted bits were obtained when transmitting at a center frequency of 250 kHz with a 500 kHz bandwidth on steel pipe specimens. To emphasize the influence of time reversal, no complex processing techniques, such as adaptive channel equalization or error correction coding, were employed.
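    The probe/time-reverse/transmit idea can be sketched with a toy discrete channel. The impulse response, slot sizes, and single-symbol setup below are illustrative assumptions, not the paper's experimental parameters.

```python
# Toy dispersive/multipath channel impulse response (illustrative only).
channel = [0.0] * 40
for tap, amp in [(0, 1.0), (7, 0.6), (15, -0.4), (23, 0.3)]:
    channel[tap] = amp

def convolve(a, b):
    """Direct-form linear convolution (the channel acts as an FIR filter)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Step 1: probe the channel with a unit pulse and time-reverse the response.
probe_response = convolve([1.0], channel)
carrier = probe_response[::-1]

# Step 2: PPM - one symbol places the carrier in 1 of 4 time slots.
slot, n_slots, slot_len = 2, 4, 80
tx = [0.0] * (n_slots * slot_len)
for i, c in enumerate(carrier):
    tx[slot * slot_len + i] += c

# Step 3: the medium convolves the waveform again; time reversal focuses
# the energy so the strongest received sample falls in the sent slot.
rx = convolve(tx, channel)
peak = max(range(len(rx)), key=lambda i: abs(rx[i]))
decoded = (peak - (len(channel) - 1)) // slot_len
print(decoded)
```

    The focusing happens because transmitting the time-reversed response turns the channel's action into an autocorrelation, whose peak marks the intended pulse position.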

  10. Fusion of metabolomics and proteomics data for biomarkers discovery: case study on the experimental autoimmune encephalomyelitis

    Directory of Open Access Journals (Sweden)

    Wijmenga Sybren S

    2011-06-01

    Background: Analysis of cerebrospinal fluid (CSF) samples holds great promise to diagnose neurological pathologies and gain insight into the molecular background of these pathologies. Proteomics and metabolomics methods provide invaluable information on the biomolecular content of CSF and thereby on the possible status of the central nervous system, including neurological pathologies. The combined information provides a more complete description of CSF content. Extracting the full combined information requires a combined analysis of the different datasets, i.e. fusion of the data. Results: A novel fusion method is presented and applied to proteomics and metabolomics data from a pre-clinical model of multiple sclerosis: an Experimental Autoimmune Encephalomyelitis (EAE) model in rats. The method follows a mid-level fusion architecture. The relevant information is extracted per platform using extended canonical variates analysis. The results are subsequently merged in order to be analyzed jointly. We find that the combined proteome and metabolome data allow for the efficient and reliable discrimination between healthy rats, peripherally inflamed rats, and rats at the onset of the EAE. The prediction accuracy reaches 89% on a test set. The important variables (metabolites and proteins) in this model are known to be linked to EAE and/or multiple sclerosis. Conclusions: Fusion of proteomics and metabolomics data is possible. The main issues of high dimensionality and missing values are overcome. The outcome leads to higher accuracy in prediction and a more exhaustive description of the disease profile. The biological interpretation of the involved variables validates our fusion approach.
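    A mid-level fusion pipeline of this general shape (per-platform feature extraction, then joint analysis of the merged scores) can be sketched on synthetic data. The class-mean-difference projection below is a crude stand-in for extended canonical variates analysis, and all data are simulated.

```python
import random
from statistics import mean

random.seed(2)

# Simulated two-platform study: 20 "metabolomics" and 30 "proteomics"
# features per sample; class 1 is shifted on the first three features of
# each platform. All numbers are synthetic.
def make_sample(label):
    met = [random.gauss(1.2 * label if j < 3 else 0.0, 1.0) for j in range(20)]
    pro = [random.gauss(1.2 * label if j < 3 else 0.0, 1.0) for j in range(30)]
    return (met, pro, label)

train = [make_sample(l) for l in (0, 1) for _ in range(30)]
test = [make_sample(l) for l in (0, 1) for _ in range(15)]

def direction(samples, block):
    """Per-platform discriminant direction (class-mean difference), a crude
    stand-in for the extended canonical variates extraction step."""
    nf = len(samples[0][block])
    m0 = [mean(s[block][j] for s in samples if s[2] == 0) for j in range(nf)]
    m1 = [mean(s[block][j] for s in samples if s[2] == 1) for j in range(nf)]
    return [b - a for a, b in zip(m0, m1)]

dirs = [direction(train, b) for b in (0, 1)]

def fused_scores(s):
    # Mid-level fusion: one score per platform, merged into a short vector.
    return [sum(v * w for v, w in zip(s[b], dirs[b])) for b in (0, 1)]

# Joint analysis on the fused representation: nearest class centroid.
centroids = {l: [mean(fused_scores(s)[b] for s in train if s[2] == l)
                 for b in (0, 1)] for l in (0, 1)}

def classify(s):
    f = fused_scores(s)
    return min((0, 1), key=lambda l: sum((u - v) ** 2
                                         for u, v in zip(f, centroids[l])))

accuracy = mean(classify(s) == s[2] for s in test)
print(accuracy)
```

    Reducing each platform to a short score vector before merging is what keeps the high-dimensionality problem mentioned in the conclusions manageable.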

  11. Unfolding linac photon spectra and incident electron energies from experimental transmission data, with direct independent validation.

    Science.gov (United States)

    Ali, E S M; McEwen, M R; Rogers, D W O

    2012-11-01

    In a recent computational study, an improved physics-based approach was proposed for unfolding linac photon spectra and incident electron energies from transmission data. In this approach, energy differentiation is improved by simultaneously using transmission data for multiple attenuators and detectors, and the unfolding robustness is improved by using a four-parameter functional form to describe the photon spectrum. The purpose of the current study is to validate this approach experimentally, and to demonstrate its application on a typical clinical linac. The validation makes use of the recent transmission measurements performed on the Vickers research linac of National Research Council Canada. For this linac, the photon spectra were previously measured using a NaI detector, and the incident electron parameters are independently known. The transmission data are for eight beams in the range 10-30 MV using thick Be, Al and Pb bremsstrahlung targets. To demonstrate the approach on a typical clinical linac, new measurements are performed on an Elekta Precise linac for 6, 10 and 25 MV beams. The different experimental setups are modeled using EGSnrc, with the newly added photonuclear attenuation included. For the validation on the research linac, the 95% confidence bounds of the unfolded spectra fall within the noise of the NaI data. The unfolded spectra agree with the EGSnrc spectra (calculated using independently known electron parameters) with RMS energy fluence deviations of 4.5%. The accuracy of unfolding the incident electron energy is shown to be ∼3%. A transmission cutoff of only 10% is suitable for accurate unfolding, provided that the other components of the proposed approach are implemented. For the demonstration on a clinical linac, the unfolded incident electron energies and their 68% confidence bounds for the 6, 10 and 25 MV beams are 6.1 ± 0.1, 9.3 ± 0.1, and 19.3 ± 0.2 MeV, respectively. The unfolded spectra for the clinical linac agree with the
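    The core of such an unfolding can be sketched as fitting a parametric spectrum to transmission measurements through a forward attenuation model. The two-parameter spectral form, toy attenuation curve, and grid search below are simplifications of the paper's four-parameter functional form and are not real materials data.

```python
import math

# Toy attenuation coefficient in 1/cm (illustrative shape, not real data):
# a falling low-energy term plus a slowly rising high-energy term.
def mu(E):  # E in MeV
    return 0.5 / E + 0.04 * E

energies = [0.5 * i for i in range(1, 21)]  # 0.5 .. 10 MeV grid

def spectrum(E, a, b):
    """Two-parameter stand-in for the paper's four-parameter spectral form."""
    return (E ** a) * math.exp(-b * E)

def transmission(t, a, b):
    """Spectrum-averaged transmission through thickness t (cm)."""
    num = sum(spectrum(E, a, b) * math.exp(-mu(E) * t) for E in energies)
    den = sum(spectrum(E, a, b) for E in energies)
    return num / den

thicknesses = [0, 2, 4, 8, 12, 16]
true_params = (1.2, 0.6)
data = [transmission(t, *true_params) for t in thicknesses]  # noiseless "measurements"

# Unfold: coarse grid search for the parameters minimizing squared error.
best = min(((a / 10, b / 20) for a in range(31) for b in range(1, 41)),
           key=lambda p: sum((transmission(t, *p) - d) ** 2
                             for t, d in zip(thicknesses, data)))
print(best)
```

    On this noiseless toy the search should recover the generating parameters; with real data, multiple attenuators and detectors constrain the fit as the paper describes.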

  12. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction, such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment, and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment
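    The assimilation step at the heart of this coupling can be illustrated for a single scalar parameter, where the generalized-least-squares update has a closed form. The candidate experiments and all numbers below are hypothetical.

```python
# One-parameter sketch of the assimilation step: a nuclear-data parameter
# with a prior variance is constrained by an experiment whose response has
# sensitivity s to that parameter and measurement variance exp_var.
def posterior_var(prior_var, s, exp_var):
    # Scalar generalized-least-squares / Bayesian update.
    return 1.0 / (1.0 / prior_var + s * s / exp_var)

prior_var = 0.05 ** 2  # 5% relative prior uncertainty (illustrative)

# Hypothetical candidate experiments: name -> (sensitivity, meas. variance).
candidates = {"integral_a": (0.8, 0.02 ** 2),
              "integral_b": (0.3, 0.01 ** 2),
              "critical_c": (1.2, 0.04 ** 2)}

# Pick the experiment that most reduces the posterior data uncertainty;
# a cost term could be added to the key to mimic the cost/benefit trade-off.
best = min(candidates, key=lambda k: posterior_var(prior_var, *candidates[k]))
for name, (s, v) in candidates.items():
    print(name, round(posterior_var(prior_var, s, v) ** 0.5, 4))
print("best:", best)
```

    The full method works with covariance matrices over many cross sections and responses, but the ranking idea (choose the experiment minimizing propagated posterior uncertainty) is the same.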

  13. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    Directory of Open Access Journals (Sweden)

    F. Terzuoli

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFD) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in a PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX) and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. The obtained results suggest the relevance of three-dimensional effects and stress the importance of suitable interface drag modelling.

  14. Testability of evolutionary game dynamics models based on experimental economics data

    CERN Document Server

    Wang, Yijia; Wang, Zhijian

    2016-01-01

    In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, yet evaluating the validity of such a model is not a trivial task. Here, we demonstrate an approach, considering the dynamic patterns of angular momentum and speed as the measurement variables, for evaluating the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental patterns, and then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One of the results is that, among all the non-parametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous method to ...
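    For reference, the Replicator dynamics being tested can be simulated directly, with cycling quantified by an angular-momentum-like accumulation about the RPS equilibrium. This sketch uses simple Euler integration and an arbitrary starting strategy; it illustrates the kind of theoretical pattern the paper compares against, not the paper's own computation.

```python
# Payoff matrix for Rock-Paper-Scissors (win = +1, loss = -1, tie = 0).
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

x = [0.5, 0.3, 0.2]   # arbitrary initial mixed strategy
c = [1 / 3] * 3       # centroid = the cycling equilibrium
dt, L = 0.01, 0.0

for _ in range(5000):
    # Replicator dynamics: dx_i = x_i * (fitness_i - average fitness).
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(xi * fi for xi, fi in zip(x, f))
    dx = [xi * (fi - avg) * dt for xi, fi in zip(x, f)]
    # Angular-momentum-like accumulation: z-component of (x - c) x dx in
    # the (x0, x1) projection of the strategy simplex.
    L += (x[0] - c[0]) * dx[1] - (x[1] - c[1]) * dx[0]
    x = [xi + dxi for xi, dxi in zip(x, dx)]

print(round(L, 4))
```

    A persistently nonzero L of constant sign is the cycling signature; comparing such simulated patterns with the measured ones is the goodness-of-fit test the record describes.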

  15. A comparison between Geant4 PIXE simulations and experimental data for standard reference samples

    Energy Technology Data Exchange (ETDEWEB)

    Francis, Z., E-mail: ziad.francis@gmail.com [Université Saint Joseph, Faculty of Science, Department of Physics, Beirut (Lebanon); The Open University, Faculty of Science, Department of Physical Sciences, Walton Hall, MK7 6AA Milton Keynes (United Kingdom); El Bast, M. [Ion Beam Analysis Laboratory, Lebanese Atomic Energy Commission, National Council for Scientific Research, Beirut (Lebanon); El Haddad, R. [Université Saint Joseph, Faculty of Science, Department of Mathematics, Beirut (Lebanon); Mantero, A. [Istituto Nazionale di Fisica Nucleare, sez. di Genova, via Dodecaneso 33, 16146 Genova (Italy); Incerti, S. [Université Bordeaux 1, CNRS/IN2P3, Centre d’Etudes Nucléaires de Bordeaux-Gradignan, CENBG, Chemin du Solarium, 33175 Gradignan (France); Ivanchenko, V. [Ecoanalytica, Moscow State University, 119899 Moscow (Russian Federation); Geant4 Associates International Ltd., Hebden Bridge (United Kingdom); El Bitar, Z. [Institut Pluridisciplinaire Hubert Curien, CNRS/IN2P3, 67037 Strasbourg Cedex (France); Champion, C. [Université Bordeaux 1, CNRS/IN2P3, Centre d’Etudes Nucléaires de Bordeaux-Gradignan, CENBG, Chemin du Solarium, 33175 Gradignan (France); Bernal, M.A. [Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas-UNICAMP, SP 13083-859 (Brazil); Roumie, M. [Ion Beam Analysis Laboratory, Lebanese Atomic Energy Commission, National Council for Scientific Research, Beirut (Lebanon)

    2013-12-01

    The Geant4 PIXE de-excitation processes are used to simulate proton beam interactions with sample materials of known composition. Simulations involve four mono-elemental materials (Cu, Fe, Si and Al) and three relatively complex materials: stainless steel, phosphor bronze and the basalt BE-N reference material, composed of 25 different elements. The simulation results are compared to experimental spectra acquired for real samples analyzed using 3 MeV incident protons delivered by a tandem ion accelerator. Data acquisition was performed using a Si(Li) detector, and an aluminum funny filter was added for the last three samples, depending on the configuration, to reduce the noise and obtain a clear spectrum. The results show good agreement between simulations and measurements for the different samples.

  16. Actinides sorption onto hematite. Experimental data, surface complexation modeling and linear free energy relationship

    Energy Technology Data Exchange (ETDEWEB)

    Romanchuk, Anna Y.; Kalmykov, Stephan N. [Lomonosov Moscow State Univ., Moscow (Russian Federation). Dept. of Chemistry

    2014-07-01

    The sorption of actinides in different valence states - Am(III), Th(IV), Np(V) and U(VI) - onto hematite has been revisited, with special emphasis on the equilibrium constants of formation of surface species. The experimental sorption data have been treated using surface complexation modeling, from which a set of new values of the equilibrium constants was obtained. Formation of inner-sphere monodentate surface species adequately describes the pH-sorption edges for actinide ions, indicating an ionic, electrostatic nature of bonding with small or no covalency contribution. A linear free energy relationship representing the correlation between the hydrolysis constants and the surface complexation constants has been developed for various cations, including K(I), Li(I), Na(I), Ag(I), Tl(I), Sr(II), Cu(II), Co(II), La(III), Eu(III), Ga(III), Am(III), Th(IV), Np(V) and U(VI). (orig.)
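    The linear free energy relationship amounts to an ordinary least-squares fit of surface complexation constants against hydrolysis constants. The (x, y) pairs below are placeholders lying near an assumed line, standing in for the cation series; they are not the paper's fitted values.

```python
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    xbar, ybar = mean(xs), mean(ys)
    a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / \
        sum((xi - xbar) ** 2 for xi in xs)
    return a, ybar - a * xbar

# Placeholder (log hydrolysis constant, log surface complexation constant)
# pairs standing in for the cation series; not values from the paper.
pairs = [(-13.8, -2.0), (-11.1, -0.9), (-9.0, 0.1), (-4.3, 2.2), (-2.2, 3.1)]
a, b = linear_fit(*zip(*pairs))
print(round(a, 2), round(b, 2))
```

    Once calibrated on cations with well-measured constants, such a line lets one estimate a surface complexation constant from a hydrolysis constant alone.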

  17. Comparison of SRIM, MCNPX and GEANT simulations with experimental data for thick Al absorbers.

    Science.gov (United States)

    Evseev, Ivan G; Schelin, Hugo R; Paschuk, Sergei A; Milhoretto, Edney; Setti, João A P; Yevseyeva, Olga; de Assis, Joaquim T; Hormaza, Joel M; Díaz, Katherin S; Lopes, Ricardo T

    2010-01-01

    Proton computerized tomography deals with relatively thick targets like the human head or trunk. In this case, precise analytical calculation of the proton final energy is a rather complicated task, so Monte Carlo simulation stands out as a solution. We used the GEANT4.8.2 code to calculate the proton final energy spectra after passing through a thick Al absorber and compared them with experimental data taken under the same conditions. The ICRU49, Ziegler85 and Ziegler2000 models from the low energy extension pack were used. The results were also compared with SRIM2008 and MCNPX2.4 simulations, and with solutions of the Boltzmann transport equation in the Fokker-Planck approximation.

  18. Beauty photoproduction at HERA. k{sub T}-factorization versus experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Lipatov, A.V.; Zotov, N.P. [M.V. Lomonosov Moscow State Univ., Moscow (Russian Federation). D.V. Skobeltsyn Institute of Nuclear Physics

    2006-05-15

    We present calculations of beauty photoproduction at the HERA collider in the framework of the k{sub T}-factorization approach. Both direct and resolved photon contributions are taken into account. The unintegrated gluon densities in a proton and in a photon are obtained from the full CCFM and unified BFKL-DGLAP evolution equations, as well as from the Kimber-Martin-Ryskin prescription. We investigate different production rates (both inclusive and associated with hadronic jets) and compare our theoretical predictions with the recent experimental data taken by the H1 and ZEUS collaborations. Special attention is paid to the x{sup obs}{sub {gamma}} variable, which is sensitive to the relative contributions to the beauty production cross section. (Orig.)

  19. Fit of a sum of exponential functions to experimental data points

    Science.gov (United States)

    Monjoie, F. S.; Garnir, H. P.

    1993-01-01

    Expfit is a program aimed at the analysis of light decay curves in beam-foil spectroscopy experiments. It fits, using the least-squares method, a sum of exponential functions to experimental data points. A new technique, based on statistical tests, has been implemented to find the best number of parameters, so that in most cases the fit is fully automated. However, the user may give the initial parameters and determine the number of parameters to be adjusted, or let Expfit find the best number of needed parameters. Expfit can print a report presenting the results of the fit in tabular and graphical format. Thanks to its graphical interface, built following the Apple Macintosh human interface guidelines, the program is easy to use.
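    The linearizable single-exponential case shows the basic fitting step such a program builds on (Expfit itself fits sums of exponentials and chooses the number of terms with statistical tests). The decay parameters below are arbitrary and the data are noiseless.

```python
import math

# Synthetic noiseless decay curve with arbitrary amplitude and lifetime.
A_true, tau_true = 500.0, 4.0
ts = [0.5 * i for i in range(20)]
ys = [A_true * math.exp(-t / tau_true) for t in ts]

# One-exponential fit by linearization: ln y = ln A - t / tau, then OLS.
ln_ys = [math.log(v) for v in ys]
n = len(ts)
tbar = sum(ts) / n
lbar = sum(ln_ys) / n
slope = sum((t - tbar) * (l - lbar) for t, l in zip(ts, ln_ys)) / \
        sum((t - tbar) ** 2 for t in ts)
A_fit = math.exp(lbar - slope * tbar)
tau_fit = -1.0 / slope

print(round(A_fit, 1), round(tau_fit, 3))
```

    Sums of exponentials are not linearizable this way, which is why multi-component fits need nonlinear least squares plus the statistical tests the record mentions to decide how many components the data support.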

  20. Comparison of GEANT4 very low energy cross section models with experimental data in water

    DEFF Research Database (Denmark)

    Incerti, S; Ivanchenko, A; Karamitros, M

    2010-01-01

The GEANT4 general-purpose Monte Carlo simulation toolkit is able to simulate physical interaction processes of electrons, hydrogen and helium atoms with charge states (H0, H+) and (He0, He+, He2+), respectively, in liquid water, the main component of biological systems, down to the electron-volt regime and the submicrometer scale, providing GEANT4 users with the so-called "GEANT4-DNA" physics models suitable for microdosimetry simulation applications. The corresponding software has recently been re-engineered in order to provide GEANT4 users with a coherent and unique approach to the simulation of electromagnetic interactions within the GEANT4 toolkit framework (since GEANT4 version 9.3 beta). This work presents a quantitative comparison of these physics models with a collection of experimental data in water collected from the literature.

  1. A Study of Advanced Image Processing Techniques on Experimental SWIFT Data

    Science.gov (United States)

    Trujillo, Christopher

    2017-06-01

Accurately tracking the position of explosive-induced shock waves is a critical method for characterizing high explosive (HE) performance. The shock wave image framing technique (SWIFT) has proven to be a successful diagnostic tool that utilizes ultra-high-speed imaging to capture time-series images of explosively driven shock waves propagating through transparent media. Common edge-detection algorithms, including Sobel, Canny, and Prewitt, tend to be susceptible to background noise and require noise-reduction preprocessing that can alter the position of edge boundaries. In this paper, results produced by applying advanced image-processing techniques to experimental SWIFT data show that shock wave position can be accurately detected and tracked while maintaining robustness to background image noise.
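The noise sensitivity of a gradient-based detector such as Sobel, and the usual smoothing workaround, can be illustrated on a synthetic step "shock front". This is a generic sketch, not the paper's algorithm; the frame size and noise level are made up.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
# Synthetic front: a vertical step edge at column 32 of a 64x64 frame.
frame = np.zeros((64, 64))
frame[:, 32:] = 1.0
noisy = frame + rng.normal(0, 0.3, frame.shape)

def edge_position(img):
    # Sobel gradient magnitude; the edge is taken as the strongest column response.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    return int(np.argmax(mag.sum(axis=0)))

clean_pos = edge_position(frame)
# Gaussian preprocessing suppresses the noise but blurs the edge it localizes.
noisy_pos = edge_position(ndimage.gaussian_filter(noisy, sigma=2))
```

Running Sobel on the raw noisy frame instead of the smoothed one is what makes the column-wise peak unreliable, which is the susceptibility the abstract refers to.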

  2. Experimental investigation of effects of jet decay rate on jet-induced pressures on a flat plate: Tabulated data

    Science.gov (United States)

    Kuhlman, J. M.; Ousterhout, D. S.; Warcup, R. W.

    1978-01-01

Tabular data are presented from an experimental study of the effects of jet decay rate on the jet-induced pressure distribution on a flat plate, for a single jet issuing at a right angle to the plate into a uniform crossflow. The data are presented in four sections: (1) the static nozzle calibration data; (2) the plate surface static pressure data and integrated loads; (3) the jet centerline trajectory data; and (4) the centerline dynamic pressure data.

  3. Thermodynamic analysis of chromium solubility data in liquid lithium containing nitrogen: Comparison between experimental data and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Krasin, Valery P., E-mail: vkrasin@rambler.ru; Soyustova, Svetlana I.

    2015-10-15

A mathematical formalism for describing solute interactions in dilute solutions of chromium and nitrogen in liquid lithium has been applied to calculate the temperature dependence of chromium solubility in liquid lithium at various nitrogen contents. It is shown that the derived equations help clarify the relationship between thermodynamic properties and local ordering in the Li–Cr–N melt. Comparison between the theory and solubility data reported in the literature for chromium in nitrogen-contaminated liquid lithium made it possible to explain why the experimental semi-logarithmic plot of chromium content in liquid lithium as a function of reciprocal temperature deviates from a straight line. - Highlights: • The activity coefficient of chromium in the ternary melt can be obtained by integrating the Gibbs–Duhem equation. • In lithium with a high nitrogen content, the logarithm of chromium solubility as a function of reciprocal temperature is essentially nonlinear. • At temperatures below a certain threshold, the dissolution of chromium in lithium is controlled by the equilibrium nitrogen concentration required for the formation of the ternary nitride Li{sub 9}CrN{sub 5} at a given temperature.

  4. Comparison of various structural damage tracking techniques with unknown excitations based on experimental data

    Science.gov (United States)

    Huang, Hongwei; Yang, Jann N.; Zhou, Li

    2009-03-01

    An early detection of structural damages is critical for the decision making of repair and replacement maintenance in order to guarantee a specified structural reliability. Consequently, the structural damage detection, based on vibration data measured from the structural health monitoring (SHM) system, has received considerable attention recently. The traditional time-domain analysis techniques, such as the least square estimation (LSE) method and the extended Kalman filter (EKF) approach, require that all the external excitations (inputs) be available, which may not be the case for some SHM systems. Recently, these two approaches have been extended to cover the general case where some of the external excitations (inputs) are not measured, referred to as the LSE with unknown inputs (LSE-UI) and the EKF with unknown inputs (EKF-UI). Also, new analysis methods, referred to as the sequential non-linear least-square estimation with unknown inputs and unknown outputs (SNLSE-UI-UO) and the quadratic sum-square error with unknown inputs (QSSE-UI), have been proposed for the damage tracking of structures when some of the acceleration responses are not measured and the external excitations are not available. In this paper, these newly proposed analysis methods will be compared in terms of accuracy, convergence and efficiency, for damage identification of structures based on experimental data obtained through a series of experimental tests using a small-scale 3-story building model with white noise excitation. The capability of the LSE-UI, EKF-UI, SNLSE-UI-UO and QSSE-UI approaches in tracking the structural damages will be demonstrated.
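The least-squares side of these trackers can be illustrated with an ordinary recursive least-squares (RLS) estimator following a stiffness drop. This is a generic sketch on synthetic scalar data, not the LSE-UI/EKF-UI formulations compared in the paper; all signals and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
v = rng.normal(0, 1, n)  # "measured" velocity (synthetic)
x = rng.normal(0, 1, n)  # "measured" displacement (synthetic)
c_true = 0.5
k_true = np.where(np.arange(n) < n // 2, 10.0, 7.0)  # stiffness drops: damage at midpoint
y = c_true * v + k_true * x + rng.normal(0, 0.01, n)  # restoring force, with noise

# RLS with forgetting factor, tracking theta = [damping c, stiffness k].
theta = np.zeros(2)
P = np.eye(2) * 1e3
lam = 0.95
k_history = []
for i in range(n):
    phi = np.array([v[i], x[i]])
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y[i] - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    k_history.append(float(theta[1]))
```

The tracked stiffness settling at a lower value after the midpoint is the damage signature all four methods in the paper look for, each under different assumptions about which inputs and outputs are measured.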

  5. Examining dynamic interactions among experimental factors influencing hydrologic data assimilation with the ensemble Kalman filter

    Science.gov (United States)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.

    2017-11-01

    The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
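The stochastic-perturbation step at the heart of the EnKF can be sketched for a scalar state observed directly. The ensemble size and error magnitudes below are illustrative assumptions, two of the very experimental factors the RDAS study varies, not its actual settings.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100                  # ensemble size (an experimental factor in the study)
obs_err = 0.5            # observation perturbation magnitude (another factor)
y_obs = 2.0 + rng.normal(0, obs_err)

# Forecast ensemble from a spread-out prior.
ens = rng.normal(1.0, 1.0, N)

# Perturbed-observation EnKF analysis step (H = 1 for a directly observed state).
y_pert = y_obs + rng.normal(0, obs_err, N)  # stochastic observation perturbations
Pf = np.var(ens, ddof=1)                    # ensemble forecast variance
K = Pf / (Pf + obs_err**2)                  # Kalman gain
analysis = ens + K * (y_pert - ens)
```

The interaction the paper examines arises because N controls how well Pf is estimated while the perturbation magnitudes control K, so neither can be tuned in isolation.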

  6. Development of reactive force fields using ab initio molecular dynamics simulation minimally biased to experimental data

    Science.gov (United States)

    Chen, Chen; Arntsen, Christopher; Voth, Gregory A.

    2017-10-01

    Incorporation of quantum mechanical electronic structure data is necessary to properly capture the physics of many chemical processes. Proton hopping in water, which involves rearrangement of chemical and hydrogen bonds, is one such example of an inherently quantum mechanical process. Standard ab initio molecular dynamics (AIMD) methods, however, do not yet accurately predict the structure of water and are therefore less than optimal for developing force fields. We have instead utilized a recently developed method which minimally biases AIMD simulations to match limited experimental data to develop novel multiscale reactive molecular dynamics (MS-RMD) force fields by using relative entropy minimization. In this paper, we present two new MS-RMD models using such a parameterization: one which employs water with harmonic internal vibrations and another which uses anharmonic water. We show that the newly developed MS-RMD models very closely reproduce the solvation structure of the hydrated excess proton in the target AIMD data. We also find that the use of anharmonic water increases proton hopping, thereby increasing the proton diffusion constant.

  7. Experimental Comparisons of Entity-Relationship and Object Oriented Data Models

    Directory of Open Access Journals (Sweden)

    Peretz Shoval

    1997-05-01

Full Text Available The extended entity-relationship (EER) model is being "threatened" by the object-oriented (OO) approach, which is penetrating into the areas of systems analysis and data modeling. Which of the two data models is better for data modeling is still an open question. We address the question by conducting experimental comparisons between the models. The results of our experiments reveal that: (a) schema comprehension: ternary relationships are significantly easier to comprehend in the EER model than in the OO model; (b) the EER model surpasses the OO model for designing unary and ternary relationships; (c) time: it takes less time to design EER schemas; (d) preferences: the EER model is preferred by designers. We conclude that even if the objective is to implement an OO database schema, the following procedure is still recommended: (1) create an EER conceptual schema, (2) map it to an OO schema, and (3) augment the OO schema with behavioral constructs that are unique to the OO approach.

  8. A study of the suitability of autoencoders for preprocessing data in breast cancer experimentation.

    Science.gov (United States)

    Macías-García, Laura; Luna-Romera, José María; García-Gutiérrez, Jorge; Martínez-Ballesteros, María; Riquelme-Santos, José C; González-Cámpora, Ricardo

    2017-08-01

    Breast cancer is the most common cause of cancer death in women. Today, post-transcriptional protein products of the genes involved in breast cancer can be identified by immunohistochemistry. However, this method has problems arising from the intra-observer and inter-observer variability in the assessment of pathologic variables, which may result in misleading conclusions. Using an optimal selection of preprocessing techniques may help to reduce observer variability. Deep learning has emerged as a powerful technique for any tasks related to machine learning such as classification and regression. The aim of this work is to use autoencoders (neural networks commonly used to feed deep learning architectures) to improve the quality of the data for developing immunohistochemistry signatures with prognostic value in breast cancer. Our testing on data from 222 patients with invasive non-special type breast carcinoma shows that an automatic binarization of experimental data after autoencoding could outperform other classical preprocessing techniques (such as human-dependent or automatic binarization only) when applied to the prognosis of breast cancer by immunohistochemical signatures. Copyright © 2017 Elsevier Inc. All rights reserved.
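As a rough illustration of the autoencode-then-binarize preprocessing idea, a minimal single-hidden-layer autoencoder can be written in plain numpy. This is not the authors' architecture or data: the layer sizes, learning rate, and toy inputs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy stand-in for immunohistochemistry marker scores: 100 samples, 8 features in [0, 1].
X = rng.random((100, 8))

# Minimal 8 -> 3 -> 8 autoencoder with sigmoid output, trained by
# full-batch gradient descent on mean squared reconstruction error.
W1 = rng.normal(0, 0.1, (8, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.1, (3, 8)); b2 = np.zeros(8)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, sig(H @ W2 + b2)

losses, lr = [], 0.5
for _ in range(500):
    H, Xhat = forward(X)
    err = Xhat - X
    losses.append(float((err**2).mean()))
    # Backprop through the sigmoid output and tanh hidden layer.
    d_out = err * Xhat * (1 - Xhat)
    dW2, db2 = H.T @ d_out, d_out.sum(0)
    d_hid = (d_out @ W2.T) * (1 - H**2)
    dW1, db1 = X.T @ d_hid, d_hid.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g / len(X)

# "Automatic binarization after autoencoding": threshold the reconstruction.
_, Xhat = forward(X)
X_bin = (Xhat >= 0.5).astype(int)
```

The point of the pipeline is that the binarization is applied to the denoised reconstruction rather than to the raw, observer-dependent scores.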

  9. Anatomical landmarks for registration of experimental image data to volumetric rodent brain atlasing templates.

    Science.gov (United States)

    Sergejeva, Marina; Papp, Eszter A; Bakker, Rembrandt; Gaudnek, Manuel A; Okamura-Oho, Yuko; Boline, Jyl; Bjaalie, Jan G; Hess, Andreas

    2015-01-30

Assignment of anatomical reference is a key step in the integration of the rapidly expanding collection of rodent brain data. Landmark-based registration facilitates spatial anchoring of diverse types of data not suitable for automated methods operating on voxel-based image information. Here we propose a standardized set of anatomical landmarks for registration of whole-brain imaging datasets from the mouse and rat brain, and in particular for integration of experimental image data in Waxholm Space (WHS). Sixteen internal landmarks of the C57BL/6J mouse brain have been reliably identified: by different individuals, independent of their experience in anatomy; across different MRI contrasts (T1, T2, T2(*)) and other modalities (Nissl histology and block-face anatomy); in different specimens; in different slice acquisition angles; and in different image resolutions. We present a registration example between T1-weighted MRI and the mouse WHS template using these landmarks, reaching fairly high accuracy. Landmark positions identified in the mouse WHS template are shared through the Scalable Brain Atlas, accompanied by graphical and textual guidelines for locating each landmark. We identified 14 of the 16 landmarks in the WHS template for the Sprague Dawley rat. This landmark set withstands substantial differences in acquisition angle and imaging modality and is less vulnerable to subjectivity. It facilitates registration of multimodal 3D brain data to standard coordinate spaces for the mouse and rat brain, taking a step toward the creation of a common rodent reference system and raising data sharing to a qualitatively higher level. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Experimental energy consumption of Frame Slotted ALOHA and Distributed Queuing for data collection scenarios.

    Science.gov (United States)

    Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier

    2014-07-24

    Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. Contrarily, DQ does not require to know the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of a schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.

  11. Experimental Energy Consumption of Frame Slotted ALOHA and Distributed Queuing for Data Collection Scenarios

    Directory of Open Access Journals (Sweden)

    Pere Tuset-Peiro

    2014-07-01

Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. Contrarily, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.
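The state-machine energy accounting described in both records reduces to a weighted sum: energy per delivered packet is the mean energy per visit to each state times the mean number of visits. The per-state figures below are invented for illustration, not the measured values from the paper.

```python
# Hypothetical per-state profile: (energy per visit in mJ, mean visits per packet).
fsa_profile = {
    "sleep": (0.02, 10.0),
    "rx":    (0.15,  4.0),
    "tx":    (0.30,  1.5),  # > 1 visit reflects retransmissions after collisions
}

def energy_per_packet(profile):
    return sum(e * n for e, n in profile.values())

fsa = energy_per_packet(fsa_profile)
# DQ's collision-free schedule removes data-packet retransmissions in this toy model.
dq = energy_per_packet({**fsa_profile, "tx": (0.30, 1.0)})
savings = (fsa - dq) / fsa
```

In the paper the per-visit energies come from current measurements on real hardware and the visit counts from protocol traces; the comparison between FSA and DQ then falls out of exactly this kind of sum.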

  12. Evaluation of CHF experimental data for non-square lattice 7-rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Hyun; Yoo, Y. J.; Kim, K. K.; Zee, S. Q

    2001-01-01

A series of CHF experiments was conducted on 7-rod hexagonal test bundles in order to investigate the CHF characteristics of self-sustained square finned (SSF) rod bundles. The experiments were performed in the freon loop and water loop located at IPPE in Russia, and 609 freon-12 data points and 229 water data points were obtained from 7 kinds of test bundles classified by combinations of heated length and axial/radial power distribution. An evaluation of four representative CHF correlations shows that the EPRI-1 correlation has good prediction capability for SSF test bundles. The inlet-parameter CHF correlation suggested by IPPE gives a mean and standard deviation of P/M for uniformly heated test bundles of 1.002 and 0.049, respectively. In spite of its excellent accuracy, the correlation has a discontinuity at the boundary between the low-velocity and high-velocity conditions. KAERI's inlet-parameter correlation eliminates this defect by introducing a complete-evaporation model at low-velocity conditions, and gives a mean and standard deviation of P/M of 0.095 and 0.062, respectively, for 496 uniformly heated data points. The mean/standard deviation of the local-parameter CHF correlations suggested by IPPE and KAERI are evaluated as 1.023/0.178 and 1.002/0.158, respectively. The inlet-parameter correlation developed from uniformly heated test bundles tends to under-predict CHF by about 3% for axially non-uniformly heated test bundles. On the other hand, the local-parameter correlation shows large scatter in P/M and requires re-optimization for non-uniform axial power distributions. Analysis of the experimental data reveals that the axial power shape correction model suggested by IPPE is applicable to the inlet-parameter correlations. For the test bundle with a radially non-uniform power distribution, physically unexpected results are obtained at some experimental conditions. In addition

  13. An Experimental Seismic Data and Parameter Exchange System for Interim NEAMTWS

    Science.gov (United States)

    Hanka, W.; Hoffmann, T.; Weber, B.; Heinloo, A.; Hoffmann, M.; Müller-Wrana, T.; Saul, J.

    2009-04-01

In 2008, GFZ Potsdam started to operate its global earthquake monitoring system as an experimental seismic background data centre for the interim NEAMTWS (NE Atlantic and Mediterranean Tsunami Warning System). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project, was extended to test the export and import of individual processing results within a cluster of SC3 systems. The initiated NEAMTWS SC3 cluster presently consists of the 24/7 seismic services at IMP, IGN, LDG/EMSC and KOERI, whereas INGV and NOA are still pending. The GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) was substantially extended by many stations from Western European countries, optimizing the station distribution for NEAMTWS purposes. To complement the public seismic network (VEBSN - Virtual European Broadband Seismic Network), some attached centres provided additional private stations for NEAMTWS usage. In parallel to data collection over the Internet, the GFZ VSAT hub for secured data collection from the EuroMED GEOFON and NEAMTWS backbone network stations became operational, and the first data links were established. In 2008 the experimental system could already prove its performance, since a number of relevant earthquakes occurred in the NEAMTWS area. The results are very promising in terms of speed, as the automatic alerts (reliable solutions based on a minimum of 25 stations, disseminated by emails and SMS) were issued within 2 1/2 to 4 minutes for Greece and 5 minutes for Iceland. They are also promising in terms of accuracy, since epicentre coordinates, depth and magnitude estimates were sufficiently accurate from the very beginning, usually do not differ substantially from the final solutions, and provide a good starting point for the operations of the interim NEAMTWS. However, although an automatic seismic system is a good first step, 24/7 manned RTWCs are mandatory for regular manual verification.

  14. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: index of experimental data

    Energy Technology Data Exchange (ETDEWEB)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-07-04

    Indexes to the neutron-induced interaction data in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has two arrangements: isotope (ZA) order and reaction-number order.

  15. Experimental investigation of auroral generator regions with conjugate Cluster and FAST data

    Directory of Open Access Journals (Sweden)

    O. Marghitu

    2006-03-01

Here and in the companion paper, Hamrin et al. (2006), we present experimental evidence for the crossing of auroral generator regions, based on conjugate Cluster and FAST data. To our knowledge, this is the first investigation that concentrates on the evaluation of the power density, E·J, in auroral generator regions using in-situ measurements. The Cluster data we discuss were collected within the Plasma Sheet Boundary Layer (PSBL) during a quiet magnetospheric interval, as judged from the geophysical indices, and several minutes before the onset of a small substorm, as indicated by the FAST data. Even at quiet times, the PSBL is an active location: electric fields are associated with plasma motion caused by the dynamics of the plasma-sheet/lobe interface, while electrical currents are induced by pressure gradients. In the example we show, these ingredients do indeed sustain the conversion of mechanical energy into electromagnetic energy, as proved by the negative power density, E·J<0. The plasma characteristics in the vicinity of the generator regions indicate a complicated 3-D wavy structure of the plasma sheet boundary. Consistent with this structure, we suggest that at least part of the generated electromagnetic energy is carried away by Alfvén waves, to be dissipated in the ionosphere near the polar cap boundary. Such a scenario is supported by the FAST data, which show energetic electron precipitation conjugate with the generator regions crossed by Cluster. A careful examination of the conjunction timing contributes to the validation of the generator signatures.

  16. Ship wakes and spectrograms: mathematical modelling and experimental data for finite-depth flows

    Science.gov (United States)

    McCue, Scott; Pethiyagoda, Ravindra; Moroney, Timothy; Macfarlane, Gregor; Binns, Jonathan

    2017-11-01

    We are concerned with how properties of a ship wake can be extracted from surface height data collected at a single point as the ship travels past. The tool we use is a spectrogram, which is a heat map that visualises the time-dependent frequency spectrum of the surface height signal. In this talk, the focus will be on presenting the theoretical framework which involves an idealised mathematical model with a pressure distribution applied to the surface. A geometric argument based on linear water wave theory provides encouraging results for both subcritical and supercritical flow regimes. We then summarise some recent findings obtained by comparing our analysis to experimental data collected at the Australian Maritime College for various sailing speeds and hull shapes*. Our work has the potential to inform ship design, the detection of irregular vessels, and how coastal damage is attributed to specific vessels in shipping channels. We acknowledge support of the Australian Research Council via the Discovery Project DP140100933.
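The spectrogram itself is straightforward to compute from a single-point surface-height record. The signal below is a toy chirp standing in for the frequency sweep a passing vessel produces at a fixed probe; the sampling rate and signal parameters are assumptions, not the AMC experimental data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                       # wave-probe sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
# Toy surface-height signal: an upward chirp under a Gaussian envelope,
# mimicking a wake signature sweeping through the probe; not real wake physics.
eta = np.sin(2 * np.pi * (0.5 + 0.02 * t) * t) * np.exp(-((t - 30) / 15) ** 2)

# The spectrogram is the time-frequency "heat map" described in the abstract.
f, tt, Sxx = spectrogram(eta, fs=fs, nperseg=256, noverlap=192)
# A simple ridge extraction: follow the peak frequency in each time column.
ridge = f[np.argmax(Sxx, axis=0)]
```

In the paper the geometry of such ridges (their slopes and intercepts) is what linear water-wave theory predicts as a function of sailing speed and water depth.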

  17. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-10-19

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  18. Performance Evaluation of Large Aperture 'Polished Panel' Optical Receivers Based on Experimental Data

    Science.gov (United States)

    Vilnrotter, Victor

    2013-01-01

Recent interest in hybrid RF/optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, and compared with the ultimate quantum bound on detection performance for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.
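Photon-counting PPM detection reduces to picking the slot with the most counts, which is the maximum-likelihood rule for Poisson statistics. The sketch below uses assumed signal and background levels and a flat (single-pixel) detector, not the measured DSS-13 PSF distributions.

```python
import numpy as np

rng = np.random.default_rng(7)
M = 16                 # PPM order: one pulsed slot among 16 (assumed)
Ks, Kb = 20.0, 1.0     # mean signal and background photons per slot (assumed)
trials = 2000

sent = rng.integers(0, M, trials)
# Poisson photon counts per slot: background everywhere, signal added in the pulsed slot.
counts = rng.poisson(Kb, (trials, M))
counts[np.arange(trials), sent] += rng.poisson(Ks, trials)

# Maximum-likelihood PPM detection with photon counting: choose the max-count slot.
decided = counts.argmax(axis=1)
symbol_error_rate = float(np.mean(decided != sent))
```

Spreading the counts over a detector array according to a measured PSF, as in the paper, changes the per-pixel statistics but not this argmax decision structure.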

  19. Pion distribution amplitude extracted from the experimental data with the local duality sum rule

    Science.gov (United States)

    Guo, Ze-Kun; Liu, Jueping

    2008-10-01

The photon-to-pion transition form factor is investigated using the renormalon-based twist-four pion distribution amplitude (DA) in the framework of the light-cone local-duality QCD sum rule, which, with suitable parameters, is insensitive to the higher-order Gegenbauer coefficients. With a careful determination of the insertion parameters, so that the contribution from the higher-order Gegenbauer expansion is suppressed, the best-fit central values of the first two nontrivial Gegenbauer coefficients of the pion distribution amplitude are extracted from the CLEO data: a2(1 GeV2)=0.145±0.055 and a4(1 GeV2)=-(0.125±0.085). The rescaled photon-to-pion transition form factor with our best-fit parameters agrees very well with both the CELLO data and the prediction of the interpolation formula over the whole experimentally accessible region of momentum transfer. The shape of the pion distribution amplitude based on the two-parameter model favors a camel-like profile, in which the near-end-point values are suppressed relative to the asymptotic DA, and approximately satisfies the midpoint constraint from light-cone sum rules.
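For reference, the two-parameter Gegenbauer model fitted in this record has the standard truncated-expansion form (conventions may differ slightly from the paper's):

```latex
\varphi_\pi(x,\mu^2) \;=\; 6\,x(1-x)\left[\,1 \;+\; a_2(\mu^2)\,C_2^{3/2}(2x-1) \;+\; a_4(\mu^2)\,C_4^{3/2}(2x-1)\right],
\qquad \int_0^1 \varphi_\pi(x,\mu^2)\,dx = 1,
```

where $C_n^{3/2}$ are Gegenbauer polynomials and the fitted coefficients are the a2(1 GeV2) = 0.145 ± 0.055 and a4(1 GeV2) = −(0.125 ± 0.085) quoted above; a negative a4 is what produces the camel-like (double-humped) shape.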

  20. The modelling of condensation in horizontal tubes and the comparison with experimental data

    Directory of Open Access Journals (Sweden)

    Bryk Rafał

    2017-01-01

The condensation in horizontal tubes plays an important role in determining the operation mode of passive safety systems of modern nuclear power plants. In this paper, two different approaches to modelling this phenomenon are compared and verified against experimental data. The first approach is based on the flow regime map developed by Tandon; depending on the regime, the heat transfer coefficient is calculated according to the corresponding semi-empirical correlation. The second approach uses a general, fully empirical correlation proposed by Shah. Both models are developed using the object-oriented, equation-based Modelica language and the open-source OpenModelica environment. The results are compared with data obtained during a large-scale integral test simulating a Loss of Coolant Accident scenario, performed at the dedicated Integral Test Facility Karlstein (INKA), which was built at the Components Testing Department of AREVA in Karlstein, Germany. The INKA facility was designed to test the performance of the passive safety systems of KERENA, the new AREVA boiling water reactor design. INKA represents the KERENA containment with a volume scaling of 1:24; component heights and levels above the ground are at full scale. The comparison with simulation results shows good agreement.
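The second approach is compact enough to sketch directly. The formula below is the commonly cited Shah (1979) condensation correlation with a Dittus-Boelter liquid-only coefficient; treat it as an assumed form rather than the paper's exact implementation, and note that all property values are illustrative, not INKA conditions.

```python
def shah_htc(G, x, D, k_l, mu_l, pr_l, p_red):
    """Two-phase condensation heat transfer coefficient, W/(m^2 K).

    G: mass flux [kg/(m^2 s)], x: vapour quality, D: tube diameter [m],
    k_l, mu_l, pr_l: liquid conductivity, viscosity, Prandtl number,
    p_red: reduced pressure p/p_crit.
    """
    re_lo = G * D / mu_l                              # liquid-only Reynolds number
    h_lo = 0.023 * re_lo**0.8 * pr_l**0.4 * k_l / D   # Dittus-Boelter, liquid only
    # Shah (1979) two-phase multiplier (assumed form).
    return h_lo * ((1 - x)**0.8
                   + 3.8 * x**0.76 * (1 - x)**0.04 / p_red**0.38)

# Illustrative water-like liquid properties at moderate pressure (not INKA data).
h_mid = shah_htc(G=200.0, x=0.5, D=0.02, k_l=0.68, mu_l=1.5e-4, pr_l=1.0, p_red=0.05)
```

Because it needs only bulk properties and quality, this correlation is easy to express as a Modelica equation, whereas the Tandon approach first has to select a regime from the flow map.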

  1. NSAIDs in the Acute Treatment of Migraine: A Review of Clinical and Experimental Data

    Directory of Open Access Journals (Sweden)

    Arpad Pardutz

    2010-06-01

Migraine is a common disabling neurological disorder with a serious socioeconomic burden. By blocking cyclooxygenase, nonsteroidal anti-inflammatory drugs (NSAIDs) decrease the synthesis of prostaglandins, which are involved in the pathophysiology of migraine headaches. Despite the introduction more than a decade ago of a new class of migraine-specific drugs with superior efficacy, the triptans, NSAIDs remain the most commonly used therapies for the migraine attack. This is due in part to their wide availability as over-the-counter drugs and their pharmaco-economic advantages, but also to a favorable efficacy/side-effect profile, at least in attacks of mild and moderate intensity. We summarize here both the experimental data showing that NSAIDs are able to influence several pathophysiological facets of the migraine headache, and the clinical studies providing evidence for the therapeutic efficacy of various subclasses of NSAIDs in migraine therapy. Taken together, these data indicate that there are several targets for NSAIDs in migraine pathophysiology and that, on the spectrum of clinical potency, acetaminophen is at the lower end while ibuprofen is among the most effective drugs. Acetaminophen and aspirin excluded, comparative trials between the other NSAIDs are missing. Since evidence-based criteria are scarce, the selection of an NSAID should take into account proof and degree of efficacy, rapid GI absorption, gastric ulcer risk and the previous experience of each individual patient. If selected and prescribed wisely, NSAIDs are precious, safe and cost-efficient drugs for the treatment of migraine attacks.

  2. Capabilities of the WinLTP data acquisition program extending beyond basic LTP experimental functions.

    Science.gov (United States)

    Anderson, William W; Collingridge, Graham L

    2007-05-15

    WinLTP is a Windows data acquisition program designed for the investigation of long-term potentiation (LTP), long-term depression (LTD), and synaptic responses in general. The capabilities required for basic LTP and LTD experiments include alternating two-input extracellular pathway stimulation, LTP induction by single train, theta burst, and primed burst stimulation, and LTD induction by low frequency stimulation. WinLTP provides on-line analyses of synaptic waveforms including measurement of slope, peak amplitude, population-spike amplitude, average amplitude, area, rise time, decay time, duration, cell input resistance, and series resistance. WinLTP also has many advanced capabilities that extend beyond basic LTP experimental capabilities: (1) analysis of all the evoked synaptic potentials individually within a sweep, and the analysis of the entire train-evoked synaptic response as a single entity. (2) Multitasking can be used to run a Continuous Acquisition task (saving data to a gap-free Axon Binary File), while concurrently running the Stimulation/Acquisition Sweeps task. (3) Dynamic Protocol Scripting can be used to make more complicated protocols involving nested Loops (with counters), Delays, Sweeps (with various stimulations), and Run functions (which execute a block of functions). Protocol flow can be changed while the experiment is running. WinLTP runs on National Instruments M-Series and Molecular Devices Digidata 132x boards, and is available at www.winltp.com.
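One of the on-line measures listed above, peak amplitude relative to the pre-stimulus baseline, can be sketched in a few lines. This is a generic illustration, not WinLTP's actual algorithm (the baseline window length is an assumed parameter):

```python
import numpy as np

def peak_amplitude(trace, baseline_pts=50):
    """Peak amplitude of a negative-going synaptic response, measured
    relative to the mean of the first `baseline_pts` pre-stimulus samples.
    Returns a positive number for a downward-deflecting fEPSP.
    """
    baseline = np.mean(trace[:baseline_pts])
    return baseline - np.min(trace)

# synthetic downward deflection of amplitude 2.0
t = np.arange(500)
trace = -2.0 * np.exp(-((t - 200) ** 2) / (2 * 30.0 ** 2))
print(round(peak_amplitude(trace), 2))  # → 2.0
```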

  3. Methods for estimating Curie temperatures of titanomaghemites from experimental J s-T data

    Science.gov (United States)

    Moskowitz, Bruce M.

    1981-03-01

    Methods for determining the Curie temperature (Tc) of titanomaghemites from experimental saturation magnetization-temperature (Js-T) data are reviewed. Js-T curves for many submarine basalts and synthetic titanomaghemites are irreversible, and determining Curie temperatures from these curves is not a straightforward procedure; differences of sometimes over 100°C in the value of Tc may result purely from the method of calculation. Two methods for determining Tc are discussed: (1) the graphical method, and (2) the extrapolation method. The graphical method is the most common method employed for determining Curie temperatures of submarine basalts and synthetic titanomaghemites. The extrapolation method, based on the quantum mechanical and thermodynamic aspects of the temperature variation of saturation magnetization near Tc, although not new to solid state physics, has not been used for estimating Curie temperatures of submarine basalts. The extrapolation method is more objective than the graphical method and uses the actual magnetization data in estimating Tc.
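The extrapolation method can be illustrated with a short script: near Tc, Js is assumed to vary as (Tc - T)^β, so plotting Js^(1/β) against T and extrapolating linearly to zero yields Tc. A minimal sketch using the mean-field exponent β = 0.5 (an assumption; the appropriate exponent for titanomaghemites may differ):

```python
import numpy as np

def curie_temp_extrapolation(T, Js, beta=0.5):
    """Estimate Tc by linearly extrapolating Js**(1/beta) vs T to zero.

    beta=0.5 is the mean-field critical exponent -- an assumption; the
    exponent appropriate for real titanomaghemites may be different.
    """
    y = Js ** (1.0 / beta)
    slope, intercept = np.polyfit(T, y, 1)   # fit y = slope*T + intercept
    return -intercept / slope                # temperature where y -> 0

# synthetic near-Tc data with a known Tc of 350 C
Tc_true = 350.0
T = np.linspace(300.0, 345.0, 20)
Js = np.sqrt(Tc_true - T)                    # Js ~ (Tc - T)**0.5 near Tc
print(round(curie_temp_extrapolation(T, Js), 1))  # → 350.0
```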

  4. Performance evaluation of large aperture "polished panel" optical receivers based on experimental data

    Science.gov (United States)

    Vilnrotter, V.

    Recent interest in the development of hybrid RF/optical communications has led to the installation of a “polished-panel” optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Deep Space Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness, and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both “on-source” and “off-source” data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats, compared with the ultimate quantum bound on detection performance. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.
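For the OOK signaling format mentioned above, ideal photon-counting detection has a textbook error-probability formula that is easy to sketch. This idealisation (no background light, equiprobable bits) is an assumption for illustration, not the DSS-13 analysis itself:

```python
import math

def ook_error_prob(n_s):
    """Bit error probability of ideal photon-counting OOK with no
    background light: a 'one' slot with mean signal count n_s is missed
    only when zero photons arrive (Poisson probability exp(-n_s)), and
    'zero' slots are never mistaken, so with equiprobable bits
        P_err = 0.5 * exp(-n_s).
    """
    return 0.5 * math.exp(-n_s)
```

Even a few photons per pulse drive the error rate down rapidly, which is the basic appeal of photon-counting reception for deep-space links.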

  5. Optimal experimental design for improving the estimation of growth parameters of Lactobacillus viridescens from data under non-isothermal conditions.

    Science.gov (United States)

    Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges

    2017-01-02

    In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows a reduction of the experimental workload and costs, and an improvement of model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate both the number of experimental data points and the time needed by each approach, as well as the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square-root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained from the TSM approach were b=0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin=-1.33 (±1.26) [°C], with R²=0.986 and RMSE=0.581, and the parameters obtained with the OED approach were b=0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin=-0.24 (±0.55) [°C], with R²=0.990 and RMSE=0.436. The parameters obtained from the OED approach
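The square-root secondary model quoted above maps temperature to a maximum specific growth rate and can be evaluated directly. A minimal sketch using the OED parameter estimates reported in the abstract (treating the model as valid for any temperature above Tmin):

```python
def sqrt_model_mu_max(T, b=0.0316, t_min=-0.24):
    """Ratkowsky-type square-root secondary model:
    sqrt(mu_max) = b * (T - Tmin), so mu_max = (b * (T - Tmin))**2 [1/h].

    Defaults are the OED-fitted parameters reported above for
    L. viridescens; the model is only meaningful for T > Tmin.
    """
    if T <= t_min:
        return 0.0
    return (b * (T - t_min)) ** 2
```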

  6. Analysis of experimental hydrogen engine data and hydrogen vehicle performance and emissions simulation

    Energy Technology Data Exchange (ETDEWEB)

    Aceves, S.A. [Lawrence Livermore National Lab., CA (United States)

    1996-10-01

    This paper reports the engine and vehicle simulation and analysis done at Lawrence Livermore National Laboratory (LLNL) as part of a joint optimized hydrogen engine development effort. Project participants are: Sandia National Laboratory; Los Alamos National Laboratory; and the University of Miami. Fuel cells are considered the ideal power source for future vehicles, due to their high efficiency and low emissions. However, extensive use of fuel cells in light-duty vehicles is likely to be years away, due to their high manufacturing cost. Hydrogen-fueled, spark-ignited, homogeneous-charge engines offer a near-term alternative to fuel cells. Hydrogen in a spark-ignited engine can be burned at very low equivalence ratios. NOx emissions can be reduced to less than 10 ppm without a catalyst. HC and CO emissions may result from oxidation of engine oil, but with proper design are negligible (a few ppm). Lean operation also results in increased indicated efficiency due to the thermodynamic properties of the gaseous mixture contained in the cylinder. The high effective octane number of hydrogen allows the use of a high compression ratio, further increasing engine efficiency. In this paper, a simplified engine model is used for predicting hydrogen engine efficiency and emissions. The model uses basic thermodynamic equations for the compression and expansion processes, along with an empirical correlation for heat transfer, to predict engine indicated efficiency. A friction correlation and a supercharger/turbocharger model are then used to calculate brake thermal efficiency. The model is validated with many experimental points obtained in a recent evaluation of a hydrogen research engine. The experimental data are used to adjust the empirical constants in the heat release rate and heat transfer correlations. The results indicate that hydrogen lean-burn spark-ignited engines can provide Equivalent Zero Emission Vehicle (EZEV) levels in either a series hybrid or a conventional automobile.
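The abstract's point that lean operation and a high compression ratio raise indicated efficiency can be illustrated with the ideal Otto-cycle formula, a far simpler sketch than the LLNL model itself. Here γ = 1.38 is an assumed ratio of specific heats for a lean hydrogen-air mixture (leaner mixtures push γ toward air's 1.4, raising efficiency):

```python
def otto_indicated_efficiency(r, gamma=1.38):
    """Ideal Otto-cycle indicated efficiency, eta = 1 - r**(1 - gamma).

    r: compression ratio; gamma: ratio of specific heats of the working
    mixture (1.38 is an illustrative value for a lean hydrogen-air charge).
    """
    return 1.0 - r ** (1.0 - gamma)

# higher compression ratio -> higher ideal efficiency
print(round(otto_indicated_efficiency(14.0), 3))
```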

  7. Flow Control Under Low-Pressure Turbine Conditions Using Pulsed Jets: Experimental Data Archive

    Science.gov (United States)

    Volino, Ralph J.; Ibrahim, Mounir B.

    2012-01-01

    This publication is the final report of research performed under an NRA/Cooperative Interagency Agreement, and includes a supplemental CD-ROM with detailed data. It is complemented by NASA/CR-2012-217416 and NASA/CR-2012-217417, which include a Ph.D. dissertation and an M.S. thesis, respectively, performed under this contract. In this study the effects of unsteady wakes and of flow control using vortex generator jets (VGJs) were studied experimentally and computationally on the flow over the L1A low-pressure turbine (LPT) airfoil. The experimental facility was a six-passage linear cascade in a low-speed wind tunnel at the U.S. Naval Academy. In parallel, computational work using the commercial code FLUENT (ANSYS, Inc.) was performed at Cleveland State University, using Unsteady Reynolds Averaged Navier-Stokes (URANS) and Large Eddy Simulation (LES) methods. In the first phase of the work, the baseline flow was documented under steady inflow conditions without flow control, with URANS calculations done using a variety of turbulence models. In the second phase, flow control was added using steady and pulsed vortex generator jets. The VGJs successfully suppressed separation and reduced aerodynamic losses; pulsed operation was more effective, and mass flow requirements were very low. Numerical simulations of the VGJ cases showed that URANS failed to capture the effect of the jets, while LES results were generally better. In the third phase, the effects of unsteady wakes were studied. Computations with URANS and LES captured the wake effect and generally predicted separation and reattachment to match the experiments, though quantitatively the results were mixed. In the final phase of the study, wakes and VGJs were combined and synchronized using various timing schemes. The timing of the jets with respect to the wakes had some effect, but in general, once the disturbance frequency was high enough to control separation, the timing was not very important.

  8. An easy-to-build remote laboratory with data transfer using the Internet School Experimental System

    Science.gov (United States)

    Schauer, František; Lustig, František; Dvořák, Jiří; Ožvoldová, Miroslava

    2008-07-01

    The present state of information and communication technology makes it possible to devise and run computer-based e-laboratories accessible to any user with an Internet connection, equipped with very simple technical means and making full use of web services. Thus, the way is open for a new strategy of physics education with strongly global features, based on experiment and experimentation. We name this strategy integrated e-learning, and remote experiments across the Internet are its foundation. We present both pedagogical and technical reasoning for the remote experiments and outline a simple system based on a server-client approach, web services and Java applets. We give here an outline of the prospective remote laboratory system with data transfer using the Internet School Experimental System (ISES) as hardware and the ISES WEB Control kit as software. This approach enables the simple construction of remote experiments without building any hardware and with virtually no programming, using a copy-and-paste approach with typical prebuilt blocks such as a camera view, controls, graphs, displays, etc. We have set up, and at present operate, seven experiments running round the clock, with more than 12 000 connections since 2005. The experiments are widely used in the practical teaching of physics at both university and secondary level. The recording of the detailed steps the experimenter takes during the measurement enables detailed study of the psychological aspects of running the experiments. The system is ready for a network of universities to start covering the basic set of physics experiments. In conclusion we summarize the results achieved and our experience of using remote experiments built on the ISES hardware system.

  9. An easy-to-build remote laboratory with data transfer using the Internet School Experimental System

    Energy Technology Data Exchange (ETDEWEB)

    Schauer, Frantisek; Ozvoldova, Miroslava [Trnava University, Faculty of Pedagogy, Department of Physics, Trnava (Slovakia); Lustig, Frantisek; Dvorak, JirI [Charles University, Faculty of Mathematics and Physics, Department of Didactics of Physics, Prague (Czech Republic)], E-mail: fschauer@ft.utb.cz

    2008-07-15

    The present state of information and communication technology makes it possible to devise and run computer-based e-laboratories accessible to any user with an Internet connection, equipped with very simple technical means and making full use of web services. Thus, the way is open for a new strategy of physics education with strongly global features, based on experiment and experimentation. We name this strategy integrated e-learning, and remote experiments across the Internet are its foundation. We present both pedagogical and technical reasoning for the remote experiments and outline a simple system based on a server-client approach, web services and Java applets. We give here an outline of the prospective remote laboratory system with data transfer using the Internet School Experimental System (ISES) as hardware and the ISES WEB Control kit as software. This approach enables the simple construction of remote experiments without building any hardware and with virtually no programming, using a copy-and-paste approach with typical prebuilt blocks such as a camera view, controls, graphs, displays, etc. We have set up, and at present operate, seven experiments running round the clock, with more than 12 000 connections since 2005. The experiments are widely used in the practical teaching of physics at both university and secondary level. The recording of the detailed steps the experimenter takes during the measurement enables detailed study of the psychological aspects of running the experiments. The system is ready for a network of universities to start covering the basic set of physics experiments. In conclusion we summarize the results achieved and our experience of using remote experiments built on the ISES hardware system.

  10. Evaluation of medical countermeasures against organophosphorus compounds: the value of experimental data and computer simulations.

    Science.gov (United States)

    Worek, Franz; Aurbek, Nadine; Herkert, Nadja M; John, Harald; Eddleston, Michael; Eyer, Peter; Thiermann, Horst

    2010-09-06

    Despite extensive research for more than six decades on medical countermeasures against poisoning by organophosphorus compounds (OP), the treatment options are meagre. The presently established acetylcholinesterase (AChE) reactivators (oximes), e.g. obidoxime and pralidoxime, are insufficient against a number of nerve agents, and there is ongoing debate on the benefit of oxime treatment in human OP pesticide poisoning. Up to now, the therapeutic efficacy of oximes has mostly been evaluated in animal models, but substantial species differences prevent direct extrapolation of animal data to humans. Hence, it was considered essential to establish relevant experimental in vitro models for the investigation of oximes as antidotes and to develop computer models for the simulation of oxime efficacy in different scenarios of OP poisoning. Kinetic studies on the various interactions between erythrocyte AChE from various species, structurally different OPs and different oximes provided a basis for the initial assessment of the ability of oximes to reactivate inhibited AChE. In the present study, in vitro enzyme-kinetic and pharmacokinetic data from a minipig model of dimethoate poisoning and oxime treatment were used to calculate dynamic changes of AChE activities. It could be shown that there is close agreement between calculated and in vivo AChE activities. Moreover, computer simulations provided insight into the potential and limitations of oxime treatment. In the end, such data may be a versatile tool for the ongoing discussion of the pros and cons of oxime treatment in human OP pesticide poisoning.
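The calculation of dynamic AChE activities from kinetic constants can be sketched with a linear rate model. Assuming pseudo-first-order inhibition and oxime-mediated reactivation at constant OP and oxime concentrations (a strong simplification; all rate constants below are illustrative, not measured values):

```python
import math

def ache_activity(t, ki_op=0.5, kr_ox=0.1, op=1.0, ox=1.0, e0=1.0):
    """Fraction of active AChE under simultaneous OP inhibition and
    oxime reactivation, from the linear rate equation
        dE/dt = -ki*[OP]*E + kr*[Ox]*(E0 - E),
    which has the closed-form solution
        E(t) = E_ss + (E0 - E_ss)*exp(-(ki*[OP] + kr*[Ox])*t),
    with steady state E_ss = kr*[Ox]*E0 / (ki*[OP] + kr*[Ox]).
    """
    a = ki_op * op + kr_ox * ox
    e_ss = kr_ox * ox * e0 / a
    return e_ss + (e0 - e_ss) * math.exp(-a * t)
```

Activity relaxes monotonically from the initial value toward a steady state set by the ratio of reactivation to inhibition rates, which is the qualitative behaviour such simulations are used to explore.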

  11. Modeling of experimental data on trace elements and organic compounds content in industrial waste dumps.

    Science.gov (United States)

    Smoliński, Adam; Drobek, Leszek; Dombek, Václav; Bąk, Andrzej

    2016-11-01

    The main objective of the study presented was to investigate the differences between 20 mine waste dumps located in the Silesian region of Poland and the Czech Republic in terms of trace element and polycyclic aromatic hydrocarbon (PAH) contents. Principal Component Analysis (PCA) and Hierarchical Clustering Analysis were applied in the exploration of the data. Since the data set was affected by outlying objects, a suitable analysis strategy was necessary: the final PCA model was constructed using the Expectation-Maximization iterative approach, preceded by a correct identification of outliers. The analysis of the experimental data indicated that three mine waste dumps located in Poland were characterized by the highest concentrations of dibenzo(g,h,i)anthracene and benzo(g,h,i)perylene, while six objects located in the Czech Republic and three objects in Poland were distinguished by high concentrations of chrysene and indeno(1,2,3-cd)pyrene. Three of the studied mine waste dumps, one located in the Czech Republic and two in Poland, were characterized by low concentrations of Cr, Ni, V, naphthalene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, dibenzo(g,h,i)anthracene, benzo(g,h,i)perylene and indeno(1,2,3-cd)pyrene in comparison with the remaining ones. The analysis contributes to the assessment and prognosis of ecological and health risks related to the emission of trace elements and organic compounds (PAHs) from the waste dumps examined. No previous research of similar scope and aims has been reported for the area concerned.
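The exploratory PCA step can be sketched with a plain SVD of the centred data matrix; the EM-based handling of missing values and the outlier diagnostics used in the study are beyond this minimal illustration:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Object scores on the first principal components, by SVD of the
    column-centred data matrix (rows = objects, columns = variables).
    """
    Xc = X - X.mean(axis=0)                          # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                  # project onto first PCs

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))      # 20 objects, 5 measured variables
scores = pca_scores(X)
print(scores.shape)  # → (20, 2)
```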

  12. Experimental impact cratering provides ground truth data for understanding planetary-scale collision processes

    Science.gov (United States)

    Poelchau, Michael H.; Deutsch, Alex; Kenkmann, Thomas

    2013-04-01

    exponentially reduces crater volumes and cratering efficiency relative to non-porous rocks, and also yields less steep ejecta angles. Microstructural analysis of the subsurface shows a zone of pervasive grain crushing and pore space reduction. This is in good agreement with new mesoscale numerical models, which are able to quantify localized shock pressure behavior in the target's pore space. Planar shock recovery experiments confirm these local pressure excursions, based on microanalysis of shock metamorphic features in quartz. Saturation of porous target rocks with water counteracts many of the effects of porosity. Post-impact analysis of projectile remnants shows that during mixing of projectile and target melts, the Fe of the projectile is preferentially partitioned into target melt to a greater degree than Ni and Co. We plan to continue evaluating the experimental results in combination with numerical models. These models help to quantify and evaluate cratering processes, while experimental data serve as benchmarks to validate the improved numerical models, thus helping to "bridge the gap" between experiments and nature. The results confirm and expand current crater scaling laws, and make an application to craters on planetary surfaces possible.

  13. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates

    DEFF Research Database (Denmark)

    Schwämmle, Veit; León, Ileana R.; Jensen, Ole Nørregaard

    2013-01-01

    changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets...... with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved...... to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods...
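The rank product statistic named above is simple enough to sketch: ranks are assigned per replicate, and each feature's statistic is the geometric mean of its ranks. This minimal version does not reproduce the study's extension to data sets with missing values:

```python
import numpy as np

def rank_products(fold_changes):
    """Rank product per feature (rows) over replicates (columns):
    the geometric mean of the feature's per-replicate ranks, where
    rank 1 marks the strongest up-regulation.
    """
    fc = np.asarray(fold_changes, dtype=float)
    # rank within each replicate (column): largest value gets rank 1
    ranks = np.argsort(np.argsort(-fc, axis=0), axis=0) + 1
    k = fc.shape[1]
    return np.prod(ranks, axis=1) ** (1.0 / k)

fc = [[3.0, 4.0],   # consistently top-ranked feature
      [2.0, 1.0],
      [1.0, 2.0]]
print(rank_products(fc))  # smallest value flags the consistent feature
```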

  14. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using a dynamic atlas of experimental data, together with the mathematical models describing them, in the problem of adjusting parametric models of observables that depend on kinematic variables is presented. The capability to display a large number of experimental data sets and the models describing them is shown by examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, including the best-fit parameter values, are shown schematically. The DaMoScope codes are freely available.

  15. Spontaneous Time Symmetry Breaking in System with Mixed Strategy Nash Equilibrium: Evidences in Experimental Economics Data

    Science.gov (United States)

    Wang, Zhijian; Xu, Bin; Zhejiang Collaboration

    2011-03-01

    In social science, laboratory experiments with interacting human subjects are a standard test-bed for studying social processes at the micro level. Usually, as in physics, processes near equilibrium are modeled as stochastic processes with time-reversal symmetry (TRS). To the best of our knowledge, neither the breaking of time symmetry near equilibrium nor the existence of robust time-antisymmetric processes had been clearly reported in experimental economics until now. By employing the Markov transition method to analyze data from human-subject 2x2 games with a wide range of parameters and mixed-strategy Nash equilibria, we study the time symmetry of the social interaction process near Nash equilibrium. We find that time symmetry is broken and that robust time-antisymmetric processes exist. We also report the weight of the time-antisymmetric processes in the total processes of each of the games. Evidence from laboratory marketing experiments is provided as a one-dimensional case; there, too, time-antisymmetric cycles can be captured. The proportion of time-antisymmetric processes is small, but the cycles are distinguishable.
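The Markov transition method for detecting broken time-reversal symmetry can be sketched by counting transitions in a discrete state sequence: under detailed balance the count matrix is symmetric, so its antisymmetric part measures net probability currents. A generic illustration, not the authors' exact estimator:

```python
import numpy as np

def transition_asymmetry(states, n_states):
    """Count transitions N[i, j] along a state sequence and return the
    antisymmetric part (N - N.T) / 2, whose nonzero entries are net
    probability currents -- zero for a time-reversible process.
    """
    N = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        N[a, b] += 1
    return (N - N.T) / 2.0

# a deterministic 3-cycle 0 -> 1 -> 2 -> 0 has maximal net current
seq = [0, 1, 2] * 10
J = transition_asymmetry(seq, 3)
print(J[0, 1] > 0)  # → True
```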

  16. Experimental data showing the thermal behavior of a flat roof with phase change material.

    Science.gov (United States)

    Tokuç, Ayça; Başaran, Tahsin; Yesügey, S Cengiz

    2015-12-01

    The selection and configuration of building materials for optimal energy efficiency in a building require some assumptions and models for the thermal behavior of the materials used. Although the models for many materials can be considered acceptable for simulation and calculation purposes, work on modeling the real-time behavior of phase change materials is still under development. The data given in this article show the thermal behavior of a flat roof element with a phase change material (PCM) layer. The temperature and the energy given to and taken from the building element are reported. In addition, the solid-liquid behavior of the PCM is tracked through images. The resulting thermal behavior of the phase change material is discussed and simulated in [1] A. Tokuç, T. Başaran, S.C. Yesügey, An experimental and numerical investigation on the use of phase change materials in building elements: the case of a flat roof in Istanbul, Energy Build., vol. 102, 2015, pp. 91-104.
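One common way to simulate a PCM layer such as the one above is the effective-heat-capacity method, in which the latent heat of fusion is spread over the melting range. A minimal sketch with illustrative (not measured) material parameters:

```python
def effective_heat_capacity(T, cp=2000.0, latent=180e3, t_melt=23.0, dt=2.0):
    """Effective heat capacity [J/kg K] of a PCM: the sensible cp plus the
    latent heat `latent` [J/kg] spread uniformly over the melting range
    [t_melt - dt, t_melt + dt]. All defaults are illustrative assumptions,
    not the material data of the study above.
    """
    if t_melt - dt <= T <= t_melt + dt:
        return cp + latent / (2.0 * dt)
    return cp
```

Substituting this cp(T) into an ordinary heat-conduction solver reproduces the plateau-like temperature response that makes PCM layers attractive for flat roofs.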

  17. Physiology of cerebral venous blood flow: from experimental data in animals to normal function in humans.

    Science.gov (United States)

    Schaller, B

    2004-11-01

    In contrast to the cerebroarterial system, the cerebrovenous system is not well examined and only partly understood. The cerebrovenous system is a complex three-dimensional structure that is often asymmetric and shows a considerably more variable pattern than the arterial anatomy. Particular emphasis is devoted to the venous return to extracranial drainage routes. As state-of-the-art imaging methods play a greater role in visualizing the intracranial venous system, its clinically pertinent anatomy and physiology have gained increasing interest, even though only few data are available. For this reason, experimental research on specific biophysical (fluid dynamic, rheologic) and hemodynamic (venous pressure, cerebral venous blood flow) parameters of the cerebral venous system is increasingly in focus, especially as these parameters differ from those of the cerebral arterial system. From the present point of view, it seems that the cerebrovenous system may be one of the most important factors guaranteeing normal brain function. In the light of this increasing interest in the cerebral venous system, the authors summarize the current knowledge of the physiology of the cerebrovenous system and discuss it in the light of its clinical relevance.

  18. Confronting Theoretical Predictions With Experimental Data; Fitting Strategy For Multi-Dimensional Distributions

    Directory of Open Access Journals (Sweden)

    Tomasz Przedziński

    2015-01-01

    Full Text Available After developing a Resonance Chiral Lagrangian (RχL) model to describe hadronic τ lepton decays [18], the model was confronted with experimental data. This was accomplished using a fitting framework developed to take into account the complexity of the model and to ensure the numerical stability of the algorithms used in the fitting. Since the model used in the fit contained 15 parameters and only three one-dimensional distributions were available, we could expect multiple local minima or even whole regions of equal potential to appear. Our methods had to explore the whole parameter space thoroughly and ensure, as far as possible, that the result is a global minimum. This paper focuses on the technical aspects of the fitting strategy used. The first approach was based on the re-weighting algorithm published in [17] and produced results in around two weeks. A later approach, with an improved theoretical model and a simple parallelization algorithm based on Inter-Process Communication (IPC) methods of the UNIX system, reduced the computation time to 2-3 days. Additional approximations introduced to the model decreased the time needed to obtain preliminary results to 8 hours. This allowed better validation of the results, leading to a more robust analysis published in [12].
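The need to explore a 15-parameter space thoroughly before trusting any minimum can be illustrated with a toy multi-start search. This is a deliberately crude stand-in for the authors' framework (which relies on re-weighting and IPC-based parallelism, not on the scheme below):

```python
import random

def multi_start_minimize(f, bounds, n_starts=5000, seed=0):
    """Crude global search: evaluate f at many uniform random points in
    the box `bounds` and keep the best. Local refinement, which a real
    fit would add around the best candidates, is omitted for brevity.
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# toy chi-square surface with its minimum at (1, -2)
chi2 = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x_best, f_best = multi_start_minimize(chi2, [(-5.0, 5.0), (-5.0, 5.0)])
```

With many parameters, the number of random starts needed grows exponentially, which is exactly why parallelization and model approximations were worth the effort in the study above.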

  19. Fitting a simple model of inelastic scattering in Monte Carlo code to experimental data

    CERN Document Server

    Stary, V

    1999-01-01

    Monte Carlo simulations are a very useful tool for simulating many physical processes. The method has often been used for electron microscopy, spectroscopy and microanalysis. In this paper, the backscattering coefficient and the coefficient of elastic reflection in the energy range 0.2-5 keV for C, Al, Cu, Ag and Au are simulated and compared with literature data. For the elastic cross sections the program PWADIR was used, whereas for the inelastic ones a simple hyperbolic distribution of energy losses was utilized. Due to the hyperbolic shape, one free parameter, the minimal energy loss W_min, appears. This quantity hardly influences the dependence of the backscattering coefficient on primary energy, but strongly influences the energy dependence of the coefficient of elastic reflection. By comparing the calculated and measured values of this coefficient we were able to find the optimal values of W_min and, through their mutual dependence, the 'experimental' values of the inelastic mean free path.
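The sampling step for the hyperbolic energy-loss distribution can be sketched by inverse-transform sampling. Reading "hyperbolic" as p(W) ∝ 1/W² truncated to [W_min, W_max] is an assumption for illustration; the paper's exact functional form may differ:

```python
import random

def sample_energy_loss(w_min, w_max, rng=random):
    """Draw one inelastic energy loss W from p(W) proportional to 1/W**2
    on [w_min, w_max], by inverting the cumulative distribution
        F(W) = (1/w_min - 1/W) / (1/w_min - 1/w_max).
    """
    u = rng.random()
    inv = 1.0 / w_min - u * (1.0 / w_min - 1.0 / w_max)
    return 1.0 / inv

# many small losses, occasional large ones -- characteristic of 1/W**2
rng = random.Random(42)
losses = [sample_energy_loss(10.0, 5000.0, rng) for _ in range(10000)]
```

The distribution concentrates near W_min, which is why this single free parameter controls the simulated inelastic mean free path so strongly.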

  20. Bias determination for space accelerometers using the ZARM Catapult system - experimental setup and data analysis

    Science.gov (United States)

    Selig, Hanns; Santos Rodrigues, Manuel; Touboul, Pierre; Liorzou, Françoise

    2012-07-01

    Accelerometers for space applications - like the electrostatic differential accelerometer of the MICROSCOPE mission for testing the equivalence principle in space - have to be tested and qualified under μg conditions in order to demonstrate system operation and to determine the characteristic sensor parameters. One important characteristic property is the sensor bias, which can in principle be determined directly by using the ZARM catapult system as a test platform. Even in the evacuated drop tube, the residual air pressure results in air friction that depends on the capsule velocity. At the apex (the highest point of the capsule trajectory) the acceleration relative to the gravitational acceleration g becomes zero, owing to the zero velocity at the apex. The direct measurement of the vertical linear acceleration sensor bias is affected by some additional effects that have to be understood in order to determine the bias. Two catapult campaigns have been carried out to demonstrate the principles of bias determination using a SuperStar accelerometer (Onera). The presentation gives an overview of the experimental setup and the corresponding data analysis.
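The bias-at-apex principle can be sketched in a few lines: since the air drag scales with velocity, the accelerometer reading at the apex (zero velocity) isolates the bias. All numbers below are illustrative, and the "additional effects" the abstract mentions are ignored:

```python
import numpy as np

def bias_at_apex(accel_meas, velocity):
    """Estimate the accelerometer bias as the reading at the trajectory
    apex, where the capsule velocity -- and hence the velocity-dependent
    air drag -- vanishes.
    """
    i_apex = np.argmin(np.abs(velocity))
    return accel_meas[i_apex]

# synthetic flight: quadratic drag signature on top of a 5e-3 m/s^2 bias
t = np.linspace(0.0, 1.9, 2000)
v = 9.3 - 9.81 * t                 # vertical velocity, apex where v = 0
meas = 5e-3 + 1e-3 * v ** 2        # assumed drag model, illustrative only
print(bias_at_apex(meas, v))
```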

  1. Experimental neutron capture data of $^{58}$Ni from the CERN n_TOF facility

    CERN Document Server

    Žugec, P.; Colonna, N.; Bosnar, D.; Altstadt, S.; Andrzejewski, J.; Audouin, L.; Bécares, V.; Bečvář, F.; Belloni, F.; Berthoumieux, E.; Billowes, J.; Boccone, V.; Brugger, M.; Calviani, M.; Calviño, F.; Cano-Ott, D.; Carrapiço, C.; Cerutti, F.; Chiaveri, E.; Chin, M.; Cortés, G.; Cortés-Giraldo, M.A.; Diakaki, M.; Domingo-Pardo, C.; Duran, I.; Dzysiuk, N.; Eleftheriadis, C.; Ferrari, A.; Fraval, K.; Ganesan, S.; García, A.R.; Giubrone, G.; Gómez-Hornillos, M.B.; Gonçalves, I.F.; González-Romero, E.; Griesmayer, E.; Guerrero, C.; Gunsing, F.; Gurusamy, P.; Jenkins, D.G.; Jericha, E.; Kadi, Y.; Käppeler, F.; Karadimos, D.; Koehler, P.; Kokkoris, M.; Krtička, M.; Kroll, J.; Langer, C.; Lederer, C.; Leeb, H.; Leong, L.S.; Losito, R.; Manousos, A.; Marganiec, J.; Martìnez, T.; Massimi, C.; Mastinu, P.F.; Mastromarco, M.; Meaze, M.; Mendoza, E.; Mengoni, A.; Milazzo, P.M.; Mingrone, F.; Mirea, M.; Mondalaers, W.; Paradela, C.; Pavlik, A.; Perkowski, J.; Pignatari, M.; Plompen, A.; Praena, J.; Quesada, J.M.; Rauscher, T.; Reifarth, R.; Riego, A.; Roman, F.; Rubbia, C.; Sarmento, R.; Schillebeeckx, P.; Schmidt, S.; Tagliente, G.; Tain, J.L.; Tarrío, D.; Tassan-Got, L.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Versaci, R.; Vermeulen, M.J.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Ware, T.; Weigand, M.; Weiß, C.; Wright, T.

    2013-01-01

    The $^{58}$Ni $(n,\\gamma)$ cross section has been measured at the neutron time of flight facility n_TOF at CERN, in the energy range from 27 meV up to 400 keV. In total, 51 resonances have been analyzed up to 122 keV. Maxwellian averaged cross sections (MACS) have been calculated for stellar temperatures of kT$=$5-100 keV with uncertainties of less than 6%, showing fair agreement with recent experimental and evaluated data up to kT = 50 keV. The MACS extracted in the present work at 30 keV is 34.2$\\pm$0.6$_\\mathrm{stat}\\pm$1.8$_\\mathrm{sys}$ mb, in agreement with the latest results and evaluations, but 12% lower relative to the recent KADoNIS compilation of astrophysical cross sections. When included in models of s-process nucleosynthesis in massive stars, this change results in a 60% increase of the abundance of $^{58}$Ni, with negligible propagation to heavier isotopes. The reason is that, using either the old or the new MACS, $^{58}$Ni is efficiently depleted by neutron captures.
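
As background to the quantity reported in this record, a Maxwellian-averaged cross section is a weighted integral of the pointwise cross section σ(E). A minimal stdlib-Python sketch of that quadrature follows; the 1/v toy cross section and the grid parameters are purely illustrative and are not the n_TOF analysis:

```python
import math

def macs(sigma, kT, emax_factor=30.0, n=20000):
    """Maxwellian-averaged cross section for a pointwise cross section
    sigma(E), with E in the same units as kT:

        MACS(kT) = (2/sqrt(pi)) * Int_0^inf sigma(E) E exp(-E/kT) dE / (kT)**2

    evaluated with a plain trapezoidal rule on [0, emax_factor*kT]."""
    de = emax_factor * kT / n
    total = 0.0
    for i in range(n + 1):
        e = i * de
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * sigma(e) * e * math.exp(-e / kT) * de
    return (2.0 / math.sqrt(math.pi)) * total / kT ** 2

# Sanity check with a 1/v cross section, sigma(E) = E**-0.5, for which the
# integral is analytic and MACS(kT) = kT**-0.5.
```

For the 1/v case the numerical result can be checked directly against the closed form, which is a useful test before applying the quadrature to tabulated resonance data.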

  2. Experimental data showing the thermal behavior of a flat roof with phase change material

    Directory of Open Access Journals (Sweden)

    Ayça Tokuç

    2015-12-01

    Full Text Available The selection and configuration of building materials for optimal energy efficiency in a building require some assumptions and models for the thermal behavior of the utilized materials. Although the models for many materials can be considered acceptable for simulation and calculation purposes, work on modeling the real-time behavior of phase change materials is still under development. The data given in this article show the thermal behavior of a flat roof element with a phase change material (PCM) layer. The temperature and energy given to and taken from the building element are reported. In addition, the solid–liquid behavior of the PCM is tracked through images. The resulting thermal behavior of the phase change material is discussed and simulated in [1] A. Tokuç, T. Başaran, S.C. Yesügey, An experimental and numerical investigation on the use of phase change materials in building elements: the case of a flat roof in Istanbul, Energy Build., vol. 102, 2015, pp. 91–104.

  3. Sixty years of research, 60 years of data: long-term US Forest Service data management on the Penobscot Experimental Forest

    Science.gov (United States)

    Matthew B. Russell; Spencer R. Meyer; John C. Brissette; Laura Kenefic

    2014-01-01

    The U.S. Department of Agriculture, Forest Service silvicultural experiment on the Penobscot Experimental Forest (PEF) in Maine represents 60 years of research in the northern conifer and mixedwood forests of the Acadian Forest Region. The objective of this data management effort, which began in 2008, was to compile, organize, and archive research data collected in the...

  4. Data Processing and Experimental Design for Micrometeorite Impacts in Small Bodies

    Science.gov (United States)

    Jensen, E.; Lederer, S.; Smith, D.; Strojia, C.; Cintala, M.; Zolensky, M.; Keller, L.

    2014-01-01

    as whole mineral rocks to investigate the differences in shock propagation when voids are present. By varying velocity, ambient temperature, and porosity, we can investigate different variables affecting impacts in the solar system. Data indicate that there is a non-linear relationship between peak shock pressure and the variation in infrared spectral absorbances caused by the distorted crystal structure. The maximum variability occurs around 37 GPa in enstatite and forsterite. The particle size distribution of the impacted material similarly changes with velocity/peak shock pressure. The experiments described above are designed to measure the near- to mid-IR effects from these changes to the mineral structure. See Lederer et al., this meeting, for additional experimental results.

  5. Experimental Seismic Event-screening Criteria at the Prototype International Data Center

    Science.gov (United States)

    Fisk, M. D.; Jepsen, D.; Murphy, J. R.

    - Experimental seismic event-screening capabilities are described, based on the difference of body-and surface-wave magnitudes (denoted as Ms:mb) and event depth. These capabilities have been implemented and tested at the prototype International Data Center (PIDC), based on recommendations by the IDC Technical Experts on Event Screening in June 1998. Screening scores are presented that indicate numerically the degree to which an event meets, or does not meet, the Ms:mb and depth screening criteria. Seismic events are also categorized as onshore, offshore, or mixed, based on their 90% location error ellipses and an onshore/offshore grid with five-minute resolution, although this analysis is not used at this time to screen out events. Results are presented of applications to almost 42,000 events with mb>=3.5 in the PIDC Standard Event Bulletin (SEB) and to 121 underground nuclear explosions (UNE's) at the U.S. Nevada Test Site (NTS), the Semipalatinsk and Novaya Zemlya test sites in the Former Soviet Union, the Lop Nor test site in China, and the Indian, Pakistan, and French Polynesian test sites. The screening criteria appear to be quite conservative. None of the known UNE's are screened out, while about 41 percent of the presumed earthquakes in the SEB with mb>=3.5 are screened out. UNE's at the Lop Nor, Indian, and Pakistan test sites on 8 June 1996, 11 May 1998, and 28 May 1998, respectively, have among the lowest Ms:mb scores of all events in the SEB. To assess the validity of the depth screening results, comparisons are presented of SEB depth solutions to those in other bulletins that are presumed to be reliable and independent. Using over 1600 events, the comparisons indicate that the SEB depth confidence intervals are consistent with or shallower than over 99.8 percent of the corresponding depth estimates in the other bulletins. Concluding remarks are provided regarding the performance of the experimental event-screening criteria, and plans for future
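
The screening logic described in this record can be sketched as a pair of predicate functions. The slope, offset, and depth threshold below are placeholders chosen for illustration only; they are not the operational PIDC values, which the paper derives with associated uncertainty margins:

```python
def msmb_score(ms, mb, slope=1.0, offset=0.64):
    """Hypothetical Ms:mb screening score: positive values favour the
    earthquake hypothesis (candidate for screening out), negative values
    do not. slope and offset are illustrative stand-ins, not the
    operational PIDC parameters."""
    return slope * ms - mb + offset

def screen(ms, mb, depth_km=None, depth_threshold_km=10.0):
    """An event is screened out if EITHER the Ms:mb criterion or a
    (well-constrained) depth criterion is met -- a sketch of the idea,
    ignoring the confidence bounds used in practice."""
    if msmb_score(ms, mb) >= 0.0:
        return True
    if depth_km is not None and depth_km > depth_threshold_km:
        return True
    return False
```

An explosion-like event (small Ms relative to mb, shallow or unknown depth) fails both tests and is retained for further analysis, which is the conservative behaviour the record reports.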

  6. Experimental data and boundary conditions for a Double-Skin Facade building in external air curtain mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    was carried out in a full scale test facility ‘The Cube’, in order to compile three sets of high quality experimental data for validation purposes. The data sets are available for preheating mode, external air curtain mode and transparent insulation mode. The objective of this article is to provide the reader...

  7. Experimental and Computational Method for Determining Parameters of Stress-Strain State from the Data Obtainable by Interference Optical Techniques

    Directory of Open Access Journals (Sweden)

    Razumovsky I.

    2010-06-01

    Full Text Available An experimental and computational method for determining the parameters of the stress-strain state is proposed, based on estimating the agreement between data sets obtained experimentally and the results of numerical calculations of boundary problems whose formulation accounts for all distinctive features of the area geometry, the character of the loads considered, and the deformation characteristics of the materials. The proposed procedure was verified on a number of practically important problems.

  8. Reservoir capacity estimates in shale plays based on experimental adsorption data

    Science.gov (United States)

    Ngo, Tan

    Fine-grained sedimentary rocks are characterized by a complex porous framework containing pores in the nanometer range that can store a significant amount of natural gas (or any other fluids) through adsorption processes. Although the adsorbed gas can take up to a major fraction of the total gas-in-place in these reservoirs, the ability to produce it is limited, and the current technology focuses primarily on the free gas in the fractures. A better understanding and quantification of adsorption/desorption mechanisms in these rocks is therefore required, in order to allow for a more efficient and sustainable use of these resources. Additionally, while water is still predominantly used to fracture the rock, other fluids, such as supercritical CO2 are being considered; here, the idea is to reproduce a similar strategy as for the enhanced recovery of methane in deep coal seams (ECBM). Also in this case, the feasibility of CO2 injection and storage in hydrocarbon shale reservoirs requires a thorough understanding of the rock behavior when exposed to CO2, thus including its adsorption characteristics. The main objectives of this Master's Thesis are as follows: (1) to identify the main controls on gas adsorption in mudrocks (TOC, thermal maturity, clay content, etc.); (2) to create a library of adsorption data measured on shale samples at relevant conditions and to use them for estimating GIP and gas storage in shale reservoirs; (3) to build an experimental apparatus to measure adsorption properties of supercritical fluids (such as CO2 or CH 4) in microporous materials; (4) to measure adsorption isotherms on microporous samples at various temperatures and pressures. The main outcomes of this Master's Thesis are summarized as follows. A review of the literature has been carried out to create a library of methane and CO2 adsorption isotherms on shale samples from various formations worldwide. 
Large discrepancies have been found between estimates of the adsorbed gas density
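
Measured adsorption isotherms of the kind collected in such a library are commonly reduced to Langmuir parameters before being used for gas-in-place estimates. A minimal sketch via the standard linearisation follows; the sample data and units are hypothetical, and real shale isotherms often require excess-adsorption corrections this sketch ignores:

```python
def fit_langmuir(pressures, volumes):
    """Fit the Langmuir isotherm V(P) = VL * P / (PL + P) using the
    classical linearisation P/V = P/VL + PL/VL, i.e. ordinary least
    squares of P/V against P. Returns (VL, PL) in the input units."""
    ys = [p / v for p, v in zip(pressures, volumes)]
    n = len(pressures)
    mx = sum(pressures) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in pressures)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pressures, ys))
    slope = sxy / sxx            # = 1/VL
    intercept = my - slope * mx  # = PL/VL
    VL = 1.0 / slope
    PL = intercept * VL
    return VL, PL
```

On synthetic data generated from known (VL, PL) the linearised fit recovers the parameters exactly, which is a convenient self-test before fitting laboratory isotherms.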

  9. Promoting the experimental dialogue between working memory and chunking: Behavioral data and simulation.

    Science.gov (United States)

    Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît

    2016-04-01

    Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM consists, classically, in rehearsing or refreshing it. Chunking could also be considered as a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking, by studying the effect of acronyms in a computer-paced complex span task paradigm and (2) to formalize explicitly this dialogue within a computational model. Young adults performed a WM complex span task in which they had to maintain series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and its interplay with long-term memory. Indeed, the behavioral WM performance lends evidence to the functional characteristics of chunking that seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at hand in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.

  10. The speed of learning instructed stimulus-response association rules in human: experimental data and model.

    Science.gov (United States)

    Bugmann, Guido; Goslin, Jeremy; Duchamp-Viret, Patricia

    2013-11-06

    Humans can learn associations between visual stimuli and motor responses from just a single instruction. This is known to be a fast process, but how fast is it? To answer this question, we asked participants to learn a briefly presented (200 ms) stimulus-response rule, which they then had to rapidly apply after a variable delay of between 50 and 1300 ms. Participants showed a longer response time with increased variability for short delays. The error rate was low and did not vary with the delay, showing that participants were able to encode the rule correctly in less than 250 ms. This time is close to the fastest synaptic learning speed deemed possible by diffusive influx of AMPA receptors. Learning continued at a slower pace in the delay period and was fully completed, on average, 900 ms after rule presentation onset, when response latencies dropped to levels consistent with basic reaction times. A neural model was proposed that explains the reduction of response times and of their variability with the delay by (i) a random synaptic learning process that generates weights of average values increasing with the learning time, followed by (ii) random crossing of the firing threshold by a leaky integrate-and-fire neuron model, and (iii) assuming that the behavioural response is initiated when all neurons in a pool of m neurons have fired their first spike after input onset. Values of m=2 or 3 were consistent with the experimental data. The proposed model is the simplest solution consistent with neurophysiological knowledge. Additional experiments are suggested to test the hypothesis underlying the model and also to explore forgetting effects, for which there were indications in the longer delay conditions. This article is part of a Special Issue entitled Neural Coding 2012. © 2013 Elsevier B.V. All rights reserved.
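
Model components (ii) and (iii) of this record can be sketched directly: the first-spike time of a leaky integrator under constant drive, and a pool read-out that takes the maximum over m first-spike times. The parameter values (theta, tau, weight statistics) below are illustrative, not those fitted in the paper:

```python
import math
import random

def first_spike_time(w, theta=1.0, tau=0.02):
    """First-passage time of a leaky integrator dV/dt = (-V + w)/tau,
    V(0) = 0, to the threshold theta under constant drive w. The neuron
    never fires (returns inf) if w <= theta."""
    if w <= theta:
        return math.inf
    return -tau * math.log(1.0 - theta / w)

def response_latency(m=3, mean_w=1.5, sd_w=0.2, rng=None):
    """Behavioural latency of a pool of m neurons: the response is
    initiated once ALL m neurons have fired their first spike, i.e. the
    maximum of m first-spike times. In the paper's model the mean weight
    grows with learning time; the numbers here are placeholders."""
    rng = rng or random.Random()
    return max(first_spike_time(rng.gauss(mean_w, sd_w)) for _ in range(m))
```

Increasing mean_w (longer learning time) shortens and sharpens the latency distribution, which is the qualitative effect the model uses to explain the observed delay dependence.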

  11. DWT analysis of numerical and experimental data for the diagnosis of dynamic eccentricities in induction motors

    Science.gov (United States)

    Antonino-Daviu, J.; Jover, P.; Riera, M.; Arkkio, A.; Roger-Folch, J.

    2007-08-01

    The behaviour of an induction machine during a startup transient can provide useful information for the diagnosis of electromechanical faults. During this process, the machine works under high stresses and the effects of the faults may also be larger than those in steady-state. These facts may help to amplify the magnitude of the indicators of some incipient faults. In addition, fault components with frequencies dependent on the slip evolve in a particular way during that transient, a fact that allows the diagnosis of the corresponding fault and the discrimination between different faults. The discrete wavelet transform (DWT) is an ideal tool for analysing signals whose frequency spectrum varies in time. Some research works have successfully applied the DWT to the stator startup current in order to diagnose the presence of broken rotor bars in induction machines. However, few works have used this technique for the study of other common faults, such as eccentricities. In this work, time-frequency analysis of the stator startup current is carried out in order to detect the presence of dynamic eccentricities in an induction motor. For this purpose, the DWT is applied and wavelet signals at different levels are studied. Data are obtained from simulations, using a finite element (FE) model of an induction motor, which allows forcing several kinds of faults in the machine, and also from experimental tests. The results show the validity of the approach for detecting the fault and discriminating with respect to other failures, presenting for certain applications (or working conditions) some advantages over the traditional stationary analysis.
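
The multilevel DWT used in such analyses is typically computed with a wavelet library (e.g. pywt.wavedec). A self-contained Haar-wavelet sketch of the same decomposition is shown below, assuming an even-length (power-of-two) signal; library implementations differ in boundary handling and wavelet choice:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficient lists, each half the input length. Assumes an
    even-length signal."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def wavedec(signal, levels):
    """Multilevel decomposition (signal length divisible by 2**levels).
    Returns [detail_1, ..., detail_levels, approx_levels]; the deeper,
    lower-frequency bands are where slip-dependent fault components
    show up during the startup transient."""
    coeffs, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs
```

Applied to a sampled startup current, the evolution of energy in the low-frequency approximation bands over successive levels is the kind of signature the record uses to separate eccentricities from other faults.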

  12. ChIP-chip versus ChIP-seq: Lessons for experimental design and data analysis

    Science.gov (United States)

    2011-01-01

    Background Chromatin immunoprecipitation (ChIP) followed by microarray hybridization (ChIP-chip) or high-throughput sequencing (ChIP-seq) allows genome-wide discovery of protein-DNA interactions such as transcription factor bindings and histone modifications. Previous reports only compared a small number of profiles, and little has been done to compare histone modification profiles generated by the two technologies or to assess the impact of input DNA libraries in ChIP-seq analysis. Here, we performed a systematic analysis of a modENCODE dataset consisting of 31 pairs of ChIP-chip/ChIP-seq profiles of the coactivator CBP, RNA polymerase II (RNA PolII), and six histone modifications across four developmental stages of Drosophila melanogaster. Results Both technologies produce highly reproducible profiles within each platform, ChIP-seq generally produces profiles with a better signal-to-noise ratio, and allows detection of more peaks and narrower peaks. The set of peaks identified by the two technologies can be significantly different, but the extent to which they differ varies depending on the factor and the analysis algorithm. Importantly, we found that there is a significant variation among multiple sequencing profiles of input DNA libraries and that this variation most likely arises from both differences in experimental condition and sequencing depth. We further show that using an inappropriate input DNA profile can impact the average signal profiles around genomic features and peak calling results, highlighting the importance of having high quality input DNA data for normalization in ChIP-seq analysis. Conclusions Our findings highlight the biases present in each of the platforms, show the variability that can arise from both technology and analysis methods, and emphasize the importance of obtaining high quality and deeply sequenced input DNA libraries for ChIP-seq analysis. PMID:21356108

  13. LCA of management strategies for RDF incineration and gasification bottom ash based on experimental leaching data.

    Science.gov (United States)

    Di Gianfilippo, Martina; Costa, Giulia; Pantini, Sara; Allegrini, Elisa; Lombardi, Francesco; Astrup, Thomas Fruergaard

    2016-01-01

    The main characteristics and environmental properties of the bottom ash (BA) generated from thermal treatment of waste may vary significantly depending on the type of waste and thermal technology employed. Thus, to ensure that the strategies selected for the management of these residues do not cause adverse environmental impacts, the specific properties of BA, in particular its leaching behavior, should be taken into account. This study focuses on the evaluation of potential environmental impacts associated with two different management options for BA from thermal treatment of Refuse Derived Fuel (RDF): landfilling and recycling as a filler for road sub bases. Two types of thermal treatment were considered: incineration and gasification. Potential environmental impacts were evaluated by life-cycle assessment (LCA) using the EASETECH model. Both non-toxicity related impact categories (i.e. global warming and mineral abiotic resource depletion) and toxic impact categories (i.e. human toxicity and ecotoxicity) were assessed. The system boundaries included BA transport from the incineration/gasification plants to the landfills and road construction sites, leaching of potentially toxic metals from the BA, the avoided extraction, crushing, transport and leaching of virgin raw materials for the road scenarios, and material and energy consumption for the construction of the landfills. To provide a quantitative assessment of the leaching properties of the two types of BA, experimental leaching data were used to estimate the potential release from each of the two types of residues. Specific attention was placed on the sensitivity of leaching properties and the determination of emissions by leaching, including: leaching data selection, material properties and assumptions related to emission modeling. The LCA results showed that for both types of BA, landfilling was associated with the highest environmental impacts in the non-toxicity related categories. For the toxicity

  14. Pore Size Distribution Influence on Suction Properties of Calcareous Stones in Cultural Heritage: Experimental Data and Model Predictions

    Directory of Open Access Journals (Sweden)

    Giorgio Pia

    2016-01-01

    Full Text Available Water sorptivity symbolises an important property associated with the preservation of porous construction materials. The water movement into the microstructure is responsible for deterioration of different types of materials and consequently for the worsening of indoor comfort. In this context, experimental sorptivity tests are impractical, because they require large quantities of material in order to statistically validate the results. For these reasons, the development of an analytical procedure for indirect sorptivity evaluation from MIP data would be highly beneficial. In this work, an Intermingled Fractal Units’ model has been proposed to evaluate the sorptivity coefficient of calcareous stones, mostly used in historical buildings of Cagliari, Sardinia. The results are compared with experimental data as well as with two other models found in the literature. The IFU model fits the experimental data better than the other two models, and it represents an important tool for estimating the service life of porous building materials.
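
The sorptivity coefficient itself is the slope of cumulative capillary uptake against the square root of elapsed time. A minimal sketch of that fit follows; the variable names, units, and sample values are illustrative only:

```python
import math

def sorptivity(times_s, uptake_mm):
    """Sorptivity S from cumulative capillary uptake obeying
    i(t) = S * sqrt(t): the least-squares slope through the origin of
    i versus sqrt(t). With times in seconds and uptake in mm, S is in
    mm/s^0.5 (mm/min^0.5 is also common in the literature)."""
    roots = [math.sqrt(t) for t in times_s]
    return sum(r * i for r, i in zip(roots, uptake_mm)) / sum(r * r for r in roots)
```

Deviations of the data from this straight line at late times usually indicate saturation or a change of transport regime, so the fit is normally restricted to the early, linear portion of the uptake curve.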

  15. Protein folding: Defining a standard set of experimental conditions and a preliminary kinetic data set of two-state proteins

    DEFF Research Database (Denmark)

    Maxwell, Karen L.; Wildes, D.; Zarrine-Afsar, A.

    2005-01-01

    rates, thermodynamics, and structure across diverse sets of proteins. These difficulties include the wide, potentially confounding range of experimental conditions and methods employed to date and the difficulty of obtaining correct and complete sequence and structural details for the characterized...... constructs. The lack of a single approach to data analysis and error estimation, or even of a common set of units and reporting standards, further hinders comparative studies of folding. In an effort to overcome these problems, we define here a consensus set of experimental conditions (25°C at pH 7.0, 50 mM buffer), data analysis methods, and data reporting standards that we hope will provide a benchmark for experimental studies. We take the first step in this initiative by describing the folding kinetics of 30 apparently two-state proteins or protein domains under the consensus conditions. The goal of our

  16. Experimental Data in Support of the 1991 Shock Classification of Chondrites

    Science.gov (United States)

    Schmitt, R. T.; Stoffler, D.

    1995-09-01

    We present results of shock recovery experiments performed on the H6(S1) chondrite Kernouvé. These data and new observations on ordinary chondrites confirm the recently proposed classification system [1] and provide additional criteria for determining the shock stage, the shock pressure, and, under certain conditions, also the ambient (pre-shock) temperature during shock metamorphism of any chondrite sample. Two series of experiments at 293 K and 920 K and 10, 15, 20, 25, 30, 45, and 60 GPa were made with a high explosive device [2] using 0.5 mm thick disks of the Kernouvé chondrite. Shock effects in olivine, orthopyroxene, plagioclase, and troilite and shock-induced melt products were studied by optical [3], electron optical and X-ray diffraction methods. All essential characteristics of the six progressive stages of shock metamorphism (S1 - S6) observed in natural samples of chondrites [1] have been reproduced experimentally except for opaque shock veins and the high pressure polymorphs of olivine and pyroxene (ringwoodite/wadsleyite and majorite), well known from naturally shocked chondrites. This is probably due to the special sample and containment geometry and the extremely short pressure pulses (0.2 - 0.8 microseconds) in the experiments. The shock experiments provided a clear understanding of the shock wave behavior of troilite and of the shock-induced melting, mobilization, and exsolution-recrystallization of composite troilite-metal grains. At 293 K troilite is monocrystalline up to 35 GPa displaying undulatory extinction from 10 to 25 GPa, partial recrystallization from 30 - 45 GPa, and complete recrystallization above 45 GPa. Local melting of troilite/metal grains starts at 30 GPa and composite grains displaying exsolution textures of both phases are formed which get mobilized and deposited into fractures of neighbouring silicate grains above 45 GPa. For a pre-shock temperature of 293 K the pressure at which diagnostic shock effects are formed, is

  17. Current status of the European contribution to the Remote Data Access System of the ITER Remote Experimentation Centre

    Energy Technology Data Exchange (ETDEWEB)

    De Tommasi, G., E-mail: detommas@unina.it [Fusion for Energy, 08019 Barcelona (Spain); Consorzio CREATE/DIETI, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy); Manduchi, G. [Consorzio RFX, Corso Stati Uniti 4, Padova 35127 (Italy); Muir, D.G. [CCFE, Culham Science Centre, Abingdon OX14 3EA (United Kingdom); Ide, S.; Naito, O.; Urano, H. [Japan Atomic Energy Agency, Naka Fusion Institute, Naka, Ibaraki 311-0193 (Japan); Clement-Lorenzo, S. [Fusion for Energy, 08019 Barcelona (Spain); Nakajima, N. [BA IFERC Project Team, Rokkasho-mura, Aomori 039-3212 (Japan); Ozeki, T. [Japan Atomic Energy Agency, Naka Fusion Institute, Naka, Ibaraki 311-0193 (Japan); Sartori, F. [Fusion for Energy, 08019 Barcelona (Spain)

    2015-10-15

    The ITER Remote Experimentation Centre (REC) is one of the projects under implementation within the BA agreement. The final objective of the REC is to allow researchers to take part in the experimentation on ITER from a remote location. Before ITER first operations, the REC will be used to evaluate ITER-relevant technologies for remote participation. Among the different software tools needed for remote participation, an important one is the Remote Data Access System (RDA), which provides a single software infrastructure to access data stored at the remotely participating experiment, regardless of the geographical location of the users. This paper introduces the European contribution to the RDA system for the REC.

  18. Experimental Demonstration of Optical Switching of Tbit/s Data Packets for High Capacity Short-Range Networks

    DEFF Research Database (Denmark)

    Medhin, Ashenafi Kiros; Kamchevska, Valerija; Hu, Hao

    2015-01-01

    Record-high 1.28-Tbit/s optical data packets are experimentally switched in the optical domain using a LiNbO3 switch. An in-band notch-filter labeling scheme scalable to 65,536 labels is employed and a 3-km transmission distance is demonstrated.

  19. Experimental simulation: using generative modelling and palaeoecological data to understand human-environment interactions

    Directory of Open Access Journals (Sweden)

    George Perry

    2016-10-01

    Full Text Available The amount of palaeoecological information available continues to grow rapidly, providing improved descriptions of the dynamics of past ecosystems and enabling them to be seen from new perspectives. At the same time, there has been concern over whether palaeoecological enquiry needs to move beyond descriptive inference to a more hypothesis-focussed or experimental approach; however, the extent to which conventional hypothesis-driven scientific frameworks can be applied to historical contexts (i.e., the past) is the subject of ongoing debate. In other disciplines concerned with human-environment interactions, including physical geography and archaeology, there has been growing use of generative simulation models, typified by agent-based approaches. Generative modelling encourages counter-factual questioning (what if…?), a mode of argument that is particularly important in systems and time-periods, such as the Holocene and now the Anthropocene, where the effects of humans and other biophysical processes are deeply intertwined. However, palaeoecologically focused simulation of the dynamics of the ecosystems of the past either seems to be conducted to assess the applicability of some model to the future or treats humans simplistically as external forcing factors. In this review we consider how generative simulation-modelling approaches could contribute to our understanding of past human-environment interactions. We consider two key issues: the need for null models for understanding past dynamics and the need to be able to learn more from pattern-based analysis. In this light, we argue that there is considerable scope for palaeoecology to benefit from developments in generative models and their evaluation. 
We discuss the view that simulation is a form of experiment and, by using case studies, consider how the many patterns available to palaeoecologists can support model evaluation in a way that moves beyond simplistic pattern-matching and how such models

  20. Experimental Demonstration of 32 Gbaud 4-PAM for Data Center Interconnections of up to 320 km

    DEFF Research Database (Denmark)

    Madsen, Peter; Suhr, Lau Frejstrup; Clausen, Anders

    2017-01-01

    This paper presents experimental results demonstrating a 64 Gbps 4-PAM transmission over a 320 km SSMF span employing standard 80 km fiber spans for metro links. The receiver consists of a LPF and a DFE utilizing the DD-LMS algorithm.

  1. Selecting appropriate dynamic model for elastomeric engine mounts to approximate experimental FRF data of them

    Directory of Open Access Journals (Sweden)

    Jahani K.

    2010-06-01

    Full Text Available In this paper, the capabilities of different dynamic analytical models to approximate experimentally measured FRFs of elastomeric engine mounts of a passenger car are investigated. Artificial neural networks are used to identify the dynamic characteristics of each model. An impact hammer test is implemented to extract the measured FRFs, and harmonic analysis is used to obtain the counterpart response of the models. Here, linear and orthotropic material properties are considered for the elastomeric media. The frequency response functions of the updated models are compared with the experimentally measured ones, and the advantages and limitations of each model in simulating the real dynamic behaviour of elastomeric engine mounts are discussed.

  2. REVIEW OF EXPERIMENTAL CAPABILITIES AND HYDRODYNAMIC DATA FOR VALIDATION OF CFD-BASED PREDICTIONS FOR SLURRY BUBBLE COLUMN REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Donna Post Guillen; Daniel S. Wendt; Steven P. Antal; Michael Z. Podowski

    2007-11-01

    The purpose of this paper is to document the review of several open-literature sources of both experimental capabilities and published hydrodynamic data to aid in the validation of a Computational Fluid Dynamics (CFD) based model of a slurry bubble column (SBC). The review included searching the Web of Science, ISI Proceedings, and Inspec databases, internet searches as well as other open literature sources. The goal of this study was to identify available experimental facilities and relevant data. Integral (i.e., pertaining to the SBC system), as well as fundamental (i.e., separate effects are considered), data are included in the scope of this effort. The fundamental data is needed to validate the individual mechanistic models or closure laws used in a Computational Multiphase Fluid Dynamics (CMFD) simulation of a SBC. The fundamental data is generally focused on simple geometries (i.e., flow between parallel plates or cylindrical pipes) or custom-designed tests to focus on selected interfacial phenomena. Integral data covers the operation of a SBC as a system with coupled effects. This work highlights selected experimental capabilities and data for the purpose of SBC model validation, and is not meant to be an exhaustive summary.

  3. REVIEW OF EXPERIMENTAL CAPABILITIES AND HYDRODYNAMIC DATA FOR VALIDATION OF CFD BASED PREDICTIONS FOR SLURRY BUBBLE COLUMN REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Donna Post Guillen; Daniel S. Wendt

    2007-11-01

    The purpose of this paper is to document the review of several open-literature sources of both experimental capabilities and published hydrodynamic data to aid in the validation of a Computational Fluid Dynamics (CFD) based model of a slurry bubble column (SBC). The review included searching the Web of Science, ISI Proceedings, and Inspec databases, internet searches as well as other open literature sources. The goal of this study was to identify available experimental facilities and relevant data. Integral (i.e., pertaining to the SBC system), as well as fundamental (i.e., separate effects are considered), data are included in the scope of this effort. The fundamental data is needed to validate the individual mechanistic models or closure laws used in a Computational Multiphase Fluid Dynamics (CMFD) simulation of a SBC. The fundamental data is generally focused on simple geometries (i.e., flow between parallel plates or cylindrical pipes) or custom-designed tests to focus on selected interfacial phenomena. Integral data covers the operation of a SBC as a system with coupled effects. This work highlights selected experimental capabilities and data for the purpose of SBC model validation, and is not meant to be an exhaustive summary.

  4. Presentation and comparison of experimental critical heat flux data at conditions prototypical of light water small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Greenwood, M.S., E-mail: 1greenwoodms@ornl.gov; Duarte, J.P.; Corradini, M.

    2017-06-15

    Highlights: • Low mass flux and moderate to high pressure CHF experimental results are presented. • Facility uses chopped-cosine heater profile in a 2 × 2 square bundle geometry. • The EPRI, CISE-GE, and W-3 CHF correlations provide reasonable average CHF prediction. • Neural network analysis predicts experimental data and demonstrates utility of method. - Abstract: The critical heat flux (CHF) is a two-phase flow phenomenon which rapidly degrades the heat transfer performance at a heated surface. This phenomenon is one of the limiting criteria in the design and operation of light water reactors. Deviations in operating parameters greatly alter the CHF condition, which must be experimentally determined for any new parameter ranges such as those proposed for small modular reactors (SMRs) (e.g. moderate to high pressure and low mass fluxes). The current open literature provides too little data for functional use at the proposed conditions of prototypical SMRs. This paper presents a brief summary of CHF data acquired from an experimental facility at the University of Wisconsin-Madison designed and built to study CHF at high pressure and low mass flux ranges in a 2 × 2 chopped-cosine rod bundle prototypical of conceptual SMR designs. The experimental CHF test inlet conditions range over pressures of 8–16 MPa, mass fluxes of 500–1600 kg/m2 s, and inlet water subcooling of 250–650 kJ/kg. The experimental data are also compared against several accepted prediction methods whose application ranges are most similar to the test conditions.
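The comparison of measured CHF against correlation predictions described above is commonly summarized through measured-to-predicted ratio statistics. A minimal sketch, assuming entirely invented CHF values (not the paper's data):

```python
import numpy as np

# Hypothetical measured CHF values and corresponding correlation predictions
# (kW/m^2); illustrative numbers only, not the paper's data.
q_measured = np.array([1520.0, 1610.0, 1480.0, 1395.0, 1550.0])
q_predicted = np.array([1480.0, 1655.0, 1510.0, 1350.0, 1600.0])

# A common figure of merit for CHF correlations is the measured-to-predicted
# ratio: a mean near 1.0 indicates little bias, and the scatter indicates
# how reliably the correlation tracks the data.
ratio = q_measured / q_predicted
print(f"mean M/P ratio : {ratio.mean():.3f}")
print(f"std of ratio   : {ratio.std(ddof=1):.3f}")
print(f"RMS error (%)  : {100 * np.sqrt(np.mean((ratio - 1.0) ** 2)):.2f}")
```

A mean ratio close to unity with modest scatter is the kind of "reasonable average CHF prediction" the highlights attribute to the EPRI, CISE-GE, and W-3 correlations.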

  5. Numerical Validation of a Vortex Model against Experimental Data on a Straight-Bladed Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Eduard Dyachuk

    2015-10-01

    Full Text Available Cyclic blade motion during the operation of vertical axis wind turbines (VAWTs) imposes challenges on simulation models of VAWT aerodynamics. A two-dimensional vortex model is validated against new experimental data on a 12-kW straight-bladed VAWT operated at an open site. The results for the normal force on one blade are analyzed. The model is assessed against the measured data over a wide range of tip speed ratios, from 1.8 to 4.6. The predicted results within one revolution have a similar shape and magnitude to the measured data, though the model does not reproduce every detail of the experimental data. The present model can be used when dimensioning the turbine for maximum loads.

  6. A Hierarchical Modeling Approach to Data Analysis and Study Design in a Multi-Site Experimental fMRI Study

    Science.gov (United States)

    Zhou, Bo; Konstorum, Anna; Duong, Thao; Tieu, Kinh H.; Wells, William M.; Brown, Gregory G.; Stern, Hal S.; Shahbaba, Babak

    2013-01-01

    We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model…
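The nested subjects-within-sites structure the abstract describes can be illustrated with a small simulation. A hedged sketch with invented variance components, not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nested structure from the abstract: subjects within sites, multiple
# observations per subject. All variance values below are made up.
n_sites, n_subj, n_obs = 4, 10, 8
sigma_site, sigma_subj, sigma_eps = 0.5, 0.3, 0.2

site_eff = rng.normal(0.0, sigma_site, n_sites)               # between-site variation
subj_eff = rng.normal(0.0, sigma_subj, (n_sites, n_subj))     # subject-within-site
noise = rng.normal(0.0, sigma_eps, (n_sites, n_subj, n_obs))  # observation noise

# Each observation is the sum of its site effect, subject effect, and noise.
y = site_eff[:, None, None] + subj_eff[:, :, None] + noise

# Site means average over subjects and observations; their spread reflects
# mostly the between-site component that the hierarchical model captures.
site_means = y.mean(axis=(1, 2))
print("empirical variance of site means:", site_means.var(ddof=1))
```

A hierarchical Bayesian fit would place priors over these variance components rather than fixing them; the simulation only shows why between-site variation must be modeled explicitly.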

  7. Experimental data from irradiation of physical detectors disclose weaknesses in basic assumptions of the δ ray theory of track structure

    DEFF Research Database (Denmark)

    Olsen, K. J.; Hansen, Jørgen-Walther

    1985-01-01

    The applicability of track structure theory has been tested by comparing predictions based on the theory with experimental high-LET dose-response data for the amino acid alanine and a nylon-based radiochromic dye film radiation detector. The linear energy transfer, LET, has been varied from 28...

  8. Randomization and Data-Analysis Items in Quality Standards for Single-Case Experimental Studies

    Science.gov (United States)

    Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick

    2015-01-01

    Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…

  9. Research as Pedagogy: Using Experimental Data Collection as a Course Learning Tool

    Science.gov (United States)

    Beard, Virginia; Booke, Paula

    2016-01-01

    Integrating research in the classroom experience is recognized as potentially important in enhancing student learning (Price 2001; Schmid 1992). This article asks if student integration as research subjects augments their learning about political science. A quasi-experimental project focused on media usage, construction, and influences on the…

  10. Extraction of potential energy in charge asymmetry coordinate from experimental fission data

    Energy Technology Data Exchange (ETDEWEB)

    Pasca, H. [Joint Institute for Nuclear Research, Dubna (Russian Federation); ' ' Babes-Bolyai' ' Univ., Cluj-Napoca (Romania); Andreev, A.V.; Adamian, G.G. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Antonenko, N.V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Tomsk Polytechnic Univ. (Russian Federation). Mathematical Physics Dept.

    2016-12-15

    For fissioning isotopes of Ra, Ac, Th, Pa, and U, the potential energies as a function of the charge asymmetry coordinate are extracted from the experimental charge distributions of the fission fragments and compared with the calculated scission-point driving potentials. The role of the potential energy surfaces in the description of the fission charge distribution is discussed. (orig.)

  11. COMPARISON OF EXPERIMENTAL-DESIGNS COMBINING PROCESS AND MIXTURE VARIABLES .2. DESIGN EVALUATION ON MEASURED DATA

    NARCIS (Netherlands)

    DUINEVELD, CAA; SMILDE, AK; DOORNBOS, DA

    The construction of a small experimental design for a combination of process and mixture variables is a problem which has not yet been completely solved. In a previous paper we evaluated some designs with theoretical measures. This second paper evaluates the capabilities of the best of these

  12. COMPARISON OF EXPERIMENTAL-DESIGNS COMBINING PROCESS AND MIXTURE VARIABLES .2. DESIGN EVALUATION ON MEASURED DATA

    NARCIS (Netherlands)

    DUINEVELD, C. A. A.; Smilde, A. K.; Doornbos, D. A.

    1993-01-01

    The construction of a small experimental design for a combination of process and mixture variables is a problem which has not yet been completely solved. In a previous paper we evaluated some designs with theoretical measures. This second paper evaluates the capabilities of the best of these

  13. Investigation of firebrand generation from an experimental fire: Development of a reliable data collection methodology

    Science.gov (United States)

    Jan C. Thomas; Eric V. Mueller; Simon Santamaria; Michael Gallagher; Mohamad El Houssami; Alexander Filkov; Kenneth Clark; Nicholas Skowronski; Rory M. Hadden; William Mell; Albert Simeoni

    2017-01-01

    An experimental approach has been developed to quantify the characteristics and flux of firebrands during a management-scale wildfire in a pine-dominated ecosystem. By characterizing the local fire behavior and measuring the temporal and spatial variation in firebrand collection, the flux of firebrands has been related to the fire behavior for the first time. This...

  14. Does evidence presentation format affect judgment? An experimental evaluation of displays of data for judgments

    NARCIS (Netherlands)

    Sanfey, A.G.; Hastie, R.

    1998-01-01

    Information relevant to a prediction was presented in one of eight formats: a table of numbers, a brief text, a longer biographical story, and five different types of bar graphs. Experimental participants made judgments of marathon finishing times based on information about the runners' ages, prior

  15. A case study in experimental exploration: exploratory data selection at the Large Hadron Collider

    NARCIS (Netherlands)

    Karaca, Koray

    2017-01-01

    In this paper, I propose an account that accommodates the possibility of experimentation being exploratory in cases where the procedures necessary to plan and perform an experiment are dependent on the theoretical accounts of the phenomena under investigation. The present account suggests that

  16. Void lattice formation in electron irradiated CaF{sub 2}: Statistical analysis of experimental data and cellular automata simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zvejnieks, G., E-mail: guntars@latnet.lv; Merzlyakov, P.; Kuzovkov, V.N.; Kotomin, E.A.

    2016-02-01

    Calcium fluoride (CaF{sub 2}) is an important optical material widely used in both microlithography and deep UV windows. It is known that under certain conditions electron beam irradiation can create therein a superlattice consisting of vacancy clusters (called a void lattice). The goal of this paper is twofold. Firstly, to perform a quantitative analysis of experimental TEM images demonstrating void lattice formation, we developed two distinct image filters. As a result, we can easily calculate vacancy concentration, the vacancy cluster distribution function, as well as average distances between defect clusters. The results for the two suggested filters are similar and demonstrate that experimental void cluster growth is accompanied by a slight increase of the void lattice constant. Secondly, we proposed a microscopic model that allows us to reproduce a macroscopic void ordering, in agreement with experimental data, and to resolve existing theoretical and experimental contradictions. Our computer simulations demonstrate that macroscopic void lattice self-organization can occur only in a narrow parameter range. Moreover, we studied the kinetics of void lattice ordering, starting from an initial disordered stage, in good agreement with the TEM experimental data.
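The cluster counting that underlies statistics such as the vacancy cluster distribution function can be illustrated with a toy connected-component labeler on a binarized image. This is a sketch under the assumption of a simple 4-connectivity definition of a cluster; the paper's actual image filters are not reproduced here:

```python
import numpy as np

def label_clusters(mask):
    """Label 4-connected True regions in a 2D boolean mask (a toy stand-in
    for cluster detection in a thresholded TEM image)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:  # depth-first flood fill of one cluster
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = current
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, current

# Tiny synthetic "binarized TEM image" with three separate void clusters.
img = np.array([[1, 1, 0, 0, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 1, 0],
                [1, 0, 0, 0, 0]], dtype=bool)
labels, n = label_clusters(img)
sizes = [int(np.sum(labels == k)) for k in range(1, n + 1)]
print("number of clusters:", n)    # -> 3
print("cluster sizes:", sizes)
```

From the labeled clusters, concentrations and cluster-to-cluster distances of the kind reported in the paper follow by counting pixels and comparing cluster centroids.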

  17. Experimental investigation of the impulse gas injection into liquid and the use of experimental data for verification of the HYDRA-IBRAE/LM thermohydraulic code

    Science.gov (United States)

    Lobanov, P. D.; Usov, E. V.; Butov, A. A.; Pribaturin, N. A.; Mosunova, N. A.; Strizhov, V. F.; Chukhno, V. I.; Kutlimetov, A. E.

    2017-10-01

    Experiments with impulse gas injection into model coolants, such as water or the Rose alloy, performed at the Novosibirsk Branch of the Nuclear Safety Institute, Russian Academy of Sciences, are described. The test facility and the experimental conditions are presented in detail. The dependence of coolant pressure on the injected gas flow and the time of injection was determined. The purpose of these experiments was to verify the physical models of thermohydraulic codes for calculation of the processes that could occur during the rupture of tubes of a steam generator with heavy liquid metal coolant or during fuel rod failure in water-cooled reactors. The experimental results were used for verification of the HYDRA-IBRAE/LM system thermohydraulic code developed at the Nuclear Safety Institute, Russian Academy of Sciences. The models of gas bubble transportation in a vertical channel that are used in the code are described in detail. A two-phase flow pattern diagram and correlations for prediction of the friction of bubbles and slugs as they float up in a vertical channel and of the two-phase flow friction factor are presented. Based on the results of simulation of these experiments using the HYDRA-IBRAE/LM code, the arithmetic mean error in predicted pressures was calculated, and the predictions were analyzed considering the uncertainty in the input data, the geometry of the test facility, and the error of the empirical correlation. The analysis revealed major factors having a considerable effect on the predictions. Recommendations are given on updating the experimental results and improving the models used in the thermohydraulic code.
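As an illustration of the kind of closure correlation the abstract mentions for slugs floating up a vertical channel, the classical Dumitrescu (Davies-Taylor) drift velocity can be sketched; whether HYDRA-IBRAE/LM uses this exact form is an assumption not confirmed by the abstract:

```python
import math

def slug_rise_velocity(diameter_m, g=9.81):
    """Classical Dumitrescu (Davies-Taylor) drift velocity of a Taylor
    bubble (slug) rising through stagnant liquid in a vertical tube:
    u = 0.35 * sqrt(g * D). Shown as an example of a flow-regime closure;
    the code's actual correlations are not reproduced here."""
    return 0.35 * math.sqrt(g * diameter_m)

# 20 mm channel diameter, an assumed illustrative scale (not from the paper):
u = slug_rise_velocity(0.02)
print(f"slug rise velocity: {u:.3f} m/s")
```

Such correlations, together with a flow pattern diagram, let a system code decide whether injected gas travels as dispersed bubbles or slugs and at what relative velocity.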

  18. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    Science.gov (United States)

    2017-03-02

    of using polarization qubits on a chip is coupling the photons from free space spontaneous parametric down-conversion sources to the chip ... respect to regular quantum computer architectures, creating processes without a causal order is an experimental task. Supported by this project we ... definite causal order by measuring a causal witness. This mathematical object incorporates a series of measurements which are designed to yield a

  19. Virtual Diagnostics Interface: Real Time Comparison of Experimental Data and CFD Predictions for a NASA Ares I-Like Vehicle

    Science.gov (United States)

    Schwartz, Richard J.; Fleming, Gary A.

    2007-01-01

    Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real-time within an interactive, three-dimensional virtual environment. The LiveView3D software application was under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D recently was upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV) - like vehicle when tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in December 2006 - January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally-measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.

  20. Immunoinformatics Features Linked to Leishmania Vaccine Development: Data Integration of Experimental and In Silico Studies.

    Science.gov (United States)

    Brito, Rory C F; Guimarães, Frederico G; Velloso, João P L; Corrêa-Oliveira, Rodrigo; Ruiz, Jeronimo C; Reis, Alexandre B; Resende, Daniela M

    2017-02-10

    Leishmaniasis is a wide-spectrum disease caused by parasites of the genus Leishmania. No human vaccine is available, and many studies consider one a potentially effective tool for disease control. To discover novel antigens, computational programs have been used in reverse vaccinology strategies. In this work, we developed an antigen validation approach that integrates prediction of B and T cell epitopes, analysis of Protein-Protein Interaction (PPI) networks, and metabolic pathways. We selected twenty candidate proteins from Leishmania tested in a murine model, with experimental outcomes published in the literature. The predictions for CD4⁺ and CD8⁺ T cell epitopes were correlated with protection in experimental outcomes. We also mapped immunogenic proteins on PPI networks in order to find Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways associated with them. Our results suggest that non-protective antigens have the lowest frequency of predicted CD4⁺ and CD8⁺ T cell epitopes compared with protective ones. CD4⁺ and CD8⁺ T cells are more related to leishmaniasis protection in experimental outcomes than predicted B cell epitopes. Considering the KEGG analysis, the proteins considered protective are connected to nodes with few pathways, including those associated with ribosome biosynthesis and purine metabolism.
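The frequency comparison described above (protective vs. non-protective antigens by predicted T cell epitope count) can be sketched with invented numbers; the arrays below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical predicted CD4+/CD8+ T-cell epitope counts per candidate
# protein for the two experimental-outcome groups (invented values).
protective     = np.array([14, 11, 17, 12, 15, 13])
non_protective = np.array([6, 9, 5, 8, 7])

print("mean epitopes, protective    :", protective.mean())
print("mean epitopes, non-protective:", non_protective.mean())

# The abstract's observation corresponds to the protective group showing
# the higher mean predicted epitope frequency:
print("protective > non-protective  :", protective.mean() > non_protective.mean())
```

In a real analysis the group difference would be assessed with an appropriate statistical test rather than a bare mean comparison; the sketch only mirrors the direction of the reported result.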

  1. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DEFF Research Database (Denmark)

    Morell, William C.; Birkel, Garrett W.; Forrer, Mark

    2017-01-01

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high quality data to be parametrized and tested, which are not gener...

  2. Experimental, water droplet impingement data on two-dimensional airfoils, axisymmetric inlet and Boeing 737-300 engine inlet

    Science.gov (United States)

    Papadakis, M.; Elangovan, E.; Freund, G. A., Jr.; Breer, M. D.

    1987-01-01

    An experimental method has been developed to determine the droplet impingement characteristics on two- and three-dimensional bodies. The experimental results provide the essential droplet impingement data required to validate particle trajectory codes used in aircraft icing analyses and engine inlet particle separator analyses. A body whose water droplet impingement characteristics are required is covered at strategic locations by thin strips of moisture-absorbing (blotter) paper, and then exposed to an air stream containing a dyed-water spray cloud. Water droplet impingement data are extracted from the dyed blotter strips by measuring the optical reflectance of the dye deposit on the strips, using an automated reflectometer. Impingement efficiency data obtained for a NACA 65(2)015 airfoil section, a supercritical airfoil section, and Boeing 737-300 and axisymmetric inlet models are presented in this paper.

  3. Experimental data of biomaterial derived from Malva sylvestris and charcoal tablet powder for Hg2+ removal from aqueous solutions

    Directory of Open Access Journals (Sweden)

    Alireza Rahbar

    2016-09-01

    Full Text Available In this experimental data article, a novel biomaterial was prepared from Malva sylvestris and its properties were characterized using various instrumental techniques. The effects of the operating parameters pH and adsorbent dose on Hg2+ adsorption from aqueous solution using M. sylvestris powder (MSP) were compared with charcoal tablet powder (CTP), a medicinal drug. The data acquired showed that M. sylvestris is a viable and very promising alternative adsorbent for Hg2+ removal from aqueous solutions. The experimental data suggest that MSP is a potential adsorbent for use in medicine for the treatment of heavy-metal poisoning; however, application in animal models is a necessary step before the eventual application of MSP in situations involving humans.

  4. SMEX04 Walnut Gulch Experimental Watershed Soil Moisture Data: Arizona, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — Notice to Data Users: The documentation for this data set was provided solely by the Principal Investigator(s) and was not further developed, thoroughly reviewed, or...

  5. 40 CFR 158.2082 - Experimental use permit biochemical pesticides residue data requirements table.

    Science.gov (United States)

    2010-07-01

    ... biochemical pesticide products when Tier II or Tier III toxicology data are required, as specified for... pesticides residue data requirements table. 158.2082 Section 158.2082 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Biochemical Pesticides...

  6. Experimental Peptide Identification Repository (EPIR): an integrated peptide-centric platform for validation and mining of tandem mass spectrometry data

    DEFF Research Database (Denmark)

    Kristensen, Dan Bach; Brønd, Jan Christian; Nielsen, Peter Aagaard

    2004-01-01

    LC MS/MS has become an established technology in proteomic studies, and with the maturation of the technology the bottleneck has shifted from data generation to data validation and mining. To address this bottleneck we developed the Experimental Peptide Identification Repository (EPIR), which ... information. In the present study, the utility of EPIR and associated software tools is demonstrated on LC MS/MS data derived from a set of model proteins and complex protein mixtures derived from MCF-7 breast cancer cells. Emphasis is placed on the key strengths of EPIR, including the ability to validate...

  7. Investigation on the quasifission process by theoretical analysis of experimental data of fissionlike reaction products

    Energy Technology Data Exchange (ETDEWEB)

    Giardina, G; Mandaglio, G; Curciarello, F; Leo, V De; Fazio, G; Manganaro, M; Romaniuk, M [Dipartimento di Fisica, Universita di Messina, I-98166 Messina (Italy); Nasirov, A K [Joint Institute for Nuclear Research, 141980, Dubna (Russian Federation); Sacca, C, E-mail: giardina@nucleo.unime.it [Dipartimento di Scienze della Terra, Universita di Messina, I-98166 Messina (Italy)

    2011-02-01

    The hindrance to complete fusion is a phenomenon present in most capture events in reactions with massive nuclei. This phenomenon is due to the onset of the quasifission process, which competes with complete fusion during the evolution of the composite system formed at the capture stage. The branching ratio between quasifission and complete fusion strongly depends on various characteristics of the reacting nuclei in the entrance channel. The experimental and theoretical investigation of the reaction dynamics connected with the formation of the composite system is nowadays a main subject of nuclear reaction studies. There is ambiguity in establishing the reaction mechanism leading to the observed binary fissionlike fragments. The correct estimation of the fusion probability is important in planning experiments for the synthesis of superheavy elements. The experimental determination of evaporation residues alone is not enough to reconstruct the true reaction dynamics. The experimental observation of fissionlike fragments alone cannot ensure a correct distinction between products of the quasifission, fast fission, and fusion-fission processes, which overlap in the mass (angular, kinetic energy) distributions of fragments. In this paper we consider a wide set of reactions (with different mass asymmetry and mass symmetry parameters) with the aim of explaining the role played by many quantities in the reaction mechanisms. We also present the results of a study of the {sup 48}Ca+{sup 249}Bk reaction used to synthesize superheavy nuclei with Z = 117, through the determination of the evaporation residue cross sections and the effective fission barriers < B{sub f} > of excited nuclei formed along the de-excitation cascade of the compound nucleus.

  8. CORRELATION OF EXPERIMENTAL AND THEORETICAL DATA FOR MANTLE TANKS USED IN LOW FLOW SDHW SYSTEMS

    DEFF Research Database (Denmark)

    Shah, Louise Jivan; Furbo, Simon

    1997-01-01

    The model is validated against the experimental tests, and good agreement between measured and calculated results is achieved. The results from the CFD-calculations are used to illustrate the thermal behaviour and the fluid dynamics in the mantle and in the hot water tank. With the CFD-calculations, a detailed analysis of the heat transfer from the solar collector fluid to the wall of the hot water tank is performed. The analysis has resulted in a correlation for the heat transfer between the solar collector fluid and the wall of the hot water...

  9. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S. [Paul-Scherrer-Institute Wuerenlingen and Villigen, Villigen (Switzerland); Arbuzov, A. [Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics; Balossini, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Pavia (IT)] (and others)

    2009-12-15

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e{sup +}e{sup -} colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba, and on {tau} decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained both with energy scans and with radiative return, to determine luminosities and {tau} decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  10. Experimental Water Droplet Impingement Data on Airfoils, Simulated Ice Shapes, an Engine Inlet and a Finite Wing

    Science.gov (United States)

    Papadakis, M.; Breer, M.; Craig, N.; Liu, X.

    1994-01-01

    An experimental method has been developed to determine the water droplet impingement characteristics on two- and three-dimensional aircraft surfaces. The experimental water droplet impingement data are used to validate particle trajectory analysis codes that are used in aircraft icing analyses and engine inlet particle separator analyses. The aircraft surface is covered with thin strips of blotter paper in areas of interest. The surface is then exposed to an airstream that contains a dyed-water spray cloud. The water droplet impingement data are extracted from the dyed blotter paper strips by measuring the optical reflectance of each strip with an automated reflectometer. Experimental impingement efficiency data are presented for a NLF(1)-0414 airfoil, a swept MS(1)-0317 airfoil, a Boeing 737-300 engine inlet model, two simulated ice shapes and a swept NACA 0012 wingtip. Analytical impingement efficiency data are also presented for the NLF(1)-0414 airfoil and the Boeing 737-300 engine inlet model.
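The impingement efficiency reported from such tests is conventionally the local water catch normalized by the free-stream water flux. A minimal sketch with invented values; the reflectance-to-catch calibration used with the blotter strips is assumed, not reproduced:

```python
import numpy as np

def impingement_efficiency(local_catch, lwc, v_inf):
    """Local impingement efficiency beta = (local water catch flux) /
    (liquid water content * freestream speed), i.e. the catch at a surface
    station relative to the free-stream water flux. The calibration that
    converts blotter-strip reflectance into `local_catch` is assumed."""
    return local_catch / (lwc * v_inf)

# Invented example values: catch flux in kg/(m^2 s), LWC in kg/m^3, V in m/s.
lwc, v_inf = 0.5e-3, 78.0
catch = np.array([2.0e-2, 1.4e-2, 0.6e-2, 0.1e-2])  # surface stations
beta = impingement_efficiency(catch, lwc, v_inf)
print("beta along surface:", np.round(beta, 3))
```

Plotting beta against surface distance gives the impingement efficiency curves that the trajectory codes are validated against; beta peaks near the stagnation region and falls toward the impingement limits.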

  11. Review of nuclear data improvement needs for nuclear radiation measurement techniques used at the CEA experimental reactor facilities

    Directory of Open Access Journals (Sweden)

    Destouches Christophe

    2016-01-01

    Full Text Available The constant improvement of the neutron and gamma calculation codes used in experimental nuclear reactors goes hand in hand with that of the associated nuclear data libraries. The validation of these calculation schemes always requires confrontation with integral experiments performed in experimental reactors. Nuclear data of interest, whether direct, such as cross sections, or derived, such as reactivity, are always obtained from a reaction rate measurement, which is the only measurable parameter in a nuclear sensor. So, in order to derive physical parameters from the electric signal of the sensor, one needs specific nuclear data libraries. This paper presents successively the main features of the measurement techniques used in the CEA experimental reactor facilities for on-line and off-line neutron/gamma flux characterization: reactor dosimetry, neutron flux measurements with miniature fission chambers and Self-Powered Neutron Detectors (SPNDs), and gamma flux measurements with ionization chambers and TLDs. For each technique, the nuclear data necessary for its interpretation are presented, the main identified needs for improvement are discussed, and their impact on the quality of the measurement is analysed. Finally, a synthesis of the study is given.

  12. Comparison between 2D turbulence model ESEL and experimental data from AUG and COMPASS tokamaks

    DEFF Research Database (Denmark)

    Ondac, Peter; Horacek, Jan; Seidl, Jakub

    2015-01-01

    In this article we have used the 2D fluid turbulence numerical model, ESEL, to simulate turbulent transport in edge tokamak plasma. Basic plasma parameters from the ASDEX Upgrade and COMPASS tokamaks are used as input for the model, and the output is compared with experimental observations obtained by reciprocating probe measurements from the two machines. Agreements were found in the radial profiles of mean plasma potential and temperature, and in the level of density fluctuations. Disagreements, however, were found in the level of plasma potential and temperature fluctuations. This implies a need for an extension of the ESEL model from 2D to 3D to fully resolve the parallel dynamics, and the coupling from the plasma to the sheath.

  13. 40 CFR 158.243 - Experimental use permit data requirements for terrestrial and aquatic nontarget organisms.

    Science.gov (United States)

    2010-07-01

    ... either one waterfowl species or one upland game bird species for terrestrial, aquatic, forestry, and... greenhouse uses. 4. Data are required on waterfowl and upland game bird species. 5. Data are required on one... indoor and greenhouse uses, testing with only one of either fish species is required. 6. EP or TEP...

  14. 40 CFR 158.2172 - Experimental use permit microbial pesticides residue data requirements table.

    Science.gov (United States)

    2010-07-01

    ... results of testing: i. Indicate the potential to cause adverse human health effects or the product... pesticides residue data requirements table. 158.2172 Section 158.2172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158...

  15. Using regional broccoli trial data to select experimental hybrids for input into advanced yield trials

    Science.gov (United States)

    A large amount of phenotypic trait data are being generated in regional trials that are implemented as part of the Specialty Crop Research Initiative (SCRI) project entitled “Establishing an Eastern Broccoli Industry”. These data are used to identify the best entries in the trials for inclusion in ...

  16. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    Science.gov (United States)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  17. Comparison between a new TRNSYS model and experimental data of phase change materials in a solar combisystem

    Energy Technology Data Exchange (ETDEWEB)

    Bony, J.; Citherlet, S.

    2007-07-01

    In the framework of the IEA Task 32 (Solar Heating and Cooling Programme), we developed a numerical model to simulate heat transfer in phase change materials (PCM) and compared it with experimental data. The analyzed system is bulk PCM immersed in the water tank storage of a solar combisystem (heating and domestic hot water production). The numerical model, based on the enthalpy approach, takes into account hysteresis and subcooling characteristics as well as conduction and convection in the PCM. This model has been implemented in an existing TRNSYS type of water tank storage. The simulations have been compared with experimental data obtained with a solar installation using a water tank storage of about 900 litres, already studied during the IEA Task 26 (Weiss 2003). (author)

  18. Ignition and Growth Modeling of Detonating LX-04 (85% HMX / 15% VITON) Using New and Previously Obtained Experimental Data

    Science.gov (United States)

    Tarver, Craig

    2017-06-01

    An Ignition and Growth reactive flow model for detonating LX-04 (85% HMX / 15% Viton) was developed using new and previously obtained experimental data on: cylinder test expansion; wave curvature; failure diameter; and laser interferometric copper and tantalum foil free surface velocities and LiF interface particle velocity histories. A reaction product JWL EOS generated by the CHEETAH code compared favorably with the existing, well normalized LX-04 product JWL when both were used with the Ignition and Growth model. Good agreement with all existing experimental data was obtained. Keywords: LX-04, HMX, detonation, Ignition and Growth. PACS: 82.33.Vx, 82.40.Fp. This work was performed under the auspices of the U. S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  19. SPoRT: Transitioning NASA and NOAA Experimental Data to the Operational Weather Community

    Science.gov (United States)

    Jedlovec, Gary J.

    2013-01-01

    Established in 2002 to demonstrate the weather and forecasting application of real-time EOS measurements, the NASA Short-term Prediction Research and Transition (SPoRT) program has grown to be an end-to-end research to operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral data from EOS satellites to improve short-term weather forecasts on a regional and local scale. With the ever-broadening application of real-time high resolution satellite data from current EOS, Suomi NPP, and planned JPSS and GOES-R sensors to weather forecast problems, significant challenges arise in the acquisition, delivery, and integration of the new capabilities into the decision making process of the operational weather community. For polar-orbiting sensors such as MODIS, AIRS, VIIRS, and CrIS, the use of direct broadcast ground stations is key to the real-time delivery of the data and derived products in a timely fashion. With the ABI on the geostationary GOES-R satellite, the data volumes will likely increase by a factor of 5-10 from current data streams. However, the high data volume and limited bandwidth of end user facilities present a formidable obstacle to timely access to the data. This challenge can be addressed through the use of subsetting techniques, innovative web services, and the judicious selection of data formats. Many of these approaches have been implemented by SPoRT for the delivery of real-time products to NWS forecast offices and other weather entities. Once available in decision support systems like AWIPS II, these new data and products must be integrated into existing and new displays that allow for the integration of the data with existing operational products in these systems. SPoRT is leading the way in demonstrating this enhanced capability. This paper will highlight the ways SPoRT is overcoming many of the challenges presented by the enormous data

  20. Validation of the actuator disc and actuator line techniques for yawed rotor flows using the New Mexico experimental data

    DEFF Research Database (Denmark)

    Breton, S. P.; Shen, Wen Zhong; Ivanell, S.

    2017-01-01

    Experimental data acquired in the New Mexico experiment on a yawed 4.5 m diameter rotor model turbine are used here to validate the actuator line (AL) and actuator disc (AD) models implemented in the Large Eddy Simulation code EllipSys3D in terms of loading and velocity field. Even without modelling ... and blade loading of the New Mexico rotor under yawed inflow.

  1. Experimental demonstration of a format-flexible single-carrier coherent receiver using data-aided digital signal processing.

    Science.gov (United States)

    Elschner, Robert; Frey, Felix; Meuer, Christian; Fischer, Johannes Karl; Alreesh, Saleem; Schmidt-Langhorst, Carsten; Molle, Lutz; Tanimura, Takahito; Schubert, Colja

    2012-12-17

    We experimentally demonstrate the use of data-aided digital signal processing for format-flexible coherent reception of different 28-GBd PDM and 4D modulated signals in WDM transmission experiments over up to 7680 km SSMF by using the same resource-efficient digital signal processing algorithms for the equalization of all formats. Stable and regular performance in the nonlinear transmission regime is confirmed.

  2. Evaluation of the 235U prompt fission neutron spectrum including a detailed analysis of experimental data and improved model information

    Directory of Open Access Journals (Sweden)

    Neudecker Denise

    2017-01-01

    We present an evaluation of the 235U prompt fission neutron spectrum (PFNS) induced by thermal to 20-MeV neutrons. Experimental data and associated covariances were analyzed in detail. The incident energy dependence of the PFNS was modeled with an extended Los Alamos model combined with the Hauser-Feshbach and the exciton models. These models describe prompt fission, pre-fission compound nucleus and pre-equilibrium neutron emissions. The evaluated PFNS agree well with the experimental data included in this evaluation, preliminary data of the LANL and LLNL Chi-Nu measurement and recent evaluations by Capote et al. and Rising et al. However, they are softer than the ENDF/B-VII.1 (VII.1) and JENDL-4.0 PFNS for incident neutron energies up to 2 MeV. Simulated effective multiplication factors keff of the Godiva and Flattop-25 critical assemblies are further from the measured keff if the current data are used within VII.1 compared to using only VII.1 data. However, if this work is used with ENDF/B-VIII.0β2 data, simulated values of keff agree well with the measured ones.
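
    The evaluated spectra above come from an extended Los Alamos model, which is beyond a quick sketch. As a hedged illustration of what a prompt fission neutron spectrum looks like, the simpler Watt form with commonly quoted thermal-235U parameters (a ≈ 0.988 MeV, b ≈ 2.249 MeV⁻¹; textbook baseline values, not the evaluation discussed above) can be normalized and sanity-checked numerically:

```python
import numpy as np

def watt_spectrum(E, a=0.988, b=2.249):
    # Unnormalized Watt form chi(E) ~ exp(-E/a) * sinh(sqrt(b*E));
    # a [MeV] and b [1/MeV] are commonly quoted thermal-235U parameters.
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

# Uniform energy grid in MeV; a rectangle-rule integral is ample at this resolution.
E = np.linspace(1e-6, 30.0, 300_000)
dE = E[1] - E[0]
chi = watt_spectrum(E)
chi /= chi.sum() * dE               # normalize so the spectrum integrates to 1

# Analytic mean of the Watt form is 3a/2 + a**2 * b / 4, about 2.03 MeV here.
mean_energy = (E * chi).sum() * dE
```

    A "softer" PFNS, in the sense reported above relative to ENDF/B-VII.1, shifts weight toward lower emission energies and hence lowers this mean.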

  3. Experimental processing of a model data set using Geobit seismic software

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Sang Yong [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    A seismic data processing software, Geobit, has been developed and is continuously updated to implement newer processing techniques and to support more hardware platforms. Geobit is intended to support all Unix platforms ranging from PC to CRAY. The current version supports two platforms, i.e., PC/Linux and Sun Sparc-based SunOS 4.1.x. PC/Linux has attracted geophysicists at some universities, who tried to install Geobit in their laboratories as a research tool. However, one of the problems is the difficulty of getting seismic data. The primary reason is its huge volume. The field data are too bulky to fit on relatively small storage media, such as PC disks. To solve the problem, KIGAM released a model seismic data set via ftp.kigam.re.kr. This study has two purposes. The first is testing the Geobit software for its suitability in seismic data processing. The test includes reproducing the model through seismic data processing. If it fails to reproduce the original model, the software is considered buggy and incomplete. However, if it can successfully reproduce the input model, I would be proud of what I have accomplished over the last few years in writing Geobit. The second purpose is to give a guide on Geobit usage by providing an example set of job files needed to process a given data set. This example will help scientists lacking Geobit experience to concentrate on their study more easily. Once they know the Geobit processing technique, and later on Geobit programming, they can implement their own processing ideas, contributing newer technologies to Geobit. The complete Geobit job files needed to process the model data are written in the following job sequence: (1) data loading, (2) CDP sort, (3) decon analysis, (4) velocity analysis, (5) decon verification, (6) stack, (7) filter analysis, (8) filtered stack, (9) time migration, (10) depth migration. The control variables in the job files are discussed. (author). 10 figs., 1 tab.

  4. Empowering students' scientific reasoning about energy through experimentation and data analyses

    Science.gov (United States)

    Abdelkareem, Hasan

    The goal of this study was to explore how middle school learners reason from the scientific data that they collect while conducting energy-related activities. Specifically, the targeted data analysis skills were: realizing the variables of investigation, finding patterns in data, and developing model-based reasoning about energy. The study was conducted as part of a science curriculum design project called IQWST (Investigation and Questioning our World through Science and Technology). Two experienced science teachers and their 30 middle school students from an independent school participated in this study by piloting a designed unit about energy. Data were collected through classroom observations and in-depth clinical interviews. These interviews were held toward the end of the energy unit enactment with a focus group of six students. Participants responded to two types of questions: direct data questions that were similar to the investigations they conducted in the classroom, and indirect data questions that were represented by energy scenarios. Students' reasoning from data was then analyzed and synthesized using a special coding technique in order to answer the research questions. Findings of this research have shown three types of results: (a) Middle school learners showed a tendency to use force-dynamic causation about energy (e.g., energy is associated with living things, an enabler to do work and activity, and a materialized thing that moves like fluid); very few students were able to reason about energy from a model-based perspective. (b) Participants were able to reason about scientific data on the local level (i.e., connecting data with direct observations), but rarely were they able to transform these local skills into global reasoning that implies developing models about energy. (c) Although the two science teachers were able to enact the designed activities through a dialogical approach, there was no evidence that they had a tendency to reinforce model-based reasoning.

  5. COMPARISON BETWEEN 2D TURBULENCE MODEL ESEL AND EXPERIMENTAL DATA FROM AUG AND COMPASS TOKAMAKS

    Directory of Open Access Journals (Sweden)

    Peter Ondac

    2015-04-01

    In this article we have used the 2D fluid turbulence numerical model, ESEL, to simulate turbulent transport in edge tokamak plasma. Basic plasma parameters from the ASDEX Upgrade and COMPASS tokamaks are used as input for the model, and the output is compared with experimental observations obtained by reciprocating probe measurements from the two machines. Agreements were found in radial profiles of mean plasma potential and temperature, and in a level of density fluctuations. Disagreements, however, were found in the level of plasma potential and temperature fluctuations. This implicates a need for an extension of the ESEL model from 2D to 3D to fully resolve the parallel dynamics, and the coupling from the plasma to the sheath.

  6. Experimental data on the properties of natural fiber particle reinforced polymer composite material

    Directory of Open Access Journals (Sweden)

    D. Chandramohan

    2017-08-01

    This paper presents an experimental study on the development of polymer bio-composites. Powdered coconut shell, walnut shell and rice husk are used as reinforcements with bio epoxy resin to form hybrid composite specimens. The fiber composition in each specimen is 1:1, while the resin-to-hardener ratio is 10:1. The fabricated composites were tested as per ASTM standards to evaluate mechanical properties such as tensile strength, flexural strength, shear strength and impact strength, both with and without moisture. The test results show that the hybrid composite has far better properties than a single glass-fibre reinforced composite under mechanical loads. It is also found that the incorporation of walnut shell and coconut shell fibre can improve the properties.

  7. Calibration of SWAT model for woody plant encroachment using paired experimental watershed data

    Science.gov (United States)

    Qiao, Lei; Zou, Chris B.; Will, Rodney E.; Stebler, Elaine

    2015-04-01

    Globally, rangeland has been undergoing a transition from herbaceous-dominated grasslands into tree- or shrub-dominated woodlands, with great uncertainty in the associated changes in water budget. Previous modeling studies simulated the impact of woody plant encroachment on hydrological processes using models calibrated and constrained primarily by historic streamflow from intermediate-sized watersheds. In this study, we calibrated the Soil and Water Assessment Tool (SWAT), a widely used model for cropping and grazing systems, for a prolifically encroaching juniper species, eastern redcedar (Juniperus virginiana), in the south-central Great Plains, using species-specific biophysical and hydrological parameters and in situ meteorological forcing from three pairs of experimental watersheds (grassland versus eastern redcedar woodland) over a 3-year period covering a dry-to-wet cycle. The multiple paired watersheds eliminated the potentially confounding edaphic and topographic influences on changes in hydrological processes related to woody encroachment. The SWAT model was optimized with the Shuffled Complexes with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) method. The mean Nash-Sutcliffe coefficient (NSCE) values of the calibrated model for daily and monthly runoff from the experimental watersheds reached 0.96 and 0.97 for grassland, respectively, and 0.90 and 0.84 for eastern redcedar woodland, respectively. We then validated the calibrated model with a nearby, larger watershed undergoing rapid eastern redcedar encroachment. The NSCE value for monthly streamflow over a period of 22 years was 0.79. We provide detailed biophysical and hydrological parameters for tallgrass prairie under moderate grazing and eastern redcedar, which can be used to calibrate any model for further validation and application by the hydrologic modeling community.
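
    The Nash-Sutcliffe coefficient quoted above has a standard closed form: NSCE = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))². A minimal sketch (array names are illustrative, not from the SWAT tooling):

```python
import numpy as np

def nsce(observed, simulated):
    # Nash-Sutcliffe coefficient: 1.0 is a perfect fit; 0.0 means the model
    # does no better than always predicting the observed mean.
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    spread = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / spread

# Example with made-up monthly runoff values (arbitrary units):
obs = [1.2, 3.4, 0.8, 5.1, 2.2, 4.0]
print(nsce(obs, obs))                    # perfect match gives 1.0
print(nsce(obs, [np.mean(obs)] * 6))     # predicting the mean gives 0.0
```

    Values can go negative when the simulation is worse than simply predicting the observed mean, which is why the 0.79-0.97 range above indicates a good calibration.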

  8. Dynamic stromal hydration during phacoemulsification: a comparative study with experimental data

    Directory of Open Access Journals (Sweden)

    Erhan Özyol

    2017-03-01

    AIM: To present a different approach called dynamic stromal hydration. Though common, the conventional hydration technique should be standardized to ascertain wound integrity at the time of stromal hydration during cataract surgery, as no explicit criteria exist to suggest that hydration of wound edges is adequate. METHODS: This study was designed as a prospective, randomized, comparative study. Leakage sites were detected by continuous irrigation. At that point, stromal hydration was performed in consideration of the leakage points. The wound edges were hydrated until no further leakage could be visually detected. Trypan blue 0.0125% was applied over the wound sites, and each wound was individually observed for leakage. On the day after surgery, Seidel's test was performed to assess wound integrity. RESULTS: All 120 eyes in the experimental group were evaluated, including all 360 wound sites (120 left side ports, 120 right side ports, and 120 main incisions), as were all 120 eyes in the control group. The dye test revealed leakage of aqueous humour from only 29 wound sites of 22 eyes (8.0% of 360 wounds) in the experimental group, whereas leakage appeared in 41 wound sites of 30 eyes (11.3% of 360 wounds) in the control group. The difference in leakage between the groups was statistically significant (P=0.042). CONCLUSION: Dynamic stromal hydration, meaning standardized conventional stromal hydration, is a direct observational technique that allows easy evaluation of wound integrity at the time of stromal hydration by way of observing wound dynamics.

  9. Two-loop contribution to the pion transition form factor vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Mikhailov, S.V.; Stefanis, N.G. [Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna (Russian Federation)

    2010-01-15

    We present predictions for the pion-photon transition form factor, which is derived with the help of light-cone sum rules and includes the main part of the NNLO radiative corrections. We show that, when the Bakulev-Mikhailov-Stefanis (BMS) pion distribution amplitude is used, the obtained predictions agree well with the CELLO and the CLEO data. We found that no model distribution amplitude can reproduce the observed Q² growth of the new BaBar data, though the BMS model complies with several BaBar data points.

  10. Two-loop contribution to the pion transition form factor vs. experimental data

    Science.gov (United States)

    Mikhailov, S. V.; Stefanis, N. G.

    2010-01-01

    We present predictions for the pion-photon transition form factor, which is derived with the help of light-cone sum rules and includes the main part of the NNLO radiative corrections. We show that, when the Bakulev-Mikhailov-Stefanis (BMS) pion distribution amplitude is used, the obtained predictions agree well with the CELLO and the CLEO data. We found that no model distribution amplitude can reproduce the observed Q² growth of the new BaBar data, though the BMS model complies with several BaBar data points.

  11. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    Science.gov (United States)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  12. Portable audio magnetotellurics - experimental measurements and joint inversion with radiomagnetotelluric data from Gotland, Sweden

    Science.gov (United States)

    Shan, Chunling; Kalscheuer, Thomas; Pedersen, Laust B.; Erlström, Mikael; Persson, Lena

    2017-08-01

    Field setup of an audio magnetotelluric (AMT) station is very time consuming and involves a heavy workload. In contrast, radio magnetotelluric (RMT) equipment is more portable and faster to deploy but has a shallower investigation depth owing to its higher signal frequencies. To increase the efficiency of acquiring AMT data from 10 to 300 Hz, we introduce a modification of the AMT method, called portable audio magnetotellurics (PAMT), that uses a lighter AMT field system and (owing to the disregard of signals at frequencies below 10 Hz) a shortened data acquisition time. PAMT uses three magnetometers pre-mounted on a rigid frame to measure magnetic fields and steel electrodes to measure electric fields. Field tests proved that the system is stable enough to measure AMT fields in the given frequency range. A PAMT test measurement was carried out on Gotland, Sweden along a 3.5 km profile to study the ground conductivity and to map shallow Silurian marlstone and limestone formations, deeper Silurian, Ordovician and Cambrian sedimentary structures and crystalline basement. RMT data collected along a coincident profile and regional airborne very low frequency (VLF) data support the interpretation of our PAMT data. While only the RMT and VLF data constrain a shallow (20-50 m deep) transition between Silurian conductive marlstone and resistive (about 1000 Ωm) limestone, the single-method inversion models of both the PAMT and the RMT data show a transition into a conductive layer of 3 to 30 Ωm resistivity at about 80 m depth, suggesting the compatibility of the two data sets. This conductive layer is interpreted as a saltwater-saturated succession of Silurian, Ordovician and Cambrian sedimentary units. Towards the lower boundary of this succession (at about 600 m depth according to boreholes), only the PAMT data constrain the structure. As supported by modelling tests and sensitivity analysis, the PAMT data contain only a vague indication of the underlying crystalline basement. A PAMT and RMT
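
    The depth-of-investigation contrast between PAMT (10-300 Hz) and RMT described above follows from the electromagnetic skin depth, δ = sqrt(2ρ/(ωμ0)). A quick sketch using the roughly 1000 Ωm limestone resistivity from the profile (the 14 kHz value is only an illustrative radio-transmitter frequency, not one from the survey):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(resistivity_ohm_m, freq_hz):
    # delta = sqrt(2 / (omega * mu0 * sigma)) = sqrt(2 * rho / (omega * mu0))
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity_ohm_m / (omega * MU0))

print(skin_depth(1000.0, 10.0))      # ~5 km at the low end of the PAMT band
print(skin_depth(1000.0, 300.0))     # ~0.9 km at the top of the band
print(skin_depth(1000.0, 14_000.0))  # ~130 m at an illustrative RMT frequency
```

    Lower frequencies penetrate deeper, which is why disregarding signals below 10 Hz trades ultimate depth for a lighter, faster field system.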

  13. Global performance parameters for different pneumatic bioreactors operating with water and glycerol solution: experimental data and CFD simulation.

    Science.gov (United States)

    Rodriguez, G Y; Valverde-Ramírez, M; Mendes, C E; Béttega, R; Badino, A C

    2015-11-01

    Global variables play a key role in evaluation of the performance of pneumatic bioreactors and provide criteria to assist in system selection and design. The purpose of this work was to use experimental data and computational fluid dynamics (CFD) simulations to determine the global performance parameters gas holdup (εG) and volumetric oxygen transfer coefficient (kLa), and to conduct an analysis of liquid circulation velocity, for three different geometries of pneumatic bioreactors: bubble column, concentric-tube airlift, and split-tube airlift. All the systems had 5 L working volumes, and two Newtonian fluids of different viscosities were used in the experiments: distilled water and a 10 cP glycerol solution. Considering the high oxygen demand in certain types of aerobic fermentations, the assays were carried out at high flow rates. In the present study, the performances of three pneumatic bioreactors with different geometries and operating with two different Newtonian fluids were compared. A new CFD modeling procedure was implemented, and the simulation results were compared with the experimental data. The findings indicated that the concentric-tube airlift design was the best choice in terms of both gas holdup and volumetric oxygen transfer coefficient. The CFD results for gas holdup were consistent with the experimental data, and indicated that kLa was strongly influenced by bubble diameter and shape.
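
    The abstract does not spell out how kLa was measured; a common laboratory approach, shown here only as a hedged sketch on synthetic data, is the dynamic gassing-in method, in which dissolved-oxygen recovery follows dC/dt = kLa (C* - C):

```python
import numpy as np

# Dynamic gassing-in: dC/dt = kla * (C_sat - C)  =>  C(t) = C_sat * (1 - exp(-kla*t)),
# so ln(1 - C/C_sat) is linear in t with slope -kla.
kla_true = 0.02   # 1/s  (illustrative value, not from the paper)
c_sat = 7.5       # mg/L (illustrative saturation concentration)

t = np.linspace(0.0, 120.0, 60)                # time in seconds
c = c_sat * (1.0 - np.exp(-kla_true * t))      # synthetic, noise-free DO trace

slope, intercept = np.polyfit(t, np.log(1.0 - c / c_sat), 1)
kla_est = -slope                               # recovers kla_true
```

    With real probe data, the same log-linear fit is restricted to the part of the trace where the probe response time and measurement noise do not dominate.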

  14. Comparison of neurofuzzy logic and neural networks in modelling experimental data of an immediate release tablet formulation.

    Science.gov (United States)

    Shao, Qun; Rowe, Raymond C; York, Peter

    2006-08-01

    This study compares the performance of neurofuzzy logic and neural networks using two software packages (INForm and FormRules) in generating predictive models for a published database for an immediate release tablet formulation. Both approaches were successful in developing good predictive models for tablet tensile strength and drug dissolution profiles. While neural networks demonstrated a slightly superior capability in predicting unseen data, neurofuzzy logic had the added advantage of generating rule sets representing the cause-effect relationships contained in the experimental data.

  15. Data mining of fractured experimental data using neurofuzzy logic-discovering and integrating knowledge hidden in multiple formulation databases for a fluid-bed granulation process.

    Science.gov (United States)

    Shao, Q; Rowe, R C; York, P

    2008-06-01

    In the pharmaceutical field, current practice in gaining process understanding by data analysis or knowledge discovery has generally focused on dealing with single experimental databases. This limits the level of knowledge extracted in situations where data from a number of sources, so-called fractured data, contain interrelated information. This situation is particularly relevant for complex processes involving a number of operating variables, such as fluid-bed granulation. This study investigated three data mining strategies to discover and integrate knowledge "hidden" in a number of small experimental databases for a fluid-bed granulation process using neurofuzzy logic technology. Results showed that more comprehensive domain knowledge was discovered from multiple databases via an appropriate data mining strategy. This study also demonstrated that the textual information excluded from individual databases was a critical parameter and often acted as the precondition for integrating knowledge extracted from different databases. Consequently, generic knowledge of the domain was discovered, leading to an improved understanding of the granulation process.

  16. Adjusting for heterogeneity of experimental data in genetic evaluation of dry matter intake in dairy cattle.

    Science.gov (United States)

    Uddin, M E; Meuwissen, T; Veerkamp, R F

    2017-11-20

    The objectives of the present study were (i) to find the best-fitting model for repeatedly measured daily dry matter intake (DMI) data obtained from different herds and experiments across lactations and (ii) to obtain better estimates of the genetic parameters and better genetic evaluations. After editing, there were 572,512 daily DMI records of 3,495 animals (Holstein cows) from 11 different herds across 13 lactations, and the animals were under 110 different nutritional experiments. The fitted model for this data set was a univariate repeated-measures animal model (called model 1) in which additive genetic and permanent environmental (within and across lactations) effects were fitted as random. Model 1 was fitted as two distinct models (called models 2 and 3) based on alternative fixed effect corrections. For unscaled data, each model (models 2 and 3) was fitted first as a homoscedastic (HOM) model and then as a heteroscedastic (HET) model. Data were then scaled by multiplying with particular herd-scaling factors, which were calculated by accounting for heterogeneity of phenotypic within-herd variances. Models were selected based on cross-validation and prediction accuracy results. Scaling factors were re-estimated to determine the effectiveness of accounting for herd heterogeneity. Variance components and the respective heritability and repeatability were estimated based on a pedigree-based relationship matrix. Results indicated that the model fitted for scaled data showed a better fit than the models (HOM or HET) fitted for unscaled data. The heritability estimates of models 2 and 3 fitted for scaled data were 0.30 and 0.08, respectively. The repeatability estimates of the model fitted for scaled data ranged from 0.51 to 0.63. The re-estimated scaling factor after accounting for heterogeneity of residual variances was close to 1.0, indicating the stabilization of residual variances; herd accounted for most of the heterogeneity. The rank correlation of EBVs between
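
    The quoted heritability (~0.30) and repeatability (0.51-0.63) are simple ratios of the variance components estimated by the repeatability animal model. A sketch with hypothetical component values, chosen only so the ratios land near those estimates (they are not taken from the paper):

```python
# Variance-component bookkeeping for a repeatability animal model.
# These component values are hypothetical and not taken from the paper.
var_additive = 3.0   # sigma^2_a : additive genetic variance
var_perm_env = 3.0   # sigma^2_pe: permanent environment (within + across lactation)
var_residual = 4.0   # sigma^2_e : residual variance

var_phenotypic = var_additive + var_perm_env + var_residual

heritability = var_additive / var_phenotypic                    # 0.30
repeatability = (var_additive + var_perm_env) / var_phenotypic  # 0.60
```

    Scaling the data to homogenize within-herd variances changes the estimated components, which is why the scaled and unscaled fits above yield different heritabilities.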

  17. Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy

    Directory of Open Access Journals (Sweden)

    González-Beltrán Alejandra N

    2012-01-01

    Background: Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial. Findings: We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist creating a consistent framework to transparently report the purpose, methods and results of therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed. Conclusions: While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work has focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131I-labeled CHT-25 chimeric antibody cancer therapy. As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to

  18. Numerical solution of a coefficient inverse problem with multi-frequency experimental raw data by a globally convergent algorithm

    Science.gov (United States)

    Nguyen, Dinh-Liem; Klibanov, Michael V.; Nguyen, Loc H.; Kolesov, Aleksandr E.; Fiddy, Michael A.; Liu, Hui

    2017-09-01

    We analyze in this paper the performance of a newly developed globally convergent numerical method for a coefficient inverse problem for the case of multi-frequency experimental backscatter data associated with a single incident wave. These data were collected using a microwave scattering facility at the University of North Carolina at Charlotte. The challenges for the inverse problem under consideration arise not only from its high nonlinearity and severe ill-posedness but also from the facts that the amount of measured data is minimal and that these raw data are contaminated by a significant amount of noise, due to a non-ideal experimental setup. This setup is motivated by our target application in detecting and identifying explosives. We show in this paper how the raw data can be preprocessed and successfully inverted using our inversion method. More precisely, we are able to reconstruct the dielectric constants and the locations of the scattering objects with good accuracy, without using any advanced a priori knowledge of their physical and geometrical properties.

  19. Experimental Design and Data Analysis Issues Contribute to Inconsistent Results of C-Bouton Changes in Amyotrophic Lateral Sclerosis.

    Science.gov (United States)

    Dukkipati, S Shekar; Chihi, Aouatef; Wang, Yiwen; Elbasiouny, Sherif M

    2017-01-01

    The possible presence of pathological changes in cholinergic synaptic inputs [cholinergic boutons (C-boutons)] is a contentious topic within the ALS field. Conflicting data reported on this issue make it difficult to assess the roles of these synaptic inputs in ALS. Our objective was to determine whether the reported changes are truly statistically and biologically significant and why replication is problematic. This is an urgent question, as C-boutons are an important regulator of spinal motoneuron excitability, and pathological changes in motoneuron excitability are present throughout disease progression. Using male mice of the SOD1-G93A high-expresser transgenic (G93A) mouse model of ALS, we examined C-boutons on spinal motoneurons. We performed histological analysis at high statistical power, which showed no difference in C-bouton size in G93A versus wild-type motoneurons throughout disease progression. In an attempt to examine the underlying reasons for our failure to replicate reported changes, we performed further histological analyses using several variations on experimental design and data analysis that were reported in the ALS literature. This analysis showed that factors related to experimental design, such as grouping unit, sampling strategy, and blinding status, potentially contribute to the discrepancy in published data on C-bouton size changes. Next, we systematically analyzed the impact of study design variability and potential bias on reported results from experimental and preclinical studies of ALS. Strikingly, we found that practices such as blinding and power analysis are not systematically reported in the ALS field. Protocols to standardize experimental design and minimize bias are thus critical to advancing the ALS field.

  20. Significance of data treatment and experimental setup on the determination of copper complexing parameters by anodic stripping voltammetry.

    Science.gov (United States)

    Omanović, Dario; Garnier, Cédric; Louis, Yoann; Lenoble, Véronique; Mounier, Stéphane; Pizeta, Ivanka

    2010-04-07

    Different procedures for determining voltammetric peak intensities, as well as various experimental setups, were systematically tested on simulated and real experimental data in order to identify critical points in the determination of copper complexation parameters (ligand concentration and conditional stability constant) by anodic stripping voltammetry (ASV). A variety of titration data sets (Cu(measured) vs. Cu(total)) were fitted by models encompassing discrete-site distributions of one and two classes of binding ligands (by the PROSECE software). Examination of different procedures for peak intensity determination applied to voltammograms with known preset values revealed that the tangent fit (TF) routine should be avoided, as for both simulated and experimental titration data it produced an additional class of strong ligand (actually not present). Peak intensity determination by fitting of the whole voltammogram was found to be the most appropriate, as it provided the most reliable complexation parameters. Tests performed on real seawater samples under different experimental conditions revealed that, in addition to proper peak intensity determination, the accumulation time (controlling the sensitivity) and the equilibration time needed for complete complexation of added copper during titration (controlling the complexation kinetics) are the key points for obtaining reliable results free of artefacts. The consequences of overestimating and underestimating the complexing parameters are illustrated by the example of free copper concentrations (the most bioavailable/toxic species) calculated for all studied cases. Underestimation of the free copper concentration by up to 80% and overestimation of the conditional stability constant by almost two orders of magnitude were registered for the simulated case with two ligands. Copyright 2010 Elsevier B.V. All rights reserved.
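
    The one-class (single ligand) model underlying such titrations can be sketched numerically: for a ligand of total concentration L and conditional stability constant K, the ASV-labile (free) copper follows from the 1:1 mass balance, and (L, K) can be recovered by fitting the titration curve. The snippet below is a minimal illustrative sketch in which a coarse grid search stands in for the PROSECE fitting software; all concentrations and constants are hypothetical values, not data from the study.

```python
import math

def free_cu(cu_total, l_total, k):
    """Solve the 1:1 mass balance CuT = Cuf + K*Cuf*LT/(1 + K*Cuf) for free Cu.

    Rearranged, this is the quadratic K*Cuf^2 + (1 + K*(LT - CuT))*Cuf - CuT = 0,
    whose positive root is taken.
    """
    b = 1.0 + k * (l_total - cu_total)
    return (-b + math.sqrt(b * b + 4.0 * k * cu_total)) / (2.0 * k)

# Simulated titration: total Cu added vs. ASV-labile (free) Cu (hypothetical values).
K_TRUE, L_TRUE = 1e8, 50e-9                    # conditional constant, ligand conc. (M)
cu_added = [i * 10e-9 for i in range(1, 21)]   # 10-200 nM additions
measured = [free_cu(c, L_TRUE, K_TRUE) for c in cu_added]

# Coarse grid search over (L, log K) minimizing squared error,
# standing in for a proper nonlinear least-squares fit.
best = None
for l in [x * 1e-9 for x in range(10, 101, 5)]:
    for logk in [7.0 + 0.1 * i for i in range(31)]:
        k = 10.0 ** logk
        sse = sum((free_cu(c, l, k) - m) ** 2 for c, m in zip(cu_added, measured))
        if best is None or sse < best[0]:
            best = (sse, l, k)
_, l_fit, k_fit = best
```

Because the grid spans the true values, the search recovers them to within grid and floating-point precision; with real, noisy titrations a nonlinear least-squares fit would replace the grid search.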

  1. Experimental evaluation of dynamic data allocation strategies in a distributed database with changing workloads

    Science.gov (United States)

    Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul

    1995-01-01

    Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than relying on complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
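
    A load-aware reallocation heuristic of the kind described above can be sketched as follows. This is an illustrative sketch, not the authors' exact policy: the per-site access counts, the load metric, and the load threshold are all assumptions introduced for the example.

```python
def reallocate(partition_accesses, site_load, current_site, load_cap=0.8):
    """Decide where a data partition should live.

    Move the partition to the site issuing the most accesses to it,
    unless that site is already loaded beyond `load_cap`, in which
    case the partition stays where it is (load-aware variant).
    """
    best_site = max(partition_accesses, key=partition_accesses.get)
    if best_site != current_site and site_load.get(best_site, 0.0) < load_cap:
        return best_site
    return current_site

# Partition currently at site "A"; site "B" issues most accesses but is overloaded.
accesses = {"A": 120, "B": 500, "C": 80}
load = {"A": 0.30, "B": 0.95, "C": 0.40}
```

With `load["B"] = 0.95` the partition stays at "A"; once B's load drops below the cap, the heuristic migrates the partition to B, the site generating most of its traffic.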

  2. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    Science.gov (United States)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    End-to-end tunability is important for provisioning elastic channels for the bursty traffic of data center optical networks. How, then, can end-to-end tunability be achieved on top of elastic optical networks? A software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, a flexible-grid all-optical network with a Tbps end-to-end tunable transport and switch system has been demonstrated online for data center interconnection, controlled by an OpenDaylight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedures.

  3. The Synthesis of Structural Responses Using Experimentally Measured Frequency Response Functions and Field Test Data

    Energy Technology Data Exchange (ETDEWEB)

    CAP,JEROME S.; NELSON,CURTIS F.

    2000-11-17

    This paper presents an analysis technique used to generate the structural response at locations not measured during the ejection of a captive-carried store. The ejection shock event is complicated by the fact that forces may be imparted to the store at eight distinct locations. The technique derives forcing functions by combining the initial field test data for a limited number of measurement locations with Frequency Response Functions (FRFs) measured using a traditional modal-type impact (tap) test at the same locations. The derived forcing functions were then used with tap test FRFs measured at additional locations of interest to produce the desired response data.

  4. Comparison with simulations to experimental data for photo-neutron reactions using SPring-8 Injector

    Directory of Open Access Journals (Sweden)

    Asano Yoshihiro

    2017-01-01

    Simulations of photo-nuclear reactions using the Monte Carlo codes PHITS and FLUKA have been performed for comparison with data measured at the SPring-8 injector with 250 MeV and 961 MeV electrons. Measured production of Bismuth-206 at the beam dumps, due to the photo-nuclear reaction 209Bi(γ,3n)206Bi and the high-energy neutron reaction 209Bi(n,4n)206Bi, has been compared with the simulations. Neutron leakage spectra outside the shield wall are also compared between experiments and simulations.

  5. Bayesian inference of the heat transfer properties of a wall using experimental data

    KAUST Repository

    Iglesias, Marco

    2016-01-06

    A hierarchical Bayesian inference method is developed to estimate the thermal resistance and volumetric heat capacity of a wall. We apply our methodology to a real case study where measurements are recorded each minute from two temperature probes and two heat flux sensors placed on both sides of a solid brick wall over a period of almost five days. We model the heat transfer through the wall by means of the one-dimensional heat equation with Dirichlet boundary conditions. The initial/boundary conditions for the temperature are approximated by piecewise linear functions. We assume that temperature and heat flux measurements have independent Gaussian noise and derive the joint likelihood of the wall parameters and the initial/boundary conditions. Under the model assumptions, the boundary conditions are marginalized analytically from the joint likelihood. Approximate Gaussian posterior distributions for the wall parameters and the initial condition parameter are obtained using the Laplace method, after incorporating the available prior information. The information gain is estimated under different experimental setups to determine the best allocation of resources.
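
    The forward model in such a setup, the one-dimensional heat equation with Dirichlet boundary conditions, can be sketched with an explicit finite-difference scheme. The parameter values below (diffusivity, wall thickness, grid) are illustrative round numbers, not those of the study.

```python
def simulate_wall(t_left, t_right, t_init, alpha=5e-7, length=0.1,
                  nodes=11, dt=50.0, steps=2000):
    """Explicit finite-difference solution of dT/dt = alpha * d2T/dx2
    on a wall of given length, with fixed (Dirichlet) surface temperatures.

    alpha ~ 5e-7 m^2/s is a plausible diffusivity for solid brick.
    """
    dx = length / (nodes - 1)
    # Explicit scheme is only stable for dt <= dx^2 / (2 * alpha).
    assert dt <= dx * dx / (2.0 * alpha), "time step violates stability limit"
    r = alpha * dt / (dx * dx)
    temp = [t_left] + [t_init] * (nodes - 2) + [t_right]
    for _ in range(steps):
        temp = ([t_left]
                + [temp[i] + r * (temp[i - 1] - 2.0 * temp[i] + temp[i + 1])
                   for i in range(1, nodes - 1)]
                + [t_right])
    return temp
```

Run long enough, the profile relaxes to the linear steady state between the two surface temperatures; a Bayesian scheme like the one above would wrap such a forward solver in a likelihood over the measured temperatures and fluxes.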

  6. Modelling polymeric deformable granular materials - from experimental data to numerical models at the grain scale

    Science.gov (United States)

    Teil, Maxime; Harthong, Barthélémy; Imbault, Didier; Peyroux, Robert

    2017-06-01

    Polymeric deformable granular materials are widely used in industry, and the understanding and modelling of their shaping process is a point of interest. This kind of material often presents viscoelastic-plastic behaviour, and the present study promotes a joint approach between numerical simulations and experiments in order to derive the behaviour law of such a granular material. The experiment is conducted on a polystyrene powder to which a confining pressure of 7 MPa and an axial pressure reaching 30 MPa are applied. Between different steps of the in-situ test, the sample is scanned in an X-ray microtomograph in order to determine the structure of the material as a function of density. From the tomographic images, and by using specific algorithms to improve image quality, grains are automatically identified and separated, and a finite element mesh is generated. The long-term objective of this study is to derive a representative sample directly from the experiments in order to run numerical simulations using a viscoelastic or viscoelastic-plastic constitutive law and compare numerical and experimental results at the particle scale.

  7. Influence of experimental conditions on data variability in the liver comet assay.

    Science.gov (United States)

    Guérard, M; Marchand, C; Plappert-Helbig, U

    2014-03-01

    The in vivo comet assay has increasingly been used for regulatory genotoxicity testing in recent years. While it has been demonstrated that the experimental execution of the assay (for example, electrophoresis or scoring) can have a strong impact on the results, little is known about how the initial steps, that is, from tissue sampling during necropsy up to slide preparation, can influence the comet assay results. Therefore, we investigated which of the multitude of steps in processing the liver for the comet assay are most critical. Altogether, eight parameters were assessed using liver samples from untreated animals. In addition, two of those parameters (temperature and storage time of liver before embedding into agarose) were further investigated in animals given a single oral dose of ethyl methanesulfonate at dose levels of 50, 100, and 200 mg/kg, 3 hr prior to necropsy. The results showed that sample cooling emerged as the predominant influencing factor, whereas variations in other elements of the procedure (e.g., size of the liver piece sampled, time needed to process the liver tissue post-mortem, agarose temperature, or time of lysis) seem to be of little relevance. Storing liver samples for up to 6 hr under cooled conditions did not cause an increase in tail intensity. In contrast, storing the tissue at room temperature resulted in a considerable time-dependent increase in comet parameters. Copyright © 2013 Wiley Periodicals, Inc.

  8. Understanding the physical properties controlling protein crystallization based on analysis of large-scale experimental data

    Science.gov (United States)

    Price, W. Nicholson; Chen, Yang; Handelman, Samuel K.; Neely, Helen; Manor, Philip; Karlin, Richard; Nair, Rajesh; Liu, Jinfeng; Baran, Michael; Everett, John; Tong, Saichiu N.; Forouhar, Farhad; Swaminathan, Swarup S.; Acton, Thomas; Xiao, Rong; Luft, Joseph R.; Lauricella, Angela; DeTitta, George T.; Rost, Burkhard; Montelione, Gaetano T.; Hunt, John F.

    2009-01-01

    Crystallization has proven to be the most significant bottleneck to high-throughput protein structure determination using diffraction methods. We have used the large-scale, systematically generated experimental results of the Northeast Structural Genomics Consortium to characterize the biophysical properties that control protein crystallization. Data mining of crystallization results combined with explicit folding studies leads to the conclusion that crystallization propensity is controlled primarily by the prevalence of well-ordered surface epitopes capable of mediating interprotein interactions and is not strongly influenced by overall thermodynamic stability. These analyses identify specific sequence features correlating with crystallization propensity that can be used to estimate the crystallization probability of a given construct. Analyses of entire predicted proteomes demonstrate substantial differences in the bulk amino acid sequence properties of human versus eubacterial proteins that reflect likely differences in their biophysical properties, including crystallization propensity. Finally, our thermodynamic measurements enable critical evaluation of previous claims regarding correlations between protein stability and bulk sequence properties, which generally are not supported by our dataset. PMID:19079241

  9. Inferring biomarkers for Mycobacterium avium subsp. paratuberculosis infection and disease progression in cattle using experimental data

    Science.gov (United States)

    Magombedze, Gesham; Shiri, Tinevimbo; Eda, Shigetoshi; Stabel, Judy R.

    2017-03-01

    Available diagnostic assays for Mycobacterium avium subsp. paratuberculosis (MAP) have poor sensitivities and cannot detect early stages of infection; therefore, there is a need to find new diagnostic markers for early infection detection and disease stages. We analyzed longitudinal IFN-γ, ELISA-antibody and fecal shedding experimental sensitivity scores for MAP infection detection and disease progression. We used both statistical methods and dynamic mathematical models to (i) evaluate the empirical assays, (ii) infer and explain biological mechanisms that affect the time evolution of the biomarkers, and (iii) predict disease stages of 57 animals that were naturally infected with MAP. This analysis confirms that the fecal test is the best marker for disease progression and illustrates that Th1/Th2 (IFN-γ/ELISA antibody) assays are important for infection detection but cannot reliably predict persistent infections. Our results show that the theoretical simulated macrophage-based assay is a potentially good diagnostic marker for MAP persistent infections and predictor of disease-specific stages. We therefore recommend specifically designed experiments to test the use of a macrophage-based assay in the diagnosis of MAP infections.

  10. Analysis of experimental hydrogen engine data and hydrogen vehicle performance and emissions simulation

    Energy Technology Data Exchange (ETDEWEB)

    Aceves, S.M.

    1996-09-01

    This paper reports the engine and vehicle simulation and analysis done at Lawrence Livermore National Laboratory (LLNL) as part of a joint optimized hydrogen engine development effort. Project participants are: Sandia National Laboratory, California (SNLC), responsible for experimental evaluation; Los Alamos National Laboratory (LANL), responsible for detailed fluid mechanics engine evaluations; and the University of Miami, responsible for engine friction reduction. Fuel cells are considered the ideal power source for future vehicles, due to their high efficiency and low emissions. However, extensive use of fuel cells in light-duty vehicles is likely to be years away, due to their high manufacturing cost. Hydrogen-fueled, spark-ignited, homogeneous-charge engines offer a near-term alternative to fuel cells. Hydrogen in a spark-ignited engine can be burned at very low equivalence ratios, so that NO{sub x} emissions can be reduced to less than 10 ppm without a catalyst. HC and CO emissions may result from oxidation of engine oil but, by proper design, are negligible (a few ppm). Lean operation also results in increased indicated efficiency due to the thermodynamic properties of the gaseous mixture contained in the cylinder. The high effective octane number of hydrogen allows the use of a high compression ratio, further increasing engine efficiency.

  11. Electrical Resistivity as an Indicator of Saturation in Fractured Geothermal Reservoir Rocks: Experimental Data and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, R L; Roberts, J J

    2003-06-23

    The electrical resistivity of rock cores under conditions representative of geothermal reservoirs is strongly influenced by the state and phase (liquid/vapor) of the pore fluid. In fractured samples, phase change (vaporization/condensation) can result in resistivity changes that are more than an order of magnitude greater than those measured in intact samples. These results suggest that electrical resistivity monitoring of geothermal reservoirs may provide a useful tool for remotely detecting the movement of water and steam within fractures, the development and evolution of fracture systems and the formation of steam caps. We measured the electrical resistivity of cores of welded tuff containing fractures of various geometries to investigate the resistivity contrast caused by active boiling and to determine the effects of variable fracture dimensions and surface area on water extraction from the matrix. We then used the Nonisothermal Unsaturated Flow and Transport model (NUFT) (Nitao, 1998) to simulate the propagation of boiling fronts through the samples. The simulated saturation profiles combined with previously reported measurements of resistivity-saturation curves allow us to estimate the evolution of the sample resistivity as the boiling front propagates into the rock matrix. These simulations provide qualitative agreement with experimental measurements suggesting that our modeling approach may be used to estimate resistivity changes induced by boiling in more complex systems.

  12. Diagnostics of Particles emitted from a Laser generated Plasma: Experimental Data and Simulations

    Directory of Open Access Journals (Sweden)

    Costa Giuseppe

    2018-01-01

    The charged particle emission from laser-generated plasma was studied experimentally and theoretically using the COMSOL simulation code. The particle acceleration was investigated using two lasers at two different regimes. A Nd:YAG laser, with 3 ns pulse duration and 10^10 W/cm2 intensity, when focused on a solid target produces a non-equilibrium plasma with an average temperature of about 30-50 eV. An iodine laser with 300 ps pulse duration and 10^16 W/cm2 intensity produces plasmas with average temperatures of the order of tens of keV. In both cases charge separation occurs, and ions and electrons are accelerated to energies of the order of 200 eV and 1 MeV per charge state in the two cases, respectively. The simulation program makes it possible to plot the charged particle trajectories from the plasma source in vacuum, indicating how they can be deflected by magnetic and electric fields. The simulation code can be employed to design suitable permanent magnets and solenoids to deflect ions toward a secondary target or detectors, to focus ions and electrons, to realize electron traps able to provide significant ion acceleration, and to realize efficient spectrometers. In particular, it was applied to the study of two Thomson parabola spectrometers able to detect ions at low and at high laser intensities. The comparison between measurements and simulations is presented and discussed.

  13. Integrated system for production of neutronics and photonics calculational constants. Major neutron-induced interactions (Z > 55): graphical, experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D.E.; Howerton, R.J.; MacGregor, M.H.; Perkins, S.T.

    1976-07-04

    This report (vol. 7) presents graphs of major neutron-induced interaction cross sections in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976. It consists primarily of interactions where a single data set contains enough points to show cross section behavior. In contrast, vol. 8 of this UCRL-50400 series consists of interactions where more than one data set is needed to show cross section behavior. Thus, you can find the total, elastic, capture, and fission cross sections (along with the parameters ν-bar, α, and η) in vol. 7 and all other reactions in vol. 8. Data are plotted with associated cross section error bars (when given) and compared with the Evaluated Nuclear Data Library (ENDL) as of July 4, 1976. The plots are arranged in ascending order of atomic number (Z) and atomic weight (A). Part A contains the plots for Z = 1 to 55; Part B contains the plots for Z > 55.

  14. Identification of Enriched PTM Crosstalk Motifs from Large-Scale Experimental Data Sets

    NARCIS (Netherlands)

    Peng, Mao; Scholten, Arjen; Heck, Albert J R; van Breukelen, Bas

    2014-01-01

    Post-translational modifications (PTMs) play an important role in the regulation of protein function. Mass spectrometry based proteomics experiments nowadays identify tens of thousands of PTMs in a single experiment. A wealth of data has therefore become publicly available. Evidently the

  15. Interpreting experimental data on egg production - applications of dynamic differential equations

    NARCIS (Netherlands)

    France, J.; Lopez, S.; Kebreab, E.; Dijkstra, J.

    2013-01-01

    This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems.

  16. Comparison of two acoustic analogies applied to experimental PIV data for cavity sound emission estimation

    NARCIS (Netherlands)

    Koschatzky, V.; Westerweel, J.; Boersma, B.J.

    2010-01-01

    The aim of the present study is to compare two different acoustic analogies applied to time-resolved particle image velocimetry (PIV) data for the prediction of the acoustic far-field generated by the flow over a rectangular cavity. Recent developments in laser and camera technology allow the

  17. Experimental data from coastal diffusion tests. [Smoke diffusion over coastal waters

    Energy Technology Data Exchange (ETDEWEB)

    Raynor, G S; Brown, R M; SethuRaman, S

    1976-10-01

    Data are reported from a series of seven experiments on the diffusion of smoke plumes over northeast Atlantic Ocean coastal waters in response to wind fluctuations and other meteorological variables. A qualitative description of smoke behavior during each experiment is included and photographs of the smoke are included to illustrate the type of diffusion observed. (CH)

  18. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    Directory of Open Access Journals (Sweden)

    Jingjing He

    2017-09-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0-mode wave packet. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
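
    The Bayesian updating step can be illustrated in its simplest conjugate form: a scalar model parameter with a Gaussian prior fitted from simulation data, updated with a few noisy experimental measurements. This is a minimal sketch under Gaussian assumptions, not the paper's actual response surface model; all numbers are hypothetical.

```python
def update_gaussian(mu0, var0, obs, noise_var):
    """Conjugate Gaussian update of a scalar parameter.

    Prior N(mu0, var0) plays the role of the simulation-fitted baseline;
    `obs` are a few experimental measurements with known noise variance.
    Returns the posterior mean and variance.
    """
    n = len(obs)
    var_post = 1.0 / (1.0 / var0 + n / noise_var)
    mu_post = var_post * (mu0 / var0 + sum(obs) / noise_var)
    return mu_post, var_post

# Baseline (simulation-based) belief about a model coefficient, then three
# measurements from the target structure pull the estimate toward the data.
mu_post, var_post = update_gaussian(mu0=1.0, var0=1.0,
                                    obs=[2.0, 2.0, 2.0], noise_var=1.0)
```

The posterior mean (1.75) sits between the prior (1.0) and the data mean (2.0), weighted by the respective precisions, which is the essence of updating a simulation-built baseline with limited target-structure data.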

  19. Stream invertebrate productivity linked to forest subsidies: 37 stream-years of reference and experimental data

    Science.gov (United States)

    J. Bruce Wallace; Susan L Eggert; Judy L. Meyer; Jackson R. Webster

    2015-01-01

    Riparian habitats provide detrital subsidies of varying quantities and qualities to recipient ecosystems. We used long-term data from three reference streams (covering 24 stream-years) and 13-year whole-stream organic matter manipulations to investigate the influence of terrestrial detrital quantity and quality on benthic invertebrate community structure, abundance,...

  20. Statistics for proteomics : A review of tools for analyzing experimental data

    NARCIS (Netherlands)

    Urfer, Wolfgang; Grzegorczyk, Marco; Jung, Klaus

    Most proteomics experiments make use of 'high throughput' technologies such as 2-DE, MS or protein arrays to measure simultaneously the expression levels of thousands of proteins. Such experiments yield large, high-dimensional data sets which usually reflect not only the biological but also

  1. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
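
    The MART family of algorithms referenced above can be illustrated on a toy consistent system: each row of the weighting matrix plays the role of one camera line of sight, and voxel intensities are corrected multiplicatively until the projections match. This sketch assumes a tiny binary weighting matrix rather than real camera projection weights.

```python
def mart(a, b, sweeps=200):
    """Multiplicative Algebraic Reconstruction Technique for a consistent
    nonnegative system A x = b (toy sketch of the tomo-PIV reconstruction family).

    Starting from a strictly positive guess, each row update rescales the
    voxels it touches by (measured / projected) raised to the weight a_ij.
    """
    n = len(a[0])
    x = [1.0] * n
    for _ in range(sweeps):
        for row, bi in zip(a, b):
            proj = sum(aij * xj for aij, xj in zip(row, x))
            ratio = bi / proj
            # with 0/1 weights the exponent reduces to a plain rescale
            x = [xj * ratio ** aij for xj, aij in zip(x, row)]
    return x

# Three "lines of sight" over three voxels; the unique solution is [1, 2, 3].
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
b = [3, 5, 4]
```

For this consistent system the iterates converge quickly to the unique positive solution; real tomo-PIV systems are vastly larger and under-determined, which is where the entropy-maximizing behavior of MART matters.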

  2. Assessment of reduced-order unscented Kalman filter for parameter identification in 1-dimensional blood flow models using experimental data.

    Science.gov (United States)

    Caiazzo, A; Caforio, Federica; Montecinos, Gino; Muller, Lucas O; Blanco, Pablo J; Toro, Eleuterio F

    2016-10-25

    This work presents a detailed investigation of a parameter estimation approach based on the reduced-order unscented Kalman filter (ROUKF) in the context of 1-dimensional blood flow models. In particular, the main aims of this study are (1) to investigate the effects of using real measurements versus synthetic data (i.e., numerical results of the same in silico model, perturbed with noise) for the estimation procedure and (2) to identify potential difficulties and limitations of the approach in clinically realistic applications, in order to assess the applicability of the filter to such setups. For these purposes, the present numerical study is based on a recently published in vitro model of the arterial network, for which experimental flow and pressure measurements are available at a few selected locations. To mimic clinically relevant situations, we focus on the estimation of terminal resistances and arterial wall parameters related to vessel mechanics (Young's modulus and wall thickness) using few experimental observations (at most a single pressure or flow measurement per vessel). In all cases, we first perform a theoretical identifiability analysis based on the generalized sensitivity function, then compare the results obtained with the ROUKF, using either synthetic or experimental data, with results obtained using reference parameters and with available measurements. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Liquidus experimental data for the system Cu-O-Cr{sub 2}O{sub 3} in air

    Energy Technology Data Exchange (ETDEWEB)

    Hamuyuni, Joseph, E-mail: joseph.hamuyuni@aalto.fi; Taskinen, Pekka

    2016-08-20

    Highlights: • Liquidus of the system Cu-O-Cr{sub 2}O{sub 3} in air determined using equilibration techniques. • The eutectic in the pseudo-binary Cu{sub 2}O-Cr{sub 2}O{sub 3} determined. • Cr{sub 2}O{sub 3} content in the liquid oxide phase is lower than previously predicted. - Abstract: The liquidus of the system Cu-O-Cr{sub 2}O{sub 3} was experimentally investigated in air for the first time using equilibration techniques. The composition of the glassy liquid oxide phase in equilibrium with CuCrO{sub 2} was quantified using an Electron Probe Micro-Analyzer (EPMA). Liquidus data in the temperature range 1130–1400 °C have been obtained from two-phase equilibrium experimental data. The liquidus calculated using version 8.1 of the MTOX database differs from the newly obtained experimental results. The eutectic temperature and composition have also been investigated and found to be 1112 ± 2 °C and 2.06 wt% Cr{sub 2}O{sub 3}, respectively. This is close to the eutectic calculated using the MTOX database, which locates it at 1110 °C and 1.8 wt% Cr{sub 2}O{sub 3}.

  4. Gamma Knife Simulation Using the MCNP4C Code and the Zubal Phantom and Comparison with Experimental Data

    Directory of Open Access Journals (Sweden)

    Somayeh Gholami

    2010-06-01

    Introduction: Gamma Knife is an instrument specially designed for treating brain disorders. In the Gamma Knife, 201 narrow beams from cobalt-60 sources intersect at an isocenter point to treat brain tumors. The tumor is placed at the isocenter and is treated by the emitted gamma rays, so a high dose is delivered at this point while a low dose is delivered to the normal tissue surrounding the tumor. Material and Method: In the current work, the MCNP simulation code was used to simulate the Gamma Knife. The calculated values were compared with the experimental ones and with previous works. Dose distributions were compared for different collimators in a water phantom and the Zubal brain-equivalent phantom. The dose profiles were obtained along the x, y and z axes. Result: The evaluation of the developed code was performed using experimental data, and we found good agreement between our simulation and the experimental data. Discussion: Our results showed that the skull bone makes a high contribution to both scattered and absorbed dose. In other words, inserting the exact materials of the brain and other organs of the head into the digital phantom improves the quality of treatment planning. This work concerns the measurement of absorbed dose and the improvement of the treatment planning procedure in Gamma Knife radiosurgery of the brain.

  5. Influence of body condition on influenza a virus infection in mallard ducks: Experimental infection data

    Science.gov (United States)

    Arsnoe, D.M.; Ip, Hon S.; Owen, J.C.

    2011-01-01

    Migrating waterfowl are implicated in the global spread of influenza A viruses (IAVs), and mallards (Anas platyrhynchos) are considered a particularly important IAV reservoir. Prevalence of IAV infection in waterfowl peaks during autumn pre-migration staging and then declines as birds reach wintering areas. Migration is energetically costly and birds often experience declines in body condition that may suppress immune function. We assessed how body condition affects susceptibility to infection, viral shedding and antibody production in wild-caught and captive-bred juvenile mallards challenged with low pathogenic avian influenza virus (LPAIV) H5N9. Wild mallards (n = 30) were separated into three experimental groups; each manipulated through food availability to a different condition level (-20%, -10%, and normal ±5% original body condition), and captive-bred mallards (n = 10) were maintained at normal condition. We found that wild mallards in normal condition were more susceptible to LPAIV infection, shed higher peak viral loads and shed viral RNA more frequently compared to birds in poor condition. Antibody production did not differ according to condition. We found that wild mallards did not differ from captive-bred mallards in viral intensity and duration of infection, but they did exhibit lower antibody titers and greater variation in viral load. Our findings suggest that reduced body condition negatively influences waterfowl host competence to LPAIV infection. This observation is contradictory to the recently proposed condition-dependent hypothesis, according to which birds in reduced condition would be more susceptible to IAV infection. The mechanisms responsible for reducing host competency among birds in poor condition remain unknown. Our research indicates body condition may influence the maintenance and spread of LPAIV by migrating waterfowl. © 2011 Arsnoe et al.

  6. Experimental hydrogen-fueled automotive engine design data-base project. Volume 1. Executive summary report

    Energy Technology Data Exchange (ETDEWEB)

    Swain, M.R.; Adt, R.R. Jr.; Pappas, J.M.

    1983-05-01

    A preliminary hydrogen-fueled automotive piston engine design data-base now exists as a result of a research project at the University of Miami. The effort, which is overviewed here, encompassed the testing of 19 different configurations of an appropriately-modified, 1.6-liter displacement, light-duty automotive piston engine. The design data base includes engine performance and exhaust emissions over the entire load range, generally at a fixed speed (1800 rpm) and best efficiency spark timing. This range was sometimes limited by intake manifold backfiring and lean-limit restrictions; however, effective measures were demonstrated for obviating these problems. High efficiency, competitive specific power, and low emissions were conclusively demonstrated.

  7. Experimental Data from the Benchmark SuperCritical Wing Wind Tunnel Test on an Oscillating Turntable

    Science.gov (United States)

    Heeg, Jennifer; Piatak, David J.

    2013-01-01

    The Benchmark SuperCritical Wing (BSCW) wind tunnel model served as a semi-blind testcase for the 2012 AIAA Aeroelastic Prediction Workshop (AePW). The BSCW was chosen as a testcase due to its geometric simplicity and flow physics complexity. The data sets examined include unforced system information and forced pitching oscillations. The aerodynamic challenges presented by this AePW testcase include a strong shock that was observed to be unsteady for even the unforced system cases, shock-induced separation and trailing edge separation. The current paper quantifies these characteristics at the AePW test condition and at a suggested benchmarking test condition. General characteristics of the model's behavior are examined for the entire available data set.

  8. Pion Structure in Qcd: from Theory to Lattice to Experimental Data

    Science.gov (United States)

    Bakulev, A. P.; Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.

    We describe the present status of the pion distribution amplitude (DA) as it originates from several sources: (i) a nonperturbative approach based on QCD sum rules with nonlocal condensates, (ii) an O(α{sub s}) QCD analysis of the CLEO data on F{sub γγ*π}(Q{sup 2}) with asymptotic and renormalon models for higher twists and (iii) recent high-precision lattice QCD calculations of the second moment of the pion DA. We show predictions for the pion electromagnetic form factor, obtained in analytic QCD perturbation theory, and compare it with the JLab data on F{sub π}(Q{sup 2}). We also discuss in this context an improved model for nonlocal condensates in QCD and show its consequences for the pion DA and the γγ*π transition form factor. We include a brief analysis of meson-induced massive lepton (muon) Drell-Yan production for the process π{sup -}N → μ{sup +}μ{sup -}X, considering both an unpolarized nucleon target and longitudinally polarized protons.

  9. Experimental data of co-crystals of Etravirine and L-tartaric acid

    Directory of Open Access Journals (Sweden)

    Mikal Rekdal

    2018-02-01

    In this study, Etravirine co-crystals were synthesized in the molar ratios 1:1, 1:2 and 2:1 with L-tartaric acid as the co-former. Both slow evaporation and physical mixing were used to combine the components. DSC values of the final products are presented, along with FTIR spectra showing the altered intermolecular interactions. A chemical stability test was performed after seven days using area-under-the-curve data from an HPLC instrument.

  10. Experimental Demonstration of Multidimensional Switching Nodes for All-Optical Data Center Networks

    DEFF Research Database (Denmark)

    Kamchevska, Valerija; Medhin, Ashenafi Kiros; Da Ros, Francesco

    2016-01-01

    This paper reports on a novel ring-based data center architecture composed of multidimensional switching nodes. The nodes are interconnected with multicore fibers and can provide switching in three different physical, hierarchically overlaid dimensions (space, wavelength, and time). The proposed...... by increasing the transmission capacity to 1 Tbit/s/core equivalent to 7 Tbit/s total throughput in a single seven-core multicore fiber. The error-free performance (BER centers and accommodate...

  11. A comparison of experimental and estimated data analyses of solar radiation, in Adiyaman, Turkey

    OpenAIRE

    Bozkurt, Ismail; Calis, Nazif; Sogukpinar, Haci

    2015-01-01

    The world's main energy source is the sun. Other energy sources derive directly or indirectly from the sun. Turkey has a rich potential in terms of solar energy, and interest in solar power systems is increasing with rapidly evolving technology. All solar energy studies need solar radiation data, but solar radiation measurements are not available for every area. Therefore, estimating solar radiation by a variety of methods is of growing importance. In this study, ...

  12. Validation of the AeroDyn subroutines using NREL unsteady aerodynamics experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Laino, D.J.; Hansen, A.C.; Minnema, J.E. [Windward Engineering, Salt Lake City, UT (United States)

    2002-07-01

    Completion of the full-scale wind tunnel tests of the NREL Unsteady Aerodynamics Experiment (UAE) phase VI allowed validation of the AeroDyn wind turbine aerodynamics software to commence. Detailed knowledge of the inflow to the UAE was the bane of prior attempts to accomplish any in-depth validation in the past. The wind tunnel tests permitted unprecedented control and measurement of inflow to the UAE rotor. The data collected from these UAE tests are currently under investigation as part of an effort to better understand wind turbine rotor aerodynamics in order to improve aeroelastic modelling techniques. Preliminary results from this study using the AeroDyn subroutines are presented, pointing to several avenues towards improvement. Test data indicate that rotational effects cause more static stall delay over a larger portion of the blades than predicted by current methods. Despite the relatively stiff properties of the UAE, vibration modes appear to influence the aerodynamic forces and system loads. AeroDyn adequately predicts dynamic stall hysteresis loops when appropriate steady, 2D aerofoil tables are used. Problems encountered include uncertainties in converting measured inflow angle to angle of attack for the UAE phase VI. Future work is proposed to address this angle-of-attack problem and to analyse a slightly more complex dynamics model that incorporates some of the structural vibration modes evident in the test data. (author)

  13. Experimental Measurements of Sonic Boom Signatures Using a Continuous Data Acquisition Technique

    Science.gov (United States)

    Wilcox, Floyd J.; Elmiligui, Alaa A.

    2013-01-01

    A wind tunnel investigation was conducted in the Langley Unitary Plan Wind Tunnel to determine the effectiveness of a technique to measure aircraft sonic boom signatures using a single conical survey probe while continuously moving the model past the probe. Sonic boom signatures were obtained using both move-pause and continuous data acquisition methods for comparison. The test was conducted using a generic business jet model at a constant angle of attack and a single model-to-survey-probe separation distance. The sonic boom signatures were obtained at a Mach number of 2.0 and a unit Reynolds number of 2 million per foot. The test results showed that it is possible to obtain sonic boom signatures while continuously moving the model and that the time required to acquire the signature is at least 10 times faster than the move-pause method. Data plots are presented with a discussion of the results. No tabulated data or flow visualization photographs are included.

  14. Subchannel analysis and correlation of the Rod Bundle Heat Transfer (RBHT) steam cooling experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Riley, M.P.; Mohanta, L.; Miller, D.J.; Cheung, F.B. [Pennsylvania State Univ., University Park, PA (United States); Bajorek, S.M.; Tien, K.; Hoxie, C.L. [U.S. Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Regulatory Research

    2016-07-15

    A subchannel analysis of the steam cooling data obtained in the Rod Bundle Heat Transfer (RBHT) test facility has been performed in this study to capture the effect of spacer grids on heat transfer. The RBHT test facility has a 7 x 7 rod bundle with heater rods and seven spacer grids equally spaced along the length of the rods. A method based on the concept of momentum and heat transport analogy has been developed for calculating the subchannel bulk mean temperature from the measured steam temperatures. Over the range of inlet Reynolds number, the local Nusselt number was found to exhibit a minimum value between the upstream and downstream spacer grids. The presence of a spacer grid not only affects the local Nusselt number downstream of the grid but also affects the local Nusselt number upstream of the next grid. A new correlation capturing the effect of Reynolds number on the local flow restructuring downstream as well as upstream of the spacer grids was proposed for the minimum Nusselt number. In addition, a new enhancement factor accounting for the effects of the upstream as well as downstream spacer grids was developed from the RBHT data. The new enhancement factor was found to compare well with the data from the ACHILLES test facility.
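    Correlations of this general kind multiply a fully developed Nusselt number by a grid-enhancement factor that decays with distance downstream of the spacer. The sketch below combines the classic Dittus-Boelter baseline with a generic exponential-decay enhancement of the Yao-Hochreiter-Leech form; the coefficients are the commonly cited generic ones, not the new RBHT correlation:

```python
import math

def nusselt_fully_developed(re, pr):
    """Dittus-Boelter baseline for fully developed turbulent flow."""
    return 0.023 * re**0.8 * pr**0.4

def grid_enhancement(x_over_d, blockage, a=5.55, b=0.13):
    """Generic spacer-grid enhancement factor: largest just downstream of
    the grid, decaying exponentially with normalized distance x/D."""
    return 1.0 + a * blockage**2 * math.exp(-b * x_over_d)

def local_nusselt(re, pr, x_over_d, blockage=0.3):
    """Local Nusselt number = baseline times grid enhancement."""
    return nusselt_fully_developed(re, pr) * grid_enhancement(x_over_d, blockage)
```

The minimum Nusselt number discussed in the abstract would sit where the downstream decay of one grid meets the upstream influence of the next, which this single-grid sketch does not model.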

  15. GPS Signal Scattering from Sea Surface: Wind Speed Retrieval Using Experimental Data and Theoretical Model

    Science.gov (United States)

    Komjathy, Attila; Zavorotny, Valery U.; Axelrad, Penina; Born, George H.; Garrison, James L.

    2000-01-01

    Global Positioning System (GPS) signals reflected from the ocean surface have potential use for various remote sensing purposes. Some possibilities are measurements of surface roughness characteristics from which wave height, wind speed, and direction could be determined. For this paper, GPS-reflected signal measurements collected at aircraft altitudes of 2 km to 5 km with a delay-Doppler mapping GPS receiver are used to explore the possibility of determining wind speed. To interpret the GPS data, a theoretical model has been developed that describes the power of the reflected GPS signals for different time delays and Doppler frequencies as a function of geometrical and environmental parameters. The results indicate a good agreement between the measured and the modeled normalized signal power waveforms during changing surface wind conditions. The estimated wind speed using surface-reflected GPS data, obtained by comparing actual and modeled waveforms, shows good agreement (within 2 m/s) with data obtained from a nearby buoy and independent wind speed measurements derived from the TOPEX/Poseidon altimetric satellite.
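    The retrieval strategy described - comparing a measured delay waveform against modeled waveforms for candidate wind speeds - can be sketched as a least-squares grid search. The waveform model below is a toy stand-in (a rise followed by a wind-dependent trailing-edge decay), not the authors' full scattering model:

```python
import numpy as np

def model_waveform(delays, wind_speed):
    """Toy delay waveform: sharp rise to a peak, then a trailing edge
    that broadens with wind speed (rougher sea -> wider glistening zone).
    Stand-in for a physical delay-Doppler scattering model."""
    rise = 1.0 / (1.0 + np.exp(-8.0 * delays))
    decay = np.exp(-np.clip(delays, 0.0, None) / (0.2 + 0.1 * wind_speed))
    return rise * decay

def retrieve_wind(delays, measured, candidates):
    """Pick the candidate wind speed whose modeled waveform best matches
    the measurement in a least-squares sense."""
    errors = [np.sum((model_waveform(delays, w) - measured) ** 2)
              for w in candidates]
    return candidates[int(np.argmin(errors))]

# Synthetic "measurement": truth of 8 m/s plus receiver noise
delays = np.linspace(-1.0, 5.0, 200)
truth = 8.0
measured = model_waveform(delays, truth) \
    + 0.01 * np.random.default_rng(0).standard_normal(200)
best = retrieve_wind(delays, measured, np.arange(2.0, 15.0, 0.5))
```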

  16. Effects of surface irregularities on intensity data from laser scanning: an experimental approach.

    Directory of Open Access Journals (Sweden)

    G. Teza

    2008-06-01

    Full Text Available The results of an experiment carried out to investigate the role of surface irregularities in the intensity data provided by a terrestrial laser scanner (TLS) survey are reported here. Depending on surface roughness, the interaction between an electromagnetic wave and microscopic irregularities leads to a Lambertian-like diffuse reflection, allowing the TLS to receive the backscattered component of the signal. The experiment consists of a series of TLS-based acquisitions of a rotating artificial target specifically conceived to highlight the effects of surface irregularity on the intensity data. The target comprises a flat plate and an irregular surface whose macro-roughness has a characteristic length of the same order as the spot size. The results point out the different behavior of the two plates: the intensity of the signal backscattered by the planar element decreases as the incidence angle increases, whereas the intensity of the signal backscattered by the irregular surface is almost constant as the incidence angle varies. Since the surfaces acquired in a typical geological/geophysical survey are generally irregular, these results imply that the intensity data can readily be used to evaluate the reflectance of the material at the considered wavelength, e.g. for pattern recognition purposes.
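    The reported contrast between the two plates matches a simple idealization: a smooth diffuse plane returns intensity that falls off roughly as the cosine of the incidence angle, while a macro-rough surface keeps presenting favorably oriented facets, so its mean return is nearly angle-independent. A schematic sketch (not a fit to the experiment):

```python
import math

def flat_plate_return(i0, incidence_deg):
    """Lambertian-style model: backscattered intensity from a smooth
    diffuse plane falls off as the cosine of the incidence angle."""
    return i0 * math.cos(math.radians(incidence_deg))

def rough_surface_return(i0, incidence_deg):
    """Idealized macro-rough surface: facets at many orientations keep
    the mean backscattered intensity roughly independent of incidence."""
    return i0
```

Under this idealization the flat-plate return at 60° is already half its normal-incidence value, while the rough-surface return is unchanged - the qualitative behavior the experiment observed.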

  17. Structure and properties of the Earth's plasmasphere. Experimental data and problems of their interpretation (review).

    Science.gov (United States)

    Gringauz, K. I.; Bassolo, V. S.

    1990-08-01

    The results of direct measurements of the concentration and temperature of the plasma are discussed, together with the mass and charge makeup of the plasmasphere ions. For the most part, the data were obtained during the last decade. It is shown that the data confirm the existence of two zones in the plasmasphere: an inner zone, which is stable, quasi-stationary, and cold; and an outer zone, which is usually warm and essentially nonstationary, with large longitudinal gradients of the parameters. The boundary of the outer zone, the plasmapause, is clearly expressed during the evening-nighttime hours and is usually washed out in the day sector. It has a complex and nonstationary structure, changes strongly as a function of geomagnetic conditions, and is asymmetric in local time, with the evening and midday projections of the plasmapause being reliably distinguished. A comparison of these data with theoretical models leads to the conclusion that at present there is no generally accepted model that adequately describes all the basic features of the observed structure and dynamics of the plasmapause.

  18. Drift diffusion model of reward and punishment learning in schizophrenia: Modeling and experimental data.

    Science.gov (United States)

    Moustafa, Ahmed A; Kéri, Szabolcs; Somlai, Zsuzsanna; Balsdon, Tarryn; Frydecka, Dorota; Misiak, Blazej; White, Corey

    2015-09-15

    In this study, we tested reward- and punishment-learning performance using a probabilistic classification learning task in patients with schizophrenia (n=37) and healthy controls (n=48). We also fit subjects' data using a Drift Diffusion Model (DDM) of simple decisions to investigate which components of the decision process differ between patients and controls. Modeling results show between-group differences in multiple components of the decision process. Specifically, patients had slower motor/encoding time, higher response caution (favoring accuracy over speed), and a deficit in classification learning for punishment, but not reward, trials. The results suggest that patients with schizophrenia adopt a compensatory strategy of favoring accuracy over speed to improve performance, yet still show signs of a deficit in learning based on negative feedback. Our data highlight the importance of fitting models (particularly drift diffusion models) to behavioral data. The implications of these findings are discussed relative to theories of schizophrenia and cognitive processing. Copyright © 2015 Elsevier B.V. All rights reserved.
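    A drift diffusion model of the kind fitted here can be simulated directly: noisy evidence accumulates from zero until it hits an upper or lower boundary, and a non-decision (motor/encoding) time is added to the crossing time. A minimal Euler-Maruyama sketch with illustrative parameter values - wider boundaries ("response caution") trade speed for accuracy, as the abstract describes:

```python
import random

def simulate_ddm(drift, boundary, ndt, noise=1.0, dt=0.001, rng=None):
    """One drift-diffusion trial: evidence x starts at 0 and accumulates
    until it crosses +boundary or -boundary.
    Returns (response time, choice), choice=+1 for the upper boundary."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return ndt + t, 1 if x > 0 else -1

def accuracy(drift, boundary, ndt, n=500):
    """Fraction of trials ending at the upper (correct) boundary."""
    rng = random.Random(42)
    return sum(simulate_ddm(drift, boundary, ndt, rng=rng)[1] == 1
               for _ in range(n)) / n
```

With a fixed drift, raising the boundary raises accuracy at the cost of longer response times - the compensatory pattern attributed to the patient group.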

  19. Salt effect on (liquid + liquid) equilibrium of (water + tert-butanol + 1-butanol) system: Experimental data and correlation

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Milton A.P. [School of Chemical Engineering, State University of Campinas, P.O. Box 6066, Campinas-SP 13081-970 (Brazil); Aznar, Martin [School of Chemical Engineering, State University of Campinas, P.O. Box 6066, Campinas-SP 13081-970 (Brazil)]. E-mail: maznar@feq.unicamp.br

    2006-01-15

    (Liquid + liquid) equilibrium data for the quaternary systems (water + tert-butanol + 1-butanol + KBr) and (water + tert-butanol + 1-butanol + MgCl{sub 2}) were experimentally determined at T = 293.15 K and T = 313.15 K. For mixtures with KBr, the overall salt concentrations were 5 and 10 mass percent; for mixtures with MgCl{sub 2}, the overall salt concentrations were 2 and 5 mass percent. The experimental results were used to estimate molecular interaction parameters for the NRTL activity coefficient model, using the Simplex minimization method and a concentration-based objective function. The correlation results are extremely satisfactory, with deviations in phase compositions below 1.7%.
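    For reference, the binary form of the NRTL activity coefficient model used in the correlation can be written down compactly. The sketch below computes activity coefficients from the interaction parameters τ12 and τ21 (the parameter values in the checks are illustrative, not the fitted quaternary-system values):

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL model
    with nonrandomness parameter alpha and G_ij = exp(-alpha * tau_ij)."""
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

In a fit like the one described, these expressions enter the isoactivity conditions for the two liquid phases, and the τ parameters are adjusted (e.g. by Simplex minimization) to reproduce the measured tie-line compositions.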

  20. Empirical singular vectors of baroclinic flows deduced from experimental data of a differentially heated rotating annulus

    Directory of Open Access Journals (Sweden)

    Michael Hoff

    2015-01-01

    Full Text Available Instability is related to exponentially growing eigenmodes. Interestingly, when finite time intervals are considered, growth rates of certain initial perturbations can exceed the growth rates of the most unstable modes. Moreover, even when all modes are damped, such particular initial perturbations can still grow during finite time intervals. The perturbations with the largest growth rates are called singular vectors (SVs) or optimal perturbations. They not only play an important role in atmospheric ensemble predictions, but also in the theory of instability and turbulence. The starting point for a classical SV analysis is a linear dynamical system with a known system matrix. In contrast to this traditional approach, measured data are used here to estimate the linear propagator. For this estimation, a method is applied that uses the covariances of the measured time series to find the principal oscillation patterns (POPs), which are the empirically estimated linear eigenmodes of the system. By using the singular value decomposition (SVD), we can estimate the modes of maximal growth of the propagator, which are thus the empirically estimated SVs. These modes can be understood as a superposition of POPs that form a complete but in general non-orthogonal basis. The data used originate from a differentially heated rotating annulus laboratory experiment. This experiment is an analogue of the Earth's atmosphere and is used to study the development of baroclinic waves in a well-controlled and reproducible way without the need for numerical approximations. Baroclinic waves form the background for many studies on SV growth and it is thus straightforward to apply the technique of empirical SV estimation to these laboratory data.
To test the method of SV estimation, we use a quasi-geostrophic barotropic model and compare the known SVs from that model with SVs estimated from a surrogate data set that was generated with the help of the exact model propagator and some
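    The estimation procedure described - a propagator from lagged covariances, then an SVD of that propagator - can be sketched on synthetic data. The 2x2 non-normal matrix below is a hypothetical example chosen so that all eigenmodes are damped and yet transient (singular-vector) growth occurs, which is exactly the effect the abstract highlights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-normal propagator: both eigenvalues damped (0.8, 0.5),
# yet the off-diagonal coupling permits transient growth.
A_true = np.array([[0.8, 1.0],
                   [0.0, 0.5]])

# Generate a long "measured" time series: x[t+1] = A x[t] + noise
n = 20000
x = np.zeros((n, 2))
for t in range(n - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(2)

# Empirical propagator from lag-0 and lag-1 covariances: A ~ C1 @ inv(C0)
c0 = x[:-1].T @ x[:-1] / (n - 1)
c1 = x[1:].T @ x[:-1] / (n - 1)
A_est = c1 @ np.linalg.inv(c0)

# SVD of the estimated propagator: the leading right singular vector is
# the empirical SV (optimal initial perturbation) and the leading
# singular value its one-step amplification factor.
u, s, vt = np.linalg.svd(A_est)
optimal_perturbation, growth = vt[0], s[0]
```

Even though every eigenmode of the estimated propagator decays, the leading singular value exceeds one: the optimal perturbation grows over a finite interval before ultimately decaying.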

  1. Informatics for materials science and engineering data-driven discovery for accelerated experimentation and application

    CERN Document Server

    Rajan, Krishna

    2014-01-01

    Materials informatics, a 'hot topic' area in materials science, aims to combine traditionally bio-led informatics with computational methodologies, supporting more efficient research by identifying strategies for time- and cost-effective analysis. The discovery and maturation of new materials has been outpaced by the thicket of data created by new combinatorial and high-throughput analytical techniques. The elaboration of this "quantitative avalanche" - and the resulting complex, multi-factor analyses required to understand it - means that interest, investment, and research are revisiting in

  2. An Experimental Study of Dynamic Stall on Advanced Airfoil Sections. Volume 2. Pressure and Force Data.

    Science.gov (United States)

    1982-09-01

    base applicable to the retreating-blade stall problem on helicopter rotors, most of the unsteady data accumulated can be classified as large amplitude...

  3. Modeling of testosterone regulation by pulse-modulated feedback: An experimental data study

    Science.gov (United States)

    Mattsson, Per; Medvedev, Alexander

    2013-10-01

    The continuous part of a hybrid (pulse-modulated) model of testosterone feedback regulation is extended with infinite-dimensional and nonlinear dynamics, to better explain the testosterone concentration profiles observed in clinical data. A linear least-squares based optimization algorithm is developed for the purpose of detecting impulses of gonadotropin-releasing hormone from measured concentrations of luteinizing hormone. The parameters in the model are estimated from hormone concentrations measured in human males, and simulation results from the full closed-loop system are provided.
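    The general idea of impulse detection from a hormone that is eliminated between pulses can be illustrated with a much cruder residual test than the paper's least-squares algorithm: flag samples where the measured concentration exceeds the pure first-order-elimination prediction from the previous sample. A hypothetical sketch:

```python
import math

def detect_impulses(times, conc, elim_rate, threshold):
    """Flag sample intervals where the measured concentration exceeds
    the pure-elimination prediction from the previous sample by more
    than `threshold` - a crude residual test, not the paper's estimator."""
    hits = []
    for i in range(len(conc) - 1):
        dt = times[i + 1] - times[i]
        predicted = conc[i] * math.exp(-elim_rate * dt)
        if conc[i + 1] - predicted > threshold:
            hits.append(times[i + 1])
    return hits

# Synthetic concentration series: first-order elimination (rate 0.5)
# with secretion impulses of amplitude 3 arriving at t = 5 and t = 12.
times = list(range(21))
conc = [1.0]
for t in times[1:]:
    c = conc[-1] * math.exp(-0.5)
    if t in (5, 12):
        c += 3.0
    conc.append(c)
hits = detect_impulses(times, conc, 0.5, 1.0)
```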

  4. Validation of raw experimental data during shots at the LIL facility

    Science.gov (United States)

    Henry, Olivier; Domin, Vincent; Romary, Philippe; Raffestin, Didier

    2012-10-01

    The LIL (Laser Integration Line) facility at CESTA (Aquitaine, France) is a facility able to deliver 20 kJ at 3ω. The experiment system includes 13 diagnostics. The facility must be able to deliver, within one hour of a shot, all the results of the plasma diagnostics, alignment images and laser diagnostic measurements. These results have to be guaranteed in terms of conformity to the request and quality of measurement. The LIL has developed a tool for the visualisation, analysis and validation of the data. The main body of the software is written in Delphi; its configuration is based on XML files, and external analysis modules can be written in Python (the language used on the future LMJ). The software is built on three pillars: definition of a validation model prior to the campaign, basic physical models to qualify a signal as compliant and exploitable, and inter-comparison of shots and signals over a given campaign or period. Validation of the raw plasma data serves to validate and guarantee performance, assure conformity of the plasma-diagnostics configuration to the client's request, check the consistency of measurements and, if necessary, trigger corrective maintenance.

  5. Effects of temperature on development, survival and reproduction of insects: experimental design, data analysis and modeling.

    Science.gov (United States)

    Régnière, Jacques; Powell, James; Bentz, Barbara; Nealis, Vincent

    2012-05-01

    The developmental response of insects to temperature is important in understanding the ecology of insect life histories. Temperature-dependent phenology models permit examination of the impacts of temperature on the geographical distributions, population dynamics and management of insects. The measurement of insect developmental, survival and reproductive responses to temperature poses practical challenges because of their modality, variability among individuals and high mortality near the lower and upper threshold temperatures. We address this challenge with an integrated approach to the design of experiments and analysis of data based on maximum likelihood. This approach expands, simplifies and unifies the analysis of laboratory data parameterizing the thermal responses of insects in particular and poikilotherms in general. This approach allows the use of censored observations (records of surviving individuals that have not completed development after a certain time) and accommodates observations from temperature transfer treatments in which individuals pass only a portion of their development at an extreme (near-threshold) temperature and are then placed in optimal conditions to complete their development with a higher rate of survival. Results obtained from this approach are directly applicable to individual-based modeling of insect development, survival and reproduction with respect to temperature. This approach makes possible the development of process-based phenology models that are based on optimal use of available information, and will aid in the development of powerful tools for analyzing eruptive insect population behavior and response to changing climatic conditions. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
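    The maximum-likelihood treatment of censored observations can be sketched with an exponential development-time model: completed individuals contribute the density, while still-developing (censored) individuals contribute the survival function. This is a deliberately simplified stand-in for the authors' full modal, temperature-dependent framework:

```python
import math

def neg_log_likelihood(rate, events, censored):
    """Right-censored exponential likelihood: completed development
    times contribute log f(t) = log(rate) - rate*t; individuals still
    developing at time t contribute log S(t) = -rate*t."""
    ll = sum(math.log(rate) - rate * t for t in events)
    ll += sum(-rate * t for t in censored)
    return -ll

def fit_rate(events, censored, lo=1e-6, hi=10.0, iters=200):
    """1-D golden-section search for the maximum-likelihood rate."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if neg_log_likelihood(c, events, censored) < neg_log_likelihood(d, events, censored):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# Three completed development times, two censored (still developing)
events, censored = [2.0, 3.0, 5.0], [4.0, 6.0]
rate_hat = fit_rate(events, censored)
```

For the exponential case the MLE has the closed form (number of events) / (total time at risk) = 3/20, which the numerical search recovers; richer models (e.g. temperature-dependent rates) need the numerical approach.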

  6. Isothermal titration calorimetry: experimental design, data analysis, and probing macromolecule/ligand binding and kinetic interactions.

    Science.gov (United States)

    Freyer, Matthew W; Lewis, Edwin A

    2008-01-01

    Isothermal titration calorimetry (ITC) is now routinely used to directly characterize the thermodynamics of biopolymer binding interactions and the kinetics of enzyme-catalyzed reactions. This is the result of improvements in ITC instrumentation and data analysis software. Modern ITC instruments make it possible to measure heat effects as small as 0.1 microcal (0.4 microJ), allowing the determination of binding constants, K's, as large as 10^8 - 10^9 M^-1. Modern ITC instruments make it possible to measure heat rates as small as 0.1 microcal/sec, allowing for the precise determination of reaction rates in the range of 10^-12 mol/sec. Values for K_m and k_cat, in the ranges of 10^-2 - 10^3 microM and 0.05 - 500 sec^-1, respectively, can be determined by ITC. This chapter reviews the planning of an optimal ITC experiment for either a binding or kinetic study, guides the reader through simulated sample experiments, and reviews analysis of the data and the interpretation of the results.
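    The 1:1 binding isotherm underlying a typical ITC analysis follows from mass balance: the complex concentration is the root of a quadratic, and each injection's heat is proportional to the change in the amount of complex. A sketch with illustrative parameter values (and without the displaced-volume correction applied in real instrument software):

```python
import math

def bound_complex(m_tot, l_tot, k_assoc):
    """[ML] for 1:1 binding: K = [ML] / ([M][L]) with mass balance
    gives the smaller root of a quadratic. Concentrations in mol/L."""
    b = m_tot + l_tot + 1.0 / k_assoc
    return 0.5 * (b - math.sqrt(b * b - 4.0 * m_tot * l_tot))

def injection_heats(m_tot, l_totals, k_assoc, dh, cell_volume):
    """Incremental heat per injection: enthalpy (J/mol) times the change
    in moles of complex between successive titration points."""
    ml = [bound_complex(m_tot, l, k_assoc) for l in l_totals]
    return [dh * cell_volume * (b - a) for a, b in zip(ml, ml[1:])]
```

As the macromolecule saturates, successive injections bind less ligand, so the heat per injection decays toward zero - the familiar sigmoidal ITC titration curve in differential form.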

  7. The Robotic Scientist: Distilling Natural Laws from Experimental Data, from Cognitive Robotics to Computational Biology

    Energy Technology Data Exchange (ETDEWEB)

    Lipson, Hod [Cornell University

    2011-10-25

    Can machines discover analytical laws automatically? For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. By seeking dynamical invariants and symmetries, we show how we can go from finding just predictive models to finding deeper conservation laws. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems. Application to modeling physical and biological systems will be shown.
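    The core idea - scoring candidate expressions by how invariant they remain along measured trajectories - can be sketched in a few lines. Here a pendulum is simulated and three hypothetical candidate quantities are ranked by their coefficient of variation; the conserved energy-like combination wins. (A real symbolic-regression system would also generate the candidates, which this sketch does not do.)

```python
import math

def simulate_pendulum(theta0, omega0, dt=1e-3, steps=5000, g=9.81, l=1.0):
    """Symplectic-Euler pendulum integration (good energy behavior)."""
    theta, omega, traj = theta0, omega0, []
    for _ in range(steps):
        omega -= (g / l) * math.sin(theta) * dt
        theta += omega * dt
        traj.append((theta, omega))
    return traj

def invariance_score(expr, traj):
    """Lower is better: coefficient of variation of a candidate quantity
    along the trajectory. A conserved quantity stays nearly constant."""
    vals = [expr(th, om) for th, om in traj]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return math.sqrt(var) / (abs(mean) + 1e-12)

traj = simulate_pendulum(1.0, 0.0)
candidates = {
    "omega**2/2 - (g/l)*cos(theta)": lambda th, om: om**2 / 2 - 9.81 * math.cos(th),
    "theta * omega": lambda th, om: th * om,
    "theta + omega": lambda th, om: th + om,
}
best = min(candidates, key=lambda k: invariance_score(candidates[k], traj))
```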

  8. Refinement of the Pion PDF implementing Drell-Yan and Deep Inelastic Scattering Experimental Data

    Science.gov (United States)

    Barry, Patrick; Sato, Nobuo; Melnitchouk, Wally; Ji, Chueng-Ryong

    2017-09-01

    We realize that an abundance of "sea" quarks and gluons (as opposed to three valence quarks) is crucial to understanding the mass and internal structure of the proton. An effective pion cloud exists around the core valence structure. In the Drell-Yan (DY) process, two hadrons collide, one donating a quark and the other donating an antiquark. The quark-antiquark pair annihilate, forming a virtual photon, which creates a lepton-antilepton pair. By measuring their cross-sections, we obtain information about the parton distribution function (PDF) of the hadrons. The PDF is the probability of finding a parton at a momentum fraction of the hadron, x, between 0 and 1. Complementary to the DY process is deep inelastic scattering (DIS). Here, a target nucleon is probed by a lepton, and we investigate the pion cloud of the nucleon. The experiments H1 and ZEUS done at HERA at DESY collect DIS data by detecting a leading neutron (LN). By using nested sampling to generate sets of parameters, we present some preliminary fits of pion PDFs to DY (Fermilab-E615 and CERN-NA10) and LN (H1 and ZEUS) datasets. We aim to perform a full NLO QCD global analysis to determine pion PDFs accurately for all x. There have been no attempts to fit the pion PDF using both low and high x data until now.

  9. A strand specific high resolution normalization method for chip-sequencing data employing multiple experimental control measurements

    DEFF Research Database (Denmark)

    Enroth, Stefan; Andersson, Claes; Andersson, Robin

    2012-01-01

    High-throughput sequencing is becoming the standard tool for investigating protein-DNA interactions or epigenetic modifications. However, the data generated will always contain noise due to e.g. repetitive regions or non-specific antibody interactions. The noise will appear in the form of a background distribution of reads that must be taken into account in the downstream analysis, for example when detecting enriched regions (peak-calling). Several reported peak-callers can take experimental measurements of background tag distribution into account when analysing a data set. Unfortunately...
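    A peak-caller that uses a measured background can be sketched as a per-bin Poisson test against the depth-scaled control track. This illustrates the general idea of control-aware peak-calling, not this paper's specific strand-aware normalization:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    cdf, term = 0.0, math.exp(-lam)
    for i in range(k):
        cdf += term
        term *= lam / (i + 1)
    return max(0.0, 1.0 - cdf)

def call_peaks(treatment, control, alpha=1e-3):
    """Flag bins whose treatment read count is improbably high under a
    Poisson background whose rate comes from the depth-scaled control."""
    scale = sum(treatment) / max(sum(control), 1)
    peaks = []
    for i, (t, c) in enumerate(zip(treatment, control)):
        lam = max(scale * c, 0.5)   # floor avoids zero-rate bins
        if poisson_sf(t, lam) < alpha:
            peaks.append(i)
    return peaks
```

Bins enriched far beyond what the control predicts are reported; bins that are merely high in both tracks (repetitive regions, antibody artifacts) are not.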

  10. Thermochemistry of dihalogen-substituted benzenes: data evaluation using experimental and quantum chemical methods.

    Science.gov (United States)

    Verevkin, Sergey P; Emel'yanenko, Vladimir N; Varfolomeev, Mikhail A; Solomonov, Boris N; Zherikova, Kseniya V; Melkhanova, Svetlana V

    2014-12-11

    Temperature dependence of vapor pressures for 12 dihalogen-substituted benzenes (halogen = F, Cl, Br, I) was studied by the transpiration method, and molar vaporization or sublimation enthalpies were derived. These data, together with results available in the literature, were collected and checked for internal consistency using structure-property correlations. Gas-phase enthalpies of formation of dihalogen-substituted benzenes were calculated using quantum-chemical methods. Evaluated vaporization enthalpies in combination with gas-phase enthalpies of formation were used to estimate liquid-phase enthalpies of formation of dihalogen-substituted benzenes. Pairwise interactions of halogens on the benzene ring were derived and used to develop simple group-additivity procedures for estimating vaporization enthalpies and gas- and liquid-phase enthalpies of formation of dihalogen-substituted benzenes.
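
The group-additivity idea described above reduces to summing a baseline value, per-substituent increments, and a pairwise interaction term for the substitution pattern. A minimal sketch, with placeholder numbers rather than the fitted contributions from the study:

```python
# Toy group-additivity estimate of a vaporization enthalpy (kJ/mol) for a
# dihalobenzene: benzene baseline + substituent increments + an ortho/meta/
# para pairwise interaction. All numbers are illustrative placeholders,
# NOT the contributions derived in the paper.
BENZENE = 33.9
SUBSTITUENT = {"F": 1.5, "Cl": 7.3, "Br": 11.0, "I": 15.5}
PAIRWISE = {"ortho": 1.2, "meta": 0.3, "para": 0.0}

def dhvap_estimate(hal1, hal2, pattern):
    """Sum baseline, two substituent increments, and the halogen-halogen
    pairwise interaction for the given substitution pattern."""
    return BENZENE + SUBSTITUENT[hal1] + SUBSTITUENT[hal2] + PAIRWISE[pattern]

est = dhvap_estimate("Cl", "Cl", "ortho")
```

The appeal of such schemes is that a handful of fitted increments covers the whole family of substitution patterns.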

  11. Determination of critical micelle concentration of cetyltrimethylammonium bromide: Different procedures for analysis of experimental data

    Directory of Open Access Journals (Sweden)

    Goronja Jelena M.

    2016-01-01

    Full Text Available Conductivity of two micellar systems was measured in order to determine the critical micelle concentration (CMC) of cetyltrimethylammonium bromide (CTAB). Those systems were: CTAB in water and CTAB in a binary acetonitrile (ACN)-water mixture. Conductivity (κ)-concentration (c) data were treated by four different methods: the conventional method, differential methods (first and second derivative), and a method of integration (methods A-D, respectively). As the CTAB-in-water micellar system shows a sharp transition between the premicellar and postmicellar parts of the κ/c curve, any of the applied methods gives reliable CMC values and there is no statistically significant difference between them. However, for the CTAB in ACN-water micellar system the integration method for CMC determination is recommended due to the weak curvature of the κ/c plot.
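
The conventional method (method A above) amounts to fitting straight lines to the premicellar and postmicellar branches of the κ-vs-c plot and taking their intersection as the CMC. A minimal sketch on synthetic data; the split point separating the branches and the data themselves are illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def cmc_conventional(c, kappa, split):
    """Fit lines to the premicellar (c < split) and postmicellar (c >= split)
    branches of the conductivity-concentration data and return their
    intersection, the CMC. `split` is a rough visual guess."""
    pre = [(x, y) for x, y in zip(c, kappa) if x < split]
    post = [(x, y) for x, y in zip(c, kappa) if x >= split]
    s1, b1 = fit_line([p[0] for p in pre], [p[1] for p in pre])
    s2, b2 = fit_line([p[0] for p in post], [p[1] for p in post])
    return (b2 - b1) / (s1 - s2)

# Synthetic data with a slope change at c = 1.0 (arbitrary units):
c = [0.2, 0.4, 0.6, 0.8, 1.2, 1.4, 1.6, 1.8]
k = [2 * x if x < 1.0 else 1.0 + x for x in c]
cmc = cmc_conventional(c, k, 1.0)
```

With a weak curvature, as in the ACN-water system, the two-line intersection becomes ill-conditioned, which is exactly why the abstract recommends the integration method there.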

  12. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved by 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine-mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
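
The position-by-position combination step uses Fisher's method: X = -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom under the null. For even degrees of freedom the chi-square tail has a closed form, so the combined p-value can be computed directly:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combination test. X = -2 * sum(ln p_i) is chi-square with
    2k degrees of freedom under the null; for even df = 2k the survival
    function has the closed form exp(-x/2) * sum_{j<k} (x/2)**j / j!."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # x/2
    return math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))

combined = fisher_combine([0.04, 0.03])
```

Two moderately significant maps (p = 0.04 and p = 0.03) combine to roughly p ≈ 0.009, illustrating the gain in significance the abstract reports.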

  13. Experimental data comparing two coral grow-out methods in nursery-raised Acropora cervicornis

    Science.gov (United States)

    Kuffner, Ilsa B.; Bartels, Erich; Stathakopoulos, Anastasios; Enochs, Ian C.; Kolodziej, Graham; Toth, Lauren; Manzello, Derek P.

    2017-01-01

    Staghorn coral, Acropora cervicornis, is a threatened species and the primary focus of western Atlantic reef-restoration efforts to date. As part of the USGS Coral Reef Ecosystems Studies project (http://coastal.er.usgs.gov/crest/), we investigated skeletal characteristics of nursery-grown staghorn coral reared using two commonly used grow-out methods at Mote Tropical Research Laboratory’s offshore nursery. We compared linear extension, calcification rate, and skeletal density of nursery-raised A. cervicornis branches reared for six months either on blocks attached to the substratum or hanging from monofilament line (on PVC “trees”) in the water column. We demonstrate that branches grown on the substratum had significantly higher skeletal density, measured using computerized tomography (CT), and lower linear extension rates compared to water-column fragments. Calcification rates determined with buoyant weighing were not statistically different between the two grow-out methods, but did vary among coral genotypes. Whereas skeletal density and extension rates were plastic traits that depended on environment, calcification rate was conserved. Our results show that the two rearing methods generate the same amount of calcium-carbonate skeleton but produce colonies with different skeletal characteristics, and suggest that genetically based variability in coral-calcification performance exists. The data resulting from this experiment are provided in this data release and are interpreted in Kuffner et al. (2017).
    Reference: Kuffner, I.B., E. Bartels, A. Stathakopoulos, I.C. Enochs, G. Kolodziej, L.T. Toth, and D.P. Manzello, 2017, Plasticity in skeletal characteristics of nursery-raised staghorn coral, Acropora cervicornis: Coral Reefs, in press.

  14. Interaction of ordinary Portland cement and Opalinus Clay: Dual porosity modelling compared to experimental data

    Science.gov (United States)

    Jenni, A.; Gimmi, T.; Alt-Epping, P.; Mäder, U.; Cloet, V.

    2017-06-01

    Interactions between concrete and clays are driven by the strong chemical gradients in pore water and involve mineral reactions in both materials. In the context of a radioactive waste repository, these reactions may influence safety-relevant clay properties such as swelling pressure, permeability or radionuclide retention. Interfaces between ordinary Portland cement and Opalinus Clay show weaker, but more extensive chemical disturbance compared to a contact between low-pH cement and Opalinus Clay. As a consequence of chemical reactions, porosity changes occur at cement-clay interfaces. These changes are stronger and may lead to complete pore clogging in the case of low-pH cements. The prediction of pore clogging by reactive transport simulations is very sensitive to the magnitude of diffusive solute fluxes, cement clinker chemistry, and phase reaction kinetics. For instance, the consideration of anion-depleted porosity in clays substantially influences overall diffusion and pore clogging at interfaces. A new concept of dual porosity modelling approximating Donnan equilibrium is developed and applied to an ordinary Portland cement - Opalinus Clay interface. The model predictions are compared with data from the cement-clay interaction (CI) field experiment in the Mont Terri underground rock laboratory (Switzerland), representing 5 years of interaction. The main observations, such as the decalcification of the cement at the interface, the Mg enrichment in the clay detached from the interface, and the S enrichment in the cement detached from the interface, are qualitatively predicted by the new model approach. The model results reveal multiple coupled processes that create the observed features. The quantitative agreement of modelled and measured data can be improved if uncertainties of key input parameters (tortuosities, reaction kinetics, especially of clay minerals) can be reduced.

  15. Recent experimental data may point to a greater role for osmotic pressures in the subsurface

    Science.gov (United States)

    Neuzil, C.E.; Provost, A.M.

    2009-01-01

    Uncertainty about the origin of anomalous fluid pressures in certain geologic settings has caused researchers to take a second look at osmosis, or flow driven by chemical potential differences, as a pressure-generating process in the subsurface. Interest in geological osmosis has also increased because of an in situ experiment by Neuzil (2000) suggesting that Pierre Shale could generate large osmotic pressures when highly compacted. In the last few years, additional laboratory and in situ experiments have greatly increased the amount of data on osmotic properties of argillaceous formations, but they have not been systematically examined. In this paper we compile these data and explore their implications for osmotic pressure generation in subsurface systems. Rather than base our analysis on osmotic efficiencies, which depend strongly on concentration, we calculated values of a quantity we term osmotic specific surface area (Aso) that, in principle, is a property of the porous medium only. The Aso values are consistent with a surprisingly broad spectrum of osmotic behavior in argillaceous formations, and all the formations tested exhibited at least a modest ability to generate osmotic pressure. It appears possible that under appropriate conditions some formations can be highly effective osmotic membranes able to generate osmotic pressures exceeding 30 MPa (3 km of head) at porosities as high as ≈0.1 and pressures exceeding 10 MPa at porosities as high as ≈0.2. These findings are difficult to reconcile with the lack of compelling field evidence for osmotic pressures, and we propose three explanations for the disparity: (1) our analysis is flawed and argillaceous formations are less effective osmotic membranes than it suggests; (2) the necessary subsurface conditions, significant salinity differences within intact argillaceous formations, are rare; or (3) osmotic pressures are unlikely to be detected and are not recognized when encountered.
    The last possibility, that...
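
For scale, the ideal (van't Hoff) limit relates an osmotic pressure to a salinity contrast as π = σ i R T Δc, with σ a membrane efficiency between 0 and 1. This is a textbook estimate, not the Aso-based analysis of the paper, and the solute choice is an assumption:

```python
R = 8.314  # gas constant, J/(mol*K)

def osmotic_pressure_mpa(delta_c_mol_per_m3, temp_k=298.15, efficiency=1.0, i=2):
    """Ideal van't Hoff osmotic pressure pi = sigma * i * R * T * delta_c,
    converted from Pa to MPa. i=2 treats the solute as NaCl-like; the
    efficiency models an imperfect membrane. Illustrative only."""
    return efficiency * i * R * temp_k * delta_c_mol_per_m3 / 1e6

# A ~0.35 mol/L (350 mol/m^3) NaCl contrast across a perfect membrane:
p = osmotic_pressure_mpa(350.0)
```

Even a perfect membrane needs only a modest contrast to reach ~1.7 MPa, but the 30 MPa figure quoted above corresponds to multi-molar contrasts in highly efficient membranes.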

  16. Using manifold embedding for assessing and predicting protein interactions from high-throughput experimental data.

    Science.gov (United States)

    You, Zhu-Hong; Lei, Ying-Ke; Gui, Jie; Huang, De-Shuang; Zhou, Xiaobo

    2010-11-01

    High-throughput protein interaction data, with ever-increasing volume, are becoming the foundation of many biological discoveries, and thus high-quality protein-protein interaction (PPI) maps are critical for a deeper understanding of cellular processes. However, the unreliability and paucity of currently available PPI data are key obstacles to subsequent quantitative studies. It is therefore highly desirable to develop an approach to deal with these issues from the computational perspective. Most previous works for assessing and predicting protein interactions either need supporting evidence from multiple information resources or are severely impacted by the sparseness of PPI networks. We developed a robust manifold embedding technique for assessing the reliability of interactions and predicting new interactions, which purely utilizes the topological information of PPI networks and can work on a sparse input protein interactome without requiring additional information types. After transforming a given PPI network into a low-dimensional metric space using manifold embedding based on isometric feature mapping (ISOMAP), the problem of assessing and predicting protein interactions is recast into the form of measuring similarity between points of its metric space. Then a reliability index, a likelihood indicating the interaction of two proteins, is assigned to each protein pair in the PPI network based on the similarity between the points in the embedded space. Validation of the proposed method is performed with extensive experiments on densely connected and sparse PPI networks of yeast. Results demonstrate that the interactions ranked top by our method have high functional homogeneity and localization coherence; in particular, our method is efficient for large sparse PPI networks on which traditional algorithms fail. Therefore, the proposed algorithm is a much more promising method for detecting both false positive and false negative interactions.
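
The geodesic-distance step underlying ISOMAP can be illustrated on a toy PPI graph: hop distances via Floyd-Warshall, with a reliability index taken here simply as inverse geodesic distance. This sketch skips the actual low-dimensional embedding and similarity measure used in the paper, so treat it as an analogy, not the authors' algorithm.

```python
def geodesic_distances(edges, nodes):
    """All-pairs shortest-path (hop) distances on an unweighted PPI graph
    via Floyd-Warshall; a small-scale stand-in for ISOMAP's geodesic step."""
    inf = float("inf")
    d = {u: {v: (0 if u == v else inf) for v in nodes} for u in nodes}
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for w in nodes:
        for u in nodes:
            for v in nodes:
                if d[u][w] + d[w][v] < d[u][v]:
                    d[u][v] = d[u][w] + d[w][v]
    return d

def reliability(u, v, d):
    """Simplified reliability index: inverse geodesic distance. The paper
    instead embeds the distances with ISOMAP and scores similarity in the
    embedded metric space."""
    return 1.0 / d[u][v]

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]
d = geodesic_distances(edges, nodes)
```

Non-adjacent proteins that are close in the graph (here A and D, two hops apart) receive a higher index than distant ones, which is the intuition behind predicting missing interactions from topology.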

  17. The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases.

    Science.gov (United States)

    Slater, Anthony Michael

    2014-10-01

    Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.
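
A typical downstream use of curated pKa values in pharmaceutical work is the Henderson-Hasselbalch estimate of the fraction of molecules ionized at a given pH, which feeds solubility and absorption reasoning:

```python
def fraction_ionized(pka, ph, acid=True):
    """Henderson-Hasselbalch ionized fraction. For an acid:
    1 / (1 + 10**(pKa - pH)); for a base (conjugate-acid pKa):
    1 / (1 + 10**(pH - pKa))."""
    exponent = (pka - ph) if acid else (ph - pka)
    return 1.0 / (1.0 + 10.0**exponent)

# A carboxylic acid with pKa ~3.5 at physiological pH 7.4 is almost
# fully ionized; an amine with conjugate-acid pKa ~9.5 is mostly protonated.
f_acid = fraction_ionized(3.5, 7.4)
f_base = fraction_ionized(9.5, 7.4, acid=False)
```

The quality of such estimates rests directly on the accuracy of the underlying pKa determinations, which is the value of well-curated compilations like these.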

  18. Comparisons of RELAP5-3D Analyses to Experimental Data from the Natural Convection Shutdown Heat Removal Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Hu, Rui; Lisowski, Darius; Kraus, Adam

    2016-04-17

    The Reactor Cavity Cooling System (RCCS) is an important passive safety system being incorporated into the overall safety strategy for high-temperature advanced reactor concepts such as the High Temperature Gas-Cooled Reactor (HTGR). The Natural Convection Shutdown Heat Removal Test Facility (NSTF) at Argonne National Laboratory (Argonne) reflects a 1/2-scale model of the primary features of one conceptual air-cooled RCCS design. The project conducts ex-vessel, passive heat removal experiments in support of the Department of Energy Office of Nuclear Energy’s Advanced Reactor Technology (ART) program, while also generating data for code validation purposes. While experiments are being conducted at the NSTF to evaluate the feasibility of the passive RCCS, parallel modeling and simulation efforts are ongoing to support the design, fabrication, and operation of these natural convection systems. Both system-level and high-fidelity computational fluid dynamics (CFD) analyses were performed to gain a complete understanding of the complex flow and heat transfer phenomena in natural convection systems. This paper provides a summary of the RELAP5-3D NSTF model development efforts and provides comparisons between simulation results and experimental data from the NSTF. Overall, the simulation results compared favorably to the experimental data; however, further analyses need to be conducted to investigate the identified differences.

  19. Numerical and experimental study of a high port-density WDM optical packet switch architecture for data centers.

    Science.gov (United States)

    Di Lucente, S; Luo, J; Centelles, R Pueyo; Rohit, A; Zou, S; Williams, K A; Dorren, H J S; Calabretta, N

    2013-01-14

    Data centers have to sustain the rapid growth of data traffic due to the increasing demand for bandwidth-hungry internet services. The current intra-data-center fat-tree topology causes communication bottlenecks in the server interaction process and power-hungry O-E-O conversions that limit the minimum latency and the power efficiency of these systems. In this paper we numerically and experimentally investigate an optical packet switch architecture with a modular structure and highly distributed control that allows configuration times on the order of nanoseconds. Numerical results indicate that the candidate architecture, scaled to over 4000 ports, provides an overall throughput over 50 Tb/s and a packet loss rate below 10^-6 while assuring sub-microsecond latency. We present experimental results that demonstrate the feasibility of a 16x16 optical packet switch based on parallel 1x4 integrated optical cross-connect modules. Error-free operation can be achieved with 4 dB penalty while the overall energy consumption is 66 pJ/b. Based on these results, we discuss the feasibility of scaling the architecture to a much larger port count.
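
The quoted figures imply a simple power budget: aggregate switch power is throughput times energy per bit. A one-line dimensional check:

```python
def switch_power_watts(throughput_bps, energy_per_bit_j):
    """Aggregate power = throughput (bit/s) * energy per bit (J/bit)."""
    return throughput_bps * energy_per_bit_j

# 50 Tb/s at 66 pJ/b gives the switch's aggregate power draw in watts:
p_w = switch_power_watts(50e12, 66e-12)
```

At the quoted operating point this works out to about 3.3 kW for the full 50 Tb/s fabric.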

  20. The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases

    Science.gov (United States)

    Slater, Anthony Michael

    2014-10-01

    Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.

  1. [Interactions of DNA bases with individual water molecules. Molecular mechanics and quantum mechanics computation results vs. experimental data].

    Science.gov (United States)

    Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I

    2013-01-01

    To elucidate details of the DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one of the DNA bases and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio quantum mechanics (QM) method MP2/6-31G(d,p) have been compared with one another and with experimental data. The calculations demonstrated qualitative agreement between the geometric characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base and forming hydrogen bonds with both. Nevertheless, the relative depth of some minima and the peculiarities of mutual water-base positions in these minima depend on the method used. The analysis revealed which differences between the methods are insignificant and which are important for the description of DNA hydration. The MM calculations reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with the set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in crystals of DNA duplex fragments, while some of these data cannot be rationalized by the QM calculations.
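
Molecular-mechanics energy minima of the kind discussed above come from scanning a pairwise potential, typically Coulomb plus Lennard-Jones terms, over geometry. A one-dimensional toy scan with placeholder coefficients, not any published force field:

```python
def pair_energy(q1, q2, r, a=4000.0, b=25.0):
    """Toy atom-atom interaction energy (kcal/mol): a Coulomb term (charges
    in elementary units, r in angstroms, 332.06 converts to kcal/mol) plus
    a 12-6 Lennard-Jones term. The a and b coefficients are illustrative
    placeholders, not a fitted force field."""
    coulomb = 332.06 * q1 * q2 / r
    lennard_jones = a / r**12 - b / r**6
    return coulomb + lennard_jones

# Scan an O...H-like pair (opposite partial charges) to locate the minimum:
rs = [1.5 + 0.01 * i for i in range(150)]
r_min = min(rs, key=lambda r: pair_energy(-0.4, 0.4, r))
```

The attractive Coulomb term and the short-range repulsion balance at an interior minimum, the analogue of the hydrogen-bonded geometries the abstract describes; a real search minimizes over full 3D orientations, not a single distance.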

  2. A Comparison of Predictive Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected Traditional Chemical Warfare Agents and Simulants II: COSMO RS and COSMOTherm

    Science.gov (United States)

    2017-04-01

    ...at ambient temperature. We then directed these descriptions of each molecule to COSMOTherm to calculate boiling point, vapor pressure, and water solubility...

  3. CFD analysis of pressure drop across grid spacers in rod bundles compared to correlations and heavy liquid metal experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Batta, A., E-mail: batta@kit.edu; Class, A.G., E-mail: class@kit.edu

    2017-02-15

    Early studies of the flow in rod bundles with spacer grids suggest that the pressure drop can be decomposed into contributions due to flow-area variations by spacer grids and frictional losses along the rods. For these shape and frictional losses, simple correlations based on theoretical and experimental data have been proposed. In the OECD benchmark study LACANES it was observed that correlations could describe the flow behavior of the heavy liquid metal loop well, including the rod bundle, with the exception of the core region, where different experts chose different pressure-loss correlations for the losses due to spacer grids. Here, RANS–CFD simulations provided results in very good agreement with the experimental data. It was observed that the most commonly applied Rehme correlation underestimated the shape losses. The available correlations relate the pressure drop across a grid spacer to the relative plugging of the spacer, i.e. the solidity e_max. More sophisticated correlations distinguish between spacer grids with round or sharp leading-edge shapes. The purpose of this study is (i) to show that CFD is suitable to predict pressure drop across spacer grids and (ii) to assess the generality of pressure-drop correlations. By verification and validation of CFD results against experimental data obtained in KALLA we show (i). The generality (ii) is challenged by considering three cases which yield identical pressure drop in the correlations. First we test the effect of surface roughness, a parameter not present in the correlations: here we compare a simulation assuming a typical surface roughness representing the experimental situation to a perfectly smooth spacer surface. Second we reverse the flow direction for the spacer grid employed in the experiments, which is asymmetric. The flow-direction reversal is chosen for convenience, since an asymmetric spacer grid with a given blockage ratio may result in different flow situations depending on flow direction. Obviously blockage...
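
Correlations of the Rehme type express the spacer loss as Δp = C_v ε² ρv²/2, where ε is the solidity (blocked flow-area fraction) and C_v a modified loss coefficient. A minimal sketch; the C_v value and operating point below are illustrative assumptions, not figures from the study:

```python
def spacer_pressure_drop(rho, v_bundle, solidity, cv=6.5):
    """Rehme-type spacer loss: dp = Cv * eps**2 * rho * v**2 / 2 (Pa).
    eps is the solidity (blocked flow-area fraction); Cv ~6-7 at high
    Reynolds numbers is a commonly quoted range, 6.5 here is illustrative."""
    return cv * solidity**2 * 0.5 * rho * v_bundle**2

# Roughly LBE-like density (~10500 kg/m^3) at 1 m/s through a 30%-solidity grid:
dp = spacer_pressure_drop(10500.0, 1.0, 0.30)
```

Note that the correlation depends only on ε and the dynamic head, which is exactly why the three cases in the abstract (roughness, reversed flow through an asymmetric grid) collapse to identical correlation predictions while CFD can distinguish them.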

  4. Statistical guidance for experimental design and data analysis of mutation detection in rare monogenic mendelian diseases by exome sequencing.

    Directory of Open Access Journals (Sweden)

    Degui Zhi

    Full Text Available Recently, whole-genome sequencing, especially exome sequencing, has successfully led to the identification of causal mutations for rare monogenic Mendelian diseases. However, it is unclear whether this approach can be generalized and effectively applied to other Mendelian diseases with high locus heterogeneity. Moreover, the current exome sequencing approach has limitations such as false positive and false negative rates of mutation detection due to sequencing errors and other artifacts, but the impact of these limitations on experimental design has not been systematically analyzed. To address these questions, we present a statistical modeling framework to calculate the power, the probability of identifying truly disease-causing genes, under various inheritance models and experimental conditions, providing guidance for both proper experimental design and data analysis. Based on our model, we found that the exome sequencing approach is well-powered for mutation detection in recessive, but not dominant, Mendelian diseases with high locus heterogeneity. A disease gene responsible for as low as 5% of the disease population can be readily identified by sequencing just 200 unrelated patients. Based on these results, for identifying rare Mendelian disease genes, we propose that a viable approach is to combine, sequence, and analyze patients with the same disease together, leveraging the statistical framework presented in this work.
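
The core of such a power calculation is a binomial tail: if a gene accounts for a share q of the disease population, the chance that at least m of n unrelated sequenced patients carry causal mutations in it is P(X ≥ m) with X ~ Binomial(n, q). A toy version that ignores the sequencing false-negative rates the paper's full model includes:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def detection_power(n_patients, gene_share, min_hits):
    """Toy power model: probability that at least `min_hits` of n unrelated
    patients carry causal mutations in a gene explaining `gene_share` of the
    disease population. Ignores mutation-detection false negatives, which
    the paper's full framework models."""
    return 1.0 - sum(binom_pmf(k, n_patients, gene_share)
                     for k in range(min_hits))

power = detection_power(200, 0.05, 3)
```

Consistent with the abstract, 200 patients make detection of a gene explaining 5% of cases (requiring, say, three or more hits) essentially certain in this idealized model.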

  5. Stream invertebrate productivity linked to forest subsidies: 37 stream-years of reference and experimental data.

    Science.gov (United States)

    Wallace, J Bruce; Eggert, Susan L; Meyer, Judy L; Webster, Jackson R

    2015-05-01

    Riparian habitats provide detrital subsidies of varying quantities and qualities to recipient ecosystems. We used long-term data from three reference streams (covering 24 stream-years) and 13-year whole-stream organic matter manipulations to investigate the influence of terrestrial detrital quantity and quality on benthic invertebrate community structure, abundance, biomass, and secondary production in rockface (RF) and mixed substrates (MS) of forested headwater streams. Using a mesh canopy covering the entire treatment stream, we examined effects of litter exclusion, small- and large-wood removal, and addition of artificial wood (PVC) and leaves of varying quality on organic matter standing crops and invertebrate community structure and function. We assessed differences in functional feeding group distribution between substrate types as influenced by organic matter manipulations and long-term patterns of predator and prey production in manipulated vs. reference years. Particulate organic matter standing crops in MS of the treatment stream declined drastically with each successive year of litter exclusion, approaching zero after three years. Monthly invertebrate biomass and annual secondary production were positively related to benthic organic matter in the MS habitats. Rockface habitats exhibited fewer changes than MS habitats across all organic matter manipulations. With leaf addition, the patterns of functional group distribution among MS and RF habitats returned to patterns seen in reference streams. Secondary production per unit organic matter standing crop was greatest for the leaf addition period, followed by the reference streams, and significantly less for the litter exclusion and wood removal periods. These data indicate that the limited organic matter remaining in the stream following litter exclusion and wood removal was more refractory than that in the reference streams, whereas the added leaf material was more labile and readily converted into...

  6. Secondary Neutron Production from Space Radiation Interactions: Advances in Model and Experimental Data Base Development

    Science.gov (United States)

    Heilbronn, Lawrence H.; Townsend, Lawrence W.; Braley, G. Scott; Iwata, Yoshiyuki; Iwase, Hiroshi; Nakamura, Takashi; Ronningen, Reginald M.; Cucinotta, Francis A.

    2003-01-01

    For humans engaged in long-duration missions in deep space or near-Earth orbit, the risk from exposure to galactic and solar cosmic rays is an important factor in the design of spacecraft, spacesuits, and planetary bases. As cosmic rays are transported through shielding materials and human tissue components, a secondary radiation field is produced. Neutrons are an important component of that secondary field, especially in thickly-shielded environments. Calculations predict that 50% of the dose-equivalent in a lunar or Martian base comes from neutrons, and a recent workshop held at the Johnson Space Center concluded that as much as 30% of the dose in the International Space Station may come from secondary neutrons. Accelerator facilities provide a means for measuring the effectiveness of various materials in their ability to limit neutron production, using beams and energies that are present in cosmic radiation. The nearly limitless range of beams, energies, and target materials present in space, however, means that accelerator-based experiments will not provide a complete database of cross sections and thick-target yields necessary to plan and design long-duration missions. As such, accurate nuclear models of neutron production are needed, as well as data sets that can be used to compare with, and verify, the predictions from such models. Improvements in a model of secondary neutron production from heavy-ion interactions are presented here, along with the results from recent accelerator-based measurements of neutron-production cross sections. An analytical knockout-ablation model capable of predicting neutron production from high-energy hadron-hadron interactions (both nucleon-nucleus and nucleus-nucleus collisions) has been previously developed. In the knockout stage, the collision between two nuclei results in the emission of one or more nucleons from the projectile and/or target. The resulting projectile and target remnants, referred to as...

  7. Experimental design-based functional mining and characterization of high-throughput sequencing data in the sequence read archive.

    Science.gov (United States)

    Nakazato, Takeru; Ohta, Tazro; Bono, Hidemasa

    2013-01-01

    High-throughput sequencing technology, also called next-generation sequencing (NGS), has the potential to revolutionize the whole process of genome sequencing, transcriptomics, and epigenetics. Sequencing data are captured in a public primary data archive, the Sequence Read Archive (SRA). As of January 2013, data from more than 14,000 projects have been submitted to SRA, double that of the previous year. Researchers can download raw sequence data from the SRA website to perform further analyses and to compare with their own data. However, it is extremely difficult to search entries and download raw sequences of interest from SRA because the data structure is complicated, and experimental conditions along with raw sequences are partly described in natural language. Additionally, some sequences are of inconsistent quality because anyone can submit sequencing data to SRA with no quality check. Therefore, as a criterion of data quality, we focused on SRA entries that were cited in journal articles. We extracted SRA IDs and PubMed IDs (PMIDs) from SRA and full-text versions of journal articles and retrieved 2748 SRA ID-PMID pairs. We constructed a publication list referring to SRA entries. Since one of the main themes of -omics analyses is the clarification of disease mechanisms, we also characterized SRA entries by disease keywords, according to the Medical Subject Headings (MeSH) extracted from articles assigned to each SRA entry. We obtained 989 SRA ID-MeSH disease term pairs, and constructed a disease list referring to SRA data. We previously developed feature profiles of diseases in a system called "Gendoo". We generated hyperlinks between diseases extracted from SRA and these feature profiles. The project, publication, and disease lists developed in this study are available at our web service, called "DBCLS SRA" (http://sra.dbcls.jp/). This service will improve accessibility to high-quality data from SRA.
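
The pairing of SRA accessions with PubMed IDs can be sketched as a regular-expression pass over article full text. The actual pipeline in the paper is more involved, and the accession patterns below are simplified assumptions:

```python
import re

# SRA accessions look like a prefix (SRR/SRX/SRS/SRP for NCBI, ERR/ERP for
# EBI, DRR/DRP for DDBJ) followed by digits; PMIDs are cited as "PMID: n".
# Both patterns are simplified sketches of the real accession grammar.
SRA_RE = re.compile(r"\b(?:SRR|SRX|SRS|SRP|ERR|ERP|DRR|DRP)\d{5,}\b")
PMID_RE = re.compile(r"PMID:?\s*(\d{6,8})")

def sra_pmid_pairs(article_text):
    """Return sorted (SRA ID, PMID) pairs found in one article's full text."""
    sra_ids = set(SRA_RE.findall(article_text))
    pmids = set(PMID_RE.findall(article_text))
    return sorted((s, p) for s in sra_ids for p in pmids)

# Toy article text with made-up identifiers:
text = "Reads were deposited under SRP012345 (runs SRR1234567). PMID: 123456."
pairs = sra_pmid_pairs(text)
```

Each extracted pair links a data deposit to the publication that cites it, which is the quality criterion the abstract describes.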

  8. Experimental design-based functional mining and characterization of high-throughput sequencing data in the sequence read archive.

    Directory of Open Access Journals (Sweden)

    Takeru Nakazato

    Full Text Available High-throughput sequencing technology, also called next-generation sequencing (NGS), has the potential to revolutionize the whole process of genome sequencing, transcriptomics, and epigenetics. Sequencing data are captured in a public primary data archive, the Sequence Read Archive (SRA). As of January 2013, data from more than 14,000 projects have been submitted to SRA, double that of the previous year. Researchers can download raw sequence data from the SRA website to perform further analyses and to compare with their own data. However, it is extremely difficult to search entries and download raw sequences of interest from SRA because the data structure is complicated, and experimental conditions along with raw sequences are partly described in natural language. Additionally, some sequences are of inconsistent quality because anyone can submit sequencing data to SRA with no quality check. Therefore, as a criterion of data quality, we focused on SRA entries that were cited in journal articles. We extracted SRA IDs and PubMed IDs (PMIDs) from SRA and full-text versions of journal articles and retrieved 2748 SRA ID-PMID pairs. We constructed a publication list referring to SRA entries. Since one of the main themes of -omics analyses is the clarification of disease mechanisms, we also characterized SRA entries by disease keywords, according to the Medical Subject Headings (MeSH) extracted from articles assigned to each SRA entry. We obtained 989 SRA ID-MeSH disease term pairs, and constructed a disease list referring to SRA data. We previously developed feature profiles of diseases in a system called "Gendoo". We generated hyperlinks between diseases extracted from SRA and these feature profiles. The project, publication, and disease lists developed in this study are available at our web service, called "DBCLS SRA" (http://sra.dbcls.jp/). This service will improve accessibility to high-quality data from SRA.

  9. Development of an operant treatment for content word dysfluencies in persistent stuttering children: Initial experimental data

    Science.gov (United States)

    Reed, Phil; Howell, Peter C.; Davis, Steve; Osborne, Lisa A.

    2009-01-01

    A novel behavioral treatment for persistent stuttering is described. Analysis of dysfluent speech shows that children who emit high rates of stuttering on content words in sentences have a poor prognosis for recovery, compared to those who emit high rates of stuttering on function words. This novel technique aimed to reverse the pattern of dysfluencies noted in such children and to reduce stuttering in the short term. To this end, only dysfluent content words were subject to an over-correction procedure. In contrast, dysfluent function words were subject to social approval. The results of two studies indicated that these procedures reduced rates of content word stuttering, even at a post-treatment follow-up assessment, for those with severe, and previously intractable, stuttering. These data suggest the efficacy of behavioral interventions for persistent stuttering, and point to the importance of careful delineation between the parts of speech to be subjected to various contingencies. However, it remains to be seen whether the treatment efficacy was specifically due to targeting particular parts of speech with the stutter-contingent time-outs. PMID:19920870

  10. Identification of enriched PTM crosstalk motifs from large-scale experimental data sets.

    Science.gov (United States)

    Peng, Mao; Scholten, Arjen; Heck, Albert J R; van Breukelen, Bas

    2014-01-03

    Post-translational modifications (PTMs) play an important role in the regulation of protein function. Mass spectrometry based proteomics experiments nowadays identify tens of thousands of PTMs in a single experiment, and a wealth of data has therefore become publicly available. Evidently the biological function of each PTM is the key question to be addressed; however, such analyses focus primarily on single PTM events. This ignores the fact that PTMs may act in concert in the regulation of protein function, a process termed PTM crosstalk. Relatively little is known about the frequency and functional relevance of crosstalk between PTM sites. In a bioinformatics approach, we extracted PTMs occurring in proximity in the protein sequence from publicly available databases. These PTMs and their flanking sequences were subjected to stringent motif searches, including a scoring for evolutionary conservation. Our unprejudiced approach detected a respectable set of motifs, of which about half were described previously; for these, we could add many new proteins harboring the motifs. We also extracted several novel motifs, which through their widespread appearance and high conservation may point to previously unannotated concerted PTM actions. By employing network analyses on these proteins, we propose putative functional roles for these novel motifs with two PTM sites in close proximity.
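
    The first step, collecting PTM sites that lie close together on the same protein, can be sketched as follows. The annotations, UniProt-style identifiers, and the 5-residue window are invented for illustration; the study's actual proximity criterion and data sources may differ.

    ```python
    from itertools import combinations

    # Toy PTM annotations: protein -> [(residue position, modification type)].
    ptms = {
        "P04637": [(15, "phospho"), (18, "acetyl"), (120, "ubiquitin")],
        "P06400": [(249, "phospho"), (252, "phospho")],
    }

    def proximal_pairs(annotations, window=5):
        """Return PTM pairs on the same protein within `window` residues."""
        pairs = []
        for protein, sites in annotations.items():
            for (p1, t1), (p2, t2) in combinations(sorted(sites), 2):
                if p2 - p1 <= window:
                    pairs.append((protein, p1, t1, p2, t2))
        return pairs
    ```

    The pairs returned here would then feed the motif search over the flanking sequences.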

  11. Interpreting experimental data on egg production--applications of dynamic differential equations.

    Science.gov (United States)

    France, J; Lopez, S; Kebreab, E; Dijkstra, J

    2013-09-01

    This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems. Regression equations arising from analytical solutions to linear compartmental schemes are considered as candidate functions for describing egg production curves, together with aspects of parameter estimation. Extant candidate functions are reviewed, a role for growth functions such as the Gompertz equation suggested, and a function based on a simple new model outlined. Structurally, the new model comprises a single pool with an inflow and an outflow. Compartmental simulation models based on nonlinear systems of differential equations, and thus requiring numerical solution, are next discussed, and aspects of parameter estimation considered. This type of model is illustrated in relation to development and evaluation of a dynamic model of calcium and phosphorus flows in layers. The model consists of 8 state variables representing calcium and phosphorus pools in the crop, stomachs, plasma, and bone. The flow equations are described by Michaelis-Menten or mass action forms. Experiments that measure Ca and P uptake in layers fed different calcium concentrations during shell-forming days are used to evaluate the model. In addition to providing a useful management tool, such a simulation model also provides a means to evaluate feeding strategies aimed at reducing excretion of potential pollutants in poultry manure to the environment.
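
    The "single pool with an inflow and an outflow" structure mentioned above corresponds to the linear differential equation dQ/dt = inflow − k·Q. A minimal numerical sketch, using explicit Euler integration and invented parameter values rather than any fitted to egg-production data, might look like:

    ```python
    import math

    def single_pool(inflow, k, q0=0.0, t_end=50.0, dt=0.01):
        """Explicit-Euler solution of the one-pool model dQ/dt = inflow - k*Q."""
        q, t = q0, 0.0
        while t < t_end:
            q += dt * (inflow - k * q)  # Euler step
            t += dt
        return q

    # With Q(0) = 0 the exact solution is (inflow/k) * (1 - exp(-k*t)),
    # so the numerical result can be checked against it.
    q_num = single_pool(inflow=2.0, k=0.1, t_end=50.0)
    q_exact = (2.0 / 0.1) * (1 - math.exp(-0.1 * 50.0))
    ```

    The linear case has a closed-form solution, as the abstract notes; the nonlinear (e.g. Michaelis-Menten) flow forms discussed for the calcium and phosphorus model would require this kind of numerical integration throughout.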

  12. Fourier Analysis: Graphical Animation and Analysis of Experimental Data with Excel

    Directory of Open Access Journals (Sweden)

    Margarida Oliveira

    2012-05-01

    Full Text Available According to Fourier's formulation, any function that can be represented in a graph may be approximated by the "sum" of infinitely many sinusoidal functions (a Fourier series), termed "waves". The adopted approach is accessible to students in the first years of university studies, in which the emphasis is put on the understanding of mathematical concepts through illustrative graphic representations, the students being encouraged to prepare animated Excel-based computational modules (VBA, Visual Basic for Applications). Reference is made to the part played by both trigonometric and complex representations of Fourier series in the concept of the discrete Fourier transform. Its connection with the continuous Fourier transform is demonstrated, and a brief mention is made of the generalization leading to the Laplace transform. As an application, the example presented refers to the analysis of vibrations measured on engineering structures: horizontal accelerations of a one-storey building deriving from ambient noise. This example is integrated in the curriculum of the discipline "Matemática Aplicada à Engenharia Civil" (Mathematics Applied to Civil Engineering), lectured at ISEL (Instituto Superior de Engenharia de Lisboa). In this discipline, the students have the possibility of performing measurements using an accelerometer and a data acquisition system, which, when connected to a PC, make it possible to record the measured accelerations in a file format recognizable by Excel.
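
    The discrete Fourier transform used on such acceleration records is straightforward to write out directly from its definition. The sketch below (in Python rather than the VBA the article uses) applies a naive DFT to a synthetic 5 Hz tone; the sampling rate and signal are illustrative only.

    ```python
    import cmath
    import math

    def dft(x):
        """Naive discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
                for k in range(N)]

    # A 5 Hz test tone sampled at 64 Hz for one second (illustrative values).
    fs, f0, N = 64, 5, 64
    signal = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]
    spectrum = [abs(c) for c in dft(signal)]

    # The dominant bin below the Nyquist frequency should be bin 5 (i.e. 5 Hz).
    peak_bin = max(range(N // 2), key=spectrum.__getitem__)
    ```

    With a real building record, the peak bins identify the dominant vibration frequencies, which is exactly the analysis the article has students perform in Excel.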

  13. The Effect of Cooperation on UWB-Based Positioning Systems Using Experimental Data

    Science.gov (United States)

    Dardari, Davide; Conti, Andrea; Lien, Jaime; Win, Moe Z.

    2008-12-01

    Positioning systems based on ultrawide bandwidth (UWB) technology have recently been considered, especially for indoor environments, due to the ability of UWB signals to resolve multipath and penetrate obstacles. However, line-of-sight (LoS) blockage and excess propagation delay affect ranging measurements, thus drastically reducing positioning accuracy. In this paper, we first characterize and derive models for the range estimation error and the excess delay based on measured data from real ranging devices. These models are used in various multilateration algorithms to determine the position of the target. From measurements in a real indoor scenario, we investigate how the localization accuracy is affected by the number of beacons and by the availability of a priori information about the environment and network geometry. We also examine the case where multiple targets cooperate by measuring ranges not only from the beacons but also from each other. An iterative multilateration algorithm that incorporates information gathered through cooperation is then proposed with the purpose of improving the position estimation accuracy. Using numerical results, we demonstrate the impact of cooperation on the positioning accuracy.
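
    A basic non-cooperative multilateration step can be sketched as follows. Subtracting the first range equation from the others linearizes the problem, which is then solved by least squares; the beacon layout and target position below are made up for illustration and are not from the paper's measurement campaign.

    ```python
    def multilaterate(beacons, ranges):
        """Linearized least-squares 2D position estimate from beacon ranges.

        Subtracting the first range equation (x-x1)^2 + (y-y1)^2 = r1^2 from
        the others yields a linear system A [x, y]^T = b, solved here via the
        2x2 normal equations (A^T A) p = A^T b.
        """
        (x1, y1), r1 = beacons[0], ranges[0]
        A, b = [], []
        for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
            A.append((2 * (xi - x1), 2 * (yi - y1)))
            b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
        s11 = sum(a[0] * a[0] for a in A)
        s12 = sum(a[0] * a[1] for a in A)
        s22 = sum(a[1] * a[1] for a in A)
        t1 = sum(a[0] * bi for a, bi in zip(A, b))
        t2 = sum(a[1] * bi for a, bi in zip(A, b))
        det = s11 * s22 - s12 * s12
        return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

    # Illustrative setup: four corner beacons and a target at (3, 4).
    beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    ranges = [((bx - 3.0) ** 2 + (by - 4.0) ** 2) ** 0.5 for bx, by in beacons]
    estimate = multilaterate(beacons, ranges)
    ```

    With noise-free ranges this recovers the target exactly; the paper's contribution is to feed measured error and excess-delay models into such algorithms, and to extend them iteratively with inter-target range measurements.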

  14. Concussions experienced by Major League Baseball catchers and umpires: field data and experimental baseball impacts.

    Science.gov (United States)

    Beyer, Jeffrey A; Rowson, Steven; Duma, Stefan M

    2012-01-01

    Some reports have shown that head injuries in baseball may comprise up to 18.5% of all competitive sports-related head injuries. The objective of this study was to evaluate the response of catcher and umpire masks to impacts at different regions, in order to identify the impact conditions that represent the greatest risk of injury. A series of 10 events in which a catcher or umpire in Major League Baseball experienced a foul ball to the mask that resulted in a concussion was analyzed through video and data on pitch characteristics. It was found that the impacts were distributed across the face, and the median plate speed was approximately 38 m/s (84 mph). To determine the relative severity of each identified impact location, an instrumented Hybrid III head outfitted with a catcher or umpire mask was impacted with baseballs. Testing at 27 and 38 m/s (60 and 84 mph) suggested that impacts to the center-eyebrow and chin locations were the most severe. Peak linear and rotational accelerations were found to be lower than the suggested injury thresholds. While impacts to a mask result in head accelerations that are near or below levels commonly associated with the lower limits for head injury, the exact injury mechanism is unclear, as concussions are still experienced by the mask wearers.

  15. Breaking of axial symmetry in excited nuclei as identified in experimental data

    Science.gov (United States)

    Junghans, Arnd R.; Grosse, Eckart; Massarczyk, Ralph

    2017-09-01

    A phenomenological prediction for radiative neutron capture is presented and compared to recent compilations of Maxwellian averaged cross sections and average radiative widths. Photon strength functions and nuclear level densities near the neutron separation energy are extracted from data without the assumption of axial symmetry, at variance with common usage. A satisfactory description is reached with a small number of global parameters when theoretical predictions on triaxiality (from constrained HFB calculations with the Gogny D1S interaction) are inserted into conventional calculations of radiative neutron capture. The photon strength is parametrized as the sum of three Lorentzians (TLO) in accordance with the dipole sum rule. The positions and widths are accounted for by the droplet model with surface dissipation, without locally adjusted parameters. Level densities are strongly influenced by the significant collective enhancement based on the breaking of axial symmetry. With the less stringent requirement of invariance against rotation by 180∘, a global set of parameters is found that allows the photon strength function and the level densities to be described over the nuclear mass range 50 < A < 250.
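
    The triple-Lorentzian (TLO) parametrization mentioned above can be sketched numerically. The resonance parameters below are invented for illustration; in the paper the positions and widths come from the droplet model with surface dissipation, not from ad hoc values like these.

    ```python
    def lorentzian(e, e0, gamma, sigma0):
        """Standard Lorentzian cross-section shape for one dipole mode."""
        return sigma0 * (e**2 * gamma**2) / ((e**2 - e0**2)**2 + e**2 * gamma**2)

    def tlo(e, modes):
        """Triple-Lorentzian (TLO) strength: sum over three dipole modes,
        as used for a triaxial nucleus."""
        return sum(lorentzian(e, e0, g, s0) for (e0, g, s0) in modes)

    # Three hypothetical GDR components: (energy MeV, width MeV, peak value mb).
    modes = [(11.0, 2.5, 200.0), (13.5, 3.0, 180.0), (15.5, 4.0, 160.0)]
    strength_at_13 = tlo(13.0, modes)
    ```

    Each Lorentzian peaks at its resonance energy with the given peak value, so the three split components of a triaxial nucleus produce a broadened, structured total strength.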

  16. Breaking of axial symmetry in excited nuclei as identified in experimental data

    Directory of Open Access Journals (Sweden)

    Junghans Arnd R.

    2017-01-01

    Full Text Available A phenomenological prediction for radiative neutron capture is presented and compared to recent compilations of Maxwellian averaged cross sections and average radiative widths. Photon strength functions and nuclear level densities near the neutron separation energy are extracted from data without the assumption of axial symmetry – at variance with common usage. A satisfactory description is reached with a small number of global parameters when theoretical predictions on triaxiality (from constrained HFB calculations with the Gogny D1S interaction) are inserted into conventional calculations of radiative neutron capture. The photon strength is parametrized as the sum of three Lorentzians (TLO) in accordance with the dipole sum rule. The positions and widths are accounted for by the droplet model with surface dissipation, without locally adjusted parameters. Level densities are strongly influenced by the significant collective enhancement based on the breaking of axial symmetry. With the less stringent requirement of invariance against rotation by 180∘, a global set of parameters is found that allows the photon strength function and the level densities to be described over the nuclear mass range 50 < A < 250.

  17. Finding a home for experimental data in terrestrial biosphere models: An empiricist's perspective

    Science.gov (United States)

    Iversen, C. M.

    2015-12-01

    Terrestrial biosphere models are necessary to project the integrated effects of processes and feedbacks on the climate system over the next 100 years. A tension exists between the representation of ecosystem processes in terrestrial biosphere models, which must necessarily be coarse, and the overwhelming complexity of processes that empiricists observe in the natural world. Working together, modelers and empiricists can defuse this tension by targeting the experiments and observations needed to resolve model uncertainty. I have learned a few lessons in the emerging realm of model-experiment interaction ('Mod-Ex'): (1) Complaining about 'bad' or unrealistic representation of processes in models is unhelpful. Modelers are already in a position where they need expertise in any number of disciplines; no one person can be an expert in all. Instead, we (empirical scientists) need to proactively provide the information needed for model parameters and processes. This may require a global database. (2) Model needs are nearly always broader than narrow empirical questions. What ecologists might think of as 'the boring background information'—meteorology, soil processes, site history—is all necessary to put important ecological processes in a modeling context. (3) Data collected to inform a model are more meaningful if they consider the way that models necessarily function (e.g., reaching an equilibrium state before projection into the future can begin). For example, the SPRUCE experiment used a regression design, rather than an ANOVA design, to allow models to predict response thresholds rather than the experiment providing a 'yes' or 'no' answer. (4) Empiricists have an important role to play in guiding and providing constraints on the scaling of their small-scale measurements to the temporal and spatial scales needed for large-scale global models. This interaction will be facilitated by a move to trait-based modeling, which seeks to capture the variation within a

  18. Comparison between the numerical model CHENSI and experimental data (MUST) in the case of the 0° approach flow.

    Directory of Open Access Journals (Sweden)

    Medjahed Bendida

    2014-04-01

    Full Text Available The MUST wind tunnel data set served as a validation case for obstacle-resolving micro-scale models in the COST Action 732 “Quality Assurance and Improvement of Micro-Scale Meteorological Models”. The code used for the numerical simulation is the code CHENSI. The simulations carried out showed a certain degree of agreement between the experimental results and those of the numerical simulation; they highlight the need for a further experimental campaign with more measurements, and the need for good control of the determining factors when exploiting its results. The aim is to explain the experimental data obtained with atmospheric wind on the physical model. The Mock Urban Setting Test (MUST) site was selected to be simulated by the code CHENSI, developed by the Dynamique de l'Atmosphère Habitée team of LME/ECN. The code is based on the k–ε model of Launder and Spalding. For the integration of the partial differential equations (PDEs) that constitute the mathematical model, the finite volume method of Ferziger and Perić was used, with the staggered (MAC) arrangement of unknowns of Harlow and Welch for the discretisation of the PDE terms. The boundary conditions were imposed according to wall laws (on the ground and on buildings), Dirichlet conditions (inlet boundary), or Neumann conditions (outlet boundary and top limit). The numerical domain used was comparable to that of the atmospheric wind experiments, with a three-dimensional Cartesian mesh. Numerical results are presented in this study for the mean flow field and the turbulent kinetic energy for the 0° wind incidence direction, allowing an objective comparison of the CHENSI model's performance with other European codes used to simulate the MUST configuration. The results obtained by the numerical modelling approach are presented in this paper.

  19. Comparison of neurofuzzy logic and decision trees in discovering knowledge from experimental data of an immediate release tablet formulation.

    Science.gov (United States)

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    Understanding of the cause-effect relationships between formulation ingredients, process conditions and product properties is essential for developing a quality product. However, the formulation knowledge is often hidden in experimental data and not easily interpretable. This study compares neurofuzzy logic and decision tree approaches in discovering hidden knowledge from an immediate release tablet formulation database relating formulation ingredients (silica aerogel, magnesium stearate, microcrystalline cellulose and sodium carboxymethylcellulose) and process variables (dwell time and compression force) to tablet properties (tensile strength, disintegration time, friability, capping and drug dissolution at various time intervals). Both approaches successfully generated useful knowledge in the form of either "if then" rules or decision trees. Although different strategies are employed by the two approaches in generating rules/trees, similar knowledge was discovered in most cases. However, as decision trees are not able to deal with continuous dependent variables, data discretisation procedures are generally required.

  20. Linking learner corpus and experimental data in studying second language learners’ knowledge of verb-argument constructions

    Directory of Open Access Journals (Sweden)

    Römer Ute

    2014-04-01

    Full Text Available This paper combines data from learner corpora and psycholinguistic experiments in an attempt to find out what advanced learners of English (first language backgrounds: German and Spanish) know about a range of common verb-argument constructions (VACs), such as the ‘V about n’ construction (e.g. she thinks about chocolate a lot). Learners’ dominant verb-VAC associations are examined based on evidence retrieved from the German and Spanish subcomponents of ICLE and LINDSEI and collected in lexical production tasks in which participants complete VAC frames (e.g. ‘he ___ about the...’) with verbs that may fill the blank (e.g. talked, thought, wondered). The paper compares findings from the different data sets and highlights the value of linking corpus and experimental evidence in studying linguistic phenomena.
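
    Ranking the verbs that fill a VAC slot by frequency is the core of the corpus side of such a study. A minimal sketch, using an invented token list rather than actual ICLE/LINDSEI data, might be:

    ```python
    from collections import Counter

    # Toy corpus hits for the 'V about n' frame; tokens are invented examples.
    vac_hits = ["talk", "think", "talk", "worry", "think", "talk", "wonder"]

    def dominant_verbs(hits, top=3):
        """Rank verbs filling a verb-argument-construction slot by frequency."""
        return Counter(hits).most_common(top)
    ```

    The resulting frequency ranking for each frame can then be compared against the verbs participants produce in the lexical production task.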

  1. Pre-Equilibrium Emission in Differential Cross-Section Calculations and Analysis of Experimental Data for 232Th

    Science.gov (United States)

    Tel, E.; Demirkol, I.; Arasoğlu, A.; Şarer, B.

    In this study, neutron-emission spectra produced by (n,xn) reactions on the nucleus 232Th have been calculated. Angle-integrated cross-sections for neutron-induced reactions on the 232Th target have been calculated at bombarding energies from 2 MeV to 18 MeV. We have investigated the multiple pre-equilibrium matrix element constant for internal transitions for the 232Th (n,xn) neutron emission spectra. In the calculations, the geometry-dependent hybrid model and the cascade exciton model, including the effects of pre-equilibrium, have been used. Pre-equilibrium direct effects have been examined using the full exciton model. In addition, we describe how multiple pre-equilibrium emissions can be included in the Feshbach-Kerman-Koonin (FKK) fully quantum-mechanical theory. By analyzing the (n,xn) reaction on 232Th at incident energies from 2 MeV to 18 MeV, the importance of multiple pre-equilibrium emission can be seen clearly. All calculated results have been discussed and compared with the available experimental data, with which they are in agreement.

  2. Phase equilibrium of liquid mixtures: experimental and modeled data using statistical associating fluid theory for potential of variable range approach.

    Science.gov (United States)

    Giner, Beatriz; Bandrés, Isabel; López, M Carmen; Lafuente, Carlos; Galindo, Amparo

    2007-10-14

    A study of the phase equilibrium (experimental and modeled) of mixtures formed by a cyclic ether and haloalkanes has been carried out. Experimental data for the isothermal vapor-liquid equilibrium of mixtures formed by tetrahydrofuran and tetrahydropyran with isomeric chlorobutanes at temperatures of 298.15, 313.15, and 328.15 K are presented. Experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. The statistical associating fluid theory for potential of variable range (SAFT-VR) approach, together with standard combining rules without adjustable parameters, has been used to model the phase equilibrium. Good agreement between experiment and prediction is found with such a model. Mean absolute deviations for pressures are of the order of 1 kPa, while those for vapor-phase compositions are less than 0.013 mole fraction. In order to improve these results, a new modeling has been carried out by introducing a single transferable parameter k(ij), which modifies the strength of the dispersion interaction between unlike components in the mixtures, is valid for all the studied mixtures, and is neither temperature nor pressure dependent. This parameter, together with the SAFT-VR approach, provides a description of the vapor-liquid equilibrium of the mixtures that is in excellent agreement with the experimental data in most cases. The absolute deviations are of the order of 0.005 mole fraction for vapor-phase compositions and less than 0.3 kPa for pressure, except for mixtures containing 2-chloro-2-methylpropane, for which the pressure deviations are larger. Results obtained in this work in the modeling of the phase equilibrium with the SAFT-VR equation of state have been compared to those obtained in a previous study in which the approach was used to model similar mixtures with clear differences in thermodynamic behavior.

  3. ANITA-2000 activation code package - updating of the decay data libraries and validation on the experimental data of the 14 MeV Frascati Neutron Generator

    Directory of Open Access Journals (Sweden)

    Frisoni Manuela

    2016-01-01

    Full Text Available ANITA-2000 is a code package for the activation characterization of materials exposed to neutron irradiation, released by ENEA to the OECD-NEADB and ORNL-RSICC. The main component of the package is the activation code ANITA-4M, which computes the radioactive inventory of a material exposed to neutron irradiation. The code requires a decay data library (file fl1) containing the quantities describing the decay properties of the unstable nuclides and a library (file fl2) containing the gamma-ray spectra emitted by the radioactive nuclei. The fl1 and fl2 files of the ANITA-2000 code package, originally based on the evaluated nuclear data library FENDL/D-2.0, were recently updated on the basis of the JEFF-3.1.1 Radioactive Decay Data Library. This paper presents the results of the validation of the new fl1 decay data library through comparison of the ANITA-4M calculated values with the measured electron and photon decay heats and activities of fusion material samples irradiated at the 14 MeV Frascati Neutron Generator (FNG) of the ENEA Frascati Research Centre. Twelve material samples were considered, namely Mo, Cu, Hf, Mg, Ni, Cd, Sn, Re, Ti, W, Ag and Al. The ratios between calculated and experimental values (C/E) are shown and discussed in this paper.

  4. Public Attitudes toward Consent and Data Sharing in Biobank Research: A Large Multi-site Experimental Survey in the US.

    Science.gov (United States)

    Sanderson, Saskia C; Brothers, Kyle B; Mercaldo, Nathaniel D; Clayton, Ellen Wright; Antommaria, Armand H Matheny; Aufox, Sharon A; Brilliant, Murray H; Campos, Diego; Carrell, David S; Connolly, John; Conway, Pat; Fullerton, Stephanie M; Garrison, Nanibaa' A; Horowitz, Carol R; Jarvik, Gail P; Kaufman, David; Kitchner, Terrie E; Li, Rongling; Ludman, Evette J; McCarty, Catherine A; McCormick, Jennifer B; McManus, Valerie D; Myers, Melanie F; Scrol, Aaron; Williams, Janet L; Shrubsole, Martha J; Schildcrout, Jonathan S; Smith, Maureen E; Holm, Ingrid A

    2017-03-02

    Individuals participating in biobanks and other large research projects are increasingly asked to provide broad consent for open-ended research use and widespread sharing of their biosamples and data. We assessed willingness to participate in a biobank using different consent and data sharing models, hypothesizing that willingness would be higher under more restrictive scenarios. Perceived benefits, concerns, and information needs were also assessed. In this experimental survey, individuals from 11 US healthcare systems in the Electronic Medical Records and Genomics (eMERGE) Network were randomly allocated to one of three hypothetical scenarios: tiered consent and controlled data sharing; broad consent and controlled data sharing; or broad consent and open data sharing. Of 82,328 eligible individuals, exactly 13,000 (15.8%) completed the survey. Overall, 66% (95% CI: 63%-69%) of population-weighted respondents stated they would be willing to participate in a biobank; willingness and attitudes did not differ between respondents in the three scenarios. Willingness to participate was associated with self-identified white race, higher educational attainment, lower religiosity, perceiving more research benefits, fewer concerns, and fewer information needs. Most (86%, CI: 84%-87%) participants would want to know what would happen if a researcher misused their health information; fewer (51%, CI: 47%-55%) would worry about their privacy. The concern that the use of broad consent and open data sharing could adversely affect participant recruitment is not supported by these findings. Addressing potential participants' concerns and information needs and building trust and relationships with communities may increase acceptance of broad consent and wide data sharing in biobank research. Copyright © 2017 American Society of Human Genetics. All rights reserved.

  5. Generalization of experimental data on amplitude and frequency of oscillations induced by steam injection into a subcooled pool

    Energy Technology Data Exchange (ETDEWEB)

    Villanueva, Walter; Li, Hua [Division of Nuclear Power Safety, Royal Institute of Technology (KTH), Roslagstullsbacken 21, SE-10691 Stockholm (Sweden); Puustinen, Markku [Nuclear Engineering, LUT School of Energy Systems, Lappeenranta University of Technology (LUT), FIN-53851 Lappeenranta (Finland); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology (KTH), Roslagstullsbacken 21, SE-10691 Stockholm (Sweden)

    2015-12-15

    Highlights: • Available data on steam injection into a subcooled pool are generalized. • A scaling approach is proposed for the amplitude and frequency of chugging oscillations. • The scaled amplitude has a maximum at Froude number Fr ≈ 2.8. • The scaled frequency has a minimum at Fr ≈ 6. • Both amplitude and frequency have a strong dependence on pool bulk temperature. - Abstract: Steam venting and condensation into a subcooled pool of water through a blowdown pipe can undergo a phenomenon called chugging, which is an oscillation of the steam–water interface inside the blowdown pipe. The momentum generated by the oscillations is directly proportional to the oscillations’ amplitude and frequency, according to synthetic jet theory. Higher momentum can enhance pool mixing and positively affect the pool's pressure suppression capacity by reducing thermal stratification. In this paper, we present a generalization of the available experimental data on the amplitude and frequency of oscillations during chugging. We use experimental data obtained in different facilities at different scales to suggest a scaling approach for the non-dimensional amplitude and frequency of the oscillations. We demonstrate that the Froude number Fr (which relates inertial forces to gravitational forces) can be used as a scaling criterion in this case. The amplitude has a maximum at Fr ≈ 2.8. There is also a strong dependence of the amplitude on temperature; the lower the bulk temperature, the higher the scaled amplitude. A known analytical theory can only capture the decreasing trend in amplitude for Fr > 2.8 and fails to capture the increasing trend and the temperature dependence. Similarly, there is a minimum of the non-dimensional frequency at Fr ≈ 6. A strong dependence on temperature is also observed for Fr > 6; the lower the bulk temperature, the higher the scaled frequency. The known analytical theory is able to capture qualitatively the general trend in
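
    The Froude number used as the scaling criterion above is a simple ratio of inertial to gravitational forces. A sketch follows; taking the blowdown-pipe diameter as the length scale and the values shown are assumptions for illustration, since the paper defines its own non-dimensionalization.

    ```python
    import math

    def froude(velocity, diameter, g=9.81):
        """Froude number Fr = U / sqrt(g*D): inertial vs. gravitational forces."""
        return velocity / math.sqrt(g * diameter)

    # Illustrative values: 5 m/s injection velocity through a 0.2 m pipe.
    fr = froude(5.0, 0.2)
    ```

    Collapsing data from differently sized facilities onto curves in Fr (amplitude peaking near Fr ≈ 2.8, frequency dipping near Fr ≈ 6) is what makes Fr useful as a scaling criterion here.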

  6. Evaluation of climatic data, post-treatment water yield and snowpack differences between closed and open stands of lodgepole pine on Tenderfoot Creek Experimental Forest

    Science.gov (United States)

    Phillip E. Farnes; Katherine J. Hansen

    2002-01-01

    Data collection on the Tenderfoot Creek Experimental Forest was initiated in 1992 and has expanded to the present time. A preliminary report was prepared covering data collection through the 1995 season (Farnes et al., 1995). Some data were updated in Farnes et al. (1999). Since then, data have been collected but have not been edited, summarized or tabulated in electronic form...

  7. An innovative experimental sequence on electromagnetic induction and eddy currents based on video analysis and cheap data acquisition

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2017-11-01

    In this work, we present a coherent sequence of experiments on electromagnetic (EM) induction and eddy currents, appropriate for university undergraduate students, based on a magnet falling through a drilled aluminum disk. The sequence, leveraging the didactical interplay between the EM and mechanical aspects of the experiments, allows us to exploit the students’ awareness of mechanics to elicit their comprehension of EM phenomena. The proposed experiments feature two kinds of measurements: (i) kinematic measurements (performed by means of high-speed video analysis) give information on the system’s kinematics and, via appropriate numerical data processing, allow us to obtain dynamic information, in particular on energy dissipation; (ii) induced electromotive force (EMF) measurements (using a homemade multi-coil sensor connected to a cheap data acquisition system) allow us to quantitatively determine the inductive effects of the moving magnet on its neighborhood. The comparison between experimental results and the predictions of an appropriate theoretical model (of the dissipative coupling between the moving magnet and the conducting disk) offers many educational hints on relevant topics related to EM induction, such as Maxwell’s displacement current, magnetic field flux variation, and the conceptual link between induced EMF and induced currents. Moreover, the didactical activity gives students the opportunity to be trained in video analysis, data acquisition, and numerical data processing.

  8. The Three-Level Synthesis of Standardized Single-Subject Experimental Data: A Monte Carlo Simulation Study.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2013-09-01

    Previous research indicates that three-level modeling is a valid statistical method to make inferences from unstandardized data from a set of single-subject experimental studies, especially when a homogeneous set of at least 30 studies is included (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a). When single-subject data from multiple studies are combined, however, it often occurs that the dependent variable is measured on a different scale, requiring standardization of the data before combining them over studies. One approach is to divide the dependent variable by the residual standard deviation. In this study we use Monte Carlo methods to evaluate this approach. We examine how well the fixed effects (e.g., immediate treatment effect and treatment effect on the time trend) and the variance components (the between- and within-subject variance) are estimated under a number of realistic conditions. The three-level synthesis of standardized single-subject data is found appropriate for the estimation of the treatment effects, especially when many studies (30 or more) and many measurement occasions within subjects (20 or more) are included and when the studies are rather homogeneous (with small between-study variance). The estimates of the variance components are less accurate.
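    The standardization step the abstract describes (dividing the outcome by the residual standard deviation of a within-subject regression) can be sketched as follows; the toy data, phase coding, and regression design are illustrative assumptions, not the study's actual specification:

```python
import numpy as np

# Toy single-subject series: 10 baseline and 10 treatment observations,
# with an immediate treatment effect and a linear time trend.
rng = np.random.default_rng(1)
time = np.arange(20, dtype=float)
phase = (time >= 10).astype(float)       # 0 = baseline, 1 = treatment
y = 2.0 + 1.5 * phase + 0.1 * time + rng.normal(0.0, 0.5, 20)

# OLS with intercept, treatment indicator, and time trend
X = np.column_stack([np.ones_like(time), phase, time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s = np.sqrt(resid @ resid / (len(y) - X.shape[1]))  # residual SD

y_std = y / s   # standardized scores, comparable across studies/scales
```

    After this step, effect estimates from studies measured on different scales are on a common metric and can enter the three-level synthesis together.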

  9. The Utilization of Historical Data and Geospatial Technology Advances at the Jornada Experimental Range to Support Western America Ranching Culture

    Directory of Open Access Journals (Sweden)

    Kris Havstad

    2011-09-01

    Full Text Available By the early 1900s, concerns were expressed by ranchers, academicians, and federal scientists that widespread overgrazing and invasion of native grassland by woody shrubs were having severe negative impacts upon normal grazing practices in Western America. Ranchers wanted to reverse these trends and continue their way of life, and were willing to work with scientists to achieve these goals. One response to this desire was the establishment of the USDA Jornada Experimental Range (783 km2) in south central New Mexico by a Presidential Executive Order in 1912 for conducting rangeland investigations. This cooperative effort involved experiments to understand principles of proper management and the processes causing the woody shrub invasion, as well as to identify treatments to eradicate shrubs. By the late 1940s, it was apparent that combining the historical ground-based data accumulated at the Jornada Experimental Range with rapidly expanding post-World War II technologies would yield a better understanding of the driving processes in these arid and semiarid ecosystems, which could then lead to improved rangeland management practices. One specific technology was the use of aerial photography to interpret landscape resource conditions. The assembly and utilization of long-term historical aerial photography data sets has occurred over the last half century. More recently, Global Positioning System (GPS) techniques have been used in a myriad of scientific endeavors, including efforts to accurately locate historical and contemporary treatment plots and to track research animals including livestock and wildlife. As an incredible amount of both spatial and temporal data became available, Geographic Information Systems have been exploited to display various layers of data over the same locations. Subsequent analyses of these data layers have begun to yield new insights. The most recent technological development has been the deployment of Unmanned Aerial Vehicles (UAVs)...

  10. Assessment of leaf carotenoids content with a new carotenoid index: Development and validation on experimental and model data

    Science.gov (United States)

    Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele

    2017-05-01

    Leaf carotenoids content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight into early detection of stress in vegetation. With spectroscopy techniques, a semi-empirical approach based on spectral indices was extensively used for carotenoids content estimation. However, established spectral indices for carotenoids that generally rely on limited measured data might lack predictive accuracy for carotenoids estimation in various species and at different growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that CARI was the index most linearly correlated with carotenoids content at the leaf level using a synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2), compared with published spectral indices. Cross-validation results with CARI using ANGERS data achieved quite an accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), though RBRI performed best (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf level field survey data, though PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and other assessed spectral indices showed a good performance for a given dataset, overall their estimation accuracy was not consistent across all datasets used in this study. Conversely, CARI was more robust, showing good results across all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI might not be very sensitive to LCar changes at low leaf area index (LAI) value, and in these conditions soil moisture

  11. New constraints on kinetic isotope effects during CO2(aq) hydration and hydroxylation: Revisiting theoretical and experimental data

    Science.gov (United States)

    Sade, Ziv; Halevy, Itay

    2017-10-01

    CO2 (de)hydration (i.e., CO2 hydration/HCO3- dehydration) and (de)hydroxylation (i.e., CO2 hydroxylation/HCO3- dehydroxylation) are key reactions in the dissolved inorganic carbon (DIC) system. Kinetic isotope effects (KIEs) during these reactions are likely to be expressed in the DIC and recorded in carbonate minerals formed during CO2 degassing or dissolution of gaseous CO2. Thus, a better understanding of KIEs during CO2 (de)hydration and (de)hydroxylation would improve interpretations of disequilibrium compositions in carbonate minerals. To date, the literature lacks direct experimental constraints on most of the oxygen KIEs associated with these reactions. In addition, theoretical estimates describe oxygen KIEs during separate individual reactions. The KIEs of the related reverse reactions were neither derived directly nor calculated from a link to the equilibrium fractionation. Consequently, KIE estimates of experimental and theoretical studies have been difficult to compare. Here we revisit experimental and theoretical data to provide new constraints on oxygen KIEs during CO2 (de)hydration and (de)hydroxylation. For this purpose, we provide a clearer definition of the KIEs and relate them both to isotopic rate constants and equilibrium fractionations. Such relations are well founded in studies of single isotope source/sink reactions, but they have not been established for reactions that involve dual isotopic sources/sinks, such as CO2 (de)hydration and (de)hydroxylation. We apply the new quantitative constraints on the KIEs to investigate fractionations during simultaneous CaCO3 precipitation and HCO3- dehydration far from equilibrium.

  12. Conductance-based refractory density approach: comparison with experimental data and generalization to lognormal distribution of input current.

    Science.gov (United States)

    Chizhov, Anton V

    2017-12-01

    The conductance-based refractory density (CBRD) approach is an efficient tool for modeling interacting neuronal populations. The model describes the firing activity of a statistical ensemble of uncoupled Hodgkin-Huxley-like neurons, each receiving individual Gaussian noise and a common time-varying deterministic input. However, the approach requires experimental validation and extension to cases of distributed input signals (or input weights) among different neurons of such an ensemble. Here the CBRD model is verified by comparing with experimental data and then generalized for a lognormal (LN) distribution of the input weights. The model with equal weights is shown to reproduce efficiently the post-spike time histograms and the membrane voltage of experimental multiple trial response of single neurons to a step-wise current injection. The responses reveal a more rapid reaction of the firing-rate than voltage. Slow adaptive potassium channels strongly affected the shape of the responses. Next, a computationally efficient CBRD model is derived for a population with the LN input weight distribution and is compared with the original model with equal input weights. The analysis shows that the LN distribution: (1) provides a faster response, (2) eliminates oscillations, (3) leads to higher sensitivity to weak stimuli, and (4) increases the coefficient of variation of interspike intervals. In addition, a simplified firing-rate type model is tested, showing improved precision in the case of a LN distribution of weights. The CBRD approach is recommended for complex, biophysically detailed simulations of interacting neuronal populations, while the modified firing-rate type model is recommended for computationally reduced simulations.

  13. Evaluation of the Oh, Dubois and IEM Backscatter Models Using a Large Dataset of SAR Data and Experimental Soil Measurements

    Directory of Open Access Journals (Sweden)

    Mohammad Choker

    2017-01-01

    Full Text Available The aim of this paper is to evaluate the most used radar backscattering models (Integral Equation Model “IEM”, Oh, Dubois, and Advanced Integral Equation Model “AIEM”) using a wide dataset of SAR (Synthetic Aperture Radar) data and experimental soil measurements. These forward models reproduce the radar backscattering coefficients (σ0) from soil surface characteristics (dielectric constant, roughness) and SAR sensor parameters (radar wavelength, incidence angle, polarization). The analysis dataset is composed of AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR, ERS, RADARSAT, ASAR and TerraSAR-X data and in situ measurements (soil moisture and surface roughness). Results show that the Oh model version developed in 1992 gives the best fit of the backscattering coefficients in HH and VV polarizations, with RMSE values of 2.6 dB and 2.4 dB, respectively. Simulations performed with the Dubois model show a poor correlation between real data and model simulations in HH polarization (RMSE = 4.0 dB) and better correlation with real data in VV polarization (RMSE = 2.9 dB). The IEM and the AIEM simulate the backscattering coefficient with high RMSE when using a Gaussian correlation function. However, better simulations are performed with IEM and AIEM by using an exponential correlation function (slightly better fitting with AIEM than IEM). Good agreement was found between the radar data and the simulations using the calibrated version of the IEM modified by Baghdadi (IEM_B), with bias less than 1.0 dB and RMSE less than 2.0 dB. These results confirm that, to date, the IEM modified by Baghdadi (IEM_B) is the most adequate to estimate soil moisture and roughness from SAR data.
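    The model-evaluation metric used throughout this comparison is the RMSE (in dB) between measured and simulated backscattering coefficients; a minimal sketch, with hypothetical σ0 values for illustration only:

```python
import numpy as np

def rmse_db(sigma0_meas, sigma0_sim):
    """RMSE (dB) between measured and model-simulated backscatter
    coefficients, both already expressed in dB."""
    d = np.asarray(sigma0_meas) - np.asarray(sigma0_sim)
    return float(np.sqrt(np.mean(d**2)))

# Hypothetical HH-polarization values (dB), not from the cited dataset.
meas = [-12.1, -9.8, -14.3, -11.0]
sim  = [-10.5, -11.2, -13.0, -12.4]
print(f"RMSE = {rmse_db(meas, sim):.2f} dB")
```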

  14. Applicability of the Gibbs Adsorption Isotherm to the analysis of experimental surface-tension data for ionic and nonionic surfactants.

    Science.gov (United States)

    Martínez-Balbuena, L; Arteaga-Jiménez, Araceli; Hernández-Zapata, Ernesto; Márquez-Beltrán, César

    2017-09-01

    The Gibbs Adsorption Isotherm equation is a two-dimensional analogue of the Gibbs-Duhem equation, and it is one of the cornerstones of interface science. It is also widely used to estimate the surface excess concentration (SEC) for surfactants and other compounds in aqueous solution from surface tension measurements. However, in recent publications some authors have cast doubt on this method. In the present work, we review some of the best available surface tension experimental data, and compare estimations of the SEC, using the Gibbs isotherm method (GIM), to direct measurements reported in the literature. This is done for both nonionic and ionic surfactants, with and without added salt. Our review leads to the conclusion that the GIM is in very solid agreement with experiments, and that it accurately estimates the SEC for surfactant concentrations smaller than the critical micellar concentration (CMC). Copyright © 2017 Elsevier B.V. All rights reserved.
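    The GIM estimate rests on the textbook Gibbs isotherm, Γ = -(1/(nRT)) dγ/d(ln c), where n = 1 for nonionic surfactants and n = 2 for 1:1 ionic surfactants without added salt. A minimal sketch (the surface-tension data points are hypothetical, for illustration only):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def surface_excess(c, gamma, T=298.15, n=1):
    """Surface excess concentration (mol/m^2) from the Gibbs isotherm:
    Gamma = -(1/(n*R*T)) * d(gamma)/d(ln c).
    n = 1 for nonionic surfactants; n = 2 for 1:1 ionic surfactants
    without added salt."""
    slope = np.gradient(gamma, np.log(c))   # dgamma/d(ln c), handles nonuniform spacing
    return -slope / (n * R * T)

# Hypothetical surface-tension data (N/m) below the CMC, illustration only.
c = np.array([1e-4, 2e-4, 5e-4, 1e-3])        # mol/L
gamma = np.array([0.065, 0.060, 0.053, 0.048])
print(surface_excess(c, gamma))
```

    A decreasing γ(ln c) gives a positive Γ, i.e. net adsorption of surfactant at the interface, which is what the direct SEC measurements are compared against.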

  15. Comparative study of OMA applied to experimental and simulated data from an operating Vestas V27 wind turbine

    DEFF Research Database (Denmark)

    Requeson, Oscar Ramirez; Tcherniak, Dmitri; Larsen, Gunner Chr.

    2015-01-01

    When the wind turbine is at standstill it can be treated as a linear time-invariant (LTI) system, and modal analysis requirements are thus fulfilled for the dynamic characterization. Under operation, the system cannot be considered as LTI and must be modelled as a linear periodic time-variant (LPTV) system, which allows for the application of the related theory for such systems. One of these methods is the Coleman transformation, which transforms the vibrations expressed in the blade rotating coordinates to the fixed-ground frame of reference. The application of this transformation, originally from helicopter theory, allows for the conversion of a LPTV system to a LTI system under certain assumptions, among which is the assumption of isotropic rotors. Since rotors are never completely isotropic in real life, this paper presents the application of operational modal analysis together with the Coleman transformation on both experimental data from a full-scale Vestas wind turbine with instrumented blades...

  16. Analysis of the experimental data for impurity-band conduction in Mn-doped InSb

    Energy Technology Data Exchange (ETDEWEB)

    Kajikawa, Yasutomo [Department of Electric and Control Systems Engineering, Interdisciplinary Faculty of Science and Engineering, Shimane University, Matsue (Japan)

    2017-01-15

    The experimental data of temperature-dependent Hall-effect measurements on Mn-doped p-type InSb samples, which exhibit an anomalous sign reversal of the Hall coefficient to negative at low temperatures, have been analyzed on the basis of the nearest-neighbor hopping model in an impurity band. It is shown that the anomalous sign reversal of the Hall coefficient can be well explained by assuming a hopping Hall factor of the form A_hop = (k_B T/J_3) exp(K_NNH T_3/T) with a negative sign of J_3. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
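    The abstract's functional form can be evaluated directly to see why a negative J_3 flips the sign of the hopping Hall factor (and hence of the Hall coefficient). The parameter values below are purely hypothetical, chosen only to illustrate the sign behavior:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def hopping_hall_factor(T, J3, K_NNH, T3):
    """Hopping Hall factor in the form quoted in the abstract:
    A_hop = (k_B*T / J3) * exp(K_NNH * T3 / T).
    J3, K_NNH, T3 below are hypothetical illustration values."""
    return (k_B * T / J3) * np.exp(K_NNH * T3 / T)

T = np.array([20.0, 50.0, 100.0])                    # temperatures, K
A = hopping_hall_factor(T, J3=-1e-3, K_NNH=0.3, T3=30.0)
print(A)   # negative at every temperature because J3 < 0
```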

  17. Technique in analyzing experimental double shock data to infer a solid-solid phase transition within cerium

    Science.gov (United States)

    Cherne, Frank; Jensen, Brian

    2017-06-01

    In the past decade many experiments have been performed looking at various aspects of the dynamic response of cerium metal. Recent experiments probing off-principal-Hugoniot states have been made, and here we present an approach for interpreting the results of these double-shock experiments. Double-shock experiments are difficult to analyze, with the potential of being nearly intractable, due to the construction of the experiments. Using a simple one-dimensional hydrodynamic code, calculations are performed to match the first and second shock states and the times of arrival. Upon matching the velocity time history at the sample-window interface, a shock velocity (Us) was determined from the calculation. A two-state linear Us-up model with a transitional density switch was developed to best model the experimental data set. The best parameter set shows an inflection point around 12-13 GPa, which is near where the α-ɛ phase transition has been observed in static compression experiments.

  18. Wash-off of Sr-90 and Cs-137 from two experimental plots. Model testing using Chernobyl data

    Energy Technology Data Exchange (ETDEWEB)

    Konoplev, A.; Bulgakov, A. [SPA Typhoon, Obninsk (Russian Federation); Hoffman, O.; Thiessen, K. [SENES, Oak Ridge, TN (United States)] [and others

    1996-09-01

    Surface water runoff from contaminated land is one of the major processes responsible for the contamination of water bodies. For example, the large area of land contaminated after the Chernobyl accident has become a continuing source of radionuclide contamination for natural waters and the aquatic ecosystem. Based on data from the Chernobyl accident, the 'Wash-off' scenario was developed to provide an opportunity to test models concerned with the movement of trace contaminants from terrestrial sources to water bodies. In particular, this scenario provides an opportunity for (1) evaluation of the movement of contaminants from soil to water, (2) calculation of the alteration and migration of contaminants in soil over different time scales, (3) increased understanding of contaminant transport at the process level, and (4) development and use of methods for estimation of key parameters. Modelers were provided with descriptions of two experimental plots near the Chernobyl NPP, one using simulated heavy rain (plot HR) and one using snow melt (plot SM). Initial information for plot HR included soil properties; hydrographs of rainfall and runoff dynamics; the time of application of rainfall; rainfall amounts, duration, and intensities; soil moisture content before the application of rainfall; regional data on average monthly precipitation and temperature; recorded information on naturally occurring precipitation between May and October 1986; and chemical forms of radionuclides in the soil of the plots prior to the experiments. Information for plot SM included the soil description and properties, snow storage in the snow melt period of 1988, chemical composition of the snow water, a hydrograph of the runoff dynamics, chemical radionuclide forms in the soil at the end of the experiment, and air and soil temperatures for the plot during the snow melt period. For each experimental plot, modelers were requested to estimate the vertical distribution of radionuclides

  19. Experimental measurement of oil-water two-phase flow by data fusion of electrical tomography sensors and venturi tube

    Science.gov (United States)

    Liu, Yinyan; Deng, Yuchi; Zhang, Maomao; Yu, Peining; Li, Yi

    2017-09-01

    Oil-water two-phase flows are commonly found in the production processes of the petroleum industry. Accurate online measurement of flow rates is crucial to ensure the safety and efficiency of oil exploration and production. A research team from Tsinghua University has developed an experimental apparatus for multiphase flow measurement based on an electrical capacitance tomography (ECT) sensor, an electrical resistance tomography (ERT) sensor, and a venturi tube. This work presents the phase fraction and flow rate measurements of oil-water two-phase flows based on the developed apparatus. The full-range phase fraction can be obtained by combining the ECT sensor and the ERT sensor. By data fusion of the differential pressures measured by the venturi tube and the phase fraction, the total flow rate and single-phase flow rates can be calculated. Dynamic experiments were conducted on the multiphase flow loop in horizontal and vertical pipelines and at various flow rates.
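    The data-fusion step can be sketched with the standard single-phase venturi equation applied to a homogeneous oil-water mixture density. Treating the mixture as homogeneous, and every numerical value below (geometry, densities, phase fraction, discharge coefficient), are illustrative assumptions, not necessarily the method or conditions of the cited work:

```python
import math

def venturi_mass_flow(dp, rho, D, d, Cd=0.98):
    """Mass flow (kg/s) from the standard venturi equation:
    m = Cd * A_t * sqrt(2 * rho * dp / (1 - beta**4)), beta = d/D.
    dp: differential pressure (Pa); rho: fluid density (kg/m^3);
    D, d: pipe and throat diameters (m); Cd: discharge coefficient."""
    beta = d / D
    A_t = math.pi * (d / 2.0) ** 2
    return Cd * A_t * math.sqrt(2.0 * rho * dp / (1.0 - beta**4))

# Hypothetical operating point, illustration only.
alpha_oil = 0.3                                       # oil fraction from ECT/ERT
rho_oil, rho_water = 850.0, 1000.0                    # kg/m^3
rho_mix = alpha_oil * rho_oil + (1 - alpha_oil) * rho_water

m_total = venturi_mass_flow(dp=5e3, rho=rho_mix, D=0.05, d=0.025)
m_oil = m_total * (alpha_oil * rho_oil / rho_mix)     # oil mass-flow split
m_water = m_total - m_oil
```

    The phase fraction from the tomography sensors thus enters twice: once in the mixture density fed to the venturi equation, and once in splitting the total flow into single-phase rates.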

  20. Motive flow calculation through ejectors for transcritical CO2 heat pumps. Comparison between new experimental data and predictive methods

    Science.gov (United States)

    Boccardi, G.; Lillo, G.; Mastrullo, R.; Mauro, A. W.; Pieve, M.; Trinchieri, R.

    2017-01-01

    The revival of CO2 as a refrigerant is due to new restrictions on the use of current refrigerants in developed countries, as a consequence of environmental policy agreements. An optimal design of each part is necessary to overcome the possible penalty in performance, and the use of ejectors instead of throttling valves can improve the performance. Especially for applications such as CO2 heat pumps for space heating, the use of ejectors has been little investigated. The data collected in a cooperation project between ENEA (C.R. Casaccia) and the Federico II University of Naples have been used to experimentally characterize several ejectors in terms of motive mass flow rate, both under transcritical CO2 conditions and otherwise. A statistical comparison is presented in order to assess the reliability of predictive methods available in the open literature for choked flow conditions.

  1. Lessons from the Large Hadron Collider for model-based experimentation : the concept of a model of data acquisition and the scope of the hierarchy of models

    NARCIS (Netherlands)

    Karaca, Koray

    2017-01-01

    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data

  2. Analysis of the ecotoxicity data submitted within the framework of the REACH Regulation. Part 3. Experimental sediment toxicity assays.

    Science.gov (United States)

    Cesnaitis, Romanas; Sobanska, Marta A; Versonnen, Bram; Sobanski, Tomasz; Bonnomet, Vincent; Tarazona, Jose V; De Coen, Wim

    2014-03-15

    For the first REACH registration deadline, companies have submitted registrations with relevant hazard and exposure information for substances at the highest tonnage level (above 1000 tonnes per year). At this tonnage level, information on the long-term toxicity of a substance to sediment organisms is required. There are a number of available test guidelines developed and accepted by various national/international organisations, which can be used to investigate long-term toxicity to sediment organisms. However, instead of testing, registrants may also use other options to address toxicity to sediment organisms, e.g. the weight-of-evidence approach, grouping of substances and read-across approaches, as well as substance-tailored exposure-driven testing. The current analysis of the data provided in the ECHA database focuses on the test methods applied and the test organisms used in the experimental studies to assess long-term toxicity to sediment organisms. The main guidelines used for the testing of substances registered under REACH are the OECD guidelines and the OSPAR Protocols on Methods for the Testing of Chemicals used in the Offshore Oil Industry: "Part A: A Sediment Bioassay using an Amphipod Corophium sp.", which explains why one of the most commonly used test organisms is the marine amphipod Corophium sp. In total, testing results with at least 40 species from seven phyla are provided in the database. However, it can be concluded that the ECHA database does not contain enough experimental data on toxicity to sediment organisms to be used extensively by the scientific community (e.g. for development of non-testing methods to predict hazards to sediment organisms). © 2013.

  3. Uncertainty Quantification Analysis of Both Experimental and CFD Simulation Data of a Bench-scale Fluidized Bed Gasifier

    Energy Technology Data Exchange (ETDEWEB)

    Shahnam, Mehrdad [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Gel, Aytekin [ALPEMI Consulting, LLC, Phoenix, AZ (United States); Subramaniyan, Arun K. [GE Global Research Center, Niskayuna, NY (United States); Musser, Jordan [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Dietiker, Jean-Francois [West Virginia Univ. Research Corporation, Morgantown, WV (United States)

    2017-10-02

    Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the most suited approach for UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis for forward propagation of uncertainties was performed, and results show that an increase in steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. Another contribution, in addition to the UQ analysis, is an optimization-based approach to identify the next best set of experimental samples, should the possibility for additional experiments arise. Hence, the surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results.
As part of Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows
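    The forward-propagation idea described above can be sketched by pushing samples of an uncertain operating factor through a surrogate model. The linear "surrogate" below is purely hypothetical; it only mimics the reported trend (H2 rises and CO falls with steam-to-oxygen ratio):

```python
import numpy as np

# Sample the uncertain operating factor (toy uniform range, illustration only).
rng = np.random.default_rng(0)
steam_o2 = rng.uniform(0.8, 1.2, 10_000)      # steam-to-oxygen ratio samples

# Hypothetical linear surrogate for syngas mole fractions.
h2 = 0.20 + 0.10 * (steam_o2 - 1.0)
co = 0.30 - 0.08 * (steam_o2 - 1.0)

# Propagated uncertainty: mean and spread of the model outputs.
print(f"H2 mole fraction: {h2.mean():.3f} +/- {h2.std():.3f}")
print(f"CO mole fraction: {co.mean():.3f} +/- {co.std():.3f}")
```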

  4. maxdLoad2 and maxdBrowse: standards-compliant tools for microarray experimental annotation, data management and dissemination.

    Science.gov (United States)

    Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy

    2005-11-03

    maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture; including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes contents of maxdLoad2 databases accessible via web-browser, the command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web-server and presents multiple interfaces to the contents of maxd databases. maxd

  5. maxdLoad2 and maxdBrowse: standards-compliant tools for microarray experimental annotation, data management and dissemination

    Directory of Open Access Journals (Sweden)

    Nashar Karim

    2005-11-01

    Full Text Available Abstract Background maxdLoad2 is a relational database schema and Java® application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture; including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes contents of maxdLoad2 databases accessible via web-browser, the command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. Results maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: • the flexibility of the meta-data that can be captured, • the tools provided for importing data from spreadsheets and other tabular representations, • the tools provided for the automatic creation of structured documents, • the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web-server and presents

  6. Understanding the influence of biofilm accumulation on the hydraulic properties of soils: a mechanistic approach based on experimental data

    Science.gov (United States)

    Carles Brangarí, Albert; Sanchez-Vila, Xavier; Freixa, Anna; Romaní, Anna M.; Fernàndez-Garcia, Daniel

    2017-04-01

    The distribution, amount, and characteristics of biofilms and their components govern the capacity of soils to let water through, to transport solutes, and the reactions occurring. Therefore, unraveling the relationship between microbial dynamics and the hydraulic properties of soils is of concern for the management of natural systems and many technological applications. However, the combined complexity of the microbial communities and the geochemical processes entailed by them means that the phenomenon of bioclogging remains poorly understood. This highlights the need for a better understanding of the microbial components such as live and dead bacteria and extracellular polymeric substances (EPS), as well as of their spatial distribution. This work tries to shed some light on these issues, providing experimental data and a new mechanistic model that predicts the variably saturated hydraulic properties of bio-amended soils based on these data. We first present a long-term laboratory infiltration experiment that aims at studying the temporal variation of selected biogeochemical parameters along the infiltration path. The setup consists of a 120-cm-high soil tank instrumented with an array of sensors plus soil and liquid samplers. Sensors continuously measured a wide range of parameters, such as volumetric water content, electrical conductivity, temperature, water pressure, soil suction, dissolved oxygen, and pH. Samples were kept for chemical and biological analyses. Results indicate that: i) biofilm is present at all depths, denoting the potential for deep bioclogging, ii) the redox conditions profile shows different stages, indicating that the community was adapted to changing redox conditions, iii) bacterial activity, richness and diversity also exhibit zonation with depth, and iv) the hydraulic properties of the soil experienced significant changes as biofilm proliferated. Based on experimental evidence, we propose a tool to predict changes in the

  7. Final Report for NFE-07-00912: Development of Model Fuels Experimental Engine Data Base & Kinetic Modeling Parameter Sets

    Energy Technology Data Exchange (ETDEWEB)

    Bunting, Bruce G [ORNL

    2012-10-01

    The automotive and engine industries are in a period of very rapid change, driven by new emission standards, new types of aftertreatment, new combustion strategies, the introduction of new fuels, and the drive for increased fuel economy and efficiency. The rapid pace of these changes has increased the need for modeling of engine combustion and performance in order to shorten product design and introduction cycles. New combustion strategies include homogeneous charge compression ignition (HCCI), partially premixed combustion compression ignition (PCCI), and dilute low-temperature combustion, which are being developed for lower emissions and improved fuel economy. New fuels include bio-fuels such as ethanol or bio-diesel, drop-in bio-derived fuels, and fuels derived from new crude oil sources such as gas-to-liquids, coal-to-liquids, oil sands, oil shale, and wet natural gas. Kinetic modeling of the combustion process for these new combustion regimes and fuels is necessary to allow modeling and performance assessment for engine design purposes. In the research covered by this CRADA, ORNL developed and supplied experimental data on engine performance with new fuels and new combustion strategies, along with interpretation and analysis of the data and consulting, to Reaction Design, Inc. (RD). RD performed additional analysis of the data in order to extract important parameters and to confirm engine and kinetic models. The data generated were generally published to make them available to the engine and automotive design communities and to the Reaction Design Model Fuels Consortium (MFC).

  8. Effect of publicly reporting performance data of medicine use on injection use: a quasi-experimental study.

    Science.gov (United States)

    Wang, Xuan; Tang, Yuqing; Zhang, Xiaopeng; Yin, Xi; Du, Xin; Zhang, Xinping

    2014-01-01

    Inappropriate prescribing of pharmaceuticals, particularly injections, not only affects the quality of medical care but also increases medical expenses. Publicly reporting performance data on medical care is becoming a common health policy tool for supervising medical quality. To our knowledge, few studies have applied public reporting to medicine use. This study introduces public reporting in the field of medicine use and evaluates the effect of publicly reporting performance data of medicine use on the use of injections. The research sites were 20 primary healthcare institutions in Q City, Hubei. By matching, the institutions were divided into an intervention group and a control group. A quasi-experimental design was applied: in the intervention group, the performance data of medicine use were publicly reported. The injection prescribing rates of the two groups before and after the intervention were measured and compared. The difference-in-difference method and logistic regression were employed to estimate the effect of public reporting on injection use. Public reporting led to a reduction of approximately 4% in the injection prescribing rate four months after the intervention (OR = 0.96; 95% CI: 0.94, 0.97). The intervention effect varied from month to month and was strongest in the second month after the intervention (OR = 0.90; 95% CI: 0.89, 0.92). In general, publicly reporting performance data of medicine use may have positive effects on injection use. Further research is needed to investigate the mechanism by which public reporting influences injection use, and comprehensive measures are necessary to promote the rational use of injections.
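    The 2×2 difference-in-difference logic described in this record can be sketched numerically; the function and prescribing rates below are a generic illustration with invented numbers, not the study's data:

    ```python
    def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """Classic 2x2 difference-in-differences: the change in the intervention
        group minus the change in the control group nets out shared time trends."""
        return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

    # Invented injection prescribing rates (fraction of prescriptions)
    effect = did_estimate(treat_pre=0.55, treat_post=0.47,
                          ctrl_pre=0.54, ctrl_post=0.50)
    print(f"estimated intervention effect: {effect:+.2f}")  # -0.04
    ```

    In the study itself the effect was estimated with logistic regression on prescription-level data, so the published figures are odds ratios rather than raw rate differences.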

  9. Effect of publicly reporting performance data of medicine use on injection use: a quasi-experimental study.

    Directory of Open Access Journals (Sweden)

    Xuan Wang

    Full Text Available BACKGROUND: Inappropriate prescribing of pharmaceuticals, particularly injections, not only affects the quality of medical care but also increases medical expenses. Publicly reporting performance data on medical care is becoming a common health policy tool for supervising medical quality. To our knowledge, few studies have applied public reporting to medicine use. This study introduces public reporting in the field of medicine use and evaluates the effect of publicly reporting performance data of medicine use on the use of injections. METHODS: The research sites were 20 primary healthcare institutions in Q City, Hubei. By matching, the institutions were divided into an intervention group and a control group. A quasi-experimental design was applied: in the intervention group, the performance data of medicine use were publicly reported. The injection prescribing rates of the two groups before and after the intervention were measured and compared. The difference-in-difference method and logistic regression were employed to estimate the effect of public reporting on injection use. RESULTS: Public reporting led to a reduction of approximately 4% in the injection prescribing rate four months after the intervention (OR = 0.96; 95% CI: 0.94, 0.97). The intervention effect varied from month to month and was strongest in the second month after the intervention (OR = 0.90; 95% CI: 0.89, 0.92). CONCLUSIONS: In general, publicly reporting performance data of medicine use may have positive effects on injection use. Further research is needed to investigate the mechanism by which public reporting influences injection use, and comprehensive measures are necessary to promote the rational use of injections.

  10. PVTxy properties of CO2 mixtures relevant for CO2 capture, transport and storage: Review of available experimental data and theoretical models

    OpenAIRE

    Li, Hailong; Jakobsen, Jana P.; Wilhelmsen, Øivind; Yan, Jinyue

    2011-01-01

    The knowledge about pressure–volume–temperature–composition (PVTxy) properties plays an important role in the design and operation of many processes involved in CO2 capture and storage (CCS) systems. A literature survey was conducted on both the available experimental data and the theoretical models associated with the thermodynamic properties of CO2 mixtures within the operation window of CCS. Some gaps were identified between available experimental data and requirements of the system design and ...

  11. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography.

    Science.gov (United States)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-05-01

    An iterative reconstruction method has been previously reported by the authors of this paper. However, that method was demonstrated using numerical simulations only, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave induced thermoacoustic tomography. Most existing reconstruction methods must incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue solely from the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the method. Using the estimated velocity distribution, the target can be reconstructed with better shape and higher image contrast than when a homogeneous velocity distribution is assumed. The distortions caused by the acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the microwave absorption image without increasing system complexity.
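    The simultaneous algebraic reconstruction technique (SART) used in the velocity-estimation step can be illustrated on a toy linear system; this is a generic dense-matrix sketch in the Andersen-Kak normalization, not the authors' implementation:

    ```python
    import numpy as np

    def sart(A, b, n_iters=200, relax=1.0):
        """Minimal SART: x <- x + relax * V^-1 A^T W (b - A x), where W and V
        hold the inverse row and column sums of A (Andersen & Kak, 1984)."""
        row_sums = A.sum(axis=1)   # one weight per measurement (e.g., per ray)
        col_sums = A.sum(axis=0)   # one weight per unknown (e.g., per pixel)
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            x += relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
        return x

    # Toy travel-time system: A holds path lengths, b the measured times
    A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 1.0]])
    slowness_true = np.array([0.5, 1.5])
    x = sart(A, A @ slowness_true)
    print(x)  # converges toward [0.5, 1.5]
    ```

    In the tomography setting, A would hold ray path lengths through the velocity grid and b the measured arrival times, with the fast marching method supplying the ray paths.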

  12. Validation of Geant4 on Proton Transportation for Thick Absorbers: Study Based on Tschalär Experimental Data

    Science.gov (United States)

    Hoff, Gabriela; Denyak, Valeriy; Schelin, Hugo R.; Paschuk, Sergei

    2017-02-01

    Imaging techniques using protons as incident particles are currently being developed to replace X-ray computed tomography and nuclear magnetic resonance methods in proton therapy. They deal with relatively thick targets, such as the human head or trunk, in which protons lose a significant part of their energy yet retain enough to exit the target. The physical quantities important in proton imaging are the kinetic energy, angle, and coordinates of the protons emerging from an absorber material. Many research groups now use the Geant4 toolkit to simulate proton imaging devices. Most of the available publications validating Geant4 models concern thin absorbers or thick absorbers in Bragg peak studies, which are not consistent with the boundary conditions of proton imaging. The main objective of this work is to evaluate the kinetic energy spectrum of protons emerging from homogeneous absorber slabs and compare it with the experimental results published by Tschalär and Maccabee in 1970. Different models (standard and detailed) available in Geant4 (version 9.6.p03) are explored, taking into account their accuracy and computational performance. This paper presents a validation for protons with incident kinetic energies of 19.68 MeV and 49.10 MeV. The validation results from the kinetic energy spectra of emerging protons show that: (i) there are differences between the reference data and the data produced by the different processes invoked for transportation, and (ii) the validation energies are sensitive to sub-shell processes.

  13. Determination of heat transfer parameters by use of finite integral transform and experimental data for regular geometric shapes

    Science.gov (United States)

    Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad

    2017-12-01

    This article offers a study on the estimation of heat transfer parameters (heat transfer coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (infinite slab, infinite cylinder, and sphere). Analytical solutions are broadly used to determine these parameters experimentally. Here, the method of Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed under different conditions, such as in a highly agitated water medium (T = 52 °C) and in air (T = 25 °C). Then, with the known slope of the temperature ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical specimen, the thermal diffusivity and heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10^-5 and 3.42 × 10^-5 for a slab, 8.367 × 10^-5 and 3.41 × 10^-5 for a cylindrical rod, and 8.385 × 10^-5 and 3.40 × 10^-5 m^2/s for a spherical shape, respectively. The results show close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
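    The slope-based estimation described in this record follows from the one-term series solution of transient conduction, θ* ≈ C1·exp(−λ1²·α·t/L²): the slope m of ln θ* versus time gives α = −m·L²/λ1². A sketch with synthetic data (the geometry, λ1 = π/2 for a slab at large Biot number, and the numbers below are illustrative assumptions, not the study's measurements):

    ```python
    import numpy as np

    def diffusivity_from_slope(t, theta, L, lam1):
        """Fit ln(theta*) vs t; the one-term solution gives slope
        m = -lam1^2 * alpha / L^2, hence alpha = -m * L^2 / lam1^2."""
        m = np.polyfit(t, np.log(theta), 1)[0]   # slope of the linear fit
        return -m * L**2 / lam1**2

    # Synthetic check: half-thickness L, alpha chosen near aluminium's value
    L, alpha_true, lam1 = 0.01, 8.4e-5, np.pi / 2
    t = np.linspace(5.0, 60.0, 20)
    theta = 1.1 * np.exp(-lam1**2 * alpha_true * t / L**2)  # exact one-term decay
    print(diffusivity_from_slope(t, theta, L, lam1))  # recovers ~8.4e-5 m^2/s
    ```

    With real temperature traces, only the late-time portion of the curve (where the one-term approximation holds) should enter the fit.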

  14. Isobaric (vapor + liquid) equilibria of 1-ethyl-3-methylimidazolium ethylsulfate plus (propionaldehyde or valeraldehyde): Experimental data and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Victor H. [School of Chemical Engineering, University of Campinas (UNICAMP), Av. Albert Einstein 500, 13083-852 Campinas, SP (Brazil); Mattedi, Silvana [Chemical Engineering Department, Polytechnic School, Federal University of Bahia (UFBA), R. Aristides Novis 2, 40210-630 Salvador, BA (Brazil); Aznar, Martin, E-mail: maznar@feq.unicamp.b [School of Chemical Engineering, University of Campinas (UNICAMP), Av. Albert Einstein 500, 13083-852 Campinas, SP (Brazil)

    2011-06-15

    Research highlights: We report density, refractive index, and VLE for (propionaldehyde or valeraldehyde) + [emim][EtSO4]. The Peng-Robinson + Wong-Sandler + COSMO-SAC model was used to predict density and VLE. The densities were predicted with deviations below 2.3%. The experimental VLE was predicted with deviations below 1.6%. - Abstract: This paper reports the density, refractive index, and (vapor + liquid) equilibria (VLE) for the binary systems {aldehyde + 1-ethyl-3-methylimidazolium ethylsulfate ([emim][EtSO4])}: {propionaldehyde + [emim][EtSO4]} and {valeraldehyde + [emim][EtSO4]}. The uncertainties of the temperature, pressure, and composition measurements for the phase equilibria are ±0.1 K, ±0.01 kPa, and ±0.0004, respectively. A qualitative analysis of the variation of the properties with changes in solvent and temperature was performed. The Peng-Robinson equation of state (PR EoS), coupled with the Wong-Sandler (WS) mixing rule, is used to describe the experimental data. Three different models were used to calculate activity coefficients: NRTL, UNIQUAC, and COSMO-SAC. Since the predictive COSMO-SAC liquid activity coefficient model is used in the Wong-Sandler mixing rule, the resulting thermodynamic model is completely predictive. The predictions for the density and for the (vapor + liquid) equilibria show deviations lower than 2.3% and 1.6%, respectively. The (vapor + liquid) equilibria predictions describe the propionaldehyde system well, but the valeraldehyde system only qualitatively.

  15. Imagining is not doing but involves specific motor commands: a review of experimental data related to motor inhibition

    Directory of Open Access Journals (Sweden)

    Aymeric eGuillot

    2012-09-01

    Full Text Available There is now compelling evidence that motor imagery (MI and actual movement share common neural substrate. However, the question of how MI inhibits the transmission of motor commands into the efferent pathways in order to prevent any movement is largely unresolved. Similarly, little is known about the nature of the electromyographic activity that is apparent during MI. In addressing these gaps in the literature, the present paper argues that MI includes motor execution commands for muscle contractions which are blocked at some level of the motor system by inhibitory mechanisms. We first assemble data from neuroimaging studies that demonstrate that the neural networks mediating MI and motor performance are not totally overlapping, thereby highlighting potential differences between MI and actual motor execution. We then review MI data indicating the presence of subliminal muscular activity reflecting the intrinsic characteristics of the motor command as well as increased corticomotor excitability. The third section not only considers the inhibitory mechanisms involved during MI but also examines how the brain resolves the problem of issuing the motor command for action while supervising motor inhibition when people engage in voluntary movement during MI. The last part of the paper draws on imagery research in clinical contexts to suggest that some patients move while imagining an action, although they are not aware of such movements. In particular, experimental data from amputees as well as from patients with Parkinson’s disease are discussed. We also review recent studies based on comparing brain activity in tetraplegic patients with that from healthy matched controls that provide insights into inhibitory processes during MI. We conclude by arguing that based on available evidence, a multifactorial explanation of motor inhibition during MI is warranted.

  16. Imagining is Not Doing but Involves Specific Motor Commands: A Review of Experimental Data Related to Motor Inhibition

    Science.gov (United States)

    Guillot, Aymeric; Di Rienzo, Franck; MacIntyre, Tadhg; Moran, Aidan; Collet, Christian

    2012-01-01

    There is now compelling evidence that motor imagery (MI) and actual movement share common neural substrate. However, the question of how MI inhibits the transmission of motor commands into the efferent pathways in order to prevent any movement is largely unresolved. Similarly, little is known about the nature of the electromyographic activity that is apparent during MI. In addressing these gaps in the literature, the present paper argues that MI includes motor execution commands for muscle contractions which are blocked at some level of the motor system by inhibitory mechanisms. We first assemble data from neuroimaging studies that demonstrate that the neural networks mediating MI and motor performance are not totally overlapping, thereby highlighting potential differences between MI and actual motor execution. We then review MI data indicating the presence of subliminal muscular activity reflecting the intrinsic characteristics of the motor command as well as increased corticomotor excitability. The third section not only considers the inhibitory mechanisms involved during MI but also examines how the brain resolves the problem of issuing the motor command for action while supervising motor inhibition when people engage in voluntary movement during MI. The last part of the paper draws on imagery research in clinical contexts to suggest that some patients move while imagining an action, although they are not aware of such movements. In particular, experimental data from amputees as well as from patients with Parkinson’s disease are discussed. We also review recent studies based on comparing brain activity in tetraplegic patients with that from healthy matched controls that provide insights into inhibitory processes during MI. We conclude by arguing that based on available evidence, a multifactorial explanation of motor inhibition during MI is warranted. PMID:22973214

  17. Soot Particle Optical Properties: a Comparison between Numerical Calculations and Experimental Data Collected during the Boston College Experiment

    Science.gov (United States)

    Sharma, N.; Mazzoleni, C.; China, S.; Dubey, M. K.; Onasch, T. B.; Cross, E. S.; Davidovits, P.; Wrobel, W.; Ahern, A.; Schwarz, J. P.; Spackman, J. R.; Lack, D. A.; Massoli, P.; Freedman, A.; Olfert, J. S.; Freitag, S.; Sedlacek, A. J.; Cappa, C. D.; Subramanian, R.

    2010-12-01

    A black carbon instrument inter-comparison study was conducted in July 2008 at Boston College to measure the optical, physical, and chemical properties of laboratory-generated soot under controlled conditions [1]. The physical, chemical, and optical properties were measured on size-selected particles for: (1) nascent soot particles; (2) nascent, denuded soot particles; (3) soot particles coated with sulfuric acid or DOS (dioctyl sebacate) across a range of coating thicknesses; and (4) coated and then denuded soot particles. The instruments involved in the inter-comparison study fell into two broad categories, mass-based and optically-based; 7 mass-based and 9 optically-based instruments were deployed. Absorption, scattering, and extinction measurements were carried out in combination with mass-based instruments in order to obtain absorption, scattering, and extinction coefficients for coated and denuded soot particles as a function of their mass, size, and coating thickness. Particle samples were also collected on Nuclepore filters for Scanning Electron Microscopy (SEM) analysis. The SEM images elucidated the changes in particle morphology upon coating and denuding, and were used to determine morphological parameters of single soot aggregates (e.g., monomer number and diameter) for the numerical estimation of aerosol optical properties. With the data collected during the experiment, we carry out a comparative study of the experimentally obtained optical properties of soot particles against those calculated with the two most commonly used numerical approximations (Rayleigh-Debye-Gans (RDG) theory and Mie theory), thereby assessing the degree of agreement between theoretical models and experimental results. The laboratory optical, mass, size, and morphological data can be used to elucidate the impact of these parameters on radiative forcing by atmospheric soot [2, 3]. References: 1. Cross, E. S
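    In the RDG approximation referenced in this record, an aggregate's absorption cross-section is simply the number of monomers times the Rayleigh absorption of a single monomer. A sketch with illustrative values (the monomer radius, wavelength, and refractive index below are common soot-literature choices, not the campaign's measured values; note the m = n − ik sign convention used for E(m)):

    ```python
    import math

    def rdg_absorption(n_monomers, a, wavelength, m):
        """RDG absorption for an aggregate of N monomers of radius a:
        C_abs = N * 4*pi*k*a^3 * E(m), with E(m) = -Im[(m^2-1)/(m^2+2)]
        and m = n - i*k (so E(m) comes out positive)."""
        k_wave = 2.0 * math.pi / wavelength
        E = -((m**2 - 1.0) / (m**2 + 2.0)).imag
        return n_monomers * 4.0 * math.pi * k_wave * a**3 * E

    # Illustrative: 50 monomers of 15 nm radius at 532 nm, m = 1.95 - 0.79i
    C_abs = rdg_absorption(50, 15e-9, 532e-9, complex(1.95, -0.79))
    print(f"C_abs = {C_abs:.2e} m^2")
    ```

    Scattering in RDG additionally depends on the aggregate structure factor (fractal dimension and radius of gyration), which is where the SEM-derived morphological parameters enter.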

  18. Impact of diagenetic alteration on sea urchin (Echinodermata) magnesium isotope signatures: Comparison of experimental and fossil data

    Science.gov (United States)

    Riechelmann, Sylvia; Mavromatis, Vasileios; Buhl, Dieter; Dietzel, Martin; Hoffmann, René; Jöns, Niels; Eisenhauer, Anton; Immenhauser, Adrian

    2017-04-01

    Due to their thermodynamically unstable high-Mg calcite mineralogy, the skeletal elements of echinoderms are often regarded as unreliable archives of Phanerozoic marine climate dynamics. Nevertheless, traditional and non-traditional isotope and elemental proxy data from echinoderms have been used to reconstruct global changes in palaeoseawater composition (Sandberg cycles). Recently, these data and their interpretation have been controversially discussed in the context of ancient seawater properties. This paper tests the sensitivity of echinoderm skeletal hardparts, specifically sea urchin spines, to diagenetic alteration based on magnesium isotope data. We apply a dual approach: (i) performing hydrothermal alteration experiments using meteoric, marine, and burial reactive fluids; and (ii) comparing these data with fossil sea urchin hardparts. The degree of alteration of experimentally altered and fossil sea urchin hardparts is assessed by a combination of optical (fluorescence, cathodoluminescence (CL), scanning electron microscopy (SEM)) and geochemical tools (elemental distribution; carbon, oxygen, and magnesium isotopes). Although the initial fluid chemistry of the experiments did not allow the detection of diagenetic overprint by elemental distribution (Fe, Mn) and cathodoluminescence, other tools such as fluorescence, SEM, δ18O, Mg concentration, and δ26Mg reveal alteration effects, which respond to differences in fluid temperature, fluid chemistry, and experiment duration. In experiments run under meteoric conditions with no Mg in the initial fluid, the solid becomes enriched in the heavier Mg isotope due to preferential dissolution of the lighter one. In contrast, initial burial and marine fluids have medium to high Mg concentrations; there, the Mg concentration and the δ26Mg values of the altered sea urchin spines increase. Fossil sea urchin hardparts partly display very strong diagenetic overprint, as observed by their elemental distribution

  19. Shrinkage and porosity evolution during air-drying of non-cellular food systems: Experimental data versus mathematical modelling.

    Science.gov (United States)

    Nguyen, Thanh Khuong; Khalloufi, Seddik; Mondor, Martin; Ratti, Cristina

    2018-01-01

    In the present work, the impact of glass transition on the shrinkage of non-cellular food systems (NCFS) during air-drying is assessed from experimental data and from the interpretation of a 'shrinkage' function involved in a mathematical model. Two NCFS made from a mixture of water/maltodextrin/agar (w/w/w: 1/0.15/0.015) were created using maltodextrins with dextrose equivalent 19 (MD19) or 36 (MD36). The NCFS made with MD19 had a Tg 30 °C higher than those with MD36, indicating that, during drying, the NCFS with MD19 would pass from the rubbery to the glassy state sooner than the NCFS with MD36, for which glass transition only happens close to the end of drying. For the two NCFS, porosity and volume reduction as a function of moisture content were captured with high accuracy by the previously developed mathematical models. No significant differences in porosity or in maximum shrinkage between the two samples during drying were observed, nor any change in the slope of the shrinkage curve as a function of moisture content. These results indicate that glass transition alone is not a determinant factor in changes of porosity or volume during air-drying. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-01-01

    Full Text Available Milling cutters are important cutting tools used in milling machines to perform milling operations, and they are prone to wear and subsequent failure. In this paper, a practical new hybrid model is proposed to predict the milling tool wear in a regular cut, as well as entry and exit cuts, of a milling tool. The model is based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. The optimization mechanism handles the parameter setting of the MARS training procedure, which significantly influences the regression accuracy. An ABC-MARS-based model was thus successfully used to predict the milling tool flank wear (output variable) as a function of the following input variables: the duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a coefficient of determination of 0.94 was obtained. The ABC-MARS-based model's goodness of fit to experimental data confirmed its good performance. The model also allowed us to ascertain the most influential parameters on the milling tool flank wear, with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented.
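    The artificial bee colony mechanism used here to tune the MARS hyperparameters can be sketched generically. The version below minimizes a toy function and simplifies the onlooker phase to a rank-based choice; it is an illustrative sketch under those assumptions, not the paper's implementation:

    ```python
    import random

    def abc_minimize(f, bounds, n_bees=20, n_iters=100, limit=10, seed=1):
        """Minimal artificial bee colony: employed/onlooker bees perturb food
        sources toward random neighbours; scouts re-seed sources that failed
        to improve `limit` times in a row."""
        rng = random.Random(seed)
        dim = len(bounds)
        new_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
        sources = [new_source() for _ in range(n_bees)]
        cost = [f(s) for s in sources]
        trials = [0] * n_bees

        def try_improve(i):
            j, k = rng.randrange(n_bees), rng.randrange(dim)
            cand = sources[i][:]
            cand[k] += rng.uniform(-1.0, 1.0) * (sources[i][k] - sources[j][k])
            cand[k] = min(max(cand[k], bounds[k][0]), bounds[k][1])
            fc = f(cand)
            if fc < cost[i]:
                sources[i], cost[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1

        for _ in range(n_iters):
            for i in range(n_bees):                 # employed bees
                try_improve(i)
            ranked = sorted(range(n_bees), key=cost.__getitem__)
            for i in ranked[:n_bees // 2]:          # onlookers favour good sources
                try_improve(i)
            for i in range(n_bees):                 # scouts
                if trials[i] > limit:
                    sources[i] = new_source()
                    cost[i], trials[i] = f(sources[i]), 0

        best = min(range(n_bees), key=cost.__getitem__)
        return sources[best], cost[best]

    # Toy use: minimise the 2-D sphere function on [-5, 5]^2
    x_best, f_best = abc_minimize(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 2)
    print(x_best, f_best)  # f_best close to 0
    ```

    In the hybrid model, f would instead be a cross-validated MARS error evaluated at a candidate hyperparameter setting, with bounds covering the admissible hyperparameter ranges.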

  1. Flagellar swimming in viscoelastic fluids: role of fluid elastic stress revealed by simulations based on experimental data.

    Science.gov (United States)

    Li, Chuanbin; Qin, Boyang; Gopinath, Arvind; Arratia, Paulo E; Thomases, Becca; Guy, Robert D

    2017-10-01

    Many important biological functions depend on microorganisms' ability to move in viscoelastic fluids such as mucus and wet soil. The effects of fluid elasticity on motility remain poorly understood, partly because the swimmer strokes depend on the properties of the fluid medium, which obfuscates the mechanisms responsible for observed behavioural changes. In this study, we use experimental data on the gaits of Chlamydomonas reinhardtii swimming in Newtonian and viscoelastic fluids as inputs to numerical simulations that decouple the swimmer gait and fluid type in order to isolate the effect of fluid elasticity on swimming. In viscoelastic fluids, cells employing the Newtonian gait swim faster but generate larger stresses and use more power, and as a result the viscoelastic gait is more efficient. Furthermore, we show that fundamental principles of swimming based on viscous fluid theory miss important flow dynamics: fluid elasticity provides an elastic memory effect that increases both the forward and backward speeds, and (unlike purely viscous fluids) larger fluid stress accumulates around flagella moving tangent to the swimming direction, compared with the normal direction. © 2017 The Author(s).

  2. Bentonite swelling pressure in strong NaCl solutions. Correlation of model calculations to experimentally determined data

    Energy Technology Data Exchange (ETDEWEB)

    Karnland, O. [Clay Technology, Lund (Sweden)

    1998-01-01

    A number of quite different quantitative models of swelling pressure in bentonite clay have been proposed. This report discusses several models that may also be applicable to saline conditions. A discrepancy between calculated and measured values was noticed for all models under brine conditions: in general, the models predicted lower swelling pressures than were experimentally found. An osmotic component of the clay/water system is proposed in order to improve on the previous conservative use of the thermodynamic model. It is proposed to calculate this osmotic component from the clay cation exchange capacity and Donnan equilibrium. Calculations made with this approach showed considerably better correlation with literature laboratory data than calculations made with the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved model predicts a substantial bentonite swelling pressure even in a saturated sodium chloride solution if the density of the system is sufficiently high. In practice this means that the buffer in a KBS-3 repository will give rise to an acceptable swelling pressure, but that the positive effects of mixing bentonite into a backfill material will be lost if the system is exposed to brines. (orig.). 14 refs.
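    The proposed osmotic component can be sketched with an ideal Donnan balance: the clay's fixed charge (derived from the cation exchange capacity) forces an excess of ions into the pore water, and the resulting concentration difference gives an osmotic pressure. All parameter values below are illustrative, and activity coefficients are set to 1 (a strong simplification at brine strength):

    ```python
    import math

    R = 8.314  # gas constant, J/(mol*K)

    def donnan_pressure(c_clay, c_ext, T=298.15):
        """Ideal Donnan osmotic pressure for a 1:1 salt, in Pa.
        c_clay: fixed-charge concentration in the pore water (mol/L, monovalent),
                obtained from CEC and dry density in a full model.
        c_ext:  external salt concentration (mol/L).
        Internal ions satisfy c_plus * c_minus = c_ext**2 and
        c_plus - c_minus = c_clay; then pi = R*T*(c_plus + c_minus - 2*c_ext)."""
        c_minus = (-c_clay + math.sqrt(c_clay**2 + 4.0 * c_ext**2)) / 2.0
        c_plus = c_minus + c_clay
        return R * T * (c_plus + c_minus - 2.0 * c_ext) * 1000.0  # mol/L -> mol/m^3

    # Illustrative: ~2 mol/L fixed charge against saturated NaCl (~5.3 mol/L)
    print(f"{donnan_pressure(2.0, 5.3) / 1e6:.2f} MPa")
    ```

    Note how the pressure stays nonzero even at salt saturation: the fixed charge always creates an internal ion excess, consistent with the report's prediction of a substantial swelling pressure at sufficiently high density.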

  3. Bentonite swelling pressure in strong NaCl solutions. Correlation between model calculations and experimentally determined data

    Energy Technology Data Exchange (ETDEWEB)

    Karnland, O. [Clay Technology, Lund (Sweden)

    1997-12-01

    A number of quite different quantitative models of swelling pressure in bentonite clay have been proposed by different researchers over the years. The present report examines some of the models that may also be applicable to saline conditions. A discrepancy between calculated and measured values was noticed for all models under brine conditions: in general, the models predicted lower swelling pressures than were experimentally found. An osmotic component of the clay/water system is proposed in order to improve on the previous conservative use of the thermodynamic model. It is proposed to calculate this osmotic component from the clay cation exchange capacity and Donnan equilibrium. Calculations made with this approach showed considerably better correlation with literature laboratory data than calculations made with the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved thermodynamic model predicts substantial bentonite swelling pressures even in saturated sodium chloride solution if the density of the system is high enough. In practice, the model predicts a substantial swelling pressure for the buffer in a KBS-3 repository if the system is exposed to brines, but the positive effects of mixing bentonite into a backfill material will be lost, since the available compaction technique does not give a sufficiently high bentonite density. 37 refs, 15 figs

  4. Probabilistic evidential assessment of gunshot residue particle evidence (Part II): Bayesian parameter estimation for experimental count data.

    Science.gov (United States)

    Biedermann, A; Bozza, S; Taroni, F

    2011-03-20

    Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models, in the form of Bayesian networks, address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are given primary attention because they allow the scientist to combine their prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics such as the sensitivity of the likelihood ratio to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR). Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
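The flavor of Bayesian parameter estimation for count data described here can be sketched with the standard conjugate Gamma-Poisson update; the prior parameters and particle counts below are invented for illustration, not taken from the paper.

```python
def gamma_poisson_update(alpha0, beta0, counts):
    """Conjugate Bayesian update for a Poisson rate with a Gamma prior.

    Prior: rate ~ Gamma(alpha0, beta0), beta0 being the rate parameter.
    Returns the posterior (alpha, beta) and the posterior mean after
    observing the given list of counts.
    """
    alpha = alpha0 + sum(counts)
    beta = beta0 + len(counts)
    return alpha, beta, alpha / beta

# Hypothetical background GSR particle counts on 5 control samples
alpha, beta, mean = gamma_poisson_update(1.0, 1.0, [0, 2, 1, 0, 3])
# posterior is Gamma(7, 6), posterior mean 7/6
```

The posterior mean blends the prior expectation with the observed average, which is the "combine prior knowledge with newly acquired data" mechanism the abstract refers to.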

  5. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    Science.gov (United States)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2017-08-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.
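The scaled heights of burst quoted above follow the usual Hopkinson-Cranz cube-root scaling, Z = HOB / W^{1/3}. A trivial illustration, with invented charge values:

```python
def scaled_height_of_burst(hob_cm, charge_kg):
    """Hopkinson-Cranz scaled height of burst, in cm/kg^(1/3)."""
    return hob_cm / charge_kg ** (1.0 / 3.0)

# 100 cm above the ground, 8 kg TNT-equivalent charge:
# cube root of 8 is 2, so the scaled height is 50 cm/kg^(1/3)
z = scaled_height_of_burst(100.0, 8.0)
```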

  6. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    Science.gov (United States)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
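ARX models of the kind used for the predictive controller are identified by ordinary least squares. The sketch below is a minimal single-input, first-order illustration on noise-free synthetic data; the model orders and coefficients are invented, not those identified from the SOFC experiments.

```python
import numpy as np

def fit_arx1(y, u):
    """Least-squares fit of the first-order ARX model y[k] = a*y[k-1] + b*u[k-1]."""
    phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                                      # [a, b]

# Synthetic data generated by a known system with a = 0.9, b = 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

a, b = fit_arx1(y, u)   # recovers a ~ 0.9, b ~ 0.5
```

With noise-free data the estimates are exact up to numerical precision; with experimental data, as in the paper, the same regression yields the best least-squares fit.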

  7. Uncertainty Quantification Reveals the Importance of Data Variability and Experimental Design Considerations for in Silico Proarrhythmia Risk Assessment

    Directory of Open Access Journals (Sweden)

    Kelly C. Chang

    2017-11-01

    The Comprehensive in vitro Proarrhythmia Assay (CiPA) is a global initiative intended to improve drug proarrhythmia risk assessment using a new paradigm of mechanistic assays. Under the CiPA paradigm, the relative risk of drug-induced Torsade de Pointes (TdP) is assessed using an in silico model of the human ventricular action potential (AP) that integrates in vitro pharmacology data from multiple ion channels. Thus, modeling predictions of cardiac risk liability will depend critically on the variability in pharmacology data, and uncertainty quantification (UQ) must comprise an essential component of the in silico assay. This study explores UQ methods that may be incorporated into the CiPA framework. Recently, we proposed a promising in silico TdP risk metric (qNet), which is derived from AP simulations and allows separation of a set of CiPA training compounds into Low, Intermediate, and High TdP risk categories. The purpose of this study was to use UQ to evaluate the robustness of TdP risk separation by qNet. Uncertainty in the model parameters used to describe drug binding and ionic current block was estimated using the non-parametric bootstrap method and a Bayesian inference approach. Uncertainty was then propagated through AP simulations to quantify uncertainty in qNet for each drug. UQ revealed lower uncertainty and more accurate TdP risk stratification by qNet when simulations were run at concentrations below 5× the maximum therapeutic exposure (Cmax). However, when drug effects were extrapolated above 10× Cmax, UQ showed that qNet could no longer clearly separate drugs by TdP risk. This was because for most of the pharmacology data, the amount of current block measured was <60%, preventing reliable estimation of IC50 values. The results of this study demonstrate that the accuracy of TdP risk prediction depends both on the intrinsic variability in ion channel pharmacology data as well as on experimental design considerations that preclude an
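The non-parametric bootstrap for drug-block parameters can be sketched as follows: fit a Hill equation to fractional-block data, resample the residuals, and refit to obtain an interval for the IC50. The data, parameter values, and bounds below are invented for illustration and are not the CiPA pharmacology data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, h):
    """Fractional current block at concentration c (Hill equation)."""
    return 1.0 / (1.0 + (ic50 / c) ** h)

rng = np.random.default_rng(1)
conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # arbitrary units
block = hill(conc, 1.0, 1.2) + rng.normal(0.0, 0.02, conc.size)

# Fit once, then bootstrap the residuals and refit
bounds = ([1e-3, 0.1], [1e3, 10.0])   # keep IC50 and Hill slope positive
p_hat, _ = curve_fit(hill, conc, block, p0=[1.0, 1.0], bounds=bounds)
resid = block - hill(conc, *p_hat)

boot_ic50 = []
for _ in range(200):
    y_star = hill(conc, *p_hat) + rng.choice(resid, resid.size, replace=True)
    p_star, _ = curve_fit(hill, conc, y_star, p0=p_hat, bounds=bounds)
    boot_ic50.append(p_star[0])

lo, hi = np.percentile(boot_ic50, [2.5, 97.5])   # bootstrap 95% interval
```

The abstract's point maps directly onto this sketch: if the measured block never exceeds ~60%, the upper part of the curve is unconstrained and the bootstrap distribution of IC50 becomes very wide.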

  8. The challenge of using experimental infectivity data in risk assessment for Ebola virus: why ecology may be important.

    Science.gov (United States)

    Gale, P; Simons, R R L; Horigan, V; Snary, E L; Fooks, A R; Drew, T W

    2016-01-01

    Analysis of published data shows that experimental passaging of Zaire ebolavirus (EBOV) in guinea pigs changes the risk of infection per plaque-forming unit (PFU), increasing infectivity to some species while decreasing infectivity to others. Thus, a PFU of monkey-adapted EBOV is 10^7-fold more lethal to mice than a PFU adapted to guinea pigs. The first conclusion is that the infectivity of EBOV to humans may depend on the identity of the donor species itself and, on the basis of limited epidemiological data, the question is raised as to whether bat-adapted EBOV is less infectious to humans than nonhuman primate (NHP)-adapted EBOV. Wildlife species such as bats, duikers and NHPs are naturally infected by EBOV through different species giving rise to EBOV with different wildlife species-passage histories (heritages). Based on the ecology of these wildlife species, three broad 'types' of EBOV-infected bushmeat are postulated reflecting differences in the number of passages within a given species, and hence the degree of adaptation of the EBOV present. The second conclusion is that the prior species-transmission chain may affect the infectivity to humans per PFU for EBOV from individuals of the same species. This is supported by the finding that the related Marburg marburgvirus requires ten passages in mice to fully adapt. It is even possible that the evolutionary trajectory of EBOV could vary in individuals of the same species giving rise to variants which are more or less virulent to humans and that the probability of a given trajectory is related to the heritage. Overall the ecology of the donor species (e.g. dog or bushmeat species) at the level of the individual animal itself may determine the risk of infection per PFU to humans reflecting the heritage of the virus and may contribute to the sporadic nature of EBOV outbreaks. © 2015 Crown copyright. © 2015 Society for Applied Microbiology.

  9. Gamma Ray Shielding Study of Barium–Bismuth–Borosilicate Glasses as Transparent Shielding Materials using MCNP-4C Code, XCOM Program, and Available Experimental Data

    Directory of Open Access Journals (Sweden)

    Reza Bagheri

    2017-02-01

    In this work, the linear and mass attenuation coefficients, effective atomic numbers and electron densities, mean free paths, and half-value layer and tenth-value layer values of barium–bismuth–borosilicate glasses were obtained for 662 keV, 1,173 keV, and 1,332 keV gamma ray energies using the MCNP-4C code and the XCOM program. The obtained data were then compared with available experimental data. The MCNP-4C code and XCOM program results were in good agreement with the experimental data. From the shielding point of view, barium–bismuth–borosilicate glasses thus show good gamma ray shielding properties.
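The derived shielding quantities in this record are simple functions of the linear attenuation coefficient mu: mean free path = 1/mu, half-value layer = ln 2 / mu, tenth-value layer = ln 10 / mu. A minimal sketch, with an illustrative (not measured) mass attenuation coefficient and glass density:

```python
import math

def shielding_metrics(mass_mu_cm2_g, density_g_cm3):
    """Derive the linear attenuation coefficient, mean free path, HVL and TVL.

    mass_mu_cm2_g : mass attenuation coefficient (cm^2/g), e.g. from XCOM
    density_g_cm3 : material density (g/cm^3)
    """
    mu = mass_mu_cm2_g * density_g_cm3     # linear attenuation coefficient, 1/cm
    return {
        "mu_cm^-1": mu,
        "mfp_cm": 1.0 / mu,                # mean free path
        "hvl_cm": math.log(2.0) / mu,      # half-value layer
        "tvl_cm": math.log(10.0) / mu,     # tenth-value layer
    }

# Illustrative values for a heavy glass at 662 keV (assumed, not the paper's)
m = shielding_metrics(0.077, 4.8)
```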

  10. An experimental AWTS process and comparisons of ONERA T2 and 0.3-m TCT AWTS data for the ONERA CAST-10 aerofoil

    Science.gov (United States)

    Wolf, Stephen; Jenkins, Renaldo

    1989-01-01

    An experimental Adaptive Wall Test Section (AWTS) process is described. Comparisons of the ONERA T2 and the 0.3-m TCT (transonic cryogenic tunnel) AWTS data for the ONERA CAST-10 airfoil are presented. Most of the 0.3-m TCT data are new and preliminary, and no sidewall boundary-layer control was involved. No conclusions are given.

  11. V&V of MCNP 6.1.1 Beta Against Intermediate and High-Energy Experimental Data

    Energy Technology Data Exchange (ETDEWEB)

    Mashnik, Stepan G [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-08

    This report presents a set of validation and verification (V&V) MCNP 6.1.1 beta results calculated in parallel, with MPI, obtained using its event generators at intermediate and high energies, compared against various experimental data. It also contains several examples of results using the models at energies below 150 MeV, down to 10 MeV, where data libraries are normally used. This report can be considered the fourth part of a set of MCNP6 Testing Primers, after the first (LA-UR-11-05129), second (LA-UR-11-05627), and third (LA-UR-26944) publications, but is devoted to V&V with the latest, 1.1 beta version of MCNP6. The MCNP6 test problems discussed here are presented in the /VALIDATION_CEM/ and /VALIDATION_LAQGSM/ subdirectories in the MCNP6/Testing/ directory. README files are provided for every test problem; they contain short descriptions of every input file, the experiment, the quantity of interest that the experiment measures and its description in the MCNP6 output files, and the publication reference of that experiment. Templates for plotting the corresponding results with xmgrace, as well as pdf files with figures representing the final results of our V&V efforts, are presented. Several technical “bugs” in MCNP 6.1.1 beta were discovered during our current V&V of MCNP6 while running it in parallel with MPI using its event generators. These “bugs” are to be fixed in the following version of MCNP6. Our results show that MCNP 6.1.1 beta, using its CEM03.03, LAQGSM03.03, Bertini, and INCL+ABLA event generators, describes, as a rule, reasonably well different intermediate- and high-energy measured data. This primer is not meant to be read from cover to cover. Readers may skip some sections and go directly to any test problem in which they are interested.

  12. Human impact on the hydrology of the Lake of Monate (Italy): an experimental data base to investigate anthropogenic disturbance

    Science.gov (United States)

    Montanari, A.; Castellarin, A.

    2014-12-01

    The Lake of Monate is located in the Lombardia region of Northern Italy, close to the highest peaks of the Alps. The lake surface is about 2.5 square kilometers, with maximum and mean water depths of 34 and 18 meters, respectively. Intensive agricultural cultivation and mining activities have taken place in the surrounding area since ancient times, as well as intensive urban and industrial development in the recent past. Notwithstanding the above anthropic activity and the urbanization along the lake banks, the Lake of Monate is still close to pristine conditions, and is therefore a unique example of an ecosystem in equilibrium. The human impact is negligible because the lake has no tributaries: the water inflow is supplied by groundwater fluxes only, providing an average inflow volume of about 3.18 million cubic meters. Since Roman times, intensive mining activities have taken place in a large area located outside the geographical contributing catchment of the lake. In fact, the two mining sites of Cava Faraona and Cava Santa Maria are placed beyond the geographical divide between the Lake of Monate and the contiguous Lake of Ternate. However, the presence of subsurface rock layers that are tilted towards the Lake of Monate makes the actual contributing catchment more extended, so that it includes the mining sites. For this reason, the recent decision to intensify the mining activities raised relevant concerns about the possible impact on the ecological equilibrium of the lake. The local administration therefore promoted an intensive monitoring campaign aimed at reaching a better understanding of the hydrology of the lake and the subsurface water fluxes, to quantify the actual impact of the mining works. Meteorological and hydrological data at several locations and at a fine time scale have been collected since fall 2013, building an experimental data set of relevant scientific value. This contribution aims to present the meteorological

  13. Experimental Design and Data Analysis in Receiver Operating Characteristic Studies: Lessons Learned from Reports in Radiology from 1997 to 2006

    Science.gov (United States)

    Shiraishi, Junji; Pesce, Lorenzo L.; Metz, Charles E.; Doi, Kunio

    2009-01-01

    Purpose: To provide a broad perspective concerning the recent use of receiver operating characteristic (ROC) analysis in medical imaging by reviewing ROC studies published in Radiology between 1997 and 2006 for experimental design, imaging modality, medical condition, and ROC paradigm. Materials and Methods: Two hundred ninety-five studies were obtained by conducting a literature search with PubMed with two criteria: publication in Radiology between 1997 and 2006 and occurrence of the phrase “receiver operating characteristic.” Studies returned by the query that were not diagnostic imaging procedure performance evaluations were excluded. Characteristics of the remaining studies were tabulated. Results: Two hundred thirty-three (79.0%) of the 295 studies reported findings based on observers' diagnostic judgments or objective measurements. Forty-three (14.6%) did not include human observers, with most of these reporting an evaluation of a computer-aided diagnosis system or functional data obtained with computed tomography (CT) or magnetic resonance (MR) imaging. The remaining 19 (6.4%) studies were classified as reviews or meta-analyses and were excluded from our subsequent analysis. Among the various imaging modalities, MR imaging (46.0%) and CT (25.7%) were investigated most frequently. Approximately 60% (144 of 233) of ROC studies with human observers published in Radiology included three or fewer observers. Conclusion: ROC analysis is widely used in radiologic research, confirming its fundamental role in assessing diagnostic performance. However, the ROC studies reported in Radiology were not always adequate to support clear and clinically relevant conclusions. © RSNA, 2009 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.2533081632/-/DC1 PMID:19864510
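The ROC performance summarized in such studies reduces, in the empirical case, to the area under the ROC curve, which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney statistic). A minimal sketch with invented reader ratings:

```python
def empirical_auc(neg_scores, pos_scores):
    """Empirical ROC area: P(pos > neg) + 0.5 * P(tie), over all pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical reader confidence ratings (1-5) for normal / diseased cases
auc = empirical_auc([1, 2, 2, 3], [3, 4, 4, 5])
```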

  14. Modelling flow and heat transfer through unsaturated chalk - Validation with experimental data from the ground surface to the aquifer

    Science.gov (United States)

    Thiéry, Dominique; Amraoui, Nadia; Noyer, Marie-Luce

    2018-01-01

    During the winter and spring of 2000-2001, large floods occurred in northern France (Somme River Basin) and southern England (Patcham area of Brighton) in valleys developed on Chalk outcrops. The flood durations were particularly long (more than 3 months in the Somme Basin) and caused significant damage in both countries. To improve the understanding of groundwater flooding in Chalk catchments, an experimental site was set up in the Hallue basin, which is located in the Somme River Basin (France). The unsaturated fractured chalk formation overlying the Chalk aquifer was monitored to understand its reaction to long and heavy rainfall events when it reaches a near-saturation state. The water content and soil temperature were monitored to a depth of 8 m, and the matrix pressure was monitored down to the water table, 26.5 m below ground level. The monitoring extended over a 2.5-year period (2006-2008) under natural conditions and during two periods when heavy, artificial infiltration was induced. The objective of the paper is to describe a vertical numerical flow model based on Richards' equation that was developed from these data to simulate infiltrating rainwater flow from the ground surface to the saturated aquifer. The MARTHE computer code, which models the unsaturated-saturated continuum, was adapted to reproduce the monitored high-saturation periods. Composite constitutive functions (hydraulic conductivity-saturation and pressure-saturation) that integrate the increase in hydraulic conductivity near saturation and the extra available porosity resulting from fractures were introduced into the code. Using these composite constitutive functions, the model was able to accurately simulate the water contents and pressures at all depths over the entire monitored period, including the infiltration tests. The soil temperature was also accurately simulated at all depths, except during the infiltration tests, which contributes to the model validation.
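Constitutive functions for Richards'-equation models are commonly of the Mualem-van Genuchten form; the record describes composite variants of these. The sketch below shows only the standard (non-composite) form, with parameter values invented for illustration rather than taken from the paper's calibration.

```python
def van_genuchten(h, alpha, n, theta_r, theta_s, k_s):
    """Standard Mualem-van Genuchten retention and conductivity functions.

    h : pressure head (m), negative when unsaturated
    Returns (volumetric water content, hydraulic conductivity).
    """
    if h >= 0.0:                                       # saturated
        return theta_s, k_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)         # effective saturation
    theta = theta_r + (theta_s - theta_r) * se
    k = k_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    return theta, k

# Illustrative chalk-matrix-like parameters (assumed, not the paper's values)
theta, k = van_genuchten(-2.0, alpha=1.0, n=1.5, theta_r=0.05,
                         theta_s=0.40, k_s=1e-7)
```

The composite functions mentioned in the abstract modify this curve near saturation, where fracture flow sharply increases the conductivity.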

  15. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: bibliography of experimental data

    Energy Technology Data Exchange (ETDEWEB)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-07-04

    The bibliographic citations in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has three arrangements: alphabetically by author, alphabetically by publication, and numerically by reference number.

  16. LIQUID-LIQUID EQUILIBRIUM FOR TERNARY SYSTEMS CONTAINING ETHYLIC BIODIESEL + ANHYDROUS ETHANOL + REFINED VEGETABLE OIL (SUNFLOWER OIL, CANOLA OIL AND PALM OIL): EXPERIMENTAL DATA AND THERMODYNAMIC MODELING

    Directory of Open Access Journals (Sweden)

    T. P. V. B. Dias

    2015-09-01

    Phase equilibria of the reaction components are essential data for the design and operation of biodiesel production processes. Despite their importance for the production of ethylic biodiesel, the reaction mixture of reactants (oil and ethanol) and product (fatty acid ethyl esters) has up to now received less attention than the corresponding systems formed during the separation and purification phases of biodiesel production using ethanol. In this work, new experimental measurements were performed for the liquid-liquid equilibrium (LLE) of the system containing vegetable oil (sunflower oil and canola oil) + ethylic biodiesel of refined vegetable oil + anhydrous ethanol at 303.15 and at 323.15 K, and the system containing refined palm oil + ethylic biodiesel of refined palm oil + ethanol at 318.15 K. The experimental data were successfully correlated by the nonrandom two-liquid (NRTL) model; the average deviations between calculated and experimental data were smaller than 1.00%.
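The NRTL correlation used here computes activity coefficients from binary interaction parameters. A minimal binary sketch of the standard NRTL equations; the interaction parameters below are invented, not the fitted values of the paper (which treats ternary systems).

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients of a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Illustrative interaction parameters (assumed)
gamma1, gamma2 = nrtl_gamma(0.3, tau12=1.2, tau21=0.8)
```

LLE correlation then amounts to finding the compositions of the two phases for which the activities x_i * gamma_i of every component are equal across the phases.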

  17. Study of Uranium Transport Utilizing Reactive Numerical Modeling and Experimental Data from Heterogeneous Intermediate-Scale Tanks

    Science.gov (United States)

    Rodriguez, D.; Miller, A.; Honeyman, B.

    2007-12-01

    The study of the transport of contaminants in groundwater is critical in order to mitigate risks to downstream receptors from sites where past releases of these contaminants has resulted in the degradation of the water quality of the underlying aquifer. In most cases, the fate and transport of these contaminants occurs in a chemically and physically heterogeneous environment; thereby making the prediction of the ultimate fate of these contaminants difficult. In order to better understand the fundamental processes that have the greatest effect on the transport of these contaminants, careful laboratory study must be completed in a controlled environment. Once the experimental data has been generated, the validation of numerical models may then be achieved. Questions on the management of contaminated sites may center on the long-term release (e.g., desorption, dissolution) behavior of contaminated geomedia. Data on the release of contaminants is often derived from bench-scale experiments or, in rare cases, through field-scale experiments. A central question, however, is how molecular-scale processes (e.g., bond breaking) are expressed at the macroscale. This presentation describes part of a collaborative study between the Colorado School of Mines, the USGS and Lawrence Berkeley National Lab on upscaling pore-scale processes to understanding field-scale observations. In the work described here, two experiments were conducted in two intermediate-scale tanks (2.44 m x 1.22 m x 7.6 cm and 2.44 m x 0.61 m x 7.6 cm) to generate data to quantify the processes of uranium dissolution and transport in fully saturated conditions, and to evaluate the ability of two reactive transport models to capture the relevant processes and predict U behavior at the intermediate scale. Each tank was designed so that spatial samples could be collected from the side of the tank, as well as samples from the effluent end of the tank. 
The larger tank was packed with a less than 2mm fraction of a

  18. Estimation of Solvation Quantities from Experimental Thermodynamic Data: Development of the Comprehensive CompSol Databank for Pure and Mixed Solutes

    Science.gov (United States)

    Moine, Edouard; Privat, Romain; Sirjean, Baptiste; Jaubert, Jean-Noël

    2017-09-01

    The Gibbs energy of solvation measures the affinity of a solute for its solvent and is thus a key property for the selection of an appropriate solvent for a chemical synthesis or a separation process. More fundamentally, Gibbs energies of solvation are choice data for developing and benchmarking molecular models predicting solvation effects. The Comprehensive Solvation (CompSol) database was developed with the ambition to propose very large sets of new experimental solvation chemical-potential, solvation entropy, and solvation enthalpy data of pure and mixed components, covering extended temperature ranges. For mixed compounds, the solvation quantities were generated under infinite-dilution conditions by combining experimental values of pure-component and binary-mixture thermodynamic properties. Three types of binary-mixture properties were considered: partition coefficients, activity coefficients at infinite dilution, and Henry's-law constants. A rigorous methodology was implemented with the aim of selecting data at appropriate conditions of temperature, pressure, and concentration for the estimation of solvation data. Finally, our comprehensive CompSol database contains 21 671 data points associated with 1969 pure species and 70 062 data points associated with 14 102 binary mixtures (including 760 solvation data points related to the ionic-liquid class of solvents). On the basis of the very large amount of experimental data contained in the CompSol database, it is finally discussed how solvation energies are influenced by hydrogen-bonding association effects.
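Two of the three binary-mixture routes mentioned in the record can be sketched with textbook relations: the mole-fraction Henry's-law constant follows from the infinite-dilution activity coefficient and the solute vapor pressure, and a partition coefficient maps to a transfer Gibbs energy. The exact standard-state conventions used by CompSol are more involved; the relations and numbers below are generic illustrations with invented values.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def henry_from_gamma_inf(gamma_inf, p_sat_pa):
    """Mole-fraction Henry's-law constant, H = gamma_inf * Psat (Pa)."""
    return gamma_inf * p_sat_pa

def gibbs_from_partition(k_partition, t_kelvin):
    """Transfer Gibbs energy from a partition coefficient: dG = -R*T*ln(K)."""
    return -R * t_kelvin * math.log(k_partition)

h = henry_from_gamma_inf(55.0, 7.0e3)      # hypothetical solute/solvent pair
dg = gibbs_from_partition(120.0, 298.15)   # J/mol; negative favors phase 2
```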

  19. A TECHNIQUE FOR EXPERIMENTAL DATA PROCESSING AT MODELING THE DISPERSION OF THE BIOLOGICAL TISSUE IMPEDANCE USING THE FRICKE EQUIVALENT CIRCUIT

    Directory of Open Access Journals (Sweden)

    I. V. Krivtsun

    2017-10-01

    Purpose. Modeling the dispersion of the impedance of biological tissue of vegetable and animal origin using the Fricke equivalent circuit; development of a technique for experimental data processing to determine the approximation coefficients of the dispersion of the biological tissue impedance for this equivalent circuit; study of the features of the equivalent circuit in modeling the dispersion of the impedance, resistance, and reactance; definition of the frequency domain in which use of the equivalent circuit is correct; revealing and generalizing the main regularities of the dispersion of the impedance of biological tissue of vegetable and animal origin. Methodology. The technique is based on the scientific provisions of theoretical electrical engineering – the theory of the electromagnetic field in nonlinear media – in modeling the dispersion of the biological tissue impedance. Results. The Fricke equivalent circuit allows modeling the dependences of the impedance modulus of biological tissues, and of the active and reactive components of the impedance, with accuracy acceptable for practical purposes in the frequency domain from 10^3 to 10^6 Hz. The impedance equation of the Fricke equivalent circuit for biological tissues makes it possible to approximate the frequency dependences of the impedance modulus and of the active and reactive parts of the total resistance only by using the approximation coefficients corresponding to each part. The developed method for determining the values of the approximation coefficients of the impedance equation for the Fricke equivalent circuit makes it possible to determine these values with high accuracy for various biological tissues. It is shown that the frequency dependences of the active component of the total resistance for tissues of vegetable and animal origin are similar. Originality. The developed technique operates with the normalized values of the impedance modulus of the Fricke
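The Fricke equivalent circuit referred to here is the classic cell model: an extracellular resistance in parallel with a series branch of intracellular resistance and membrane capacitance. A minimal sketch of its complex impedance, with illustrative component values (not fitted to any particular tissue):

```python
import math

def fricke_impedance(freq_hz, r_e, r_i, c_m):
    """Complex impedance of the Fricke cell model.

    Extracellular resistance r_e in parallel with the series branch
    of intracellular resistance r_i and membrane capacitance c_m.
    """
    omega = 2.0 * math.pi * freq_hz
    z_branch = r_i + 1.0 / (1j * omega * c_m)
    return (r_e * z_branch) / (r_e + z_branch)

# Illustrative values (assumed)
r_e, r_i, c_m = 1000.0, 300.0, 1e-8
z_low = fricke_impedance(10.0, r_e, r_i, c_m)    # approaches r_e
z_high = fricke_impedance(1e8, r_e, r_i, c_m)    # approaches r_e*r_i/(r_e+r_i)
```

The two limits reproduce the dispersion the abstract describes: at low frequency the membrane capacitance blocks the intracellular path, while at high frequency it shorts, bringing the intracellular resistance into play.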

  20. Development of an Experimental Data Base to Validate Compressor-Face Boundary Conditions Used in Unsteady Inlet Flow Computations

    Science.gov (United States)

    Sajben, Miklos; Freund, Donald D.

    1998-01-01

    The ability to predict the dynamics of integrated inlet/compressor systems is an important part of designing high-speed propulsion systems. The boundaries of the performance envelope are often defined by undesirable transient phenomena in the inlet (unstart, buzz, etc.) in response to disturbances originating either in the engine or in the atmosphere. Stability margins used to compensate for the inability to accurately predict such processes lead to weight and performance penalties, which translate into a reduction in vehicle range. The prediction of transients in an inlet/compressor system requires either the coupling of two complex, unsteady codes (one for the inlet and one for the engine) or else a reliable characterization of the inlet/compressor interface by specifying a boundary condition. In the context of engineering development programs, only the second option is viable economically. Computations of unsteady inlet flows invariably rely on simple compressor-face boundary conditions (CFBCs). Currently customary conditions include choked flow, constant static pressure, constant axial velocity, constant Mach number, and constant mass flow per unit area. These conditions are straightforward extensions of practices that are valid for, and work well with, steady inlet flows. Unfortunately, it is not at all likely that any flow property would stay constant during a complex system transient. At the start of this effort, no experimental observation existed that could be used to formulate or verify any of the CFBCs. This lack of hard information represented a risk that was recognized to be unacceptably large for a development program. The goal of the present effort was to generate such data. Disturbances reaching the compressor face in flight may have complex spatial structures and temporal histories. Small-amplitude disturbances may be decomposed into acoustic, vorticity, and entropy contributions that are uncoupled if the undisturbed flow is uniform.