WorldWideScience

Sample records for experimental data

  1. Covariance data evaluation for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin

    1993-01-01

    Some methods and codes have been developed and utilized for the covariance evaluation of experimental data, including parameter analysis, physical analysis, spline fitting, etc. These methods and codes can be used in many different cases
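
As a rough illustration of the spline-fitting step mentioned above, the sketch below smooths noisy cross-section-like data with a weighted smoothing spline. The SciPy-based approach and all numbers are illustrative assumptions, not the paper's codes.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic "experimental" points: a smooth excitation-function-like trend plus noise
rng = np.random.default_rng(0)
energy = np.linspace(1.0, 20.0, 40)            # MeV (illustrative)
true_xs = 2.0 / (1.0 + (energy - 14.0) ** 2)   # arbitrary smooth shape
sigma = 0.05 * np.ones_like(energy)            # reported uncertainties
measured = true_xs + rng.normal(0.0, sigma)

# Weighted smoothing spline: weights 1/sigma, smoothing factor s ~ number of points,
# so residuals are allowed to be about one standard deviation per point
spline = UnivariateSpline(energy, measured, w=1.0 / sigma, s=len(energy))
smoothed = spline(energy)

# rms distance of the smoothed curve from the underlying trend
rms_residual = np.sqrt(np.mean((smoothed - true_xs) ** 2))
```

The smoothing factor `s` trades fidelity for smoothness; tying it to the number of points and the reported uncertainties is a common default choice.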

  2. Reconstruction of dynamic structures of experimental setups based on measurable experimental data only

    Science.gov (United States)

    Chen, Tian-Yu; Chen, Yang; Yang, Hu-Jiang; Xiao, Jing-Hua; Hu, Gang

    2018-03-01

    Nowadays, massive amounts of data have been accumulated in many wide-ranging fields, and analyzing existing data to extract as much useful information as possible has become one of the central issues in interdisciplinary research. Often the output data of systems are measurable while the dynamic structures producing these data are hidden; studies that reveal system structures by analyzing available data, i.e., reconstructions of systems, have therefore become one of the most important tasks of information extraction. In the past, most work in this area was based on theoretical analyses and numerical verifications, and direct analyses of experimental data are very rare. In physical science, most analyses of experimental setups have been based on first principles of physical laws, i.e., so-called top-down analyses. In this paper, we conducted an experiment with the "Boer resonant instrument for forced vibration" (BRIFV) and inferred the dynamic structure of the experimental setup purely from the analysis of measurable experimental data, i.e., by applying the bottom-up strategy. The dynamics of the experimental setup are strongly nonlinear and chaotic, and the setup is subject to inevitable noise. We propose using high-order correlation computations to treat the nonlinear dynamics and two-time correlations to treat noise effects. By applying these approaches, we successfully reconstructed the structure of the experimental setup, and the dynamic system reconstructed from the measured data reproduces the experimental results well over a wide range of parameters.

  3. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

    Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise-reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least-squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. Its disadvantages stem from the lack of a priori information: for example, sums of peaks unresolved by a detector are very often replaced with a single peak during data processing, introducing uncontrolled errors into the determination of physical quantities. The problem can be avoided only by experienced personnel whose skills far exceed the challenge. We propose a set of non-parametric techniques that allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional that includes both the experimental data and the a priori information; the minimum of this functional is reached on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows for automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ2 distribution. The approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
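
The functional-minimisation idea above can be sketched in discrete form: minimising the sum of a data-fidelity term and a smoothness penalty leads to a linear system that plays the role of the Euler-Lagrange equation. The discretisation and test data below are assumed for illustration, not taken from the paper.

```python
import numpy as np

def smooth_penalized(y, lam=50.0):
    """Minimise the discrete functional
        F[f] = sum_i (f_i - y_i)^2 + lam * sum_i (f_{i-1} - 2 f_i + f_{i+1})^2,
    whose stationarity condition (the discrete Euler-Lagrange equation) is the
    linear system (I + lam * D'D) f = y, with D the second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 100)
noisy = np.sin(x) + rng.normal(0.0, 0.2, x.size)
smoothed = smooth_penalized(noisy, lam=50.0)
```

The penalty weight lam encodes the a priori smoothness assumption; a chi-square criterion on the residuals could be used to pick it, in the spirit of the confidence-interval remark above.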

  4. Data Analysis in Experimental Biomedical Research

    DEFF Research Database (Denmark)

    Markovich, Dmitriy

    This thesis covers two unrelated topics in experimental biomedical research: data analysis in thrombin generation experiments (collaboration with Novo Nordisk A/S), and analysis of images and physiological signals in the context of neurovascular signalling and blood flow regulation in the brain...... to critically assess and compare obtained results. We reverse engineered the data analysis performed by CAT, a de facto standard assay in the field. This revealed a number of possibilities to improve its methods of data analysis. We found that experimental calibration data is described well with textbook...

  5. Modeling of Experimental Adsorption Isotherm Data

    Directory of Open Access Journals (Sweden)

    Xunjun Chen

    2015-01-01

    Full Text Available Adsorption is considered one of the most effective technologies and is widely used in environmental protection worldwide. Modeling of experimental adsorption isotherm data is an essential way of predicting the mechanisms of adsorption and of advancing adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data obtained from batch tests in the laboratory. The linearized and non-linearized isotherm models were compared and discussed. To determine the best-fit isotherm model, the correlation coefficient (r2) and the standard error (S.E.) of each parameter were used to evaluate the data. The modeling results showed that the non-linear Langmuir model fit the data better than the others, with relatively higher r2 values and smaller S.E. The linear Langmuir model had the highest r2 value; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
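
As a hedged sketch of the non-linear fitting procedure described above, the Langmuir model can be fitted with least squares and scored with r2 and parameter standard errors. The data points below are made up for illustration, not the paper's four data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Non-linear Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (illustrative only)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # equilibrium conc., mg/L
qe = np.array([11.9, 19.9, 29.7, 39.7, 47.5, 52.8])   # adsorbed amount, mg/g

popt, pcov = curve_fit(langmuir, Ce, qe, p0=[50.0, 0.1])
qmax_fit, KL_fit = popt
std_errs = np.sqrt(np.diag(pcov))                      # S.E. of each parameter

residuals = qe - langmuir(Ce, *popt)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
```

Fitting the non-linearized form directly, as here, avoids the error distortion introduced by linearizing the isotherm, which is one plausible reason the non-linear Langmuir fit performed best in the study.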

  6. Acquisition and treatment systems for experimental data

    International Nuclear Information System (INIS)

    Bouard, E.

    1988-01-01

    The acquisition and processing system for experimental data was designed to meet the experimental requirements of a research reactor such as OSIRIS. Its objective is to acquire and process all the information coming from one or more experiments, to archive useful data for later processing, and to give the experimenter a set of tools for better monitoring of the experiment. Its main characteristics are given in this text [fr]

  7. Analysis of DCA experimental data

    International Nuclear Information System (INIS)

    Min, B. J.; Kim, S. Y.; Ryu, S. J.; Seok, H. C.

    2000-01-01

    The lattice characteristics of DCA were calculated with the WIMS-ATR code to validate the WIMS-AECL code for lattice analysis of a CANDU core, using experimental data from DCA at JNC. Analytical studies of some critical experiments were performed to analyze the effects of fuel composition. Reactor physics quantities such as the local power peaking factor (LPF), the effective multiplication factor (Keff), and the coolant void reactivity were calculated for two coolant void fractions (0% and 100%). LPFs calculated by the WIMS-ATR code are in close agreement with the experimental results. LPFs calculated by the WIMS-AECL code with the WINFRITH and ENDF/B-V libraries have similar values for both libraries, but the differences between the experimental data and the WIMS-AECL results are larger than those of the WIMS-ATR code. The maximum difference between the LPFs calculated by WIMS-ATR and the experimental values is within 1.3%. The coupled WIMS-ATR and CITATION code system used in this analysis predicts Keff within 1% ΔK and coolant void reactivity within 4% ΔK/K in all cases. The coolant void reactivity of uranium fuel is found to be positive. To validate the WIMS-AECL code further, the core characteristics of DCA will be calculated with the WIMS-AECL and CITATION codes in the future

  8. Data archiving in experimental physics

    International Nuclear Information System (INIS)

    Dalesio, L.R.; Watson, W. III; Bickley, M.; Clausen, M.

    1998-01-01

    In experimental physics, data is archived from a wide variety of sources and used for a wide variety of purposes. In each of these environments, trade-offs are made between data storage rate, data availability, and retrieval rate. This paper presents archive alternatives in EPICS, the overall archiver design and details on the data collection and retrieval requirements, performance studies, design choices, design alternatives, and measurements made on the beta version of the archiver

  9. Developing Phenomena Models from Experimental Data

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A systematic approach for developing phenomena models from experimental data is presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression, and it is shown how these techniques can be used to uncover...... unknown functionality behind various phenomena in first engineering principles models using experimental data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes. To illustrate the performance...... of the approach, a case study is presented, which shows how an appropriate phenomena model for the growth rate of biomass in a fed-batch bioreactor can be inferred from data....
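
The SDE part of the approach can be illustrated with a toy Euler-Maruyama simulation of biomass growth in a fed-batch-like setting. The model form and all parameter values are illustrative assumptions, not those identified in the case study.

```python
import numpy as np

def simulate_growth_sde(x0=0.1, mu_max=0.5, K=2.0, s0=10.0, sigma=0.05,
                        dt=0.01, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of a toy biomass SDE
        dX = mu(S) * X dt + sigma * X dW,   mu(S) = mu_max * S / (K + S),
    with substrate S consumed in proportion to growth (unit yield for simplicity)."""
    rng = np.random.default_rng(seed)
    x, s = x0, s0
    traj = np.empty(n_steps + 1)
    traj[0] = x
    for i in range(n_steps):
        mu = mu_max * s / (K + s)            # Monod-type growth rate
        dw = rng.normal(0.0, np.sqrt(dt))    # Wiener increment
        growth = mu * x * dt
        x = x + growth + sigma * x * dw
        s = max(s - growth, 0.0)             # substrate depleted by growth
        traj[i + 1] = x
    return traj

traj = simulate_growth_sde()
```

In the approach described above the functional form of mu(S) would not be assumed but uncovered from data via nonparametric regression on the estimated state; the simulation here only shows the kind of stochastic model being identified.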

  11. Status of experimental data for neutron induced reactions

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Mamoru [Tohoku Univ., Sendai (Japan)

    1998-11-01

    A short review is presented on the status of experimental data for neutron-induced reactions above 20 MeV, based on the EXFOR database and journals. Experimental data which were obtained in a systematic manner and/or by multiple authors are surveyed and tabulated for nuclear data evaluation and benchmark testing of the evaluated data. (author). 61 refs.

  12. PROTEUS Experimental data

    International Nuclear Information System (INIS)

    Perret, G.

    2013-01-01

    This presentation gives an overview of the PROTEUS experimental programme performed at PSI over more than 30 years. In the 1970s the Gas-Cooled Fast Reactor (GCFR) experiments were essentially designed to improve the nuclear data in the fast energy range. The light water reactor experiments performed in the 1980s (HCLWR) and until 2006 (LWR-PROTEUS, Phases I, II and III) allowed various configurations for PWR and BWR to be studied. More information is available on the PROTEUS web site at http://proteus.web.psi.ch

  13. Development of the NSRR experimental data bank system, (1)

    International Nuclear Information System (INIS)

    Ishijima, Kiyomi; Uemura, Mutsumi; Ohnishi, Nobuaki

    1981-01-01

    To promote the collection, arrangement, and utilization of NSRR experimental data, the NSRR experimental data bank system was developed. The fundamental parts of the system, including the processing program DTBNK, have been completed, and data from the experiments performed so far have been collected and stored. The outline of the processing program, the method of utilization, and the present status of the data bank system are discussed. (author)

  14. Steam as turbine blade coolant: Experimental data generation

    Energy Technology Data Exchange (ETDEWEB)

    Wilmsen, B.; Engeda, A.; Lloyd, J.R. [Michigan State Univ., East Lansing, MI (United States)

    1995-10-01

    Steam is a possible option for cooling blades in high-temperature gas turbines. However, practically no experimental data exist with which to quantify steam as a coolant. This work deals with an attempt to generate such data and with the design of an experimental setup used for that purpose. Initially, in order to guide the direction of the experiments, a preliminary theoretical and empirical prediction of the expected experimental data was performed and is presented here. This initial analysis also compares the coolant properties of steam and air.

  15. Detection of outliers in gas centrifuge experimental data

    International Nuclear Information System (INIS)

    Andrade, Monica C.V.; Nascimento, Claudio A.O.

    2005-01-01

    Isotope separation in a gas centrifuge is a very complex process. Development and optimization of a gas centrifuge require experimentation. These data contain experimental errors and, like other experimental data, may contain some gross errors, also known as outliers. The detection of outliers in gas centrifuge experimental data may be quite complicated because there is not enough repetition for precise statistical determination, and the physical equations may be applied only to the control of the mass flows. Moreover, the concentrations are poorly predicted by phenomenological models. This paper presents the application of a three-layer feed-forward neural network to the detection of outliers in a very extensive experiment on the separation performance of a gas centrifuge. (author)
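
A minimal sketch of the outlier-detection idea: train a small three-layer (one hidden layer) feed-forward network on the measurements and flag points with large residuals. The hand-rolled network, training scheme, and synthetic data are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def fit_mlp(x, y, hidden=10, lr=0.05, epochs=4000, seed=1):
    """Train a minimal one-hidden-layer feed-forward network by full-batch
    gradient descent on mean squared error; return its predictions on x."""
    rng = np.random.default_rng(seed)
    X = x.reshape(-1, 1)
    Y = y.reshape(-1, 1)
    n = len(y)
    W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # hidden layer
        pred = H @ W2 + b2                    # linear output layer
        err = pred - Y
        gW2 = H.T @ err / n                   # backpropagated gradients
        gb2 = float(err.mean())
        gH = (err @ W2.T) * (1.0 - H ** 2)
        gW1 = X.T @ gH / n
        gb1 = gH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

# Smooth "separation performance" trend with one injected gross error
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.05, 60)
y[30] += 4.0                                  # simulated outlier

residuals = y - fit_mlp(x, y)
flagged = np.where(np.abs(residuals) > 3.0 * residuals.std())[0]
```

The smooth network fits the bulk of the data but cannot chase the isolated gross error, so the outlier stands out in the residuals; this is the property the paper exploits in place of statistical repetition.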

  16. User's manual of JT-60 experimental data analysis system

    International Nuclear Information System (INIS)

    Hirayama, Takashi; Morishima, Soichi; Yoshioka, Yuji

    2010-02-01

    In the Japan Atomic Energy Agency Naka Fusion Institute, many experiments have been conducted using the large tokamak device JT-60 with the aim of realizing a fusion power plant. In order to optimize the JT-60 experiments and to investigate the complex characteristics of plasma, the JT-60 experimental data analysis system was developed and is used for collecting, referencing, and analyzing JT-60 experimental data. The main components of the system are a data analysis server and a database server, for the analysis and accumulation of the experimental data respectively. Other peripheral devices of the system are magnetic disk units, an NAS (Network Attached Storage) device, and a backup tape drive. This is a user's manual for the JT-60 experimental data analysis system. (author)

  17. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for the statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty and to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed those of the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data is presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to processing discrepant half-life experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
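
The classical Mandel-Paule idea behind such procedures can be sketched as follows: inflate each reported variance by a common term s2 (the unrecognized-error contribution) until the weighted chi-square equals N - 1. This is a generic textbook-style implementation, not the authors' procedure, and the input numbers are illustrative.

```python
import numpy as np

def mandel_paule(x, u, tol=1e-10, max_iter=200):
    """Mandel-Paule estimate of the unrecognized (inter-measurement) variance s2.
    Finds s2 >= 0 such that Q(s2) = sum w_i (x_i - xbar)^2 equals N - 1,
    with w_i = 1/(u_i^2 + s2); returns the weighted mean, its uncertainty, and s2."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    n = len(x)

    def chi2(s2):
        w = 1.0 / (u ** 2 + s2)
        xbar = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - xbar) ** 2)

    if chi2(0.0) <= n - 1:               # data consistent: no extra variance needed
        s2 = 0.0
    else:                                # bisection on the decreasing function chi2(s2)
        lo, hi = 0.0, 10.0 * np.var(x) + 1.0
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if chi2(mid) > n - 1:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        s2 = 0.5 * (lo + hi)
    w = 1.0 / (u ** 2 + s2)
    mean = np.sum(w * x) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w)), s2

# Discrepant half-life-style data (illustrative numbers only)
mean, unc, s2 = mandel_paule([10.0, 10.4, 9.1, 11.2], [0.1, 0.1, 0.1, 0.1])
```

Because the reported uncertainties here are far too small to explain the scatter, the procedure returns a nonzero s2 and an output uncertainty much larger than the naive weighted-average one, which mirrors the paper's finding for discrepant data.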

  18. Data base of reactor physics experimental results in Kyoto University critical assembly experimental facilities

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Fujine, Shigenori; Hayashi, Masatoshi

    1986-01-01

    The Kyoto University critical assembly experimental facilities belong to the Kyoto University Research Reactor Institute and form a versatile critical assembly constructed for the experimental study of reactor physics and reactor engineering. The facilities are shared by universities throughout Japan. During the more than ten years since initial criticality in 1974, various experiments on reactor physics and reactor engineering have been carried out using facilities such as two solid-moderated cores, a light-water-moderated core, and a neutron generator. The experiments carried out were diverse, and finding the required data among them is very troublesome; accordingly, it became necessary to build a computer-searchable database from the data accumulated over more than ten years. The outline of the database, the database CAEX using personal computers, the database supported by a large computer, and related topics are reported. (Kako, I.)

  19. Detection of outliers in a gas centrifuge experimental data

    Directory of Open Access Journals (Sweden)

    M. C. V. Andrade

    2005-09-01

    Full Text Available Isotope separation with a gas centrifuge is a very complex process. Development and optimization of a gas centrifuge require experimentation. These data contain experimental errors and, like other experimental data, may contain some gross errors, also known as outliers. The detection of outliers in gas centrifuge experimental data is quite complicated because there is not enough repetition for precise statistical determination, and the physical equations may be applied only to the control of the mass flow. Moreover, the concentrations are poorly predicted by phenomenological models. This paper presents the application of a three-layer feed-forward neural network to the detection of outliers in a very extensive experiment on the separation performance of a gas centrifuge.

  20. Collection of experimental data for fusion neutronics benchmark

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Yamamoto, Junji; Ichihara, Chihiro; Ueki, Kotaro; Ikeda, Yujiro.

    1994-02-01

    Over the past ten years or more, many benchmark experiments for fusion neutronics have been carried out at two principal D-T neutron sources, FNS at JAERI and OKTAVIAN at Osaka University, and valuable experimental data have been accumulated. As an activity of the Fusion Reactor Physics Subcommittee of the Reactor Physics Committee, these experimental data are compiled in this report. (author)

  1. The computer library of experimental neutron data

    International Nuclear Information System (INIS)

    Bychkov, V.M.; Manokhin, V.N.; Surgutanov, V.V.

    1976-05-01

    The paper describes the computer library of experimental neutron data at the Obninsk Nuclear Data Centre. The format of the library (EXFOR) and the system of programmes for supplying the library are briefly described. (author)

  2. Adjustment model of thermoluminescence experimental data

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.; Moreno B, A.

    2002-01-01

    This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i exp(-(T - c_i)^2 / b_i), where a_i, b_i, c_i are the parameters of the i-th peak, each adjusted to a Gaussian curve. The adjustment of the curve can be performed manually or analytically using the macro function and the Solver.xla add-in previously installed in the computational system. In this work it is shown: 1. Experimental data from a LiF curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated in macro mode. 2. A LiF curve of four peaks obtained from Harshaw information simulated in Microsoft Excel, discussed in previous works, as a reference not in macro mode. (Author)

  3. Neutron cross section and covariance data evaluation of experimental data for 27Al

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Jianfeng; Liu Tingjin

    2006-01-01

    The evaluation of neutron cross section and covariance data for 27 Al in the energy range from 210 keV to 20 MeV was carried out on the basis of experimental data taken mainly from the EXFOR library. After the experimental data and their errors were analyzed, selected, and corrected, the SPCC code was used to fit the data and merge the covariance matrix. The evaluated neutron cross section data and covariance matrix given for 27 Al can be adopted for the evaluated library and can also serve as a basis for related theoretical calculations. (authors)

  4. Data acquisition, processing and display of experimental data for the Tokamak de Varennes

    International Nuclear Information System (INIS)

    Robins, E.S.; Larsen, J.M.; Lee, A.; Somers, G.

    1985-01-01

    The Tokamak de Varennes is to be a national facility for research into magnetic nuclear fusion. A centralised computer system is currently under development to facilitate the remote control, acquisition, processing, and display of experimental data. The software (GALE-V) consists of a set of tasks that build data structures mirroring the physical arrangement of each experiment and provide the basis for the interpretation and presentation of the data to each experimenter. Data retrieval is accomplished through the graphics subsystem, and an interface for user-written data processing programs allows for the varied data analysis needs of each experiment. Other facilities being developed provide the tools for a user to retrieve, process, and view the data in a simple manner

  5. Improving plant bioaccumulation science through consistent reporting of experimental data

    DEFF Research Database (Denmark)

    Fantke, Peter; Arnot, Jon A.; Doucette, William J.

    2016-01-01

    Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role in assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new...... experimental data are generated they are used to improve our understanding of plant-chemical interactions, which in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never......-ending process needed to advance our ability to provide reliable, quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent...

  6. Application of data base management systems for developing experimental data base using ES computers

    International Nuclear Information System (INIS)

    Vasil'ev, V.I.; Karpov, V.V.; Mikhajlyuk, D.N.; Ostroumov, Yu.A.; Rumyantsev, A.N.

    1987-01-01

    Modern database management systems (DBMS) are widely used for the development and operation of various databases in data processing systems for economics, planning, and management. To date, however, the development and operation of masses of experimental physics data on ES computers has been based mainly on the traditional technology of sequential or indexed-sequential files. The principal statements on the applicability of DBMS technology for compiling and operating databases of physical experiment data are formulated based on an analysis of DBMS capabilities. It is shown that the application of a DBMS makes it possible to substantially reduce the overall computational cost of developing and operating such databases and to decrease the volume of stored experimental data when analyzing its information content

  7. Experimental animal data and modeling of late somatic effects

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    1988-01-01

    This section is restricted to radiation-induced life shortening and cancer, and mainly to studies with external radiation. The emphasis is on the experimental data that are available and the experimental systems that could provide the type of data with which to formulate or test models. Genetic effects, which are of concern, are not discussed in this section. Experimental animal radiation studies fall into those that establish general principles and those that demonstrate mechanisms. General principles include the influence of dose, radiation quality, dose rate, fractionation, protraction, and such biological factors as age and gender. The influences of these factors are considered general principles because they are independent, at least qualitatively, of the species studied. For example, if an increase in the LET of the radiation causes increased effectiveness in cancer induction in the mouse, a comparable increase in effectiveness can be expected in humans. Thus, models, whether empirical or mechanistic, formulated from experimental animal data should be generally applicable

  8. Neutron cross section and covariance data evaluation of experimental data for {sup 27}Al

    Energy Technology Data Exchange (ETDEWEB)

    Chunjuan, Li; Jianfeng, Liu [Physics Department , Zhengzhou Univ., Zhengzhou (China); Tingjin, Liu [China Nuclear Data Center, China Inst. of Atomic Energy, Beijing (China)

    2006-07-15

    The evaluation of neutron cross section and covariance data for {sup 27}Al in the energy range from 210 keV to 20 MeV was carried out on the basis of experimental data taken mainly from the EXFOR library. After the experimental data and their errors were analyzed, selected, and corrected, the SPCC code was used to fit the data and merge the covariance matrix. The evaluated neutron cross section data and covariance matrix given for {sup 27}Al can be adopted for the evaluated library and can also serve as a basis for related theoretical calculations. (authors)

  9. Fitting experimental data by using weighted Monte Carlo events

    International Nuclear Information System (INIS)

    Stojnev, S.

    2003-01-01

    A method for fitting experimental data using a modified Monte Carlo (MC) sample is developed. It is intended to help when a single finite MC source has to be fitted to experimental data in a search for the parameters of an underlying theory. The extraction of the parameters sought, the estimation of their errors, and the goodness-of-fit testing are based on the binned maximum likelihood method
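
A hedged sketch of a binned maximum-likelihood fit with a weighted MC template: one finite MC sample, generated from a convenient distribution, is re-weighted to model the theory, and a parameter (here just a normalisation) is fitted to Poisson-distributed bin counts. The distributions and the single fitted parameter are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
edges = np.linspace(-3.0, 3.0, 13)

# "Experimental" histogram: Poisson counts from a Gaussian signal of ~500 events
expected = 500.0 * np.diff(norm.cdf(edges))
data = rng.poisson(expected)

# Weighted MC sample drawn from a different (flat) generation distribution,
# re-weighted to the Gaussian shape: weight = target pdf / generation pdf
mc = rng.uniform(-3.0, 3.0, 20000)
weights = norm.pdf(mc) / (1.0 / 6.0)
template, _ = np.histogram(mc, bins=edges, weights=weights)
template /= weights.sum()                 # normalised per-event bin probabilities

def nll(nsig):
    """Binned negative log-likelihood with Poisson bin contents (constant terms dropped)."""
    mu = np.clip(nsig * template, 1e-9, None)
    return np.sum(mu - data * np.log(mu))

fit = minimize_scalar(nll, bounds=(1.0, 5000.0), method="bounded")
nsig_hat = fit.x
```

A full treatment along the paper's lines would also propagate the statistical fluctuations of the finite weighted MC template into the likelihood, which this sketch omits.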

  10. Experimental data base for gamma-ray strength functions

    International Nuclear Information System (INIS)

    Kopecky, J.

    1999-01-01

    Theoretical and experimental knowledge of γ-ray strength functions is a very important ingredient for the description and calculation of photon production data in all reaction channels. This study focuses on experimental γ-ray strength functions, collected over a period of about 40 years and based on measurements of partial radiative widths

  11. TFTR Experimental Data Analysis Collaboration

    International Nuclear Information System (INIS)

    Callen, J.D.

    1993-01-01

    The research performed during the second year of this three-year grant has concentrated on a few key TFTR experimental data analysis issues: MHD mode identification and effects on supershots; identification of new MHD modes; MHD mode theory-experiment comparisons; local electron heat transport inferred from impurity-induced cool pulses; and some other topics. Progress in these areas and activities undertaken in conjunction with this grant are summarized briefly in this report

  12. New System For Tokamak T-10 Experimental Data Acquisition, Data Handling And Remote Access

    International Nuclear Information System (INIS)

    Sokolov, M. M.; Igonkina, G. B.; Koutcherenko, I. Yu.; Nurov, D. N.

    2008-01-01

    For carrying out experiments on nuclear fusion devices at the Institute of Nuclear Fusion, Moscow, a system for experimental data acquisition, data handling, and remote access (hereafter 'DAS-T10') was developed and has been used at the Institute since 2000. The DAS-T10 maintains the whole cycle of experimental data handling: from the configuration of data measuring equipment and the acquisition of raw data from the fusion device (the Device), to the presentation of mathematically processed data and support of the experiment data archive. The DAS-T10 provides facilities for researchers to access the data both at early stages of an experiment and well afterwards, locally from within the experiment network and remotely over the Internet. The DAS-T10 has been undergoing modernization since 2007. The new version of the DAS-T10 will accommodate modern data measuring equipment and will implement improved architectural solutions. The innovations will allow the DAS-T10 to produce and handle larger amounts of experimental data, thus providing opportunities to intensify and extend fusion research. The new features of the DAS-T10, along with the existing design principles, are reviewed in this paper

  13. Clustering of experimental data and its application to nuclear data evaluation

    International Nuclear Information System (INIS)

    Abboud, A.; Rashed, R.; Ibrahim, M.

    1998-01-01

    A semi-automatic pre-processing technique has been proposed by Iwasaki to classify the experimental data for a reaction into one or a small number of large data groups, called main cluster(s), and to eliminate data which deviate from the main body of the data. The classification method is based on techniques like pattern clustering in the information processing domain. Tests of the data clustering formed reasonable main clusters for three activation cross sections. This technique is a helpful tool in neutron cross-section evaluation
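
A minimal one-dimensional stand-in for the clustering idea: group sorted measurements, keep the largest group as the main cluster, and eliminate the rest. The gap-based grouping rule and the numbers are illustrative assumptions, not Iwasaki's actual technique.

```python
import numpy as np

def main_cluster(values, gap=2.0):
    """Split the sorted measurements into groups wherever two consecutive
    values differ by more than `gap`, and keep the largest ("main") cluster.
    Returns the indices of the retained points; the rest would be eliminated."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    clusters = [[order[0]]]
    for i in range(1, len(order)):
        if values[order[i]] - values[order[i - 1]] > gap:
            clusters.append([])        # large gap: start a new cluster
        clusters[-1].append(order[i])
    return sorted(max(clusters, key=len))

# Hypothetical activation cross-section measurements (mb): a main body plus two outliers
xs = [101.2, 100.8, 101.5, 99.9, 100.4, 95.0, 107.3]
keep = main_cluster(xs, gap=2.0)
```

In a real evaluation the retained main cluster would then feed the fitting and covariance steps, with the eliminated points reviewed by the evaluator rather than discarded silently.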

  14. Operation and management manual of JT-60 experimental data analysis system

    International Nuclear Information System (INIS)

    Hirayama, Takashi; Morishima, Soichi

    2014-03-01

    In the Japan Atomic Energy Agency Naka Fusion Institute, many experiments have been conducted using the large tokamak device JT-60 with the aim of realizing a fusion power plant. In order to optimize the JT-60 experiments and to investigate the complex characteristics of plasma, the JT-60 experimental data analysis system was developed and is used for collecting, referencing, and analyzing JT-60 experimental data. The main components of the system are a data analysis server and a database server, for the analysis and accumulation of the experimental data respectively. Other peripheral devices of the system are magnetic disk units, an NAS (Network Attached Storage) device, and a backup tape drive. This is an operation and management manual for the JT-60 experimental data analysis system. (author)

  15. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
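The two kinds of validation metric described above can be illustrated as follows. The exact definitions used by the authors are not reproduced here, so both functions and the sample numbers are hypothetical stand-ins.

```python
import statistics

# Hypothetical sketches of a cumulative and a median uncertainty metric of
# the kind described above; the authors' exact definitions may differ.

def cumulative_uncertainty(model, exper):
    """Aggregate relative deviation over the whole database."""
    return sum(abs(m - e) for m, e in zip(model, exper)) / sum(exper)

def median_uncertainty(model, exper):
    """Median point-wise relative deviation; robust against sparse outliers."""
    return statistics.median(abs(m - e) / e for m, e in zip(model, exper))

model = [100.0, 210.0, 95.0, 400.0]   # hypothetical model cross sections (mb)
exper = [110.0, 200.0, 100.0, 390.0]  # matching experimental values (mb)
cum = cumulative_uncertainty(model, exper)   # 35/800 = 0.04375
med = median_uncertainty(model, exper)       # 0.05
```

The cumulative form weights every data point into one overall figure, while the median form is insensitive to a few large discrepancies, which is why it suits sparse, noisy databases.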

  16. Improving plant bioaccumulation science through consistent reporting of experimental data.

    Science.gov (United States)

    Fantke, Peter; Arnot, Jon A; Doucette, William J

    2016-10-01

    Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent data collection and reporting requirements, the information generated is often less useful than it could be for direct applications in chemical assessments and for model development and refinement. We review existing testing guidelines, common data reporting practices, and provide recommendations for revising testing guidelines and reporting requirements to improve bioaccumulation knowledge and models. This analysis provides a list of experimental parameters that will help to develop high quality datasets and support modeling tools for assessing bioaccumulation of organic chemicals in plants and ultimately addressing uncertainty in ecological and human health risk assessments.

  17. Estimation of covariance matrix on the experimental data for nuclear data evaluation

    International Nuclear Information System (INIS)

    Murata, T.

    1985-01-01

    In order to evaluate fission and capture cross sections of some U and Pu isotopes for JENDL-3, we plan to evaluate them simultaneously with a least-squares method. For the simultaneous evaluation, a covariance matrix is required for each experimental data set. In the present work, we have studied procedures for deriving the covariance matrix from the error data given in the experimental papers. The covariance matrices were obtained using the partial errors and estimated correlation coefficients between partial errors of the same type at different neutron energies. Some examples of covariance matrix estimation are explained and preliminary results of the simultaneous evaluation are presented. (author)
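The construction from partial errors and correlation coefficients can be sketched as follows, under the simplifying assumption that each error component carries a single correlation coefficient between different energy points; the component names and all numbers are hypothetical.

```python
# Sketch of a covariance matrix built from partial errors, assuming each
# error component k carries one correlation coefficient corr[k] between
# different energy points (1.0 for a fully correlated normalization error,
# 0.0 for statistical errors). All numbers are hypothetical.

def covariance(components, corr):
    """components[k][i] = absolute partial error of component k at point i."""
    n = len(components[0])
    cov = [[0.0] * n for _ in range(n)]
    for comp, c in zip(components, corr):
        for i in range(n):
            for j in range(n):
                rho = 1.0 if i == j else c
                cov[i][j] += rho * comp[i] * comp[j]
    return cov

stat = [0.03, 0.02, 0.04]   # statistical errors, uncorrelated between points
norm = [0.02, 0.02, 0.02]   # normalization error, fully correlated
cov = covariance([stat, norm], [0.0, 1.0])
# cov[0][0] = 0.03**2 + 0.02**2; cov[0][1] = 0.02*0.02 (normalization only)
```

Summing component-wise contributions like this automatically yields a symmetric matrix, and the diagonal reproduces the total quoted error squared.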

  18. BirdsEyeView (BEV): graphical overviews of experimental data

    Directory of Open Access Journals (Sweden)

    Zhang Lifeng

    2012-09-01

    Background: Analyzing global experimental data can be tedious and time-consuming. Thus, helping biologists see results as quickly and easily as possible can facilitate biological research, and is the purpose of the software we describe. Results: We present BirdsEyeView, a software system for visualizing experimental transcriptomic data using different views that users can switch among and compare. BirdsEyeView graphically maps data to three views: Cellular Map (currently a plant cell), Pathway Tree with dynamic mapping, and Gene Ontology (http://www.geneontology.org) Biological Processes and Molecular Functions. By displaying color-coded values for transcript levels across different views, BirdsEyeView can assist users in developing hypotheses about their experimental results. Conclusions: BirdsEyeView is a software system available as a Java Webstart package for visualizing transcriptomic data in the context of different biological views to assist biologists in investigating experimental results. BirdsEyeView can be obtained from http://metnetdb.org/MetNet_BirdsEyeView.htm.

  19. Graphic display of spatially distributed binary-state experimental data

    International Nuclear Information System (INIS)

    Watson, B.L.

    1981-01-01

    Experimental data collected from a large number of transducers spatially distributed throughout a three-dimensional volume has typically posed a difficult interpretation task for the analyst. This paper describes one approach to alleviating this problem by presenting color graphic displays of experimental data; specifically, data representing the dynamic three-dimensional distribution of cooling fluid collected during the reflood and refill of simulated nuclear reactor vessels. Color-coded binary data (wet/dry) are integrated with a graphic representation of the reactor vessel and displayed on a high-resolution color CRT. The display is updated with successive data sets and made into 16-mm movies for distribution and analysis. Specific display formats are presented and extension to other applications discussed

  20. Clustering of experimental data and its application to nuclear data evaluation

    International Nuclear Information System (INIS)

    Abboud, A.; Rashed, R.; Ibrahim, M.

    1997-01-01

    A semi-automatic pre-processing technique has been proposed by Iwasaki to classify the experimental data for a reaction into one or a small number of large data groups, called main cluster(s), and to eliminate data which deviate from the main body of the data. The classifying method is based on a technique similar to pattern clustering in the information-processing domain. Tests of the data clustering formed reasonable main clusters for three activation cross-sections. This technique is a helpful tool in neutron cross-section evaluation. (author). 4 refs, 1 fig., 3 tabs

  1. Figure output program for JFT-2M experimental data

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Mori, Masahiro; Matsuda, Toshiaki; Takada, Susumu.

    1991-11-01

    The software for the figure output of JFT-2M experimental data is reported. Because the configuration of a figure is determined by a few simple input parameters, any format of each experimental output can be configured freely with this software. (author)

  2. Experimental data and dose-response models

    International Nuclear Information System (INIS)

    Ullrich, R.L.

    1985-01-01

    Dose-response relationships for radiation carcinogenesis have been of interest to biologists, modelers, and statisticians for many years. Despite this interest, there are few instances in which there are sufficient experimental data to allow the fitting of various dose-response models. In those experimental systems for which data are available, the dose-response curves for tumor induction cannot be described by a single model. Dose-response models which have been observed following acute exposures to gamma rays include threshold, quadratic, and linear models. Data on sex, age, and environmental influences suggest a strong role of host factors in the dose response. With decreasing dose rate, the effectiveness of gamma-ray irradiation tends to decrease in essentially every instance. In those cases in which the high-dose-rate dose response could be described by a quadratic model, the effect of dose rate is consistent with predictions based on radiation effects on the induction of initial events. Whether the underlying reason for the observed dose-rate effect lies in effects on the induction of initial events or in effects on the subsequent steps in the carcinogenic process is unknown. Information on the dose response for tumor induction for high-LET (linear energy transfer) radiations such as neutrons is even more limited. The observed dose and dose-rate data for tumor induction following neutron exposure are complex and do not appear to be consistent with predictions based on models for the induction of initial events
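The linear and quadratic forms mentioned above are often combined into the linear-quadratic model I(D) = aD + bD^2. As a worked illustration, the sketch below fits that form by ordinary least squares through its 2x2 normal equations; the incidence data are hypothetical and constructed to follow the model exactly.

```python
# Toy least-squares fit of the linear-quadratic dose-response form
# I(D) = a*D + b*D^2 via its normal equations. Data are hypothetical.

def fit_linear_quadratic(doses, incidence):
    # Minimize sum (y - a*D - b*D^2)^2 over (a, b):
    #   a*s2 + b*s3 = sy1,  a*s3 + b*s4 = sy2
    s2 = sum(d ** 2 for d in doses)
    s3 = sum(d ** 3 for d in doses)
    s4 = sum(d ** 4 for d in doses)
    sy1 = sum(y * d for d, y in zip(doses, incidence))
    sy2 = sum(y * d ** 2 for d, y in zip(doses, incidence))
    det = s2 * s4 - s3 * s3
    a = (sy1 * s4 - sy2 * s3) / det
    b = (s2 * sy2 - s3 * sy1) / det
    return a, b

doses = [0.5, 1.0, 2.0, 3.0]                    # Gy
incidence = [0.0175, 0.06, 0.22, 0.48]          # = 0.01*D + 0.05*D**2 exactly
a, b = fit_linear_quadratic(doses, incidence)   # recovers a=0.01, b=0.05
```

A threshold model would instead set I(D) = 0 below some dose D0, which cannot be captured by this two-parameter fit; that is one reason a single model fails to describe all the systems discussed above.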

  3. Application of covariance analysis to feed/ration experimental data

    African Journals Online (AJOL)

    Prince Acheampong

    ABSTRACT. The use of Analysis of Covariance (ANOCOVA) on feed/ration experimental data for birds was examined. Correlation and regression analyses were used to adjust for the covariate, the initial weight of the experimental birds. The Fisher's F statistic for the straightforward Analysis of Variance (ANOVA) showed ...

  4. Experimental data base for containment thermalhydraulic analysis

    International Nuclear Information System (INIS)

    Cheng, X.; Bazin, P.; Cornet, P.; Hittner, D.; Jackson, J.D.; Lopez Jimenez, J.; Naviglio, A.; Oriolo, F.; Petzold, H.

    2001-01-01

    This paper describes the joint research project DABASCO, which is supported by the European Community under a cost-shared contract and in which nine European institutions participate. The main objective of the project is to provide a generic experimental data base for the development of physical models and correlations for containment thermalhydraulic analysis. The project consists of seven separate-effects experimental programs which deal with new innovative conceptual features, e.g. passive decay heat removal and spray systems. The results of the various stages of the test programs will be assessed by industrial partners with regard to their applicability to reactor conditions

  5. Error bounds for molecular Hamiltonians inverted from experimental data

    International Nuclear Information System (INIS)

    Geremia, J.M.; Rabitz, Herschel

    2003-01-01

    Inverting experimental data provides a powerful technique for obtaining information about molecular Hamiltonians. However, rigorously quantifying how laboratory error propagates through the inversion algorithm has always presented a challenge. In this paper, we develop an inversion algorithm that realistically treats experimental error. It propagates the distribution of observed laboratory measurements into a family of Hamiltonians that are statistically consistent with the distribution of the data. This algorithm is built upon the formalism of map-facilitated inversion to alleviate computational expense and permit the use of powerful nonlinear optimization algorithms. Its capabilities are demonstrated by identifying inversion families for the X^1Σ_g^+ and a^3Σ_u^+ states of Na_2 that are consistent with the laboratory data

  6. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

    In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data of Large Helical Device (LHD) experiments. Many types of data acquisition systems are currently in operation, and they produce files of various formats. Therefore, it has been difficult to analyze different types of acquired data at the same time, because the data of each system must be read in a particular manner. To facilitate the use of this data by researchers, the authors have developed a new server system, which provides a unified data format and a unique data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was remote usage of the server. The current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible so that it can satisfy future demands. Therefore, the authors decided to develop a new server using a three-tier structure

  7. WPS criterion proposition based on experimental data base interpretation

    International Nuclear Information System (INIS)

    Chapuliot, S.; Izard, J.P.; Moinereau, D.; Marie, S.

    2011-01-01

    This article gives the background and the methodology developed to define a K_J-based criterion for brittle fracture of a Reactor Pressure Vessel (RPV) submitted to a Pressurized Thermal Shock (PTS), taking into account the Warm Pre-Stressing (WPS) effect. The first step of this methodology is the constitution of an experimental data base. This work was performed through a literature survey and partnerships, and allows merging experimental results covering: various ferritic steels; various material states (as received, thermally aged, irradiated, ...); various modes of fracture (cleavage, inter-granular, mixed mode); various specimen geometries and sizes (CT, SENB, mock-ups); and various thermo-mechanical transients. Based on this experimental data base, a simple K_J-based limit is proposed and compared to experimental results. Parametric studies are performed in order to identify the main parameters of the problem. Finally, a simple proposition based on a detailed analysis of the test results is made. Since this proposition gives satisfactory results in all cases, it is a good candidate for integration into the French RSE-M code for in-service assessment. (authors)

  8. Experimental benchmark data for PWR rod bundle with spacer-grids

    International Nuclear Information System (INIS)

    Dominguez-Ontiveros, Elvis E.; Hassan, Yassin A.; Conner, Michael E.; Karoutas, Zeses

    2012-01-01

    In numerical simulations of fuel rod bundle flow fields, the unsteady Navier–Stokes equations have to be solved in order to determine the time (phase) dependent characteristics of the flow. In order to validate the simulation results, detailed comparison with experimental data must be made. Experiments investigating complex flows in rod bundles with spacer grids that have mixing devices (such as flow mixing vanes) have mostly been performed using single-point measurements. In order to obtain more detail and insight on the discrepancies between experimental and numerical data, as well as to obtain a global understanding of the causes of these discrepancies, comparisons of the distributions of complete phase-averaged velocity and turbulence fields at various locations near the spacer grids should be performed. The experimental technique Particle Image Velocimetry (PIV) is capable of providing such benchmark data. This paper describes an experimental database obtained using two-dimensional Time Resolved Particle Image Velocimetry (TR-PIV) measurements within a 5 × 5 PWR rod bundle with spacer grids that have flow mixing vanes. One of the unique characteristics of this set-up is the use of the Matched Index of Refraction technique, employed to allow complete optical access to the rod bundle. This feature allows flow visualization and measurement within the bundle without rod obstruction, and enables the use of high temporal and spatial resolution non-intrusive dynamic measurement techniques, namely TR-PIV, to investigate the flow evolution below and immediately above the spacer. The experimental data presented in this paper include an explanation of the various cases tested, such as test rig dimensions, measurement zones, the test equipment and the boundary conditions, in order to provide appropriate data for comparison with Computational Fluid Dynamics (CFD) simulations. Turbulence parameters of the obtained data are presented in order to gain

  9. Thermodynamic properties of caffeine: Reconciliation of available experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Emel'yanenko, Vladimir N. [Department of Physical Chemistry, University of Rostock, Hermannstrasse 14, D-18051 Rostock (Germany); Verevkin, Sergey P. [Department of Physical Chemistry, University of Rostock, Hermannstrasse 14, D-18051 Rostock (Germany)], E-mail: sergey.verevkin@uni-rostock.de

    2008-12-15

    Molar enthalpies of sublimation of two crystal forms of caffeine were obtained from the temperature dependence of the vapour pressure measured by the transpiration method. A large number of primary experimental results on the temperature dependences of vapour pressure and phase transitions have been collected from the literature and have been treated in a uniform manner in order to derive sublimation enthalpies of caffeine at T = 298.15 K. This collection together with the new experimental results reported here has helped to resolve contradictions in the available sublimation enthalpies data and to recommend a consistent and reliable set of sublimation and formation enthalpies for both crystal forms under study. Ab initio calculations of the gaseous molar enthalpy of formation of caffeine have been performed using the G3MP2 method and the results are in excellent agreement with the selected experimental data.

  10. Thermodynamic properties of caffeine: Reconciliation of available experimental data

    International Nuclear Information System (INIS)

    Emel'yanenko, Vladimir N.; Verevkin, Sergey P.

    2008-01-01

    Molar enthalpies of sublimation of two crystal forms of caffeine were obtained from the temperature dependence of the vapour pressure measured by the transpiration method. A large number of primary experimental results on the temperature dependences of vapour pressure and phase transitions have been collected from the literature and have been treated in a uniform manner in order to derive sublimation enthalpies of caffeine at T = 298.15 K. This collection together with the new experimental results reported here has helped to resolve contradictions in the available sublimation enthalpies data and to recommend a consistent and reliable set of sublimation and formation enthalpies for both crystal forms under study. Ab initio calculations of the gaseous molar enthalpy of formation of caffeine have been performed using the G3MP2 method and the results are in excellent agreement with the selected experimental data

  11. Intuitive web-based experimental design for high-throughput biomedical data.

    Science.gov (United States)

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data are accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be of interest for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and meta-information for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
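The factor-based design idea can be sketched with the standard library alone: the Cartesian product of the factor levels yields one row per sample, written out as a spreadsheet-like TSV sample sheet. The factor names, levels and identifier scheme below are hypothetical, not the system's actual format.

```python
import csv
import io
import itertools

# Hypothetical sketch of a factor-based design: every combination of the
# studied conditions becomes one sample with a generated identifier.
factors = {
    "genotype": ["wild-type", "mutant"],
    "treatment": ["control", "drug"],
    "timepoint_h": [0, 24],
}

rows = []
for i, combo in enumerate(itertools.product(*factors.values()), start=1):
    rows.append({"sample_id": f"S{i:03d}", **dict(zip(factors, combo))})

# Write a spreadsheet-like, human-readable sample sheet (TSV).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]), delimiter="\t")
writer.writeheader()
writer.writerows(rows)
# 2 x 2 x 2 factor levels -> 8 samples; rows[0] describes sample "S001"
```

Keeping the factors in one mapping means adding a level or a whole factor regenerates a consistent sheet, which is the point of designing the experiment before the measurements exist.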

  12. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria of the validation of experimental and evaluated covariance data are reviewed, in particular: (a) the criterion of positive definiteness for covariance matrices, (b) the relationship between the 'integral' experimental and estimated uncertainties, (c) the validity of the statistical invariants, and (d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation was considered and four particular points were examined. First, preserving the positive definiteness of covariance matrices under arbitrary transformations of a random vector was considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least-squares method) were studied. Secondly, an algorithm for the comparison of experimental and estimated 'integral' uncertainties was developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of the 'integral' uncertainty of vectors of experimental and estimated values. Thirdly, a set of statistical invariants, i.e. values which are preserved in statistical processing, was presented. Fourthly, an inequality is given that signals correlations between experimental errors leading to unphysical values. An application is given concerning the cross-section of the (n,t) reaction on 6Li with neutron incident energies between 1 and 100 keV
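Two of these checks, positive definiteness and the square-root-of-determinant 'integral' uncertainty, can be implemented together with a small Cholesky factorization; the covariance matrix below is hypothetical.

```python
import math

# Sketch of criteria (a) and (b) above: positive definiteness is checked via
# a Cholesky factorization, and sqrt(det) of the covariance matrix serves as
# the 'integral' uncertainty measure. The matrix below is hypothetical.

def cholesky(a):
    """Lower-triangular L with a = L L^T, or None if a is not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    return None          # factorization fails: not positive definite
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def integral_uncertainty(cov):
    L = cholesky(cov)
    if L is None:
        raise ValueError("covariance matrix is not positive definite")
    # det(cov) = prod L[i][i]^2, hence sqrt(det) = product of the diagonal of L
    return math.prod(L[i][i] for i in range(len(L)))

cov = [[0.0013, 0.0004], [0.0004, 0.0008]]
u = integral_uncertainty(cov)            # sqrt(0.0013*0.0008 - 0.0004**2)
```

The Cholesky route is numerically safer than computing the determinant directly, since a failed factorization pinpoints the first leading minor that violates positive definiteness.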

  13. Status of experimental data related to Be in ITER materials R and D data bank

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Shigeru [ITER Joint Central Team, Muenchen (Germany)

    1998-01-01

    To keep traceability of the many valuable raw data experimentally obtained in the ITER Technology R and D Tasks related to materials for in-vessel components (divertor, first wall, blanket, vacuum vessel, etc.), and to make the best use of these data in the ITER design activities, the 'ITER Materials R and D Data Bank' has been built up with the use of Excel(TM) spreadsheets. The paper describes the status of the experimental data collected in this data bank on thermo-mechanical properties of unirradiated and neutron-irradiated Be, on plasma-material interactions of Be, on mechanical properties of various kinds of Be/Cu joints (including plasma-sprayed Be), and on thermal fatigue tests of Be/Cu mock-ups. (author)

  14. 40 CFR 158.210 - Experimental use permit data requirements for product chemistry.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit data requirements for product chemistry. 158.210 Section 158.210 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for product chemistry. All product chemistry data, as described in...

  15. Status of experimental data for the VHTR core design

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Seok; Chang, Jong Hwa; Park, Chang Kue

    2004-05-01

    The VHTR (Very High Temperature Reactor) is emerging as a next-generation nuclear reactor to demonstrate emission-free nuclear-assisted electricity and hydrogen production. The VHTR could be either a prismatic or a pebble type helium-cooled, graphite-moderated reactor. The final decision will be made after the completion of the pre-conceptual design for each type. For the pre-conceptual designs of both types, computational tools are being developed. Experimental data are required to validate the tools to be developed. Many experiments on HTGR (High Temperature Gas-cooled Reactor) cores have been performed to confirm the design data and to validate the design tools. The applicability and availability of the existing experimental data for the VHTR core design are investigated in this report.

  16. 40 CFR 158.270 - Experimental use permit data requirements for residue chemistry.

    Science.gov (United States)

    2010-07-01

    ... requirements for residue chemistry. 158.270 Section 158.270 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for residue chemistry. All residue chemistry data, as described in... section 408(r) is sought. Residue chemistry data are not required for an experimental use permit issued on...

  17. Construction of covariance matrix for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhang Jianhua

    1992-01-01

    For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the 23Na(n,2n) cross section is constructed. A reasonable result is obtained
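For indirectly measured data, the covariance is commonly constructed with the "sandwich rule" V_x = S V_m S^T, where S is the sensitivity matrix of the derived quantities with respect to the directly measured ones. Whether this is exactly the procedure used for the 23Na(n,2n) case is not stated in the abstract, so the following is a generic sketch with hypothetical numbers.

```python
# Generic "sandwich rule" sketch for the covariance of indirectly measured
# data: V_x = S V_m S^T, with S[i][j] = dx_i/dm_j. Numbers are hypothetical.

def sandwich(S, Vm):
    n, p = len(S), len(S[0])
    return [[sum(S[i][k] * Vm[k][l] * S[j][l]
                 for k in range(p) for l in range(p))
             for j in range(n)] for i in range(n)]

# Two derived cross sections sharing one measured monitor value: the shared
# measurement induces a correlation between the derived quantities.
S = [[1.0, 0.5],
     [0.0, 0.5]]
Vm = [[0.04, 0.00],
      [0.00, 0.16]]
Vx = sandwich(S, Vm)
# Vx = [[0.08, 0.04], [0.04, 0.04]]: off-diagonal terms appear even though
# the direct measurements themselves were uncorrelated.
```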

  18. Insights in Experimental Data : Interactive Statistics with the ILLMO Program

    NARCIS (Netherlands)

    Martens, J.B.O.S.

    2017-01-01

    Empirical researchers turn to statistics to assist them in drawing conclusions, also called inferences, from their collected data. Often, this data is experimental data, i.e., it consists of (repeated) measurements collected in one or more distinct conditions. The observed data can hence be

  19. Experimental data at high PT and its interpretation: the role of theory

    Energy Technology Data Exchange (ETDEWEB)

    Belonoshko, A. B.; Rosengren, A.

    2011-07-01

    Experiments relevant for planetary science are often performed under extreme conditions of pressure and temperature, which makes them technically difficult. The results are often difficult to interpret correctly, especially when experimental data are scarce and experimental trends are difficult to establish. Theory, while normally inferior in the precision of the data it delivers, is superior in providing the big picture and the details behind the behavior of materials. We consider the experiments performed on deuterium, Mo, and Fe, and demonstrate that when experimental data are verified by theory, significant insight can be gained. (Author) 26 refs.

  20. Covariance data evaluation of some experimental data for n + 65,63,NatCu

    International Nuclear Information System (INIS)

    Jia Min; Liu Jianfeng; Liu Tingjin

    2003-01-01

    The evaluation of covariance data for 65,63,natCu in the energy range from 99.5 keV to 20 MeV was carried out using the EXPCOV and SPC codes, based on the experimental data available. The data can serve as part of the covariance file (File 33) of the evaluated library in ENDF/B-6 format for the corresponding nuclides, and can also be used as a basis for the theoretical calculations concerned. (authors)

  1. Outline and handling manual of experimental data time slice monitoring software 'SLICE'

    International Nuclear Information System (INIS)

    Shirai, Hiroshi; Hirayama, Toshio; Shimizu, Katsuhiro; Tani, Keiji; Azumi, Masafumi; Hirai, Ken-ichiro; Konno, Satoshi; Takase, Keizou.

    1993-02-01

    We have developed a software package, 'SLICE', which maps various kinds of plasma experimental data measured at different geometrical positions in JT-60U and JFT-2M onto the equilibrium magnetic configuration and treats them as functions of the volume-averaged minor radius ρ. Experimental data can be handled uniformly by using 'SLICE'. A rich set of commands makes it easy to process the mapped data. Experimental data measured as line-integrated values are also transformed by Abel inversion. The mapped data are fitted to a functional form and saved to the database 'MAPDB'. 'SLICE' can read the data back from 'MAPDB' and re-display and transform them. Moreover, 'SLICE' creates run data for the orbit-following Monte Carlo code 'OFMC' and the tokamak predictive and interpretation code system 'TOPICS'. This report summarizes an outline and the usage of 'SLICE'. (author)
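The Abel-inversion step mentioned above can be illustrated with the classical "onion peeling" scheme, in which line-integrated chord measurements are unfolded shell by shell from the outside in. This is a textbook sketch, not SLICE's actual implementation; the geometry and emissivity values are hypothetical.

```python
import math

# Illustrative onion-peeling Abel inversion: recover a shell-wise local
# emissivity profile f from line-integrated chord measurements, working
# inward from the outermost shell. Geometry and data are hypothetical.

def path(i, j, radii):
    """Length of chord i (at radius radii[i]) inside shell j,
    the annulus between radii[j] and radii[j+1]."""
    a = math.sqrt(radii[j + 1] ** 2 - radii[i] ** 2)
    b = math.sqrt(max(radii[j] ** 2 - radii[i] ** 2, 0.0))
    return 2.0 * (a - b)

def onion_peel(chords, radii):
    """chords[i] = integral along chord i; radii is ascending and has one
    extra outer-boundary entry. Returns shell emissivities by back-substitution."""
    n = len(chords)
    f = [0.0] * n
    for i in range(n - 1, -1, -1):        # outermost chord first
        s = chords[i]
        for j in range(i + 1, n):         # subtract already-known outer shells
            s -= f[j] * path(i, j, radii)
        f[i] = s / path(i, i, radii)
    return f

radii = [0.0, 1.0, 2.0, 3.0]              # 3 shells plus the outer boundary
true_f = [3.0, 2.0, 1.0]                  # hypothetical shell emissivities
chords = [sum(true_f[j] * path(i, j, radii) for j in range(i, 3))
          for i in range(3)]              # synthetic line-integrated data
rec = onion_peel(chords, radii)           # recovers true_f
```

Because chord i only crosses shells j >= i, the system is triangular and the back-substitution is exact for noise-free data; real line-integrated measurements need regularization on top of this.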

  2. Analysis of experimental data on relativistic nuclear collisions in the Lobachevsky space

    International Nuclear Information System (INIS)

    Baldin, A.A.; Baldina, Eh.G.; Kladnitskaya, E.N.; Rogachevskij, O.V.

    2004-01-01

    Relativistic nuclear collisions are considered in terms of the relative 4-velocity and rapidity space (the Lobachevsky space). The connection between geometric relations in the Lobachevsky space and measurable (experimentally determined) kinematic characteristics (transverse momentum, longitudinal rapidity, squared relative 4-velocity b_ik, etc.) is discussed. The experimental data obtained using the propane bubble chamber are analyzed on the basis of triangulation in the Lobachevsky space. General properties of distributions of relativistic invariants characterizing the geometric position of particles in the Lobachevsky space are discussed. The transition energy region is considered on the basis of a relativistic approach to experimental data on multiparticle processes. Possible applications of the obtained results to the planning of experimental research and the analysis of data on multiple particle production are discussed
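The invariant b_ik can be computed directly from measured momenta. The sketch below uses the standard definition b_ik = -(u_i - u_k)^2 with 4-velocity u = p/m and metric signature (+,-,-,-), together with its relation cosh(rho) = 1 + b_ik/2 to the rapidity distance rho in the Lobachevsky space; the particle momenta are hypothetical.

```python
import math

# Squared relative 4-velocity b_ik = -(u_i - u_k)^2 = 2*(u_i.u_k - 1) for
# 4-velocities u = p/m, and the corresponding rapidity distance in the
# Lobachevsky space, cosh(rho) = 1 + b_ik/2. Momenta are hypothetical.

def four_velocity(m, px, py, pz):
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e / m, px / m, py / m, pz / m)

def b_ik(u, v):
    # Minkowski inner product with signature (+,-,-,-).
    dot = u[0] * v[0] - u[1] * v[1] - u[2] * v[2] - u[3] * v[3]
    return 2.0 * (dot - 1.0)

m_p = 0.938                                  # proton mass, GeV
u1 = four_velocity(m_p, 0.0, 0.0, 1.0)       # proton with 1 GeV/c along the beam
u2 = four_velocity(m_p, 0.0, 0.0, 0.0)       # target proton at rest
rho = math.acosh(1.0 + b_ik(u1, u2) / 2.0)   # rapidity distance between them
```

For a particle at rest, u = (1, 0, 0, 0), so cosh(rho) reduces to the Lorentz factor of the moving particle, which is the usual relation between rapidity and gamma.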

  3. Contribution of computer science to the evaluation of experimental data

    International Nuclear Information System (INIS)

    Steuerwald, J.

    1978-11-01

    The GALE data acquisition system and the EDDAR data processing system, used at the Max-Planck-Institut fuer Plasmaphysik, serve to illustrate some of the various ways in which computer science plays a major role in developing the evaluation of experimental data. (orig.) [de]

  4. Experimental data for the slug two-phase flow characteristics in horizontal pipeline

    Directory of Open Access Journals (Sweden)

    Abdalellah O. Mohmmed

    2018-02-01

    The data presented in this article were the basis for the study reported in the research article entitled “Statistical assessment of experimental observation on the slug body length and slug translational velocity in a horizontal pipe” (Al-Kayiem et al., 2017) [1], which presents an experimental investigation of the slug velocity and slug body length for air-water two-phase flow in a horizontal pipe. Here, the experimental set-up and the major instruments used for obtaining the computed data are explained in detail. The data are presented in the form of tables and videos.
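One common way to obtain a slug translational velocity from such a set-up is to cross-correlate the signals of two sensors a known distance apart: the time lag that maximizes the correlation gives velocity = distance / lag. The article's actual processing may differ, and the sensor signals below are synthetic.

```python
# Hypothetical sketch of extracting a slug translational velocity by
# cross-correlating two sensor signals separated by a known distance.

def best_lag(a, b, max_lag):
    """Lag (in samples) that maximizes the cross-correlation of a and b."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(1, max_lag + 1), key=corr)

dt = 0.001            # sampling interval, s
distance = 0.5        # sensor spacing, m

upstream = [0.0] * 200
upstream[50:60] = [1.0] * 10            # slug front passes sensor 1
downstream = [0.0] * 200
downstream[75:85] = [1.0] * 10          # ...and sensor 2, 25 samples later

lag = best_lag(upstream, downstream, max_lag=50)
velocity = distance / (lag * dt)        # 0.5 m / 0.025 s = 20 m/s
```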

  5. Challenges in Data Collection and Analysis in Multi-National Experimentation

    Science.gov (United States)

    2007-06-01

    sampling of personnel when individual interviews would be labor-intensive and time-consuming. Ideally, surveys contribute to the cognitive aspect of...the experimental data for the data collection plan. In addition to gaining data concerning the cognitive aspect, surveys can be used when no other

  6. Assessment CANDU physics codes using experimental data - part 1: criticality measurement

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Choi, Hang Bok; Jeong, Chang Joon

    2001-08-01

    In order to assess the applicability of the MCNP-4B code to a heavy water moderated, light water cooled, pressure-tube type reactor, MCNP-4B physics calculations have been carried out for the Deuterium Critical Assembly (DCA), and the results were compared with experimental data. In this study, key safety parameters such as the multiplication factor, void coefficient, local power peaking factor and bundle power distribution in the scattered core are simulated. In order to use cross section data consistently for the fuels to be analyzed in the future, new MCNP libraries have been generated from ENDF/B-VI release 3. Generally, the MCNP-4B calculation results show good agreement with the experimental data of the DCA core. After benchmarking MCNP-4B against available experimental data, it will be used as the reference tool to benchmark design and analysis codes for advanced CANDU fuels

  7. Analysis of cerebral vessels dynamics using experimental data with missed segments

    Science.gov (United States)

    Pavlova, O. N.; Abdurashitov, A. S.; Ulanova, M. V.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.; Pavlov, A. N.

    2018-04-01

    Physiological signals often contain bad segments that occur due to artifacts, failures of the recording equipment or varying experimental conditions. The related experimental data need to be preprocessed to avoid such parts of the recordings. In the case of a few bad segments, they can simply be removed from the signal before its analysis. However, when many segments must be extracted, the internal structure of the analyzed physiological process may be destroyed, and it is unclear whether such a signal can be used in diagnostics-related studies. In this paper we address this problem for the case of cerebral vessel dynamics. We analyze simulated data in order to reveal general features of quantifying the scaling properties of complex signals with distinct correlation properties, and we show that the effects of data loss are significantly different for experimental data with long-range correlations and anti-correlations. We conclude that the cerebral vessel dynamics is significantly less sensitive to missing data fragments than signals with anti-correlated statistics.
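The effect of stitching out missing segments on scaling estimates can be illustrated with a standard order-1 detrended fluctuation analysis (DFA). The sketch below is our own toy example, not the authors' code: it removes randomly placed segments from an uncorrelated test signal and re-estimates the scaling exponent; repeating it with long-range correlated or anti-correlated signals (e.g. fractional noise) reproduces the differing sensitivity discussed in the abstract.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Order-1 DFA; returns the scaling exponent alpha (~0.5 for uncorrelated noise,
    > 0.5 for long-range correlations, < 0.5 for anti-correlations)."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                        # integrated profile
    n = len(y)
    if scales is None:
        scales = np.unique(np.geomspace(8, n // 8, 12).astype(int))
    fluct = []
    for s in scales:
        nseg = n // s
        segs = y[:nseg * s].reshape(nseg, s)
        t = np.arange(s)
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        fluct.append(np.sqrt(np.mean(np.square(res))))  # RMS fluctuation F(s)
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

def drop_segments(x, n_gaps, gap_len, rng):
    """Remove n_gaps randomly placed segments of gap_len samples and stitch the rest."""
    keep = np.ones(len(x), bool)
    for start in rng.integers(0, len(x) - gap_len, n_gaps):
        keep[start:start + gap_len] = False
    return x[keep]

rng = np.random.default_rng(0)
white = rng.standard_normal(2**14)          # uncorrelated test signal
gappy = drop_segments(white, 50, 64, rng)   # signal with bad segments removed
print(round(dfa_exponent(white), 2), round(dfa_exponent(gappy), 2))
```

For the uncorrelated signal the exponent is essentially unchanged by the gaps; the interesting deviations appear for correlated statistics.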

  8. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB of memory) can be used at low cost for experimental data processing. This report surveys important techniques for A/D and D/A conversion and for the display, storage and transfer of experimental data. Items to be considered in the software are also discussed. Practical programs written in BASIC and assembler are given as examples. We present some techniques for faster processing in BASIC and show that a system combining BASIC and assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and assembler. For calculation, FORTRAN has the best performance, comparable to or better than assembler even on a personal computer. (author)

  9. Experimental software for modeling and interpreting educational data analysis processes

    Directory of Open Access Journals (Sweden)

    Natalya V. Zorina

    2017-12-01

    Full Text Available Problems, tasks and processes of educational data mining are considered in this article. The objective is to create a fundamentally new information system for the university using the results of educational data analysis. One of the functions of such a system is the extraction of knowledge from the data accumulated during operation. The creation of a national system of this type is an iterative and time-consuming process requiring preliminary studies and incremental prototyping of modules. Since few systems have been developed with this methodology, a number of experiments were carried out in order to collect data, choose appropriate methods for the study and interpret the results. For the experiment, the authors used the data sources available in the information environment of their home university: semester performance data obtained from the information system of the training department of the Institute of IT, MTU MIREA; data obtained from the independent work of students; and data collected using specially designed Google forms. To automate the collection of information and the analysis of educational data, an experimental software package was created. As the development methodology for the experimental software complex, the methodologies of rational-empirical complexes (REX) and single-experimentation program technologies (TPEI) were adopted. The program implementation of the complex is described in detail, and conclusions are drawn about the availability of the data sources used and about the prospects for further development.

  10. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  11. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    International Nuclear Information System (INIS)

    Geng, S.M.; Tew, R.C.

    1994-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement

  12. Revisiting dibenzothiophene thermochemical data: Experimental and computational studies

    International Nuclear Information System (INIS)

    Freitas, Vera L.S.; Gomes, Jose R.B.; Ribeiro da Silva, Maria D.M.C.

    2009-01-01

    Thermochemical data of dibenzothiophene were studied in the present work by experimental techniques and computational calculations. The standard (p° = 0.1 MPa) molar enthalpy of formation in the gaseous phase, at T = 298.15 K, was determined from the enthalpies of combustion and sublimation, obtained by rotating-bomb calorimetry in oxygen and by Calvet microcalorimetry, respectively. This value was compared with estimates from G3(MP2)//B3LYP computations and with other results available in the literature.

  13. Development of advanced methods for analysis of experimental data in diffusion

    Science.gov (United States)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed, which require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating the differentiation operation on the data, i.e., for estimating the concentration gradient term, which is important in determining the diffusivity. This approach therefore has the potential to be less subjective, and in numerical simulations it shows increased accuracy in the estimated diffusion coefficients. We also present a regression approach for estimating linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix.
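For reference, the classical Boltzmann-Matano estimate that this work improves upon can be sketched as follows. This is a self-contained toy with a constant true diffusivity, so the recovered value can be checked against the input; all numbers (D, t, grid) are illustrative assumptions, and the `np.gradient` call is exactly the noise-sensitive differentiation step that the regularization approach replaces.

```python
import math
import numpy as np

D_true, t = 1.0e-14, 3600.0                     # m^2/s and s, assumed values
x = np.linspace(-5.0e-5, 5.0e-5, 2001)          # m, Matano plane at x = 0
# Analytic profile for constant D: C(x, t) = 0.5 * erfc(x / (2 sqrt(D t)))
C = np.array([0.5 * math.erfc(v) for v in x / (2.0 * math.sqrt(D_true * t))])

def matano_diffusivity(x, C, t, C_star):
    """Boltzmann-Matano estimate D(C*) = -(1/2t) (dx/dC) * integral_{C0}^{C*} x dC.
    Assumes a monotonically decreasing profile with the Matano plane at x = 0."""
    i = int(np.argmin(np.abs(C - C_star)))
    dCdx = np.gradient(C, x)[i]                 # concentration gradient at C*
    area = np.trapz(x[:i + 1], C[:i + 1])       # integral of x dC (C as abscissa)
    return -area / (2.0 * t * dCdx)

print(matano_diffusivity(x, C, t, 0.5))
```

On this noiseless profile the estimate reproduces D_true closely; adding measurement noise to C makes the gradient term erratic, which is the motivation for the regularized differentiation described above.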

  14. Experimental (Network) and Evaluated Nuclear Reaction Data at NDS

    International Nuclear Information System (INIS)

    Otsuka, N.; Semkova, V.; Simakov, S.P.; Zerkin, V.

    2011-01-01

    Dr Simakov of the Nuclear Data Services Unit in the Nuclear Data Section (NDS) gave a brief overview of the data compilation and evaluation activities in the nuclear data community: experimental nuclear reaction data (EXFOR, http://www-nds.iaea.org/exfor/) and evaluated nuclear reaction data (ENDF, http://www-nds.iaea.org/endf). The International Network of Nuclear Reaction Data Centres (NRDC), coordinated by the NDS, includes 14 centres in 8 countries (China, Hungary, India, Japan, Korea, Russia, Ukraine, USA) and 2 international organizations (NEA, IAEA). The first meeting of the four core centres (Brookhaven, Saclay, Obninsk, Vienna) was held in 1966, and EXFOR was adopted as an official data exchange format. In 2000, the IAEA implemented EXFOR as a relational multiform database; EXFOR is a trusted, growing and living database with 19100 experimental works (as of September 2011) and 141600 data tables. EXFOR provides a compilation control system for the selection of articles and the compilation of data, and the NRDC home page provides manuals, documents and codes. The nuclear data can be retrieved via the web retrieval system or distributed on a DVD on request. The EXFOR data play a critical role in the development of evaluated nuclear reaction data. There are several major general purpose libraries: ENDF (US), CENDL (China), JEFF (EU), JENDL (Japan) and RUSFOND (Russia). In addition, there are special libraries for particular applications: EAF (European Activation File), FENDL (Fusion Evaluated Nuclear Data Library for ITER neutronics), IBANDL (Ion Beam Analysis Nuclear Data Library for surface analysis of solids), IRDF, DXS (dosimetry, radiation damage and gas production data) and the Medical portal. Dr V. Zerkin of NDS demonstrated data retrieval from the EXFOR database and the ENDF library.

  15. Experimental (Network) and Evaluated Nuclear Reaction Data at NDS

    Energy Technology Data Exchange (ETDEWEB)

    Otsuka, N; Semkova, V; Simakov, S P; Zerkin, V [Nuclear Data Services Unit, Nuclear Data Section, IAEA, Vienna (Austria)

    2011-11-15

    Dr Simakov of the Nuclear Data Services Unit in the Nuclear Data Section (NDS) gave a brief overview of the data compilation and evaluation activities in the nuclear data community: experimental nuclear reaction data (EXFOR, http://www-nds.iaea.org/exfor/) and evaluated nuclear reaction data (ENDF, http://www-nds.iaea.org/endf). The International Network of Nuclear Reaction Data Centres (NRDC), coordinated by the NDS, includes 14 centres in 8 countries (China, Hungary, India, Japan, Korea, Russia, Ukraine, USA) and 2 international organizations (NEA, IAEA). The first meeting of the four core centres (Brookhaven, Saclay, Obninsk, Vienna) was held in 1966, and EXFOR was adopted as an official data exchange format. In 2000, the IAEA implemented EXFOR as a relational multiform database; EXFOR is a trusted, growing and living database with 19100 experimental works (as of September 2011) and 141600 data tables. EXFOR provides a compilation control system for the selection of articles and the compilation of data, and the NRDC home page provides manuals, documents and codes. The nuclear data can be retrieved via the web retrieval system or distributed on a DVD on request. The EXFOR data play a critical role in the development of evaluated nuclear reaction data. There are several major general purpose libraries: ENDF (US), CENDL (China), JEFF (EU), JENDL (Japan) and RUSFOND (Russia). In addition, there are special libraries for particular applications: EAF (European Activation File), FENDL (Fusion Evaluated Nuclear Data Library for ITER neutronics), IBANDL (Ion Beam Analysis Nuclear Data Library for surface analysis of solids), IRDF, DXS (dosimetry, radiation damage and gas production data) and the Medical portal. Dr V. Zerkin of NDS demonstrated data retrieval from the EXFOR database and the ENDF library.

  16. Prediction of sonic boom from experimental near-field overpressure data. Volume 2: Data base construction

    Science.gov (United States)

    Glatt, C. R.; Reiners, S. J.; Hague, D. S.

    1975-01-01

    A computerized method for storing, updating and augmenting experimentally determined overpressure signatures has been developed. A data base of pressure signatures for a shuttle type vehicle has been stored. The data base has been used for the prediction of sonic boom with the program described in Volume I.

  17. Experimental data and boundary conditions for a Double - Skin Facade building in transparent insulation mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    was carried out in a full scale test facility ‘The Cube’, in order to compile three sets of high quality experimental data for validation purposes. The data sets are available for preheating mode, external air curtain mode and transparent insulation mode. The objective of this article is to provide the reader...... with all information about the experimental data and measurements, necessary to complete an independent empirical validation of any simulation tool. The article includes detailed information about the experimental apparatus, experimental principles and experimental full-scale test facility ‘The Cube...

  18. LHC experimental data from today's data challenges to the promise of tomorrow

    CERN Multimedia

    CERN. Geneva; Panzer-Steindel, Bernd; Rademakers, Fons

    2003-01-01

    The LHC experiments constitute a challenge in several disciplines of both High Energy Physics and Information Technology. This is definitely the case for data acquisition, processing and analysis. This challenge has been addressed by many years of R&D activity, during which prototypes of components or subsystems have been developed. This prototyping phase is now culminating in an evaluation of the prototypes in large-scale tests (appropriately called "Data Challenges"). In a period of restricted funding, the expectation is to realize the LHC data acquisition and computing infrastructures by making extensive use of standard and commodity components. The lectures will start with a brief overview of the requirements of the LHC experiments in terms of data acquisition and computing. The different tasks of the experimental data chain will also be explained: data acquisition, selection, storage, processing and analysis. The major trends of the computing and networking industries will then be indicated with pa...

  19. Development of experimental data bank on heat transfer crisis under stationary conditions

    International Nuclear Information System (INIS)

    Koshtyalek, Ya.

    1982-01-01

    The development of an experimental data bank on heat transfer crisis under stationary conditions is discussed. The work is being carried out under the auspices of the CMEA in compliance with the resolution of the CMEA countries' experts meeting held in Moscow in January 1981. The data bank is to be formed as a sequential set of available experimental data on regimes with heat-transfer crisis, recorded on a standard magnetic tape for the ES or IBM computer families. All operations with the bank are to be performed via the computer. Recommendations are given on the record structure to be used, and an example code is suggested for a user to extract data from the bank according to various criteria. At present, parameters of more than 12000 experimental regimes have been prepared for the bank and some 3000 more are being processed

  20. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  1. 40 CFR 158.2172 - Experimental use permit microbial pesticides residue data requirements table.

    Science.gov (United States)

    2010-07-01

    § 158.2172 Experimental use permit microbial pesticides residue data requirements table. (a) General. Sections 158.100 through 158.130 describe how to use this table to determine the residue chemistry data...

  2. Status of experimental data of proton-induced reactions for intermediate-energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Yukinobu; Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Yamano, Naoki; Fukahori, Tokio

    1998-11-01

    The present status of experimental data of proton-induced reactions is reviewed, with particular attention to total reaction cross section, elastic and inelastic scattering cross section, double-differential particle production cross section, isotope production cross section, and activation cross section. (author)

  3. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    Full Text Available The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial, and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.

  4. The importance of the accuracy of the experimental data for the prediction of solubility

    Directory of Open Access Journals (Sweden)

    SLAVICA ERIĆ

    2010-04-01

    Full Text Available Aqueous solubility is an important factor influencing several aspects of the pharmacokinetic profile of a drug. Numerous publications present different methodologies for the development of reliable computational models for the prediction of solubility from structure. The quality of such models can be significantly affected by the accuracy of the employed experimental solubility data. In this work, the importance of the accuracy of the experimental solubility data used for model training was investigated. Three data sets were used as training sets: data set 1, containing solubility data collected from various literature sources using a few criteria (n = 319); data set 2, created by substituting 28 values from data set 1 with uniformly determined experimental data from one laboratory (n = 319); and data set 3, created by adding to data set 2 a further 56 compounds for which the solubility was also determined under uniform conditions in the same laboratory (n = 375). The selection of the most significant descriptors was performed by the heuristic method, using one-parameter and multi-parameter analysis. The correlations between the most significant descriptors and solubility were established using multi-linear regression analysis (MLR) for all three investigated data sets. Notable differences were observed between the equations corresponding to different data sets, suggesting that models updated with new experimental data need to be additionally optimized. It was shown that the inclusion of uniform experimental data consistently leads to an improvement in the correlation coefficients. These findings contribute to an emerging consensus that improving the reliability of solubility prediction requires the inclusion in the data set of many diverse compounds for which solubility was measured under standardized conditions.
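The MLR step described above amounts to ordinary least squares on a descriptor matrix. A minimal sketch with synthetic descriptors and an assumed linear relationship, purely to illustrate the fitting and the correlation measure; it is not the authors' data or descriptor set:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "descriptor" matrix (3 hypothetical molecular descriptors for 100
# compounds) and a synthetic logS with an assumed linear relationship plus noise.
n = 100
X = rng.normal(size=(n, 3))
true_coef = np.array([-1.1, 0.4, -0.3])
logS = 0.2 + X @ true_coef + rng.normal(scale=0.1, size=n)

A = np.column_stack([np.ones(n), X])             # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, logS, rcond=None)  # ordinary least squares

pred = A @ coef
r2 = 1.0 - np.sum((logS - pred)**2) / np.sum((logS - logS.mean())**2)
print(coef.round(2), round(r2, 3))
```

Replacing part of `logS` with values measured under different conditions (i.e., adding systematic error) visibly degrades r2, which is the effect the study quantifies.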

  5. STRAIN-CONTROLLED BIAXIAL TENSION OF NATURAL RUBBER: NEW EXPERIMENTAL DATA

    KAUST Repository

    Pancheri, Francesco Q.

    2014-03-01

    We present a new experimental method and provide data showing the response of 40A natural rubber in uniaxial, pure shear, and biaxial tension. Real-time biaxial strain control allows for independent and automatic variation of the velocity of extension and retraction of each actuator to maintain the preselected deformation rate within the gage area of the specimen. We also focus on the Valanis-Landel hypothesis, which is used to verify and validate the consistency of the data. We use a three-term Ogden model to derive stress-stretch relations to validate the experimental data. The material model parameters are determined using the primary loading path in uniaxial and equibiaxial tension. Excellent agreement is found when the model is used to predict the response in biaxial tension for different maximum in-plane stretches. The application of the Valanis-Landel hypothesis also results in excellent agreement with the theoretical prediction.

  6. Methods of experimental settlement of contradicting data in evaluated nuclear data libraries

    Directory of Open Access Journals (Sweden)

    V. A. Libman

    2016-12-01

    Full Text Available The latest versions of the evaluated nuclear data libraries (ENDLs) contain contradictory data on neutron cross sections. To resolve these contradictions, we propose a method of experimental verification based on the use of filtered neutron beams and subsequent measurements on appropriate samples. The basic idea of the method is to modify a suitable filtered neutron beam so that the differences between the neutron cross sections given by different ENDLs become measurable. The method is demonstrated with the example of cerium, whose total neutron cross section differs significantly among the latest versions of four ENDLs.
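Whether a cross-section discrepancy between two libraries is measurable in a transmission experiment can be judged from the expected beam transmission T = exp(-n·σ), with n the areal density of the sample. A back-of-the-envelope sketch; all numbers are invented for illustration and are not the cerium values:

```python
import math

# Areal density of the sample in atoms/barn and two hypothetical evaluated
# total cross sections in barn; all values are illustrative assumptions.
n_areal = 0.5
sigma_a, sigma_b = 6.0, 7.0          # candidate values from two ENDLs

T_a = math.exp(-n_areal * sigma_a)   # transmission predicted by library A
T_b = math.exp(-n_areal * sigma_b)
rel_diff = abs(T_a - T_b) / T_a * 100.0
print(round(T_a, 4), round(T_b, 4), round(rel_diff, 1))
```

With a sufficiently thick sample, even a ~15% cross-section discrepancy translates into a transmission difference of tens of percent, i.e., well above typical counting uncertainties.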

  7. 40 CFR 158.2080 - Experimental use permit data requirements-biochemical pesticides.

    Science.gov (United States)

    2010-07-01

    § 158.2080 Experimental use permit data requirements—biochemical pesticides. (a) Sections 158.2081...

  8. An Experimental Metagenome Data Management and AnalysisSystem

    Energy Technology Data Exchange (ETDEWEB)

    Markowitz, Victor M.; Korzeniewski, Frank; Palaniappan, Krishna; Szeto, Ernest; Ivanova, Natalia N.; Kyrpides, Nikos C.; Hugenholtz, Philip

    2006-03-01

    The application of shotgun sequencing to environmental samples has revealed a new universe of microbial community genomes (metagenomes) involving previously uncultured organisms. Metagenome analysis, which is expected to provide a comprehensive picture of the gene functions and metabolic capacity of a microbial community, needs to be conducted in the context of a comprehensive data management and analysis system. We present in this paper IMG/M, an experimental metagenome data management and analysis system based on the Integrated Microbial Genomes (IMG) system. IMG/M provides tools and viewers for analyzing both metagenomes and isolate genomes, individually or in a comparative context.

  9. Multi-window dialogue system of data processing for experimental setup VASSILISSA in PAW environment

    International Nuclear Information System (INIS)

    Andreev, A.N.; Vakatov, D.V.; Veselski, M.; Eremin, A.V.; Ivanov, V.V.; Khasanov, A.M.

    1992-01-01

    A multi-window dialogue system for processing data acquired from the experimental setup VASSILISSA is presented. The system provides a user-friendly interface for experimental data conversion, selection and preparation for graphical analysis with PAW. 7 refs.; 5 figs.; 1 tab

  10. Control and data acquisition systems for the Fermi Elettra experimental stations

    International Nuclear Information System (INIS)

    Borghes, R.; Chenda, V.; Curri, A.; Gaio, G.; Kourousias, G.; Lonza, M.; Passos, G.; Passuello, R.; Pivetta, L.; Prica, M.; Pugliese, R.; Strangolino, G.

    2012-01-01

    FERMI-Elettra is a single-pass Free Electron Laser (FEL) user facility covering the wavelength range from 100 nm to 4 nm. The facility is located in Trieste, Italy, near the third-generation synchrotron light source Elettra. Three experimental stations, dedicated to different scientific areas, were installed in 2011: Low Density Matter (LDM), Elastic and Inelastic Scattering (EIS) and Diffraction and Projection Imaging (DiProI). The experiment control and data acquisition system is the natural extension of the machine control system. It integrates a shot-by-shot data acquisition framework with a centralized data storage and analysis system. Low-level applications for data acquisition and online processing have been developed using the Tango framework on Linux platforms. High-level experimental applications can be developed on both Linux and Windows platforms using C/C++, Python, LabView, IDL or Matlab. The Elettra scientific computing portal allows remote access to the experiment and to the data storage system. (authors)

  11. 40 CFR 158.2170 - Experimental use permit data requirements-microbial pesticides.

    Science.gov (United States)

    2010-07-01

    § 158.2170 Experimental use permit data requirements—microbial pesticides. (a) For all microbial pesticides. (1) The...

  12. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
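An area-metric convergence criterion of this kind can be sketched as follows: grow the data set batch by batch and stop once the area between successive empirical CDFs falls below a tolerance. Everything here (the population parameters, batch size and tolerance) is an assumption for illustration, not the paper's actual procedure or data:

```python
import numpy as np

def area_metric(a, b):
    """Area between the empirical CDFs of two samples (integral of |F_a - F_b| dx)."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.trapz(np.abs(Fa - Fb), grid)

rng = np.random.default_rng(1)
population = rng.normal(500.0, 50.0, 100_000)   # stand-in for repeatable experiments

batch, tol = 10, 2.0                            # batch size and tolerance: assumptions
data = population[:batch]
while True:
    bigger = population[:len(data) + batch]     # "acquire" one more batch of data
    if area_metric(data, bigger) < tol:         # model no longer changes appreciably
        break
    data = bigger
print(len(data))                                # number of data judged sufficient
```

The same loop works with any statistical model in place of the raw ECDF (e.g. a KDE fit), which is closer to the SSM/KDE variants the paper compares.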

  13. Experimental determination of (p, ρ, T) data for binary mixtures of methane and helium

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Segovia, J.J.; Chamorro, C.R.

    2016-01-01

    Highlights: • Accurate density data for two binary mixtures of methane and helium are presented. • Experimental data are compared with the densities calculated from different EOS. • Deviations from GERG-2008 exceeded 3% for some points. • Deviations from AGA8-DC92 did not exceed 0.3% at any experimental point. • The relative deviations are clearly higher for GERG-2008 than for AGA8-DC92. - Abstract: The basis for the development and evaluation of equations of state for mixtures is experimental data for several thermodynamic properties. The quality and the availability of experimental data limit the achievable accuracy of the equation. Referring to the fundamentals of the GERG-2008 wide-range equation of state, no suitable data were available for many mixtures containing secondary natural gas components. This work provides accurate experimental (p, ρ, T) data for two binary mixtures of methane with helium (0.95 (amount-of-substance fraction) CH_4 + 0.05 He and 0.90 CH_4 + 0.10 He). Density measurements were performed at temperatures between (250 and 400) K and pressures up to 20 MPa using a single-sinker densimeter with magnetic suspension coupling. Experimental data were compared with the corresponding densities calculated from the GERG-2008 and the AGA8-DC92 equations of state. Deviations from GERG-2008 were found within a 2% band for the (0.95 CH_4 + 0.05 He) mixture but exceeded the 3% limit for the (0.90 CH_4 + 0.10 He) mixture. The highest deviations were observed at T = 250 K and pressures between (17 and 19) MPa. Values calculated from AGA8-DC92, however, deviated from the experimental data by only 0.1% at high pressures and exceeded the 0.2% limit only at temperatures of 300 K and above, for the (0.90 CH_4 + 0.10 He) mixture.
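The comparison measure used above is the relative deviation of the experimental density from the equation-of-state value at each state point. A minimal sketch; the state-point numbers below are invented solely to illustrate the measure and are not the measured (p, ρ, T) data:

```python
import numpy as np

# Hypothetical experimental densities and the corresponding values computed
# from two equations of state (kg/m^3); all numbers are illustrative assumptions.
rho_exp  = np.array([16.02, 40.31, 80.75, 121.40])
rho_gerg = np.array([16.05, 40.55, 82.10, 124.90])   # assumed "GERG-2008" values
rho_aga8 = np.array([16.03, 40.33, 80.82, 121.50])   # assumed "AGA8-DC92" values

def rel_dev_percent(measured, calculated):
    """Relative deviation 100 * (rho_exp - rho_calc) / rho_calc at each state point."""
    return 100.0 * (measured - calculated) / calculated

for name, calc in (("GERG-2008", rho_gerg), ("AGA8-DC92", rho_aga8)):
    dev = rel_dev_percent(rho_exp, calc)
    print(name, float(np.abs(dev).max().round(3)))
```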

  14. Experimental vapor-liquid equilibria data for binary mixtures of xylene isomers

    Directory of Open Access Journals (Sweden)

    W.L. Rodrigues

    2005-09-01

    Full Text Available Separation of aromatic C8 compounds by distillation is a difficult task due to the low relative volatilities of the compounds and to the high degree of purity required of the final commercial products. For rigorous simulation and optimization of this separation, a model capable of describing vapor-liquid equilibria accurately is necessary. Nevertheless, experimental data are not available for all binaries at atmospheric pressure. Vapor-liquid equilibria data for binary mixtures were obtained isobarically with a modified Fischer cell at 100.65 kPa. The vapor and liquid phase compositions were analyzed with a gas chromatograph. The methodology was initially tested with cyclohexane + n-heptane data; the results obtained are similar to other data in the literature. Data for xylene binary mixtures were then obtained and, after testing, were considered to be thermodynamically consistent. Experimental data were regressed with Aspen Plus® 10.1, and binary interaction parameters are reported for the most frequently used activity coefficient models and for the classic mixing rules of two cubic equations of state.

  15. Mathematical processing of experimental data on neutron yield from separate fission fragments

    International Nuclear Information System (INIS)

    Basova, B.G.; Rabinovich, A.D.; Ryazanov, D.K.

    1975-01-01

    An algorithm is described for processing multi-dimensional experiments measuring prompt neutron emission from separate fission fragments. The processing correctly takes into account a number of experimental corrections: random coincidence background, the neutron spectrum, neutron detector efficiency, and the instrument's angular resolution. On the basis of the described algorithm, a program was implemented for the BESM-4 computer and applied to experimental data on the spontaneous fission of 252Cf

  16. Statistical analysis of correlated experimental data and neutron cross section evaluation

    International Nuclear Information System (INIS)

    Badikov, S.A.

    1998-01-01

    A technique for evaluating neutron cross sections on the basis of statistical analysis of correlated experimental data is presented. The most important stages of the evaluation, from compilation of the correlation matrix of measurement uncertainties to representation of the results in the ENDF-6 format, are described in detail. Special attention is paid to the physically motivated restriction (positive definiteness) on the covariance matrix of the parameter uncertainties generated by the least-squares fit. The requirements on the source experimental data that ensure this restriction is satisfied are formulated; in particular, the correlation matrices of the measurement uncertainties must themselves be positive definite. Variants of modelling positive definite correlation matrices of measurement uncertainties, for situations where their direct calculation from the experimental information is impossible, are discussed. The technique described is used to create a new generation of evaluated dosimetric reaction cross sections for the first version of the Russian dosimetric file (including nontrivial covariance information)
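    The positive-definiteness requirement on a correlation matrix can be checked numerically, for example by attempting a Cholesky factorization, which succeeds exactly for positive definite matrices. A sketch (not from the paper; the matrices are illustrative):

```python
import numpy as np

def is_positive_definite(corr):
    """A symmetric matrix is positive definite iff Cholesky succeeds."""
    try:
        np.linalg.cholesky(corr)
        return True
    except np.linalg.LinAlgError:
        return False

# A realizable correlation matrix for three correlated measurements...
good = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])

# ...and an inconsistent one (the stated correlations cannot coexist).
bad = np.array([[1.0,  0.9, -0.9],
                [0.9,  1.0,  0.9],
                [-0.9, 0.9,  1.0]])

print(is_positive_definite(good), is_positive_definite(bad))
```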

  17. Experimental and evaluated data on the discrete level excitation function of the 238U(n,n') reaction

    International Nuclear Information System (INIS)

    Simakov, S.P.

    1991-01-01

    Experimental data on the 238 U excitation function are compiled and analyzed. The experimental data are compared with the evaluated data from the BNAB, ENDF/B-IV and ENDL-78 evaluated data libraries. It is shown that the BNAB evaluated data are in good agreement with the existing experimental data, including new results from recent experiments. (author). 26 refs, 2 figs, 2 tabs

  18. STRAIN-CONTROLLED BIAXIAL TENSION OF NATURAL RUBBER: NEW EXPERIMENTAL DATA

    KAUST Repository

    Pancheri, Francesco Q.; Dorfmann, Luis

    2014-01-01

    model to derive stress-stretch relations to validate the experimental data. The material model parameters are determined using the primary loading path in uniaxial and equibiaxial tension. Excellent agreement is found when the model is used to predict

  19. A Small Guide to Generating Covariances of Experimental Data

    International Nuclear Information System (INIS)

    Mannhart, Wolf

    2011-05-01

    A complete description of the uncertainties of an experiment can only be realized by a detailed list of all the uncertainty components, their values and a specification of the existing correlations between the data. Based on such information the covariance matrix can be generated, which is necessary for any further proceeding with the experimental data. It is not necessary, and not recommended, that an experimenter evaluates this covariance matrix. The reason is that an incorrectly evaluated final covariance matrix can never be corrected if the details are not given. (Such obviously wrong covariance matrices have recently occasionally been found in the literature.) Hence quotation of a covariance matrix is an additional step which should not occur without also quoting a detailed list of the various uncertainty components and their correlations. It must be hoped that editors of journals will understand these necessary requirements. The generalized least squares procedure shown permits an easy interchange of data D0 with parameter estimates P. This means new data can easily be combined with an earlier evaluation. However, this is only valid as long as the new data have no correlation with any of the older data of the prior evaluation. Otherwise the old data which are correlated with the new data have to be extracted from the evaluation and then, together with the new data and taking account of the correlation, added again to the reduced evaluation. In most cases this step cannot be performed and the evaluation has to be completely redone. A partial way out is given if the evaluation is performed step by step and the results of each step are stored. Then the evaluation need only be repeated from the step which contains correlated data for the first time, while all earlier steps remain unchanged. Finally it should be noted that the addition of a small set of new data to a prior evaluation consisting of a large number of
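    The construction described above, combining per-point uncertainty components with their correlations into a covariance matrix, can be sketched for the simplest case: one uncorrelated (statistical) component plus one fully correlated component such as a common normalization. The component values are illustrative, not taken from any real data set:

```python
import numpy as np

# Uncorrelated components contribute only to the diagonal; a fully
# correlated component (e.g. a shared normalization uncertainty) fills
# the off-diagonal terms via an outer product.

def covariance_from_components(stat_unc, corr_unc):
    """stat_unc: per-point uncorrelated (statistical) uncertainties.
    corr_unc: per-point fully correlated (systematic) uncertainties."""
    stat = np.asarray(stat_unc)
    corr = np.asarray(corr_unc)
    return np.diag(stat**2) + np.outer(corr, corr)

cov = covariance_from_components([0.02, 0.03, 0.025], [0.01, 0.01, 0.01])
print(np.round(cov, 6))
```

    Publishing the component list (the arguments) rather than only the resulting matrix is precisely what allows an erroneous covariance matrix to be corrected later.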

  20. Gastric bypass: why Roux-en-Y? A review of experimental data.

    Science.gov (United States)

    Collins, Brendan J; Miyashita, Tomoharu; Schweitzer, Michael; Magnuson, Thomas; Harmon, John W

    2007-10-01

    To highlight the clinical and experimental rationales that support why the Roux-en-Y limb is an important surgical principle for bariatric gastric bypass. We reviewed PubMed citations for open Roux-en-Y gastric bypass (RYGBP), laparoscopic RYGBP, loop gastric bypass, chronic alkaline reflux gastritis, and duodenoesophageal reflux. We reviewed clinical and experimental articles. Clinical articles included prospective, retrospective, and case series of patients undergoing RYGBP, laparoscopic RYGBP, or loop gastric bypass. Experimental articles that were reviewed included in vivo and in vitro models of chronic duodenoesophageal reflux and its effect on carcinogenesis. No formal data extraction was performed. We reviewed published operative times, lengths of stay, and anastomotic leak rates for laparoscopic RYGBP and loop gastric bypass. For in vivo and in vitro experimental models of duodenoesophageal reflux, we reviewed the kinetics and potential molecular mechanisms of carcinogenesis. Recent data suggest that laparoscopic loop gastric bypass, performed without the creation of a Roux-en-Y gastroenterostomy, is a faster surgical technique that confers similarly robust weight loss compared with RYGBP or laparoscopic RYGBP. In the absence of a Roux limb, the long-term effects of chronic alkaline reflux are unknown. Animal models and in vitro analyses of chronic alkaline reflux suggest a carcinogenic effect.

  1. Experimental data on PCI and PCMI within the IFPE data base

    International Nuclear Information System (INIS)

    Killeen, J.C.; Sartori, E.; Turnbull, J.A.

    2005-01-01

    Following the conclusions reached at the end of the FUMEX-I code comparison exercise, the International Fuel Performance Experimental Database (IFPE) gave priority to collecting and assembling data sets addressing: thermal performance, fission gas release and pellet-clad mechanical interaction (PCMI). The data available that address the last topic are the subject of the current paper. The data on mechanical interaction in fuel rods fall into three broad categories: - Fuel rod diameter changes caused by periods spent at higher than normal power. - The results of power ramp testing to define a failure threshold. - Single-effects studies to measure changes in gaseous porosity causing fuel swelling during controlled test conditions. In the first category, the fuel remained unfailed at the end of the test and the resulting permanent clad strain was due to PCMI caused by thermal expansion of the pellet and gaseous fuel swelling. Some excellent data in this category come from the last two Risø Fission Gas Release projects. The second category, namely failure by pellet-clad interaction (PCI) and stress corrosion cracking (SCC), involves the simultaneous imposition of stress and the availability of corrosive fission products. A comprehensive list of tests carried out in the Swedish Studsvik reactor is included in the database. The third category is a recent acquisition to the database and comprises data on fuel swelling obtained from ramp tests on AGR fuel carried out in the Halden BWR. This data set contains a wealth of well-qualified data which are invaluable for the development and validation of fuel swelling models. (authors)

  2. The new STRESA tool for preservation of thermalhydraulic experimental data produced in the European Commission

    International Nuclear Information System (INIS)

    Pla, Patricia; Pascal, Ghislain; Tanarro, Jorge; Annunziato, Alessandro

    2015-01-01

    Highlights: • ITF and severe accident data are of high importance for validating thermal hydraulic codes for NPPs. • LOBI, FARO, KROTOS and STORM produced a large amount of TH and SA experimental data. • The JRC facilities' data were stored in the STRESA database developed by JRC. • The paper presents the new JRC STRESA database developed by JRC in 2014–2015. • The long-term importance of well-maintained ITF databases (like STRESA) is demonstrated. - Abstract: The experimental data recorded in Integral Effect Test Facilities (ITFs) are traditionally used to validate best estimate (BE) system codes and to investigate the behaviour of nuclear power plants (NPPs) under accident scenarios. In the same way, facilities dedicated to specific thermal-hydraulic (TH) severe accident (SA) phenomena are used for the development and improvement of specific analytical models and codes used in SA analysis for light water reactors (LWRs). The extent to which the existing reactor safety experimental databases are preserved is well known and has frequently been debated and questioned in the nuclear community. The Joint Research Centre (JRC) of the European Commission (EC) has been deeply involved in several projects for experimental data production and preservation. In this context the STRESA (Storage of Thermal REactor Safety Analysis Data) web-based informatics platform was developed by JRC-Ispra in the year 2000. At present the JRC STRESA database is hosted and maintained by JRC-Petten. The Nuclear Reactor Safety Assessment Unit (NRSA) of JRC-Petten is engaged in the administration of a new STRESA tool that secures EU storage for SA experimental data and calculations. The development of this new STRESA tool was completed by early 2015 and published on 25/06/2015 at the URL http://stresa.jrc.ec.europa.eu/. The target was to keep the main features of the original STRESA structure but using the new informatics technologies that are nowadays

  3. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models is dependent upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.

  4. Existing experimental criticality data applicable to nuclear-fuel-transportation systems

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1983-02-01

    Analytical techniques are generally relied upon in making criticality evaluations involving nuclear material outside reactors. For these evaluations to be accepted the calculations must be validated by comparison with experimental data for a known set of conditions having physical and neutronic characteristics similar to those conditions being evaluated analytically. The purpose of this report is to identify those existing experimental data that are suitable for use in verifying criticality calculations on nuclear fuel transportation systems. In addition, near term needs for additional data in this area are identified. Of the considerable amount of criticality data currently existing, that are applicable to non-reactor systems, those particularly suitable for use in support of nuclear material transportation systems have been identified and catalogued into the following groups: (1) critical assemblies of fuel rods in water; (2) critical assemblies of fuel rods in water containing soluble neutron absorbers; (3) critical assemblies containing solid neutron absorber; (4) critical assemblies of fuel rods in water with heavy metal reflectors; and (5) critical assemblies of fuel rods in water with irregular features. A listing of the current near term needs for additional data in each of the groups has been developed for future use in planning criticality research in support of nuclear fuel transportation systems. The criticality experiments needed to provide these data are briefly described and identified according to priority and relative cost of performing the experiments

  5. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Full Text Available Sitting-acquired deep tissue injuries (SADTI) are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI a new approach is under development: a musculoskeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for the different chair postures well. The correlation coefficients between experiment and model for the seat angle, backrest angle and footrest height were 0.93, 0.96, and 0.95, respectively. The study shows good agreement between experimental data and model predictions of the forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.
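    A Pearson correlation coefficient like the 0.93–0.96 values quoted above can be computed directly from paired measured and predicted values. The force values below are hypothetical, not the study's data:

```python
import numpy as np

# Pearson correlation between measured and model-predicted reaction
# forces: covariance of the centred series divided by the product of
# their norms.

def pearson_r(measured, predicted):
    x, y = np.asarray(measured, float), np.asarray(predicted, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

forces_measured = [410.0, 395.0, 380.0, 360.0, 345.0]   # N, hypothetical
forces_predicted = [405.0, 400.0, 375.0, 365.0, 340.0]  # N, hypothetical
print(round(pearson_r(forces_measured, forces_predicted), 3))
```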

  6. Collective states in 230Th: experimental data

    Directory of Open Access Journals (Sweden)

    A. I. Levon

    2009-12-01

    Full Text Available The excitation spectra in the deformed nucleus 230Th were studied by means of the (p,t) reaction, using the Q3D spectrograph facility at the Munich Tandem accelerator. The angular distributions of tritons were measured for about 200 excitations seen in the triton spectra up to 3.3 MeV. Firm 0+ assignments are made for 16 excited states by comparison of the experimental angular distributions with those calculated using the CHUCK3 code, and relatively firm assignments for 4 states. Assignments up to spin 6+ are made for other states. Analysis of the obtained data will be presented in a forthcoming paper.

  7. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    International Nuclear Information System (INIS)

    Ould-Dada, Z.; Carini, F.; Eged, K.; Kis, Z.; Linkov, I.; Mitchell, N.G.; Mourlon, C.; Robles, B.; Sweeck, L.; Venter, A.

    2006-01-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models of various structures and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements of the transfer of 134Cs and 85Sr via leaf-to-fruit and soil-to-fruit pathways in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for 134Cs; the differences for 85Sr are lower. The performance of the models against the experimental data is discussed together with the lessons learned from this exercise

  8. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    Science.gov (United States)

    2017-03-02

    AFRL-AFOSR-UK-TR-2017-0020, Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data. Final report by Philip Walther, Universität Wien, under grant FA9550-16-1-0004 (program element 61102F); sponsoring agency: EOARD, Unit 4515, APO AE 09421-4515.

  9. Comparison of ATHENA/RELAP results against ice experimental data

    CERN Document Server

    Moore-Richard, L

    2002-01-01

    In order to demonstrate the adequacy of the International Thermonuclear Experimental Reactor design from a safety standpoint, as well as to investigate the behavior of two-phase flow phenomena during an ingress-of-coolant event, an integrated ICE test facility was constructed in Japan. The data generated from the ICE facility offer a valuable means to validate computer codes such as ATHENA/RELAP5, which is one of the codes used at the Idaho National Engineering and Environmental Laboratory (INEEL) to evaluate the safety of various fusion reactor concepts. In this paper we compare numerical results generated by the ATHENA code with corresponding test data from the ICE facility. Overall we found good agreement between the test data and the predicted results.

  10. Analysis of experimental data sets for local scour depth around ...

    African Journals Online (AJOL)

    The performance of soft computing techniques to analyse and interpret the experimental data of local scour depth around bridge abutment, measured at different laboratory conditions and environment, is presented. The scour around bridge piers and abutments is, in the majority of cases, the main reason for bridge failures.

  11. Summary of climatic data for the Bonanza Creek Experimental Forest, interior Alaska.

    Science.gov (United States)

    Richard J. Barney; Erwin R. Berglund

    1973-01-01

    A summary of climatic data during the 1968-71 growing seasons is presented for the subarctic Bonanza Creek Experimental Forest located near Fairbanks, Alaska. Data were obtained from three weather station sites at elevations of 1,650, 1,150, and 550 feet from May until September each year. Data are for relative humidity, rainfall, and maximum, minimum, and mean...

  12. Stereochemical analysis of (+)-limonene using theoretical and experimental NMR and chiroptical data

    Science.gov (United States)

    Reinscheid, F.; Reinscheid, U. M.

    2016-02-01

    Using limonene as a test molecule, the successes and limitations of three chiroptical methods (optical rotatory dispersion (ORD), electronic and vibrational circular dichroism, ECD and VCD) could be demonstrated. At quite low levels of theory (mpw1pw91/cc-pvdz, IEFPCM (integral equation formalism polarizable continuum model)), the experimental ORD values differ by less than 10 units from the calculated values. Modelling in the condensed phase still represents a challenge, so experimental NMR data were used to test for aggregation and solvent-solute interactions. After establishing a reasonable structural model, only the prediction of the ECD spectra showed a decisive dependence on the basis set: only augmented (in the case of Dunning's basis sets) or diffuse (in the case of Pople's basis sets) basis sets predicted the position and shape of the ECD bands correctly. Based on these results we propose a procedure to assign the absolute configuration (AC) of an unknown compound using the comparison between experimental and calculated chiroptical data.

  13. 78 FR 18576 - Agency Information Collection Activities; Comment Request; Experimental Sites Data Collection...

    Science.gov (United States)

    2013-03-27

    ... and provide the requested data in the desired format. ED is soliciting comments on the proposed... specific information/performance data for analysis of the experiments. This effort will assist the...; Comment Request; Experimental Sites Data Collection Instrument AGENCY: Department of Education (ED...

  14. BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Jones, Peter L

    2012-04-15

    Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.

  15. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with sparse experimental time course data, and 3) handle the experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of sibling and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  16. Code REX to fit experimental data to exponential functions and graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    The REX code, written in Fortran IV, fits a set of experimental data to different kinds of functions: a straight line (Y = A + BX) and various exponential types (Y = A·B^X; Y = A·X^B; Y = A·exp(BX)), using the least-squares criterion. The fitting can be done directly for one selected function or for the four simultaneously, which allows choosing the function that best fits the data, since the statistics of all the fits are presented. Further, the code plots the fitted function in the appropriate coordinate axis system. An additional option also allows graphic plotting of the experimental data used for the fitting. All the data necessary to execute this code are requested from the operator interactively through a screen-operator dialogue, with values entered through the keyboard. This code can be executed on any computer provided with a graphics screen and keyboard terminal, with an X-Y plotter serially connected to the graphics terminal. (Author) 5 refs
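    Least-squares fitting of the exponential forms handled by codes like REX is commonly done by log-linearization, since each form becomes a straight line after a log transform. A sketch of the method (in Python here, not the original Fortran IV; the data are synthetic):

```python
import math

# Log transforms reduce the exponential forms to straight lines:
#   Y = A*B**X    ->  ln Y = ln A + X * ln B
#   Y = A*X**B    ->  ln Y = ln A + B * ln X
#   Y = A*exp(BX) ->  ln Y = ln A + B * X

def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def fit_exp(x, y):
    """Fit Y = A*exp(B*X) by regressing ln Y on X."""
    a, b = linear_fit(x, [math.log(v) for v in y])
    return math.exp(a), b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * t) for t in xs]   # exact data with A=2, B=0.5
A, B = fit_exp(xs, ys)
print(round(A, 6), round(B, 6))
```

    Note that fitting in log space implicitly reweights the data points; for noisy data a nonlinear least-squares refinement starting from these estimates is the usual follow-up.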

  17. Within-subject mediation analysis for experimental data in cognitive psychology and neuroscience.

    Science.gov (United States)

    Vuorre, Matti; Bolger, Niall

    2017-12-15

    Statistical mediation allows researchers to investigate potential causal effects of experimental manipulations through intervening variables. It is a powerful tool for assessing the presence and strength of postulated causal mechanisms. Although mediation is used in certain areas of psychology, it is rarely applied in cognitive psychology and neuroscience. One reason for the scarcity of applications is that these areas of psychology commonly employ within-subjects designs, and mediation models for within-subjects data are considerably more complicated than for between-subjects data. Here, we draw attention to the importance and ubiquity of mediational hypotheses in within-subjects designs, and we present a general and flexible software package for conducting Bayesian within-subjects mediation analyses in the R programming environment. We use experimental data from cognitive psychology to illustrate the benefits of within-subject mediation for theory testing and comparison.
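    The within-subject logic can be illustrated with per-subject regressions: for each subject, estimate the X→M path (a) and the M→Y path controlling for X (b), then average the subject-level indirect effects a·b. This is a didactic sketch using simulated data, not the Bayesian R package the paper presents:

```python
import numpy as np

# Simulate 20 subjects, each with 50 within-subject trials, where the
# manipulation X affects the mediator M (true a = 0.8) and M affects the
# outcome Y (true b = 0.5), plus a direct X->Y path. All effect sizes
# and noise levels are arbitrary choices for illustration.

rng = np.random.default_rng(1)

def subject_paths(x, m, y):
    """Return (a, b): a from OLS of m on x; b from OLS of y on [1, x, m]."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a, coef[2]

indirects = []
for _ in range(20):
    x = rng.normal(size=50)                          # manipulation
    m = 0.8 * x + rng.normal(scale=0.3, size=50)     # mediator
    y = 0.5 * m + 0.2 * x + rng.normal(scale=0.3, size=50)
    a, b = subject_paths(x, m, y)
    indirects.append(a * b)

print(round(float(np.mean(indirects)), 3))   # should be near 0.8*0.5 = 0.4
```

    A full multilevel treatment additionally models the covariance between the a and b paths across subjects, which the simple average of products ignores.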

  18. Sequences by Metastable Attractors: Interweaving Dynamical Systems and Experimental Data

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    2017-05-01

    Full Text Available Metastable attractors and heteroclinic orbits are present in the dynamics of various complex systems. Although their occurrence is well known, their identification and modeling is a challenging task. The present work briefly reviews the literature and proposes a novel combination of their identification in experimental data and their modeling by dynamical systems. This combination applies recurrence structure analysis, permitting the derivation of an optimal symbolic representation of metastable states and their dynamical transitions. To derive heteroclinic sequences of metastable attractors in various experimental conditions, the work introduces a Hausdorff clustering algorithm for symbolic dynamics. The application to brain signals (event-related potentials) utilizing neural field models illustrates the methodology.

  19. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  20. Comparison of existing plastic collapse load solutions with experimental data for 90° elbows

    International Nuclear Information System (INIS)

    Han, Jae-Jun; Lee, Kuk-Hee; Kim, Nak-Hyun; Kim, Yun-Jae; Jerng, Dong Wook; Budden, Peter J.

    2012-01-01

    This paper compares published experimental plastic collapse loads for 90° elbows with existing closed-form solutions. A total of 46 experimental data sets are considered, covering pure bending (in-plane closing, in-plane opening and out-of-plane bending) and combined pressure and bending loads. The plastic collapse load solutions considered are from the ASME code, the Ductile Fracture Handbook of Zahoor, by Chattopadhyay and co-workers, and by Y.-J. Kim and co-workers. Comparison with the experimental data shows that the ASME code solution is conservative by a factor of 2 on collapse load for in-plane closing bending, 2.3 for out-of-plane bending, and 3 for in-plane opening bending. The solutions given by Kim and co-workers give the least conservative estimates of plastic collapse loads, although they provide slightly non-conservative estimates for some data. - Highlights: ► We compare 46 published experimental plastic collapse loads for 90° elbows with four existing plastic collapse load solutions. ► We find that the ASME code solution is conservative by a factor of 2–3, depending on the loading mode. ► We find that the solutions given by Kim and co-workers give the least conservative estimates of plastic collapse loads.

  1. Bank of experimental data on heat transfer crisis at water boiling in circular tubes

    International Nuclear Information System (INIS)

    Sedova, T.K.; Smolin, V.N.; Shpanskij, S.V.

    1982-01-01

Basic principles and structure of an automated information system (bank) are described. The system accumulates and stores experimental data on the heat-transfer crisis in boiling water flow within tubular fuel elements. For each experimental section registered in the bank there is a certain amount of information, including both geometry and design characteristics (dimensions, heat release distribution, number of registered regimes and so on) and the investigated operation regimes. Each regime is characterized by values of pressure, outlet enthalpy, critical power, coolant flow rate and others. The search programme screens the available lists of experimental sections and regimes, transferring the information to subprogrammes in which, on the basis of the user request, a particular section and regime are selected. A brief analysis of accumulated experimental data from 26 Soviet and foreign sources is given [ru
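
The section/regime organisation described above can be sketched in software terms as nested records with a selection routine (the field names and values here are invented, not the bank's actual schema):

```python
# Hypothetical sketch of selecting regimes from a CHF data bank: each test
# section carries geometry data and a list of operating regimes.
sections = [
    {"id": "S-01", "diameter_mm": 8.0, "heated_length_m": 1.0,
     "regimes": [{"pressure_MPa": 7.0, "mass_flux": 1000.0, "q_crit_MW": 2.1},
                 {"pressure_MPa": 10.0, "mass_flux": 1500.0, "q_crit_MW": 2.6}]},
    {"id": "S-02", "diameter_mm": 10.0, "heated_length_m": 2.0,
     "regimes": [{"pressure_MPa": 7.0, "mass_flux": 2000.0, "q_crit_MW": 3.0}]},
]

def select(sections, pressure_MPa):
    """Return (section id, regime) pairs matching the requested pressure."""
    return [(s["id"], r) for s in sections for r in s["regimes"]
            if r["pressure_MPa"] == pressure_MPa]

hits = select(sections, 7.0)
print(len(hits))   # number of matching regimes
```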

  2. Experimental data from a full-scale facility investigating radiant and convective terminals

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Jensen, Rasmus Lund

    The objective of this technical report is to provide information on the accuracy of the experiments performed in “the Cube” (part I, II and III). Moreover, this report lists the experimental data, which have been monitored in the test facility (part IV). These data are available online and can be...

  3. Modeling Aerobic Carbon Source Degradation Processes using Titrimetric Data and Combined Respirometric-Titrimetric Data: Experimental Data and Model Structure

    DEFF Research Database (Denmark)

    Gernaey, Krist; Petersen, B.; Nopens, I.

    2002-01-01

    Experimental data are presented that resulted from aerobic batch degradation experiments in activated sludge with simple carbon sources (acetate and dextrose) as substrates. Data collection was done using combined respirometric-titrimetric measurements. The respirometer consists of an open aerated....... For acetate, protons were consumed during aerobic degradation, whereas for dextrose protons were produced. For both carbon sources, a linear relationship was found between the amount of carbon source added and the amount of protons consumed (in case of acetate: 0.38 meq/mmol) or produced (in case of dextrose...

  4. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity but also the effective neutron source intensity can be calculated by this method. A computer code was written based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method with unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
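
As a hedged illustration of the idea (not the authors' code or exact formulation): in one common source-corrected form of the inverse kinetic method, the apparent reactivity is linear in the reciprocal of the detector signal, so a single-variable ("unitary") least-squares fit recovers both the true reactivity (intercept) and a term proportional to the effective source intensity (slope). All numerical values below are invented:

```python
import numpy as np

# Assumed model for the demo: rho_app = rho_true + k_S / n, where n(t) is
# the detector signal and k_S is proportional to the effective source.
rng = np.random.default_rng(0)
n = np.linspace(100.0, 1000.0, 50)              # detector signal (a.u.)
rho_true, k_S = -0.50, 40.0                     # "unknowns" for the demo
rho_app = rho_true + k_S / n + rng.normal(0.0, 0.002, n.size)

# Unitary linear regression of rho_app on 1/n:
slope, intercept = np.polyfit(1.0 / n, rho_app, 1)
print(round(intercept, 2), round(slope, 1))     # estimates of rho_true, k_S
```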

  5. Preliminary Validation of the MATRA-LMR Code Using Existing Sodium-Cooled Experimental Data

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Kim, Sangji

    2014-01-01

The main objective of the SFR prototype plant is to verify TRU metal fuel performance, reactor operation, and the transmutation ability of high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during the whole plant operation. The fuel design limit is highly dependent on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, an accurate temperature calculation in each subassembly is highly important to assure a safe and reliable operation of the reactor systems. The current core thermal-hydraulic design is mainly performed using the SLTHEN (Steady-State LMR Thermal-Hydraulic Analysis Code Based on ENERGY Model) code, which has already been validated using the existing sodium-cooled experimental data. In addition to the SLTHEN code, a detailed analysis is performed using the MATRA-LMR (Multichannel Analyzer for Transient and steady-state in Rod Array-Liquid Metal Reactor) code. In this work, the MATRA-LMR code is validated for a single subassembly evaluation using the previous sodium-cooled experimental data. The results demonstrate that the design code appropriately predicts the temperature distributions compared with the experimental values. Major differences are observed in experiments with large pin numbers, due to differences in radial mixing

  6. Combining Simulated and Experimental Data to Simulate Ultrasonic Array Data From Defects in Materials With High Structural Noise.

    Science.gov (United States)

    Bloxham, Harry A; Velichko, Alexander; Wilcox, Paul David

    2016-12-01

Ultrasonic nondestructive testing inspections using phased arrays are performed on a wide range of components and materials. All real inspections suffer, to varying extents, from coherent noise, including image artifacts and speckle caused by complex geometries and grain scatter, respectively. By its nature, this noise is not reduced by averaging; however, it degrades the signal-to-noise ratio of defects and ultimately limits their detectability. When evaluating the effectiveness of an inspection, a large pool of data from samples containing a range of different defects is important for estimating the probability of detection of defects and for helping to characterize them. For a given inspection, coherent noise is easy to measure experimentally but hard to model realistically. Conversely, the ultrasonic response of defects can be simulated relatively easily. This paper proposes a novel method of simulating realistic array data by combining noise-free simulations of defect responses with coherent noise taken from experimental data. This removes the need for costly physical samples with known defects to be made and allows for large data sets to be created easily.
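
The core operation is a superposition: add a noise-free simulated defect response to coherent noise measured on a defect-free part of the sample. A minimal sketch under invented array shapes and stand-in data (real full-matrix-capture data would replace both arrays):

```python
import numpy as np

# Stand-ins for (tx*rx, time-sample) array data; the random field here is
# only a placeholder for measured coherent noise, which in practice would
# be taken from defect-free experimental acquisitions.
rng = np.random.default_rng(1)
n_elements, n_samples = 8, 256

defect_response = np.zeros((n_elements**2, n_samples))
defect_response[:, 120:130] = 1.0                      # synthetic defect echo
coherent_noise = 0.2 * rng.standard_normal((n_elements**2, n_samples))

realistic_data = defect_response + coherent_noise      # combined data set
snr = defect_response.max() / coherent_noise.std()     # crude SNR estimate
print(realistic_data.shape, round(snr, 1))
```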

  7. Texas Panhandle soil-crop-beef food chain for uranium: a dynamic model validated by experimental data

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Wallwork-Barber, K.M.; Rodgers, J.C.; Gallegos, A.F.

    1982-01-01

    Long-term simulations of uranium transport in the soil-crop-beef food chain were performed using the BIOTRAN model. Experimental data means from an extensive Pantex beef cattle study are presented. Experimental data were used to validate the computer model. Measurements of uranium in air, soil, water, range grasses, feed, and cattle tissues are compared to simulated uranium output values in these matrices when the BIOTRAN model was set at the measured soil and air values. The simulations agreed well with experimental data even though metabolic details for ruminants and uranium chemical form in the environment remain to be studied

  8. Increasing process understanding by analyzing complex interactions in experimental data

    DEFF Research Database (Denmark)

    Naelapaa, Kaisa; Allesø, Morten; Kristensen, Henning Gjelstrup

    2009-01-01

    understanding of a coating process. It was possible to model the response, that is, the amount of drug released, using both mentioned techniques. However, the ANOVAmodel was difficult to interpret as several interactions between process parameters existed. In contrast to ANOVA, GEMANOVA is especially suited...... for modeling complex interactions and making easily understandable models of these. GEMANOVA modeling allowed a simple visualization of the entire experimental space. Furthermore, information was obtained on how relative changes in the settings of process parameters influence the film quality and thereby drug......There is a recognized need for new approaches to understand unit operations with pharmaceutical relevance. A method for analyzing complex interactions in experimental data is introduced. Higher-order interactions do exist between process parameters, which complicate the interpretation...

  9. The impact of retirement on health: quasi-experimental methods using administrative data.

    Science.gov (United States)

    Horner, Elizabeth Mokyr; Cullen, Mark R

    2016-02-19

Is retirement good or bad for health? Disentangling causality is difficult. Much of the previous quasi-experimental research on the effect of retirement on health used self-reported health and relied upon discontinuities in public retirement incentives across Europe. The current study investigated the effect of retirement on health by exploiting discontinuities in private retirement incentives, using a quasi-experimental study design. Secondary data (1997-2009) on a cohort of male manufacturing workers in a United States setting. Health status was determined using claims data from private insurance and Medicare. Analyses used employer-based administrative and claims data and claims data from Medicare. Widely used selection-on-observables models overstate the negative impact of retirement due to the endogeneity of the decision to retire. In addition, health status as measured by administrative claims data provides some advantages over the more commonly used survey items. Using an instrument and administrative health records, we find null to positive effects of retirement on all fronts, with a possible exception of increased risk for diabetes. This study provides evidence that retirement is not detrimental and may be beneficial to health for a sample of manufacturing workers. In addition, it supports previous research indicating that quasi-experimental methodologies are necessary to evaluate the relationship between retirement and health, as any selection-on-observables model will overstate the negative effect of retirement on health. Further, it provides a model for how such research could be implemented in countries like the United States that do not have a strong public pension program. Finally, it demonstrates that such research need not rely upon survey data, which has certain shortcomings and is not always available for homogeneous samples.

  10. DNB Mechanistic model assessment based on experimental data in narrow rectangular channel

    International Nuclear Information System (INIS)

    Zhou Lei; Yan Xiao; Huang Yanping; Xiao Zejun; Huang Shanfang

    2011-01-01

The departure from nucleate boiling (DNB) is an important issue for the safety of a PWR. In the absence of assessment against experimental data points, it is doubtful whether the existing models can be used in narrow rectangular channels. Based on experimental data points in narrow rectangular channels, two kinds of classical DNB models, the liquid sublayer dryout model (LSDM) and the bubble crowding model (BCM), were assessed. The results show that the BCM has a much wider application range than the LSDM. Several thermal parameters systematically influence the results calculated by the models. The performance of all the models deteriorates as the void fraction increases. The reason may be attributed to the geometrical differences between a circular tube and a narrow rectangular channel. (authors)

  11. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for the two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions.
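
Of the two techniques named, nudging (Newtonian relaxation) is the simpler: the simulated field is relaxed toward the measurements at each step. A minimal sketch, with the relaxation time scale `tau` as an assumed tuning parameter rather than a value from the study:

```python
import numpy as np

def nudge(u_sim, u_obs, dt, tau):
    """Relax the simulated velocity toward observations:
    du/dt += (u_obs - u_sim)/tau, discretized with forward Euler."""
    return u_sim + (dt / tau) * (u_obs - u_sim)

u_sim = np.array([1.0, 2.0, 3.0])   # simulated velocities at measurement points
u_obs = np.array([1.2, 1.8, 3.3])   # measured (e.g. PIV) velocities
u_new = nudge(u_sim, u_obs, dt=1e-3, tau=1e-2)
print(u_new)                        # pulled dt/tau = 10% toward the observations
```

The weighting dt/tau plays the role of the assimilation strength the abstract proposes to optimize: larger values trust the data more, at the risk of exciting spurious transients in the simulation.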

  12. Experimental Data Does Not Violate Bell's Inequality for "Right Kolmogorov Space''

    DEFF Research Database (Denmark)

    Fischer, Paul; Avis, David; Hilbert, Astrid

    2008-01-01

    of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent...... probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory....

  13. Determination of the angle of attack on the mexico rotor using experimental data

    DEFF Research Database (Denmark)

    Yang, Hua; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2010-01-01

    characteristics from experimental data on the MEXICO (Model Experiments in controlled Conditions) rotor. Detailed surface pressure and Particle Image Velocimetry (PIV) flow field at different rotor azimuth positions were examined for determining the sectional airfoil data. It is worthwhile noting that the present...

  14. A memory module for experimental data handling

    Science.gov (United States)

    De Blois, J.

    1985-02-01

A compact CAMAC memory module for experimental data handling was developed to eliminate the need for direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.
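
The three operating modes map onto familiar data-handling patterns. A hedged software analogy (the real device is CAMAC hardware; this only illustrates the semantics of the modes):

```python
class MemoryModule:
    """Software analogy of the module's increment, list and fifo modes."""

    def __init__(self, size):
        self.mem = [0] * size   # main memory (MEM)
        self.fifo = []          # fifo buffer (FIFO)
        self.addr = 0           # address counter (ADDR)

    def increment(self, channel):
        """Increment mode: histogramming, e.g. multichannel analysis."""
        self.mem[channel] += 1

    def list_store(self, value):
        """List mode: store events sequentially at successive addresses."""
        self.mem[self.addr] = value
        self.addr += 1

    def fifo_push(self, value):
        """Fifo mode: buffer data for later first-in, first-out readout."""
        self.fifo.append(value)

    def fifo_pop(self):
        return self.fifo.pop(0)

m = MemoryModule(1024)
for ch in [5, 5, 9]:            # three "detector events"
    m.increment(ch)
print(m.mem[5], m.mem[9])       # 2 1
```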

  15. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    Science.gov (United States)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

This paper presents a statistical analysis of experimental calibration data for flowmeters (i.e. electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of the calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) minus the flow rate QM contemporarily recorded by the flowmeter under calibration, expressed as a percentage of the same QM.
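
The correction C defined in the abstract is a one-line computation. A sketch with an invented calibration point inside the accredited 3-80 l/s range:

```python
def correction_percent(Q, QM):
    """Correction C: facility flow rate Q minus flowmeter reading QM,
    expressed as a percentage of QM."""
    return 100.0 * (Q - QM) / QM

Q, QM = 50.00, 49.85   # l/s (illustrative values, not data from the archive)
print(round(correction_percent(Q, QM), 3))
```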

  16. Benchmarking Experimental and Computational Thermochemical Data: A Case Study of the Butane Conformers.

    Science.gov (United States)

    Barna, Dóra; Nagy, Balázs; Csontos, József; Császár, Attila G; Tasi, Gyula

    2012-02-14

    Due to its crucial importance, numerous studies have been conducted to determine the enthalpy difference between the conformers of butane. However, it is shown here that the most reliable experimental values are biased due to the statistical model utilized during the evaluation of the raw experimental data. In this study, using the appropriate statistical model, both the experimental expectation values and the associated uncertainties are revised. For the 133-196 and 223-297 K temperature ranges, 668 ± 20 and 653 ± 125 cal mol(-1), respectively, are recommended as reference values. Furthermore, to show that present-day quantum chemistry is a favorable alternative to experimental techniques in the determination of enthalpy differences of conformers, a focal-point analysis, based on coupled-cluster electronic structure computations, has been performed that included contributions of up to perturbative quadruple excitations as well as small correction terms beyond the Born-Oppenheimer and nonrelativistic approximations. For the 133-196 and 223-297 K temperature ranges, in exceptional agreement with the corresponding revised experimental data, our computations yielded 668 ± 3 and 650 ± 6 cal mol(-1), respectively. The most reliable enthalpy difference values for 0 and 298.15 K are also provided by the computational approach, 680.9 ± 2.5 and 647.4 ± 7.0 cal mol(-1), respectively.
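
As a rough illustration of what such an enthalpy difference implies (this is a simple Boltzmann estimate with only the gauche degeneracy of 2, neglecting other entropy contributions; it is not the statistical model used in the paper):

```python
import math

R = 1.987204  # gas constant, cal mol^-1 K^-1

def gauche_fraction(delta_h_cal, T):
    """Fraction of butane in the gauche conformation, assuming a two-state
    anti/gauche model with gauche degeneracy 2 and enthalpy-only weighting."""
    K = 2.0 * math.exp(-delta_h_cal / (R * T))  # gauche/anti population ratio
    return K / (1.0 + K)

# Using the computed 298.15 K value quoted above (647.4 cal/mol):
print(round(gauche_fraction(647.4, 298.15), 3))
```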

  17. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm - and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
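
A hedged sketch of an "Average Absolute Deviation"-style goodness-of-fit check by resampling (the statistic and the resampling scheme here are illustrative reconstructions, not the authors' exact procedure): the statistic is the mean absolute deviation between observed cell proportions and the model's predicted probabilities, and its null distribution is built by resampling data from the fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def aad(counts, probs):
    """Average absolute deviation of observed proportions from model probs."""
    n = counts.sum()
    return np.abs(counts / n - probs).mean()

predicted = np.array([0.5, 0.3, 0.2])          # model probabilities (invented)
observed = np.array([55, 25, 20])              # observed cell counts, n = 100

stat = aad(observed, predicted)
null = np.array([aad(rng.multinomial(100, predicted), predicted)
                 for _ in range(5000)])        # resample under the model
p_value = (null >= stat).mean()
print(round(stat, 3), round(p_value, 2))
```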

  18. Validation of NEPTUNE-CFD two-phase flow models using experimental data

    International Nuclear Information System (INIS)

    Perez-Manes, Jorge; Sanchez Espinoza, Victor Hugo; Bottcher, Michael; Stieglitz, Robert; Sergio Chiva Vicent

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BWR BFBT and PSBT Benchmark. Since the two-phase models of CFD codes are extensively being improved, the validation is a key step for the acceptability of such codes. The validation work is performed in the frame of the European NURISP Project and it was focused on the steady state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Due to the coupling of heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is improved significantly. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvements of the NEPTUNE-CFD code were identified, too. (authors)

  19. A computer program to evaluate the experimental data in instrumental multielement neutron activation analysis

    International Nuclear Information System (INIS)

    Greim, L.; Motamedi, K.; Niedergesaess, R.

    1976-01-01

A computer code evaluating experimental data of neutron activation analysis (NAA) for the determination of atomic abundances is described. The experimental data are, besides a probe designation, the probe weight, irradiation parameters and a Ge(Li) pulse-height spectrum from the activity measurement. The organisation of the necessary nuclear data, comprising all methods of activation in reactor irradiations, is given. Furthermore, the automatic evaluation of spectra, the assignment of the resulting peaks to nuclei and the calculation of atomic abundances are described. The complete evaluation of a spectrum with many lines, e.g. 100 lines of 20 nuclei, takes less than 1 minute of machine time on the TR 440 computer. (orig.) [de
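
The abundance step of such a code rests on the standard activation and decay arithmetic: a measured peak area is divided by the detection and activation factors to recover the number of target atoms. A hedged sketch (all numerical values invented; real codes add corrections for geometry, self-shielding, etc.):

```python
import math

def atoms_from_peak(peak_counts, sigma_cm2, flux, t_irr, t_decay, t_count,
                    efficiency, gamma_intensity, half_life):
    """Number of target atoms from a gamma peak area, using the standard
    activation equation (times in seconds, flux in n/cm^2/s)."""
    lam = math.log(2.0) / half_life
    saturation = 1.0 - math.exp(-lam * t_irr)          # activation build-up
    decay = math.exp(-lam * t_decay)                   # cooling before counting
    counting = (1.0 - math.exp(-lam * t_count)) / lam  # decay during counting
    denom = sigma_cm2 * flux * saturation * decay * counting
    return peak_counts / (efficiency * gamma_intensity * denom)

N = atoms_from_peak(peak_counts=1.0e5, sigma_cm2=1.0e-23, flux=1.0e13,
                    t_irr=3600.0, t_decay=600.0, t_count=1800.0,
                    efficiency=0.05, gamma_intensity=0.99, half_life=9000.0)
print(f"{N:.3e}")   # target atoms in the probe
```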

  20. Code ''Repol'' to fit experimental data with a polynomial and its graphics plotting

    International Nuclear Information System (INIS)

    Travesi, A.; Romero, L.

    1983-01-01

The ''Repol'' code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the Least Squares Criterion. Further, it plots the fitted polynomial, in the appropriate coordinate axes system, on a plotter. An additional option also allows the graphic plotting of the experimental data used for the fit. The data necessary to execute this code are requested from the operator on the screen, in an iterative way, by screen-operator dialogue, and the values are entered through the keyboard. This code is written in Fortran IV and, because it is structured in subroutine blocks, can be adapted to any computer with a graphic screen and keyboard terminal, with a serially connected plotter whose software has the Hewlett Packard ''Graphics 1000''. (author)
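
The numerical core of such a code is an ordinary least-squares polynomial fit. Sketched with NumPy (plotting omitted; the data points are invented):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])    # invented measurements

m = 2                                         # requested degree (max. 10 in REPOL)
coeffs = np.polyfit(x, y, deg=m)              # least squares, highest power first
fitted = np.polyval(coeffs, x)
rms_residual = np.sqrt(np.mean((y - fitted) ** 2))
print(coeffs.round(3), round(rms_residual, 3))
```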

  1. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, C.; Gehrchen, P.M.; Kiaer, T.

    2008-01-01

    STUDY DESIGN: A finite element analysis-based bone remodeling study in human was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws. OBJECTIVE......: The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies...... operated on with pedicle screws between L4 and L5. The stress shielding effect was also examined. The bone remodeling results were compared with prospective bone mineral content measurements of 4 patients. They were measured after surgery, 3-, 6- and 12-months postoperatively. RESULTS: After 1 year...

  2. Analysis and discussion on the experimental data of electrolyte analyzer

    Science.gov (United States)

    Dong, XinYu; Jiang, JunJie; Liu, MengJun; Li, Weiwei

    2018-06-01

In the subsequent verification of electrolyte analyzers, we found that an instrument can achieve good repeatability and stability in repeated measurements within a short period of time, in line with the requirements of the verification regulation for linear error and cross-contamination rate. However, large indication errors are very common, and the measurement results of different manufacturers differ greatly. In order to identify and solve this problem, to help enterprises improve product quality, and to obtain accurate and reliable measurement data, we conducted an experimental evaluation of electrolyte analyzers and analyzed the resulting data statistically.

  3. Management, Analysis, and Visualization of Experimental and Observational Data -- The Convergence of Data and Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Greenwald, Martin; Kleese van Dam, Kersten; Parashar, Manish; Wild, Stefan, M.; Wiley, H. Steven

    2016-10-27

    Scientific user facilities---particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more---operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time there is a growing community of experimentalists that require real-time data analysis feedback, to enable them to steer their complex experimental instruments to optimized scientific outcomes and new discoveries. Recent efforts in DOE-SC have focused on articulating the data-centric challenges and opportunities facing these science communities. Key challenges include difficulties coping with data size, rate, and complexity in the context of both real-time and post-experiment data analysis and interpretation. Solutions will require algorithmic and mathematical advances, as well as hardware and software infrastructures that adequately support data-intensive scientific workloads. This paper presents the summary findings of a workshop held by DOE-SC in September 2015, convened to identify the major challenges and the research that is needed to meet those challenges.

  4. Code REPOL to fit experimental data with a polynomial, and its graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

The REPOL code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the Least Squares Criterion. Further, it plots the fitted polynomial, in the appropriate coordinate axes system, on a plotter. An additional option also allows the graphic plotting of the experimental data used for the fit. The data necessary to execute this code are requested from the operator on the screen, in an iterative way, by screen-operator dialogue, and the values are entered through the keyboard. This code is written in Fortran IV and, because it is structured in subroutine blocks, can be adapted to any computer with a graphic screen and keyboard terminal, with a serially connected plotter whose software has the Hewlett Packard Graphics 1000. (Author) 5 refs

  5. Fatigue crack extension in nozzle junctions; comparison of analytical approximations with experimental data

    International Nuclear Information System (INIS)

    Broekhoven, M.J.G.; Ruijtenbeek, M.G. van de

    1975-01-01

The fracture mechanics based stress intensity factor (K-factor) concept has obtained widespread acceptance as a tool for quantitative analysis of both fatigue crack growth and unstable fracture. The present study discusses the applicability of various simple analytical approximations by comparing results with experimental data. A semi-analytical procedure has been developed whose main characteristics are: the true stress distribution perpendicular to the crack plane for the uncracked structure is used as input data; an extended version of the Shah and Kobayashi solution for elliptical cracks, loaded on their surfaces by tractions described by fourth-order double symmetrical polynomials fitted through the data of the previous step, is used to calculate full K-factor variations along the crack fronts; several corrections, among others for free surfaces and for a corner radius, are incorporated. The experiments involve careful monitoring of crack growth rates (da/dN) under uniaxial fatigue loading of precracked nozzle-on-plate models, using among other means a closed-circuit TV system. The resulting da/dN versus crack length (a) curves are converted into K versus a curves using da/dN versus ΔK curves for the same material (ASTM A 508 C12) obtained by standard procedures. Comparison of theoretical and experimental data yields the conclusions that: simple analytical approximations as sometimes recommended in the literature may largely overestimate or underestimate K-factors for nozzle corner cracks; a computer program based on the semi-analytical procedure yields results within seconds of CPU time once the input data have been generated. These results compare well with experimental and available finite element data for the range of crack depths of practical concern

  6. Summary Report of the Workshop on the Experimental Nuclear Reaction Data Database

    International Nuclear Information System (INIS)

    Semkova, V.; Pritychenko, B.

    2014-12-01

    The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here. (author)

  7. Summary Report of the Workshop on The Experimental Nuclear Reaction Data Database

    Energy Technology Data Exchange (ETDEWEB)

    Semkova, V. [IAEA Nuclear Data Section, Vienna (Austria); Pritychenko, B. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2014-12-01

    The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here.

  8. Confrontation of thermoluminescence models in lithium fluoride with experimental data

    International Nuclear Information System (INIS)

    Niewiadomski, T.

    1976-12-01

    The thermoluminescent properties of lithium fluoride depend on numerous factors and are much more complex than those of other phosphors. The fragmentary models developed so far are meant to explain the relationships between the crystal defect structure and the processes involved in TL. An attempt has been made to compare these models with verified experimental data and to point out the observations that are inconsistent with the models. (author)

  9. EXFOR – a global experimental nuclear reaction data repository: Status and new developments

    Directory of Open Access Journals (Sweden)

    Semkova Valentina

    2017-01-01

    Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data are assembled and incorporated within the EXFOR database. Recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.

  10. A memory module for experimental data handling

    International Nuclear Information System (INIS)

    Blois, J. de

    1985-01-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need for direct memory access in computer-controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment mode, a list mode and a FIFO mode. This is achieved by connecting the main parts, namely the memory (MEM), the FIFO buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), via an internal 24-bit data bus. The time needed for data bus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments. (orig.)

  11. Three-dimensional inviscid analysis of radial-turbine flow and a limited comparison with experimental data

    Science.gov (United States)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  13. From experimental zoology to big data: Observation and integration in the study of animal development.

    Science.gov (United States)

    Bolker, Jessica; Brauckmann, Sabine

    2015-06-01

    The founding of the Journal of Experimental Zoology in 1904 was inspired by a widespread turn toward experimental biology in the 19th century. The founding editors sought to promote experimental, laboratory-based approaches, particularly in developmental biology. This agenda raised key practical and epistemological questions about how and where to study development: Does the environment matter? How do we know that a cell or embryo isolated to facilitate observation reveals normal developmental processes? How can we integrate descriptive and experimental data? R.G. Harrison, the journal's first editor, grappled with these questions in justifying his use of cell culture to study neural patterning. Others confronted them in different contexts: for example, F.B. Sumner insisted on the primacy of fieldwork in his studies on adaptation, but also performed breeding experiments using wild-collected animals. The work of Harrison, Sumner, and other early contributors exemplified both the power of new techniques, and the meticulous explanation of practice and epistemology that was marshaled to promote experimental approaches. A century later, experimentation is widely viewed as the standard way to study development; yet at the same time, cutting-edge "big data" projects are essentially descriptive, closer to natural history than to the approaches championed by Harrison et al. Thus, the original questions about how and where we can best learn about development are still with us. Examining their history can inform current efforts to incorporate data from experiment and description, lab and field, and a broad range of organisms and disciplines, into an integrated understanding of animal development. © 2015 Wiley Periodicals, Inc.

  14. Experimental validation of incomplete data CT image reconstruction techniques

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Hsiao, M.L.; Tam, K.C.

    1989-01-01

    X-ray CT inspection of large metal parts is often limited by x-ray penetration problems along many of the ray paths required for a complete CT data set. In addition, because of the complex geometry of many industrial parts, manipulation difficulties often prevent scanning over some range of angles. CT images reconstructed from these incomplete data sets contain a variety of artifacts which limit their usefulness in part quality determination. Over the past several years, the authors' company has developed two new methods of incorporating a priori information about the parts under inspection to significantly improve incomplete-data CT image quality. This work reviews the methods and presents experimental results which confirm the effectiveness of the techniques. The new methods for dealing with incomplete CT data sets rely on a priori information from part blueprints (in electronic form), outer boundary information from touch sensors, estimates of part outer boundaries from available x-ray data, and linear x-ray attenuation coefficients of the part. The two methods make use of this information in different fashions. The relative performance of the two methods in detecting various flaw types is compared. Methods for accurately registering a priori information with x-ray data are also described. These results are critical to a new industrial x-ray inspection cell built for inspection of large aircraft engine parts.

  15. Regularization of the double period method for experimental data processing

    Science.gov (United States)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
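
    The stabilizer mentioned in the abstract penalizes the squared second derivative of the fitted curve. The following minimal sketch (our own illustration on synthetic data, not the authors' double period code) smooths noisy samples by solving the corresponding regularized least-squares problem:

```python
import numpy as np

def smooth_tikhonov(y, lam):
    """Minimize ||f - y||^2 + lam * ||D2 f||^2, where D2 is the discrete
    second-derivative operator (a squared-second-derivative stabilizer
    in minimal form)."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Normal equations of the regularized least-squares problem
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
noisy = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(50)
smoothed = smooth_tikhonov(noisy, lam=10.0)
# The penalty suppresses spurious oscillations: the second-difference
# norm of `smoothed` cannot exceed that of `noisy`.
```

Larger `lam` trades fidelity to the samples for smoothness of the recovered curve.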

  16. Containment accident analysis using CONTEMPT4/MOD2 compared with experimental data

    International Nuclear Information System (INIS)

    Metcalfe, L.J.; Hargroves, D.W.; Wells, R.A.

    1978-01-01

    CONTEMPT4/MOD2 is a new computer program developed to predict the long-term thermal hydraulic behavior of light-water reactor and experimental containment systems during postulated loss-of-coolant accident (LOCA) conditions. Improvements over previous containment codes include multicompartment capability and ice condenser analytical models. A program description and comparisons of calculated results with experimental data are presented

  17. In situ impulse test: an experimental and analytical evaluation of data interpretation procedures

    International Nuclear Information System (INIS)

    1975-08-01

    Special experimental field testing and analytical studies were undertaken at Fort Lawton in Seattle, Washington, to study "close-in" wave propagation and evaluate data interpretation procedures for a new in situ impulse test. This test was developed to determine the shear wave velocity and dynamic modulus of soils underlying potential nuclear power plant sites. The test is different from conventional geophysical testing in that the velocity variation with strain is determined for each test. In general, strains between 10⁻¹ and 10⁻³ percent are achieved. The experimental field work consisted of performing special tests in a large test sand fill to obtain detailed "close-in" data. Six recording transducers were placed at various points on the energy source, while approximately 37 different transducers were installed within the soil fill, all within 7 feet of the energy source. Velocity measurements were then taken simultaneously under controlled test conditions to study shear wave propagation phenomenology and help evaluate data interpretation procedures. Typical test data are presented along with detailed descriptions of the results.

  18. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
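
    The abstract does not give the search algorithm itself. As a rough illustration of what a regression model search does, the sketch below (our own toy example, not the NASA algorithm) performs greedy forward selection over candidate terms, at each step keeping the term that most reduces the residual sum of squares:

```python
import numpy as np
from itertools import combinations_with_replacement

def rss(terms, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    A = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

def forward_search(X, y, max_terms=4):
    """Greedy forward selection over candidate regression terms; a toy
    stand-in for the (far more elaborate) balance-calibration search."""
    n, p = X.shape
    # Candidate terms: intercept, linear terms, and pairwise products.
    cands = [np.ones(n)] + [X[:, j] for j in range(p)]
    cands += [X[:, i] * X[:, j]
              for i, j in combinations_with_replacement(range(p), 2)]
    chosen = []
    for _ in range(max_terms):
        k = min(range(len(cands)), key=lambda j: rss(chosen + [cands[j]], y))
        chosen.append(cands.pop(k))
    return chosen

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 0] * X[:, 1]   # noise-free "truth"
model = forward_search(X, y, max_terms=3)
print(rss(model, y))   # near zero once the generating terms are found
```

A full search algorithm would add the statistical quality constraints mentioned in the abstract on top of this raw fit criterion.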

  19. Analysis of experimental air-detritiation data using TSOAK-M1

    International Nuclear Information System (INIS)

    Land, R.H.; Maroni, V.A.; Minkoff, M.

    1980-01-01

    A computer code (TSOAK-M1) has been developed which permits the determination of tritium reaction (T₂ to HTO)/adsorption/release and instrument correction parameters from enclosure (building) detritiation test data. The code is based on a simplified model which treats each parameter as a normalized time-independent constant throughout the data-unfolding steps. TSOAK-M1 was used to analyze existing small-cubicle test data with good success, and the resulting normalized parameters were employed to evaluate hypothetical reactor-building detritiation scenarios. It was concluded from the latter evaluation that the complications associated with moisture formation, adsorption, and release, particularly in terms of extended cleanup times, may not be as great as was previously thought. It is recommended that the validity of the TSOAK-M1 model be tested using data from detritiation tests conducted on large experimental enclosures (5 to 10 m³) and, if possible, actual facility buildings.

  20. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the Federal...

  1. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the...

  2. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.

  3. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables depending on kinematic variables is presented. The capability to display large numbers of experimental data sets together with the models describing them is demonstrated with examples of data and models for observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, including the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  4. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables depending on kinematic variables is presented. The capability to display large numbers of experimental data sets together with the models describing them is demonstrated with examples of data and models for observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, including the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  5. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables depending on kinematic variables is presented. The capability to display large numbers of experimental data sets together with the models describing them is demonstrated with examples of data and models for observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, including the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  6. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop

    International Nuclear Information System (INIS)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for single-phase natural convection in an experimental sodium loop. The tests were conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) Facility, an engineering-scale high temperature sodium facility at the Oak Ridge National Laboratory used for thermal-hydraulic testing of simulated LMFBR subassemblies at normal and off-normal operating conditions. Heat generation in the 19 pin assembly during these tests was typical of decay heat levels. Tests were conducted both with zero initial forced flow and with a small initial forced flow. The bypass line was closed in most tests, but open in one. The computer code used to analyze these tests [LONAC (LOw flow and NAtural Convection)] is an ORNL-developed, fast running, one-dimensional, single-phase finite difference model for simulating forced and free convection transients in the THORS loop

  7. Comparison of a fuel sheath failure model with published experimental data

    International Nuclear Information System (INIS)

    Varty, R.L.; Rosinger, H.E.

    1982-01-01

    A fuel sheath failure model has been compared with the published results of experiments in which a Zircaloy-4 fuel sheath was subjected to a temperature ramp and a differential pressure until failure occurred. The model assumes that the deformation of the sheath is controlled by steady-state creep and that there is a relationship between tangential stress and temperature at the instant of failure. The sheath failure model predictions agree reasonably well with the experimental data. The burst temperature is slightly overpredicted by the model. The burst strain is overpredicted for small experimental burst strains but is underpredicted otherwise. The reasons for these trends are discussed and the extremely wide variation in burst strain reported in the literature is explained using the model
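
    The model's two ingredients, steady-state creep deformation and a tangential stress versus temperature failure criterion, can be sketched as below. All material constants are invented for illustration; they are not Zircaloy-4 property data, and the creep law form (Norton) is our assumption.

```python
import math

# Sketch of the two model ingredients with made-up constants:
#   (1) deformation controlled by steady-state (Norton) creep,
#   (2) failure when the tangential stress reaches a temperature-dependent
#       burst criterion.
A_N, n_N, Q, R = 1e-31, 5.0, 2.5e5, 8.314   # Norton creep parameters (assumed)
p, r0, w0 = 5.0e6, 5.0e-3, 0.5e-3           # pressure [Pa], radius, wall [m]

def burst_stress(T):
    return 8.0e8 - 5.0e5 * T     # assumed linear failure-stress criterion

T, strain, dt, ramp = 900.0, 0.0, 0.01, 5.0  # 5 K/s temperature ramp
while True:
    sigma = p * (r0 * (1 + strain)) / (w0 / (1 + strain))  # thin-shell stress
    if sigma >= burst_stress(T):
        break                                 # burst: criterion reached
    strain += A_N * sigma**n_N * math.exp(-Q / (R * T)) * dt
    T += ramp * dt
print(f"burst at T = {T:.0f} K with strain = {strain:.3f}")
```

The positive feedback between strain, wall thinning and stress reproduces the qualitative burst behavior discussed in the abstract.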

  8. Analysis of shallow water experimental acoustic data including normal mode model comparisons

    NARCIS (Netherlands)

    McHugh, R.; Simons, D.G.

    2000-01-01

    As part of a propagation model validation exercise, experimental acoustic and oceanographic data were collected from a shallow-water, long-range channel off the west coast of Scotland. Temporal variability effects in this channel were assessed through visual inspection of stacked plots, each of which

  9. 1988 Progress report of the EDF department for the analysis of experimental data and measurements

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The 1988 activity report of the EDF department for the analysis of experimental data and measurements (Departement Retour d'Experience Mesures-Essais, EDF, France) is presented. The mission of the department is to collect and investigate data from nuclear power plant operations. Investigations started before 1988 were continued in 1988. The department's main activities are: technology and information transfer from experimental activities, the construction of a standard data acquisition and processing system, the actions involving the N4 turbine, and the modelling and construction of new non-destructive methods of control. The most important facts and activities carried out in 1988 are presented [fr

  10. Acquisition of reactor experimental data; Akviziter reaktorskih eksperimentalnih podataka

    Energy Technology Data Exchange (ETDEWEB)

    Petrovic, M; Tasic, A [Institut za nuklearne nauke ' Boris Kidric' , Vinca, Belgrade (Yugoslavia)

    1966-07-01

    This paper includes an analysis of possible experiments and of the relevant experimental devices for the detection, registration and analysis of excitation and response signals. It presents the concept of our system for the detection and registration of data, which is appropriate for our research program. Non-typical details of certain acquisition circuits are also described. (author)

  11. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    Science.gov (United States)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to construct a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.

  12. DEAR Monte Carlo simulation versus experimental data in measurements with the DEAR NTP setup

    International Nuclear Information System (INIS)

    Bragadireanu, A.M.; Iliescu, M.; Petrascu, C.; Ponta, T.

    1999-01-01

    The DEAR NTP setup was installed in DAΦNE and has been taking background data since February 1999. The goal of this work is to compare the measurements, in terms of charged particle hits (clusters), with the DEAR Monte Carlo simulation, taking into account the main effects by which particles are lost from the circulating beams: the Touschek effect and beam-gas interaction. It should be mentioned that, during this period, no collisions between electrons and positrons were achieved in the DEAR Interaction Point (IP), and consequently we do not have any experimental data concerning the hadronic background coming from φ-decays directly or as secondary products of hadronic interactions. The NTP setup was shielded using lead and copper, which gives a shielding factor of about 4. In parallel with the NTP setup, the signals from two scintillator slabs (150 x 80 x 2 mm), positioned below the NTP setup and facing the IP, were collected by 4 PMTs, digitized and counted using a National Instruments Timer/Counter Card. To compare experimental data with the results of the Monte Carlo simulation, we selected periods with only one circulating beam (electrons or positrons), in order to have a clean data set, and we selected data files with CCD occupancy lower than 5%. As concerns the X-rays, the statistics were too poor to perform any quantitative comparison. The comparison between Monte Carlo, CCD data and kaon monitor data for the two beams is shown. The agreement is fairly good and promising for checking our routines, which describe the experimental setup and the physical processes occurring in the accelerator environment. (authors)

  13. The upgrade of the J-TEXT experimental data access and management system

    International Nuclear Information System (INIS)

    Yang, C.; Zhang, M.; Zheng, W.; Liu, R.; Zhuang, G.

    2014-01-01

    Highlights: • The J-TEXT DAMS is developed on the B/S model, which makes the system convenient to access. • JWeb-Scope adopts a segment strategy for reading data that improves the speed of data retrieval. • The DAMS integrates management tools and JWeb-Scope, providing visitors an easy way to access the experimental data. • JWeb-Scope can be accessed from all over the world to plot experimental data and zoom in or out smoothly. - Abstract: The experimental data of the J-TEXT tokamak are stored in the MDSplus database. The old J-TEXT data access system is based on the tools provided by MDSplus. Since the number of signals is huge, data retrieval for an experiment is difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS), based on MDSplus, has been developed. The DAMS leaves the old MDSplus system unchanged while providing new tools which help users to handle all signals as well as to retrieve the signals they need according to their information requirements. The DAMS also offers users a way to create their jScope configuration files, which can be downloaded to the local computer. In addition, the DAMS provides a JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their own choice and zoom in and out smoothly. The whole system is based on the B/S model, so that users need only a browser to access the DAMS. The DAMS has been tested and provides a better user experience. It will be integrated into the J-TEXT remote participation system later.
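
    The segment strategy is not specified in detail in the abstract. A common way to ship massive signals to a browser for plotting, sketched below on invented data (this is not the actual JWeb-Scope implementation), is to keep only the per-segment minimum and maximum so the visual envelope survives heavy down-sampling:

```python
import numpy as np

def segment_minmax(signal, n_segments):
    """Down-sample a long signal for plotting by keeping the min and max
    of each segment, preserving the visual envelope of the trace."""
    out = []
    for seg in np.array_split(np.asarray(signal), n_segments):
        out.extend([seg.min(), seg.max()])
    return np.array(out)

sig = np.sin(np.linspace(0, 100, 1_000_000))  # a million-point signal
small = segment_minmax(sig, 500)              # only 1000 points to transfer
print(small.size)   # 1000
```

Zooming in then simply means re-running the reduction over the narrower index range at a finer segment size.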

  14. The visualization and availability of experimental research data at Elsevier

    Science.gov (United States)

    Keall, Bethan

    2014-05-01

    In the digital age, the visualization and availability of experimental research data is an increasingly prominent aspect of the research process and of the scientific output that researchers generate. We expect that the importance of data will continue to grow, driven by technological advancements, requirements from funding bodies to make research data available, and a developing research data infrastructure that is supported by data repositories, science publishers, and other stakeholders. Elsevier is actively contributing to these efforts, for example by setting up bidirectional links between online articles on ScienceDirect and relevant data sets on trusted data repositories. A key aspect of Elsevier's "Article of the Future" program, these links enrich the online article and make it easier for researchers to find relevant data and articles and help place data in the right context for re-use. Recently, we have set up such links with some of the leading data repositories in Earth Sciences, including the British Geological Survey, Integrated Earth Data Applications, the UK Natural Environment Research Council, and the Oak Ridge National Laboratory DAAC. Building on these links, Elsevier has also developed a number of data integration and visualization tools, such as an interactive map viewer that displays the locations of relevant data from PANGAEA next to articles on ScienceDirect. In this presentation we will give an overview of these and other capabilities of the Article of the Future, focusing on how they help advance communication of research in the digital age.

  15. New amides for uranium extraction: comparison between in silico predictions and experimental data

    International Nuclear Information System (INIS)

    Klimshuk, O.; Ouadi, A.; Billard, I.; Varnek, A.; Fourches, D.; Solov'ev, V.

    2006-01-01

    New methods and original software tools for computer-aided molecular design have been used to develop in silico new monoamides which efficiently extract U(VI). A set of available experimental values of the uranyl partition coefficient (log D) in the water/toluene system for 22 monoamides was used by the ISIDA program to establish quantitative relationships between the structure of the molecules and their extraction properties. The developed structure-property models were then applied to screen a virtual combinatorial library containing more than 2000 molecules. Selected hits were synthesized and studied experimentally as extractants using the same protocol as for the molecules from the initial data set. The comparison between predicted and experimentally obtained log D values for the new extractants is discussed. (author)
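
    The workflow described (fit a structure-property model on measured log D values, then screen a virtual library) can be sketched with a plain multilinear fit. The descriptors, coefficients and candidate library below are invented for illustration; they are not the ISIDA descriptors or the study's data.

```python
import numpy as np

# Hypothetical fragment-count descriptors for a small training set of
# molecules (rows: molecules, columns: substructure counts).
X = np.array([[2, 1, 0], [3, 0, 1], [1, 2, 1], [4, 1, 2],
              [2, 2, 0], [3, 1, 1], [1, 0, 2], [2, 0, 1]], float)
true_w = np.array([0.5, -0.3, 0.8])
logD = X @ true_w + 0.2               # synthetic "measured" log D values

# Multilinear structure-property model: logD ~ w . x + b
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, logD, rcond=None)

# Screen a small "virtual library" of new structures and rank them.
library = np.array([[5, 0, 3], [0, 3, 0], [3, 2, 2]], float)
scores = np.column_stack([library, np.ones(len(library))]) @ w
print(scores.argmax())   # index of the predicted best extractant: 0
```

A real QSPR workflow would add descriptor selection and validation on held-out molecules before trusting the screen.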

  16. Development and implementation of application systems for reduction and analysis of experimental data in the Nuclear Physics area

    International Nuclear Information System (INIS)

    Cardoso Junior, J.L.; Schelin, H.R.; Lemos, B.J.K.C.; Tanaka, E.H.; Castro, A.T.C.G.

    1984-01-01

    Several application systems for the reduction and analysis of experimental data are described. These codes were developed and/or implemented on the IEAv/CTA CYBER 170/750 system. A brief description of the experimental data acquisition modes and of the reduction necessary for analysis is given. Information on the purposes, uses and access of the codes is given [pt

  17. Sixty years of research, 60 years of data: long-term US Forest Service data management on the Penobscot Experimental Forest

    Science.gov (United States)

    Matthew B. Russell; Spencer R. Meyer; John C. Brissette; Laura Kenefic

    2014-01-01

    The U.S. Department of Agriculture, Forest Service silvicultural experiment on the Penobscot Experimental Forest (PEF) in Maine represents 60 years of research in the northern conifer and mixedwood forests of the Acadian Forest Region. The objective of this data management effort, which began in 2008, was to compile, organize, and archive research data collected in the...

  18. Assessment of electronic component failure rates on the basis of experimental data

    International Nuclear Information System (INIS)

    Nitsch, R.

    1991-01-01

    Assessment and prediction of failure rates of electronic systems are made using experimental data derived from laboratory-scale tests or from field experience, such as component failure rate statistics or component repair statistics. Some problems and uncertainties encountered in the evaluation of such field data are discussed in the paper. In order to establish a sound basis for comparative assessment of data from various sources, the items of comparison and the procedure in cases of doubt have to be defined. The paper explains two standard methods proposed for practical failure rate definition. (orig.) [de

  19. Experimental device, corresponding forward model and processing of the experimental data using wavelet analysis for tomographic image reconstruction applied to eddy current nondestructive evaluation

    International Nuclear Information System (INIS)

    Joubert, P.Y.; Madaoui, N.

    1999-01-01

    In the context of eddy current nondestructive evaluation using a tomographic image reconstruction process, the success of the reconstruction depends not only on the choice of the forward model and of the inversion algorithms, but also on the ability to extract the pertinent data from the raw signal provided by the sensor. We present in this paper an experimental device designed for imaging purposes, the corresponding forward model, and a pre-processing of the experimental data using wavelet analysis. These three steps, combined with an inversion algorithm, will in the future allow image reconstruction of 3-D flaws. (authors)

  20. On the calibration strategies of the Johnson–Cook strength model: Discussion and applications to experimental data

    International Nuclear Information System (INIS)

    Gambirasio, Luca; Rizzi, Egidio

    2014-01-01

    The present paper aims at assessing the various procedures adoptable for calibrating the parameters of the so-called Johnson–Cook strength model, expressing the deviatoric behavior of elastoplastic materials, with particular reference to the description of High Strain Rate (HSR) phenomena. The procedures rely on input experimental data corresponding to a set of hardening functions recorded at different equivalent plastic strain rates and temperatures. After a brief review of the main characteristics of the Johnson–Cook strength model, five different calibration strategies are framed and widely described. The assessment is implemented through a systematic application of each calibration strategy to three different real material cases, i.e. a DH-36 structural steel, a commercially pure niobium and an AL-6XN stainless steel. Experimental data available in the literature are considered. Results are presented in terms of plots showing the predicted Johnson–Cook hardening functions against the experimental trends, together with tables describing the fitting problems which arise in each case, by assessing both the lower yield stress and the overall plastic flow errors introduced. The consequences of each calibration approach are then carefully compared and evaluated. A discussion of the positive and negative aspects of each strategy is presented and some suggestions on how to choose the best calibration approach are outlined, considering the available experimental data and the objectives of the subsequent modeling process. The proposed considerations should provide a useful guideline in the process of determining the best Johnson–Cook parameters in each specific situation in which the model is going to be adopted. A last section introduces some considerations about the calibration of the Johnson–Cook strength model through experimental data different from a set of hardening functions at different equivalent plastic strain rates and temperatures.
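The Johnson–Cook flow-stress form the abstract refers to is standard. As a minimal illustrative sketch (with invented numbers, not data or procedures from the paper), the reference-condition hardening parameters B and n can be recovered by log-linear least squares once the yield stress A is fixed:

```python
import math

# Johnson-Cook flow stress; A, B, n, C, m are the standard parameter names.
def jc_stress(eps_p, rate_star, t_star, A, B, n, C, m):
    """sigma = (A + B*eps_p^n) * (1 + C*ln(rate*)) * (1 - T*^m)."""
    return (A + B * eps_p ** n) * (1.0 + C * math.log(rate_star)) * (1.0 - t_star ** m)

def fit_hardening(eps, sigma, A):
    """Fit B and n from reference-condition data (rate* = 1, T* = 0) by
    log-linear least squares on ln(sigma - A) = ln(B) + n*ln(eps)."""
    xs = [math.log(e) for e in eps]
    ys = [math.log(s - A) for s in sigma]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    n_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - n_hat * mx), n_hat

# Synthetic check: recover known parameters from noise-free data (made up).
A_true, B_true, n_true = 350.0, 275.0, 0.36   # MPa, MPa, dimensionless
strains = [0.01, 0.02, 0.05, 0.1, 0.2]
stresses = [jc_stress(e, 1.0, 0.0, A_true, B_true, n_true, 0.022, 1.0)
            for e in strains]
B_fit, n_fit = fit_hardening(strains, stresses, A_true)
```

This corresponds only to the quasi-static, room-temperature stage of a calibration; the rate and temperature factors (C, m) would be fitted from additional data sets in the same spirit.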

  1. Can Experimental Scientists, Data Evaluators and Compilers, and Nuclear Data Users Understand One Another?

    Energy Technology Data Exchange (ETDEWEB)

    Usachev, L. N. [Institute of Physics and Energetics, Obninsk, USSR (Russian Federation)

    1966-07-01

    The International Atomic Energy Agency organizes conferences on a wide variety of scientific subjects, all of which are of fundamental importance for the development of nuclear power. These include the technology of fuel elements, their stability in neutron fields, and chemical reprocessing as well as reactor physics, mathematical computational methods and the problems of protection and dosimetry. The problem of microscopic nuclear data, an essential aspect of reactor work, is just one of these many subjects. On the other hand, it should be remembered that the possibility of releasing nuclear energy was established in the first place by obtaining nuclear data on the fission process occurring in the uranium nucleus following the capture of a neutron and on the escape of the 2-3 secondary fission neutrons. In early nuclear power work the information provided by nuclear data was of considerable, even of decisive, importance. For example, the information available on the neutron balance in fast reactors showed that such reactors could operate as breeders and thus that it was worth while developing them. Strictly speaking, it is of course difficult to speak of a knowledge of nuclear data at this early period. It is perhaps more accurate to speak of the understanding of and the feeling for such data which grew up on the basis of the existing physical ideas on the fission of the nucleus, radiative capture and neutron scattering. Experimental data were very scanty but for that reason they were particularly valuable.

  2. Can Experimental Scientists, Data Evaluators and Compilers, and Nuclear Data Users Understand One Another?

    International Nuclear Information System (INIS)

    Usachev, L.N.

    1966-01-01

    The International Atomic Energy Agency organizes conferences on a wide variety of scientific subjects, all of which are of fundamental importance for the development of nuclear power. These include the technology of fuel elements, their stability in neutron fields, and chemical reprocessing as well as reactor physics, mathematical computational methods and the problems of protection and dosimetry. The problem of microscopic nuclear data, an essential aspect of reactor work, is just one of these many subjects. On the other hand, it should be remembered that the possibility of releasing nuclear energy was established in the first place by obtaining nuclear data on the fission process occurring in the uranium nucleus following the capture of a neutron and on the escape of the 2-3 secondary fission neutrons. In early nuclear power work the information provided by nuclear data was of considerable, even of decisive, importance. For example, the information available on the neutron balance in fast reactors showed that such reactors could operate as breeders and thus that it was worth while developing them. Strictly speaking, it is of course difficult to speak of a knowledge of nuclear data at this early period. It is perhaps more accurate to speak of the understanding of and the feeling for such data which grew up on the basis of the existing physical ideas on the fission of the nucleus, radiative capture and neutron scattering. Experimental data were very scanty but for that reason they were particularly valuable

  3. Data taking and processing system for nuclear experimental physics study

    International Nuclear Information System (INIS)

    Nagashima, Y.; Kimura, H.; Katori, K.; Kuriyama, K.

    1979-01-01

    A multi-input, multi-mode, multi-user data taking and processing system was developed. The system has the following special features. 1) It is a multi-computer system consisting of two special processors and two minicomputers. 2) Pseudo devices are introduced to make operating procedures simple and easy; in particular, selection or modification of the 1-8 coincidence mode can be done very easily and quickly. 3) A 16 K-channel spectrum storage has 8 partitions; partitions of floating size are handled automatically by the data taking software SHINE. 4) On-line real-time data processing is supported. Using the FORTRAN language, users may prepare processing software separately from the data taking software; under the RSX-11D system software, it runs concurrently with the data taking software in multiprogramming mode. 5) Data communication between arbitrary external devices and the system is possible. With these communication procedures, not only data transfer between computers but also control of the experimental devices is realized. Like the real-time processing software, this software can be prepared by users and run concurrently with other software. 6) For data monitoring, two different graphic displays are used in a complementary manner: a refresh-type high-speed display and a storage-type large-screen display. Raw data are displayed on the former; processed data or multi-parametric large-volume data are displayed on the latter. (author)

  4. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between four and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly due to the fact that theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as −2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)
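The pooling step described above amounts to a weighted least-squares calibration, with each data set weighted by its stated accuracy. A minimal sketch follows, fitting an ICRU-35-style quadratic E = c0 + c1·Rp + c2·Rp² to two invented data sets ("experimental" and "Monte Carlo") with different uncertainties; the coefficients and points are illustrative only, not the paper's.

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [list(row) + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def weighted_quadratic_fit(rp, e, sigma):
    """Weighted LS for E = c0 + c1*Rp + c2*Rp^2 with weights 1/sigma^2."""
    basis = [[1.0, r, r * r] for r in rp]
    w = [1.0 / s ** 2 for s in sigma]
    M = [[sum(w[i] * basis[i][a] * basis[i][b] for i in range(len(rp)))
          for b in range(3)] for a in range(3)]
    v = [sum(w[i] * basis[i][a] * e[i] for i in range(len(rp)))
         for a in range(3)]
    return solve(M, v)

# Synthetic pooled data generated from known coefficients (invented numbers).
c_true = [0.22, 1.98, 0.0025]             # MeV, MeV/cm, MeV/cm^2
rp_exp = [2.0, 5.0, 10.0, 15.0]           # "experimental" ranges, cm
rp_mc = [3.0, 8.0, 12.0, 20.0, 25.0]      # "Monte Carlo" ranges, cm
rp = rp_exp + rp_mc
e = [c_true[0] + c_true[1] * r + c_true[2] * r * r for r in rp]
sigma = [0.5] * len(rp_exp) + [0.2] * len(rp_mc)   # different stated accuracies
c_fit = weighted_quadratic_fit(rp, e, sigma)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with real pooled data the weights control how strongly each source pulls the fit.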

  5. The essential value of long-term experimental data for hydrology and water management

    Science.gov (United States)

    Tetzlaff, D.; Carey, S. K.; McNamara, J. P.; Laudon, H.; Soulsby, C.

    2017-12-01

    Observations and data from long-term experimental watersheds are the foundation of hydrology as a geoscience. They allow us to benchmark process understanding, observe trends and natural cycles, and are prerequisites for testing predictive models. Long-term experimental watersheds are also places where new measurement technologies are developed. These studies offer a crucial evidence base for understanding and managing the provision of clean water supplies, predicting and mitigating the effects of floods, and protecting ecosystem services provided by rivers and wetlands. They also show how to manage land and water in an integrated, sustainable way that reduces environmental and economic costs. We present a number of compelling examples illustrating how hydrologic process understanding has been generated through comparing hypotheses to data, and how this understanding has been essential for managing water supplies, floods, and ecosystem services today.

  6. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    Science.gov (United States)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs a 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
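The sequential hypotheses testing idea underlying such adaptive designs can be sketched with a classic Wald SPRT: evidence is accumulated measurement by measurement and acquisition stops as soon as a decision threshold is crossed. This is only an illustration of the SHT framework with a toy Gaussian model and invented numbers; the X-ray specifics (sources, views, energy bins) are not modeled.

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT between N(mu0, sigma) and N(mu1, sigma).
    Returns ('H0' or 'H1', number of measurements consumed)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1 (threat)
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0 (benign)
    llr = 0.0
    for k, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment for H1 vs H0.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return ("H1" if llr > 0 else "H0"), len(samples)

# Fixed measurement sequence (for reproducibility) fluctuating around mu1 = 1.
obs = [1.2, 0.8, 1.5, 1.0, 1.3, 0.9, 1.1, 1.4, 1.0, 1.2]
decision, used = sprt(obs, mu0=0.0, mu1=1.0, sigma=1.0)
```

The test typically terminates well before all available measurements are consumed, which is the source of the sequential design's savings over a static design that always uses every source/view.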

  7. Integral method of treatment of experimental data from radiochemical solar neutrino detectors

    International Nuclear Information System (INIS)

    Gavrin, V.N.; Kopylov, A.V.; Streltsov, A.V.

    1985-01-01

    An analysis is made of the statistical errors in solar neutrino detection by radiochemical detectors at different times of exposure. It is shown that short exposures (τe = one-half to one half-life) give the minimal one-year error. The possibility is considered of detecting the solar neutrino flux variation due to annual changes of the Earth-Sun distance. The integral method of treatment of the experimental data is described. Results are given of the statistical treatment of computer-simulated data

  8. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Crecy, F de

    1994-12-31

    A mathematical tool, using pseudo-cubic thin-plate type Splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.

  9. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate type Splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
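The generalized cross-validation criterion used to pick the smoothing parameter can be sketched on a simpler linear smoother than a thin-plate spline: a ridge-penalized polynomial fit, where GCV(λ) = n·RSS(λ)/(n − tr H(λ))² is evaluated on a grid of λ. The basis, data, and λ grid below are invented for illustration; the paper's spline machinery is not reproduced.

```python
import math

def solve(M, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(M)
    A = [list(row) + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def gcv_score(xs, ys, lam, p=4):
    """Return (GCV, RSS, tr H) for a ridge-penalized degree-(p-1) fit."""
    n = len(xs)
    X = [[x ** j for j in range(p)] for x in xs]
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * ys[i] for i in range(n)) for a in range(p)]
    M = [[XtX[a][b] + (lam if a == b else 0.0) for b in range(p)]
         for a in range(p)]
    coef = solve(M, Xty)
    fit = [sum(X[i][j] * coef[j] for j in range(p)) for i in range(n)]
    rss = sum((ys[i] - fit[i]) ** 2 for i in range(n))
    # tr(H) = tr((X'X + lam I)^-1 X'X): one solve per column of X'X.
    trH = sum(solve(M, [XtX[b][a] for b in range(p)])[a] for a in range(p))
    return n * rss / (n - trH) ** 2, rss, trH

xs = [0.1 * i for i in range(31)]
ys = [math.sin(x) + 0.01 * (-1) ** i for i, x in enumerate(xs)]  # toy noisy data
scores = {lam: gcv_score(xs, ys, lam) for lam in (0.0, 1.0, 100.0)}
best_lam = min(scores, key=lambda lam: scores[lam][0])
```

As λ grows the effective degrees of freedom tr H(λ) shrink and the training residual grows; GCV trades these off without needing a held-out data set.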

  10. Normalization and experimental design for ChIP-chip data

    Directory of Open Access Journals (Sweden)

    Alekseyenko Artyom A

    2007-06-01

    Background: Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been widely used to investigate the DNA binding sites for a variety of proteins on a genome-wide scale. However, several issues in the processing and analysis of ChIP-chip data have not been resolved fully, including the effect of background (mock control) subtraction and normalization within and across arrays. Results: The binding profiles of the Drosophila male-specific lethal (MSL) complex on a tiling array provide a unique opportunity for investigating these topics, as the complex is known to bind on the X chromosome but not on the autosomes. These large bound and control regions on the same array allow clear evaluation of analytical methods. We introduce a novel normalization scheme specifically designed for ChIP-chip data from dual-channel arrays and demonstrate that this step is critical for correcting systematic dye bias that may exist in the data. Subtraction of the mock (non-specific antibody or no antibody) control data is generally needed to eliminate the bias, but appropriate normalization obviates the need for mock experiments and increases the correlation among replicates. The idea underlying the normalization can be used subsequently to estimate the background noise level in each array for normalization across arrays. We demonstrate the effectiveness of the methods with the MSL complex binding data and other publicly available data. Conclusion: Proper normalization is essential for ChIP-chip experiments. The proposed normalization technique can correct systematic errors and compensate for the lack of mock control data, thus reducing the experimental cost and producing more accurate results.

  11. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Bhandarkar, Manisha, E-mail: manisha@ipr.res.in; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-11-15

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 is envisaged to make the SST-1 storage system centrally available for archiving/retrieving experimental data by authenticated users 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include: handling of the varied data writing speeds of different subsystems to central storage; simultaneous read access to the bulk experimental as well as essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast they will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a SAN based data storage system of ∼10 TB (Terabytes) capacity has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3, to be planned later, will store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  12. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    International Nuclear Information System (INIS)

    Bhandarkar, Manisha; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-01-01

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 is envisaged to make the SST-1 storage system centrally available for archiving/retrieving experimental data by authenticated users 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include: handling of the varied data writing speeds of different subsystems to central storage; simultaneous read access to the bulk experimental as well as essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast they will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a SAN based data storage system of ∼10 TB (Terabytes) capacity has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3, to be planned later, will store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  13. Updating and using the international non-neutron experimental nuclear data base in ''Generalized EXFOR'' format

    International Nuclear Information System (INIS)

    Zhuravleva, G.M.; Chukreev, F.E.

    1985-10-01

    A software system for the automatic preparation of non-formalized textual information for the international exchange of nuclear data in the ''Generalized Exchange Format (EXFOR)'' is described. The ''Generalized EXFOR'' format is briefly outlined and data are given on the size of the international non-neutron experimental data base in this format. (author)

  14. Inference of missing data and chemical model parameters using experimental statistics

    Science.gov (United States)

    Casey, Tiernan; Najm, Habib

    2017-11-01

    A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NCSA under contract DE-NA-0003525.
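The approximate Bayesian computation ingredient mentioned above can be sketched in its simplest rejection form: propose a parameter from a prior, simulate a hypothetical data set, and keep the parameter when the simulated summary statistic matches the reported one. This toy uses a Gaussian model with invented numbers; the H2-O2 chemistry and Arrhenius rate forms of the work are not modeled.

```python
import random

random.seed(1)
reported_mean = 2.0   # the "published" summary statistic (invented)
noise_sd = 0.5        # assumed measurement noise
tol = 0.05            # ABC acceptance tolerance

accepted = []
for _ in range(5000):
    mu = random.uniform(0.0, 4.0)                          # prior draw
    sim = [random.gauss(mu, noise_sd) for _ in range(30)]  # hypothetical data set
    # Accept the proposal when its summary statistic agrees with the report.
    if abs(sum(sim) / len(sim) - reported_mean) < tol:
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted proposals approximate the posterior over the parameter given only the reported statistics, which is the sense in which "missing" raw data are explored rather than required.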

  15. Discussions on the experimental data covering (EDC) procedure

    International Nuclear Information System (INIS)

    Lee, S.Y.; Ban, C.H.

    2004-01-01

    In describing step 9 of TRAC-CSAU, there is a statement that there is no clear guide with which one can determine the code uncertainty parameters and their statistics using the integral and/or separate effects tests. On the other hand, there is an important requirement stating that the code uncertainty should be evaluated through direct data comparison with the relevant integral-systems and separate-effects experiments at different scales. There are some efforts, in the best-estimate LOCA methodologies, to use the IET and SET data for determining the code uncertainty parameters and their statistics. But it is hard to find a systematic way to relate the code uncertainty parameters and/or their statistics to the results of the direct data comparison, especially in the case of components with multiple code uncertainty parameters. It is essential to develop a procedure that implements the requirement of direct data comparison with SETs and IETs in determining the code uncertainty. In this paper, the Code Accuracy Based Uncertainty Estimation (CABUE) technique is introduced, with an emphasis on the role of Experimental Data Covering in connecting the code accuracy with the overall uncertainty of a code prediction. In contrast to TRAC-CSAU, where the code accuracy is used only for confirmation of conservatism, in CABUE the code accuracy can be represented by the scalable code parameters and their statistics through extensive EDC calculations, which gives several benefits. Since the code accuracy becomes the measure for determining the statistics of the code parameters, a code with better accuracy naturally provides a smaller overall calculational uncertainty. This is the full implementation of the requirement of 'direct data comparison'.
    Adopting the EDC procedure in CABUE, where a distribution-free percentile estimation technique with simple random sampling calculations is applied uniformly at the various levels, makes it easy to choose the number of code uncertainty parameters.
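The "distribution-free percentile estimation with simple random sampling" ingredient is, in the usual best-estimate-methodology sense, a Wilks-style tolerance limit: the largest of n independent code runs bounds the p-quantile of the output with confidence c whenever 1 − pⁿ ≥ c, whatever the output distribution. A minimal sketch (generic, not the paper's specific procedure):

```python
def runs_needed(p=0.95, c=0.95):
    """Smallest n such that the maximum of n i.i.d. code runs is a one-sided
    upper tolerance limit for the p-quantile at confidence c: 1 - p**n >= c.
    Holds for any continuous output distribution (first-order statistic)."""
    n = 1
    while 1.0 - p ** n < c:
        n += 1
    return n
```

With p = c = 0.95 this gives the familiar 59-run figure used in best-estimate LOCA analyses, and 299 runs for a 99/95 statement.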

  16. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    Comparison of the results of the calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  17. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    Comparison of the results of the calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  18. A semantic web approach applied to integrative bioinformatics experimentation: a biological use case with genomics data.

    NARCIS (Netherlands)

    Post, L.J.G.; Roos, M.; Marshall, M.S.; van Driel, R.; Breit, T.M.

    2007-01-01

    The numerous public data resources make integrative bioinformatics experimentation increasingly important in life sciences research. However, it is severely hampered by the way the data and information are made available. The semantic web approach enhances data exchange and integration by providing

  19. iLAP: a workflow-driven software for experimental protocol development, data acquisition and analysis

    Directory of Open Access Journals (Sweden)

    McNally James

    2009-01-01

    Background: In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way, and has developed laboratory information management systems (LIMS) for both raw and processed data. On the other hand, electronic notebooks were developed to record and manage scientific data and facilitate data sharing. Software which enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques. Results: We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and the flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/. Conclusion: iLAP is a flexible and versatile information management system, which has the potential to close the gap between electronic notebooks and LIMS and can therefore be of great value for a broad scientific community.

  20. The use of experimental data in an MTR-type nuclear reactor safety analysis

    Science.gov (United States)

    Day, Simon E.

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type ( i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core.

  1. The use of experimental data in an MTR-type nuclear reactor safety analysis

    International Nuclear Information System (INIS)

    Day, S.E.

    2006-01-01

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core. (author)

  2. The use of experimental data in an MTR-type nuclear reactor safety analysis

    Energy Technology Data Exchange (ETDEWEB)

Day, S.E.

    2006-07-01

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core. (author)

  3. Archival and Dissemination of the U.S. and Canadian Experimental Nuclear Reaction Data (EXFOR Project)

    Science.gov (United States)

    Pritychenko, Boris; Hlavac, Stanislav; Schwerer, Otto; Zerkin, Viktor

    2017-09-01

The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to a wealth of low- and intermediate-energy nuclear reaction physics data. This resource includes numerical data sets and bibliographical information for more than 22,000 experiments conducted since the beginning of nuclear science. Analysis, recovery and archiving of the experimental data sets will be discussed. Examples of recent developments in data renormalization, uploads and inverse reaction calculations for nuclear science and technology applications will be presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development and research activities. It is publicly available at the National Nuclear Data Center website http://www.nndc.bnl.gov/exfor and the International Atomic Energy Agency mirror site http://www-nds.iaea.org/exfor. This work was sponsored in part by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.

  4. Data handling at EBR-II [Experimental Breeder Reactor II] for advanced diagnostics and control work

    International Nuclear Information System (INIS)

    Lindsay, R.W.; Schorzman, L.W.

    1988-01-01

Improved control and diagnostics systems are being developed for nuclear and other applications. The Experimental Breeder Reactor II (EBR-II) Division of Argonne National Laboratory has embarked on a project to upgrade the EBR-II control and data handling systems. The nature of the work at EBR-II requires that reactor plant data be readily available to experimenters, and that the plant control systems be flexible enough to accommodate testing and development needs. In addition, operational concerns require that improved operator interfaces and computerized diagnostics be included in the reactor plant control system. The EBR-II systems have been upgraded to incorporate new data handling computers and new digital plant process controllers, and new displays and diagnostics are being developed and tested for permanent use. In addition, improved engineering surveillance will be possible with the new systems.

  5. Use of the dynamic stiffness method to interpret experimental data from a nonlinear system

    Science.gov (United States)

    Tang, Bin; Brennan, M. J.; Gatti, G.

    2018-05-01

The interpretation of experimental data from nonlinear structures is challenging, primarily because of dependency on the types and levels of excitation, and coupling issues with test equipment. In this paper, the dynamic stiffness method, which is commonly used in the analysis of linear systems, is applied to interpret the data from a vibration test of a controllable compressed beam structure coupled to a test shaker. For a single mode of the system, this method facilitates the separation of mass, stiffness and damping effects, including nonlinear stiffness effects. It also allows the dynamics of the shaker to be separated from those of the structure under test. The approach needs to be used with care, and is only suitable if the nonlinear system has a response that is predominantly at the excitation frequency. For the structure under test, the raw experimental data revealed little about the underlying causes of the dynamic behaviour. However, the dynamic stiffness approach allowed the effects due to the nonlinear stiffness to be easily determined.
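The separation the authors describe can be illustrated with a single-degree-of-freedom sketch: the dynamic stiffness K(w) = F/X = k - m*w^2 + i*w*c has a real part linear in w^2 (yielding mass and stiffness) and an imaginary part proportional to damping. The parameter values and the simulated "measurement" below are assumptions for the sketch, not the test-rig values from the paper.

```python
import numpy as np

# Assumed SDOF parameters; the receptance H = X/F is simulated, standing
# in for a measured frequency response function.
m, c, k = 2.0, 4.0, 1.0e4
w = np.linspace(10.0, 100.0, 200)            # rad/s
H = 1.0 / (k - m * w**2 + 1j * w * c)        # simulated receptance "measurement"

Kd = 1.0 / H                                  # dynamic stiffness from the "data"
# Real part is linear in w^2: slope = -m, intercept = k
slope, intercept = np.polyfit(w**2, Kd.real, 1)
m_est, k_est = -slope, intercept
c_est = np.mean(Kd.imag / w)                  # imaginary part gives damping
```

In the raw receptance these three effects overlap; inverting to dynamic stiffness is what makes them separable by simple straight-line fitting.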

  6. Chemometrics in analytical chemistry-part I: history, experimental design and data analysis tools.

    Science.gov (United States)

    Brereton, Richard G; Jansen, Jeroen; Lopes, João; Marini, Federico; Pomerantsev, Alexey; Rodionova, Oxana; Roger, Jean Michel; Walczak, Beata; Tauler, Romà

    2017-10-01

    Chemometrics has achieved major recognition and progress in the analytical chemistry field. In the first part of this tutorial, major achievements and contributions of chemometrics to some of the more important stages of the analytical process, like experimental design, sampling, and data analysis (including data pretreatment and fusion), are summarised. The tutorial is intended to give a general updated overview of the chemometrics field to further contribute to its dissemination and promotion in analytical chemistry.

  7. Experimental and numerical analysis for potential heat reuse in liquid cooled data centres

    International Nuclear Information System (INIS)

    Carbó, Andreu; Oró, Eduard; Salom, Jaume; Canuto, Mauro; Macías, Mario; Guitart, Jordi

    2016-01-01

Highlights: • The potential heat reuse of a liquid cooled data centre has been characterized. • Dynamic behaviours of a liquid cooled data centre have been studied. • A dynamic energy model of liquid cooled data centres is developed. • The dynamic energy model has been validated with experimental data. • A relation between server usage and consumption was developed for different IT loads. - Abstract: The rapid growth of the data centre industry has stimulated the interest of both researchers and professionals in reducing the energy consumption and carbon footprint of these unique infrastructures. The implementation of energy efficiency strategies and the use of renewables play an important role in reducing the overall data centre energy demand. Information Technology (IT) equipment produces vast amounts of heat which must be removed, and waste heat recovery is therefore a promising energy efficiency strategy to be studied in detail. To evaluate the potential of heat reuse, a unique liquid cooled data centre test bench was designed and built. An extensive thermal characterization under different scenarios was performed. The effective liquid cooling capacity is affected by the inlet water temperature: the lower the inlet water temperature, the higher the liquid cooling capacity; however, the outlet water temperature will also be lower. Therefore, the requirements of the heat reuse application play an important role in the optimization of the cooling configuration. The experimental data were then used to validate a dynamic energy model developed in TRNSYS. This model is able to predict the behaviour of liquid cooled data centres and can be used to study the potential compatibility between large data centres and different heat reuse applications. The model also incorporates normalized power consumption profiles for heterogeneous workloads, derived from realistic IT loads.

  8. MONJU experimental data analysis and its feasibility evaluation to build up the standard data base for large FBR nuclear core design

    International Nuclear Information System (INIS)

    Sugino, K.; Iwai, T.

    2006-01-01

MONJU experimental data analysis was performed using the detailed calculation scheme for fast reactor cores developed in Japan. Subsequently, the suitability of the MONJU integral data was evaluated by the cross-section adjustment technique for use in FBR nuclear core design. It is concluded that the MONJU integral data are quite valuable for building up the standard data base for large FBR nuclear core design. In addition, it is found that application of the updated data base may considerably improve the prediction accuracy of neutronic parameters for MONJU. (authors)

  9. Universal Implicatures and Free Choice Effects: Experimental Data

    Directory of Open Access Journals (Sweden)

    Emmanuel Chemla

    2009-05-01

Full Text Available Universal inferences like (i) have been taken as evidence for a local/syntactic treatment of scalar implicatures (i.e. theories where the enrichment of "some" into "some but not all" can happen sub-sententially): (i) Everybody read some of the books --> Everybody read [some but not all the books]. In this paper, I provide experimental evidence which casts doubt on this argument. The counter-argument relies on a new set of data involving free choice inferences (a sub-species of scalar implicatures) and negative counterparts of (i), namely sentences with the quantifier "no" instead of "every". The results show that the globalist account of scalar implicatures is incomplete (mainly because of free choice inferences), but that the distribution of universal inferences made available by the localist move remains incomplete as well (mainly because of the negative cases). doi:10.3765/sp.2.2

  10. Calculation and comparison with experimental data of cascade curves for liquid xenon

    International Nuclear Information System (INIS)

    Strugal'skij, Z.S.; Yablonskij, Z.

    1975-01-01

Cascade curves calculated by different methods are compared with experimental data for showers caused by gamma-quanta with energies from 40 to 2000 MeV in liquid xenon. The minimum energy of shower electrons (cut-off energy) taken into account in the experiment is 3.1±1.2 MeV, whereas the calculated cascade curves are given for energies ranging from 40 to 4000 MeV at cut-off energies of 2.3, 3.5 and 4.7 MeV. The depth of the shower development is reckoned from the point of generation of the gamma-quanta which create the showers. Cascade curves are calculated by the moment method with consideration of three moments. The following physical processes are taken into account: generation of electron-positron pairs, the Compton effect, bremsstrahlung, and ionization losses. The dependences of the mean number of particles on the depth of shower development are obtained from measurements of photographs taken with a xenon bubble chamber. Similar dependences calculated by the moment and Monte Carlo methods are presented. From the data analysis it follows that the calculation provides the correct position of the shower development maximum, but the different calculation methods yield drastically different results at small depths of shower development. The Monte Carlo method provides better agreement with the experimental data.
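As a much simpler stand-in for the moment-method and Monte Carlo calculations compared above, the classic Heitler toy model reproduces the qualitative shape of a cascade curve: the particle count doubles every radiation length until the energy per particle falls below a critical energy, after which the shower dies out. The function name and the critical-energy value below are illustrative assumptions, not quantities from the paper.

```python
import math

# Heitler toy model of an electromagnetic cascade (illustrative only):
# N(t) grows as 2^t up to the shower maximum t_max = log2(E0/Ec), then
# is assumed to fall off symmetrically.
def heitler_profile(e0_mev, ec_mev=10.0, depths=range(0, 15)):
    """Particle number versus depth t (in radiation lengths)."""
    t_max = math.log(e0_mev / ec_mev) / math.log(2.0)  # shower maximum
    profile = []
    for t in depths:
        n = 2.0**t if t <= t_max else 2.0**(2 * t_max - t)
        profile.append(n)
    return profile, t_max

profile, t_max = heitler_profile(1000.0)  # a 1000-MeV gamma, assumed Ec = 10 MeV
```

Even this crude model shows why the position of the shower maximum is robust (it depends only logarithmically on the primary energy) while the early part of the curve is sensitive to modelling detail.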

  11. Collection of creep fatigue laws and their comparison with experimental data

    International Nuclear Information System (INIS)

    Rieunier, J.B.; Dufresne, J.

    1982-07-01

A systematic investigation has been undertaken to collect the main models describing the phenomena of creep-fatigue interaction. A total of 13 models was collected. Simultaneously, 660 experimental data points on 304 stainless steel were collected and compared with the results obtained from the theoretical models. The conclusion is that none of these models correctly describes all the phenomena considered (imposed strain or stress, hold time, two strain levels, etc.), but each of these phenomena is well represented by some of the laws.
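One of the classical laws collected in such surveys is the linear damage-summation (time-fraction) rule, which predicts failure when the fatigue and creep damage fractions sum to one. A minimal sketch, with invented cycle counts, fatigue lives, hold times and rupture times:

```python
# Linear damage-summation (time-fraction) rule for creep-fatigue
# interaction: D = sum(n_i/N_i) + sum(t_j/t_rj); failure when D >= 1.
# All numbers below are hypothetical.
def creep_fatigue_damage(cycles, fatigue_life, hold_times, rupture_times):
    d_fatigue = sum(n / N for n, N in zip(cycles, fatigue_life))
    d_creep = sum(t / tr for t, tr in zip(hold_times, rupture_times))
    return d_fatigue + d_creep

D = creep_fatigue_damage(
    cycles=[500, 200], fatigue_life=[2000, 1000],   # n_i, N_i
    hold_times=[50.0], rupture_times=[400.0],       # t_j, t_rj (hours)
)
# D = 0.25 + 0.20 + 0.125 = 0.575 (below the failure criterion of 1)
```

More elaborate interaction models in such collections typically replace the straight-line D = 1 criterion with a bilinear or nonlinear interaction locus.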

  12. Probing the Structure and Dynamics of Proteins by Combining Molecular Dynamics Simulations and Experimental NMR Data.

    Science.gov (United States)

    Allison, Jane R; Hertig, Samuel; Missimer, John H; Smith, Lorna J; Steinmetz, Michel O; Dolenc, Jožica

    2012-10-09

    NMR experiments provide detailed structural information about biological macromolecules in solution. However, the amount of information obtained is usually much less than the number of degrees of freedom of the macromolecule. Moreover, the relationships between experimental observables and structural information, such as interatomic distances or dihedral angle values, may be multiple-valued and may rely on empirical parameters and approximations. The extraction of structural information from experimental data is further complicated by the time- and ensemble-averaged nature of NMR observables. Combining NMR data with molecular dynamics simulations can elucidate and alleviate some of these problems, as well as allow inconsistencies in the NMR data to be identified. Here, we use a number of examples from our work to highlight the power of molecular dynamics simulations in providing a structural interpretation of solution NMR data.

  13. Validation of the CATHARE2 code against experimental data from Brayton-cycle plants

    International Nuclear Information System (INIS)

    Bentivoglio, Fabrice; Tauveron, Nicolas; Geffraye, Genevieve; Gentner, Herve

    2008-01-01

In recent years the Commissariat a l'Energie Atomique (CEA) has commissioned a wide range of feasibility studies of future advanced nuclear reactors, in particular gas-cooled reactors (GCR). The thermohydraulic behaviour of these systems is a key issue for, among other things, the design of the core, the assessment of thermal stresses, and the design of decay heat removal systems. These studies therefore require efficient and reliable simulation tools capable of modelling the whole reactor, including the core, the core vessel, piping, heat exchangers and turbo-machinery. CATHARE2 is a thermal-hydraulic 1D reference safety code developed and extensively validated for the French pressurized water reactors. It has recently been adapted to deal also with gas-cooled reactor applications. In order to validate CATHARE2 for these new applications, CEA has initiated an ambitious long-term experimental program. The foreseen experimental facilities range from small-scale loops for physical correlations to component technology and system demonstration loops. In the short-term perspective, CATHARE2 is being validated against existing experimental data, in particular from the German power plants Oberhausen I and II. These facilities were both operated by the German utility Energie Versorgung Oberhausen (E.V.O.), and their power conversion systems resemble high-temperature reactor concepts: Oberhausen I is a 13.75-MWe Brayton-cycle air turbine plant, and Oberhausen II is a 50-MWe Brayton-cycle helium turbine plant. The paper presents these two plants, the adopted CATHARE2 modelling, and a comparison between experimental data and code results for both steady-state and transient cases.

  14. Presentation and comparison of experimental critical heat flux data at conditions prototypical of light water small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Greenwood, M.S., E-mail: 1greenwoodms@ornl.gov; Duarte, J.P.; Corradini, M.

    2017-06-15

    Highlights: • Low mass flux and moderate to high pressure CHF experimental results are presented. • Facility uses chopped-cosine heater profile in a 2 × 2 square bundle geometry. • The EPRI, CISE-GE, and W-3 CHF correlations provide reasonable average CHF prediction. • Neural network analysis predicts experimental data and demonstrates utility of method. - Abstract: The critical heat flux (CHF) is a two-phase flow phenomenon which rapidly decreases the efficiency of the heat transfer performance at a heated surface. This phenomenon is one of the limiting criteria in the design and operation of light water reactors. Deviations of operating parameters greatly alters the CHF condition and must be experimentally determined for any new parameters such as those proposed in small modular reactors (SMR) (e.g. moderate to high pressure and low mass fluxes). Current open literature provides too little data for functional use at the proposed conditions of prototypical SMRs. This paper presents a brief summary of CHF data acquired from an experimental facility at the University of Wisconsin-Madison designed and built to study CHF at high pressure and low mass flux ranges in a 2 × 2 chopped cosine rod bundle prototypical of conceptual SMR designs. The experimental CHF test inlet conditions range from pressures of 8–16 MPa, mass fluxes of 500–1600 kg/m2 s, and inlet water subcooling from 250 to 650 kJ/kg. The experimental data is also compared against several accepted prediction methods whose application ranges are most similar to the test conditions.
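A common way to quantify how well a correlation predicts such a data set is the mean and standard deviation of the measured-to-predicted (M/P) critical heat flux ratio. The sketch below uses invented values, not the Wisconsin bundle data or the EPRI/CISE-GE/W-3 correlation outputs.

```python
import statistics

# Hypothetical measured CHF values and correlation predictions (MW/m^2).
measured = [1.10, 0.95, 1.23, 1.05, 0.88, 1.16]
predicted = [1.00, 1.00, 1.15, 1.10, 0.95, 1.05]

# M/P ratio statistics: a mean near 1 indicates an unbiased correlation,
# and the standard deviation measures its scatter over the data set.
mp = [m / p for m, p in zip(measured, predicted)]
mp_mean = statistics.mean(mp)
mp_std = statistics.stdev(mp)
```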

  15. Compilation of reactor-physical data of the AVR experimental reactor for 1982

    International Nuclear Information System (INIS)

    Werner, H.; Wawrzik, U.; Grotkamp, T.; Buettgen, I.

    1983-12-01

Since the end of 1981 the calculation model AVR-80 has been taken as a basis for compiling reactor-physical data of the AVR experimental reactor. A brief outline of the operation history of 1982 is given, including the beginning of a large-scale experiment dealing with change-over from high enriched uranium to low enriched uranium. Calculations relative to spectral shift, diffusion, temperature, burnup, and recirculation of the fuel elements are described in brief. The essential results of neutron-physical and thermodynamic calculations and the characteristical data of the various types of fuel used are shown in tables and illustrations. (RF)

  16. An Experimental Seismic Data and Parameter Exchange System for Tsunami Warning Systems

    Science.gov (United States)

    Hoffmann, T. L.; Hanka, W.; Saul, J.; Weber, B.; Becker, J.; Heinloo, A.; Hoffmann, M.

    2009-12-01

For several years GFZ Potsdam has been operating a global earthquake monitoring system. Since the beginning of 2008, this system has also been used as an experimental seismic background data center for two different regional Tsunami Warning Systems (TWS): the IOTWS (Indian Ocean) and the interim NEAMTWS (NE Atlantic and Mediterranean). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project and capable of acquiring, archiving and processing real-time data feeds, was extended for export and import of individual processing results within the two clusters of connected SC3 systems. Therefore not only real-time waveform data but also processing results are routed to the attached warning centers through GFZ. While the current experimental NEAMTWS cluster consists of SC3 systems in six designated national warning centers in Europe, the IOTWS cluster presently includes seven centers, with another three likely to join in 2009/10. For NEAMTWS purposes, the GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) in Europe was substantially extended by adding many stations from Western European countries, optimizing the station distribution. In parallel to the data collection over the Internet, a GFZ VSAT hub for secured data collection of the EuroMED GEOFON and NEAMTWS backbone network stations became operational, and the first data links were established through this backbone. For the Southeast Asia region, a VSAT hub had already been established in Jakarta in 2006, with some other partner networks connecting to this backbone via the Internet. Since its establishment, the experimental system has had the opportunity to prove its performance in a number of relevant earthquakes. Reliable solutions derived from a minimum of 25 stations were very promising in terms of speed. For important events, automatic alerts were released and disseminated by email and SMS. Manually verified solutions are added as soon as they become available.

  17. Calculated fraction of an incident current pulse that will be accelerated by an electron linear accelerator and comparisons with experimental data

    International Nuclear Information System (INIS)

    Alsmiller, R.G. Jr.; Alsmiller, F.S.; Lewis, T.A.

    1986-05-01

In a series of previous papers, calculated results obtained using a one-dimensional ballistic model were presented to aid in the design of a prebuncher for the Oak Ridge Electron Linear Accelerator. As part of this work, a model was developed to provide limits on the fraction of an incident current pulse that would be accelerated by the existing accelerator. In this paper experimental data on this fraction are presented, and the validity of the previously developed model is tested by comparing calculated and experimental data. Part of the experimental data is used to fix the physical parameters of the model, and good agreement between the calculated results and the rest of the experimental data is then obtained.

  18. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop

    International Nuclear Information System (INIS)

    Ribando, R.J.

    1979-01-01

A comparison is made between computed results and experimental data for a single-phase natural convection test in an experimental sodium loop. The test was conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility, an engineering-scale high temperature sodium loop at the Oak Ridge National Laboratory (ORNL) used for thermal-hydraulic testing of simulated Liquid Metal Fast Breeder Reactor (LMFBR) subassemblies at normal and off-normal operating conditions. Heat generation in the 19-pin assembly during the test was typical of decay heat levels. The test chosen for analysis in this paper was one of seven natural convection runs conducted in the facility using a variety of initial conditions and testing parameters. Specifically, in this test the bypass line was open to simulate a parallel heated assembly, and the test began with a pump coastdown from a small initial forced flow. The computer program used to analyze the test, LONAC (LOw flow and NAtural Convection), is an ORNL-developed, fast-running, one-dimensional, single-phase, finite-difference model used for simulating forced and free convection transients in the THORS loop.
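The physics of such a single-phase natural-convection transient can be caricatured with a lumped loop momentum balance, in which the buoyancy head from the heated section drives the flow against friction. This is a toy sketch with invented parameter values, not the LONAC finite-difference model.

```python
# Lumped natural-circulation loop: I * dw/dt = buoyancy - R * w * |w|,
# integrated with explicit Euler from rest. All values are invented.
rho, beta, g = 850.0, 2.6e-4, 9.81   # sodium-like density (kg/m3), expansion (1/K), gravity
dT, H = 40.0, 2.0                    # hot-to-cold temperature rise (K), thermal-centre height (m)
R = 1.0e3                            # lumped friction coefficient (Pa/(kg/s)^2)
inertia = 500.0                      # lumped loop inertia (1/m)

buoyancy = rho * beta * g * dT * H   # driving pressure head (Pa)
w, dt = 0.0, 0.01                    # mass flow (kg/s), time step (s)
for _ in range(100000):
    w += (buoyancy - R * w * abs(w)) / inertia * dt

# The flow settles at the steady-state balance w = sqrt(buoyancy / R).
```

The same balance explains the pump-coastdown behaviour in the test: as forced flow decays, the flow rate relaxes toward the natural-circulation equilibrium set by buoyancy and friction.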

  19. Program PLOTC4 (Version 86-1). Plot evaluated data from the ENDF/B format and/or experimental data which is in a computation format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1986-09-01

Experimental and evaluated nuclear reaction data are compiled worldwide in the EXFOR and ENDF formats, respectively. The computer program PLOTC4 described in the present document plots data from both formats; EXFOR data must first be converted to a ''computation format''. The program is available free of charge from the IAEA Nuclear Data Section, upon request. (author)

  20. Implementation of 3D tomographic visualisation through planar ICT data from experimental gamma-ray tomographic system

    International Nuclear Information System (INIS)

    Umesh Kumar; Singh, Gursharan; Ravindran, V.R.

    2001-01-01

Industrial Computed Tomography (ICT) is one of the latest methods of non-destructive testing and examination. Different prototypes of the Computed Industrial Tomographic Imaging System (CITIS) have been developed and experimental data have been generated in the Isotope Applications Division. The experimental gamma-ray tomographic imaging system comprises a beam generator containing approx. 220 GBq (6 Ci) of 137Cs, a single NaI(Tl)-PMT integral assembly in thick shielding with associated electronics, a stepper-motor-controlled mechanical manipulator, collimators and the required software. CITIS data are normally acquired in one orientation of the sample. It may sometimes be required to view a tomographic plane in a different orientation. Also, 3D visualisation may be required from the available 2D data set. All of this can be achieved by processing the available data. We have customized some of the routines provided by the IDL (Interactive Data Language) package to suit our requirements. The present paper discusses the methodology adopted for this purpose, with an illustrative example. (author)
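The re-orientation and 3D-visualisation step amounts to stacking the planar ICT slices into a volume array and taking index slices along other axes. A minimal sketch with synthetic slice data, using NumPy as a stand-in for the IDL routines mentioned above:

```python
import numpy as np

# Synthetic placeholder slices standing in for reconstructed ICT planes.
n_slices, rows, cols = 8, 64, 64
slices = [np.random.rand(rows, cols) for _ in range(n_slices)]

# Stack acquired (axial) slices into a (z, y, x) volume; planes in other
# orientations are then simple index slices of the volume.
volume = np.stack(slices, axis=0)
reoriented_y = volume[:, 32, :]   # plane at a fixed y index
reoriented_x = volume[:, :, 10]   # plane at a fixed x index
```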

  1. Experimental data and boundary conditions for a Double-Skin Facade building in external air curtain mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

Frequent discussions of double skin façade energy performance have started a dialogue about the methods, models and tools for simulation of double façade systems and the reliability of their results. Their reliability will increase with empirical validation of the software. Detailed experimental work … was carried out in a full scale test facility, ‘The Cube’, in order to compile three sets of high quality experimental data for validation purposes. The data sets are available for preheating mode, external air curtain mode and transparent insulation mode. The objective of this article is to provide the reader … . This covers such problem areas as measurements of naturally induced air flow, measurements of air temperature under direct solar radiation exposure, etc. Finally, in order to create a solid foundation for software validation, the uncertainty and limitations in the experimental results are discussed. In part …

  2. Acquiring, recording, and analyzing pathology data from experimental mice: an overview.

    Science.gov (United States)

    Scudamore, Cheryl L

    2014-03-21

    Pathology is often underutilized as an end point in mouse studies in academic research because of a lack of experience and expertise. The use of traditional pathology techniques including necropsy and microscopic analysis can be useful in identifying the basic processes underlying a phenotype and facilitating comparison with equivalent human diseases. This overview aims to provide a guide and reference to the acquisition, recording, and analysis of high-quality pathology data from experimental mice in an academic research setting. Copyright © 2014 John Wiley & Sons, Inc.

  3. Experimental data and EXFOR

    International Nuclear Information System (INIS)

    Plompen, A.

    2012-01-01

Nuclear data needs are first of all determined by the applications in which they are used. In the field of nuclear fission energy, recent developments have led to renewed emphasis on nuclear safety and security, including the issue of nuclear waste, and to reduced emphasis on the energy sustainability and economic viability of the various options in nuclear energy. In practice, this requires a shift in attention: more emphasis on data needs related to light-water reactors, both currently operating and under construction, continuing emphasis on data related to minimization of high level nuclear waste, and reduced emphasis on innovative options for nuclear energy sustainability such as fast reactors. The increased emphasis on the safety of nuclear systems places high demands on the predictability of their performance and the quality of their safety assessments. Verification and validation schemes for safety assessments and design methods require nuclear data that allow establishment of the margins associated with estimates of diverse quantities such as reactivity and reactivity coefficients, shielding, inventory build-up, and radiation dose. Sensitivity and uncertainty analyses for key nuclear system parameters point to strict requirements on the uncertainties of important nuclear data. In particular, these help prioritize nuclear data development by isotope, reaction and energy range; a key asset at a time when resources for research in the nuclear field are under strain, while the demands for reliability and accuracy are higher than ever.

  4. Numerical Validation of a Vortex Model against Experimental Data on a Straight-Bladed Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Eduard Dyachuk

    2015-10-01

Full Text Available Cyclic blade motion during operation of vertical axis wind turbines (VAWTs) imposes challenges on simulation models of VAWT aerodynamics. A two-dimensional vortex model is validated against new experimental data from a 12-kW straight-bladed VAWT operated at an open site. The results for the normal force on one blade are analyzed. The model is assessed against the measured data over a wide range of tip speed ratios, from 1.8 to 4.6. The predicted results within one revolution have a similar shape and magnitude to the measured data, though the model does not reproduce every detail of the experimental data. The present model can be used when dimensioning the turbine for maximum loads.
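Validation statements like "similar shape and magnitude" are often backed by summary metrics such as the RMS error and the correlation between modelled and measured normal force over one revolution. The two signals below are synthetic stand-ins, not the 12-kW turbine data.

```python
import numpy as np

# Synthetic "measured" and "modelled" normal-force traces over one
# revolution (azimuth theta), invented for illustration.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
measured = 100.0 * np.sin(theta) + 20.0 * np.sin(2.0 * theta)
modelled = 95.0 * np.sin(theta) + 25.0 * np.sin(2.0 * theta)

# Root-mean-square error quantifies magnitude mismatch; the correlation
# coefficient quantifies agreement in shape over the revolution.
rmse = np.sqrt(np.mean((modelled - measured) ** 2))
corr = np.corrcoef(measured, modelled)[0, 1]
```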

  5. CFD and experimental data of closed-loop wind tunnel flow

    Directory of Open Access Journals (Sweden)

    John Kaiser Calautit

    2016-06-01

    The data presented in this article were the basis for the study reported in the research article entitled ‘A validated design methodology for a closed loop subsonic wind tunnel’ (Calautit et al., 2014) [1], which presented a systematic investigation into the design, simulation and analysis of flow parameters in a wind tunnel using Computational Fluid Dynamics (CFD). The authors evaluated the accuracy of replicating the flow characteristics for which the wind tunnel was designed using numerical simulation. Here, we detail the numerical and experimental set-up for the analysis of the closed-loop subsonic wind tunnel with an empty test section.

  6. Experimental data of thermal cracking of soybean oil and blends with hydrogenated fat

    Directory of Open Access Journals (Sweden)

    R.F. Beims

    2018-04-01

    This article presents experimental data on the thermal cracking of soybean oil and blends with hydrogenated fat. Thermal cracking experiments were carried out in a plug flow reactor with pure soybean oil and with two blends with hydrogenated fat, used to reduce the degree of unsaturation of the feedstock; the same operational conditions were considered throughout. The data obtained showed a 14% reduction in total aromatics content for the feedstock with the lowest degree of unsaturation. Other physicochemical data are presented, such as iodine index, acid index, density and kinematic viscosity. A distillation curve was measured and compared with the curve from a petroleum sample.

  7. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro

    2001-01-01

    Validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear databases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in the algorithms of both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a prediction accuracy of decay heat within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the ⁹²Mo(n,2n)⁹¹ᵍMo reaction in FENDL, and lack of activation cross section data, e.g., the ¹³⁸Ba(n,2n)¹³⁷ᵐBa reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in decay data. (author)

  8. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    Validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear databases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in the algorithms of both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a prediction accuracy of decay heat within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the ⁹²Mo(n,2n)⁹¹ᵍMo reaction in FENDL, and lack of activation cross section data, e.g., the ¹³⁸Ba(n,2n)¹³⁷ᵐBa reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in decay data. (author)

  9. System for the experimental data acquisition, processing and output on the base of the double-input CAMAC modules

    International Nuclear Information System (INIS)

    Avramenko, A.E.; Ariskin, N.I.; Samojlov, V.V.

    1983-01-01

    A system for experimental data acquisition, processing and output developed on the basis of double-input CAMAC modules is described. Use of a double-input on-line memory unit with a capacity of up to 64K bytes for experimental data storage, together with an external input controller, gave a data input/output cycle time in the storage of 1.6 μs. The rates of experimental data acquisition and output do not depend on the computer response time or the CAMAC cycle duration; they are determined only by the capabilities of the functional modules. Overlapping of data acquisition, processing and output operations is possible. A library of subroutines supporting processing in an on-line system with the SM-4, SM-3 and ''Electronika-60'' computers has been developed for the system. Subroutines of this library can be called from code written in FORTRAN and MACROASSEMBLER, and they provide: input/output to/from the computer buffer storage, synchronization of input/output operations, readout from the buffer storage to the computer storage, and recording of data from the storage to the buffer storage.

  10. Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis

    Science.gov (United States)

    Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.

    2017-06-01

    Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. R&T elaboration to extract the absorption spectra was conducted by two approximate methods (single pass analysis, SPA, and double pass analysis, DPA), followed by Eg evaluation through a linear fit of Tauc or Cody plots. Direct fitting of the R&T spectra through a Tauc-Lorentz oscillator model is used as comparison. Methods and data are also discussed in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach combined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
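The Tauc-plot step described in this abstract (a linear fit whose extrapolation to zero absorption yields Eg) can be sketched on synthetic data. This is a generic illustration of the method, not the authors' code; the bandgap value and absorption constant below are invented:

```python
# Tauc plot for an indirect-allowed transition: (alpha*E)^(1/2) is linear
# in photon energy E above the gap; extrapolating to zero gives Eg.
import math

E_g_true = 2.0   # assumed bandgap (eV) used to generate synthetic data
C = 1.0e5        # arbitrary absorption prefactor

# Synthetic absorption obeying alpha*E = C*(E - Eg)^2 above the gap
energies = [E_g_true + 0.05 * i for i in range(1, 21)]       # eV
alpha = [C * (E - E_g_true) ** 2 / E for E in energies]      # cm^-1

# Tauc variable y = (alpha*E)^(1/2); ordinary least-squares line y = a*E + b
x = energies
y = [math.sqrt(a * E) for a, E in zip(alpha, x)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

# Extrapolate the fitted line to y = 0 to estimate the bandgap
E_g_fit = -intercept / slope
print(round(E_g_fit, 3))
```

Because the synthetic data follow the Tauc form exactly, the fit recovers the assumed gap; on real spectra the choice of the linear fitting window is the delicate step the paper discusses.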

  11. An Experimental Seismic Data and Parameter Exchange System for Interim NEAMTWS

    Science.gov (United States)

    Hanka, W.; Hoffmann, T.; Weber, B.; Heinloo, A.; Hoffmann, M.; Müller-Wrana, T.; Saul, J.

    2009-04-01

    In 2008, GFZ Potsdam started to operate its global earthquake monitoring system as an experimental seismic background data centre for the interim NEAMTWS (NE Atlantic and Mediterranean Tsunami Warning System). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project, was extended to test the export and import of individual processing results within a cluster of SC3 systems. The initiated NEAMTWS SC3 cluster presently consists of the 24/7 seismic services at IMP, IGN, LDG/EMSC and KOERI, whereas INGV and NOA are still pending. The GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) was substantially extended by many stations from Western European countries, optimizing the station distribution for NEAMTWS purposes. To complement the public seismic network (VEBSN - Virtual European Broadband Seismic Network), some attached centres provided additional private stations for NEAMTWS usage. In parallel to data collection over the Internet, the GFZ VSAT hub for secured data collection from the EuroMED GEOFON and NEAMTWS backbone network stations became operational and the first data links were established. In 2008 the experimental system could already prove its performance, since a number of relevant earthquakes occurred in the NEAMTWS area. The results are very promising in terms of speed, as the automatic alerts (reliable solutions based on a minimum of 25 stations and disseminated by emails and SMS) were issued within 2 1/2 to 4 minutes for Greece and 5 minutes for Iceland. They are also promising in terms of accuracy, since epicentre coordinates, depth and magnitude estimates were sufficiently accurate from the very beginning, usually do not differ substantially from the final solutions, and provide a good starting point for the operations of the interim NEAMTWS. However, although an automatic seismic system is a good first step, 24/7 manned RTWCs are mandatory for regular manual verification.

  12. Field experimental data for crop modeling of wheat growth response to nitrogen fertilizer, elevated CO2, water stress, and high temperature

    Science.gov (United States)

    Field experimental data from five experiments covering a wide range of growing conditions are assembled for wheat growth and cropping systems modeling. The data include (i) an experiment on interactive effects of elevated CO2 by water a...

  13. A multi-agent architecture for sharing knowledge and experimental data about waste water treatment plants through the Internet

    International Nuclear Information System (INIS)

    Abu Yaman, I. R.; Kerckhoffs, J. E.

    1998-01-01

    In this paper, we present a first prototype of a local multi-agent architecture for the sharing of knowledge and experimental data about waste water treatment plants through the Internet, or more specifically the WWW. Using a Web browser such as Netscape, a user can have access to a CLIPS expert system (advising on waste water cleaning technologies) and experimental data files. The discussed local prototype is part of a proposed global agent architecture. (authors)

  14. THE ART OF COLLECTING EXPERIMENTAL DATA INTERNATIONALLY: EXFOR, CINDA AND THE NRDC NETWORK

    International Nuclear Information System (INIS)

    HENRIKSSON, H.; SCHWERER, O.; ROCHMAN, D.; MIKHAYLYUKOVA, M.V.; OTUKA, N.

    2007-01-01

    The world-wide network of nuclear reaction data centers (NRDC) has, for about 40 years, provided data services to the scientific community. This network covers all types of nuclear reaction data, including neutron-induced, charged-particle-induced, and photonuclear data, used in a wide range of applications, such as fission reactors, accelerator driven systems, fusion facilities, nuclear medicine, materials analysis, environmental monitoring, and basic research. The 13 nuclear data centers now included in the NRDC divide the effort of compilation and distribution by particular types of reactions and/or geographic regions all over the world. A central activity of the network is the collection and compilation of experimental nuclear reaction data and the related bibliographic information in the EXFOR and CINDA databases. Many of the individual data centers also distribute other types of nuclear data information, including evaluated data libraries, nuclear structure and decay data, and nuclear data reports. The network today ensures the world-wide transfer of information and coordinated evolution of an important source of nuclear data for current and future nuclear applications

  15. The art of collecting experimental data internationally: EXFOR, CINDA and the NRDC network

    International Nuclear Information System (INIS)

    Henriksson, H.; Schwerer, O.; Rochman, D.; Mikhaylyukova, M.V.; Otuka, N.

    2008-01-01

    The world-wide network of nuclear reaction data centres (NRDC) has, for about 40 years, provided data services to the scientific community. This network covers all types of nuclear reaction data, including neutron-induced, charged-particle-induced, and photonuclear data, used in a wide range of applications, such as fission reactors, accelerator driven systems, fusion facilities, nuclear medicine, materials analysis, environmental monitoring, and basic research. The 13 nuclear data centres now included in the NRDC divide the effort of compilation and distribution by particular types of reactions and/or geographic regions all over the world. A central activity of the network is the collection and compilation of experimental nuclear reaction data and the related bibliographic information in the EXFOR and CINDA databases. Many of the individual data centres also distribute other types of nuclear data information, including evaluated data libraries, nuclear structure and decay data, and nuclear data reports. The network today ensures the world-wide transfer of information and coordinated evolution of an important source of nuclear data for current and future nuclear applications. (authors)

  16. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
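As a rough illustration of why clustering in only one arm changes the required sample size, the textbook design-effect correction for a clustered arm can be sketched as follows. This is the standard fully clustered formula, not the paper's partially nested derivation, and the function name and parameter values are hypothetical:

```python
# Sample size per arm for detecting a mean difference `delta` with common
# SD `sigma`, inflating the individually randomized result by the design
# effect 1 + (m - 1) * icc for groups of size m with intraclass
# correlation icc.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, m, icc, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)          # two-sided significance
    z_b = z(power)                  # target power
    n_flat = 2 * (sigma / delta) ** 2 * (z_a + z_b) ** 2  # no clustering
    deff = 1 + (m - 1) * icc        # inflation from group-level correlation
    return ceil(n_flat * deff)

# Medium effect (delta/sigma = 0.5), groups of 10, modest icc
print(n_per_arm(delta=0.5, sigma=1.0, m=10, icc=0.05))
```

The partially nested designs in the paper replace this single design effect with correlation structures that differ between arms, but the inflation-by-correlation logic is the same.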

  17. A compilation of experimental burnout data for axial flow of water in rod bundles

    International Nuclear Information System (INIS)

    Chapman, A.G.; Carrard, G.

    1981-02-01

    A compilation has been made of burnout (critical heat flux) data from the results of more than 12,000 tests on 321 electrically-heated, water-cooled experimental assemblies, each simulating, to some extent, the operating or postulated accident conditions in the fuel elements of water-cooled nuclear power reactors. The main geometric characteristics of the assemblies are listed, and references are given for the sources of information from which the data were gathered.

  18. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    International Nuclear Information System (INIS)

    Terzuoli, F.; Galassi, M.C.; Mazzini, D.; D'Auria, F.

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a great challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFD) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mecanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in a PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX) and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. The obtained results suggest the relevance of three-dimensional effects and stress the importance of suitable interface drag modelling.

  19. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    Directory of Open Access Journals (Sweden)

    F. Terzuoli

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a great challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFD) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in a PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX) and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. The obtained results suggest the relevance of three-dimensional effects and stress the importance of suitable interface drag modelling.

  20. Antenatal environmental stress and maturation of the breathing control, experimental data.

    Science.gov (United States)

    Cayetanot, F; Larnicol, N; Peyronnet, J

    2009-08-31

    The nervous respiratory system undergoes postnatal maturation and yet must be functional at birth. Any suboptimal antenatal environment can disturb either its prenatal development and/or its maturation after birth. Here, we briefly summarize some of the major stresses occurring during pregnancy that lead to clinical postnatal respiratory dysfunction, and relate them to experimental models that have been developed in order to better understand the underlying mechanisms implicated in the respiratory dysfunctions observed in neonatal care units. Four sections review our current knowledge based on experimental data. The first deals with metabolic factors such as oxygen and glucose; the second with consumption of psychotropic substances (nicotine, cocaine, alcohol, morphine, cannabis and caffeine); the third with psychoactive molecules commonly consumed by pregnant women within a therapeutic context and/or delivered to premature neonates in critical care units (benzodiazepines, caffeine). In the fourth section, we take into account care protocols involving extended maternal-infant separation due to isolation in incubators. The effects of this stress potentially add to those previously described.

  1. SOFC regulation at constant temperature: Experimental test and data regression study

    International Nuclear Information System (INIS)

    Barelli, L.; Bidini, G.; Cinti, G.; Ottaviano, A.

    2016-01-01

    Highlights: • SOFC operating temperature impacts strongly on its performance and lifetime. • Experimental tests were carried out varying the electric load and the feeding gas mixture. • Three different anodic inlet gases were tested while maintaining constant temperature. • Cathodic air flow rate was used to keep the operating temperature constant. • A regression law was derived from experimental data to regulate the air flow rate. - Abstract: The operating temperature of a solid oxide fuel cell (SOFC) stack is an important parameter to be controlled, as it impacts the SOFC performance and its lifetime. Rapid temperature change implies significant temperature differences between the surface and the mean body, leading to a state of thermal shock. Thermal shock and thermal cycling introduce stress in a material due to temperature differences between the surface and the interior, or between different regions of the cell. In this context, in order to determine a control law that keeps the fuel cell temperature constant while the electrical load and the infeed fuel mixture vary, an experimental activity was carried out on a planar SOFC short stack to analyse the stack temperature. Specifically, three different anodic inlet gas compositions were tested: pure hydrogen, and reformed natural gas with steam to carbon ratios of 2 and 2.5. By processing the obtained results, a regression law was defined to regulate the air flow rate to be provided to the fuel cell so as to keep its operating temperature constant as its operating conditions vary.

  2. Correction of Magnetic Optics and Beam Trajectory Using LOCO Based Algorithm with Expanded Experimental Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.

    2017-03-28

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; and BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of a kicked beam. All of the described features were implemented in the Sixdsim simulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.
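The core of a LOCO-style fit, adjusting model parameters so the modeled orbit response matches the measured one in a least-squares sense, can be sketched with a toy linear problem. The sensitivity matrix and the "gradient errors" below are invented for illustration and have nothing to do with the actual VEPP-2000 lattice:

```python
# Toy LOCO step: recover two hypothetical gradient errors from the
# difference between measured and modeled orbit-response elements,
# assuming a known (linearized) sensitivity matrix J, via normal equations.

def solve_2x2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

# d(ORM element)/d(gradient error), assumed known from the lattice model
J = [[0.8, 0.1],
     [0.2, 0.9],
     [0.5, 0.4]]
true_err = [0.01, -0.02]                      # errors we pretend exist
# "Measured minus modeled" response, generated from the true errors
residual = [sum(Jij * e for Jij, e in zip(row, true_err)) for row in J]

# Least squares via normal equations: (J^T J) x = J^T r
JTJ = [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
JTr = [sum(J[k][i] * residual[k] for k in range(3)) for i in range(2)]
fit = solve_2x2(JTJ, JTr)
print([round(v, 6) for v in fit])
```

Real LOCO iterates this linearized step over thousands of response-matrix elements and parameters (and, per this paper, over the additional data classes), but each iteration is exactly this overdetermined least-squares solve.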

  3. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information from ∼22,000 experiments conducted since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for data set uploads, renormalization, covariance matrices, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and the Russian Federation.

  4. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations

    Directory of Open Access Journals (Sweden)

    Andrea Stocco

    2018-04-01

    This article describes the data analyzed in the paper “Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model” (Stocco et al., 2017) [1]. The data include behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompass individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data include the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  5. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    Science.gov (United States)

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data include behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompass individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data include the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  6. Pore Size Distribution Influence on Suction Properties of Calcareous Stones in Cultural Heritage: Experimental Data and Model Predictions

    Directory of Open Access Journals (Sweden)

    Giorgio Pia

    2016-01-01

    Water sorptivity represents an important property associated with the preservation of porous construction materials. Water movement into the microstructure is responsible for the deterioration of different types of materials and, consequently, for the worsening of indoor comfort. In this context, experimental sorptivity tests are impractical because they require large quantities of material in order to statistically validate the results. For these reasons, the development of an analytical procedure for indirect sorptivity evaluation from MIP data would be highly beneficial. In this work, an Intermingled Fractal Units' model has been proposed to evaluate the sorptivity coefficient of calcareous stones most used in the historical buildings of Cagliari, Sardinia. The results are compared with experimental data as well as with two other models found in the literature. The IFU model fits the experimental data better than the other two models, and it represents an important tool for estimating the service life of porous building materials.
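Sorptivity is conventionally defined through cumulative water uptake i(t) = S·√t, so extracting the sorptivity coefficient S from a capillary-rise test reduces to a linear fit against √t. A minimal sketch on synthetic numbers (a generic illustration of the definition, not the IFU model itself):

```python
# Fit the sorptivity coefficient S in i(t) = S * sqrt(t) from synthetic
# uptake measurements, using least squares through the origin.
import math

S_true = 0.25                      # assumed sorptivity, mm/min^0.5
times = [1, 4, 9, 16, 25]          # elapsed time, min
uptake = [S_true * math.sqrt(t) for t in times]   # cumulative uptake, mm

x = [math.sqrt(t) for t in times]
# Through-the-origin least squares: S = sum(x*i) / sum(x^2)
S_fit = sum(xi * ui for xi, ui in zip(x, uptake)) / sum(xi * xi for xi in x)
print(round(S_fit, 3))
```

On real stones the early-time data deviate from the √t law (surface effects, filled pores), which is why model-based estimates from pore size distributions, as in this paper, are attractive.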

  7. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); and others

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithms. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
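The simplest of the three reconstruction algorithms mentioned, Center-of-Gravity, weights each sensor position by its recorded signal. A minimal sketch with an invented 3×3 PMT grid (purely illustrative geometry and amplitudes, not ANTS code):

```python
# Center-of-Gravity event position from per-PMT signal amplitudes:
# the reconstructed coordinate is the signal-weighted mean of the
# known sensor positions.
pmt_xy = [(-1, -1), (0, -1), (1, -1),
          (-1,  0), (0,  0), (1,  0),
          (-1,  1), (0,  1), (1,  1)]
signals = [0.0, 0.1, 0.0,
           0.1, 1.0, 0.4,
           0.0, 0.1, 0.0]   # event slightly to the +x side of the center

total = sum(signals)
x_rec = sum(s * x for s, (x, _) in zip(signals, pmt_xy)) / total
y_rec = sum(s * y for s, (_, y) in zip(signals, pmt_xy)) / total
print(round(x_rec, 3), round(y_rec, 3))
```

CoG is fast but biased toward the array center near the edges, which is one reason the module also offers Maximum Likelihood and Least Squares reconstruction using the measured light response functions.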

  8. Experimental data processing technique for nonstationary heat transfer on fuel rod simulators

    International Nuclear Information System (INIS)

    Nikonov, S.P.; Nikonov, A.P.; Belyukin, V.A.

    1982-01-01

    Non-stationary heat-transfer data processing is considered in connection with experimental studies of emergency cooling, in which fuel rod simulators with both direct and indirect shell heating were used. The objective of the data processing was to obtain the temperature distribution within the simulator, the heat flux removed by the coolant, and the shell-coolant heat-transfer coefficient. Special attention was paid to the temperature distribution calculation when processing the reflooding experiments. In this case two quantities are assumed to be known: the time dependence of the temperature at a certain point within the simulator cross-section, and the heat flux at some point of the same cross-section. The preparation of the initial data for the calculations, employing a smoothing procedure based on cubic spline functions, is considered as well, using an algorithm reported in the literature that is efficient for the given functional dependence, wherein the deviation at each point is known.

  9. Seven challenges for model-driven data collection in experimental and observational studies

    Directory of Open Access Journals (Sweden)

    J. Lessler

    2015-03-01

    Full Text Available Infectious disease models are both concise statements of hypotheses and powerful techniques for creating tools from hypotheses and theories. As such, they have tremendous potential for guiding data collection in experimental and observational studies, leading to more efficient testing of hypotheses and more robust study designs. In numerous instances, infectious disease models have played a key role in informing data collection, including the Garki project studying malaria, the response to the 2009 pandemic of H1N1 influenza in the United Kingdom and studies of T-cell immunodynamics in mammals. However, such synergies remain the exception rather than the rule, and a close marriage of dynamic modeling and empirical data collection is far from the norm in infectious disease research. Overcoming the challenges to using models to inform data collection has the potential to accelerate innovation and to improve practice in how we deal with infectious disease threats.

  10. Absorber and regenerator models for liquid desiccant air conditioning systems. Validation and comparison using experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Krause, M.; Heinzen, R.; Jordan, U.; Vajen, K. [Kassel Univ., Inst. of Thermal Engineering, Kassel (Germany); Saman, W.; Halawa, E. [Sustainable Energy Centre, Univ. of South Australia, Mawson Lakes, Adelaide (Australia)

    2008-07-01

    Solar-assisted air conditioning systems using liquid desiccants represent a promising option for decreasing the high summer energy demand caused by electrically driven vapor compression machines. The main components of liquid desiccant systems are absorbers, for dehumidifying and cooling the supply air, and regenerators, for re-concentrating the desiccant. However, highly efficient, validated, and reliable components are required, and the design and operation have to be adjusted to the respective building design, location, and user demand. Simulation tools can help to optimize component and system design. The present paper presents newly developed numerical models for absorbers and regenerators, as well as experimental data from a regenerator prototype. The models have been compared with a finite-difference method model as well as with experimental data. The data come from the regenerator prototype presented here and from an absorber reported in the literature. (orig.)

  11. Experimental data base of turbulent flow in rod bundles using laser doppler velocimeter

    International Nuclear Information System (INIS)

    Chung, Moon Ki; Yang, Sun Kyu; Chung, Heung June; Won, Soon Yeun; Kim, Bok Deuk; Cho, Young Rho

    1992-01-01

    This report presents in detail measurements of the hydraulic characteristics in subchannels of rod bundles using a one-component LDV (Laser Doppler Velocimeter). In particular, this report presents the figures and tabulations of the resulting data. Detailed explanations of these results are given in references published or presented at conferences. Four kinds of experimental work have been performed so far. (Author)

  12. Calculating the parameters of experimental data Gauss distribution using the least square fit method and evaluation of their accuracy

    International Nuclear Information System (INIS)

    Guseva, E.V.; Peregudov, V.N.

    1982-01-01

    The FITGAV program for calculating the parameters of a Gaussian curve describing experimental data is considered. The calculations are based on the least-squares fit method. Estimates of the errors in the parameter determination, as a function of the sample size of the experimental data, and of their statistical significance are obtained. A curve fit using 100 points takes less than 1 s on an SM-4 type computer.
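
    A least-squares Gaussian fit of the kind FITGAV performs can be sketched with `scipy.optimize.curve_fit`; the synthetic 100-point data set, noise level, and starting values below are illustrative, not taken from the original program. The square roots of the diagonal of the covariance matrix play the role of the parameter-error estimates discussed in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amplitude, centre, sigma):
    """Gaussian curve with free amplitude, centre, and width."""
    return amplitude * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Synthetic "experimental" peak: 100 points, as in the abstract
rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 100)
y = gauss(x, 10.0, 0.5, 1.2) + rng.normal(0.0, 0.2, x.size)

popt, pcov = curve_fit(gauss, x, y, p0=[8.0, 0.0, 1.0])
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of the fitted parameters
```

    Repeating the fit for varying sample sizes shows directly how the parameter errors shrink with the number of points, which is the dependence the abstract refers to.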

  13. Study of neutron spectra in extended uranium target. New experimental data

    Directory of Open Access Journals (Sweden)

    Paraipan M.

    2017-01-01

    Full Text Available The spatial distribution of neutron fluences in the extended uranium target (“Quinta” assembly) irradiated with 0.66 GeV proton, 4 AGeV deuteron and carbon beams was studied using reactions with different threshold energies (Eth). The data sets were obtained with 59Co samples. The accumulation rates of the following isotopes: 60Co (Eth 0 MeV), 59Fe (Eth 3 MeV), 58Co (Eth 10 MeV), 57Co (Eth 20 MeV), 56Co (Eth 32 MeV), 47Sc (Eth 55 MeV), and 48V (Eth 70 MeV) were measured with an HPGe spectrometer. The experimental accumulation rates were compared with the predictions of simulations with the Geant4 code. The substantial difference between the reconstructed and the simulated data for the hard part of the neutron spectrum was analyzed.

  14. Program PLOTC4. (Version 87-1). Plot evaluated data from the ENDF/B format and/or experimental data which is in a computation format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1987-06-01

    Experimental and evaluated nuclear reaction data are compiled worldwide in the EXFOR format (see document IAEA-NDS-1) and the ENDF format (see document IAEA-NDS-10), respectively. The computer program PLOTC4 described in the present document plots data from both formats; EXFOR data must first be converted to a ''computation format'' (see document IAEA-NDS-80). The program is available upon request, free of charge, from the IAEA Nuclear Data Section. (author)

  15. A Harmony Search Algorithm for the Reproduction of Experimental Data in the Social Force Model

    Directory of Open Access Journals (Sweden)

    Osama Moh'd Alia

    2014-01-01

    Full Text Available Crowd dynamics is a discipline dealing with the management and flow of crowds in congested places and circumstances. Pedestrian congestion is a pressing issue to which crowd dynamics models can be applied. The reproduction of experimental data (the velocity-density relation and the specific flow rate) is a major component of the validation and calibration of such models. In the social force model (SFM), researchers have proposed various techniques for adjusting the essential parameters governing the repulsive social force in an effort to reproduce such experimental data. Despite these and various other efforts, optimal reproduction of the real-life data has remained out of reach. In this paper, a harmony search-based technique called HS-SFM is proposed to overcome the difficulties of the calibration process for the SFM, where the fundamental diagram of the velocity-density relation and the specific flow rate are reproduced in conformance with the related empirical data. The improvisation process of harmony search (HS) is modified by incorporating the global-best particle concept from particle swarm optimization (PSO) to increase the convergence rate and overcome the high computational demands of HS-SFM. Simulation results have shown HS-SFM's ability to produce near-optimal SFM parameter values, which makes it possible for the SFM to almost reproduce the related empirical data.
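
    A bare-bones harmony search (without the paper's global-best PSO modification) looks like the following; the memory size, rates, and the quadratic toy objective are placeholders standing in for an actual SFM calibration run:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=42):
    """Minimal harmony search: minimise `objective` over the box `bounds`.

    hms  : harmony memory size
    hmcr : probability of drawing a value from memory rather than at random
    par  : probability of pitch-adjusting a remembered value
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:  # pitch adjustment: small random shift
                    v = min(hi, max(lo, v + rng.uniform(-1, 1) * 0.05 * (hi - lo)))
            else:
                v = rng.uniform(lo, hi)
            new.append(v)
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:           # replace the worst harmony in memory
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy stand-in for the calibration objective: a quadratic bowl at (1.5, -0.5)
best, best_score = harmony_search(lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2,
                                  bounds=[(-4.0, 4.0), (-4.0, 4.0)])
```

    In the paper's setting, `objective` would measure the deviation between the fundamental diagram produced by an SFM simulation and the empirical one; the PSO-derived global-best modification accelerates exactly the replacement step sketched here.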

  16. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for a single-phase natural convection test in an experimental sodium loop. The test was conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility, an engineering-scale high temperature sodium loop at the Oak Ridge National Laboratory (ORNL) used for thermal-hydraulic testing of simulated Liquid Metal Fast Breeder Reactor (LMFBR) subassemblies at normal and off-normal operating conditions. Heat generation in the 19-pin assembly during the test was typical of decay heat levels. The test chosen for analysis in this paper was one of seven natural convection runs conducted in the facility using a variety of initial conditions and testing parameters. Specifically, in this test the bypass line was open to simulate a parallel heated assembly and the test was begun with a pump coastdown from a small initial forced flow. The computer program used to analyze the test, LONAC (LOw flow and NAtural Convection), is an ORNL-developed, fast-running, one-dimensional, single-phase, finite-difference model used for simulating forced and free convection transients in the THORS loop.

  17. The use of the normalized residual in averaging experimental data and in treating outliers

    International Nuclear Information System (INIS)

    James, M.F.; Mills, R.W.; Weaver, D.R.

    1992-01-01

    In comparing and averaging different measurements of a particular quantity, the problem frequently arises of treating discrepant data. Ideally the evaluator should then study the experimental methods in detail, to try to resolve the discrepancies. This, however, is often either impractical or unsuccessful. An alternative statistical approach is outlined here, using the ''normalized residual''. The theoretical probability distribution of this quantity is compared with that observed for fission chain yield data from a recent evaluation by the authors. The use of the normalized residual in treating discrepant data is explained and compared with alternative methods. (author)
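
    Assuming the definition commonly used in nuclear data evaluation, r_i = (x_i − x̄)/√(σ_i² − σ_x̄²), where x̄ is the error-weighted mean including x_i (the subtraction in the denominator accounts for x_i's own contribution to x̄), a discrepant measurement can be flagged as follows. The yield values here are invented:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard deviation."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

def normalized_residuals(values, sigmas):
    """r_i = (x_i - xbar) / sqrt(sigma_i^2 - sigma_xbar^2).

    For mutually consistent data, r_i is approximately standard normal,
    so |r_i| well above ~2-3 marks a discrepant measurement.
    """
    xbar, sbar = weighted_mean(values, sigmas)
    return [(x - xbar) / math.sqrt(s**2 - sbar**2) for x, s in zip(values, sigmas)]

# Three consistent measurements plus one outlier (hypothetical values)
vals = [1.02, 0.98, 1.01, 1.35]
errs = [0.03, 0.04, 0.03, 0.05]
res = normalized_residuals(vals, errs)
```

    In this example the fourth measurement yields |r| above 6 while the other three stay below 2, which is the pattern an evaluator would use to down-weight or exclude it.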

  18. The picture of the nuclei disintegration mechanism - from nucleus-nucleus collision experimental data at high energies

    International Nuclear Information System (INIS)

    Strugalska-Gola, E.; Strugalski, Z.

    1997-01-01

    Experimental data on nuclear collisions at high energies, obtained mainly with photographic emulsions, are considered from the point of view of the experimentally prompted picture of the mechanisms of nuclear collision processes. In this picture, the disintegration products of each nucleus involved in a nuclear collision, viewed in its own rest frame, are similar to those produced by the impact of a number of nucleons with velocity equal to that of the moving primary nucleus.

  19. Computational reverse shoulder prosthesis model: Experimental data and verification.

    Science.gov (United States)

    Martins, A; Quental, C; Folgado, J; Ambrósio, J; Monteiro, J; Sarmento, M

    2015-09-18

    The reverse shoulder prosthesis aims to restore the stability and function of pathological shoulders, but the biomechanical aspects of the geometrical changes induced by the implant are yet to be fully understood. Considering a large-scale musculoskeletal model of the upper limb, the aim of this study is to evaluate how the Delta reverse shoulder prosthesis influences the biomechanical behavior of the shoulder joint. In this study, the kinematic data of an unloaded abduction in the frontal plane and an unloaded forward flexion in the sagittal plane were experimentally acquired through video-imaging for a control group, composed of 10 healthy shoulders, and a reverse shoulder group, composed of 3 reverse shoulders. Synchronously, the EMG data of 7 superficial muscles were also collected. The muscle force sharing problem was solved through the minimization of the metabolic energy consumption. The evaluation of the shoulder kinematics shows an increase in the lateral rotation of the scapula in the reverse shoulder group, and an increase in the contribution of the scapulothoracic joint to the shoulder joint. Regarding the muscle force sharing problem, the musculoskeletal model estimates an increased activity of the deltoid, teres minor, clavicular fibers of the pectoralis major, and coracobrachialis muscles in the reverse shoulder group. The comparison between the muscle forces predicted and the EMG data acquired revealed a good correlation, which provides further confidence in the model. Overall, the shoulder joint reaction force was lower in the reverse shoulder group than in the control group.

  20. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modeling of the adult larynx, but the mechanisms of the newborn's voice production have rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent(R) with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to the results obtained in experiments with real organs.

  1. Critical comparison of experimental data and theoretical predictions for N-d scattering below the breakup threshold

    Energy Technology Data Exchange (ETDEWEB)

    Kievsky, A. [Istituto Nazionale di Fisica Nucleare, Pisa (Italy); Rosati, S. [Istituto Nazionale di Fisica Nucleare, Pisa (Italy)]|[Pisa Univ. (Italy). Dipt. di Fisica; Tornow, W. [Duke Univ., Durham, NC (United States). Dept. of Physics; Viviani, M. [Istituto Nazionale di Fisica Nucleare, Pisa (Italy)

    1996-09-30

    The theoretical approaches for studying N-d processes using realistic, semi-phenomenological NN potentials have matured considerably during the last few years. Accurate calculations of scattering observables are now feasible. Recently, high-quality measurements of N-d scattering at energies below the deuteron breakup threshold became available. Therefore, a detailed comparison between theory and experimental data can now be performed. In this paper the various sets of experimental data for the N-d differential cross section, and the vector and tensor analyzing powers are examined in a critical way in the incident nucleon energy range from 1 to 3 MeV. In order to identify possible inadequacies of the interaction models adopted, phase-shift analyses were performed and compared to the theoretical parameters. (orig.).

  2. Laboratory-based Interpretation of Seismological Models: Dealing with Incomplete or Incompatible Experimental Data (Invited)

    Science.gov (United States)

    Jackson, I.; Kennett, B. L.; Faul, U. H.

    2009-12-01

    In parallel with cooperative developments in seismology during the past 25 years, there have been phenomenal advances in mineral/rock physics, making laboratory-based interpretation of seismological models increasingly useful. However, the assimilation of diverse experimental data into a physically sound framework for seismological application is not without its challenges, as demonstrated by two examples. In the first example, that of equation-of-state and elasticity data, an appropriate, thermodynamically consistent framework involves finite-strain expansion of the Helmholtz free energy incorporating the Debye approximation to the lattice vibrational energy, as advocated by Stixrude and Lithgow-Bertelloni. Within this context, pressure, specific heat and entropy, thermal expansion, elastic constants and their adiabatic and isothermal pressure derivatives are all calculable without further approximation in an internally consistent manner. The opportunities and challenges of assimilating a wide range of sometimes marginally incompatible experimental data into a single model of this type will be demonstrated with reference to MgO, unquestionably the most thoroughly studied mantle mineral. A neighbourhood-algorithm inversion has identified a broadly satisfactory model, but uncertainties in key parameters associated particularly with pressure calibration remain sufficiently large as to preclude definitive conclusions concerning lower-mantle chemical composition and departures from adiabaticity. The second example is the much less complete dataset concerning seismic-wave dispersion and attenuation emerging from low-frequency forced-oscillation experiments. Significant progress has been made during the past decade towards an understanding of high-temperature, micro-strain viscoelastic relaxation in upper-mantle materials, especially as regards the roles of oscillation period, temperature, grain size and melt fraction. 
However, the influence of other potentially important

  3. Experimental data of the static behavior of reinforced concrete beams at room and low temperature.

    Science.gov (United States)

    Mirzazadeh, M Mehdi; Noël, Martin; Green, Mark F

    2016-06-01

    This article provides data on the static behavior of reinforced concrete at room and low temperature, including strength, ductility, and crack widths. Experimental data on the application of digital image correlation (DIC), or particle image velocimetry (PIV), for measuring crack widths, and on the accuracy and precision of the DIC/PIV method under temperature variations when used for measuring strains, are provided as well.

  4. Review of nuclear data improvement needs for nuclear radiation measurement techniques used at the CEA experimental reactor facilities

    Directory of Open Access Journals (Sweden)

    Destouches Christophe

    2016-01-01

    Full Text Available The constant improvement of the neutron and gamma calculation codes used for experimental nuclear reactors goes hand in hand with that of the associated nuclear data libraries. The validation of these calculation schemes always requires confrontation with integral experiments performed in experimental reactors. Nuclear data of interest, whether direct, such as cross sections, or derived, such as reactivity, are always obtained from a reaction rate measurement, which is the only parameter measurable with a nuclear sensor. Thus, in order to derive physical parameters from the electrical signal of the sensor, one needs specific nuclear data libraries. This paper successively presents the main features of the measurement techniques used in the CEA experimental reactor facilities for on-line and off-line neutron/gamma flux characterization: reactor dosimetry, neutron flux measurements with miniature fission chambers and self-powered neutron detectors (SPND), and gamma flux measurements with ionization chambers and TLDs. For each technique, the nuclear data necessary for its interpretation are presented, the main identified needs for improvement are listed, and their impact on the quality of the measurement is analyzed. Finally, a synthesis of the study is given.

  5. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  6. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  7. Analysis of Elektrogorsk 108 test facility experimental data

    International Nuclear Information System (INIS)

    Urbonas, R.

    2001-01-01

    In this paper an evaluation of experimental data obtained at the Russian Elektrogorsk 108 (E-108) test facility is presented. The E-108 facility is a scaled model of the Russian RBMK design reactor. An attempt was made to validate state-of-the-art thermal hydraulic codes on the basis of the E-108 test facility. Originally these codes were developed and validated for BWRs and PWRs. Since state-of-the-art thermal hydraulic codes are widely used for simulation of RBMK reactors, further code implementation and validation is required. The facility was modelled with the RELAP5 (INEEL, USA) thermal hydraulic system analysis best-estimate code. The results show a dependence on the number of nodes used in the heated channels and on the frictional and form losses employed. The observed oscillatory behaviour results from density waves and critical heat flux. It is shown that the codes predict well both the thermal hydraulic instability and the sudden heat structure temperature excursion that occurs when the critical heat flux is approached. In addition, an uncertainty analysis of one of the experiments was performed employing the GRS-developed System for Uncertainty and Sensitivity Analysis (SUSA). This was one of the first attempts to use this statistics-based methodology in Lithuania. (author)

  8. Computational study of a low head draft tube and validation with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Henau, V De; Payette, F A; Sabourin, M [Alstom Power Systems, Hydro 1350 chemin Saint-Roch, Sorel-Tracy (Quebec), J3R 5P9 (Canada); Deschenes, C; Gagnon, J M; Gouin, P, E-mail: vincent.dehenau@power.alstom.co [Hydraulic Machinery Laboratory, Laval University 1065 ave. de la Medecine, Quebec (Canada)

    2010-08-15

    The objective of this paper is to investigate methodologies to improve the reliability of CFD analysis of low-head turbine draft tubes. When only the draft tube performance is of interest, the study indicates that draft-tube-only simulations with an adequate treatment of the inlet boundary conditions for velocity and turbulence are a good alternative to rotor/stator (stage) simulations. The definition of the inlet velocity in the near-wall regions is critical for obtaining agreement between the stage and draft-tube-only solutions. An average turbulent kinetic energy intensity level and an average turbulent kinetic energy dissipation length scale are sufficient as turbulence inlet conditions as long as these averages are coherent with the stage solution. Comparison of the rotor/stator simulation results with the experimental data highlights some discrepancies between the predicted draft tube flow and the experimental observations.

  9. Optimal experimental design with R

    CERN Document Server

    Rasch, Dieter; Verdooren, L R; Gebhardt, Albrecht

    2011-01-01

    Experimental design is often overlooked in the literature of applied and mathematical statistics: statistics is taught and understood as merely a collection of methods for analyzing data. Consequently, experimenters seldom think about optimal design, including prerequisites such as the sample size needed for a precise answer to an experimental question. Providing a concise introduction to experimental design theory, Optimal Experimental Design with R: introduces the philosophy of experimental design; provides an easy process for constructing experimental designs and calculating the necessary sample size using R programs; and teaches by example using a custom-made R program package, OPDOE. Consisting of detailed, data-rich examples, this book introduces experimenters to the philosophy of experimentation, experimental design, and data collection. It gives researchers and statisticians guidance in the construction of optimum experimental designs using R programs, including sample size calculations, hypothesis te...

  10. Kinetic energy in the collective quadrupole Hamiltonian from the experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Jolos, R.V., E-mail: jolos@theor.jinr.ru [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Dubna State University, 141980 Dubna (Russian Federation); Kolganova, E.A. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Dubna State University, 141980 Dubna (Russian Federation)

    2017-06-10

    The dependence of the kinetic energy term of the collective nuclear Hamiltonian on the collective momentum is considered. It is shown that the fourth-order term in the collective momentum of the collective quadrupole Hamiltonian has a sizable effect on the excitation energies and on the matrix elements of the quadrupole moment operator. It is demonstrated that the results of the calculation are sensitive to the values of some matrix elements of the quadrupole moment. This stresses the importance, for a given nucleus, of having experimental data for the reduced matrix elements of the quadrupole moment operator between all low-lying states with angular momenta not exceeding 4.

  11. 40 CFR 158.2174 - Experimental use permit microbial pesticides nontarget organisms and environmental fate data...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit microbial... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS... controls the target insect pest by a mechanism of infectivity; i.e., may create an epizootic condition in...

  12. The Particle Physics Playground website: tutorials and activities using real experimental data

    Science.gov (United States)

    Bellis, Matthew; CMS Collaboration

    2016-03-01

    The CERN Open Data Portal provides access to data from the LHC experiments to anyone with the time and inclination to learn the analysis procedures. The CMS experiment has made a significant amount of data available in essentially the same format the collaboration itself uses, along with software tools and a virtual environment in which to run those tools. These same data have also been mined for educational exercises that range from very simple .csv files that can be analyzed in a spreadsheet to more sophisticated formats that use ROOT, a software package dominant in experimental particle physics but less used in the general computing community. This talk will present the Particle Physics Playground website (http://particle-physics-playground.github.io/), a project that uses data from the CMS experiment, as well as the older CLEO experiment, in tutorials and exercises aimed at high school and undergraduate students and other science enthusiasts. The data are stored as text files, and users are provided with starter Python/Jupyter notebook programs and accessor functions which can be modified to perform fairly high-level analyses. The status of the project, success stories, and future plans for the website will be presented. This work was supported in part by NSF Grant PHY-1307562.
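
    A typical starter exercise of the kind such tutorials build toward is computing a two-particle invariant mass from energy-momentum four-vectors read out of a text file. The muon-pair values below are illustrative only, not actual CMS or CLEO data:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz) four-vectors
    in GeV: M^2 = (E1 + E2)^2 - |p1 + p2|^2."""
    e = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Hypothetical back-to-back muon pair, 45.6 GeV each (muon mass neglected):
# the pair's invariant mass comes out near the Z boson mass
mu_plus = (45.6, 0.0, 0.0, 45.6)
mu_minus = (45.6, 0.0, 0.0, -45.6)
m = invariant_mass(mu_plus, mu_minus)
```

    Looping this over every candidate pair in a file and histogramming `m` reproduces the classic resonance-peak exercise that works equally well in a spreadsheet or a Jupyter notebook.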

  13. Processing and analyses of the pulsed-neutron experimental data of the YALINA facility

    International Nuclear Information System (INIS)

    Cao, Y.; Gohar, Y.; Smith, D.; Talamo, A.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: The YALINA subcritical assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus, has been utilized to study the physics parameters of accelerator-driven systems (ADS) with high-intensity deuterium-tritium and deuterium-deuterium pulsed neutron sources. In particular, with the fast and thermal neutron zones of the YALINA-Booster subcritical assembly, pulsed-neutron experiments have been utilized to evaluate pulsed-neutron methods for determining the reactivity of the subcritical system. In this paper, the pulsed-neutron experiments performed in the YALINA-Booster 1141 configuration with 90% U-235 fuel and the 1185 configuration with 36% and 21% U fuel are examined and analyzed. The Sjöstrand area-ratio method is utilized to determine the reactivities of the subcritical assembly configurations. The linear regression method is applied to obtain the prompt neutron decay constants from the pulsed-neutron experimental data. The reactivity values obtained from the experimental data are shown to depend on the detector locations and also on the detector types. The large discrepancies between the reactivity values given by the detectors in the fast neutron zone were reduced by spatial correction methods, and the estimated reactivities after the spatial corrections are almost spatially independent.
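
    The Sjöstrand area-ratio method splits a pulse-period histogram into a prompt area (counts above the delayed-neutron background) and a delayed area (the background integrated over the period), with reactivity in dollars ρ($) = −A_prompt/A_delayed. A sketch on a synthetic histogram (decay constant, background level, and bin count are invented; with real data the delayed level would be fitted from the flat tail):

```python
import numpy as np

def area_ratio_reactivity(counts, background):
    """Sjostrand area-ratio method: reactivity in dollars from one pulse period.

    counts     : detector counts per (equal-width) time bin after the pulse
    background : delayed-neutron level in counts per bin
    """
    counts = np.asarray(counts, dtype=float)
    prompt_area = float(np.sum(counts - background))  # counts above background
    delayed_area = background * counts.size           # background over the period
    return -prompt_area / delayed_area

# Synthetic histogram: exponential prompt decay on a flat delayed background
t = np.arange(200)                        # bin index
counts = 1000.0 * np.exp(-t / 10.0) + 50.0
rho = area_ratio_reactivity(counts, background=50.0)
```

    The prompt neutron decay constant mentioned in the abstract would come from a separate fit, e.g. linear regression of log(counts − background) against time over the prompt-dominated bins.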

  14. Combining experimental and simulation data of molecular processes via augmented Markov models.

    Science.gov (United States)

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force-field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  15. Evaluation of experimental data for wax and diamondoids solubility in gaseous systems

    DEFF Research Database (Denmark)

    Mohammadi, Amir H.; Gharagheizi, Farhad; Eslamimanesh, Ali

    2012-01-01

    The Leverage statistical approach is herein applied for the evaluation of experimental data on the solubility of paraffin waxes/diamondoids in gaseous systems. The calculation steps of this algorithm consist of determination of the statistical hat matrix, sketching the Williams plot, and calculation......-Santiago and Teja correlations are used to calculate/estimate the solubility of paraffin waxes (including n-C24H50 to n-C33H68) and diamondoids (adamantane and diamantane) in carbon dioxide/ethane gases, respectively. It can be interpreted from the obtained results that the applied equations for calculation
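
    The hat-matrix and Williams-plot ingredients of the Leverage approach can be sketched as follows. The regression data are synthetic, and the leverage limit 3p/n (p counting all fitted coefficients, intercept included) is one common convention in this literature, not a value taken from the abstract:

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)

# Synthetic linear model: intercept + slope, 30 data points
rng = np.random.default_rng(3)
n = 30
x = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])      # design matrix
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
std_resid = resid / resid.std(ddof=X.shape[1])

# Williams plot: leverage h_ii vs standardized residual; points with
# h > h* or |std_resid| > 3 are flagged as potentially unreliable data
h = leverages(X)
h_star = 3.0 * X.shape[1] / n
suspect = (h > h_star) | (np.abs(std_resid) > 3.0)
```

    Plotting `h` against `std_resid` with the lines at `h_star` and ±3 reproduces the Williams plot used to separate reliable data from probable outliers.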

  16. Experimental Comparison of 56 Gbit/s PAM-4 and DMT for Data Center Interconnect Applications

    DEFF Research Database (Denmark)

    Eiselt, Nicklas; Dochhan, Annika; Griesser, Helmut

    2016-01-01

    Four-level pulse amplitude modulation (PAM-4) and discrete multi-tone transmission (DMT) in combination with intensity modulation and direct detection are two promising approaches for a low-power and low-cost solution for the next generation of data center interconnect applications. We experimentally investigate and compare both modulation formats at a data rate of 56 Gb/s and a transmission wavelength of 1544 nm using the same experimental setup. We show that PAM-4 outperforms both double-sideband DMT and vestigial-sideband DMT for the optical back-to-back (b2b) case and also for a transmission distance of 80 km of SSMF in terms of required OSNR at a FEC threshold of 3.8e-3. However, it is also pointed out that neither version of DMT requires optical dispersion compensation to transmit over 80 km of SSMF, while this is essential for PAM-4. Thus, implementation effort and cost may...

  17. Theoretical bases and possibilities of program BRASIER for experimental data fitting and management

    International Nuclear Information System (INIS)

    Quintero, B.; Santos, J.; Garcia Yip, F.; Lopez, I.

    1992-01-01

    In the paper the theoretical bases and primary capabilities of the program BRASIER are shown. It was developed for the management and fitting of experimental data. Relevant characteristics are: utilization of several regression methods, error treatment, the 'Point-Drop' technique, multidimensional fitting, friendly interactivity, graphical capabilities and file management. The use of various regression methods results in a greater possibility of convergence with respect to other similar programs that use a single algorithm.

  18. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results also allow an empirical rule to be defined that identifies the transient currents responsible for an SEU. Then, the SEU cross section can be calculated for any kind of particle at any energy with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  19. New experimental data on the influence of extranuclear factors on the probability of radioactive decay

    CERN Document Server

    Bondarevskij, S I; Skorobogatov, G A

    2002-01-01

    New experimental data on the influence of various extranuclear factors on the probability (λ) of radioactive decay are presented. During redox processes in solutions containing ¹³⁹Ce, the relative change in λ measured by the ΔI/I method was [I(Ce(IV)) - I(Ce(III))]/I_mean = +(1.4±0.6)×10⁻⁴. Using a modification of the method based on displacement of the secular radioactive equilibrium, when a MgO(¹²¹ᵐTe) source was cooled to 78 K, a growth of λ of the tellurium nuclear isomer by 0.04±0.02% was detected. New experimental data on the increase in gamma-radioactivity of a Be(¹²³ᵐTe) sample at the expense of a low-temperature induced reaction, i.e. collective nuclear superluminescence, are also provided.

  20. Modeling the basin of attraction as a two-dimensional manifold from experimental data: Applications to balance in humans

    Science.gov (United States)

    Zakynthinaki, Maria S.; Stirling, James R.; Cordente Martínez, Carlos A.; Díaz de Durana, Alfonso López; Quintana, Manuel Sillero; Romo, Gabriel Rodríguez; Molinuevo, Javier Sampedro

    2010-03-01

    We present a method of modeling, from experimental time series data, the basin of attraction as a three-dimensional function describing a two-dimensional manifold on which the dynamics of the system evolves. Our method is based on the density of the data set and uses numerical optimization and data modeling tools. We also show how to obtain analytic curves that describe both the contours and the boundary of the basin. Our method is applied to the problem of regaining balance after perturbation from quiet vertical stance, using data from an elite athlete. Our method goes beyond the statistical description of the experimental data, providing a function that describes the shape of the basin of attraction. To test its robustness, our method has also been applied to two different data sets from a second subject, and no significant differences were found between the contours of the calculated basin of attraction for the different data sets. The proposed method has many uses in a wide variety of areas, not just human balance, for which there are many applications in medicine, rehabilitation, and sport.
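    As a loose illustration of the density-based idea (not the paper's method, which fits smooth analytic contour and boundary curves by numerical optimization), a basin region can be crudely estimated by histogramming the visited states and thresholding the density. The bin count, extent and density level below are arbitrary choices of the sketch:

```python
import numpy as np

def basin_mask(points, bins=20, extent=4.0, level=0.1):
    """Crude density-based estimate of a basin region from 2-D time-series
    samples: histogram the visited states and keep cells whose density
    exceeds `level` times the peak density."""
    H, xe, ye = np.histogram2d(points[:, 0], points[:, 1], bins=bins,
                               range=[[-extent, extent], [-extent, extent]])
    D = H / H.max()                 # normalised visit density
    return D >= level, xe, ye       # boolean basin mask and bin edges
```

    The boundary of the basin would then be the edge of the True region; the published method replaces this discrete mask with smooth fitted curves.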

  1. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    Science.gov (United States)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

    Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user input. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing data with known electrode errors, which require rigorous regularization and cause the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but it also provides high contrast resolution on organ boundaries.
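    The robustness of an ℓ1 data term can be seen in miniature with iteratively reweighted least squares (IRLS), used here as a simpler stand-in for the paper's PDIPM solver; A, b and the damping value lam are generic placeholders rather than EIT quantities:

```python
import numpy as np

def irls_l1(A, b, lam=1e-3, iters=50, eps=1e-6):
    """Minimise ||Ax - b||_1 + lam*||x||_2^2 by iteratively reweighted
    least squares -- an illustrative stand-in for a PDIPM ℓ1 solver."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # ℓ1 reweighting of residuals
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W @ b)
    return x
```

    With one grossly corrupted measurement in b, this ℓ1-type fit stays near the clean solution, while an ordinary ℓ2 least-squares fit is pulled toward the outlier.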

  2. Convenience experimentation.

    Science.gov (United States)

    Krohs, Ulrich

    2012-03-01

    Systems biology aims at explaining life processes by means of detailed models of molecular networks, mainly on the whole-cell scale. The whole-cell perspective distinguishes the new field of systems biology from earlier approaches within molecular cell biology. The shift was made possible by the high-throughput methods that were developed for gathering 'omic' (genomic, proteomic, etc.) data. These new techniques are made commercially available as semi-automatic analytic equipment, ready-made analytic kits and probe arrays. There is a whole industry of supplies for what may be called convenience experimentation. My paper inquires into some epistemic consequences of strong reliance on convenience experimentation in systems biology. In times when experimentation was automated to a lesser degree, modeling and in part even experimentation could be understood fairly well either as being driven by hypotheses, and thus as proceeding by the testing of hypotheses, or as being performed in an exploratory mode, intended to sharpen concepts or initially vague phenomena. In systems biology, the situation is dramatically different. Data collection became so easy (though not cheap) that experimentation is, to a high degree, driven by convenience equipment, and model building is driven by the vast amount of data that is produced by convenience experimentation. This results in a shift in the mode of science. The paper shows that convenience-driven science is not primarily hypothesis-testing, nor is it in an exploratory mode. It rather proceeds in a gathering mode. This shift demands another shift in the mode of evaluation, which now becomes an exploratory endeavor, in response to the superabundance of gathered data. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. First experimental data on the FEL - RF interaction at the Jefferson Lab IRFEL

    International Nuclear Information System (INIS)

    L. Merminga; P. Alexeev; S.V. Benson; A. Bolshakov; L.R. Doolittle; D.R. Douglas; C. Hovater; G.R. Neil

    1999-01-01

    High power FELs driven by recirculating, energy-recovering linacs can exhibit instabilities in the beam energy and laser output power. Fluctuations in the accelerating cavity fields can cause beam loss on apertures, phase oscillations and optical cavity detuning. These can affect the laser power and in turn the beam-induced voltage to further enhance the fluctuations of the rf fields. A theoretical model was developed to study the dynamics of the coupled system and was presented last year. Recently, a first set of experimental data was obtained at the Jefferson Lab IRFEL for direct comparisons with the model. The authors describe the experiment, present the data together with the modeling predictions and outline future directions

  4. Assessment of the PIUS physics and thermal-hydraulic experimental data bases

    International Nuclear Information System (INIS)

    Boyack, B.E.

    1993-01-01

    The PIUS reactor utilizes simplified, inherent, passive, or other innovative means to accomplish safety functions. Accordingly, the PIUS reactor is subject to the requirements of 10CFR52.47(b)(2)(i)(A). This regulation requires that the applicant adequately demonstrate the performance of each safety feature, interdependent effects among the safety features, and a sufficient data base on the safety features of the design to assess the analytical tools used for safety analysis. Los Alamos has assessed the quality and completeness of the existing and planned data bases used by Asea Brown Boveri to validate its safety analysis codes and other relevant data bases. Only a limited data base of separate-effect and integral tests exists at present. This data base is not adequate to fulfill the requirements of 10CFR52.47(b)(2)(i)(A). Asea Brown Boveri has stated that it plans to conduct more separate-effect and integral test programs. If appropriately designed and conducted, these test programs have the potential to satisfy most of the data base requirements of 10CFR52.47(b)(2)(i)(A) and to remedy most of the deficiencies of the currently existing combined data base. However, the most important physical processes in PIUS are related to reactor shutdown because the PIUS reactor does not contain rodded shutdown and control systems. For safety-related reactor shutdown, PIUS relies on negative reactivity insertions from the moderator temperature coefficient and from boron entering the core from the reactor pool. Asea Brown Boveri has neither developed a direct experimental data base for these important processes nor provided a rationale for indirect testing of these key PIUS processes. This is assessed as a significant shortcoming. In preparing the conclusions of this report, test documentation and results have been reviewed for only one integral test program, the small-scale integral tests conducted in the ATLE facility.

  5. Beauty photoproduction at HERA. k_T-factorization versus experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Lipatov, A.V.; Zotov, N.P. [M.V. Lomonosov Moscow State Univ., Moscow (Russian Federation). D.V. Skobeltsyn Institute of Nuclear Physics

    2006-05-15

    We present calculations of beauty photoproduction at the HERA collider in the framework of the k_T-factorization approach. Both direct and resolved photon contributions are taken into account. The unintegrated gluon densities in a proton and in a photon are obtained from the full CCFM and from unified BFKL-DGLAP evolution equations, as well as from the Kimber-Martin-Ryskin prescription. We investigate different production rates (both inclusive and associated with hadronic jets) and compare our theoretical predictions with the recent experimental data taken by the H1 and ZEUS collaborations. Special attention is paid to the x_γ^obs variable, which is sensitive to the relative contributions to the beauty production cross section. (Orig.)

  6. Theoretical interpretation of experimental data from direct dark matter detection

    Energy Technology Data Exchange (ETDEWEB)

    Chung-Lin, Shan

    2007-10-15

    I derive expressions that allow one to reconstruct the normalized one-dimensional velocity distribution function of halo WIMPs and to determine its moments from the recoil energy spectrum as well as from experimental data directly. The reconstruction of the velocity distribution function is further extended to take into account the annual modulation of the event rate. All these expressions are independent of the as yet unknown WIMP density near the Earth as well as of the WIMP-nucleus cross section. The only information about the nature of halo WIMPs that one needs is the WIMP mass. I also present a method for determining the WIMP mass by combining two (or more) experiments with different detector materials. This method is independent not only of the model of the Galactic halo but also of that of the WIMPs. (orig.)

  7. Theoretical interpretation of experimental data from direct dark matter detection

    International Nuclear Information System (INIS)

    Shan Chung-Lin

    2007-10-01

    I derive expressions that allow one to reconstruct the normalized one-dimensional velocity distribution function of halo WIMPs and to determine its moments from the recoil energy spectrum as well as from experimental data directly. The reconstruction of the velocity distribution function is further extended to take into account the annual modulation of the event rate. All these expressions are independent of the as yet unknown WIMP density near the Earth as well as of the WIMP-nucleus cross section. The only information about the nature of halo WIMPs that one needs is the WIMP mass. I also present a method for determining the WIMP mass by combining two (or more) experiments with different detector materials. This method is independent not only of the model of the Galactic halo but also of that of the WIMPs. (orig.)

  8. The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases.

    Science.gov (United States)

    Slater, Anthony Michael

    2014-10-01

    Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism and elimination, as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media, and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented, and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.

  9. Network inference from functional experimental data (Conference Presentation)

    Science.gov (United States)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy makes it possible to observe the calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method were determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic...
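    Of the inference methods compared, zero-lag cross-correlation is the simplest; a minimal sketch of it (not the authors' software, and with an arbitrary correlation threshold) is:

```python
import numpy as np

def xcorr_connectivity(traces, threshold=0.5):
    """Pairwise functional connectivity from fluorescence traces via
    zero-lag cross-correlation, the simplest of the compared methods.

    traces: (n_neurons, n_frames) array. Returns a boolean adjacency
    matrix: True where |correlation| exceeds the threshold."""
    z = traces - traces.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)         # z-score each trace
    C = (z @ z.T) / traces.shape[1]           # correlation matrix
    A = np.abs(C) >= threshold
    np.fill_diagonal(A, False)                # no self-connections
    return A
```

    Being a symmetric linear measure, this detects neither directionality nor nonlinear interactions, which is exactly why the study also compares Granger causality, mutual information and transfer entropy.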

  10. Comparison between a new TRNSYS model and experimental data of phase change materials in a solar combisystem

    Energy Technology Data Exchange (ETDEWEB)

    Bony, J.; Citherlet, S.

    2007-07-01

    In the framework of IEA Task 32 (Solar Heating and Cooling Programme), we developed a numerical model to simulate heat transfer in phase change materials (PCM) and compared it with experimental data. The analyzed system is bulk PCM immersed in the water tank storage of a solar combisystem (heating and domestic hot water production). The numerical model, based on the enthalpy approach, takes into account the hysteresis and subcooling characteristics as well as conduction and convection in the PCM. This model has been implemented in an existing TRNSYS type for water tank storage. The simulations have been compared with experimental data obtained with a solar installation using a water tank storage of about 900 litres, already studied during IEA Task 26 (Weiss 2003). (author)
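    The enthalpy approach tracks stored energy rather than temperature, so the latent-heat plateau of melting appears naturally when temperature is recovered from enthalpy. The sketch below uses invented property values (c, L, Tm) and an idealized sharp melting point, ignoring the hysteresis, subcooling and convection that the actual model handles:

```python
def temp_from_enthalpy(h, c=2.0, L=200.0, Tm=27.0):
    """Temperature (°C) from specific enthalpy h (kJ/kg) for an idealized
    PCM melting sharply at Tm: sensible - latent - sensible.

    c (kJ/kg·K) and L (kJ/kg) are illustrative values, not data for any
    particular PCM."""
    h_solidus = c * Tm                 # enthalpy at onset of melting
    if h < h_solidus:
        return h / c                   # solid: sensible heat only
    if h < h_solidus + L:
        return Tm                      # melting plateau: latent heat
    return (h - L) / c                 # liquid: sensible heat again
```

    In a time-stepping model, each control volume would update its enthalpy from the heat fluxes and then recover temperature through a map like this one.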

  11. Experimental Space Shuttle Orbiter Studies to Acquire Data for Code and Flight Heating Model Validation

    Science.gov (United States)

    Wadhams, T. P.; Holden, M. S.; MacLean, M. G.; Campbell, Charles

    2010-01-01

    In an experimental study to obtain detailed heating data over the Space Shuttle Orbiter, CUBRC has completed an extensive matrix of experiments using three distinct models and two unique hypervelocity wind tunnel facilities. This detailed data will be employed to assess heating augmentation due to boundary layer transition on the Orbiter wing leading edge and windside acreage, with comparisons to computational methods and flight data obtained during the Orbiter Entry Boundary Layer Flight Experiment and HYTHIRM during STS-119 reentry. These comparisons will facilitate critical updates to the engineering tools employed to make assessments about natural and tripped boundary layer transition during Orbiter reentry. To achieve the goals of this study, data were obtained over a range of Mach numbers from 10 to 18, with flight-scaled Reynolds numbers and model attitudes representing key points on the Orbiter reentry trajectory. The first of these studies was performed as an integral part of Return to Flight activities following the accident that occurred during the reentry of the Space Shuttle Columbia (STS-107) in February of 2003. This accident was caused by debris, originating from the foam covering the external tank bipod fitting ramps, striking and damaging critical wing leading edge heating tiles that reside in the Orbiter bow shock/wing interaction region. During the investigation of the accident, aeroheating team members discovered that only a limited amount of experimental wing leading edge data existed in this critical peak heating area, and a need arose to acquire a detailed dataset of heating in this region. This new dataset was acquired in three phases consisting of a risk mitigation phase employing a 1.8% scale Orbiter model with special temperature sensitive paint covering the wing leading edge, a 0.9% scale Orbiter model with high resolution thin-film instrumentation in the span direction, and the primary 1.8% scale Orbiter model with detailed

  12. Comparisons of experimental beta-ray spectra important to decay heat predictions with ENSDF [Evaluated Nuclear Structure Data File] evaluations

    International Nuclear Information System (INIS)

    Dickens, J.K.

    1990-03-01

    Graphical comparisons of recently obtained experimental beta-ray spectra with predicted beta-ray spectra based on the Evaluated Nuclear Structure Data File are exhibited for 77 fission products having masses 79-99 and 130-146 and lifetimes between 0.17 and 23650 sec. The comparisons range from very poor to excellent. For beta decay of 47 nuclides, estimates are made of ground-state transition intensities. For 14 cases the value in ENSDF gives results in very good agreement with the experimental data. 12 refs., 77 figs., 1 tab.

  13. CPAD, Curated Protein Aggregation Database: A Repository of Manually Curated Experimental Data on Protein and Peptide Aggregation.

    Science.gov (United States)

    Thangakani, A Mary; Nagarajan, R; Kumar, Sandeep; Sakthivel, R; Velmurugan, D; Gromiha, M Michael

    2016-01-01

    Accurate distinction between peptide sequences that can form amyloid fibrils or amorphous β-aggregates, identification of potential aggregation-prone regions in proteins, and prediction of the change in aggregation rate of a protein upon mutation(s) are critical to research on protein misfolding diseases, such as Alzheimer's and Parkinson's, as well as to the biotechnological production of protein-based therapeutics. We have developed the Curated Protein Aggregation Database (CPAD), which collects results from experimental studies performed by the scientific community aimed at understanding protein/peptide aggregation. CPAD contains more than 2300 experimentally observed aggregation rates upon mutations in known amyloidogenic proteins. Each entry includes numerical values for the following parameters: change in rate of aggregation as measured by fluorescence intensity or turbidity, name and source of the protein, UniProt and Protein Data Bank codes, single-point as well as multiple mutations, and literature citation. The data in CPAD have been supplemented with five different types of additional information: (i) amyloid-fibril-forming hexapeptides, (ii) amorphous β-aggregating hexapeptides, (iii) amyloid-fibril-forming peptides of different lengths, (iv) amyloid-fibril-forming hexapeptides whose crystal structures are available in the Protein Data Bank (PDB) and (v) experimentally validated aggregation-prone regions found in amyloidogenic proteins. Furthermore, CPAD is linked to other related databases and resources, such as UniProt, Protein Data Bank, PubMed, GAP, TANGO, WALTZ etc. We have set up a web interface with different search and display options so that users have the ability to get the data in multiple ways. CPAD is freely available at http://www.iitm.ac.in/bioinfo/CPAD/. The potential applications of CPAD are also discussed.

  14. Current status of the European contribution to the Remote Data Access System of the ITER Remote Experimentation Centre

    International Nuclear Information System (INIS)

    De Tommasi, G.; Manduchi, G.; Muir, D.G.; Ide, S.; Naito, O.; Urano, H.; Clement-Lorenzo, S.; Nakajima, N.; Ozeki, T.; Sartori, F.

    2015-01-01

    The ITER Remote Experimentation Centre (REC) is one of the projects under implementation within the Broader Approach (BA) agreement. The final objective of the REC is to allow researchers to take part in the experimentation on ITER from a remote location. Before ITER's first operations, the REC will be used to evaluate ITER-relevant technologies for remote participation. Among the different software tools needed for remote participation, an important one is the Remote Data Access (RDA) system, which provides a single software infrastructure to access data stored at the remotely participating experiment, regardless of the geographical location of the users. This paper introduces the European contribution to the RDA system for the REC.

  15. Verification of experimental modal modeling using HDR (Heissdampfreaktor) dynamic test data

    International Nuclear Information System (INIS)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1983-01-01

    Experimental modal modeling involves the determination of the modal parameters of the model of a structure from recorded input-output data from dynamic tests. Though commercial modal analysis algorithms are widely used in many industries, their ability to identify a set of reliable modal parameters of an as-built nuclear power plant structure has not been systematically verified. This paper describes the effort to verify MODAL-PLUS, a widely used modal analysis code, using recorded data from the dynamic tests performed on the reactor building of the Heissdampfreaktor, situated near Frankfurt, Federal Republic of Germany. In the series of dynamic tests on HDR in 1979, the reactor building was subjected to forced vibrations from different types and levels of dynamic excitation. Two sets of HDR containment building input-output data were chosen for MODAL-PLUS analyses. To reduce the influence of nonlinear behavior on the results, these sets were chosen so that the levels of excitation were relatively low and about the same in the two sets. The attempted verification was only partially successful: only one modal model, with a limited range of validity, could be synthesized, and the goodness of fit could be verified only within this limited range.

  16. Using simulation to interpret experimental data in terms of protein conformational ensembles.

    Science.gov (United States)

    Allison, Jane R

    2017-04-01

    In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averages or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling, and comparison of the more diverse conformational ensembles thus generated, have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
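    The maximum-entropy idea mentioned above can be sketched in a few lines: reweight simulated conformations as gently as possible so that the ensemble average of an observable matches the measured value. This is a schematic of the idea only, not any specific published estimator; the exponential-tilt form of the weights and the bisection bounds are assumptions of the sketch:

```python
import numpy as np

def maxent_reweight(obs, target):
    """Find weights w_i ∝ exp(lam * obs_i) whose weighted average of
    `obs` equals `target` (which must lie between min and max of obs).

    This exponential tilt is the minimal-relative-entropy adjustment of
    initially uniform weights subject to one average constraint."""
    def weights(lam):
        w = np.exp(lam * (obs - obs.max()))   # shifted for stability
        return w / w.sum()
    lo, hi = -50.0, 50.0                      # assumed bracket on lam
    for _ in range(200):                      # bisection: average is
        mid = 0.5 * (lo + hi)                 # monotone in lam
        w = weights(mid)
        if (w * obs).sum() < target:
            lo = mid
        else:
            hi = mid
    return w
```

    With many observables, the single scalar lam becomes a vector of Lagrange multipliers fitted jointly; Bayesian variants additionally weight the constraint by experimental and systematic uncertainty.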

  17. Development and testing of an Internet-based data collection technique for simulator and real world experimentation

    International Nuclear Information System (INIS)

    Droeivoldsmo, Asgeir; Johnsen, Terje

    2005-09-01

    With experience from many years of data collection in the Man-Machine and Virtual Reality Laboratories at the OECD Halden Reactor Project, an evident need for more efficient handling of questionnaire data was documented. A working prototype on-line system for World Wide Web (www) questionnaire generation and data collection was developed and tested. This paper discusses the use of www-based data collection and the need for system functionality in experiments and surveys. Insights from the development of the system are reported, together with experiences using such tools in simulation and realistic field experimentation. (Author)

  18. Unfolding of true distributions from experimental data distorted by detectors with finite resolutions

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    1993-01-01

    A new procedure for unfolding the true distribution from experimental data distorted by a detector is proposed. For a given detector, the result can be found by the least-squares method, hence without bias and with minimal statistical errors. Stability of the result is achieved at the expense of its information content and/or by using additional information on the shape of the distributions to be measured. The method may be applied to detectors with linear or nonlinear distortions. 8 refs.; 5 figs.
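    The least-squares core of such an unfolding can be sketched directly: with a detector response matrix R and a measured histogram m = R t, the estimate of the true distribution t solves the normal equations, and stability can be traded against information content through a small damping term. The Tikhonov form of the damping below is a common generic choice, not necessarily the paper's stabilization:

```python
import numpy as np

def unfold(R, m, alpha=0.0):
    """Least-squares unfolding of a true histogram t from measured data
    m = R t, with optional Tikhonov damping alpha for stability."""
    n = R.shape[1]
    # Solve (RᵀR + alpha*I) t = Rᵀ m; alpha = 0 gives the plain
    # least-squares (unbiased) solution, alpha > 0 damps oscillations.
    return np.linalg.solve(R.T @ R + alpha * np.eye(n), R.T @ m)
```

    With alpha = 0 and a well-conditioned response, the true distribution is recovered exactly; for nearly singular responses a small alpha stabilizes the result at the cost of some bias, mirroring the stability/information trade-off described above.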

  19. Invited review: Experimental design, data reporting, and sharing in support of animal systems modeling research.

    Science.gov (United States)

    McNamara, J P; Hanigan, M D; White, R R

    2016-12-01

    The National Animal Nutrition Program "National Research Support Project 9" supports efforts in livestock nutrition, including the National Research Council's committees on the nutrient requirements of animals. Our objective was to review the status of experimentation and data reporting in animal nutrition literature and to provide suggestions for the advancement of animal nutrition research and the ongoing improvement of field-applied nutrient requirement models. Improved data reporting consistency and completeness represent a substantial opportunity to improve nutrition-related mathematical models. We reviewed a body of nutrition research; recorded common phrases used to describe diets, animals, housing, and environmental conditions; and proposed equivalent numerical data that could be reported. With the increasing availability of online supplementary material sections in journals, we developed a comprehensive checklist of data that should be included in publications. To continue to improve our research effectiveness, studies utilizing multiple research methodologies to address complex systems and measure multiple variables will be necessary. From the current body of animal nutrition literature, we identified a series of opportunities to integrate research focuses (nutrition, reproduction and genetics) to advance the development of nutrient requirement models. From our survey of current experimentation and data reporting in animal nutrition, we identified 4 key opportunities to advance animal nutrition knowledge: (1) coordinated experiments should be designed to employ multiple research methodologies; (2) systems-oriented research approaches should be encouraged and supported; (3) publication guidelines should be updated to encourage and support sharing of more complete data sets; and (4) new experiments should be more rapidly integrated into our knowledge bases, research programs and practical applications. Copyright © 2016 American Dairy Science Association

  20. Adjustments in Almod3W2 transient analysis code to fit Angra 1 NPP experimental data

    International Nuclear Information System (INIS)

    Madeira, A.A.; Camargo, C.T.M.

    1988-01-01

A few minor modifications were introduced into the ALMOD3W2 code in order to reproduce the full load rejection test at the Angra 1 NPP. These modifications proved adequate when code results were compared with experimental data. (author) [pt

  1. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    Science.gov (United States)

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

    The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. 
Bucketization did not cause cells to overlap
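Of the three anonymization categories above, global generalization is the one the record reports as overlap-free: every record's quasi-identifier is coarsened by one shared rule before aggregation into cube cells. A minimal sketch, with hypothetical attribute names and bucket width not taken from the cited study:

```python
from collections import Counter

def generalize_age(age, width=20):
    """Map an exact age to a fixed global range, e.g. 37 -> '20-39'.
    Because the same rule applies to every record, cube cells never overlap."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

# Toy EMR records (hypothetical attributes for illustration only).
records = [{"age": 23, "dx": "flu"},
           {"age": 37, "dx": "flu"},
           {"age": 61, "dx": "asthma"}]

# One "data cube" cell group: counts aggregated by the generalized attribute.
cube = Counter((generalize_age(r["age"]), r["dx"]) for r in records)
```

The trade-off the abstract measures follows directly: a wider `width` shrinks the cube and strengthens privacy, but discards more of the original age information.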

  2. Evaluation of CHF experimental data for non-square lattice 7-rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Hyun; Yoo, Y. J.; Kim, K. K.; Zee, S. Q

    2001-01-01

A series of CHF experiments was conducted on 7-rod hexagonal test bundles in order to investigate the CHF characteristics of self-sustained square finned (SSF) rod bundles. The experiments were performed in the freon loop and water loop located at IPPE in Russia, and 609 freon-12 data points and 229 water data points were obtained from 7 kinds of test bundles classified by the combination of heated length and axial/radial power distributions. In an evaluation of four representative CHF correlations, the EPRI-1 correlation showed good prediction capability for SSF test bundles. The inlet-parameter CHF correlation suggested by IPPE gives a mean and standard deviation of P/M for uniformly heated test bundles of 1.002 and 0.049, respectively. In spite of its excellent accuracy, the correlation has a discontinuity at the boundary between the low-velocity and high-velocity conditions. KAERI's inlet-parameter correlation eliminates this defect by introducing the complete evaporation model at low-velocity conditions, and gives a mean and standard deviation of P/M of 0.095 and 0.062 for 496 uniformly heated data points, respectively. The mean/standard deviation of the local-parameter CHF correlations suggested by IPPE and KAERI are evaluated as 1.023/0.178 and 1.002/0.158, respectively. The inlet-parameter correlation developed from uniformly heated test bundles tends to under-predict CHF by about 3% for axially non-uniformly heated test bundles. On the other hand, the local-parameter correlation shows a large scatter of P/M and requires re-optimization for non-uniform axial power distributions. Analysis of the experimental data reveals that the correction model for axial power shapes suggested by IPPE is applicable to the inlet-parameter correlations. For the test bundle with a radially non-uniform power distribution, physically unexpected results were obtained at some experimental conditions. In addition

  3. TREAT experimental data base regarding fuel dispersals in LMFBR loss-of-flow accidents

    International Nuclear Information System (INIS)

    Simms, R.; Fink, C.L.; Stanford, G.S.; Regis, J.P.

    1981-01-01

    The reactivity feedback from fuel relocation is a central issue in the analysis of loss-of-flow (LOF) accidents in LMFBRs. Fuel relocation has been studied in a number of LOF simulations in the TREAT reactor. In this paper the results of these tests are analyzed, using, as the principal figure of merit, the changes in equivalent fuel worth associated with the fuel motion. The equivalent fuel worth was calculated from the measured axial fuel distributions by weighting the data with a typical LMFBR fuel-worth function. At nominal power, the initial fuel relocation resulted in increases in equivalent fuel worth. Above nominal power the fuel motion was dispersive, but the dispersive driving forces could not unequivocally be identified from the experimental data

  4. SADE: system of acquisition of experimental data. Definition and analysis of an experiment description language

    International Nuclear Information System (INIS)

    Gagniere, Jean-Michel

    1983-01-01

This research thesis presents a computer system for the acquisition of experimental data. It is aimed at acquiring, processing and storing information from particle detectors. The acquisition configuration is described by an experiment description language. The system comprises a lexical analyser, a syntactic analyser, a translator, and a data processing module. It also comprises a control language and a statistics management and plotting module. The translator builds up a series of tables which allow different sequences to be executed during an experiment: experiment running, calculations performed on the data, and the building up of statistics. Short execution time and ease of use are always sought [fr

  5. Experimental Data and Guidelines for Stone Masonry Structures: a Comparative Review

    International Nuclear Information System (INIS)

    Romano, Alessandra

    2008-01-01

Indications about the mechanical properties of masonry structures contained in many Italian guidelines are based on different aspects concerning both the constituent materials (units and mortar) and their assemblage. Indeed, the documents define different classes (depending on the type, the arrangement and the unit properties) and suggest the use of amplification coefficients to take into account the influence of different factors on the mechanical properties of masonry. In this paper, a critical discussion of the indications proposed by some Italian guidelines for stone masonry structures is presented. Particular attention is devoted to the classification criteria for the masonry type and to the choice of the amplification factors. Finally, a detailed analytical comparison between the suggested values and some relevant experimental data recently published is performed

  6. Numerical solution of a coefficient inverse problem with multi-frequency experimental raw data by a globally convergent algorithm

    Science.gov (United States)

    Nguyen, Dinh-Liem; Klibanov, Michael V.; Nguyen, Loc H.; Kolesov, Aleksandr E.; Fiddy, Michael A.; Liu, Hui

    2017-09-01

    We analyze in this paper the performance of a newly developed globally convergent numerical method for a coefficient inverse problem for the case of multi-frequency experimental backscatter data associated to a single incident wave. These data were collected using a microwave scattering facility at the University of North Carolina at Charlotte. The challenges for the inverse problem under the consideration are not only from its high nonlinearity and severe ill-posedness but also from the facts that the amount of the measured data is minimal and that these raw data are contaminated by a significant amount of noise, due to a non-ideal experimental setup. This setup is motivated by our target application in detecting and identifying explosives. We show in this paper how the raw data can be preprocessed and successfully inverted using our inversion method. More precisely, we are able to reconstruct the dielectric constants and the locations of the scattering objects with a good accuracy, without using any advanced a priori knowledge of their physical and geometrical properties.

  7. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: index of experimental data

    International Nuclear Information System (INIS)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-01-01

    Indexes to the neutron-induced interaction data in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has two arrangements: isotope (ZA) order and reaction-number order

  8. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: index of experimental data

    Energy Technology Data Exchange (ETDEWEB)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-07-04

    Indexes to the neutron-induced interaction data in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has two arrangements: isotope (ZA) order and reaction-number order.

  9. Covariance matrices of experimental data

    International Nuclear Information System (INIS)

    Perey, F.G.

    1978-01-01

A complete statement of the uncertainties in data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of 238U
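For two correlated measurements of the same quantity, the combination procedure with a non-diagonal covariance matrix reduces to a weighted mean with weights taken from the inverse covariance matrix. The following is a minimal sketch of that textbook least-squares combination, not Perey's specific procedure:

```python
def combine_two(x1, x2, v11, v22, v12):
    """Least-squares combination of two correlated measurements x1, x2 of the
    same quantity, given their 2x2 covariance matrix [[v11, v12], [v12, v22]].
    Returns the combined estimate and its variance."""
    det = v11 * v22 - v12 * v12
    # Weight of measurement i is the i-th row sum of the inverse covariance.
    w1 = (v22 - v12) / det
    w2 = (v11 - v12) / det
    s = w1 + w2
    mean = (w1 * x1 + w2 * x2) / s
    var = 1.0 / s
    return mean, var
```

When v12 = 0 this reduces to ordinary inverse-variance weighting; when v12 ≠ 0 the weights change, which is why the full covariance matrix, not just the standard deviations, must be propagated.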

  10. SPECT reconstruction of combined cone beam and parallel hole collimation with experimental data

    International Nuclear Information System (INIS)

    Li, Jianying; Jaszczak, R.J.; Turkington, T.G.; Greer, K.L.; Coleman, R.E.

    1993-01-01

The authors have developed three methods to combine parallel and cone beam (P and CB) SPECT data using modified Maximum Likelihood-Expectation Maximization (ML-EM) algorithms. The first combination method applies both parallel and cone beam data sets to reconstruct a single intermediate image after each iteration of the ML-EM algorithm. The other two iterative methods combine the intermediate parallel beam (PB) and cone beam (CB) source estimates to enhance the uniformity of images; these two are ad hoc methods. Earlier studies using computer Monte Carlo simulation suggested that improved images might be obtained by reconstructing combined P and CB SPECT data. Here, these combined collimation methods are qualitatively evaluated using experimental data. Attenuation compensation is performed by including the effects of attenuation in the transition matrix as a multiplicative factor. The combined P and CB images are compared with CB-only images, and the results indicate that the combined P and CB approaches suppress artifacts caused by truncated projections and correct for the distortions of the CB-only images
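The basic update that these combined methods modify is the standard multiplicative ML-EM step, x ← (x / sensitivity) · Aᵀ(y / Ax). A toy single-geometry sketch is given below; the system matrix is hypothetical, and the paper's specific P-and-CB weighting is not reproduced:

```python
def ml_em(y, A, n_iter=50):
    """Plain ML-EM for a non-negative linear model y ~= A @ x.
    A[i][j] = probability that an emission in voxel j is detected in bin i."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                      # uniform initial estimate
    # Sensitivity: total detection probability of each voxel (column sums of A).
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        # Forward projection of the current estimate.
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # Measured-over-predicted ratio per detector bin.
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        # Multiplicative update; preserves non-negativity of x.
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x
```

A combined-geometry variant would, per iteration, apply this update with the parallel-beam matrix and the cone-beam matrix in turn (or merge the two source estimates), which is the design choice the three methods in the abstract differ on.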

  11. Experimental demonstration of optical data links using a hybrid CAP/QAM modulation scheme.

    Science.gov (United States)

    Wei, J L; Ingham, J D; Cheng, Q; Cunningham, D G; Penty, R V; White, I H

    2014-03-15

The first known experimental demonstrations of a 10 Gb/s hybrid CAP-2/QAM-2 and a 20 Gb/s hybrid CAP-4/QAM-4 transmitter/receiver-based optical data link are performed. Successful transmission over 4.3 km of standard single-mode fiber (SMF) is achieved, with a link power penalty of ∼0.4 dBo for CAP-2/QAM-2 and ∼1.5 dBo for CAP-4/QAM-4 at a BER of 10^-9.

  12. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver, while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  13. KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems.

    Science.gov (United States)

    Costa, Rafael S; Veríssimo, André; Vinga, Susana

    2014-08-13

The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, along with estimating parameter values from kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists present only a summary of the obtained results for publication; the experimental data files are not usually submitted to any public repository, or are simply not available at all. In order to automate as much as possible the steps of building kinetic models, there is a growing requirement in the systems biology community for easily exchanging data in combination with models, which represents the main motivation for the development of KiMoSys. KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data of metabolites and enzymes and flux data. It was designed to ensure data management, storage and sharing for a wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates some tools to facilitate the kinetic model construction process of large-scale metabolic networks, especially when systems biologists perform computational research. KiMoSys is a web-based system that integrates a public data and associated model(s) repository with computational tools, providing the systems biology community with a novel application facilitating data storage and sharing, thus supporting the construction of ODE-based kinetic models and collaborative research projects. The web application was implemented using Ruby

  14. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.-M.

    2008-01-01

CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The paper reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of other non

  15. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.M.

    2007-01-01

CFD code validation requires experimental data that characterize distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The presentation reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state of the art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of

  16. Using numerical simulations to extract parameters of toroidal electron plasmas from experimental data

    DEFF Research Database (Denmark)

Ha, B. N.; Stoneking, M. R.; Marler, Joan

    2009-01-01

Measurements of the image charge induced on electrodes provide the primary means of diagnosing plasmas in the Lawrence Non-neutral Torus II (LNT II) [Phys. Rev. Lett. 100, 155001 (2008)]. Therefore, it is necessary to develop techniques that determine characteristics of the electron plasma from …, as in the cylindrical case. In the toroidal case, additional information about the m=1 motion of the plasma can be obtained by analysis of the image charge signal amplitude and shape. Finally, results from the numerical simulations are compared to experimental data from the LNT II and plasma characteristics…

  17. Comparative study of methods on outlying data detection in experimental results

    International Nuclear Information System (INIS)

    Oliveira, P.M.S.; Munita, C.S.; Hazenfratz, R.

    2009-01-01

The interpretation of experimental results through multivariate statistical methods may reveal the existence of outliers, which is rarely taken into account by analysts. However, their presence can influence the interpretation of results, generating false conclusions. This paper shows the importance of outlier detection for a database of 89 samples of ceramic fragments analyzed by neutron activation analysis. The results were submitted to five outlier-detection procedures: Mahalanobis distance, cluster analysis, principal component analysis, factor analysis, and standardized residuals. The results showed that although cluster analysis is one of the procedures most used to identify outliers, it can fail by not flagging samples that are easily identified as outliers by the other methods. In general, the statistical procedures for the identification of outliers are little known by analysts. (author)
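Of the five procedures listed, the Mahalanobis distance is the most direct to automate: it measures each sample's distance from the multivariate mean in units of the sample covariance. A minimal two-variable sketch follows (pure Python for illustration; real compositional data would use all measured elements and, ideally, a robust covariance estimate):

```python
def mahalanobis_outliers(data, threshold=3.0):
    """Flag indices of 2-column samples whose Mahalanobis distance from the
    sample mean exceeds `threshold` (2x2 covariance inverted in closed form)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    # Sample covariance matrix entries (unbiased, divisor n - 1).
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    flagged = []
    for i, (x, y) in enumerate(data):
        dx, dy = x - mx, y - my
        # d^2 = d^T * Sigma^{-1} * d, expanded for the 2x2 case.
        d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        if d2 ** 0.5 > threshold:
            flagged.append(i)
    return flagged
```

Note that for small samples the outlier itself inflates the covariance, capping the achievable distance; that masking effect is one reason the paper compares several procedures rather than relying on a single one.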

  18. Comparison of experimental data with results of some drying models for regularly shaped products

    Science.gov (United States)

    Kaya, Ahmet; Aydın, Orhan; Dincer, Ibrahim

    2010-05-01

This paper presents an experimental and theoretical investigation of the drying of moist slab, cylindrical and spherical products to study dimensionless moisture content distributions and their comparisons. The experimental study includes the measurement of the moisture content distributions of slab and cylindrical carrot, slab and cylindrical pumpkin, and spherical blueberry during drying at various temperatures (30, 40, 50 and 60°C) at a constant velocity (U = 1 m/s) and relative humidity (φ = 30%). In the theoretical analysis, two moisture transfer models are used to determine drying process parameters (e.g., drying coefficient and lag factor) and moisture transfer parameters (e.g., moisture diffusivity and moisture transfer coefficient), and to calculate the dimensionless moisture content distributions. The calculated results are then compared with the experimental moisture data. Considerably high agreement is obtained between the calculations and experimental measurements for the cases considered. The effective diffusivity values were evaluated between 0.741 × 10⁻⁵ and 5.981 × 10⁻⁵ m²/h for slab products, 0.818 × 10⁻⁵ and 6.287 × 10⁻⁵ m²/h for cylindrical products, and 1.213 × 10⁻⁷ and 7.589 × 10⁻⁷ m²/h for spherical products using Model I, and between 0.316 × 10⁻⁵ and 5.072 × 10⁻⁵ m²/h for slab products, 0.580 × 10⁻⁵ and 9.587 × 10⁻⁵ m²/h for cylindrical products, and 1.408 × 10⁻⁷ and 13.913 × 10⁻⁷ m²/h for spherical products using Model II.
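A common two-parameter form behind a "drying coefficient and lag factor" description is MR(t) = G·exp(-k·t), whose parameters can be recovered from measured moisture ratios by a log-linear least-squares fit. The sketch below assumes that form; the paper's exact Model-I/Model-II formulations are not given in the abstract:

```python
import math

def fit_drying_curve(times, mr):
    """Fit MR(t) = G * exp(-k * t), where G is the lag factor and k the drying
    coefficient, by ordinary least squares on ln(MR) versus t."""
    n = len(times)
    y = [math.log(m) for m in mr]        # linearize: ln MR = ln G - k t
    tbar = sum(times) / n
    ybar = sum(y) / n
    # Slope of the regression line is -k.
    k = -sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
        / sum((t - tbar) ** 2 for t in times)
    g = math.exp(ybar + k * tbar)        # exp(intercept)
    return g, k
```

Once k is known, an effective moisture diffusivity follows from the first term of the corresponding analytical series solution for the geometry, e.g. D ≈ 4L²k/π² for a slab of half-thickness L.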

  19. Experimental data of biomaterial derived from Malva sylvestris and charcoal tablet powder for Hg2+ removal from aqueous solutions

    Directory of Open Access Journals (Sweden)

    Alireza Rahbar

    2016-09-01

In this experimental data article, a novel biomaterial was prepared from Malva sylvestris and its properties were characterized using various instrumental techniques. The effects of the operating parameters pH and adsorbent dose on Hg2+ adsorption from aqueous solution using M. sylvestris powder (MSP) were compared with charcoal tablet powder (CTP), a medicinal drug. The data acquired showed that M. sylvestris is a viable and very promising alternative adsorbent for Hg2+ removal from aqueous solutions. The experimental data suggest that MSP is a potential adsorbent for use in medicine for the treatment of poisoning with heavy metals; however, application in animal models is a necessary step before the eventual application of MSP in situations involving humans. Keywords: Adsorption, Biomaterial, Hg2+ ion, Malva sylvestris, Charcoal tablet

  20. Utilizing experimentally derived multi-channel gamma-ray spectra for the analysis of airborne data

    International Nuclear Information System (INIS)

    Grasty, R.L.

    1982-01-01

Gamma-ray spectra derived from measurements on radioactive concrete calibration pads, using plywood sheets to simulate the attenuation effect of air, have been successfully tested on airborne data. Cesium-137 at 662 keV, from atomic weapons tests, was found to contribute significantly to the airborne spectrum. By fitting the experimental spectra above the cesium energy to airborne data, significant increases in accuracy were obtained for the measurement of uranium and thorium compared to the standard 3-window method. By including a cesium spectrum in the analysis of gamma-ray data from a survey carried out in Saskatchewan, it was found that background radiation due to atmospheric bismuth-214 could be measured more reliably than by using a constant over-water background. Similar results were obtained by monitoring low-energy lead-214 gamma-rays at 352 keV

  1. Loss of vacuum accident (LOVA): Comparison of computational fluid dynamics (CFD) flow velocities against experimental data for the model validation

    International Nuclear Information System (INIS)

    Bellecci, C.; Gaudio, P.; Lupelli, I.; Malizia, A.; Porfiri, M.T.; Quaranta, R.; Richetta, M.

    2011-01-01

A recognized safety issue for future fusion reactors fueled with deuterium and tritium is the generation of sizeable quantities of dust. Several mechanisms resulting from material response to plasma bombardment in normal and off-normal conditions are responsible for generating dust of micron and sub-micron length scales inside the VV (Vacuum Vessel) of experimental fusion facilities. Loss of coolant accidents (LOCA), loss of coolant flow accidents (LOFA) and loss of vacuum accidents (LOVA) are types of accidents, expected in experimental fusion reactors like ITER, that may jeopardize component and plasma vessel integrity and cause dust mobilization hazardous to workers and the public. Air velocity is the driving parameter for dust resuspension, and its characterization in the very first phase of the accident is critical for the dust release. To study the air velocity trend, a small facility, the Small Tank for Aerosol Removal and Dust (STARDUST), was set up at the University of Rome 'Tor Vergata' in collaboration with the ENEA Frascati laboratories. It simulates a low pressurization rate (300 Pa/s) LOVA event in ITER due to a small air inlet from two different leak positions: at the equatorial port level and at the divertor port level. The velocity magnitude in STARDUST was investigated in order to map the velocity field by means of a point capacitive transducer placed inside STARDUST without obstacles. FLUENT was used to simulate the flow behavior for the same LOVA scenarios used during the experimental tests. The results of these simulations were compared against the experimental data for CFD code validation. For validation purposes, the CFD simulation data were extracted at the same locations at which the experimental data were collected, for the first four seconds, because the maximum velocity values (which could cause almost complete dust mobilization) were measured at the beginning of the experiments. In this paper the authors present and discuss the

  2. Comparison of GEANT4 Simulations with Experimental Data for Thick Al Absorbers

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Lopes, Ricardo; Hormaza, Joel

    2009-01-01

Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Therefore, relatively small differences in the total proton stopping power given, for example, by the different models provided by GEANT4 can lead to significant disagreements in the final proton energy spectra when integrated along lengthy proton trajectories. This work presents proton energy spectra obtained by GEANT4.8.2 simulations using the ICRU49, Ziegler1985 and Ziegler2000 models for 19.68 MeV protons passing through a number of Al absorbers of various thicknesses. The spectra were compared with experimental data, with TRIM/SRIM2008 and MCNPX2.4.0 simulations, and with the Payne analytical solution for the transport equation in the Fokker-Planck approximation. It is shown that the MCNPX simulations reproduce all experimental spectra reasonably well. For the relatively thin targets all the methods give practically identical results, but this is not the case for the thick absorbers. It should be noted that all the spectra were measured at proton energies significantly above 2 MeV, i.e., in the so-called 'Bethe-Bloch region'. Therefore the observed disagreements in GEANT4 results simulated with different models are somewhat unexpected. Further studies are necessary for better understanding and definitive conclusions.
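The sensitivity to the stopping-power model described here arises from integrating 1/S(E) along the slowing-down path: small model differences in S(E) accumulate over a thick absorber. A sketch of that CSDA-range integral over a tabulated stopping power, using the trapezoidal rule (the table values below are purely illustrative, not ICRU49 or Ziegler data):

```python
def csda_range(energies, stopping, e0):
    """Continuous-slowing-down range R = integral from 0 to e0 of dE / S(E),
    approximated by the trapezoidal rule over a tabulated stopping power.
    `energies` must be increasing and contain e0 as a grid point."""
    r = 0.0
    for i in range(1, len(energies)):
        if energies[i] > e0:
            break
        de = energies[i] - energies[i - 1]
        r += 0.5 * de * (1.0 / stopping[i - 1] + 1.0 / stopping[i])
    return r
```

In reality S(E) → large values only above the very lowest energies, and a table starting at E = 0 with finite S is used here only to keep the illustration simple; a few-percent shift in S(E) propagates almost linearly into the computed range, which is why thick-absorber spectra discriminate between models while thin-target spectra do not.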

  3. Hemorheological changes in ischemia-reperfusion: an overview on our experimental surgical data.

    Science.gov (United States)

    Nemeth, Norbert; Furka, Istvan; Miko, Iren

    2014-01-01

Blood vessel occlusions of various origin, depending on their duration and extension, result in tissue damage, causing ischemic or ischemia-reperfusion injuries. Necessary surgical clamping of vessels in vascular, gastrointestinal or parenchymal organ surgery, flap preparation-transplantation in reconstructive surgery, as well as traumatological vascular occlusions, all present special aspects. Ischemia and reperfusion affect the hemorheological state in numerous ways: besides the local metabolic and micro-environmental changes, through hemodynamic alterations, free-radical and inflammatory pathways, acute phase reactions and coagulation changes. These processes may be harmful to red blood cells, impairing their deformability and influencing their aggregation behavior. However, there are still many unsolved or incompletely answered questions on the relation of hemorheology to ischemia-reperfusion. How do various organ (liver, kidney, small intestine) or limb ischemia-reperfusion processes of different duration and temperature affect the hemorheological factors? What is the expected magnitude and dynamics of these alterations? Where is the border of irreversibility? How can hemorheological investigations be applied to experimental models using laboratory animals with respect to inter-species differences? This paper gives a summary of some of our research data on organ/tissue ischemia-reperfusion, hemorheology and microcirculation, related to surgical research and experimental microsurgery.

  4. 232Th and 238U neutron emission cross section calculations and analysis of experimental data

    International Nuclear Information System (INIS)

    Tel, E.

    2004-01-01

    In this study, pre-equilibrium neutron-emission spectra produced by (n,xn) reactions on the nuclei 232Th and 238U have been calculated. Angle-integrated cross sections in neutron-induced reactions on 232Th and 238U targets have been calculated at bombarding energies up to 18 MeV. We have investigated the multiple pre-equilibrium matrix element constant from internal transitions for 232Th (n,xn) neutron emission spectra. In the calculations, the geometry-dependent hybrid model and the cascade exciton model including pre-equilibrium effects have been used. In addition, we have described how multiple pre-equilibrium emissions can be included in the Feshbach-Kerman-Koonin (FKK) fully quantum-mechanical theory. By analyzing the (n,xn) reaction on 232Th and 238U with incident energies from 2 MeV to 18 MeV, the importance of multiple pre-equilibrium emission can be seen clearly. All calculated results have been discussed and compared with the available experimental data and found to agree with them.

  5. Comparison of Laboratory Experimental Data to XBeach Numerical Model Output

    Science.gov (United States)

    Demirci, Ebru; Baykal, Cuneyt; Guler, Isikhan; Sogut, Erdinc

    2016-04-01

    generating data sets for testing and validation of sediment transport relationships for sand transport in the presence of waves and currents. In these series, there is no structure in the basin. The second and third series of experiments were designed to generate data sets for the development of tombolos in the lee of a detached 4-m-long rubble-mound breakwater located 4 m from the initial shoreline. The fourth series of experiments was conducted to investigate tombolo development in the lee of a 4-m-long T-head groin with the head section in the same location as in the second and third tests. The fifth series of experiments is used to investigate tombolo development in the lee of a 3-m-long rubble-mound breakwater positioned 1.5 m offshore of the initial shoreline. In this study, the data collected from the five experiments mentioned above are used to compare the experimental results with XBeach numerical model results, both for the "no-structure" and "with-structure" cases, with regard to sediment transport relationships in the presence of waves and currents as well as the shoreline changes around the detached breakwater and the T-head groin. The main purpose is to investigate the similarities and differences between the laboratory experimental data and the XBeach numerical model outputs for these five cases. References: Baykal, C., Sogut, E., Ergin, A., Guler, I., Ozyurt, G.T., Guler, G., and Dogan, G.G. (2015). Modelling Long Term Morphological Changes with XBeach: Case Study of Kızılırmak River Mouth, Turkey, European Geosciences Union, General Assembly 2015, Vienna, Austria, 12-17 April 2015. Gravens, M.B. and Wang, P. (2007). "Data report: Laboratory testing of longshore sand transport by waves and currents; morphology change behind headland structures." Technical Report, ERDC/CHL TR-07-8, Coastal and Hydraulics Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS.
Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de

  6. Evaluation of DOE radionuclide solubility data and selected retardation parameters: description of calculational and confirmatory experimental activities

    International Nuclear Information System (INIS)

    Kelmers, A.D.; Clark, R.J.; Cutshall, N.H.; Johnson, J.S.; Kessler, J.H.

    1983-01-01

    An experimentally oriented program has been initiated to support the NRC analysis and licensing activities related to high-level nuclear waste repositories. The program will allow the NRC to independently confirm key geochemical values used in the site performance assessments submitted by the DOE candidate repository site projects. Key radionuclide retardation factor values, particularly radionuclide solubility and sorption values under site specific geochemical conditions, are being confirmed. The initial efforts are being directed toward basalt rock/groundwater systems relevant to the BWIP candidate site in the Pasco Basin. Future work will consider tuff (NNWSI candidate site in Yucca Mountain) and salt (unspecified ONWI bedded or domal salt sites) rock/groundwater systems. Initial experimental results with technetium have confirmed the BWIP values for basalt/groundwater systems under oxic redox conditions: high solubility and no sorption. Under reducing redox conditions, however, the experimental work did not confirm the proposed technetium values recommended by BWIP. In the presence of hydrazine to establish reducing conditions, an apparent solubility limit for technetium of about 5E-7 mol/L was encountered; BWIP recommended calculated values of 1E-12 or greater than or equal to 1E-14 mol/L. Experimental evidence concerning sorption of reduced technetium species is incomplete at this time. Equilibrium speciation and saturation indices were calculated for well water data sets from BWIP using the computer code PHREEQUE. Oversaturation was indicated for hematite and quartz in all data sets. Near surface samples were undersaturated with respect to calcite, but most deep samples were oversaturated with respect to calcite and other carbonate minerals

  7. Statistical Multipath Model Based on Experimental GNSS Data in Static Urban Canyon Environment

    Directory of Open Access Journals (Sweden)

    Yuze Wang

    2018-04-01

    A deep understanding of multipath characteristics is essential to the design of signal simulators and receivers for global navigation satellite system applications. As new constellations are deployed and more applications appear in the urban environment, the statistical multipath models of navigation signals need further study. In this paper, we present statistical distribution models of multipath time delay, multipath power attenuation, and multipath fading frequency based on experimental data collected in an urban canyon environment. The raw multipath characteristics are obtained by processing real navigation signals. Fitting the statistical data shows that the probability distribution of time delay follows a gamma distribution, which is related to the waiting time of Poisson-distributed events. The fading frequency follows an exponential distribution, and the mean multipath power attenuation decreases linearly with increasing time delay. In addition, the detailed statistical characteristics for satellites of different elevations and orbits are studied, and the parameters of each distribution differ considerably. The research results give useful guidance to navigation simulator and receiver designers.
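
    The distribution fits described in the abstract can be illustrated on synthetic data: the sketch below draws gamma-distributed delays and exponentially distributed fading frequencies (the shape, scale and rate values are invented for illustration, not the paper's measured parameters) and recovers the parameters by the method of moments.

```python
import random
import statistics

random.seed(42)

# Synthetic stand-ins for the measured multipath characteristics; the
# gamma shape/scale and exponential rate below are invented values.
delays = [random.gammavariate(2.0, 50.0) for _ in range(5000)]  # ns
fading = [random.expovariate(0.25) for _ in range(5000)]        # Hz

# Method-of-moments fits of the distribution families named above:
# gamma(k, theta) has mean = k*theta and variance = k*theta^2.
m = statistics.fmean(delays)
v = statistics.variance(delays)
k_hat = m * m / v                         # gamma shape estimate
theta_hat = v / m                         # gamma scale estimate
lam_hat = 1.0 / statistics.fmean(fading)  # exponential rate estimate
```

    In practice one would fit the empirical histograms by maximum likelihood and test goodness of fit; moment matching is just the shortest self-contained illustration.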

  8. Comparison of experimental data with results of some drying models for regularly shaped products

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Ahmet [Aksaray University, Department of Mechanical Engineering, Aksaray (Turkey); Aydin, Orhan [Karadeniz Technical University, Department of Mechanical Engineering, Trabzon (Turkey); Dincer, Ibrahim [University of Ontario Institute of Technology, Faculty of Engineering and Applied Science, Oshawa, ON (Canada)

    2010-05-15

    This paper presents an experimental and theoretical investigation of the drying of moist slab, cylindrical and spherical products to study dimensionless moisture content distributions and their comparisons. The experimental study includes the measurement of the moisture content distributions of slab and cylindrical carrot, slab and cylindrical pumpkin and spherical blueberry during drying at various temperatures (30, 40, 50 and 60 °C) at a specific constant velocity (U = 1 m/s) and relative humidity φ = 30%. In the theoretical analysis, two moisture transfer models are used to determine drying process parameters (e.g., drying coefficient and lag factor) and moisture transfer parameters (e.g., moisture diffusivity and moisture transfer coefficient), and to calculate the dimensionless moisture content distributions. The calculated results are then compared with the experimental moisture data. Considerably high agreement is obtained between the calculations and experimental measurements for the cases considered. The effective diffusivity values were evaluated between 0.741 × 10⁻⁵ and 5.981 × 10⁻⁵ m²/h for slab products, 0.818 × 10⁻⁵ and 6.287 × 10⁻⁵ m²/h for cylindrical products and 1.213 × 10⁻⁷ and 7.589 × 10⁻⁷ m²/h for spherical products using model I, and 0.316 × 10⁻⁵ to 5.072 × 10⁻⁵ m²/h for slab products, 0.580 × 10⁻⁵ to 9.587 × 10⁻⁵ m²/h for cylindrical products and 1.408 × 10⁻⁷ to 13.913 × 10⁻⁷ m²/h for spherical products using model II. (orig.)
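
    Drying-coefficient/lag-factor models of this kind follow from Fick's law. A minimal sketch, assuming the one-term series solution for an infinite slab (lag factor G = 8/π², drying coefficient S = π²D/(4L²)) and invented values for D and L, shows how the effective diffusivity can be recovered from dimensionless moisture data by linear regression of ln(MR) against time.

```python
import math
import random

random.seed(1)

# One-term Fick solution for an infinite slab of half-thickness L drying
# from both faces: MR(t) = G * exp(-S * t). D_true, L and the 1% noise
# level are invented for illustration.
D_true = 2.0e-9                               # m^2/s
L = 0.005                                     # m
S_true = math.pi ** 2 * D_true / (4 * L * L)  # drying coefficient, 1/s
G_true = 8 / math.pi ** 2                     # lag factor
t = [600.0 * i for i in range(1, 31)]         # s
MR = [G_true * math.exp(-S_true * ti) * (1 + random.gauss(0, 0.01))
      for ti in t]

# Linear regression of ln(MR) on t: the slope gives the drying
# coefficient, the intercept the lag factor, hence the diffusivity.
y = [math.log(mr) for mr in MR]
n = len(t)
tb = sum(t) / n
yb = sum(y) / n
slope = (sum((ti - tb) * (yi - yb) for ti, yi in zip(t, y))
         / sum((ti - tb) ** 2 for ti in t))
S_hat = -slope
G_hat = math.exp(yb - slope * tb)
D_hat = 4 * S_hat * L * L / math.pi ** 2
```

    The cylinder and sphere geometries follow the same pattern with different lag factors and eigenvalues, which is how the paper's model I/model II diffusivity ranges are obtained from measured moisture curves.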

  9. Simultaneous estimation of experimental and material parameters

    CSIR Research Space (South Africa)

    Jansen van Rensburg, GJ

    2012-07-01

    to the experimental data. An inverse analysis is performed that determines material properties and boundary conditions simultaneously. This idea is investigated using virtual experimental data. The virtual experimental data is obtained by performing a finite element...

  10. Experimental aerodynamic and acoustic model testing of the Variable Cycle Engine (VCE) testbed coannular exhaust nozzle system: Comprehensive data report

    Science.gov (United States)

    Nelson, D. P.; Morris, P. M.

    1980-01-01

    The component detail design drawings of the one-sixth-scale model of the variable cycle engine testbed demonstrator exhaust system tested are presented. Also provided are the basic acoustic and aerodynamic data acquired during the experimental model tests. The model drawings, an index to the acoustic data, an index to the aerodynamic data, tabulated and graphical acoustic data, and the tabulated aerodynamic data and graphs are discussed.

  11. Experimental research data on stress state of salt rock mass around an underground excavation

    Science.gov (United States)

    Baryshnikov, VD; Baryshnikov, DV

    2018-03-01

    The paper presents experimental stress state data obtained in the salt rock mass surrounding an excavation in Mir Mine, ALROSA. The deformation characteristics and the values of stresses in the adjacent rock mass are determined. Using the method of drilling a pair of parallel holes in a stressed area, the authors construct a linear relationship for the radial displacements of the stress measurement hole boundaries under short-term loading of the perturbing hole. The resultant elasticity moduli of the rocks are comparable with the laboratory core test data. Preliminary estimates of the actual stresses point to the presence of a plasticity zone in the vicinity of the underground excavation. The stress state behavior at a distance from the excavation boundary disagrees with the Dinnik–Geim hypothesis.

  12. Identification of material properties of orthotropic composite plate using experimental frequency response function data

    Science.gov (United States)

    Tam, Jun Hui; Ong, Zhi Chao; Ismail, Zubaidah; Ang, Bee Chin; Khoo, Shin Yee

    2018-05-01

    The demand for composite materials is increasing due to their great superiority in material properties, e.g., light weight, high strength and high corrosion resistance. As a result, the invention of composite materials of diverse properties is becoming prevalent, thus leading to the development of material identification methods for composite materials. Conventional identification methods are destructive, time-consuming and costly. Therefore, an accurate identification approach is proposed to circumvent these drawbacks, involving the use of a Frequency Response Function (FRF) error function defined by the correlation discrepancy between experimental and finite-element-generated FRFs. A square E-glass epoxy composite plate is investigated under several different configurations of boundary conditions. Notably, the experimental FRFs are used as the correlation reference, such that, during computation, the predicted FRFs are continuously updated with reference to the experimental FRFs until a solution is achieved. The final identified elastic properties, namely the in-plane elastic moduli, Ex and Ey, in-plane shear modulus, Gxy, and major Poisson's ratio, vxy, of the composite plate are subsequently compared to the benchmark parameters as well as to those obtained using a modal-based approach. Compared to the modal-based approach, the proposed method is found to yield relatively better results. This can be explained by the direct employment of raw data in the proposed method, which avoids errors that might be incurred during the stage of modal extraction.
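
    The FRF error-function idea can be illustrated on a toy problem. The sketch below identifies the stiffness of a single-degree-of-freedom oscillator by minimizing the discrepancy between a reference ("experimental") FRF and predicted FRFs over a parameter grid; the oscillator and all values are hypothetical stand-ins for the plate's finite-element model and elastic constants.

```python
import math

def frf_mag(w, k, m=1.0, c=0.5):
    """Receptance magnitude of a 1-DOF oscillator; a hypothetical
    stand-in for the finite-element FRF of the plate."""
    return 1.0 / math.hypot(k - m * w * w, c * w)

freqs = [0.5 * i for i in range(1, 101)]       # rad/s grid
k_true = 400.0                                 # the "unknown" stiffness
exp_frf = [frf_mag(w, k_true) for w in freqs]  # reference ("experimental") FRF

def frf_error(k):
    """Sum-of-squares discrepancy between predicted and reference FRFs."""
    return sum((frf_mag(w, k) - h) ** 2 for w, h in zip(freqs, exp_frf))

# Model updating: choose the stiffness that minimizes the FRF error.
k_hat = min((200.0 + i for i in range(401)), key=frf_error)
```

    The real procedure updates several elastic constants at once with a proper optimizer, but the structure is the same: predicted FRFs are repeatedly regenerated and compared against the measured reference until the error is minimal.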

  13. Experimental economics for web mining

    OpenAIRE

    Tagiew, Rustam; Ignatov, Dmitry I.; Amroush, Fadi

    2014-01-01

    This paper offers a step towards a research infrastructure which makes data from experimental economics efficiently usable for the analysis of web data. We believe that regularities of human behavior found in experimental data also emerge in real-world web data. A format for data from experiments is suggested, which enables its publication as open data. Once standardized datasets of experiments are available on-line, web mining can take advantage of this data. Further, the questions about the o...

  14. Investigation of intermittency in simulated and experimental turbulence data by wavelet analysis

    International Nuclear Information System (INIS)

    Mahdizadeh, N.; Ramisch, M.; Stroth, U.; Lechte, C.; Scott, B.D.

    2004-01-01

    Turbulent transport in magnetized plasmas has an intermittent nature. Peaked probability density functions and a 1/frequency decay of the power spectra have been interpreted as signs of self-organized criticality generated, similar to a sand pile, by the critical gradients of ion- (ITG) or electron-temperature-gradient (ETG) driven instabilities. In order to investigate the degree of intermittency in toroidally confined plasmas in the absence of critical pressure or temperature gradients, data from the drift-Alfven-wave turbulence code DALF3 [B. Scott, Plasma Phys. Controlled Fusion 39, 1635 (1997)], running with a fixed background pressure gradient, and from a weakly driven low-temperature plasma are analyzed. The intermittency is studied on different temporal scales, which are separated by a wavelet transform. Simulated and experimental data reproduce the results on intermittent transport found in fusion plasmas. It can therefore be expected that in fusion plasmas, too, a substantial fraction of the bursty nature of turbulent transport is not related to avalanches caused by a critical gradient as generated by ITG or ETG turbulence
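
    Separating temporal scales and quantifying intermittency, as described above, can be sketched with the simplest wavelet, the Haar transform: the flatness (kurtosis) of the detail coefficients is near 3 for Gaussian data and well above 3 for bursty, intermittent data. The signals below are synthetic; this illustrates the analysis principle only, not the DALF3 or experimental data sets.

```python
import random
import statistics

random.seed(7)

def haar_step(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def flatness(d):
    """Kurtosis of the detail coefficients: ~3 for Gaussian signals,
    larger for intermittent ones."""
    mu = statistics.fmean(d)
    m2 = statistics.fmean([(v - mu) ** 2 for v in d])
    m4 = statistics.fmean([(v - mu) ** 4 for v in d])
    return m4 / m2 ** 2

n = 4096
gauss = [random.gauss(0, 1) for _ in range(n)]
# Intermittent signal: Gaussian background plus rare large bursts.
bursty = [random.gauss(0, 1) + (random.random() < 0.01) * random.gauss(0, 10)
          for _ in range(n)]

_, d_gauss = haar_step(gauss)
_, d_bursty = haar_step(bursty)
```

    Iterating `haar_step` on the approximation coefficients gives the detail series at successively coarser scales, so the flatness can be tracked scale by scale as in the paper's analysis.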

  15. PVTxy properties of CO2 mixtures relevant for CO2 capture, transport and storage: Review of available experimental data and theoretical models

    International Nuclear Information System (INIS)

    Li, Hailong; Jakobsen, Jana P.; Wilhelmsen, Oivind; Yan, Jinyue

    2011-01-01

    Highlights: → Accurate knowledge of the thermodynamic properties of CO2 is essential in the design and operation of CCS systems. → Experimental data on the phase equilibrium and density of CO2 mixtures have been reviewed. → Equations of state for CO2 mixtures have been reviewed as well; none shows any clear advantage in CCS applications. → The identified knowledge gaps suggest conducting more experiments and developing novel models. -- Abstract: Knowledge of pressure-volume-temperature-composition (PVTxy) properties plays an important role in the design and operation of many processes involved in CO2 capture and storage (CCS) systems. A literature survey was conducted on both the available experimental data and the theoretical models associated with the thermodynamic properties of CO2 mixtures within the operation window of CCS. Some gaps were identified between the available experimental data and the requirements of system design and operation. The major concerns are: for the vapour-liquid equilibrium, there are no data on CO2/COS and few data on the CO2/N2O4 mixture. For the volume property, there are no published experimental data for CO2/O2, CO2/CO, CO2/N2O4, CO2/COS and CO2/NH3, or for the liquid volume of CO2/H2. The experimental data available for multi-component CO2 mixtures are also scarce. Many equations of state are available for thermodynamic calculations of CO2 mixtures. The cubic equations of state have the simplest structure and are capable of giving reasonable results for the PVTxy properties. More complex equations of state such as Lee-Kesler, SAFT and GERG typically give better results for the volume property, but not necessarily for the vapour-liquid equilibrium. None of the equations of state evaluated in the literature shows any clear advantage in CCS applications for the calculation of all PVTxy properties. A reference equation of state for CCS should, thus, be a future goal.
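
    As an illustration of the cubic equation-of-state class mentioned above, the sketch below evaluates the Peng-Robinson compressibility factor for pure CO2 (standard critical constants; a Newton iteration started from Z = 1 picks out the largest, vapour-phase root of the cubic). This is a textbook sketch, not one of the reviewed models or a CCS-validated correlation.

```python
import math

def pr_Z(T, P, Tc=304.13, Pc=7.3773e6, omega=0.2239):
    """Vapour-phase compressibility factor of pure CO2 from the
    Peng-Robinson EOS (T in K, P in Pa)."""
    R = 8.314462
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)

    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0;
    # Newton from Z = 1 converges to the largest (vapour) root.
    def f(Z):
        return (Z ** 3 - (1 - B) * Z ** 2 + (A - 3 * B * B - 2 * B) * Z
                - (A * B - B * B - B ** 3))

    def df(Z):
        return 3 * Z ** 2 - 2 * (1 - B) * Z + (A - 3 * B * B - 2 * B)

    Z = 1.0
    for _ in range(50):
        Z -= f(Z) / df(Z)
    return Z

Z = pr_Z(350.0, 5.0e6)  # supercritical CO2 at 350 K, 5 MPa
```

    For mixtures, the same cubic is used with composition-dependent mixing rules for a and b, which is where the binary data gaps listed above bite: the interaction parameters cannot be regressed without measurements.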

  16. Distribution of iodine between water and steam: a reassessment of experimental data on hypoiodous acid

    International Nuclear Information System (INIS)

    Turner, D.J.

    1978-01-01

    A re-analysis has been made of published data on the steam/water distribution of iodine between 118 and 287 °C. The analysis assumes that the principal reactions are as follows: I₂ + H₂O = HIO + H⁺ + I⁻ and 3I₂ + 3H₂O = IO₃⁻ + 5I⁻ + 6H⁺, for which the equilibrium constants are respectively K₂ and K₅. The analysis of the experimental data was supported by using empirically and theoretically based equations which describe the temperature dependence of equilibrium constants and by comparing predicted behaviour with the observations reported from a number of boiling water reactors. (author)
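
    The temperature dependence of equilibrium constants such as K₂ and K₅ is commonly described by the integrated van 't Hoff equation. The sketch below extrapolates a hypothetical hydrolysis constant across the paper's temperature range; the reference value and reaction enthalpy are invented for illustration, not the paper's fitted constants.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def extrapolate_K(K_ref, T_ref, T, dH):
    """Integrated van 't Hoff equation, assuming a temperature-
    independent reaction enthalpy dH (J/mol):
    ln(K/K_ref) = -(dH/R) * (1/T - 1/T_ref)."""
    return K_ref * math.exp(-dH / R * (1.0 / T - 1.0 / T_ref))

# Hypothetical hydrolysis constant at 118 C extrapolated to 287 C with
# an assumed dH of +50 kJ/mol (illustrative numbers only):
K_391 = 1.0e-12
K_560 = extrapolate_K(K_391, 391.15, 560.15, 50.0e3)
```

    For an endothermic reaction the constant grows with temperature, which is why a wide experimental range (118 to 287 °C here) is needed to pin down both the constant and its effective enthalpy.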

  17. Modeling long-term yield trends of Miscanthusxgiganteus using experimental data from across Europe

    DEFF Research Database (Denmark)

    Lesur, Claire; Jeuffroy, Marie-Hélène; Makowski, David

    2013-01-01

    M. giganteus is known to have an establishment phase during which annual yields increase as a function of crop age, followed by a ceiling phase, the duration of which is unknown. We built a database including 16 European long-term experiments (i) to describe the yield evolution during the establishment and the ceiling phases and (ii) to determine whether the M. giganteus ceiling phase is followed by a decline phase where yields decrease across years. Data were analyzed through comparisons between a set of statistical growth models. The model that best fitted the experimental data included a decline phase. The decline intensity and the value of several other model parameters, such as the maximum yield reached during the ceiling phase or the duration of the establishment phase, were highly variable. The highest maximum yields were obtained in the experiments located in the southern part of the studied area.

  18. Incompressible boundary-layer stability analysis of LFC experimental data for sub-critical Mach numbers. M.S. Thesis

    Science.gov (United States)

    Berry, S. A.

    1986-01-01

    An incompressible boundary-layer stability analysis of Laminar Flow Control (LFC) experimental data was completed and the results are presented. This analysis was undertaken for three reasons: to study laminar boundary-layer stability on a modern swept LFC airfoil; to calculate incompressible design limits of linear stability theory as applied to a modern airfoil at high subsonic speeds; and to verify the use of linear stability theory as a design tool. The experimental data were taken from the slotted LFC experiment recently completed in the NASA Langley 8-Foot Transonic Pressure Tunnel. Linear stability theory was applied and the results were compared with transition data to arrive at correlated n-factors. Results of the analysis showed that for the configuration and cases studied, Tollmien-Schlichting (TS) amplification was the dominating disturbance influencing transition. For these cases, incompressible linear stability theory correlated with an n-factor for TS waves of approximately 10 at transition. The n-factor method correlated rather consistently to this value despite a number of non-ideal conditions which indicates the method is useful as a design tool for advanced laminar flow airfoils.

  19. Airborne release fractions/rates and respirable fractions for nonreactor nuclear facilities. Volume 1, Analysis of experimental data

    International Nuclear Information System (INIS)

    1994-12-01

    This handbook contains (1) a systematic compilation of airborne release and respirable fraction experimental data for nonreactor nuclear facilities, (2) assessments of the data, and (3) values derived from assessing the data that may be used in safety analyses when the data are applicable. To assist in consistent and effective use of this information, the handbook provides: identification of a consequence determination methodology in which the information can be used; discussion of the applicability of the information and its general technical limits; identification of specific accident phenomena of interest for which the information is applicable; and examples of use of the consequence determination methodology and airborne release and respirable fraction information

  20. A unified approach to linking experimental, statistical and computational analysis of spike train data.

    Directory of Open Access Journals (Sweden)

    Liang Meng

    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach--linking statistical, computational, and experimental neuroscience--provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
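
    The sequential Monte Carlo approach described above reduces to a few lines for a toy state-space model: propagate particles through the dynamics, weight them by the likelihood of each observation, form the estimate, and resample. The linear-Gaussian model below is a minimal stand-in for the biophysical neuron models, not the authors' filter.

```python
import math
import random

random.seed(0)

# Toy state-space model: x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
a, q, r = 0.95, 0.5, 1.0
T, N = 100, 500
x, xs, ys = 0.0, [], []
for _ in range(T):
    x = a * x + random.gauss(0, q)
    xs.append(x)
    ys.append(x + random.gauss(0, r))

# Bootstrap particle filter: propagate, weight by the observation
# likelihood, estimate the state, then resample.
parts = [0.0] * N
est = []
for y in ys:
    parts = [a * p + random.gauss(0, q) for p in parts]
    w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
    tot = sum(w)
    w = [wi / tot for wi in w]
    est.append(sum(wi * p for wi, p in zip(w, parts)))
    parts = random.choices(parts, weights=w, k=N)  # multinomial resampling
```

    With a conductance-based neuron model, the hidden state would include gating variables and the observation likelihood would come from the spike train, but the propagate-weight-resample loop is unchanged.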

  1. Prediction of the health effects of inhaled transuranium elements from experimental animal data

    International Nuclear Information System (INIS)

    Bair, W.J.; Thomas, J.M.

    1976-01-01

    Although animal experiments are conducted to obtain data that can be used to predict the consequences of exposure to alpha-emitting elements on human health, scientists have been hesitant to project the results of animal experiments to man. However, since a human data base does not exist for inhaled transuranics, the animal data cannot be overlooked. The paper describes the derivation of linear non-threshold response relationships for lung cancer in rats after inhalation of alpha-emitting transuranium elements. These relationships were used to calculate risk estimates, which were then compared with a value calculated from the incidence of lung cancer in humans who had been exposed to sources of radiation other than the transuranics. Both estimates were compared with the estimated cancer risk associated with the annual whole-body dose limit of 5 rems for occupational exposure. The rat data suggest that the risk from a working lifetime exposure of 15 rem/a to the lungs from transuranium elements may be 5 times the risk incurred with a whole-body exposure of 5 rem/a, while the human data suggest the risk may be less. Since the histological type of plutonium-induced lung cancer that occurs in experimental animals is rare in man, the use of animal data to estimate risks may be conservative. Risk estimates calculated directly from the results of experiments in which animals actually inhaled transuranic particles circumvent such controversial issues as 'hot particles'. (author)

  2. Protein folding: Defining a standard set of experimental conditions and a preliminary kinetic data set of two-state proteins

    DEFF Research Database (Denmark)

    Maxwell, Karen L.; Wildes, D.; Zarrine-Afsar, A.

    2005-01-01

    Recent years have seen the publication of both empirical and theoretical relationships predicting the rates with which proteins fold. Our ability to test and refine these relationships has been limited, however, by a variety of difficulties associated with the comparison of folding and unfolding rates, thermodynamics, and structure across diverse sets of proteins. These difficulties include the wide, potentially confounding range of experimental conditions and methods employed to date and the difficulty of obtaining correct and complete sequence and structural details for the characterized constructs. The lack of a single approach to data analysis and error estimation, or even of a common set of units and reporting standards, further hinders comparative studies of folding. In an effort to overcome these problems, we define here a consensus set of experimental conditions (25°C at pH 7.0, 50 m...). The aim of these efforts is to set uniform standards for the experimental community and to initiate an accumulating, self-consistent data set that will aid ongoing efforts to understand the folding process.

  3. Program X4TOC4 (Version 86-1). Translation of experimental data from the EXFOR format to a computation format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1986-09-01

    Experimental nuclear reaction data are compiled worldwide in the EXFOR format. The computer program X4TOC4 described in the present document translates data from the rather flexible EXFOR format to the more rigid ''computation format'', which is suitable for input to further computer processing of the data, including graphical plotting. The program is available cost-free from the IAEA Nuclear Data Section upon request. (author)

  4. Numerical prediction of the cyclic behaviour of metallic polycrystals and comparison with experimental data

    International Nuclear Information System (INIS)

    Sauzay, M.; Ferrie, E.; Steckmeyer, A.

    2010-01-01

    Grain size seems to have only a minor influence on the cyclic stress-strain curves (CSSCs) of metallic polycrystals of medium to high stacking fault energy (SFE). Many authors have therefore tried to deduce the macroscopic CSSCs from the single-crystal ones. Either crystals oriented for single slip or for multiple slip were considered. In addition, a scale transition law should be used (from the grain scale to the macroscopic scale). The Sachs rule (homogeneous stress, single slip) or the Taylor rule (homogeneous plastic strain, multiple slip) was usually used. But the predicted macroscopic CSSCs do not generally agree with the experimental data for metals and alloys presenting various SFE values. In order to avoid the choice of a particular scale transition rule, many finite element (FE) computations are carried out using meshes of polycrystals including more than one hundred grains without texture. This allows the study of the influence of the crystalline constitutive laws on the macroscopic CSSCs. Activation of a secondary slip system in grains oriented for single slip is either allowed or hindered (slip planarity), which strongly affects the macroscopic CSSCs. The more planar the slip, the higher the predicted macroscopic stress amplitudes. If grains oriented for single slip obey slip planarity and two crystalline CSSCs are used (one for single-slip grains and one for multiple-slip grains), then the predicted macroscopic CSSCs agree well with experimental data provided the SFE is not too low (austenitic steel 316L, copper, nickel, aluminium). (authors)

  5. Experimental demonstration of a format-flexible single-carrier coherent receiver using data-aided digital signal processing.

    Science.gov (United States)

    Elschner, Robert; Frey, Felix; Meuer, Christian; Fischer, Johannes Karl; Alreesh, Saleem; Schmidt-Langhorst, Carsten; Molle, Lutz; Tanimura, Takahito; Schubert, Colja

    2012-12-17

    We experimentally demonstrate the use of data-aided digital signal processing for format-flexible coherent reception of different 28-GBd PDM and 4D modulated signals in WDM transmission experiments over up to 7680 km SSMF by using the same resource-efficient digital signal processing algorithms for the equalization of all formats. Stable and regular performance in the nonlinear transmission regime is confirmed.
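
    Data-aided processing means the receiver adapts its equalizer using known transmitted symbols rather than blind criteria. A minimal sketch, assuming QPSK pilots, an invented 2-tap channel and a least-mean-squares (LMS) tap update; this illustrates the data-aided principle only, not the authors' algorithms.

```python
import cmath
import random

random.seed(3)

# Known QPSK training symbols through an invented 2-tap channel + noise.
qpsk = [cmath.exp(1j * cmath.pi / 4 * (2 * k + 1)) for k in range(4)]
tx = [random.choice(qpsk) for _ in range(2000)]
h = [1.0, 0.4 + 0.2j]  # hypothetical channel impulse response
rx = [sum(h[j] * tx[i - j] for j in range(len(h)) if i - j >= 0)
      + 0.01 * complex(random.gauss(0, 1), random.gauss(0, 1))
      for i in range(len(tx))]

# Data-aided LMS: the error is formed against the *known* symbol tx[i],
# so no blind or decision-directed step is needed.
taps = [0j, 0j, 0j]
mu = 0.05
err2 = []
for i in range(2, len(rx)):
    window = [rx[i], rx[i - 1], rx[i - 2]]
    out = sum(t * v for t, v in zip(taps, window))
    e = tx[i] - out
    err2.append(abs(e) ** 2)
    taps = [t + mu * e * v.conjugate() for t, v in zip(taps, window)]
```

    Because the reference symbols are known, the same update works for any pilot modulation format, which is the sense in which data-aided equalization is format-flexible.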

  6. Radiological characteristics of light-water reactor spent fuel: A literature survey of experimental data

    International Nuclear Information System (INIS)

    Roddy, J.W.; Mailen, J.C.

    1987-12-01

    This survey brings together the experimentally determined light-water reactor spent fuel data comprising radionuclide composition, decay heat, and photon and neutron generation rates as identified in a literature survey. Many citations compare these data with values calculated using a radionuclide generation and depletion computer code, ORIGEN, and these comparisons have been included. ORIGEN is a widely recognized method for estimating the actinide, fission product, and activation product contents of irradiated reactor fuel, as well as the resulting heat generation and radiation levels. These estimates are used as source terms in safety evaluations of operating reactors, for evaluation of fuel behavior and regulation of the at-reactor storage, for transportation studies, and for evaluation of the ultimate geologic storage of spent fuel. 82 refs., 4 figs., 17 tabs

  7. Watchdog - a workflow management system for the distributed analysis of large-scale experimental data.

    Science.gov (United States)

    Kluge, Michael; Friedel, Caroline C

    2018-03-13

    The development of high-throughput experimental technologies, such as next-generation sequencing, have led to new challenges for handling, analyzing and integrating the resulting large and diverse datasets. Bioinformatical analysis of these data commonly requires a number of mutually dependent steps applied to numerous samples for multiple conditions and replicates. To support these analyses, a number of workflow management systems (WMSs) have been developed to allow automated execution of corresponding analysis workflows. Major advantages of WMSs are the easy reproducibility of results as well as the reusability of workflows or their components. In this article, we present Watchdog, a WMS for the automated analysis of large-scale experimental data. Main features include straightforward processing of replicate data, support for distributed computer systems, customizable error detection and manual intervention into workflow execution. Watchdog is implemented in Java and thus platform-independent and allows easy sharing of workflows and corresponding program modules. It provides a graphical user interface (GUI) for workflow construction using pre-defined modules as well as a helper script for creating new module definitions. Execution of workflows is possible using either the GUI or a command-line interface and a web-interface is provided for monitoring the execution status and intervening in case of errors. To illustrate its potential on a real-life example, a comprehensive workflow and modules for the analysis of RNA-seq experiments were implemented and are provided with the software in addition to simple test examples. Watchdog is a powerful and flexible WMS for the analysis of large-scale high-throughput experiments. We believe it will greatly benefit both users with and without programming skills who want to develop and apply bioinformatical workflows with reasonable overhead. The software, example workflows and a comprehensive documentation are freely available.
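
    At its core, a WMS such as Watchdog resolves the mutual dependencies between analysis steps before executing them. A minimal sketch of that dependency resolution, with hypothetical RNA-seq-style task names (Watchdog itself defines workflows in XML and runs on Java, so this is only an illustration of the concept):

```python
# Each task lists the tasks it depends on; topological sorting yields a
# valid execution order, and independent tasks could run in parallel.
from graphlib import TopologicalSorter

workflow = {
    "trim":   set(),            # trim raw reads
    "qc":     {"trim"},         # quality control on trimmed reads
    "align":  {"trim"},         # align trimmed reads
    "count":  {"align"},        # count reads per gene
    "report": {"count", "qc"},  # final report needs counts and QC
}

order = list(TopologicalSorter(workflow).static_order())
```

    A real WMS layers the features listed above on top of this core: per-replicate task instantiation, dispatch to distributed executors, error detection and re-run of failed subtrees.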

  8. Correlation between the meteorological data acquisition systems of the Centro Experimental ARAMAR

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Rando M.; Beu, Cássia M.L., E-mail: rando.oliveira@marinha.mil.br, E-mail: cassia.beu@marinha.mil.br [Centro Tecnológico da Marinha em São Paulo (CEA/CTMSP), Iperó, SP (Brazil). Centro Experimental ARAMAR

    2017-07-01

    Centro Experimental ARAMAR (CEA) is a Brazilian Navy Technological Center located in the rural area of Iperó (São Paulo State), about 10 km from the nearest urban area. Among the most important activities at CEA are nuclear fuel cycle research and the development of a small-scale pressurized water reactor (PWR) land-based prototype. The Laboratório Radioecológico (LARE) is responsible for the meteorological observation program, which relies on an automatic data collection system. The following variables are continuously measured: pressure, precipitation, wind speed, wind direction, temperature and relative humidity. The obtained data are refined and used in the annual reports to the Comissão Nacional de Energia Nuclear (CNEN), and are an important input for atmospheric dispersion models. Due to the construction of the Laboratório de Geração Nucleoelétrica (LABGENE), it will be necessary to change the location of the towers and meteorological sensors. Thus, since 2014, a new set of towers and sensors (Torre Nova) has been in operation. The new location is 900 m from the old set (Torre Velha). Therefore, CEA currently has two meteorological data acquisition systems, which have been operating concurrently for approximately three years. The present work aims to compare the meteorological data of both systems in order to verify their agreement. The meteorological time series of both systems were submitted to a statistical analysis to evaluate their correlation. The results of this work confirm the compatibility of the two systems, showing that the Torre Velha can be deactivated without impairment to the meteorological time series. (author)
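The agreement check described above amounts to correlating paired time series from the two towers. A minimal Pearson-correlation sketch (the temperature values are illustrative, not the CEA data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equally long series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical hourly temperatures (deg C) from the two towers
torre_velha = [21.3, 22.1, 23.8, 25.0, 24.2, 22.7]
torre_nova  = [21.1, 22.0, 23.5, 24.8, 24.1, 22.5]
r = pearson(torre_velha, torre_nova)
```

For sites 900 m apart one would expect r close to 1 for most variables; a full comparison would also examine bias and scatter, not correlation alone.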

  9. Correlation between the meteorological data acquisition systems of the Centro Experimental ARAMAR

    International Nuclear Information System (INIS)

    Oliveira, Rando M.; Beu, Cássia M.L.

    2017-01-01

    Centro Experimental ARAMAR (CEA) is a Brazilian Navy Technological Center located in the rural area of Iperó (São Paulo State), about 10 km from the nearest urban area. Among the most important activities at CEA are nuclear fuel cycle research and the development of a small-scale pressurized water reactor (PWR) land-based prototype. The Laboratório Radioecológico (LARE) is responsible for the meteorological observation program, which relies on an automatic data collection system. The following variables are continuously measured: pressure, precipitation, wind speed, wind direction, temperature and relative humidity. The obtained data are refined and used in the annual reports to the Comissão Nacional de Energia Nuclear (CNEN), and are an important input for atmospheric dispersion models. Due to the construction of the Laboratório de Geração Nucleoelétrica (LABGENE), it will be necessary to change the location of the towers and meteorological sensors. Thus, since 2014, a new set of towers and sensors (Torre Nova) has been in operation. The new location is 900 m from the old set (Torre Velha). Therefore, CEA currently has two meteorological data acquisition systems, which have been operating concurrently for approximately three years. The present work aims to compare the meteorological data of both systems in order to verify their agreement. The meteorological time series of both systems were submitted to a statistical analysis to evaluate their correlation. The results of this work confirm the compatibility of the two systems, showing that the Torre Velha can be deactivated without impairment to the meteorological time series. (author)

  10. Experimental design and data-analysis in label-free quantitative LC/MS proteomics: A tutorial with MSqRob.

    Science.gov (United States)

    Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven

    2018-01-16

    Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation on analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and also detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to a wider adaptation of advanced peptide-based models, resulting in higher quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users that aim at automating MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Green-Kubo relation for viscosity tested using experimental data for a two-dimensional dusty plasma

    Science.gov (United States)

    Feng, Yan; Goree, J.; Liu, Bin; Cohen, E. G. D.

    2011-10-01

    The theoretical Green-Kubo relation for viscosity is tested using experimentally obtained data. In a dusty plasma experiment, micron-sized dust particles are introduced into a partially ionized argon plasma, where they become negatively charged. They are electrically levitated to form a single-layer Wigner crystal, which is subsequently melted using laser heating. In the liquid phase, these dust particles experience interparticle electric repulsion, laser heating, and friction from the ambient neutral argon gas, and they can be considered to be in a nonequilibrium steady state. Direct measurements of the positions and velocities of individual dust particles are then used to obtain a time series for an off-diagonal element of the stress tensor and its time autocorrelation function. This calculation also requires the interparticle potential, which was not measured experimentally but was obtained using a Debye-Hückel-type model with experimentally determined parameters. Integrating the autocorrelation function over time yields the viscosity for shearing motion among dust particles. The viscosity so obtained is found to agree with results from a previous experiment using a hydrodynamical Navier-Stokes equation. This comparison serves as a test of the Green-Kubo relation for viscosity. Our result is also compared to the predictions of several simulations.
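The computation described above, autocorrelating an off-diagonal stress element and integrating over time, can be sketched as follows. The 2D prefactor A/kT and trapezoid integration are standard Green-Kubo ingredients, but the units, cutoff choice, and input series here are illustrative, not the authors' analysis.

```python
def autocorrelation(series, max_lag):
    """Time-origin-averaged <s(0) s(t)> for lags 0 .. max_lag-1."""
    n = len(series)
    return [sum(series[i] * series[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def green_kubo_viscosity(stress_xy, dt, area, kT, max_lag):
    """2D Green-Kubo estimate: eta = (A / kT) * integral of <P_xy(0) P_xy(t)> dt,
    with the integral evaluated by the trapezoid rule on the sampled ACF."""
    acf = autocorrelation(stress_xy, max_lag)
    integral = dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))
    return (area / kT) * integral
```

In practice the upper integration limit (`max_lag`) must be chosen where the autocorrelation has decayed but noise has not yet accumulated, which is the delicate step in any Green-Kubo evaluation.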

  12. Experimental atomic physics

    International Nuclear Information System (INIS)

    Anon.

    1985-01-01

    The experimental atomic physics program within the physics division is carried out by two groups, whose reports are given in this section. Work of the accelerator atomic physics group is centered around the 6.5-MV EN tandem accelerator; consequently, most of its research is concerned with atomic processes occurring to, or initiated by, few-MeV/amu heavy ions. Other activities of this group include higher-energy experiments at the Holifield Heavy Ion Research Facility (HHIRF), studies of electron and positron channeling radiation, and collaborative experiments at other institutions. The second experimental group concerns itself with lower-energy atomic collision physics in support of the Fusion Energy Program. During the past year, the new Electron Cyclotron Resonance Source has been completed and some of the first data from this facility are presented. In addition to these two activities in experimental atomic physics, other chapters of this report describe progress in theoretical atomic physics, experimental plasma diagnostic development, and atomic data center compilation activities.

  13. Theoretical Evaluation of Crosslink Density of Chain Extended Polyurethane Networks Based on Hydroxyl Terminated Polybutadiene and Butanediol and Comparison with Experimental Data

    Science.gov (United States)

    Sekkar, Venkataraman; Alex, Ancy Smitha; Kumar, Vijendra; Bandyopadhyay, G. G.

    2018-01-01

    Polyurethane networks between hydroxyl terminated polybutadiene (HTPB) and butanediol (BD) were prepared using toluene diisocyanate (TDI) as the curative. HTPB and BD were taken at equivalent ratios of 1:0, 1:1, 1:2, 1:4, and 1:8. Crosslink density (CLD) was theoretically calculated using the α-model equations developed by Marsh. CLD for the polyurethane networks was experimentally evaluated from equilibrium swell and stress-strain data. Young's modulus and Mooney-Rivlin approaches were adopted to calculate CLD from the stress-strain data. Experimentally obtained CLD values were enormously higher than the theoretical values, especially at higher BD/HTPB equivalent ratios. The difference between the theoretical and experimental CLD values was explained in terms of local crystallization due to the formation of hard segments and hydrogen-bonded interactions.
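One of the stress-strain routes mentioned above, estimating crosslink density from Young's modulus, follows from affine rubber elasticity, E = 3νRT. The sketch below applies that standard relation; the modulus and temperature are illustrative values, not the paper's data, and the paper's Mooney-Rivlin and swelling routes are not shown.

```python
R = 8.314  # gas constant, J/(mol K)

def cld_from_modulus(E_pa, T_kelvin):
    """Crosslink density nu (mol of elastically effective chains per m^3)
    from Young's modulus via affine rubber elasticity: E = 3 nu R T."""
    return E_pa / (3.0 * R * T_kelvin)

# Hypothetical elastomer: E = 1.5 MPa at 298 K
nu = cld_from_modulus(1.5e6, 298.0)
```

A swelling-based estimate (Flory-Rehner) from the same sample would provide an independent cross-check, which is essentially the comparison the abstract describes.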

  14. Neutron Elastic Scattering Cross Sections Experimental Data and Optical Model Cross Section Calculations. A Compilation of Neutron Data from the Studsvik Neutron Physics Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Holmqvist, B; Wiedling, T

    1969-06-15

    Neutron elastic scattering cross section measurements have been going on for a long period at the Studsvik Van de Graaff laboratory. The cross sections of a range of elements have been investigated in the energy interval 1.5 to 8 MeV. The experimental data have been compared with cross sections calculated with the optical model when using a local nuclear potential.

  15. Spontaneous burp phenomenon: analysis of experimental data and approach to theoretical interpretation

    International Nuclear Information System (INIS)

    Shabalin, E.

    2004-01-01

    Cases of spontaneous release of energy (''burp'') have been observed numerous times in solid methane, water ice and other hydrogenous compounds irradiated in fast neutron fields with an absorbed dose of 2-10 MGy. The nature of this phenomenon is that the accumulation of radicals, combined with the thermal instability of a sample at a sufficient radical concentration, culminates in an autocatalytic reaction of their recombination. The most troublesome feature of this phenomenon is that the irradiation time before a burp occurs varies significantly even under identical irradiation conditions; correspondingly, the amount of released energy varies as well. For example, in the URAM-2 experiments with water ice, the irradiation time before the appearance of a spontaneous burp varied from 5 hours to 11 hours for equal samples and up to 20 hours for the smallest sample. Another feature of the phenomenon is that regularities for the occurrence of spontaneous energy release (that is, the dependence on temperature, sample size and absorbed dose) cannot be extracted explicitly from the experimental data when applying the known relations for the critical concentration of radicals, namely: Semenov's and Frank-Kamenetskii's conditions of thermal instability of a sample, Jackson's relation for a chain process of recombination of uniformly distributed radicals, a critical condition based on accounting for the acceleration of recombination in the regions of micro-cracks, and, finally, a critical condition for irregular distribution of radicals proposed by the author of this article. None of these theories predicts the casual character of a burp on a macro-scale basis, so that, notwithstanding the fair amount of experimental data on spontaneous burps, it is still impossible to derive a well-balanced theory of this phenomenon. One attempt to understand the reason for this strange behavior is a probabilistic, cluster model of spontaneous energy release, developed in chapter 4 of this paper.
Spontaneous burps observed during URAM-2 project and in solid methane

  16. CDApps: integrated software for experimental planning and data processing at beamline B23, Diamond Light Source.

    Science.gov (United States)

    Hussain, Rohanah; Benning, Kristian; Javorfi, Tamas; Longo, Edoardo; Rudd, Timothy R; Pulford, Bill; Siligardi, Giuliano

    2015-03-01

    The B23 Circular Dichroism beamline at Diamond Light Source has been operational since 2009 and has seen visits from more than 200 user groups, who have generated large amounts of data. Based on the experience of overseeing the users' progress at B23, four key areas requiring the most assistance are identified: planning of experiments and note-keeping; designing titration experiments; processing and analysis of the collected data; and production of experimental reports. To streamline these processes an integrated software package has been developed and made available for the users. The subsequent article summarizes the main features of the software.

  17. Experimental investigation of auroral generator regions with conjugate Cluster and FAST data

    Directory of Open Access Journals (Sweden)

    O. Marghitu

    2006-03-01

    Here and in the companion paper, Hamrin et al. (2006), we present experimental evidence for the crossing of auroral generator regions, based on conjugate Cluster and FAST data. To our knowledge, this is the first investigation that concentrates on the evaluation of the power density, E·J, in auroral generator regions, by using in-situ measurements. The Cluster data we discuss were collected within the Plasma Sheet Boundary Layer (PSBL), during a quiet magnetospheric interval, as judged from the geophysical indices, and several minutes before the onset of a small substorm, as indicated by the FAST data. Even at quiet times, the PSBL is an active location: electric fields are associated with plasma motion, caused by the dynamics of the plasma-sheet/lobe interface, while electrical currents are induced by pressure gradients. In the example we show, these ingredients do indeed sustain the conversion of mechanical energy into electromagnetic energy, as proved by the negative power density, E·J<0. The plasma characteristics in the vicinity of the generator regions indicate a complicated 3-D wavy structure of the plasma sheet boundary. Consistent with this structure, we suggest that at least part of the generated electromagnetic energy is carried away by Alfvén waves, to be dissipated in the ionosphere, near the polar cap boundary. Such a scenario is supported by the FAST data, which show energetic electron precipitation conjugated with the generator regions crossed by Cluster. A careful examination of the conjunction timing contributes to the validation of the generator signatures.
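The generator-region criterion used above is simply the sign of the power density E·J formed from conjugate field and current measurements: negative means mechanical energy is being converted into electromagnetic energy. A minimal sketch with illustrative (not measured) vectors:

```python
def power_density(E, J):
    """E . J (W/m^3 in SI units). A negative value is the generator
    signature: mechanical energy converted into electromagnetic energy."""
    return sum(e * j for e, j in zip(E, J))

# Hypothetical 3-component vectors; magnitudes chosen only to show the sign logic
E = (2.0, -1.0, 0.5)
J = (-1.0, 1.5, 0.0)
p = power_density(E, J)
```

In the actual analysis E and J come from multi-spacecraft measurements (the curlometer technique for J, electric field instruments for E), and establishing E·J < 0 beyond the error bars is the hard part.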

  18. Experimental overview

    International Nuclear Information System (INIS)

    Nagamiya, Shoji

    1992-01-01

    Five years ago the first heavy-ion beams were accelerated at both the BNL-AGS and the CERN-SPS. This conference is the 5th anniversary in the experimental field. Currently, four experimental groups (E802/E859, E810, E814, E858) are taking data at BNL and eight groups (NA34-3, NA44, NA45, NA35, NA36, NA38, WA80/WA93, WA85) at CERN. Au and Pb beams are about to come, and a lot of activities are going on for RHIC and LHC. The purpose of this talk is to overview where we are, in particular, by looking at the past data. In this talk, the data of proton rapidity distributions are reviewed first to study nuclear transparency, then, the data of energy spectra and slopes, HBT and anti d production are discussed in connection with the evolution of the collision. Third, the data of strangeness production are described. Finally, the status of J/ψ and that of soft photons and electron pairs are briefly overviewed. (orig.)

  19. Ionizing radiation-induced cancers. Experimental and clinical data

    International Nuclear Information System (INIS)

    Joveniaux, Alain.

    1978-03-01

    This work attempts to give an overview of radiocarcinogenesis, both experimental and clinical. Experimentally, the possibility of radio-induced cancer formation has considerable doctrinal importance since it proves beyond question the carcinogenic effect of radiation, and also yields basic information on the essential constants implicated in its occurrence: the need for a latency time, varying with the animal species and technique used, but quite long in relation to the specific lifetime of each species; the importance of a massive irradiation, more conducive to cancerisation as long as it produces no necroses liable to stop the formation of any subsequent neoplasia; and finally, the rarity of its occurrence. Clinically, although the cause-and-effect relationship between treatment and cancer is sometimes difficult to establish categorically, the fact is that hundreds of particularly disturbing observations remain, and from their number an undeniable clinical certainty often emerges under well-defined circumstances. Most importantly, these observations fix the criteria necessary for the possibility of a radio-induced cancer to arise, i.e.: the notion of a prior irradiation; the appearance of a cancer in the irradiation area; serious tissue damage in relation with an excessive radiation dose; and a long latency period between irradiation and appearance of the cancer. [fr]

  20. Experimental data on fission and (n,xn) reactions

    International Nuclear Information System (INIS)

    Belier, G.; Chatillon, A.; Granier, T.; Laborie, J.M.; Laurent, B.; Ledoux, X.; Taieb, J.; Varignon, C.; Bauge, E.; Bersillon, O.; Aupiais, J.; Le Petit, G.; Authier, N.; Casoli, P.

    2011-01-01

    Investigations on neutron-induced fission of actinides and the deuteron breakup are presented. Neutron-induced fission has been studied for 10 years at the WNR (Weapons Neutron Research) neutron facility of the Los Alamos Neutron Science Center (LANSCE). Thanks to this white neutron source the evolution of the prompt fission neutron energy spectra as a function of the incident neutron energy has been characterized in a single experiment up to 200 MeV incident energy. For some isotopes the prompt neutron multiplicity has been extracted. These experimental results demonstrated the effect on the mean neutron energy of the neutron emission before scission for energies higher than the neutron binding energy. This extensive program (235U, 238U, 239Pu, 237Np and 232Th were measured) is completed by neutron spectra measurements on the CEA 4 MV accelerator. The D(n,2n) reaction is studied both theoretically and experimentally. The cross-section was calculated for several nucleon-nucleon interactions including the AV18 interaction. It has also been measured on the CEA 7 MV tandem accelerator at incident neutron energies up to 25 MeV. Uncertainties lower than 8% between 5 and 10 MeV were obtained. In particular these experiments have extended the measured domain for cross sections. (authors)

  1. Synthesis of analytical and experimental data, capacity evaluation

    International Nuclear Information System (INIS)

    Lin Chiwenn

    2001-01-01

    This part of the presentation deals with the synthesis of analytical and experimental data and capacity evaluation. First, a typical test flow diagram will be discussed to identify key aspects of the test program where analysis is to be performed. Next, actual component test and analysis programs will be presented to illustrate some important parameters to be considered in the modelling process. Then, two combined test and analysis projects will be reviewed to demonstrate the potential use of substructuring in model testing to reduce the size of the model to be tested. This will be followed by an inelastic response spectral reactor coolant loop analysis, which was used to study a high-level seismic test conducted for a PWR reactor coolant system. The potential use of an improved impact calculation method will be discussed after that. As a closure to the test and analysis synthesis process, a reactor internal qualification process will be discussed. Finally, capacity evaluation will be discussed, following the requirements of the ASME Section III code for class 1 pressure vessels, class 1 piping (which includes the reactor coolant loop piping), and the reactor internals. The subsections included in this part of the presentation cover the above-mentioned subjects: typical component test and analysis results; the combined test and analysis process; a simplified inelastic response spectral analysis of the reactor coolant loop; an improved impact analysis methodology; the reactor coolant system and core internal qualification process; ASME Section III code design by analysis of class 1 pressure vessels; design by analysis of class 1 piping; and ASME Section III code design by analysis of reactor core internals.

  2. Ignition and Growth Modeling of Detonating LX-04 (85% HMX / 15% VITON) Using New and Previously Obtained Experimental Data

    Science.gov (United States)

    Tarver, Craig

    2017-06-01

    An Ignition and Growth reactive flow model for detonating LX-04 (85% HMX / 15% Viton) was developed using new and previously obtained experimental data on: cylinder test expansion; wave curvature; failure diameter; and laser interferometric copper and tantalum foil free surface velocities and LiF interface particle velocity histories. A reaction product JWL EOS generated by the CHEETAH code compared favorably with the existing, well normalized LX-04 product JWL when both were used with the Ignition and Growth model. Good agreement with all existing experimental data was obtained. Keywords: LX-04, HMX, detonation, Ignition and Growth. PACS: 82.33.Vx, 82.40.Fp. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  3. Evaluation of the 235U prompt fission neutron spectrum including a detailed analysis of experimental data and improved model information

    Science.gov (United States)

    Neudecker, Denise; Talou, Patrick; Kahler, Albert C.; White, Morgan C.; Kawano, Toshihiko

    2017-09-01

    We present an evaluation of the 235U prompt fission neutron spectrum (PFNS) induced by thermal to 20-MeV neutrons. Experimental data and associated covariances were analyzed in detail. The incident energy dependence of the PFNS was modeled with an extended Los Alamos model combined with the Hauser-Feshbach and the exciton models. These models describe prompt fission, pre-fission compound nucleus and pre-equilibrium neutron emissions. The evaluated PFNS agree well with the experimental data included in this evaluation, preliminary data of the LANL and LLNL Chi-Nu measurement and recent evaluations by Capote et al. and Rising et al. However, they are softer than the ENDF/B-VII.1 (VII.1) and JENDL-4.0 PFNS for incident neutron energies up to 2 MeV. Simulated effective multiplication factors keff of the Godiva and Flattop-25 critical assemblies are further from the measured keff if the current data are used within VII.1 compared to using only VII.1 data. However, if this work is used with ENDF/B-VIII.0β2 data, simulated values of keff agree well with the measured ones.

  4. Statistical evaluation of accelerated stability data obtained at a single temperature. I. Effect of experimental errors in evaluation of stability data obtained.

    Science.gov (United States)

    Yoshioka, S; Aso, Y; Takeda, Y

    1990-06-01

    Accelerated stability data obtained at a single temperature are statistically evaluated, and the utility of such data for the assessment of stability is discussed, focusing on the chemical stability of solution-state dosage forms. The probability that the drug content of a product is observed to be within the lower specification limit in the accelerated test is interpreted graphically. This probability depends on experimental errors in the assay and temperature control, as well as on the true degradation rate and activation energy. Therefore, the observation that the drug content meets the specification in accelerated testing can provide only limited information on the shelf-life of the drug without knowledge of the activation energy and the accuracy and precision of the assay and temperature control.
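The extrapolation underlying such accelerated tests is usually Arrhenius-based: a degradation rate measured at elevated temperature is projected to the storage temperature, then converted into a shelf-life estimate. The sketch below assumes first-order kinetics; the rate constant, activation energy and temperatures are illustrative, not the paper's data, and the abstract's point is precisely that this projection is only as good as the assumed activation energy and assay precision.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_at(T2, k1, T1, Ea):
    """Arrhenius extrapolation: k2 = k1 * exp(-(Ea/R) * (1/T2 - 1/T1))."""
    return k1 * math.exp(-(Ea / R) * (1.0 / T2 - 1.0 / T1))

def shelf_life_first_order(k, limit_fraction=0.90):
    """Time for first-order loss to reach the specification limit (e.g. 90%)."""
    return -math.log(limit_fraction) / k

# Hypothetical: k = 0.010 per month measured at 40 C, Ea = 83 kJ/mol, storage at 25 C
k25 = rate_at(298.15, k1=0.010, T1=313.15, Ea=83_000.0)
t90 = shelf_life_first_order(k25)  # months until 90% of label claim
```

Propagating the assay and temperature-control errors through `Ea` and `k1` would show how wide the confidence interval on `t90` really is, which is the paper's caution.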

  5. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    Science.gov (United States)

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcome some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb, and to what extent, to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.
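A drastically simplified illustration of what structure identification seeks: perturb each species of a toy network, record the response of every rate, and reduce the resulting Jacobian to a sign pattern (who activates or inhibits whom). This finite-difference sketch is not the experimental algebraic-equation framework of the paper, which must work from perturbation data without access to the rate functions; it only shows the kind of interaction map being recovered.

```python
def jacobian_signs(f, x0, eps=1e-6, tol=1e-4):
    """Finite-difference Jacobian of the rate functions f at state x0,
    reduced to a sign pattern: entry [i][j] is +1/-1/0 according to the
    effect of species j on the rate of species i."""
    n = len(x0)
    f0 = f(x0)
    signs = []
    for i in range(n):
        row = []
        for j in range(n):
            xp = list(x0)
            xp[j] += eps
            d = (f(xp)[i] - f0[i]) / eps
            row.append(0 if abs(d) < tol else (1 if d > 0 else -1))
        signs.append(row)
    return signs

# Hypothetical 3-node cascade: x1 activates x2, x2 activates x3, all decay
def rates(x):
    x1, x2, x3 = x
    return [-0.5 * x1,            # x1: decay only
            1.0 * x1 - 0.5 * x2,  # x2: activated by x1
            1.0 * x2 - 0.5 * x3]  # x3: activated by x2

S = jacobian_signs(rates, [1.0, 1.0, 1.0])
```

The experimental problem is harder because only steady-state or temporal responses to parameter perturbations are observable, which is exactly the gap the unified framework addresses.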

  6. Development of programs for the control of the experimental logic and for data acquisition

    International Nuclear Information System (INIS)

    Kraemer-Flecken, A.

    1988-01-01

    The current state of technology allows the experimental electronics to be constructed essentially faster through the use of ECL modules. By using the established CAMAC standard it is possible to calibrate experimental configurations by means of a computer. New techniques in the fabrication of microprocessors and memory ICs allow the use of microprocessors for the control of the experiment electronics and contribute to the creation of a modular, transportable computer system that is independent of large computers. For the calibration of complex detector systems, new CAMAC plug-ins exist which allow data acquisition on the CAMAC bus. With the new eightfold ADCs, precision measurements can be performed. An upgrade of such small data acquisition systems incorporating the VME bus can be realized very soon. In this way, nuclear spectroscopy experiments can be performed considerably more simply. (HSI)

  7. Relevance and reliability of experimental data in human health risk assessment of pesticides.

    Science.gov (United States)

    Kaltenhäuser, Johanna; Kneuer, Carsten; Marx-Stoelting, Philip; Niemann, Lars; Schubert, Jens; Stein, Bernd; Solecki, Roland

    2017-08-01

    Evaluation of data relevance, reliability and contribution to uncertainty is crucial in regulatory health risk assessment if robust conclusions are to be drawn. Whether a specific study is used as key study, as additional information or not accepted depends in part on the criteria according to which its relevance and reliability are judged. In addition to GLP-compliant regulatory studies following OECD Test Guidelines, data from peer-reviewed scientific literature have to be evaluated in regulatory risk assessment of pesticide active substances. Publications should be taken into account if they are of acceptable relevance and reliability. Their contribution to the overall weight of evidence is influenced by factors including test organism, study design and statistical methods, as well as test item identification, documentation and reporting of results. Various reports make recommendations for improving the quality of risk assessments and different criteria catalogues have been published to support evaluation of data relevance and reliability. Their intention was to guide transparent decision making on the integration of the respective information into the regulatory process. This article describes an approach to assess the relevance and reliability of experimental data from guideline-compliant studies as well as from non-guideline studies published in the scientific literature in the specific context of uncertainty and risk assessment of pesticides. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Correlations among experimental and theoretical NMR data to determine the absolute stereochemistry of darcyribeirine, a pentacyclic indole alkaloid isolated from Rauvolfia grandiflora

    Science.gov (United States)

    Cancelieri, Náuvia Maria; Ferreira, Thiago Resende; Vieira, Ivo José Curcino; Braz-Filho, Raimundo; Piló-Veloso, Dorila; Alcântara, Antônio Flávio de Carvalho

    2015-10-01

    Darcyribeirine (1) is a pentacyclic indole alkaloid isolated from Rauvolfia grandiflora. The stereochemistry of 1 was previously proposed based on 1D (coupling constant data) and 2D (NOESY correlations) NMR techniques, establishing the configuration as 3R, 15S, and 20R (isomer 1a). Stereoisomers of 1 (i.e., 1a-1h) can be grouped into four sets of enantiomers. Carbon chemical shifts and hydrogen coupling constants were calculated at the BLYP/6-31G* theory level for the eight isomers of 1. Calculated NMR data of 1a-1h were correlated with the corresponding experimental data of 1. The best correlations between theoretical and experimental carbon chemical shift data were obtained for the set of enantiomers 1e/1f, for structures in the gaseous phase and considering solvent effects (using PCM and explicit models). Similar results were obtained when the same procedure was applied to correlations between theoretical and experimental coupling constant data. Finally, optical rotation calculations indicate 1e as the absolute stereochemistry. Orbital population analysis indicates that the hydrogen bonding between the N-H of 1e and DMSO is due to contributions of its frontier unoccupied molecular orbitals, mainly LUMO+1, LUMO+2, and LUMO+3.

  9. Correlation of experimental damage data for the development of the UT-MARLOWE Monte Carlo ion implant simulator

    International Nuclear Information System (INIS)

    Morris, M. F.; Tian, S.; Chen, Y.; Tasch, A.; Baumann, S.; Kirchhoff, J. F.; Hummel, R.; Prussin, S.; Kamenitsa, D.; Jackson, J.

    1999-01-01

    The Monte Carlo ion implant simulator UT-MARLOWE has previously been verified against a large array of Secondary Ion Mass Spectroscopy (SIMS) data (∼200 profiles per ion species) (1). A model has recently been developed (1) to explicitly simulate defect production, diffusion, and their interactions during the picosecond 'defect production stage' of ion implantation. In order to thoroughly validate this model, both SIMS and various damage measurements were obtained (primarily channeling Rutherford Backscattering Spectroscopy, Differential Reflectometry and Tapered Groove Profilometry, supported by SEM and XTEM data). In general, the data from the various experimental techniques were consistent, and the Kinetic Accumulation Damage Model (KADM) was developed and validated using these data. This paper discusses the gathering of damage data in conjunction with SIMS in support of the development of an ion implantation simulator.

  10. Examination of Experimental Data for Irradiation - Creep in Nuclear Graphite

    Science.gov (United States)

    Mobasheran, Amir Sassan

    The objective of this dissertation was to establish the credibility and confidence levels of the observed behavior of nuclear graphite in a neutron irradiation environment. Available experimental data associated with the OC-series irradiation-induced creep experiments performed at the Oak Ridge National Laboratory (ORNL) were examined. Pre- and postirradiation measurement data were studied considering "linear" and "nonlinear" creep models. The nonlinear creep model allows the creep coefficient to vary with neutron fluence due to the densification of graphite under neutron irradiation. Within the range of neutron fluence involved (up to 0.53 × 10^26 neutrons/m^2, E > 50 keV), the two models were capable of explaining about 96% and 80% of the variation of the irradiation-induced creep strain with neutron fluence at temperatures of 600 °C and 900 °C, respectively. Temperature and reactor power data were analyzed to determine the best estimates for the actual irradiation temperatures. According to thermocouple readouts, the best estimates of the irradiation temperatures were well within ±10 °C of the design temperatures of 600 °C and 900 °C. The dependence of the secondary creep coefficients (for both linear and nonlinear models) on irradiation temperature was determined assuming that the variation of creep coefficient with temperature, in the temperature range studied, is reasonably linear. It was concluded that the variability in the estimates of the creep coefficients is not the result of temperature fluctuations in the experiment. The coefficients for the constitutive equation describing the overall growth of grade H-451 graphite were also studied. It was found that the modulus of elasticity and the shear modulus are not affected by creep and that the electrical resistivity is changed only slightly (less than 5%) by creep. However, the coefficient of thermal expansion does change with creep.
The consistency of
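A linear creep model of the kind examined above can be fitted by ordinary least squares of creep strain against neutron fluence, with the coefficient of determination (R²) quantifying how much of the variation the model explains. The sketch below is purely illustrative: the fluence and strain values are invented, not the ORNL OC-series data, and the slope-through-origin form is one simple choice of linear creep law.

```python
# Hedged sketch: fit a linear creep law, strain = k * fluence, and report R^2.
# Data values below are invented for illustration, not experimental results.

def fit_linear_creep(fluence, strain):
    """Least-squares slope through the origin and R^2 for strain vs fluence."""
    k = sum(f * s for f, s in zip(fluence, strain)) / sum(f * f for f in fluence)
    mean_s = sum(strain) / len(strain)
    ss_res = sum((s - k * f) ** 2 for f, s in zip(fluence, strain))
    ss_tot = sum((s - mean_s) ** 2 for s in strain)
    return k, 1.0 - ss_res / ss_tot

fluence = [0.1, 0.2, 0.3, 0.4, 0.53]                 # 10^26 n/m^2 (illustrative)
strain = [0.0011, 0.0019, 0.0032, 0.0041, 0.0052]    # creep strain (illustrative)

k, r2 = fit_linear_creep(fluence, strain)
print(f"creep coefficient k = {k:.4f}, R^2 = {r2:.3f}")
```

A nonlinear model, as described in the record, would replace the constant k with a fluence-dependent coefficient before fitting.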

  11. Towards Coordination Patterns for Complex Experimentations in Data Mining

    NARCIS (Netherlands)

    F. Arbab (Farhad); C. Diamantini (Claudia); D. Potena (Domenico); E. Storti (Emanuele)

    2010-01-01

    In order to support the management of experimental activities in a networked scientific community, the exploitation of the service-oriented paradigm and technologies is a hot research topic in E-science. In particular, scientific workflows can be modeled by resorting to the notion of process.

  12. Neridronate: From Experimental Data to Clinical Use

    Directory of Open Access Journals (Sweden)

    Addolorata Corrado

    2017-09-01

    Neridronate is an amino-bisphosphonate that has been officially approved as a treatment for osteogenesis imperfecta, Paget’s disease of bone and type I complex regional pain syndrome in Italy. Neridronate is administered either intravenously or intramuscularly; thus, it represents a valid option for both cases with contraindications to the use of oral bisphosphonates and cases with contraindications or an inability to receive an intravenous administration of these drugs. Furthermore, although the official authorized use of neridronate is limited to only 3 bone diseases, many experimental and clinical studies support the rationale for its use and provide evidence of its effectiveness in other pathologic bone conditions that are characterized by altered bone remodelling.

  13. Comparison of various structural damage tracking techniques with unknown excitations based on experimental data

    Science.gov (United States)

    Huang, Hongwei; Yang, Jann N.; Zhou, Li

    2009-03-01

    An early detection of structural damages is critical for the decision making of repair and replacement maintenance in order to guarantee a specified structural reliability. Consequently, the structural damage detection, based on vibration data measured from the structural health monitoring (SHM) system, has received considerable attention recently. The traditional time-domain analysis techniques, such as the least square estimation (LSE) method and the extended Kalman filter (EKF) approach, require that all the external excitations (inputs) be available, which may not be the case for some SHM systems. Recently, these two approaches have been extended to cover the general case where some of the external excitations (inputs) are not measured, referred to as the LSE with unknown inputs (LSE-UI) and the EKF with unknown inputs (EKF-UI). Also, new analysis methods, referred to as the sequential non-linear least-square estimation with unknown inputs and unknown outputs (SNLSE-UI-UO) and the quadratic sum-square error with unknown inputs (QSSE-UI), have been proposed for the damage tracking of structures when some of the acceleration responses are not measured and the external excitations are not available. In this paper, these newly proposed analysis methods will be compared in terms of accuracy, convergence and efficiency, for damage identification of structures based on experimental data obtained through a series of experimental tests using a small-scale 3-story building model with white noise excitation. The capability of the LSE-UI, EKF-UI, SNLSE-UI-UO and QSSE-UI approaches in tracking the structural damages will be demonstrated.

  14. Using Tabulated Experimental Data to Drive an Orthotropic Elasto-Plastic Three-Dimensional Model for Impact Analysis

    Science.gov (United States)

    Hoffarth, C.; Khaled, B.; Rajan, S. D.; Goldberg, R.; Carney, K.; DuBois, P.; Blankenhorn, Gunther

    2016-01-01

    An orthotropic elasto-plastic-damage three-dimensional model with tabulated input has been developed to analyze the impact response of composite materials. The theory has been implemented as MAT 213 into a tailored version of LS-DYNA being developed under a joint effort of the FAA and NASA and has the following features: (a) the theory addresses any composite architecture that can be experimentally characterized as an orthotropic material and includes rate and temperature sensitivities, (b) the formulation is applicable for solid as well as shell element implementations and utilizes input data in a tabulated form directly from processed experimental data, (c) deformation and damage mechanics are both accounted for within the material model, (d) failure criteria are established that are functions of strain and damage parameters, and mesh size dependence is included, and (e) the theory can be efficiently implemented into a commercial code for both sequential and parallel executions. The salient features of the theory as implemented in LS-DYNA are illustrated using a widely used composite - the T800S/3900-2B[P2352W-19] BMS8-276 Rev-H-Unitape fiber/resin unidirectional composite. First, the experimental tests to characterize the deformation, damage and failure parameters in the material behavior are discussed. Second, the MAT213 input model and implementation details are presented with particular attention given to procedures that have been incorporated to ensure that the yield surfaces in the rate and temperature dependent plasticity model are convex. Finally, the paper concludes with a validation test designed to test the stability, accuracy and efficiency of the implemented model.

  15. Modern methods of experimental construction of texture complete direct pole figures by using X-ray data

    Science.gov (United States)

    Isaenkova, M.; Perlovich, Yu; Fesenko, V.

    2016-04-01

    Currently used methods for constructing texture complete direct pole figures (CDPF) from the results of X-ray diffractometric measurements were considered with respect to products of Zr-based alloys, in particular nuclear reactor cladding tubes, for which the accuracy of determination of integral texture parameters is of especial importance. The main attention was devoted to technical issues that are solved by means of computer processing of the large arrays of experimental data obtained, including defocusing corrections, techniques for constructing complete direct pole figures, and determination of integral texture parameters. Two methods of reconstructing complete direct pole figures from partial direct pole figures recorded up to sample tilt angles of ψ = 70-80° were considered: extrapolation of the data into the uninvestigated region of the stereographic projection, and "sewing" together partial pole figures measured for three mutually perpendicular plane sections of the product. The limits of applicability of these methods, depending on the shape of the test product and the degree of layer-by-layer texture inhomogeneity, were revealed. On the basis of a large number of experimental data, the accuracy of the integral parameters used for calculation of the physical and mechanical properties of metals with a hexagonal crystal structure was found to be 0.02, when the texture heterogeneity of regular products from Zr-based alloys is taken into account.

  16. Tau-U: A Quantitative Approach for Analysis of Single-Case Experimental Data in Aphasia.

    Science.gov (United States)

    Lee, Jaime B; Cherney, Leora R

    2018-03-01

    Tau-U is a quantitative approach for analyzing single-case experimental design (SCED) data. It combines nonoverlap between phases with intervention-phase trend and can correct for a baseline trend (Parker, Vannest, & Davis, 2011). We demonstrate the utility of Tau-U by comparing it with the standardized mean difference approach (Busk & Serlin, 1992) that is widely reported within the aphasia SCED literature. Repeated writing measures from 3 participants with chronic aphasia who received computer-based writing treatment are analyzed visually and quantitatively using both Tau-U and the standardized mean difference approach. Visual analysis alone was insufficient for determining an effect between the intervention and writing improvement. The standardized mean difference yielded effect sizes ranging from 4.18 to 26.72 for trained items and 1.25 to 3.20 for untrained items. Tau-U yielded statistically significant effects for data from 2 of 3 participants. Tau-U has the unique advantage of allowing for the correction of an undesirable baseline trend. Although further study is needed, Tau-U shows promise as a quantitative approach to augment visual analysis of SCED data in aphasia.
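The nonoverlap core of Tau-U can be sketched as a Kendall-style pairwise comparison of every baseline point with every intervention point. This is a minimal illustration only: the full Tau-U statistic of Parker et al. (2011) additionally incorporates intervention-phase trend and the baseline-trend correction, which are omitted here, and the score values are invented.

```python
# Minimal sketch of the nonoverlap component underlying Tau-U.
# Full Tau-U also adds phase-B trend and baseline-trend correction (omitted).

def tau_nonoverlap(baseline, intervention):
    """Kendall-style nonoverlap: (positive - negative pairs) / all pairs."""
    pos = sum(1 for a in baseline for b in intervention if b > a)
    neg = sum(1 for a in baseline for b in intervention if b < a)
    return (pos - neg) / (len(baseline) * len(intervention))

baseline = [2, 3, 2, 4]         # e.g., writing scores per baseline probe (invented)
intervention = [5, 6, 7, 6, 8]  # scores during treatment (invented)

print(tau_nonoverlap(baseline, intervention))  # 1.0: complete nonoverlap
```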

  17. Numerical calculation of aerodynamics wind turbine blade S809 airfoil and comparison of theoretical calculations with experimental measurements and confirming with NREL data

    Science.gov (United States)

    Sogukpinar, Haci; Bozkurt, Ismail

    2018-02-01

    The aerodynamic performance of the airfoil plays the most important role in obtaining the maximum economic efficiency from a wind turbine; the airfoil should therefore have an ideal aerodynamic shape. In this study, an aerodynamic simulation of the S809 airfoil is conducted and the results are compared with earlier NASA experimental results and NREL theoretical data. First, the lift coefficient, lift-to-drag ratio and pressure coefficient around the S809 airfoil are calculated with the SST turbulence model and compared with experimental and other theoretical data to assess the correctness of the computational approach; the results show good agreement with both. The calculations indicate that the lift-to-drag ratio increases with increasing relative velocity, attains its maximum at an angle of attack of about 6 degrees, and then starts to decrease. The comparison shows that the CFD code used in this calculation can predict the aerodynamic properties of the airfoil.
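Locating the angle of attack with maximum lift-to-drag ratio is a simple post-processing step over a tabulated polar. The sketch below is illustrative only: the Cl/Cd values are invented, not S809 data from the study, though they are chosen so the maximum falls near 6 degrees as the record reports.

```python
# Hedged sketch: find the best L/D point in a tabulated airfoil polar.
# The (alpha, Cl, Cd) values are invented for illustration.

def best_ld(polar):
    """Return (alpha, L/D) with the highest Cl/Cd in a tabulated polar."""
    return max(((alpha, cl / cd) for alpha, cl, cd in polar), key=lambda p: p[1])

polar = [  # (angle of attack in degrees, lift coeff., drag coeff.)
    (0, 0.15, 0.0090),
    (2, 0.37, 0.0095),
    (4, 0.59, 0.0105),
    (6, 0.80, 0.0120),
    (8, 0.95, 0.0180),
]

alpha, ld = best_ld(polar)
print(f"max L/D = {ld:.1f} at alpha = {alpha} deg")
```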

  18. Measurement of aerosol size distribution by impaction and sedimentation An experimental study and data reduction

    International Nuclear Information System (INIS)

    Diouri, Mohamed.

    1981-09-01

    This study essentially concerns solid aerosols produced by combustion, in particular the aerosol released by a sodium fire, which is taken into account in safety studies of sodium-cooled nuclear reactors. Accurate determination of the aerosol size distribution depends on the selection device used. An experimental study of the parameters affecting solid aerosol collection efficiency was made with the Andersen Mark II cascade impactor (blow-off and bounce, electrical charge of particles, wall loss). A sedimentation chamber was built and calibrated for the range between 4 and 10 μm. The second part describes a comparative study of different data reduction methods for the impactor and a new method for establishing the aerosol size distribution from data obtained with the sedimentation chamber.

  19. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable; it relies on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation on world-wide scientific literature, and recommends a system housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry. The described computing environment also adheres to the recently instituted digital data management plans mandated by multiple US government agencies, which cover all stages of the digital data life cycle, including capture, analysis, sharing, and preservation; it particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  20. Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy.

    Science.gov (United States)

    González-Beltrán, Alejandra N; Yong, May Y; Dancey, Gairin; Begent, Richard

    2012-01-06

    Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial. We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist creating a consistent framework to transparently report the purpose, methods and results of the therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed. While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131Iodine labeled CHT-25 chimeric antibody cancer therapy. As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to annotate GIATE elements and facilitate unambiguous

  1. Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy

    Directory of Open Access Journals (Sweden)

    González-Beltrán Alejandra N

    2012-01-01

    Background: Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial. Findings: We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist creating a consistent framework to transparently report the purpose, methods and results of the therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed. Conclusions: While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work has focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131Iodine labeled CHT-25 chimeric antibody cancer therapy. 
As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to

  2. Observed versus simulated meteorological data: a comparison study for Centro Experimental Aramar

    Energy Technology Data Exchange (ETDEWEB)

    Beu, Cássia Maria Leme, E-mail: cassia.beu@marinha.mil.br [Centro Tecnológico da Marinha em São Paulo (CEA/CTMSP), Iperó, SP (Brazil). Centro Experimental Aramar

    2017-07-01

    Centro Experimental Aramar (CEA) is a campus of the Centro Tecnológico da Marinha em São Paulo (CTMSP), responsible for carrying out the Brazilian Navy's Nuclear Program. As a nuclear facility, the atmosphere is one of the environmental parameters that must be monitored, both during normal operation and in accidental situations. Atmospheric dispersion models are powerful tools in this respect, but their results strongly depend on the quality of the input data. Therefore, good information must be provided to the dispersion models, and data from weather forecast models can be suitable for this role. The purpose of this work is to evaluate the performance of regional weather forecast models for the CEA site. CEA is located in an area of complex terrain, which can add complexity to the air flows and reduce forecast accuracy, and this may be critical during an accidental situation. For this work, two regional atmospheric models were chosen: BRAMS and Eta. These models have been intensively improved by Brazilian researchers for South American conditions, are free software, and offer the possibility of running locally with higher resolution than is currently available from research organizations. Basic variables (temperature, relative humidity, and wind speed and direction) from 48-hour Eta and BRAMS simulations were compared with observed CEA data. The results of this work will guide the next steps towards running atmospheric dispersion models on an operational basis for the CEA site. (author)

  3. Observed versus simulated meteorological data: a comparison study for Centro Experimental Aramar

    International Nuclear Information System (INIS)

    Beu, Cássia Maria Leme

    2017-01-01

    Centro Experimental Aramar (CEA) is a campus of the Centro Tecnológico da Marinha em São Paulo (CTMSP), responsible for carrying out the Brazilian Navy's Nuclear Program. As a nuclear facility, the atmosphere is one of the environmental parameters that must be monitored, both during normal operation and in accidental situations. Atmospheric dispersion models are powerful tools in this respect, but their results strongly depend on the quality of the input data. Therefore, good information must be provided to the dispersion models, and data from weather forecast models can be suitable for this role. The purpose of this work is to evaluate the performance of regional weather forecast models for the CEA site. CEA is located in an area of complex terrain, which can add complexity to the air flows and reduce forecast accuracy, and this may be critical during an accidental situation. For this work, two regional atmospheric models were chosen: BRAMS and Eta. These models have been intensively improved by Brazilian researchers for South American conditions, are free software, and offer the possibility of running locally with higher resolution than is currently available from research organizations. Basic variables (temperature, relative humidity, and wind speed and direction) from 48-hour Eta and BRAMS simulations were compared with observed CEA data. The results of this work will guide the next steps towards running atmospheric dispersion models on an operational basis for the CEA site. (author)

  4. A note on the analysis of germination data from complex experimental designs

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Andreasen, Christian; Streibig, Jens Carl

    2017-01-01

    In recent years germination experiments have become more and more complex. Typically, they are replicated in time as independent runs and at each time point they involve hierarchical, often factorial experimental designs, which are now commonly analysed by means of linear mixed models. However, in order to characterize germination in response to time elapsed, specific event-time models are needed, and mixed model extensions of these models are not readily available, neither in theory nor in practice. As a practical workaround we propose a two-step approach that combines and weighs together results from event-time models fitted separately to data from each germination test by means of meta-analytic random effects models. We show that this approach provides a more appropriate appreciation of the sources of variation in hierarchically structured germination experiments, as both between- and within-…
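The second step of a two-step approach of this kind amounts to pooling per-run estimates with inverse-variance weights. The sketch below shows only the simplest (fixed-effect) form of such pooling; a full random-effects model, as the record proposes, would additionally estimate the between-run variance (e.g., via the DerSimonian-Laird estimator). All numbers are invented for illustration.

```python
# Hedged sketch of step two: inverse-variance pooling of per-run estimates
# (e.g., time to 50% germination from separate event-time model fits).
# Fixed-effect pooling only; a random-effects model would add a
# between-run variance component. Values are invented.
import math

def pool_fixed(estimates, std_errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

t50 = [52.0, 55.0, 49.5]  # hours, one estimate per independent run (invented)
se = [2.0, 3.0, 2.5]      # standard errors from the per-run fits (invented)

est, se_pooled = pool_fixed(t50, se)
print(f"pooled t50 = {est:.2f} h (SE {se_pooled:.2f})")
```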

  5. Experimental investigation of auroral generator regions with conjugate Cluster and FAST data

    Directory of Open Access Journals (Sweden)

    O. Marghitu

    2006-03-01

    Here and in the companion paper, Hamrin et al. (2006, we present experimental evidence for the crossing of auroral generator regions, based on conjugate Cluster and FAST data. To our knowledge, this is the first investigation that concentrates on the evaluation of the power density, E·J, in auroral generator regions, by using in-situ measurements. The Cluster data we discuss were collected within the Plasma Sheet Boundary Layer (PSBL, during a quiet magnetospheric interval, as judged from the geophysical indices, and several minutes before the onset of a small substorm, as indicated by the FAST data. Even at quiet times, the PSBL is an active location: electric fields are associated with plasma motion, caused by the dynamics of the plasma-sheet/lobe interface, while electrical currents are induced by pressure gradients. In the example we show, these ingredients do indeed sustain the conversion of mechanical energy into electromagnetic energy, as proved by the negative power density, E·J<0. The plasma characteristics in the vicinity of the generator regions indicate a complicated 3-D wavy structure of the plasma sheet boundary. Consistent with this structure, we suggest that at least part of the generated electromagnetic energy is carried away by Alfvén waves, to be dissipated in the ionosphere, near the polar cap boundary. Such a scenario is supported by the FAST data, which show energetic electron precipitation conjugated with the generator regions crossed by Cluster. A careful examination of the conjunction timing contributes to the validation of the generator signatures.

  6. Gamma knife simulation using the MCNP4C code and the zubal phantom and comparison with experimental data

    International Nuclear Information System (INIS)

    Gholami, S.; Kamali Asl, A.; Aghamiri, M.; Allahverdi, M.

    2010-01-01

    Gamma Knife is an instrument specially designed for treating brain disorders. In Gamma Knife, 201 narrow beams from cobalt-60 sources intersect at an isocenter point to treat brain tumors. The tumor is placed at the isocenter and is treated by the emitted gamma rays; there is therefore a high dose at this point, while a low dose is delivered to the normal tissue surrounding the tumor. Material and Method: In the current work, the MCNP simulation code was used to simulate the Gamma Knife. The calculated values were compared with experimental values and previous works. Dose distribution was compared for different collimators in a water phantom and the Zubal brain-equivalent phantom. The dose profiles were obtained along the x, y and z axes. Result: The developed code was evaluated using experimental data, and we found good agreement between our simulation and the experimental data. Discussion: Our results showed that the skull bone contributes substantially to both scattered and absorbed dose. In other words, inserting the exact materials of the brain and other organs of the head into the digital phantom improves the quality of treatment planning. This work concerns the measurement of absorbed dose and the improvement of the treatment planning procedure in Gamma Knife radiosurgery of the brain.

  7. Gamma Knife Simulation Using the MCNP4C Code and the Zubal Phantom and Comparison with Experimental Data

    Directory of Open Access Journals (Sweden)

    Somayeh Gholami

    2010-06-01

    Introduction: Gamma Knife is an instrument specially designed for treating brain disorders. In Gamma Knife, 201 narrow beams from cobalt-60 sources intersect at an isocenter point to treat brain tumors. The tumor is placed at the isocenter and is treated by the emitted gamma rays; there is therefore a high dose at this point, while a low dose is delivered to the normal tissue surrounding the tumor. Material and Method: In the current work, the MCNP simulation code was used to simulate the Gamma Knife. The calculated values were compared with experimental values and previous works. Dose distribution was compared for different collimators in a water phantom and the Zubal brain-equivalent phantom. The dose profiles were obtained along the x, y and z axes. Result: The developed code was evaluated using experimental data, and we found good agreement between our simulation and the experimental data. Discussion: Our results showed that the skull bone contributes substantially to both scattered and absorbed dose. In other words, inserting the exact materials of the brain and other organs of the head into the digital phantom improves the quality of treatment planning. This work concerns the measurement of absorbed dose and the improvement of the treatment planning procedure in Gamma Knife radiosurgery of the brain.

  8. The library of evaluated and experimental data on charged particles for fusion application (SaBa). Summary documentation

    International Nuclear Information System (INIS)

    Zvenigorodskij, A.G.; Zherebtsov, V.A.; Lazarev, L.M.; Dunaeva, S.A.; Generalov, L.N.; Taova, S.M.; Kamskaya, E.V.; Marshalkina, R.I.

    1999-01-01

    An electronic version of the library of evaluated and experimental data on charged particles for thermonuclear applications (SaBa) was prepared on the basis of the handbook 'Nuclear Physics Constants for Thermonuclear Fusion', INDC(CCP)-326/L+F, Vienna, 1991. Data on 100 channels for 52 reactions are presented in the library. The program code was written in the object-oriented programming environment Borland C++ Builder for the Microsoft Windows 95 and Windows NT operating systems. An optimal set of data processing procedures and a friendly interface provide remarkable possibilities for the active use of this program in various applications in the field of thermonuclear fusion. It is available online (http:/www-nds.iaea.or.at/reports/data/saba/disk1.zip, ../disk2.zip, ../disk3.zip), on CD-ROM, or on a set of PC diskettes from the IAEA Nuclear Data Section, cost-free, upon request. (author)

  9. Investigation and experimental data de-noising of Damavand tokamak by using fourier series expansion and wavelet code

    International Nuclear Information System (INIS)

    Sadeghi, Y.

    2006-01-01

    Computer programs are important tools in physics. Analysis of experimental data, the control of complex physical phenomena, and the numerical solution of problems in physics help scientists understand behavior and simulate processes. In this paper, the calculation of several Fourier series gives a visual and analytic impression of data analysis with Fourier series. An important aspect of data analysis is finding an optimal method for de-noising. Wavelets are mathematical functions that cut data up into different frequency components, so that each component can be studied with a resolution matched to its scale. They have advantages over traditional methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Data transformed by wavelets into frequency space retains time information and can clearly show the exact location in time of a discontinuity. This aspect makes wavelets an excellent tool in the field of data analysis. In this paper, we show how Fourier series and wavelets can be used to analyze data from the Damavand tokamak.
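Wavelet de-noising of the kind described in this record can be illustrated with a one-level Haar transform: decompose the signal into approximation and detail coefficients, soft-threshold the details (where noise concentrates), and invert. This is a toy sketch under stated assumptions: real tokamak signal processing would use a multi-level transform and a data-driven threshold, and the signal below is invented.

```python
# Toy illustration of wavelet de-noising: one-level Haar transform,
# soft-thresholding of detail coefficients, inverse transform.
# The signal is invented; real work would use multi-level decomposition.
import math

def haar_forward(x):
    """One level: (approximation, detail) coefficients; len(x) must be even."""
    approx = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the signal from one-level Haar coefficients."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def denoise(x, threshold):
    """Soft-threshold the detail coefficients, then reconstruct."""
    approx, detail = haar_forward(x)
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    return haar_inverse(approx, detail)

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]  # step signal + small noise
print([round(v, 2) for v in denoise(noisy, 0.2)])
```

With the small details thresholded away, each sample pair collapses to its mean, smoothing the noise while the sharp step between the two levels survives.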

  10. Comparison of GEANT4 very low energy cross section models with experimental data in water

    DEFF Research Database (Denmark)

    Incerti, S; Ivanchenko, A; Karamitros, M

    2010-01-01

    The GEANT4 general-purpose Monte Carlo simulation toolkit is able to simulate physical interaction processes of electrons, hydrogen and helium atoms with charge states (H0, H+) and (He0, He+, He2+), respectively, in liquid water, the main component of biological systems, down to the electron volt...... of electromagnetic interactions within the GEANT4 toolkit framework (since GEANT4 version 9.3 beta). This work presents a quantitative comparison of these physics models with a collection of experimental data in water collected from the literature....

  11. Deriving Structural Information from Experimentally Measured Data on Biomolecules.

    Science.gov (United States)

    van Gunsteren, Wilfred F; Allison, Jane R; Daura, Xavier; Dolenc, Jožica; Hansen, Niels; Mark, Alan E; Oostenbrink, Chris; Rusu, Victor H; Smith, Lorna J

    2016-12-23

    During the past half century, the number and accuracy of experimental techniques that can deliver values of observables for biomolecular systems have been steadily increasing. The conversion of a measured value Q exp of an observable quantity Q into structural information is, however, a task beset with theoretical and practical problems: 1) insufficient or inaccurate values of Q exp , 2) inaccuracies in the function Q(r→) used to relate the quantity Q to structure r→ , 3) how to account for the averaging inherent in the measurement of Q exp , 4) how to handle the possible multiple-valuedness of the inverse r→(Q) of the function Q(r→) , to mention a few. These apply to a variety of observable quantities Q and measurement techniques such as X-ray and neutron diffraction, small-angle and wide-angle X-ray scattering, free-electron laser imaging, cryo-electron microscopy, nuclear magnetic resonance, electron paramagnetic resonance, infrared and Raman spectroscopy, circular dichroism, Förster resonance energy transfer, atomic force microscopy and ion-mobility mass spectrometry. The process of deriving structural information from measured data is reviewed with an eye to non-experts and newcomers in the field using examples from the literature of the effect of the various choices and approximations involved in the process. A list of choices to be avoided is provided. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Extension of the energy range of experimental activation cross-sections data of deuteron induced nuclear reactions on indium up to 50MeV.

    Science.gov (United States)

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-11-01

    The energy range of our earlier measured activation cross-section data for longer-lived products of deuteron-induced nuclear reactions on indium was extended from 40 MeV up to 50 MeV. The traditional stacked-foil irradiation technique and non-destructive gamma spectrometry were used. No experimental data were found in the literature for this higher energy range. Experimental cross-sections for the formation of the radionuclides (113,110)Sn, (116m,115m,114m,113m,111,110g,109)In and (115)Cd are reported in the 37-50 MeV energy range; for the production of (110)Sn and (110g,109)In these are the first measurements ever. The experimental data were compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS 1.6 nuclear model code as listed in the on-line library TENDL-2014. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Plutonium chemistry: a synthesis of experimental data and a quantitative model for plutonium oxide solubility

    International Nuclear Information System (INIS)

    Haschke, J.M.; Oversby, V.M.

    2002-01-01

    The chemistry of plutonium is important for assessing the potential behavior of radioactive waste under conditions of geologic disposal. This paper reviews experimental data on the dissolution of plutonium oxide solids, describes a hybrid kinetic-equilibrium model for predicting steady-state Pu concentrations, and compares laboratory results with predicted Pu concentrations and oxidation-state distributions. The model is based on oxidation of PuO2 by water to produce PuO2+x, an oxide that can release Pu(V) to solution. Kinetic relationships between formation of PuO2+x, dissolution of Pu(V), disproportionation of Pu(V) to Pu(IV) and Pu(VI), and reduction of Pu(VI) are given and used in model calculations. Data from tests of pyrochemical salt wastes in brines are discussed and interpreted using the conceptual model. Essential data for quantitative modeling at conditions relevant to nuclear waste repositories are identified, and laboratory experiments to determine rate constants for use in the model are discussed.

  14. New experimental stopping power data of 4He, 16O, 40Ar, 48Ca and 84Kr projectiles in different solid materials

    Science.gov (United States)

    Trzaska, W. H.; Knyazheva, G. N.; Perkowski, J.; Andrzejewski, J.; Khlebnikov, S. V.; Kozulin, E. M.; Malkiewicz, T.; Mutterer, M.; Savelieva, E. O.

    2018-03-01

    New experimental data on energy loss of 4He, 16O, 40Ar, 48Ca and 84Kr ions in thin, self-supporting foils of C, Al, Ni, Ag, Lu, Au, Pb and Th are presented. The measurements, using the TOF-E method, were done in a very broad energy range around the stopping power maximum; typically from 0.1 to 11 MeV/u. When available, the extracted stopping power values are compared with the previously published data. The overall agreement is good although a fair comparison is difficult as the covered energy range is much larger than in previous measurements. The small error bars and a broad coverage allowed us to test the predictions of theoretical codes: PASS, CasP, and semi-empirical programs: SRIM, LET, MSTAR, and the Hubert table predictions. The deviations of PASS predictions from the experimental data do not exceed 20% for all the measured combinations. CasP predictions are within 15% from the data for heavier ions but diverge up to 40% for lighter ions. Semi-empirical approaches, including SRIM, deviate from the experimental data by less than 5% for the regions already covered by previous experiments but err by about 10-20% for the ion/target combinations that were not measured before: Ca in Lu as well as Kr in Lu, Pb, and Th.

  15. Study of the experimental data of multifragmentation of gold and krypton nuclei on interactions with photoemulsion nuclei at high energies

    International Nuclear Information System (INIS)

    Saleh, Z.A.; Abdel-Hafez, A.

    2002-01-01

    Results from the EMU-01/12 collaboration on the multifragmentation of residual gold nuclei created in interactions with photoemulsion nuclei at an energy of 10.7 GeV/nucleon are presented, together with experimental data on the multifragmentation of krypton created in interactions with photoemulsion nuclei at an energy of 0.9 GeV/nucleon. The data are analyzed within the framework of the statistical model of multifragmentation. There are evidently two regimes of nuclear multifragmentation: the former when fewer than one-half of the nucleons of the projectile nucleus are knocked out, the latter when more than one-half are knocked out. Residual nuclei with masses close to each other, created in different reactions, fragment in practically the same way when more than one-half of the nucleons of the original nuclei are knocked out. These results give an indication that projectiles other than gold and krypton may show the same characteristics in interactions with emulsion nuclei at high energies.

  16. Mechanical properties data of 2-1/4Cr-1Mo steel for the experimental very high temperature gas-cooled reactor

    International Nuclear Information System (INIS)

    Oku, Tatsuo; Kikuyama, Toshihiko; Fukaya, Kiyoshi; Kodaira, Tsuneo

    1978-11-01

    This is a collection of mechanical properties data of 2-1/4Cr-1Mo steel necessary for structural design and safety analysis of the pressure vessel of the Experimental Very High Temperature Gas-Cooled Reactor (VHTR). These include physical properties, mechanical properties, temper embrittlement, creep with fatigue, fracture toughness and irradiation effects. A review of the data shows the research areas to be carried out particularly in the future for more data. (author)

  17. Review of experimental data for modelling LWR fuel cladding behaviour under loss of coolant accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Massih, Ali R. [Quantum Technologies AB, Uppsala Science Park (Sweden)

    2007-02-15

    An extensive range of experiments has been conducted in the past to quantitatively identify and understand the behaviour of fuel rods under loss-of-coolant accident (LOCA) conditions in light water reactors (LWRs). The experimental data obtained provide the basis for the current emergency core cooling system acceptance criteria under LOCA conditions for LWRs. The results of recent experiments indicate that the cladding alloy composition and high-burnup effects influence LOCA acceptance criteria margins. In this report, we review some important past and recent experimental results. We first discuss the background to the acceptance criteria for LOCA, namely, clad embrittlement phenomenology, clad embrittlement criteria (limitations on maximum clad oxidation and peak clad temperature) and the experimental bases for the criteria. Two broad kinds of test have been carried out under LOCA conditions: (i) separate-effect tests to study clad oxidation, clad deformation and rupture, and zirconium alloy allotropic phase transition during LOCA; (ii) integral LOCA tests, in which the entire LOCA sequence is simulated on a single rod or a multi-rod array in a fuel bundle, in the laboratory or in a test reactor. The tests and results are discussed, empirical correlations deduced from these tests are presented, and quantitative models are considered. In particular, the impact of niobium in zirconium-based cladding and of the hydrogen content of the clad on the allotropic phase transformation during LOCA, and also on the burst stress, is discussed. We review some recent LOCA integral test results with emphasis on thermal shock tests. Finally, suggestions for modelling and further evaluation of certain experimental results are made.

  18. Assessment of reduced-order unscented Kalman filter for parameter identification in 1-dimensional blood flow models using experimental data.

    Science.gov (United States)

    Caiazzo, A; Caforio, Federica; Montecinos, Gino; Muller, Lucas O; Blanco, Pablo J; Toro, Eluterio F

    2016-10-25

    This work presents a detailed investigation of a parameter estimation approach based on the reduced-order unscented Kalman filter (ROUKF) in the context of 1-dimensional blood flow models. In particular, the main aims of this study are (1) to investigate the effects of using real measurements versus synthetic data for the estimation procedure (i.e., numerical results of the same in silico model, perturbed with noise) and (2) to identify potential difficulties and limitations of the approach in clinically realistic applications, in order to assess the applicability of the filter to such setups. For these purposes, the present numerical study is based on a recently published in vitro model of the arterial network, for which experimental flow and pressure measurements are available at a few selected locations. To mimic clinically relevant situations, we focus on the estimation of terminal resistances and arterial wall parameters related to vessel mechanics (Young's modulus and wall thickness) using few experimental observations (at most a single pressure or flow measurement per vessel). In all cases, we first perform a theoretical identifiability analysis on the basis of the generalized sensitivity function, then compare the results obtained with the ROUKF, using either synthetic or experimental data, with results obtained using reference parameters and with the available measurements. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Overview of Experimental Pulse-Doppler Radar Data Collected Oct 1999

    National Research Council Canada - National Science Library

    Hughes, Steven

    2000-01-01

    The Defence Research Establishment Ottawa has designed and constructed an experimental air-to-air radar system as the first step in demonstrating an air-to-air surveillance capability for the Canadian...

  20. Radiological and environmental studies at uranium mills: a comparison of theoretical and experimental data

    International Nuclear Information System (INIS)

    Momeni, M.H.; Kisieleski, W.E.; Yuan, Y.; Roberts, C.J.

    1978-01-01

    Evaluation of radiological risk of uranium milling is based on identification and quantification of sources of release and consideration of dynamic coupling among the meteorological, physiographical, hydrological environments and the affected individuals. Dispersion pathways of radionuclides are through air, soil, and water, each demanding locally tailored procedures for estimation of the rate of release of radioactivity and the pattern of biological uptake and exposure. The Uranium Dispersion and Dosimetry Code (UDAD), a comprehensive method for estimating the concentrations of the released radionuclides, dose rates, doses, and radiological health effects, is described. Predicted concentrations and exposure rates are compared with experimental data obtained from field research at active mills and abandoned tailings

  1. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Sher, R.; Fiarman, S.

    1976-10-01

    Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT(D 2 O) and TRX benchmark lattices have been re-examined and revised. The revisions correct several systematic errors that have been previously ignored or considered insignificant. These systematic errors are discussed in detail. The final corrected values are presented

  2. Comparison of Monte Carlo simulation of gamma ray attenuation coefficients of amino acids with XCOM program and experimental data

    Science.gov (United States)

    Elbashir, B. O.; Dong, M. G.; Sayyed, M. I.; Issa, Shams A. M.; Matori, K. A.; Zaid, M. H. M.

    2018-06-01

    The mass attenuation coefficients (μ/ρ), effective atomic numbers (Zeff) and electron densities (Ne) of some amino acids, obtained experimentally by other researchers, have been calculated using MCNP5 simulations in the energy range 0.122-1.330 MeV. The simulated values of μ/ρ, Zeff, and Ne were compared with the previous experimental work for the amino acid samples, and good agreement was observed. Moreover, the values of the mean free path (MFP) for the samples were calculated using the MCNP5 program and compared with the theoretical results obtained by XCOM. The investigation of the μ/ρ, Zeff, Ne and MFP values of amino acids using MCNP5 simulations at various photon energies, when compared with the XCOM values and previous experimental data for the amino acid samples, revealed that the MCNP5 code provides accurate photon interaction parameters for amino acids.
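    The quantities compared in this record are linked by the narrow-beam Beer-Lambert law: the linear attenuation coefficient is μ = (μ/ρ)·ρ, the mean free path is 1/μ, and the half-value layer is ln 2/μ. A minimal sketch of these relations follows; the μ/ρ value used for water at 662 keV is an illustrative textbook-style figure, not a result from this paper.

```python
import math

def linear_attenuation(mass_atten_cm2_per_g, density_g_per_cm3):
    """Linear attenuation coefficient mu = (mu/rho) * rho, in 1/cm."""
    return mass_atten_cm2_per_g * density_g_per_cm3

def mean_free_path(mu_per_cm):
    """Average photon path length between interactions: MFP = 1/mu (cm)."""
    return 1.0 / mu_per_cm

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Narrow-beam Beer-Lambert attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative values: mu/rho ~ 0.0857 cm^2/g for water near 662 keV,
# density 1.0 g/cm^3 (assumed, not taken from the paper).
mu = linear_attenuation(0.0857, 1.0)
mfp = mean_free_path(mu)              # roughly 11.7 cm
half_value_layer = math.log(2) / mu   # thickness that halves the intensity
```

    By construction, one mean free path attenuates a narrow beam to 1/e of its incident intensity, and one half-value layer to exactly one half.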

  3. Detection of outliers by neural network on the gas centrifuge experimental data of isotopic separation process

    International Nuclear Information System (INIS)

    Andrade, Monica de Carvalho Vasconcelos

    2004-01-01

    This work presents and discusses a neural network technique aimed at detecting outliers in a set of gas centrifuge isotope separation experimental data. In order to evaluate this new technique, the detection results are compared with those of statistical analysis combined with cluster analysis. This method for the detection of outliers shows considerable potential in the field of data analysis; it is at the same time easier and faster to use, and requires much less knowledge of the physics involved in the process. This work established a procedure for detecting experiments suspected of containing gross errors within a data set where the usual techniques for identifying such errors cannot be applied, or where their use would demand excessively lengthy work. (author)
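    A common statistical baseline for gross-error detection of the kind this record compares against is the modified z-score built on the median absolute deviation (MAD), which is itself robust to the outliers being hunted. The cutoff of 3.5 and the measurement values below are illustrative assumptions, not the centrifuge data or the paper's actual statistical procedure.

```python
import numpy as np

def mad_outliers(data, cutoff=3.5):
    """Flag points whose modified z-score exceeds the cutoff.

    Uses the median absolute deviation (MAD), which, unlike the standard
    deviation, is barely influenced by the gross errors we want to detect.
    """
    data = np.asarray(data, dtype=float)
    median = np.median(data)
    mad = np.median(np.abs(data - median))
    if mad == 0:
        return np.zeros(data.shape, dtype=bool)
    modified_z = 0.6745 * (data - median) / mad
    return np.abs(modified_z) > cutoff

# Hypothetical repeated measurements with one gross error.
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 15.7, 10.05]
flags = mad_outliers(measurements)
```

    Only the gross error is flagged; ordinary scatter of the remaining points stays well below the cutoff.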

  4. Numerical treatment of experimental data in calibration procedures

    International Nuclear Information System (INIS)

    Moreno, C.

    1993-06-01

    A discussion of a numerical procedure to find the proportionality factor between two measured quantities is given in the framework of the least-squares method. Variable, as well as constant, amounts of experimental uncertainties are considered for each variable along their measured range. The variance of the proportionality factor is explicitly given as a closed analytical expression valid for the general case. Limits of the results obtained here have been studied allowing comparisons with those obtained using classical least-squares expressions. Analytical and numerical examples are also discussed. (author). 11 refs, 1 fig., 1 tab
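    For the restricted case in which only the dependent variable carries uncertainty, the proportionality factor and its variance take the closed forms k = Σ(wxy)/Σ(wx²) and var(k) = 1/Σ(wx²), with weights w = 1/σ². The sketch below implements only this special case; the general treatment with variable uncertainties in both quantities discussed in the record is not reproduced here.

```python
import numpy as np

def proportionality_factor(x, y, sigma_y):
    """Weighted least-squares fit of y = k * x with per-point uncertainties in y.

    Returns the factor k and its variance:
        k = sum(w x y) / sum(w x^2),  var(k) = 1 / sum(w x^2),  w = 1/sigma_y^2.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    s = np.sum(w * x * x)
    k = np.sum(w * x * y) / s
    var_k = 1.0 / s
    return k, var_k

# Exact proportionality as a sanity check (illustrative numbers).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.5 * x
k, var_k = proportionality_factor(x, y, sigma_y=np.full(4, 0.1))
```

    With equal uncertainties the weights cancel in k, but var(k) still shrinks as more (or larger) x values are measured, which matches the intuition that the slope is best constrained by points far from the origin.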

  5. Assessment of CANDU physics codes using experimental data - II: CANDU core physics measurements

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Jeong, Chang Joon; Choi, Hang Bok

    2001-11-01

    Benchmark calculations of the advanced CANDU reactor analysis tools (WIMS-AECL, SHETAN and RFSP) and the Monte Carlo code MCNP-4B have been performed using Wolsong Units 2 and 3 Phase-B measurement data. In this study, the benchmark calculations cover the criticality, boron worth, reactivity device worths, reactivity coefficients, and flux scans. For the validation of the WIMS-AECL/SHETAN/RFSP code system, the lattice parameters of the fuel channel were generated by the WIMS-AECL code, and incremental cross sections of reactivity devices and structural material were generated by the SHETAN code. The results have shown that the criticality is under-predicted by -4 mk. The reactivity device worths are generally consistent with the measured data, except for strong absorbers such as the shutoff rods and mechanical control absorbers. The heat transport system temperature coefficient and flux distributions are in good agreement with the measured data. However, the moderator temperature coefficient has shown a relatively large error, which could be caused by the incremental cross-section generation methodology for the reactivity devices. For the MCNP-4B benchmark calculation, cross-section libraries were newly generated from ENDF/B-VI release 3 through the NJOY97.114 data processing system, and a three-dimensional full-core model was developed. The simulation results have shown that the criticality is estimated within 4 mk and the estimated reactivity worths of the control devices are generally consistent with the measurement data, which implies that the MCNP code is valid for CANDU core analysis. In the future, therefore, the MCNP code could be used as a reference tool to benchmark design and analysis codes for advanced fuels for which experimental data are not available.

  6. Gamma ray shielding study of barium-bismuth-borosilicate glasses as transparent shielding materials using MCNP-4C code, XCOM program, and available experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Bagheri, Reza; Yousefinia, Hassan [Nuclear Fuel Cycle Research School (NFCRS), Nuclear Science and Technology Research Institute (NSTRI), Atomic Energy Organization of Iran, Tehran (Iran, Islamic Republic of); Moghaddam, Alireza Khorrami [Radiology Department, Paramedical Faculty, Mazandaran University of Medical Sciences, Sari (Iran, Islamic Republic of)

    2017-02-15

    In this work, linear and mass attenuation coefficients, effective atomic number and electron density, mean free paths, and half-value layer and tenth-value layer values of barium-bismuth-borosilicate glasses were obtained for 662 keV, 1,173 keV, and 1,332 keV gamma ray energies using the MCNP-4C code and the XCOM program. The obtained data were then compared with available experimental data. The MCNP-4C code and XCOM program results were in good agreement with the experimental data. Barium-bismuth-borosilicate glasses have good gamma ray shielding properties from the shielding point of view.

  7. System identification by experimental data processing, application to turbulent transport of a tracer in pipe flow

    International Nuclear Information System (INIS)

    Burgos, Manuel; Getto, Daniel; Berne, Philippe

    2005-01-01

    System identification is the first, and probably the most important, step in detecting abnormal behavior, designing a control system or improving performance. Data analysis is performed for studying the plant behavior, the sensitivity of operation procedures and several other goals. In all these cases, the observed data are the convolution of an input function and the system's impulse response. Practical discrete-time convolutions may be performed by multiplying a matrix built from the impulse response by the input vector, but deconvolution requires inverting that matrix, which is singular for a causal system. Another method of deconvolution is by means of Fourier transforms. Actual readings are usually corrupted by noise; moreover, their transform contains both low-frequency components and high-frequency ones, the latter mainly due to additive noise. Subjective decisions, such as the choice of cut-off frequency, must be taken as well. This paper proposes a deconvolution method based on fitting the parameters of suitable models, where they exist, and estimating values where analytical forms are not available. It is based on global, non-linear fitting with a maximum likelihood criterion. An application of the method is shown using data from two fluid flow experiments. The experimental test rigs basically consist of a long section of straight pipe in which fluid is flowing. A pulse of tracer is injected at the entrance and detected at various locations along the pipe. Deconvolution of signals from successive probes is attempted using a classical model describing the flow of tracer as a plug moving with the average fluid velocity, plus some axial dispersion. The parameters are, for instance, the velocity of the plug and a dispersion coefficient. After parameter fitting, the model is found to reproduce the experimental data well. The flow rates deduced from the adjusted travel times are in very good agreement with the actual values. In addition, the flow dispersion coefficient is obtained.
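    The fitting step described above can be sketched with a one-dimensional advection-dispersion response fitted by nonlinear least squares. The model form, pipe geometry, noise level and parameter values below are invented for illustration and are not taken from the experiments in this record.

```python
import numpy as np
from scipy.optimize import curve_fit

def axial_dispersion(t, v, D, L=10.0, A=1.0):
    """Tracer concentration at distance L for a pulse injected at t = 0:
    a plug moving at velocity v with axial dispersion coefficient D."""
    t = np.maximum(t, 1e-9)  # avoid division by zero at t = 0
    return A / np.sqrt(4 * np.pi * D * t) * np.exp(-(L - v * t) ** 2 / (4 * D * t))

# Synthetic "probe" signal with known parameters plus measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.1, 20, 400)
true_v, true_D = 1.2, 0.5
signal = axial_dispersion(t, true_v, true_D) + 0.002 * rng.standard_normal(t.size)

# p0 of length 2 means only v and D are fitted; L and A keep their defaults.
(v_fit, D_fit), _ = curve_fit(axial_dispersion, t, signal, p0=[1.0, 1.0])
```

    Recovering the plug velocity from the fit, rather than from a hand-picked peak time, is what makes the travel-time (and hence flow-rate) estimates robust to noise and to the tail spreading caused by dispersion.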

  8. Experimental Peptide Identification Repository (EPIR): an integrated peptide-centric platform for validation and mining of tandem mass spectrometry data

    DEFF Research Database (Denmark)

    Kristensen, Dan Bach; Brønd, Jan Christian; Nielsen, Peter Aagaard

    2004-01-01

    LC MS/MS has become an established technology in proteomic studies, and with the maturation of the technology the bottleneck has shifted from data generation to data validation and mining. To address this bottleneck we developed Experimental Peptide Identification Repository (EPIR), which...... is an integrated software platform for storage, validation, and mining of LC MS/MS-derived peptide evidence. EPIR is a cumulative data repository where precursor ions are linked to peptide assignments and protein associations returned by a search engine (e.g. Mascot, Sequest, or PepSea). Any number of datasets can...

  9. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99m Tc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99m Tc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99m Tc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
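    A percentile bootstrap of the kind described can be sketched in a few lines: resample the dataset with replacement many times, recompute the statistic on each resample, and read confidence limits off the empirical distribution. The uptake values below are invented placeholders, not the subject data from the paper.

```python
import numpy as np

def bootstrap_ci(data, statistic=np.mean, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of the data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    # Resample with replacement and recompute the statistic each time.
    boot = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical uptake measurements (arbitrary units) from a small subject group.
uptake = [2.1, 2.4, 1.9, 2.6, 2.3, 2.0, 2.8, 2.2]
low, high = bootstrap_ci(uptake)
```

    Because the interval is read directly from the resampled distribution, no normality assumption is needed, which is the property that makes the method attractive for the skewed uptake distributions common in dosimetry.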

  10. Achieving graphical excellence: suggestions and methods for creating high-quality visual displays of experimental data.

    Science.gov (United States)

    Schriger, D L; Cooper, R J

    2001-01-01

    Graphics are an important means of communicating experimental data and results. There is evidence, however, that many of the graphics printed in scientific journals contain errors, redundancies, and lack clarity. Perhaps more important, many graphics fail to portray data at an appropriate level of detail, presenting summary statistics rather than underlying distributions. We seek to aid investigators in the production of high-quality graphics that do their investigations justice by providing the reader with optimum access to the relevant aspects of the data. The depiction of by-subject data, the signification of pairing when present, and the use of symbolic dimensionality (graphing different symbols to identify relevant subgroups) and small multiples (the presentation of an array of similar graphics each depicting one group of subjects) to portray stratification are stressed. Step-by-step instructions for the construction of high-quality graphics are offered. We hope that authors will incorporate these suggestions when developing graphics to accompany their manuscripts and that this process will lead to improvements in the graphical literacy of scientific journals. We also hope that journal editors will keep these principles in mind when refereeing manuscripts submitted for peer review.

  11. Reservoir capacity estimates in shale plays based on experimental adsorption data

    Science.gov (United States)

    Ngo, Tan

    Fine-grained sedimentary rocks are characterized by a complex porous framework containing pores in the nanometer range that can store a significant amount of natural gas (or any other fluids) through adsorption processes. Although the adsorbed gas can take up to a major fraction of the total gas-in-place in these reservoirs, the ability to produce it is limited, and the current technology focuses primarily on the free gas in the fractures. A better understanding and quantification of adsorption/desorption mechanisms in these rocks is therefore required, in order to allow for a more efficient and sustainable use of these resources. Additionally, while water is still predominantly used to fracture the rock, other fluids, such as supercritical CO2 are being considered; here, the idea is to reproduce a similar strategy as for the enhanced recovery of methane in deep coal seams (ECBM). Also in this case, the feasibility of CO2 injection and storage in hydrocarbon shale reservoirs requires a thorough understanding of the rock behavior when exposed to CO2, thus including its adsorption characteristics. The main objectives of this Master's Thesis are as follows: (1) to identify the main controls on gas adsorption in mudrocks (TOC, thermal maturity, clay content, etc.); (2) to create a library of adsorption data measured on shale samples at relevant conditions and to use them for estimating GIP and gas storage in shale reservoirs; (3) to build an experimental apparatus to measure adsorption properties of supercritical fluids (such as CO2 or CH 4) in microporous materials; (4) to measure adsorption isotherms on microporous samples at various temperatures and pressures. The main outcomes of this Master's Thesis are summarized as follows. A review of the literature has been carried out to create a library of methane and CO2 adsorption isotherms on shale samples from various formations worldwide. 
Large discrepancies have been found between estimates of the adsorbed gas density

  12. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    Science.gov (United States)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

    Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space-marching code. Grid-converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot-wall conditions, but differ significantly for cold walls, i.e., at the low wall-to-total temperature ratios T_w/T_t characteristic of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.

  13. Performance Evaluation of Large Aperture 'Polished Panel' Optical Receivers Based on Experimental Data

    Science.gov (United States)

    Vilnrotter, Victor

    2013-01-01

    Recent interest in hybrid RF/optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness, and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, and compared with the ultimate quantum bound on detection performance for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.

  14. Experimental Seismic Event-screening Criteria at the Prototype International Data Center

    Science.gov (United States)

    Fisk, M. D.; Jepsen, D.; Murphy, J. R.

    Experimental seismic event-screening capabilities are described, based on the difference of body- and surface-wave magnitudes (denoted as Ms:mb) and event depth. These capabilities have been implemented and tested at the prototype International Data Center (PIDC), based on recommendations by the IDC Technical Experts on Event Screening in June 1998. Screening scores are presented that indicate numerically the degree to which an event meets, or does not meet, the Ms:mb and depth screening criteria. Seismic events are also categorized as onshore, offshore, or mixed, based on their 90% location error ellipses and an onshore/offshore grid with five-minute resolution, although this analysis is not used at this time to screen out events. Results are presented of applications to almost 42,000 events with mb>=3.5 in the PIDC Standard Event Bulletin (SEB) and to 121 underground nuclear explosions (UNEs) at the U.S. Nevada Test Site (NTS), the Semipalatinsk and Novaya Zemlya test sites in the Former Soviet Union, the Lop Nor test site in China, and the Indian, Pakistan, and French Polynesian test sites. The screening criteria appear to be quite conservative. None of the known UNEs are screened out, while about 41 percent of the presumed earthquakes in the SEB with mb>=3.5 are screened out. UNEs at the Lop Nor, Indian, and Pakistan test sites on 8 June 1996, 11 May 1998, and 28 May 1998, respectively, have among the lowest Ms:mb scores of all events in the SEB. To assess the validity of the depth screening results, comparisons are presented of SEB depth solutions to those in other bulletins that are presumed to be reliable and independent. Using over 1600 events, the comparisons indicate that the SEB depth confidence intervals are consistent with or shallower than over 99.8 percent of the corresponding depth estimates in the other bulletins. Concluding remarks are provided regarding the performance of the experimental event-screening criteria, and plans for future
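    The notion of a numerical screening score can be sketched as a standardized distance from an Ms:mb discrimination line: earthquakes generate relatively strong surface waves (large Ms for a given mb), explosions a marked Ms deficit. The slope, intercept and scale used below are placeholders for illustration only; they are not the actual PIDC screening constants.

```python
def ms_mb_score(ms, mb, slope=1.25, intercept=2.20, sigma=0.20):
    """Illustrative Ms:mb screening score.

    Positive scores put the event on the earthquake side of an assumed
    screening line Ms = slope*mb - intercept, in units of an assumed
    scatter sigma. All three constants are hypothetical placeholders.
    """
    return (ms - (slope * mb - intercept)) / sigma

def screened_out(ms, mb, threshold=0.0):
    """An event is screened out (presumed earthquake) if its score exceeds the threshold."""
    return ms_mb_score(ms, mb) > threshold

# Two events with the same body-wave magnitude but very different surface waves.
earthquake_like = screened_out(ms=4.6, mb=4.5)   # strong surface waves
explosion_like = screened_out(ms=3.2, mb=4.5)    # clear Ms deficit
```

    A continuous score of this kind, rather than a binary flag, is what lets a bulletin report the degree to which each event meets the criterion, as described above.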

  15. 3D computed tomography using a microfocus X-ray source: Analysis of artifact formation in the reconstructed images using simulated as well as experimental projection data

    International Nuclear Information System (INIS)

    Krimmel, S.; Stephan, J.; Baumann, J.

    2005-01-01

    The scope of this contribution is to identify and quantify the influence of different parameters on the formation of image artifacts in X-ray computed tomography (CT), resulting, for example, from beam hardening or from the partial lack of information inherent in 3D cone beam CT. In general, the reconstructed image quality depends on a number of acquisition parameters concerning the X-ray source (e.g. X-ray spectrum), the geometrical setup (e.g. cone beam angle), the sample properties (e.g. absorption characteristics) and the detector properties. While it is difficult to clearly separate the influence of the different effects in experimental projection data, each effect can be studied individually with the help of simulated projection data by varying the parameter set. The reconstruction of the 3D data set is performed with the filtered back projection algorithm according to Feldkamp, Davis and Kress for experimental as well as for simulated projection data. The experimental data are recorded with an industrial microfocus CT system which features a focal spot size of a few micrometers and uses a digital flat panel detector for data acquisition.
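The beam-hardening effect mentioned above can be illustrated with a minimal simulation: for a polychromatic beam, the apparent attenuation coefficient inferred from -ln(I/I0)/t drops with thickness, because the soft part of the spectrum is absorbed first. The two-line spectrum and its coefficients below are invented for illustration only.

```python
import math

# Two-component toy spectrum: (weight, mu in 1/cm). The soft component is
# attenuated more strongly, so the transmitted beam "hardens" with depth.
spectrum = [(0.5, 1.0), (0.5, 0.2)]

def apparent_mu(thickness_cm):
    """Effective attenuation coefficient -ln(I/I0)/t seen by the detector."""
    i0 = sum(w for w, _ in spectrum)
    i = sum(w * math.exp(-mu * thickness_cm) for w, mu in spectrum)
    return -math.log(i / i0) / thickness_cm

thin, thick = apparent_mu(0.1), apparent_mu(5.0)
# thin > thick: the same material appears less attenuating when traversed over
# a longer path, which the reconstruction misinterprets (e.g. cupping artifacts).
```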

  16. The use of linear fractional analogues of rheological models in the problem of approximating the experimental data on the stretching of polyvinylchloride Elastron

    Directory of Open Access Journals (Sweden)

    Luiza G. Ungarova

    2016-12-01

    Full Text Available We consider and analyze uniaxial phenomenological models of viscoelastic deformation based on fractional analogues of the Scott Blair, Voigt, Maxwell, Kelvin and Zener rheological models. Analytical solutions of the corresponding differential equations with fractional Riemann–Liouville operators are obtained for constant stress with subsequent unloading; they are written in terms of the generalized (two-parameter) fractional exponential function and contain from two to four parameters, depending on the type of model. A method for identifying the model parameters from background information for experimental creep curves at constant stresses was developed. The nonlinear problem of parametric identification is solved by a two-step iterative method. At the first stage, characteristic data points of the diagrams and features of the behavior of the models under unbounded growth of time are used to determine an initial approximation of the parameters. At the second stage, these parameters are refined by coordinate descent (the Hooke–Jeeves method), minimizing the standard deviation between calculated and experimental values. The identification method is realized for all the considered models on the basis of known experimental data on the uniaxial viscoelastic deformation of polyvinylchloride Elastron at a temperature of 20 °C and five tensile stress levels. Tabulated parameter values for all models are given. An error analysis of the constructed phenomenological models against the experimental data is performed over the entire ensemble of viscoelastic deformation curves. It was found that the approximation error for the fractional Scott Blair model is 14.17%, for the fractional Voigt model 11.13%, for the fractional Maxwell model 13.02%, for the fractional Kelvin model 10.56%, and for the fractional Zener model 11.06%. The graphs of the calculated and experimental dependences of viscoelastic deformation of Polyvinylchloride
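A minimal sketch of the second identification stage, assuming the fractional Scott Blair creep law eps(t) = sigma*t^alpha/(eta*Gamma(1+alpha)) and a plain coordinate-descent search with step halving (a simplified stand-in for the Hooke–Jeeves pattern search). The data and starting values are synthetic, not the Elastron measurements.

```python
import math

# Fractional Scott Blair creep under constant stress (one of the five models):
# eps(t) = sigma * t**alpha / (eta * Gamma(1 + alpha))
def creep(t, alpha, eta, sigma=1.0):
    return sigma * t ** alpha / (eta * math.gamma(1.0 + alpha))

def rms(params, data):
    alpha, eta = params
    return math.sqrt(sum((creep(t, alpha, eta) - e) ** 2 for t, e in data)
                     / len(data))

def coordinate_descent(f, x0, step=0.2, tol=1e-6):
    """Axis-by-axis pattern search with step halving (Hooke-Jeeves-like)."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if trial[i] <= 0.0:          # keep parameters physical
                    continue
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                      # refine the search resolution
    return x, fx

# Synthetic "experimental" creep curve generated with alpha=0.6, eta=2.0.
data = [(t, creep(t, 0.6, 2.0)) for t in (0.5, 1.0, 2.0, 4.0, 8.0)]
best, err = coordinate_descent(lambda p: rms(p, data), [0.4, 1.0])
```

Because the synthetic data are generated by the model itself, the search drives the standard deviation essentially to zero and recovers the generating parameters.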

  17. Gamma Ray Shielding Study of Barium–Bismuth–Borosilicate Glasses as Transparent Shielding Materials using MCNP-4C Code, XCOM Program, and Available Experimental Data

    Directory of Open Access Journals (Sweden)

    Reza Bagheri

    2017-02-01

    Full Text Available In this work, linear and mass attenuation coefficients, effective atomic numbers and electron densities, mean free paths, and half-value layer and tenth-value layer values of barium–bismuth–borosilicate glasses were obtained for 662 keV, 1,173 keV, and 1,332 keV gamma-ray energies using the MCNP-4C code and the XCOM program. The obtained data were then compared with available experimental data. The MCNP-4C code and XCOM program results were in good agreement with the experimental data. Barium–bismuth–borosilicate glasses exhibit good gamma-ray shielding properties.
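The half-value layer, tenth-value layer, and mean free path all follow from the linear attenuation coefficient mu via the exponential attenuation law I = I0*exp(-mu*t): HVL = ln2/mu, TVL = ln10/mu, mean free path = 1/mu. A small helper makes the relations explicit (the example mu used in the test is arbitrary, not a value for these glasses).

```python
import math

def shielding_descriptors(mu_per_cm):
    """Derived shielding quantities from the linear attenuation coefficient mu,
    using I = I0 * exp(-mu * t): mean free path 1/mu, HVL ln2/mu, TVL ln10/mu."""
    return {
        "mean_free_path_cm": 1.0 / mu_per_cm,
        "half_value_layer_cm": math.log(2.0) / mu_per_cm,
        "tenth_value_layer_cm": math.log(10.0) / mu_per_cm,
    }
```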

  18. Program ECSX4 (version 78-1): conversion of experimentally measured cross-section data from the four-center-exchange (X-4) format to the Livermore ECSIL format

    International Nuclear Information System (INIS)

    Cullen, D.E.; Perkins, S.T.

    1978-01-01

    A computer code called ECSX4 converts experimentally measured cross-section data from the four-center-exchange (X-4) format to the Livermore Experimental Cross-Section Information Library (ECSIL) format. The major advantage of this program is that it converts the variable format and dimensioned data of the X-4 format to the fixed-field format and consistent set of units of the ECSIL format. This consistency greatly simplifies the subsequent use of the data for cross-section evaluations. 2 figures, 3 tables

  19. Comparison of SRIM, MCNPX and GEANT simulations with experimental data for thick Al absorbers

    International Nuclear Information System (INIS)

    Evseev, Ivan G.; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Yevseyeva, Olga; Assis, Joaquim T. de; Hormaza, Joel M.; Diaz, Katherin S.; Lopes, Ricardo T.

    2010-01-01

    Proton computerized tomography deals with relatively thick targets like the human head or trunk. In this case, precise analytical calculation of the proton final energy is a rather complicated task, so Monte Carlo simulation stands out as a solution. We used the GEANT4.8.2 code to calculate the proton final energy spectra after passing through a thick Al absorber and compared them with experimental data obtained under the same conditions. The ICRU49, Ziegler85 and Ziegler2000 models from the low-energy extension pack were used. The results were also compared with SRIM2008 and MCNPX2.4 simulations, and with solutions of the Boltzmann transport equation in the Fokker–Planck approximation.

  20. Comparison of SRIM, MCNPX and GEANT simulations with experimental data for thick Al absorbers

    Energy Technology Data Exchange (ETDEWEB)

    Evseev, Ivan G. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil); Schelin, Hugo R. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil)], E-mail: schelin@utfpr.edu.br; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil); Yevseyeva, Olga; Assis, Joaquim T. de [Instituto Politecnico da UERJ, Rua Alberto Rangel s/n, Nova Friburgo-RJ (Brazil); Hormaza, Joel M. [Instituto de Biociencias da UNESP, Distrito de Rubiao Junior s/n, Botucatu-SP (Brazil); Diaz, Katherin S. [CEADEN, Calle 30 502 e/5ta y 7ma Avenida, Playa, Ciudad Habana (Cuba); Lopes, Ricardo T. [Laboratorio de Instrumentacao Nuclear, COPPE, UFRJ, Rio de Janeiro-RJ (Brazil)

    2010-04-15

    Proton computerized tomography deals with relatively thick targets like the human head or trunk. In this case, precise analytical calculation of the proton final energy is a rather complicated task, so Monte Carlo simulation stands out as a solution. We used the GEANT4.8.2 code to calculate the proton final energy spectra after passing through a thick Al absorber and compared them with experimental data obtained under the same conditions. The ICRU49, Ziegler85 and Ziegler2000 models from the low-energy extension pack were used. The results were also compared with SRIM2008 and MCNPX2.4 simulations, and with solutions of the Boltzmann transport equation in the Fokker–Planck approximation.
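The final-energy calculation for a thick absorber can be illustrated with a deliberately crude model: a Bragg–Kleeman-type range-energy relation R(E) = a*E^p combined with the residual-range rule. The coefficients below are rough illustrative values, not the ICRU49/Ziegler stopping-power tables used in the paper.

```python
# Bragg-Kleeman-type range-energy relation R(E) = a * E**p and the
# residual-range rule for the mean energy after a slab.
A = 0.0022   # cm / MeV**p (hypothetical coefficient)
P = 1.77     # dimensionless exponent (hypothetical)

def proton_exit_energy(e_in_mev, thickness_cm, a=A, p=P):
    """Mean proton energy after the absorber; 0.0 means it stopped inside."""
    residual_range = a * e_in_mev ** p - thickness_cm
    if residual_range <= 0.0:
        return 0.0
    return (residual_range / a) ** (1.0 / p)
```

A real calculation must also account for energy-loss straggling and multiple scattering, which is why the paper compares full Monte Carlo spectra rather than mean energies.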

  1. Application of artificial neural networks in analysis of CHF experimental data in round tubes

    International Nuclear Information System (INIS)

    Huang Yanping; Chen Bingde; Lang Xuemei; Wang Xiaojun; Shan Jianqiang; Jia Dounan

    2004-01-01

    Artificial neural networks (ANNs) are successfully applied in this paper to analyze critical heat flux (CHF) experimental data from round tubes. A software package adopting the artificial neural network method for predicting CHF in round tubes, together with a CHF database, was obtained. Compared with common CHF correlations and the CHF look-up table, the ANN method has stronger fault tolerance and good robustness. The CHF prediction software adopting artificial neural network technology improves the prediction accuracy over a wider parameter range, and is easier to update and to use. The artificial neural network method used in this paper can be applied to similar physical problems. (authors)
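As an illustration of the approach (not the authors' software), a tiny one-hidden-layer network can be trained by gradient descent on a toy CHF-like function of mass flux and steam quality. The target function, network size and learning rate are all invented for this sketch.

```python
import math
import random

random.seed(0)

# Toy stand-in for a CHF data set: chf grows with mass flux g and falls with
# steam quality x. Both the function and its constants are invented here.
def toy_chf(g, x):
    return 2.0 * g * (1.0 - x)

class TinyMLP:
    """One hidden tanh layer, scalar output, trained by plain gradient descent."""

    def __init__(self, n_in=2, n_hid=4):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, inp):
        self.h = [math.tanh(sum(w * v for w, v in zip(row, inp)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, inp, target, lr=0.01):
        err = self.forward(inp) - target
        for j, h in enumerate(self.h):
            grad_h = err * self.w2[j] * (1.0 - h * h)   # backprop through tanh
            self.w2[j] -= lr * err * h
            for i, v in enumerate(inp):
                self.w1[j][i] -= lr * grad_h * v
            self.b1[j] -= lr * grad_h
        self.b2 -= lr * err
        return err * err

data = [((g, x), toy_chf(g, x)) for g in (0.5, 1.0, 1.5) for x in (0.1, 0.4, 0.7)]
net = TinyMLP()
first = sum(net.train_step(inp, t) for inp, t in data)  # loss over first epoch
for _ in range(500):
    last = sum(net.train_step(inp, t) for inp, t in data)
```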

  2. Electromagnetic effects and scattering lengths extraction from experimental data on K → 3π decays

    International Nuclear Information System (INIS)

    Gevorkyan, S.R.; Madigozhin, D.T.; Tarasov, A.V.; Voskresenskaya, O.O.

    2008-01-01

    The final state interactions in K± → π±π0π0 decays are considered using the methods of non-relativistic quantum mechanics. We show how to take into account the largest electromagnetic effect in the analysis of experimental data using the amplitudes calculated earlier. We propose the relevant expressions for amplitude corrections valid both above and below the two charged pion production threshold M(π0π0) = 2m(π±), including the average effect for the threshold bin. These formulae can be used in the procedure of pion scattering length measurement from the M(π0π0) spectrum

  3. Determination of the root-mean-square radius of the deuteron from present-day experimental data on neutron-proton scattering

    International Nuclear Information System (INIS)

    Babenko, V. A.; Petrov, N. M.

    2008-01-01

    The correlation between the root-mean-square matter radius of the deuteron, r_m, and its effective radius, ρ, is investigated. A parabolic relationship between these two quantities makes it possible to determine the root-mean-square radius r_m to within 0.01% if the effective radius ρ is known. The matter (r_m), structural (r_d), and charge (r_ch) radii of the deuteron are found with the aid of modern experimental results for phase shifts from the SAID nucleon-nucleon database, and their values are fully consistent with their counterparts deduced by using the experimental value of the effective deuteron radius due to Borbely and his coauthors. The charge-radius value of 2.124(6) fm, which was obtained with the aid of the SAID nucleon-nucleon database, and the charge-radius value of 2.126(12) fm, which was obtained with the aid of the experimental value of the effective radius ρ, are in very good agreement with the present-day charge-radius value of 2.128(11) fm, which was deduced by Sick and Trautmann by processing world-averaged experimental data on elastic electron scattering by deuterons with allowance for Coulomb distortions.
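A parabolic relationship r_m(ρ) means that three calibration pairs determine the quadratic exactly. The generic helper below recovers the coefficients from three points via Cramer's rule; the sample points in the test come from an arbitrary quadratic, not from the paper's fitted r_m(ρ) coefficients.

```python
def quadratic_through(points):
    """Coefficients (a, b, c) of y = a + b*x + c*x**2 through three points,
    obtained by Cramer's rule on the 3x3 Vandermonde system."""
    (x1, y1), (x2, y2), (x3, y3) = points

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    base = [[1.0, x1, x1 * x1], [1.0, x2, x2 * x2], [1.0, x3, x3 * x3]]
    d = det3(base)
    coeffs = []
    for col in range(3):
        m = [row[:] for row in base]            # replace one column with y
        for r, y in zip(m, (y1, y2, y3)):
            r[col] = y
        coeffs.append(det3(m) / d)
    return tuple(coeffs)
```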

  4. Some ideas about the modeling of experimental data obtained during spent fuel leaching in the presence of dissolved hydrogen

    International Nuclear Information System (INIS)

    Spahiu, K.

    2003-01-01

    Lately, several sets of experimental data have been collected or published on the dissolution of spent fuel in solutions saturated with dissolved hydrogen. In the SFS project there are also several planned experiments of this type with different solids (alpha-doped UO2, high-burnup spent fuel or MOX) or solution compositions (distilled water, low ionic strength carbonated solutions, concentrated NaCl solutions). Different hypotheses have already been put forward to explain the data, and full models have been proposed, including the influence of dissolved Fe(II) on fuel dissolution. Some ideas on the main lines of modeling spent fuel dissolution under such conditions will be presented. The hydrogen effect on spent fuel dissolution is a relatively recent finding, and experiments are still being carried out to confirm or rule it out for different spent fuels and conditions. For this reason it would be too ambitious at the present level of knowledge to present a full model of such data, because a spent fuel dissolution model should be valid for predictions on geological time scales based on relatively short-term experiments. This is possible only with a very good understanding of the dissolution process and of the mechanisms underlying the hydrogen effect, while a simple extrapolation of experimental data to repository time scales would not be reliable. (Author)

  5. CFD analysis of pressure drop across grid spacers in rod bundles compared to correlations and heavy liquid metal experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Batta, A., E-mail: batta@kit.edu; Class, A.G., E-mail: class@kit.edu

    2017-02-15

    Early studies of the flow in rod bundles with spacer grids suggest that the pressure drop can be decomposed into contributions from flow area variations caused by the spacer grids and frictional losses along the rods. For these shape and frictional losses, simple correlations based on theoretical and experimental data have been proposed. In the OECD benchmark study LACANES it was observed that correlations could describe the flow behavior of the heavy liquid metal loop well, with the exception of the core region, where different experts chose different pressure-loss correlations for the losses due to spacer grids. Here, RANS CFD simulations provided very good agreement with the experimental data. It was observed that the most commonly applied Rehme correlation underestimated the shape losses. The available correlations relate the pressure drop across a grid spacer to the relative plugging of the spacer, i.e. the solidity ε_max. More sophisticated correlations distinguish between spacer grids with round or sharp leading-edge shapes. The purpose of this study is (i) to show that CFD is suitable to predict the pressure drop across spacer grids and (ii) to assess the generality of pressure drop correlations. By verification and validation of CFD results against experimental data obtained in KALLA we show (i). The generality (ii) is challenged by considering three cases which yield identical pressure drops in the correlations. First, we test the effect of surface roughness, a parameter not present in the correlations; here we compare a simulation assuming a typical surface roughness representing the experimental situation to a perfectly smooth spacer surface. Second, we reverse the flow direction for the spacer grid employed in the experiments, which is asymmetric. The flow direction reversal is chosen for convenience, since an asymmetric spacer grid with a given blockage ratio may result in different flow situations depending on flow direction. Obviously blockage
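The correlation form at issue can be sketched in one line: a Rehme-type grid loss Δp = C_v ε² ρv²/2, where ε is the solidity. C_v is taken as an assumed constant here for illustration, although in reality it depends on Reynolds number and on the leading-edge shape, which is exactly the generality the study questions.

```python
def spacer_pressure_drop(solidity, rho_kg_m3, v_m_s, cv=6.0):
    """Rehme-type grid loss dp = Cv * eps**2 * rho * v**2 / 2 (Pa).
    cv is assumed constant here; in practice it depends on the Reynolds
    number and on the leading-edge shape of the spacer."""
    return cv * solidity ** 2 * rho_kg_m3 * v_m_s ** 2 / 2.0
```

The quadratic dependence on solidity is the key feature: doubling the blocked-area fraction quadruples the predicted shape loss at fixed bundle velocity.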

  6. Effect of impurities and post-experimental purification in SAD phasing with serial femtosecond crystallography data.

    Science.gov (United States)

    Zhang, Tao; Gu, Yuanxin; Fan, Haifu

    2016-06-01

    In serial crystallography (SX) with either an X-ray free-electron laser (XFEL) or synchrotron radiation as the light source, huge numbers of micrometre-sized crystals are used in diffraction data collection. For a SAD experiment using a derivative with introduced heavy atoms, it is difficult to completely exclude crystals of the native protein from the sample. In this paper, simulations were performed to study how the inclusion of native crystals in the derivative sample could affect the result of SAD phasing and how the post-experimental purification proposed by Zhang et al. [(2015), Acta Cryst. D71, 2513-2518] could be used to remove the impurities. A gadolinium derivative of lysozyme and the corresponding native protein were used in the test. Serial femtosecond crystallography (SFX) diffraction snapshots were generated by CrystFEL. SHELXC/D, Phaser, DM, ARP/wARP and REFMAC were used for automatic structure solution. It is shown that a small amount of impurities (snapshots from native crystals) in the set of derivative snapshots can strongly affect the SAD phasing results. On the other hand, post-experimental purification can efficiently remove the impurities, leading to results similar to those from a pure sample.

  7. An Open-Source Data Storage and Visualization Back End for Experimental Data

    DEFF Research Database (Denmark)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert

    2014-01-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status ... and to interfere with the experiment if needed. The logging of data from independent clients leads to high ...
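A minimal stand-in for the client/server/presentation split described above: clients push (codename, timestamp, value) samples to a server that stores them centrally, and a web layer would read the same table for live plots. The schema and function names here are illustrative, not the system's actual API.

```python
import sqlite3
import time

def make_server(path=":memory:"):
    """Central store; an in-memory SQLite database stands in for the server."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS measurements ("
               "codename TEXT, t REAL, value REAL)")
    return db

def log_point(db, codename, value, t=None):
    """What a data-logging client would send for one sample."""
    db.execute("INSERT INTO measurements VALUES (?, ?, ?)",
               (codename, time.time() if t is None else t, value))
    db.commit()

def latest(db, codename):
    """What a live-visualization page would poll for."""
    return db.execute("SELECT t, value FROM measurements WHERE codename = ? "
                      "ORDER BY t DESC LIMIT 1", (codename,)).fetchone()
```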

  8. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which gives it many disadvantages. With the development of multi-core processors and new scheduling algorithms, the Linux OS exhibits performance in real-time applications similar to that of VxWorks, and it has been successfully used even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory is allocated for DMA transfer by slightly modifying the Linux kernel (version 2.6) source code. To increase the throughput of network transmission, the user software is designed for parallelism. To achieve high performance in real-time data transfer from hardware to software, memory-mapping techniques are used to avoid unnecessary data copying. A simplified readout system was implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions, hardware or software, on the use of this method, which means that it can easily be migrated to other interrupt-related applications.
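The zero-copy idea can be sketched with Python's mmap module: map the buffer once and hand out memoryview slices instead of read()-style copies. A temporary file stands in here for the kernel-reserved contiguous DMA region that a real driver node would expose.

```python
import mmap
import tempfile

def map_buffer(size):
    """Map a file-backed region; in a real readout system this would be the
    driver-exposed DMA buffer rather than a temp file."""
    f = tempfile.TemporaryFile()
    f.truncate(size)
    return f, mmap.mmap(f.fileno(), size)

f, buf = map_buffer(4096)
buf[0:4] = b"\xde\xad\xbe\xef"       # the "hardware" side fills the buffer
view = memoryview(buf)[0:4]          # software consumes it without copying
```

The memoryview references the mapped pages directly, so packing and forwarding can operate on the data in place before it is handed to the network layer.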

  9. Experimental Energy Consumption of Frame Slotted ALOHA and Distributed Queuing for Data Collection Scenarios

    Directory of Open Access Journals (Sweden)

    Pere Tuset-Peiro

    2014-07-01

    Full Text Available Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. By contrast, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.

  10. Experimental energy consumption of Frame Slotted ALOHA and Distributed Queuing for data collection scenarios.

    Science.gov (United States)

    Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier

    2014-07-24

    Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. By contrast, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA.
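The state-machine energy model described in the paper reduces to a weighted sum: total per-packet energy = Σ (mean number of visits to a state) × (mean energy per visit). The per-state figures below are invented for illustration, not the measured values; they merely reproduce the qualitative outcome that collisions make FSA costlier than DQ when the node count is unknown.

```python
# profile maps state name -> (mean visits per delivered packet, joules per visit)
def total_energy(profile):
    return sum(visits * joules for visits, joules in profile.values())

# Hypothetical numbers: FSA revisits contention states due to collisions,
# while DQ's collision-free schedule keeps visit counts near one.
fsa = {"contend": (3.0, 2.0e-4), "tx_data": (1.8, 5.0e-4), "rx_ack": (1.8, 1.5e-4)}
dq  = {"contend": (1.2, 0.8e-4), "tx_data": (1.0, 5.0e-4), "rx_ack": (1.0, 1.5e-4)}
```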

  11. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim; Harbi, Razen; Khayyat, Zuhair; Kalnis, Panos

    2017-01-01

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  12. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-10-19

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  13. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

    This work was carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1: PMATCH, A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions, and their use in the program is discussed. When there are not sufficient thermodynamic data available to describe a species' behaviour under all conceivable conditions, the problems arising are thoroughly discussed and the available data are handled by approximating expressions. Part 2: The Experimental Validation of Geochemical Computer Models presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well and that its database was of sufficient validity. However, it was observed that experimental difficulties could hardly be avoided when, as here, a gaseous component takes part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, abnormal effects appeared when manganese and calcium carbonates were mixed, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months, whereby the tracer became more strongly adsorbed onto calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the 'species' NaEu(CO3)2
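The internal-consistency bookkeeping that a thermochemical data manager performs rests on relations such as ΔG_r = -RT ln K, so that equilibrium constants and Gibbs energies referenced to a common frame agree with each other. A minimal helper (the example ΔG in the test is chosen to give log K ≈ 1; it is not a value from the PMATCH database):

```python
import math

R_J_PER_MOL_K = 8.314  # gas constant

def log_k(delta_g_r_j_per_mol, temp_k=298.15):
    """log10 of the equilibrium constant from the reaction Gibbs energy:
    log10 K = -dG_r / (R * T * ln 10). Keeping log K and dG mutually
    consistent in this way is the kind of check a data manager relies on."""
    return -delta_g_r_j_per_mol / (R_J_PER_MOL_K * temp_k * math.log(10.0))
```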

  14. Experimental studies of electron capture

    International Nuclear Information System (INIS)

    Pedersen, E.H.

    1983-01-01

    This thesis discusses the main results of recent experimental studies of electron capture in asymmetric collisions. Most of the results have been published, but the thesis also contains as yet unpublished data, or data presented only in unrefereed conference proceedings. The thesis aims at giving a coherent discussion of the understanding of the experimental results, based first on simple qualitative considerations and subsequently on quantitative comparisons with the best theoretical calculations currently available. (Auth.)

  15. Experimental data available for radiation damage modelling in reactor materials

    International Nuclear Information System (INIS)

    Wollenberger, H.

    Radiation damage modelling requires rate constants for production, annihilation and trapping of defects. The literature is reviewed with respect to experimental determination of such constants. Useful quantitative information exists only for Cu and Al. Special emphasis is given to the temperature dependence of the rate constants

  16. Simulation of FRET dyes allows quantitative comparison against experimental data

    Science.gov (United States)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
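The FRET efficiencies computed from such trajectories follow the Förster relation E = 1/(1 + (r/R0)^6) for a donor–acceptor distance r and Förster radius R0; a minimal helper:

```python
def fret_efficiency(r, r0):
    """Foerster relation: transfer efficiency E = 1 / (1 + (r / r0)**6).
    E = 0.5 exactly at r = r0, approaching 1 at short and 0 at long distances."""
    return 1.0 / (1.0 + (r / r0) ** 6)
```

The steep sixth-power dependence is what makes FRET a "spectroscopic ruler" near R0, and why simulated dye trajectories must resolve the distance distribution rather than just its mean.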

  17. Meteorological and snow distribution data in the Izas Experimental Catchment (Spanish Pyrenees) from 2011 to 2017

    Science.gov (United States)

    Revuelto, Jesús; Azorin-Molina, Cesar; Alonso-González, Esteban; Sanmiguel-Vallelado, Alba; Navarro-Serrano, Francisco; Rico, Ibai; López-Moreno, Juan Ignacio

    2017-12-01

    This work describes the snow and meteorological data set available for the Izas Experimental Catchment in the Central Spanish Pyrenees, from the 2011 to 2017 snow seasons. The experimental site is located on the southern side of the Pyrenees between 2000 and 2300 m above sea level, covering an area of 55 ha. The site is a good example of a subalpine environment in which the evolution of snow accumulation and melt are of major importance in many mountain processes. The climatic data set consists of (i) continuous meteorological variables acquired from an automatic weather station (AWS), (ii) detailed information on snow depth distribution collected with a terrestrial laser scanner (TLS, lidar technology) for certain dates across the snow season (between three and six TLS surveys per snow season) and (iii) time-lapse images showing the evolution of the snow-covered area (SCA). The meteorological variables acquired at the AWS are precipitation, air temperature, incoming and reflected solar radiation, infrared surface temperature, relative humidity, wind speed and direction, atmospheric air pressure, surface temperature (snow or soil surface), and soil temperature; all were taken at 10 min intervals. Snow depth distribution was measured during 23 field campaigns using a TLS, and daily information on the SCA was also retrieved from time-lapse photography. The data set (https://doi.org/10.5281/zenodo.848277) is valuable since it provides high-spatial-resolution information on the snow depth and snow cover, which is particularly useful when combined with meteorological variables to simulate snow energy and mass balance. This information has already been analyzed in various scientific studies on snow pack dynamics and its interaction with the local climatology or topographical characteristics. 
However, the database generated has great potential for understanding other environmental processes from a hydrometeorological or ecological perspective in which snow dynamics play a major role.

  18. Comparisons of RELAP5-3D Analyses to Experimental Data from the Natural Convection Shutdown Heat Removal Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Hu, Rui; Lisowski, Darius; Kraus, Adam

    2016-04-17

    The Reactor Cavity Cooling System (RCCS) is an important passive safety system being incorporated into the overall safety strategy for high temperature advanced reactor concepts such as the High Temperature Gas-Cooled Reactor (HTGR). The Natural Convection Shutdown Heat Removal Test Facility (NSTF) at Argonne National Laboratory (Argonne) is a 1/2-scale model of the primary features of one conceptual air-cooled RCCS design. The project conducts ex-vessel, passive heat removal experiments in support of the Department of Energy Office of Nuclear Energy's Advanced Reactor Technology (ART) program, while also generating data for code validation purposes. While experiments are being conducted at the NSTF to evaluate the feasibility of the passive RCCS, parallel modeling and simulation efforts are ongoing to support the design, fabrication, and operation of these natural convection systems. Both system-level and high-fidelity computational fluid dynamics (CFD) analyses were performed to gain a complete understanding of the complex flow and heat transfer phenomena in natural convection systems. This paper provides a summary of the RELAP5-3D NSTF model development efforts and provides comparisons between simulation results and experimental data from the NSTF. Overall, the simulation results compared favorably to the experimental data; however, further analyses need to be conducted to investigate the identified differences.

  19. The smoothing and fast Fourier transformation of experimental X-ray and neutron data from amorphous materials

    International Nuclear Information System (INIS)

    Dixon, M.; Wright, A.C.; Hutchinson, P.

    1977-01-01

    The application of fast Fourier transformation techniques to the analysis of experimental X-ray and neutron diffraction patterns from amorphous materials is discussed and compared with conventional techniques using Filon's quadrature. The fast Fourier transform package described also includes cubic spline smoothing and has been extensively tested using model data to which statistical errors have been added by means of a pseudo-random number generator with Gaussian shape. Neither cubic spline nor hand smoothing has much effect on the resulting transform, since the noise removed is of too high a frequency. (Auth.)
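The pipeline the abstract describes (noisy model data, cubic spline smoothing, fast Fourier transform to real space) can be sketched as follows. The interference function, noise level and smoothing parameter below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.fft import dst

rng = np.random.default_rng(0)

# Toy interference function i(Q) on a uniform grid (all values invented)
q = np.linspace(0.01, 25.0, 1024)
i_model = np.exp(-0.02 * q) * np.sin(2.8 * q)
i_noisy = i_model + rng.normal(0.0, 0.05, q.size)   # Gaussian counting noise

# Cubic spline smoothing; s ~ N * sigma^2 is the usual residual target
spline = UnivariateSpline(q, i_noisy, k=3, s=q.size * 0.05**2)
i_smooth = spline(q)

# Discrete sine transform as the fast Fourier step towards real space
g_of_r = dst(i_smooth, type=2)

# Smoothing removes mainly high-frequency noise, as the abstract notes
print(float(np.std(i_smooth - i_model)), float(np.std(i_noisy - i_model)))
```

Because the spline removes mostly high-frequency components, the low-r part of the transform is essentially unchanged, which is the paper's observation.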

  20. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment, and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment

  1. The modelling of condensation in horizontal tubes and the comparison with experimental data

    Directory of Open Access Journals (Sweden)

    Bryk Rafał

    2017-01-01

    Full Text Available The condensation in horizontal tubes plays an important role in determining the operation mode of passive safety systems of modern nuclear power plants. In this paper, two different approaches to modelling this phenomenon are compared and verified against experimental data. The first approach is based on the flow regime map developed by Tandon. Depending on the regime, the heat transfer coefficient is calculated according to the corresponding semi-empirical correlation. The second approach uses a general, fully empirical correlation proposed by Shah. Both models are developed with utilization of the object-oriented, equation-based Modelica language and the open-source OpenModelica environment. The results are compared with data obtained during a large-scale integral test simulating a Loss of Coolant Accident scenario, performed at the dedicated Integral Test Facility Karlstein (INKA), which was built at the Components Testing Department of AREVA in Karlstein, Germany. The INKA facility was designed to test the performance of the passive safety systems of KERENA, the new AREVA boiling water reactor design. INKA represents the KERENA containment with a volume scaling of 1:24; component heights and elevations are at full scale. The comparison of simulation results with the experimental data shows good agreement.
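The Shah approach mentioned above is a single fully empirical correlation. A minimal sketch of its commonly cited 1979 form is given below; the correlation form is quoted from general condensation literature (not from this paper), and the property values in the example call are assumed, not INKA conditions:

```python
# Sketch of the Shah (1979) correlation for film condensation inside tubes.

def shah_condensation_htc(G, D, x, p, p_crit, k_l, mu_l, pr_l):
    """Local condensation heat transfer coefficient [W/m^2.K].

    G: mass flux [kg/m^2.s], D: tube diameter [m], x: vapour quality [-],
    p, p_crit: pressure and critical pressure [Pa], liquid-phase
    conductivity k_l [W/m.K], viscosity mu_l [Pa.s], Prandtl number pr_l.
    """
    re_l = G * D / mu_l                              # liquid-only Reynolds number
    h_l = 0.023 * re_l**0.8 * pr_l**0.4 * k_l / D    # Dittus-Boelter, all-liquid
    p_r = p / p_crit                                 # reduced pressure
    return h_l * ((1 - x)**0.8 + 3.8 * x**0.76 * (1 - x)**0.04 / p_r**0.38)

# Illustrative call for steam-like conditions (all numbers assumed)
h = shah_condensation_htc(G=200.0, D=0.02, x=0.5, p=1.0e6,
                          p_crit=22.06e6, k_l=0.68, mu_l=1.5e-4, pr_l=1.0)
print(round(h, 1))
```

The correlation's appeal in a Modelica model is exactly what the abstract exploits: it is one closed-form expression valid across the quality range, with no regime-map branching.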

  2. Application of a two-region kinetic model for reflected reactors to experimental data

    International Nuclear Information System (INIS)

    Busch, R.D.; Spriggs, G.D.; Williams, J.G.

    1996-01-01

    Reflected reactors constitute one of the most important classes of nuclear reactors. Yet, during the past 50 yr, a plethora of experimental data involving reflected systems has been reported in the literature that cannot be satisfactorily explained using the "standard" (i.e., one-region) point-kinetic model. In particular, many have observed that the prompt-decay α curves obtained from Rossi-α and pulsed-neutron experiments can exhibit multiple decay modes in the vicinity of delayed critical in some types of reflected systems. When analyzed using theories based on the standard point-kinetic model, these data yielded system lifetimes that do not always agree well with the lifetimes predicted by numerical solutions of the multigroup, multidimensional diffusion or transport equations. In several cases, when the longest-lived decay mode (i.e., the dominant root) was plotted as a function of reactivity, the α curve intercepted the reactivity axis at a reactivity significantly greater than 1 $. Brunson dubbed this seemingly inexplicable behavior the "dollar discrepancy." Furthermore, it has also been observed that the kinetic behavior of some reflected, fast-burst assemblies exhibits a very pronounced nonlinear relationship between reactivity and the initial inverse period for reactivity insertions > 1 $.
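The multiple prompt-decay modes described here can be illustrated by fitting a two-exponential model to a synthetic Rossi-α curve. The decay constants, amplitudes and noise level below are hypothetical, chosen only to show the fitting step, not taken from any experiment in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic Rossi-alpha correlated-counts curve with two prompt-decay modes
# (alpha values and amplitudes are invented, e.g. a core and a reflector mode)
t = np.linspace(0.0, 2.0e-3, 400)                    # time lag [s]
alpha_fast, alpha_slow = 8.0e3, 1.2e3                # decay constants [1/s]
true = 1.0 + 4.0 * np.exp(-alpha_fast * t) + 1.5 * np.exp(-alpha_slow * t)
data = true + rng.normal(0.0, 0.02, t.size)

def two_mode(t, c, a1, k1, a2, k2):
    # Constant accidental-coincidence background plus two decay modes
    return c + a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

popt, _ = curve_fit(two_mode, t, data, p0=[1.0, 3.0, 5e3, 1.0, 1e3])
modes = sorted([popt[2], popt[4]])
print(modes)   # slowest mode = dominant root near delayed critical
```

A one-region analysis would force a single exponential through such data, which is one way the lifetime discrepancies described above can arise.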

  3. REBA experimenters' manual

    International Nuclear Information System (INIS)

    Schuch, R.L.

    1977-04-01

    The REBA is a high-energy, pulsed electron beam or bremsstrahlung x-ray generator whose operational purpose is to provide an energy source of short duration for conducting experiments, primarily to determine material responses to rapid surface and in-depth deposition of energy. The purpose of this manual is to serve as a basic source of information for prospective users of REBA. Included is a brief discussion of the design and operation of the facility as well as a summary of output characteristics for electron beam modes and environmental data for x-ray operation. The manual also contains a description of the REBA experimental facilities, including geometry of the test cell, instrumentation and data collection capabilities, and services and support available to experimenters

  4. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters which prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run with a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests.
The stress, strain, thermal results from the thermo

  5. Experimental data from irradiation of physical detectors disclose weaknesses in basic assumptions of the δ ray theory of track structure

    DEFF Research Database (Denmark)

    Olsen, K. J.; Hansen, Jørgen-Walther

    1985-01-01

    The applicability of track structure theory has been tested by comparing predictions based on the theory with experimental high-LET dose-response data for an amino acid alanine and a nylon based radiochromic dye film radiation detector. The linear energy transfer LET, has been varied from 28...

  6. The new real-time control and data acquisition system for an experimental tritium removal facility

    International Nuclear Information System (INIS)

    Stefan, Iuliana; Stefan, Liviu; Retevoi, Carmen; Balteanu, Ovidiu; Bucur, Ciprian

    2006-01-01

    Full text: The purpose of the paper is to present a real-time control and data acquisition system based on virtual instrumentation (LabView, Compact I/O) applicable to an experimental heavy water detritiation plant. The initial data acquisition system, based on analogue instruments, has now been upgraded to a fully digital system because of the greater flexibility and capability of digital hardware, which allows easy modification of the control system. Virtual instrumentation has lately become widely used for monitoring and controlling operational parameters in plants. In the specific case of the ETRF there are many process parameters which have to be monitored and controlled. The essential improvement in the new system is the collection of all signals and control functions by a PC, which makes any change in configuration easy. The system hardware, a PC with embedded controllers, was selected as the most cost-effective. The LabView platform provides faster program development with a convenient user interface. The system provides independent digital control of each parameter and records process data. The system is flexible and has the advantage of further extension. (authors)

  7. Estimation of surface absorptivity in laser surface heating process with experimental data

    International Nuclear Information System (INIS)

    Chen, H-T; Wu, X-Y

    2006-01-01

    This study applies a hybrid technique of the Laplace transform and finite-difference methods, in conjunction with the least-squares method and experimental temperature data taken inside the test material, to predict the unknown surface temperature, heat flux and absorptivity for various surface conditions in the laser surface heating process. The functional form of the surface temperature is unknown a priori and is assumed to be a function of time before performing the inverse calculation. In addition, the whole time domain is divided into several analysis sub-time intervals, and the unknowns on each analysis interval are then estimated. In order to show the accuracy of the present inverse method, comparisons are made among the present estimates, direct results and previous results, showing that the present estimates agree with the direct results for the simulated problem. However, the present estimates of the surface absorptivity deviate slightly from previously estimated results obtained under the assumption of constant thermal properties. The effect of the surface conditions on the surface absorptivity and temperature is not negligible
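The inverse idea (recover an unknown surface history from interior temperature measurements by least squares) can be sketched crudely as below. This is not the paper's hybrid Laplace-transform method: the direct solver is replaced by a plain explicit finite-difference model, and the geometry, properties and surface history are all invented:

```python
import numpy as np
from scipy.optimize import least_squares

# 1-D explicit finite-difference direct solver: given surface temperatures at a
# few knot times, return the temperature history at an interior sensor node.
alpha, L, nx = 1e-5, 0.01, 21            # diffusivity [m^2/s], depth [m], nodes
dx = L / (nx - 1)
dt = 0.2 * dx * dx / alpha               # stable explicit time step
nt = 200
t_knots = np.linspace(0.0, nt * dt, 5)   # sub-time intervals for the unknowns

def sensor_history(surf_knots, sensor=2):
    T = np.zeros(nx)
    out = []
    for n in range(nt):
        T[0] = np.interp(n * dt, t_knots, surf_knots)   # imposed surface temp
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                                   # insulated back face
        out.append(T[sensor])
    return np.array(out)

# Simulate "measured" data from a known (invented) surface history, add noise,
# then recover the knot values by nonlinear least squares.
true_surface = np.array([0.0, 40.0, 80.0, 60.0, 30.0])
measured = sensor_history(true_surface)
measured = measured + np.random.default_rng(2).normal(0, 0.1, measured.size)

fit = least_squares(lambda p: sensor_history(p) - measured, x0=np.zeros(5))
print(np.round(fit.x, 1))
```

Because inverse heat conduction problems are ill-posed, real applications (as in the paper) restrict the unknowns to a few sub-time intervals exactly as parameterized here.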

  8. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  9. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  10. [Interactions of DNA bases with individual water molecules. Molecular mechanics and quantum mechanics computation results vs. experimental data].

    Science.gov (United States)

    Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I

    2013-01-01

    To elucidate details of the DNA-water interactions we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one of the DNA bases and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio MP2/6-31G(d, p) method of quantum mechanics (QM) have been compared with one another and with experimental data. The calculations demonstrated a qualitative agreement between the geometry characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base and forming hydrogen bonds with both of them. Nevertheless, the relative depth of some minima and the peculiarities of mutual water-base positions in these minima depend on the method used. The analysis revealed the insignificance of some differences in the results of calculations performed via different methods, and the importance of other ones, for the description of DNA hydration. The calculations via MM methods enable us to reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with the set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in the crystals of DNA duplex fragments, while some of these data cannot be rationalized by QM calculations.

  11. The development of human factors experimental evaluation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sim, Bong Shick; Oh, In Suk; Cha, Kyung Ho; Lee, Hyun Chul; Park, Geun Ok; Cheon, Se Woo; Suh, Sang Moon

    1997-07-01

    New human factors issues related to HMI designs based on VDUs are arising, such as the evaluation of information navigation, the consideration of operator characteristics, and operator performance assessment. Thus, in order to solve these human factors issues, this project aims to establish experimental technologies, including techniques for experimental design, experimental measurement, data collection and analysis, and to develop an ITF (Integrated Test Facility) suitable for experiments on HMI design evaluation. For the establishment of the experimental data analysis and evaluation methodologies, we developed the following: (1) a paradigm for human factors experimentation, including experimental designs, procedures, and data analysis; (2) methods for the assessment of operator's mental workload; (3) DAEXESS (data analysis and experiment evaluation supporting system). We also established experiment execution techniques through preliminary experiments, such as the suitability evaluation of information display on an LSDP, the evaluation of computerized operation procedures, and an experiment on an advanced alarm system (ADIOS). Finally, we developed the ITF, including a human-machine simulator, a telemetry system, an eye tracking system, an audio/video data measurement system, and a three-dimensional micro-behaviour analysis system. (author). 81 refs., 68 tabs., 73 figs.

  12. Lessons from the Large Hadron Collider for model-based experimentation : the concept of a model of data acquisition and the scope of the hierarchy of models

    NARCIS (Netherlands)

    Karaca, Koray

    2017-01-01

    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data

  13. Proposal for the transmittal of data to LASL and the reporting of TRAC analyses for the multinational reflood experimental program

    International Nuclear Information System (INIS)

    Bleiweis, P.B.; Kirchner, W.L.; Sicilian, J.M.

    1979-04-01

    The proposed form of the digital tape containing the reduced experimental data from any of the 2D/3D facilities (CCTF, SCTF, UPTF, and possibly PKL Core-II) and the procedures which LASL will use in performing TRAC calculations and reporting results are described in this document

  14. THE CALCULATION OF FAST-NEUTRON ATTENUATION PROBABILITIES THROUGH A NINE- INCH POLYETHYLENE SLAB AND COMPARISON WITH EXPERIMENTAL DATA

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, L. G.

    1963-06-15

    Calculations of neutron penetration probabilities were performed to evaluate the Monte Carlo Multilayer Slab Penetration Procedure. A 9-in. polyethylene slab was chosen for the calculations and the results were compared with experimental data. The calculated and measured dose rates agree within 20% for all exit polar angles. The calculations indicate that incident neutrons with energies less than 2.5 MeV do not contribute significantly to the transmitted dose rate. (auth)

  15. Analysis of progressive distortion. Validation of the method based on effective primary stress. Discussion of Anderson's experimental data

    International Nuclear Information System (INIS)

    Moulin, Didier.

    1981-02-01

    An empirical rule usable for design by analysis against progressive distortion has been set up from experiments conducted at C.E.N. Saclay. This rule is checked against experimental data obtained by W.F. Anderson; this experiment is sufficiently different from the Saclay one to evaluate the merits of the rule. The satisfactory results achieved are a further validation of the efficiency diagram on which the method is based [fr

  16. Evaluation of climatic data, post-treatment water yield and snowpack differences between closed and open stands of lodgepole pine on Tenderfoot Creek Experimental Forest

    Science.gov (United States)

    Phillip E. Farnes; Katherine J. Hansen

    2002-01-01

    Data collection on Tenderfoot Creek Experimental Forest was initiated in 1992 and has expanded to the present time. A preliminary report was prepared covering data collection through the 1995 season (Farnes et al., 1995). Some data were updated in Farnes et al., 1999. Since then, data have been collected but have not been edited, summarized or tabulated in electronic form...

  17. Comparative study of OMA applied to experimental and simulated data from an operating Vestas V27 wind turbine

    DEFF Research Database (Denmark)

    Requeson, Oscar Ramirez; Tcherniak, Dmitri; Larsen, Gunner Chr.

    2015-01-01

    Experimental results help to improve and validate numerical and analytical models, and in turn, numerical and analytical modelling helps to improve and validate new experimental techniques. Wind turbines are complex dynamic systems that consist of mutually moving substructures under high dynamic loads. At a standstill, the system can be modelled as linear time-invariant (LTI), and modal analysis requirements are thus fulfilled for the dynamic characterization. Under operation, the system cannot be considered LTI and must be modelled as a linear periodic time-variant (LPTV) system, which allows for the application of the related theory for such systems. One of these methods is based on the Coleman transformation, whose main limitation is the assumption of isotropic rotors. Since rotors are never completely isotropic in real life, this paper presents the application of operational modal analysis together with the Coleman transformation on both experimental data from a full-scale Vestas wind turbine with instrumented blades...
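The Coleman (multiblade coordinate) transformation central to this approach maps blade-frame signals into fixed-frame collective and cyclic coordinates. For a three-bladed rotor it can be sketched as below, on a purely synthetic 1P signal (rotor speed and amplitudes are assumed, not Vestas V27 values):

```python
import numpy as np

# Coleman transformation for a 3-bladed rotor: identical blade sensors q_k,
# measured in the rotating frame at azimuths psi_k, are combined into a
# collective coordinate a0 and cyclic coordinates a1, b1 in the fixed frame.

omega = 2 * np.pi * 32 / 60                     # rotor speed, ~32 rpm (assumed)
t = np.linspace(0, 60, 6000)
psi = [omega * t + 2 * np.pi * k / 3 for k in range(3)]

# Hypothetical blade responses: a collective part plus a 1P cyclic part
q = [2.0 + 0.5 * np.cos(p) for p in psi]

a0 = (q[0] + q[1] + q[2]) / 3                   # collective
a1 = 2 * (q[0] * np.cos(psi[0]) + q[1] * np.cos(psi[1])
          + q[2] * np.cos(psi[2])) / 3          # cosine-cyclic
b1 = 2 * (q[0] * np.sin(psi[0]) + q[1] * np.sin(psi[1])
          + q[2] * np.sin(psi[2])) / 3          # sine-cyclic

# For this isotropic synthetic rotor the multiblade coordinates are constant:
print(float(a0.mean()), float(a1.mean()), float(b1.mean()))
```

For a perfectly isotropic rotor these coordinates render the periodic system time-invariant, which is exactly the assumption the paper interrogates for real (anisotropic) rotors.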

  18. Correlation between experimental data of protonation of aromatic compounds at (+) atmospheric pressure photoionization and theoretically calculated enthalpies.

    Science.gov (United States)

    Ahmed, Arif; Lim, Dongwon; Choi, Cheol Ho; Kim, Sunghwan

    2017-06-30

    The theoretical enthalpy calculated from the overall protonation reaction (electron transfer plus hydrogen transfer) in positive-mode (+) atmospheric-pressure photoionization (APPI) was compared with experimental results for 49 aromatic compounds. A linear relationship was observed between the calculated ΔH and the relative abundance of the protonated peak. The parameter gives reasonable predictions for all the aromatic hydrocarbon compounds used in this study. A parameter is devised by combining experimental MS data and high-level theoretical calculations. A (+) APPI Q Exactive Orbitrap mass spectrometer was used to obtain MS data for each solution. The B3LYP exchange-correlation functional with the standard 6-311+G(df,2p) basis set was used to perform density functional theory (DFT) calculations. All the molecules with ΔH < 0 for protonation by toluene clusters produced protonated ions, regardless of the desolvation temperature. For molecules with ΔH > 0, molecular ions were more abundant at typical APPI desolvation temperatures (300°C), while the protonated ions became comparable or dominant at higher temperatures (400°C). The toluene cluster size was an important factor when predicting the ionization behavior of aromatic hydrocarbon ions in (+) APPI. The data used in this study clearly show that the theoretically calculated reaction enthalpy (ΔH) of protonation with toluene dimers can be used to predict the protonation behavior of aromatic compounds. When compounds have a negative ΔH value, the types of ions generated for aromatic compounds can be very well predicted based on the ΔH value; ΔH can also explain the overall protonation behavior of compounds with ΔH values > 0. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Assessment of leaf carotenoids content with a new carotenoid index: Development and validation on experimental and model data

    Science.gov (United States)

    Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele

    2017-05-01

    Leaf carotenoids content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight into early detection of stress in vegetation. With spectroscopy techniques, a semi-empirical approach based on spectral indices has been extensively used for carotenoids content estimation. However, established spectral indices for carotenoids, which generally rely on limited measured data, might lack predictive accuracy for carotenoids estimation across species and growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that CARI was the index most linearly correlated with carotenoids content at the leaf level using the synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2), compared with published spectral indices. Cross-validation results with CARI using the ANGERS data achieved quite an accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), though RBRI performed best (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf-level field survey data, though PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and the other assessed spectral indices showed good performance for a given dataset, overall their estimation accuracy was not consistent across all datasets used in this study. Conversely, CARI was more robust, showing good results in all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI might not be very sensitive to LCar changes at low leaf area index (LAI) values, and in these conditions soil moisture
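The R2/RMSE comparisons quoted above follow a standard pattern: calibrate a linear model from index value to pigment content, then score it. A self-contained sketch on synthetic data is below; the band choices, reflectance model and noise are hypothetical, and this is not the actual CARI formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic evaluation of a spectral index against pigment content.
n = 200
car = rng.uniform(2.0, 18.0, n)                       # carotenoids [ug/cm^2]
r_abs = 0.30 - 0.010 * car + rng.normal(0, 0.01, n)   # pigment-absorbing band
r_ref = 0.45 + rng.normal(0, 0.01, n)                 # reference band
index = (r_ref - r_abs) / (r_ref + r_abs)             # normalized-difference form

# Ordinary least-squares calibration index -> content, then R^2 and RMSE
A = np.vstack([index, np.ones(n)]).T
coef, *_ = np.linalg.lstsq(A, car, rcond=None)
pred = A @ coef
ss_res = np.sum((car - pred) ** 2)
ss_tot = np.sum((car - car.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / n)
print(round(float(r2), 3), round(float(rmse), 3))
```

In the paper this scoring is repeated per index and per dataset (synthetic, ANGERS, field), which is how the robustness claim for CARI is established.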

  20. Assessment of CATHARE2 V1.5qR6 using the experimental data of BETHSY natural circulation tests

    International Nuclear Information System (INIS)

    Huang Yanping; Jia Dounan

    2003-01-01

    The assessment of CATHARE2 V1.5qR6 is carried out against the experimental data of BETHSY natural circulation test 4.1a-TC. Results show that the experimental process under single-phase natural circulation can be predicted very well by CATHARE2 V1.5qR6; the primary mass inventories at the transition points from single-phase to two-phase natural circulation, and from two-phase natural circulation to the reflux condensation mode, are also predicted correctly. The predicted results for the thermohydraulic parameters of two-phase natural circulation and the reflux condensation mode are not as good. Generally speaking, the capability of CATHARE2 V1.5qR6 to predict two-phase flow processes should be further improved in the future

  1. An assessment of prediction methods of CHF in tubes with a large experimental data bank

    International Nuclear Information System (INIS)

    Leung, L.K.H.; Groeneveld, D.C.

    1993-01-01

    An assessment of prediction methods of CHF in tubes has been carried out using an expanded CHF data bank at Chalk River Laboratories (CRL). It includes eight different CHF look-up tables (two AECL versions and six USSR (or Russian) versions) and three empirical correlations. These prediction methods were developed from relatively large data bases and therefore have a wide range of application. Some limitations, however, were imposed in this study to avoid any invalid predictions due to extrapolation of these methods. The comparisons are therefore limited to the specific data base tailored to suit the range of each individual method, which has resulted in a different number of data points being used in each case. The comparison of predictions against the experimental data is based on the constant inlet-condition approach (i.e., the pressure, mass flux, inlet fluid temperature and tube geometry are the primary parameters). Overall, the AECL tables have the widest range of application. They are assessed with 21 771 data points and the root-mean-square error is only 8.3%. About 60% of these data were used in the development of the AECL tables. The best version of the USSR/Russian CHF table is valid for 13 300 data points with a root-mean-square error of 8.8%. The USSR/Russian table that has the widest range of application covers a total of 18 800 data points, but the error increases to 9.3%. The ranges of application of the empirical correlations, however, are generally much narrower than those covered by the CHF tables, so the number of data used to assess these correlations is further limited. Among the tested correlations, the Becker and Persson correlation covers the fewest data (only 7 499 data points) but has the best accuracy (with a root-mean-square error of 9.71%). 33 refs., 2 figs., 3 tabs
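A CHF look-up table is used by interpolating tabulated CHF values at the local conditions (typically pressure, mass flux and quality). A toy sketch is below: the table entries are invented (not AECL or USSR/Russian values), and the root-mean-square relative error is the metric quoted in the comparisons above:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy stand-in for a CHF look-up table: CHF [kW/m^2] tabulated on a coarse
# (pressure, mass flux, quality) grid. All numbers are invented, purely to
# show the interpolation step and the RMS error metric.
p = np.array([1.0, 5.0, 10.0, 15.0])        # pressure [MPa]
g = np.array([500.0, 1500.0, 3000.0])       # mass flux [kg/m^2.s]
x = np.array([0.0, 0.3, 0.6])               # quality [-]
chf = (4000.0 / (1 + 0.1 * p[:, None, None])
       * (1 + 0.0002 * g[None, :, None]) * (1 - x[None, None, :]))

table = RegularGridInterpolator((p, g, x), chf)

# "Measured" points with 5% scatter (hypothetical), then the RMS error
rng = np.random.default_rng(4)
pts = np.column_stack([rng.uniform(1, 15, 50),
                       rng.uniform(500, 3000, 50),
                       rng.uniform(0, 0.6, 50)])
predicted = table(pts)
measured = predicted * (1 + rng.normal(0, 0.05, 50))
rms = np.sqrt(np.mean(((predicted - measured) / measured) ** 2))
print(round(float(rms), 3))
```

The quoted 8.3% and 8.8% figures are this kind of RMS statistic evaluated over tens of thousands of points within each method's validity range.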

  2. Animal mortality resulting from uniform exposures to photon radiations: Calculated LD50s and a compilation of experimental data

    International Nuclear Information System (INIS)

    Jones, T.D.; Morris, M.D.; Wells, S.M.; Young, R.W.

    1986-12-01

    Studies conducted during the 1950s and 1960s of radiation-induced mortality to diverse animal species under various exposure protocols were compiled into a mortality data base. Some 24 variables were extracted and recomputed from each of the published studies, which were collected from a variety of available sources, primarily journal articles. Two features of this compilation effort are (1) an attempt to give an estimate of the uniform dose received by the bone marrow in each treatment so that interspecies differences due to body size were minimized and (2) a recomputation of the LD50 where sufficient experimental data are available. Exposure rates varied in magnitude from about 10⁻² to 10³ R/min. This report describes the data base, the sources of data, and the data-handling techniques; presents a bibliography of studies compiled; and tabulates data from each study. 103 refs., 44 tabs
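
Where sufficient data exist, an LD50 can be recomputed from a dose-mortality table. A minimal sketch, assuming linear interpolation of the mortality fraction against log-dose between the two bracketing doses (the dose-response numbers below are hypothetical, not from the compilation):

```python
import math

def ld50_interpolated(doses, frac_dead):
    """Estimate the LD50 by linear interpolation of mortality fraction
    against log10(dose); assumes mortality increases with dose."""
    pairs = list(zip(doses, frac_dead))
    for (d1, f1), (d2, f2) in zip(pairs, pairs[1:]):
        if f1 <= 0.5 <= f2:
            x1, x2 = math.log10(d1), math.log10(d2)
            x = x1 + (0.5 - f1) * (x2 - x1) / (f2 - f1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the data")

# Hypothetical marrow dose (Gy) vs. fraction of animals dying
doses = [2.0, 4.0, 6.0, 8.0]
dead  = [0.1, 0.3, 0.7, 0.95]
est = ld50_interpolated(doses, dead)
print(round(est, 2))  # → 4.9
```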

  3. End-to-side nerve neurorrhaphy: critical appraisal of experimental and clinical data.

    Science.gov (United States)

    Fernandez, E; Lauretti, L; Tufo, T; D'Ercole, M; Ciampini, A; Doglietto, F

    2007-01-01

    End-to-side neurorrhaphy (ESN), or terminolateral neurorrhaphy, consists of connecting the distal stump of a transected nerve, named the recipient nerve, to the side of an intact adjacent nerve, named the donor nerve, "in which only an epineurial window is performed". This procedure was reintroduced in 1994 by Viterbo, who presented a report on an experimental study in rats. Several experimental and clinical studies followed this report, with various and sometimes conflicting results. In this paper we present a review of the pertinent literature. Our personal experience using a form of end-to-side nerve anastomosis in which the donor nerve is partially transected is also presented and compared with ESN as defined above. When the proximal stump of a transected nerve is not available, ESN, which is claimed to permit anatomic and functional preservation of the donor nerve, seems an attractive technique, though it has not yet been proven effective. Deliberate axotomy of the donor nerve yields results that are proportional to the extent of the axotomy, but such a technique, though resembling ESN, is an end-to-end neurorrhaphy. Neither experimental nor clinical evidence supports liberalizing the clinical use of ESN, a procedure with only an epineurial window in the donor nerve and without deliberate axotomy. Much more experimental investigation is needed to explain the ability of normal, intact nerves to sprout laterally. Such a procedure appears justified only in an investigational setting.

  4. Use of air monitoring and experimental aerosol data for intake assessment for Mayak plutonium workers

    International Nuclear Information System (INIS)

    Zaytseva, Yekaterina V.; Tretyakov, Fyodor D.; Romanov, Sergey A.; Miller, Guthrie; Bertelli, Luiz; Guilmette, Raymond A.

    2007-01-01

    One of the major uncertainties in reconstructing doses to Mayak plutonium (Pu) workers is the unknown exposure pattern experienced by individuals. These uncertainties include the amounts of Pu inhaled, the temporal pattern of Pu air concentration, and the particle-size distribution and solubility of the inhaled aerosols. To date, little individual and workplace-specific information has been used to assess these parameters for the Mayak workforce. However, an extensive workplace-specific data set of alpha-activity air monitoring has been collated which, if coupled with individual occupational histories, can potentially provide customized intake scenarios for individual Mayak workers. The most widely available Pu air concentration data are annual averages, which exist for over 100 defined work stations at the radiochemical and chemical-metallurgical manufacturing facilities and for essentially the whole period of Mayak production operations. Much sparser but more accurate data on Pu concentrations in workers' breathing zones are available for some major workplaces and occupations. The latter demonstrate that, within a working shift, Pu concentrations varied over several orders of magnitude depending on the nature of the operations performed. An approach to using the collated data set for individual intake reconstruction is formulated and its practical application is demonstrated. Initial results of an ongoing experimental study on historic particle sizes at Mayak PA and their implications for intake estimation are presented. (authors)

  5. Integral analyses of fission product retention at mitigated thermally-induced SGTR using ARTIST experimental data

    International Nuclear Information System (INIS)

    Rýdl, Adolf; Lind, Terttaliisa; Birchley, Jonathan

    2016-01-01

    Highlights: • Source-term analyses of a mitigated thermally-induced SGTR scenario in a PWR performed. • Experimental ARTIST program results on aerosol scrubbing efficiency used in the analyses. • Results demonstrate enhanced aerosol retention in a flooded steam generator. • High aerosol retention cannot be predicted by current theoretical scrubbing models. - Abstract: Integral source-term analyses are performed using MELCOR for a PWR Station Blackout (SBO) sequence leading to an induced steam generator tube rupture (SGTR). In the absence of any mitigation measures, such a sequence can result in a containment bypass in which radioactive materials are released directly to the environment. In some SGTR scenarios, flooding the faulted SG secondary side with water can mitigate both the accident escalation and the release of aerosol-borne and volatile radioactive materials. Data on the efficiency of aerosol scrubbing in an SG tube bundle were obtained in the international ARTIST project. In this paper, ARTIST data are used directly with parametric MELCOR analyses of a mitigated SGTR sequence to provide more realistic estimates of the releases to the environment in this type of scenario. Comparison is made with predictions using the default scrubbing model in MELCOR, as a representative of the aerosol scrubbing models in current integral codes. Specifically, simulations are performed for an unmitigated sequence and for two cases in which the SG secondary was refilled at different times after the tube rupture. The results, reflecting the experimental observations from ARTIST, demonstrate enhanced aerosol retention in the highly turbulent two-phase flow conditions caused by the complex geometry of the SG secondary side. This effect is not captured by any of the models currently available. The underlying physics remains only partly understood, indicating the need for further studies to support a more mechanistic treatment of the retention process.
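
The aerosol retention such analyses quantify is conventionally expressed as a decontamination factor (DF). A minimal sketch of that bookkeeping (the mass values are hypothetical, and this is not the MELCOR scrubbing model):

```python
def decontamination_factor(aerosol_in, aerosol_out):
    """Decontamination factor of a scrubbing pool: mass entering / mass leaving."""
    return aerosol_in / aerosol_out

def retained_fraction(df):
    """Fraction of aerosol retained for a given decontamination factor."""
    return 1.0 - 1.0 / df

# Hypothetical aerosol masses (kg) entering/leaving a flooded SG secondary
df = decontamination_factor(1.0, 0.02)
print(df, retained_fraction(df))  # DF of 50 means 98% of the aerosol is retained
```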

  6. Experimental music for experimental physics

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    Using the sonification technique, physicist and composer Domenico Vicinanza paid homage to CERN at its 60th anniversary ceremony. After months of hard work, he turned the CERN Convention and LHC data into music.   Click here to download the full score of the "LHChamber music". Every birthday deserves gifts and CERN’s 60th anniversary was no exception. Two gifts were very special, thanks to the hard work of Domenico Vicinanza, a physicist and composer. He created two experimental pieces by applying the sonification technique to the CERN Convention and to data recorded by the four LHC detectors during Run 1. “This technique allows us to ‘hear’ data using an algorithm that translates numbers or letters into notes. It keeps the same information enclosed in a graph or a document, but has a more aesthetic exposition,” explains Domenico Vicinanza. “The result is meant to be a metaphor for scientific cooperation, in which d...

  7. Diffusion model analyses of the experimental data of 12C+27Al, 40Ca dissipative collisions

    International Nuclear Information System (INIS)

    SHEN Wen-qing; QIAO Wei-min; ZHU Yong-tai; ZHAN Wen-long

    1985-01-01

    Assuming that the intermediate system decays with a statistical lifetime, the general behavior of the threefold differential cross section d³σ/dZdEdθ in the dissipative collisions of the 68 MeV ¹²C + ²⁷Al and 68.6 MeV ¹²C + ⁴⁰Ca systems is analyzed in the diffusion model framework. The lifetime of the intermediate system and the separation distance for the completely damped deep-inelastic component are obtained. The calculated results are compared with the experimental angular distributions and Wilczynski plots. The probable reasons for the differences between them are briefly discussed

  8. Curation of Laboratory Experimental Data as Part of the Overall Data Lifecycle

    Directory of Open Access Journals (Sweden)

    Jeremy Frey

    2008-08-01

    The explosion in the production of scientific data in recent years is placing strains upon conventional systems supporting the integration, analysis, interpretation and dissemination of data and is thus constraining the whole scientific process. Support for handling large quantities of diverse information can be provided by e-Science methodologies and the cyber-infrastructure that enables collaborative handling of such data. Account needs to be taken of the whole process involved in scientific discovery. This includes consideration of the requirements of the users and consumers further down the information chain and of what they might ideally prefer to impose on the generators of those data. As the degree of digital capture in the laboratory increases, it is possible to improve the automatic acquisition of the ‘context of the data’ as well as the data themselves. This process provides an opportunity for the data creators to ensure that many of the problems they often encounter in later stages are avoided. We wish to elevate curation to an operation to be considered by the laboratory scientist as part of good laboratory practice, not a procedure of concern merely to the few specialising in archival processes. Designing curation into experiments is an effective solution to the provision of high-quality metadata that leads to better, more re-usable data and to better science.

  9. The mzTab Data Exchange Format: Communicating Mass-spectrometry-based Proteomics and Metabolomics Experimental Results to a Wider Audience*

    Science.gov (United States)

    Griss, Johannes; Jones, Andrew R.; Sachsenberg, Timo; Walzer, Mathias; Gatto, Laurent; Hartler, Jürgen; Thallinger, Gerhard G.; Salek, Reza M.; Steinbeck, Christoph; Neuhauser, Nadin; Cox, Jürgen; Neumann, Steffen; Fan, Jun; Reisinger, Florian; Xu, Qing-Wei; del Toro, Noemi; Pérez-Riverol, Yasset; Ghali, Fawaz; Bandeira, Nuno; Xenarios, Ioannis; Kohlbacher, Oliver; Vizcaíno, Juan Antonio; Hermjakob, Henning

    2014-01-01

    The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results for proteomics and metabolomics experiments in a simple tabular format. Many downstream analysis use cases are only concerned with the final results of an experiment and require an easily accessible format, compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail. These range from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to make MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adopted the format as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online. PMID:24980485
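
The tab-separated, line-prefixed layout described above can be sketched as follows; the prefixes and column names here are illustrative only, not the normative mzTab specification:

```python
import csv
import io

# Illustrative rows loosely following mzTab's line-prefix idea:
# metadata ("MTD") lines, a protein header ("PRH"), and protein rows ("PRT").
rows = [
    ["MTD", "mzTab-version", "1.0.0"],
    ["MTD", "description", "toy example"],
    ["PRH", "accession", "description", "abundance"],
    ["PRT", "P12345", "example protein", "1.5e6"],
]

buf = io.StringIO()
csv.writer(buf, delimiter="\t", lineterminator="\n").writerows(rows)
text = buf.getvalue()

# Any tool that reads tab-separated text (R, Excel, the csv module) can consume it:
proteins = [r for r in csv.reader(io.StringIO(text), delimiter="\t") if r[0] == "PRT"]
print(proteins[0][1])  # → P12345
```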

  10. Finite-Geometry and Polarized Multiple-Scattering Corrections of Experimental Fast- Neutron Polarization Data by Means of Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, O; Gustafsson, B

    1967-05-15

    After an introductory discussion of various methods for the correction of experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of an arbitrary scattering μ only one triple parameter, the so-called spin rotation parameter β^(μ), is required. Based upon these formulas, a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the aforementioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for reduction of the standard deviations of the results of the simulated experiment. Typical performance data for MULTPOL have been obtained by applying the program to the correction of experimental polarization data observed in n + ¹²C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.
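
The correlated-sampling device mentioned above, reusing the same random numbers across simulated variants so that their difference has a much smaller variance, can be illustrated on a toy integral difference; this sketch assumes nothing about MULTPOL itself:

```python
import random
import statistics

def estimate_difference(n, seed, correlated=True):
    """Monte Carlo estimate of the difference between the integrals of
    x**2 and x**2 * (1 - 0.1*x) over [0, 1] (true value 0.025).
    With correlated sampling both integrands are evaluated at the same
    random points, so most of the statistical noise cancels."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed + 1)  # only drawn from in the uncorrelated case
    diffs = []
    for _ in range(n):
        xa = rng_a.random()
        xb = xa if correlated else rng_b.random()
        diffs.append(xa ** 2 - xb ** 2 * (1 - 0.1 * xb))
    return statistics.mean(diffs), statistics.stdev(diffs)

mean_c, sd_c = estimate_difference(20000, seed=1, correlated=True)
mean_u, sd_u = estimate_difference(20000, seed=1, correlated=False)
print(sd_c < sd_u)  # → True: the correlated estimate has a far smaller spread
```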

  11. Adsorption of cellulases onto sugar beet shreds and modeling of the experimental data

    Directory of Open Access Journals (Sweden)

    Ivetić Darjana Ž.

    2014-01-01

    This study investigated the adsorption of cellulases onto sugar beet shreds. The experiments were carried out using untreated shreds, as well as dilute-acid and steam-pretreated shreds (both dried and not dried), at different initial enzyme loads. Both dilute-acid and steam pretreatment were beneficial with respect to cellulase adsorption, providing 8 and 9 times higher amounts of adsorbed protein, respectively, in comparison with the untreated substrate. Although the higher solids load enabled by drying the pretreated substrates could benefit process productivity, drying at the same time decreases the adsorption of enzymes. The experimental data were fitted to five adsorption models; the Langmuir model, having the lowest residual sum of squares, was used to determine the adsorption parameters, which in turn were used to calculate the strength of cellulase binding to the substrates. [Project of the Ministry of Science of the Republic of Serbia, No. TR 31002]
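
A minimal sketch of fitting the Langmuir model named above, using its linearised form and ordinary least squares; the isotherm numbers below are synthetic, not the paper's measurements:

```python
def fit_langmuir(conc, adsorbed):
    """Fit q = q_max*K*C/(1 + K*C) via the linearised form
    1/q = 1/q_max + (1/(q_max*K)) * (1/C), using ordinary least squares."""
    xs = [1.0 / c for c in conc]
    ys = [1.0 / q for q in adsorbed]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    q_max = 1.0 / intercept
    K = intercept / slope
    return q_max, K

# Synthetic isotherm generated from q_max = 2.0, K = 0.5 (arbitrary units)
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
q = [2.0 * 0.5 * c / (1 + 0.5 * c) for c in conc]
q_max, K = fit_langmuir(conc, q)
print(round(q_max, 3), round(K, 3))  # → 2.0 0.5
```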

  12. The Experimental Role of Accounting in Shaping Project

    DEFF Research Database (Denmark)

    Christensen, Mark; Skærbæk, Peter; Tryggestad, Kjell

    2016-01-01

    … (2009) and the literature on accounting as programs (Miller, 1991; Miller and Power, 2013) by showing how accounting plays an experimental role in shaping the dynamics between a New Public Management programme of outsourcing and its projects. The study illuminates decision-making about programs and strategic options as a trial of strength involving accounting in an experimental role, in our case over how to source facilities management in the Danish Defence Forces. Our contribution is to show how accounting plays an experimental role in generating new and unexpected program options by means of economic experiments with (out-)sourcing options using various accounting data, such as location-specific cost data in vivo (real-life data, like those from the accounting system) for individual barracks, and aggregated cost data and cost projections in vitro (in a controlled experimental project setting).

  13. Summary Report of Consultants' Meeting on Accuracy of Experimental and Theoretical Nuclear Cross-Section Data for Ion Beam Analysis and Benchmarking

    International Nuclear Information System (INIS)

    Abriola, Daniel; Dimitriou, Paraskevi; Gurbich, Alexander F.

    2013-11-01

    A summary is given of a Consultants' Meeting assembled to assess the accuracy of experimental and theoretical nuclear cross-section data for Ion Beam Analysis and the role of benchmarking experiments. The participants discussed the different approaches to assigning uncertainties to evaluated data, and presented results of benchmark experiments performed in their laboratories. They concluded that priority should be given to the validation of cross-section data by benchmark experiments, and recommended that an experts meeting be held to prepare the guidelines, methodology and work program of a future coordinated project on benchmarking.

  14. Neutrino--proton interactions at Fermilab energies: Experimental arrangement, analysis procedures, and qualitative features of the data

    International Nuclear Information System (INIS)

    Chapman, J.W.; Coffin, C.T.; Diamond, R.N.; French, H.; Louis, W.; Roe, B.P.; Seidl, A.A.; Vander Velde, J.C.; Berge, J.P.; Bogert, D.; DiBianca, F.A.; Dunaitsev, A.; Efremenko, V.; Ermolov, P.; Fowler, W.; Hanft, R.; Harigel, G.; Huson, F.R.; Kolganov, V.; Mukhin, A.; Nezrick, F.A.; Rjabov, Y.; Scott, W.G.; Smart, W.; Truxton, R.

    1976-01-01

    The Fermilab 15-ft bubble chamber filled with hydrogen was exposed to a broad-momentum-band horn-focused neutrino beam produced by 300-GeV interacting protons. The selection procedure used to choose a charged-current neutrino event sample is discussed. Fewer than three percent of the events are due to neutral-hadron interactions. We present and experimentally test a method that can be used to identify the muon, estimate the incident neutrino energy, and eliminate most neutral-current interactions from the charged-current sample. Above 10 GeV the method produces an approximately 86% pure sample of charged-current events, with an error in energy estimation of the order of 8% over a broad region of the data. In addition, we establish experimentally several important properties of high-energy charged-current neutrino interactions. The hadrons are produced in a jet, the individual particles having sharply limited momenta perpendicular to the hadronic axis. The jet structure is maintained with constant properties to very high values of Q² and hadronic mass. The fraction of energy going into invisible particles is moderate, consistent with that expected. The average number of neutral pions rises linearly with the average number of charged particles.

  15. Problems of the experimental implementation of MTJ

    International Nuclear Information System (INIS)

    Mazaletskiy, L A; Rudy, A S; Trushin, O S; Naumov, V V; Mironenko, A A; Vasilev, S V

    2015-01-01

    The results of experimental studies of MRAM technology based on standard magnetic tunnel junctions (MTJs) are presented. The basic steps of the experimental fabrication of an MRAM cell are considered. Experimental MTJ samples with varying lateral sizes were fabricated. The current-voltage characteristics of the tunnel barriers are investigated, and the main parameters of the tunnel barriers are estimated from a comparison of the experimental data with theory. (paper)

  16. Geochemical Data for Upper Mineral Creek, Colorado, Under Existing Ambient Conditions and During an Experimental pH Modification, August 2005

    Science.gov (United States)

    Runkel, Robert L.; Kimball, Briant A.; Steiger, Judy I.; Walton-Day, Katherine

    2009-01-01

    Mineral Creek, an acid mine drainage stream in south-western Colorado, was the subject of a water-quality study that employed a paired synoptic approach. Under the paired synoptic approach, two synoptic sampling campaigns were conducted on the same study reach. The initial synoptic campaign, conducted August 22, 2005, documented stream-water quality under existing ambient conditions. A second synoptic campaign, conducted August 24, 2005, documented stream-water quality during a pH-modification experiment that elevated the pH of Mineral Creek. The experimental pH modification was designed to determine the potential reductions in dissolved constituent concentrations that would result from the implementation of an active treatment system for acid mine drainage. During both synoptic sampling campaigns, a solution containing lithium bromide was injected continuously to allow for the calculation of streamflow using the tracer-dilution method. Synoptic water-quality samples were collected from 30 stream sites and 11 inflow locations along the 2-kilometer study reach. Data from the study provide spatial profiles of pH, concentration, and streamflow under both existing and experimentally-altered conditions. This report presents the data obtained August 21-24, 2005, as well as the methods used for sample collection and data analysis.
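
The tracer-dilution method used here to calculate streamflow follows from a steady-state mass balance on the injected tracer. A minimal sketch (all numbers hypothetical, not the August 2005 field values):

```python
def streamflow_from_tracer(q_inj, c_inj, c_stream, c_background=0.0):
    """Tracer-dilution streamflow: at steady state the tracer mass flux
    entering (background + injectate) equals the flux downstream, so
    Q = q_inj * (c_inj - c_stream) / (c_stream - c_background)."""
    return q_inj * (c_inj - c_stream) / (c_stream - c_background)

# Hypothetical values: 50 mL/min of LiBr injectate at 80 g/L bromide,
# downstream plateau of 0.5 mg/L over a 0.1 mg/L background.
q = streamflow_from_tracer(q_inj=50e-3 / 60.0,   # injection rate, L/s
                           c_inj=80_000.0,       # injectate concentration, mg/L
                           c_stream=0.5,
                           c_background=0.1)
print(round(q, 1))  # streamflow in L/s
```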

  17. Using the Øresund experimental data to evaluate the ARAC emergency response models

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Gryning, S.E.

    1988-07-01

    A series of meteorological and tracer experiments was conducted during May and June 1984 over the 20-km-wide Øresund strait between Denmark and Sweden for the purpose of studying atmospheric dispersion processes over cold water and warm land surfaces and providing the data needed to evaluate mesoscale models in a coastal environment. In concert with these objectives, the data from these experiments have been used as part of a continuing effort to evaluate the capability of the three-dimensional MATHEW/ADPIC (M/A) atmospheric dispersion models to simulate pollutant transport and diffusion characteristics of the atmosphere during a wide variety of meteorological conditions. Since previous studies have focused primarily on M/A model evaluations over rolling and complex terrain at inland sites, the Øresund experiments provide a unique opportunity to evaluate the models in a coastal environment. The M/A models are used by the Atmospheric Release Advisory Capability (ARAC), developed by the Lawrence Livermore National Laboratory, for performing real-time assessments of the environmental consequences of potential or actual releases of radioactivity into the atmosphere. These assessments include estimation of radiation doses to nearby population centers and of the extent of surface contamination. Model evaluations using field experimental data, such as those generated by the Øresund experiments, serve as a basis for providing emergency response managers with estimates of the uncertainties associated with accident consequence assessments. This report provides a brief description of the Øresund experiments, the current understanding of the meteorological processes governing pollutant dispersion over the Øresund strait, and the results of the M/A model simulations of these experiments. 11 refs., 7 figs., 1 tab

  18. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…
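
Significant figures, one of the topics of Part I, can be illustrated with a small helper that rounds a measurement to n significant figures (a stdlib-only sketch, not part of the programmed text):

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Number of decimal places needed so that n significant digits survive
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

print(round_sig(0.012345, 3))  # → 0.0123
print(round_sig(98765.0, 2))   # → 99000.0
```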

  19. Experimental Mathematics and Computational Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  20. Users manual for SPLPACK-1: a program package for plotting and editing of experimental and analytical data of various transient systems

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Kosaka, Atsuo; Abe, Kiyoharu; Araya, Fumimasa; Kanazawa, Masayuki; Tanabe, Shuichi; Maniwa, Masaki.

    1983-11-01

    In the field of nuclear safety research, a number of computer codes are being developed for predicting the transient behavior of various nuclear facilities, and they are verified by comparing calculational results with experimental data. In the development and verification of these codes, data plotting is one of the indispensable but labor-consuming processes. SPLPACK-1, a package of computer programs written mostly in FORTRAN-IV, has been developed for data editing and plotting. The SPLPACK-1 package consists of two parts, SPLEDIT and SPLPLOT-1, and plotting is performed in two steps: (1) data are stored by SPLEDIT in a file in a standardized format (the SPL format), and (2) the data in the file are plotted by SPLPLOT-1 or other optional programs. The standardization of the data format makes it possible to plot outputs from various sources with a single plotter program. This not only reduces the cost of plotter program development but also makes it easy to compare many analytical and experimental data sets in one figure. SPLPLOT-1 draws two-dimensional graphs of the time-dependent history of variables or graphs of the relationship between two variables. For users' convenience, it provides functions such as auto-scaling, automatic unit conversion, and data processing by user-supplied subroutines. Several additional programs are available for other types of plotting, such as bird's-eye-view graphs. SPLPACK-1 has been used effectively at JAERI in the development and use of safety codes (THYDE-B1, THYDE-P1, RELAP4, RELAP5, MARCH, CORRAL, etc.) and experiments (ROSA-III and LOFT). (author)
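
The two-step scheme, store everything in one standardized format and then plot from that format alone, can be sketched as follows; CSV stands in for the SPL format, whose actual layout is not given here:

```python
import csv
import io

# Step 1 (the SPLEDIT role): write time histories from any source into one
# standardized tabular layout: variable name, time, value.
def store_standard(records, fh):
    w = csv.writer(fh)
    w.writerow(["variable", "time", "value"])
    for name, series in records.items():
        for t, v in series:
            w.writerow([name, t, v])

# Step 2 (the SPLPLOT-1 role): any consumer reads the one format back,
# regardless of which code or experiment produced the data.
def load_standard(fh):
    out = {}
    for row in csv.DictReader(fh):
        out.setdefault(row["variable"], []).append(
            (float(row["time"]), float(row["value"])))
    return out

buf = io.StringIO()
store_standard({"pressure": [(0.0, 15.5), (1.0, 14.9)]}, buf)
buf.seek(0)
data = load_standard(buf)
print(data["pressure"][1])  # → (1.0, 14.9)
```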

  1. Experimental determination of critical data of multi-component mixtures containing potential gasoline additives 2-butanol by a flow-type apparatus

    International Nuclear Information System (INIS)

    He, Maogang; Xin, Nan; Wang, Chengjie; Liu, Yang; Zhang, Ying; Liu, Xiangyang

    2016-01-01

    Graphical abstract: Experimental critical pressures of the 2-butanol + hexane + heptane system. - Highlights: • Critical properties of six binary systems and two ternary systems were measured. • The six binary systems containing 2-butanol show non-ideal behavior in their Tc–x1 curves. • The non-ideal behavior of mixtures with 2-butanol is associated with azeotropy. • Experimental data for the binary systems were fitted well with the Redlich–Kister equation. • Critical surfaces of the ternary systems were plotted using Cibulka’s expressions. - Abstract: In this work, we used a flow method to measure the critical properties of six binary mixtures (2-butanol + cyclohexane, 2-butanol + hexane, 2-butanol + heptane, 2-butanol + octane, 2-butanol + nonane and 2-butanol + decane) and two ternary mixtures (2-butanol + hexane + heptane and 2-butanol + octane + decane). The critical properties were determined by observing the disappearance and reappearance of the gas–liquid phase meniscus in a quartz glass tube. The standard uncertainties of the temperatures and pressures for both binary and ternary mixtures were estimated to be less than 0.2 K and 5.2 kPa, respectively. These critical data provide the boundaries of the two-phase regions of the related mixture systems. The six binary systems show non-ideal behavior in the loci of their critical temperatures. We used the Redlich–Kister equations to correlate the critical temperatures and pressures of these systems and list the binary interaction parameters. The maximum average absolute deviation (AAD) between the experimental data and the results calculated from the Redlich–Kister equations is 0.038% for critical temperatures and 0.244% for critical pressures. Moreover, the two ternary systems are newly reported and were correlated by Cibulka’s and Singh’s expressions. The maximum AADs of the critical temperatures and critical pressures are 0.103% and 0.433%, respectively.
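
A minimal sketch of the Redlich-Kister expansion used to correlate the binary critical temperatures; the pure-component values and A_k coefficients below are hypothetical, not the paper's fitted parameters:

```python
def redlich_kister_tc(x1, tc1, tc2, coeffs):
    """Critical temperature of a binary mixture from a Redlich-Kister expansion:
    T_c = x1*T_c1 + x2*T_c2 + x1*x2 * sum(A_k * (x1 - x2)**k),
    where the A_k (coeffs) are fitted binary interaction parameters."""
    x2 = 1.0 - x1
    excess = x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))
    return x1 * tc1 + x2 * tc2 + excess

# Hypothetical pure-component critical temperatures (K) and two A_k terms
tc = redlich_kister_tc(0.5, tc1=536.0, tc2=507.6, coeffs=[-40.0, 10.0])
print(tc)  # equimolar mixture: 521.8 K ideal part minus a 10 K excess term
```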

  2. Experimental determination of spin-dependent electron density by joint refinement of X-ray and polarized neutron diffraction data.

    Science.gov (United States)

    Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed

    2012-11-01

    New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well-known Hansen–Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H₂O)₃·2H₂O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.
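
The idea of a joint refinement, one set of parameters minimising a combined chi-square over two data sets, reduces in the simplest one-parameter case to an inverse-variance weighted mean. A toy sketch of that principle (not the MOLLYNX multipolar model):

```python
def joint_refine(obs_a, sigma_a, obs_b, sigma_b):
    """Refine a single shared parameter p against two data sets at once.
    Minimising chi2 = sum(((a_i - p)/sigma_a)**2) + sum(((b_i - p)/sigma_b)**2)
    gives the inverse-variance weighted mean of all observations."""
    num = sum(a / sigma_a ** 2 for a in obs_a) + sum(b / sigma_b ** 2 for b in obs_b)
    den = len(obs_a) / sigma_a ** 2 + len(obs_b) / sigma_b ** 2
    return num / den

# Hypothetical "XRD-like" and "PND-like" observations of one shared parameter:
# the more precise data set pulls the joint estimate toward its values.
p = joint_refine([1.02, 0.98], 0.05, [0.90], 0.10)
print(round(p, 4))  # → 0.9889
```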

  3. Experimental data on compressive strength and durability of sulfur concrete modified by styrene and bitumen.

    Science.gov (United States)

    Dehestani, M; Teimortashlu, E; Molaei, M; Ghomian, M; Firoozi, S; Aghili, S

    2017-08-01

    In this data article, experimental data on the compressive strength and on the durability against acidic water and ignition of sulfur concrete modified by styrene and bitumen are presented. The sulfur-cement percentage and the aggregate gradation follow ACI 548.2R-93 and ASTM 3515, respectively. For the styrene-modified sulfur concrete, different percentages of styrene were used; for the bitumen-modified sulfur concrete, different percentages of bitumen and of the emulsifying agent (Triton X-100) were used. From each batch, three 10×10×10 cm cubic samples were cast. One sample was tested for compressive strength on the second day after casting, and one on the twenty-eighth day. The two tested samples were then held under the high-pressure flame of burning liquid gas for thirty seconds and their ignition resistance was observed. The third sample was immersed in acidic water for twenty-eight days and then dried at ambient temperature; after drying, its compressive strength was evaluated.

  4. Intermediate Radical Termination Theory in Elucidation of RAFT Kinetics and Comparison to Experimental Data

    Directory of Open Access Journals (Sweden)

    M. Baqeri-Jagharq

    2008-12-01

    Full Text Available. In the current work a comprehensive mechanism based on intermediate radical termination theory is assumed for the RAFT polymerization of styrene with cumyl dithiobenzoate as the RAFT agent. The rate constants for the addition (ka) and fragmentation (kf) reactions are set to 6×10^6 and 5×10^4 respectively, which leads to an equilibrium constant of K = ka/kf = 1.2×10^2. The method of moments was used to model this mechanism, and the results were compared with experimental data to verify the model. The effects of changing the RAFT agent concentration on the conversion, molecular weight and polydispersity index of the final product were investigated through the modeling. According to the results, the likelihood of living polymerization increases with increasing RAFT agent concentration, which leads to linearity of the conversion and molecular weight curves and therefore lowers the polydispersity index and narrows the molecular weight distribution.
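
    The quoted equilibrium constant follows directly from the two rate constants; a quick arithmetic check (rate-constant values as given in the abstract, units as quoted there):

    ```python
    # Addition and fragmentation rate constants for the RAFT pre-equilibrium
    # (values quoted in the abstract).
    k_a = 6e6   # addition rate constant
    k_f = 5e4   # fragmentation rate constant

    # Equilibrium constant of the addition-fragmentation step
    K = k_a / k_f
    print(K)  # 120.0, i.e. 1.2 x 10^2
    ```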

  5. Radiation effects modeling and experimental data on I2L devices

    International Nuclear Information System (INIS)

    Long, D.M.; Repper, C.J.; Ragonese, L.J.; Yang, N.T.

    1976-01-01

    This paper reports on an Integrated Injection Logic (I2L) radiation effects model which includes radiation effects phenomena. Twenty-five individual current components were identified for an I2L logic gate by assuming wholly vertical or wholly horizontal current flow. Equations were developed for each component in terms of basic parameters such as doping profiles, distances, and diffusion lengths, and set up on a computer for specific logic cell configurations. For neutron damage, the model shows excellent agreement with experimental data. Reactor test results on GE I2L samples showed a neutron hardness level in the range of 6×10^12 to 3×10^13 n/cm^2 (1 MeV Eq), and cobalt-60 tests showed a total dose hardness of 6×10^4 to greater than 1×10^6 Rads(Si) (all device types at an injection current of 50 microamps per gate). It was found that significant hardness improvements could be achieved by: (a) diffusion profile variation, (b) utilizing a tight N+ collar around the cell, and (c) locating the collector close to the injector. Flash X-ray tests showed a transient logic upset threshold of 1×10^9 Rads(Si)/sec for a 28 ns pulse, and a survival level greater than 2×10^12 Rads(Si)/sec.

  6. Data triggered data processing at MFTF-B

    International Nuclear Information System (INIS)

    Jackson, R.J.; Balch, T.R.; Preckshot, G.G.

    1985-01-01

    A primary characteristic of most batch systems is that the input data files must exist before jobs are scheduled. On the Mirror Fusion Test Facility (MFTF-B) at Lawrence Livermore National Laboratory we schedule jobs to process experimental data to be collected during a five minute shot cycle. Our data-driven processing system emulates a coarsely granular data flow architecture. Processing jobs are scheduled before the experimental data is collected. Processing jobs "fire", or execute, as input data becomes available. Similar to UNIX "pipes", data produced by upstream processing nodes may be used as inputs by following nodes. Users, working on our networked SUN workstations, specify data processing templates which define processes and their data dependencies. Data specifications indicate the source of data; actual associations with specific data instantiations are made when the jobs are scheduled. We report here on details of diagnostic data processing and our experiences.
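
    The "fire as input data becomes available" behaviour described above can be sketched as a toy data-flow scheduler. This is an illustration only; the node names and processing steps are invented, not taken from the MFTF-B system:

    ```python
    # Minimal data-flow sketch: each node fires once all of its inputs exist,
    # and its output may in turn satisfy downstream nodes (like UNIX pipes).
    def run_dataflow(nodes, available):
        """nodes: {name: (input_names, function)}; available: {name: data}."""
        pending = dict(nodes)
        fired = []
        progress = True
        while pending and progress:
            progress = False
            for name, (inputs, func) in list(pending.items()):
                if all(i in available for i in inputs):
                    available[name] = func(*[available[i] for i in inputs])
                    fired.append(name)
                    del pending[name]
                    progress = True
        return fired

    # Hypothetical shot-cycle pipeline: raw diagnostic data -> calibrated -> summary
    nodes = {
        "calibrate": (["raw"], lambda r: [x * 2 for x in r]),
        "summarize": (["calibrate"], lambda c: sum(c)),
    }
    data = {"raw": [1, 2, 3]}
    order = run_dataflow(nodes, data)
    print(order)  # ['calibrate', 'summarize']
    ```

    Scheduling the jobs before the data exists, as the abstract describes, corresponds to building `nodes` ahead of the shot and calling the scheduler as files arrive.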

  7. Comparison study of photon attenuation characteristics of Lead-Boron Polyethylene by MCNP code, XCOM and experimental data

    Science.gov (United States)

    Zhang, Lei; Jia, Mingchun; Gong, Junjun; Xia, Wenming

    2017-08-01

    The linear attenuation coefficient, mass attenuation coefficient and mean free path of various Lead-Boron Polyethylene (PbBPE) samples, which can be used as photon shielding materials in marine reactors, have been simulated using the Monte Carlo N-Particle (MCNP)-5 code. The MCNP simulation results are in good agreement with the XCOM values and the reported experimental data for Cesium-137 and Cobalt-60 sources. Thus, this method based on MCNP can be used to simulate the photon attenuation characteristics of various types of PbBPE materials.
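
    For reference, the three quantities compared in the study are connected by simple definitions. The sketch below uses illustrative values, not the PbBPE coefficients from the article:

    ```python
    import math

    # Relations used in photon attenuation studies (values are assumed for
    # illustration, not the PbBPE data from the article).
    mu = 0.08    # linear attenuation coefficient, 1/cm (assumed)
    rho = 1.2    # density, g/cm^3 (assumed)

    mass_mu = mu / rho    # mass attenuation coefficient, cm^2/g
    mfp = 1.0 / mu        # mean free path, cm

    # Beer-Lambert attenuation of a narrow beam through thickness x
    def transmitted_fraction(mu, x_cm):
        return math.exp(-mu * x_cm)

    print(round(mfp, 2))                           # 12.5
    print(round(transmitted_fraction(mu, 10), 4))  # 0.4493
    ```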

  8. Experimental work on drift chambers

    International Nuclear Information System (INIS)

    Alcaraz, J.; Duran, I.; Gonzalez, E.; Martinez-Laso, L.; Olmos, P.

    1989-01-01

    An experimental work on drift chambers is described in two chapters. In the first chapter we present a description of the experimental installation used, as well as some details on the data acquisition systems and the characteristics of the three methods used for calibration purposes (cosmic muons, β radiation and a test beam using the SPS at CERN facilities). The second chapter describes the different prototypes studied. The experimental set-up and the analysis are given, and some results are discussed. The magnetic field effect is also studied. (Author)

  9. Experimental plasma research project summaries

    International Nuclear Information System (INIS)

    1982-10-01

    The Experimental Plasma Research Branch has responsibility for developing a broad range of experimental data and new experimental techniques that are required for operating and interpreting present large-scale confinement experiments, and for designing future deuterium-tritium burning facilities. The Branch pursued these objectives by supporting research in DOE laboratories, other Federal laboratories, universities, and private industry. Initiation and renewal of research projects are primarily through submission of unsolicited proposals by these institutions to DOE. Summaries of these projects are given.

  10. The mzTab data exchange format: communicating mass-spectrometry-based proteomics and metabolomics experimental results to a wider audience.

    Science.gov (United States)

    Griss, Johannes; Jones, Andrew R; Sachsenberg, Timo; Walzer, Mathias; Gatto, Laurent; Hartler, Jürgen; Thallinger, Gerhard G; Salek, Reza M; Steinbeck, Christoph; Neuhauser, Nadin; Cox, Jürgen; Neumann, Steffen; Fan, Jun; Reisinger, Florian; Xu, Qing-Wei; Del Toro, Noemi; Pérez-Riverol, Yasset; Ghali, Fawaz; Bandeira, Nuno; Xenarios, Ioannis; Kohlbacher, Oliver; Vizcaíno, Juan Antonio; Hermjakob, Henning

    2014-10-01

    The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results for proteomics and metabolomics experiments in a simple tabular format. Many downstream analysis use cases are only concerned with the final results of an experiment and require an easily accessible format, compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail. These range from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to make MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adopted the format as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
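
    The tabular layout described above can be illustrated with a minimal reader sketch: mzTab lines are tab-separated, and the first column of each line is a type code (e.g. MTD for metadata, PRT for a protein row). The three-line example content below is invented, and only a few line types are shown; consult the mzTab specification for the full format:

    ```python
    import csv
    import io

    # Invented three-line mzTab-like example: one metadata line, a protein
    # header line, and one protein row.
    example = (
        "MTD\tmzTab-version\t1.0.0\n"
        "PRH\taccession\tdescription\n"
        "PRT\tP12345\texample protein\n"
    )

    def read_mztab(text):
        """Group tab-separated rows by their first-column line-type code."""
        sections = {}
        for row in csv.reader(io.StringIO(text), delimiter="\t"):
            if row:
                sections.setdefault(row[0], []).append(row[1:])
        return sections

    sections = read_mztab(example)
    print(sections["PRT"])  # [['P12345', 'example protein']]
    ```

    Because the result is plain rows grouped by type, each section can be handed directly to spreadsheet tools or to an R/pandas data frame, which is the accessibility goal the abstract describes.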

  11. GeneLab Phase 2: Integrated Search Data Federation of Space Biology Experimental Data

    Science.gov (United States)

    Tran, P. B.; Berrios, D. C.; Gurram, M. M.; Hashim, J. C. M.; Raghunandan, S.; Lin, S. Y.; Le, T. Q.; Heher, D. M.; Thai, H. T.; Welch, J. D.; et al.

    2016-01-01

    The GeneLab project is a science initiative to maximize the scientific return of omics data collected from spaceflight and from ground simulations of microgravity and radiation experiments, supported by a data system for a public bioinformatics repository and collaborative analysis tools for these data. The mission of GeneLab is to maximize the utilization of the valuable biological research resources aboard the ISS by collecting genomic, transcriptomic, proteomic and metabolomic (so-called omics) data to enable the exploration of the molecular network responses of terrestrial biology to space environments using a systems biology approach. All GeneLab data are made available to a worldwide network of researchers through its open-access data system. GeneLab is currently being developed by NASA to support Open Science biomedical research in order to enable the human exploration of space and improve life on Earth. Open access to Phase 1 of the GeneLab Data Systems (GLDS) was implemented in April 2015. Download volumes have grown steadily, mirroring the growth in curated space biology research data sets (61 as of June 2016), now exceeding 10 TB/month, with over 10,000 file downloads since the start of Phase 1. For the period April 2015 to May 2016, most frequently downloaded were data from studies of Mus musculus (39%) followed closely by Arabidopsis thaliana (30%), with the remaining downloads roughly equally split across 12 other organisms (each under 10% of total downloads). GLDS Phase 2 is focusing on interoperability, supporting data federation, including integrated search capabilities, of GLDS-housed data sets with external data sources, such as gene expression data from NIH/NCBI's Gene Expression Omnibus (GEO), proteomic data from EBI's PRIDE system, and metagenomic data from Argonne National Laboratory's MG-RAST. GEO and MG-RAST employ specifications for investigation metadata that are different from those used by the GLDS and PRIDE (e.g., ISA-Tab). 
The GLDS Phase 2 system

  12. Geophysical data collection using an interactive personal computer system. Part 1. ; Experimental monitoring of Suwanosejima volcano

    Energy Technology Data Exchange (ETDEWEB)

    Iguchi, M. (Kyoto University, Kyoto (Japan). Disaster Prevention Research Institute)

    1991-10-15

    In this article, a computer-communication system was developed in order to collect geophysical data from remote volcanoes via a public telephone network. The system is composed of a host personal computer at an observatory and several personal computers serving as terminals at remote stations. Each terminal acquires geophysical data, such as seismic, infrasonic, and ground deformation data. These data are stored in the terminals temporarily, and transmitted to the host computer upon command from the host computer. Experimental monitoring was conducted between the Sakurajima Volcanological Observatory and several stations in the Satsunan Islands and southern Kyushu. The seismic and eruptive activities of Suwanosejima volcano were monitored by this system. Consequently, earthquakes and air shocks accompanying the explosive activity were observed. B-type earthquakes occurred prior to the relatively prolonged eruptive activity. Intermittent occurrences of volcanic tremors were also clearly recognized from the change in mean amplitudes of seismic waves. 7 refs., 10 figs., 2 tabs.

  13. Experimental design in chemistry: A tutorial.

    Science.gov (United States)

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and widely applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].
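
    As a minimal illustration of the design-of-experiments approach advocated here, in contrast to one-variable-at-a-time optimization, a full two-level factorial design enumerates every combination of low/high factor settings so that main effects and interactions can both be estimated. The factor names below are hypothetical:

    ```python
    from itertools import product

    # Full two-level factorial design for three factors: every combination
    # of low (-1) and high (+1) settings, 2**3 = 8 runs.
    factors = ["temperature", "pH", "solvent_ratio"]  # hypothetical factors
    design = list(product([-1, +1], repeat=len(factors)))

    for run in design:
        settings = dict(zip(factors, run))
        # ...perform (or simulate) the experiment at these settings...

    print(len(design))  # 8
    ```

    Eight runs here replace the larger number of one-at-a-time trials that would still miss interaction effects, which is the "reduced effort, increased information" point of the tutorial.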

  14. TOP2017 Experimental summary

    CERN Document Server

    Giammanco, Andrea

    2017-01-01

    Thanks to the unprecedentedly fast accumulation of high-energy data at the LHC during the ongoing Run 2, most of the traditional top-quark analyses are experiencing the luxury of having to worry about how to punch through the "Systematics Wall", and think about new ways to maximize the utility of their data. New processes involving top quarks are being studied for the first time, and the good old pair-production processes are being explored in unusual settings, such as collisions involving heavy ions, or "reference data" collected by the LHC at relatively low centre-of-mass energy. The TOP2017 conference featured 37 talks delivered by experimental physicists, including seven in the "Young Scientists Forum" where young colleagues were given the opportunity to elaborate more deeply than usual on their own work. As it is impossible to do justice to all the experimental resu...

  15. Extension of the energy range of the experimental activation cross-sections data of longer-lived products of proton induced nuclear reactions on dysprosium up to 65 MeV.

    Science.gov (United States)

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-04-01

    Activation cross-section data of longer-lived products of proton induced nuclear reactions on dysprosium were extended up to 65 MeV by using stacked-foil irradiation and gamma spectrometry experimental methods. Experimental cross-section data for the formation of the radionuclides (159)Dy, (157)Dy, (155)Dy, (161)Tb, (160)Tb, (156)Tb, (155)Tb, (154m2)Tb, (154m1)Tb, (154g)Tb, (153)Tb, (152)Tb and (151)Tb are reported in the 36-65 MeV energy range, and compared with an old dataset from 1964. The experimental data were also compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS nuclear reaction model code as listed in the latest on-line library TENDL-2013. Copyright © 2015. Published by Elsevier Ltd.

  16. Animal mortality resulting from uniform exposures to photon radiations: Calculated LD50s and a compilation of experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Jones, T.D.; Morris, M.D.; Wells, S.M.; Young, R.W.

    1986-12-01

    Studies conducted during the 1950s and 1960s of radiation-induced mortality to diverse animal species under various exposure protocols were compiled into a mortality data base. Some 24 variables were extracted and recomputed from each of the published studies, which were collected from a variety of available sources, primarily journal articles. Two features of this compilation effort are (1) an attempt to give an estimate of the uniform dose received by the bone marrow in each treatment so that interspecies differences due to body size were minimized and (2) a recomputation of the LD50 where sufficient experimental data are available. Exposure rates varied in magnitude from about 10^-2 to 10^3 R/min. This report describes the data base, the sources of data, and the data-handling techniques; presents a bibliography of studies compiled; and tabulates data from each study. 103 refs., 44 tabs.
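
    Recomputing an LD50 from published dose-mortality data amounts to locating the dose at which mortality crosses 50%. A minimal sketch using linear interpolation in log-dose follows; the dose-mortality pairs are invented, not from the compiled database, and a full reanalysis would use a probit or logistic fit rather than interpolation:

    ```python
    import math

    # Invented dose-mortality pairs (marrow dose in rad, fraction dead).
    doses = [200.0, 400.0, 800.0]
    mortality = [0.10, 0.45, 0.90]

    def ld50(doses, mortality):
        """Interpolate the 50%-mortality dose in log-dose space."""
        points = list(zip(doses, mortality))
        for (d0, m0), (d1, m1) in zip(points, points[1:]):
            if m0 <= 0.5 <= m1:
                t = (0.5 - m0) / (m1 - m0)
                return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
        raise ValueError("50% mortality not bracketed by the data")

    print(round(ld50(doses, mortality), 1))  # 432.0
    ```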

  17. The Ag-Al-Cu system Part I: Reassessment of the constituent binaries on the basis of new experimental data

    International Nuclear Information System (INIS)

    Witusiewicz, V.T.; Hecht, U.; Fries, S.G.; Rex, S.

    2004-01-01

    Aiming to obtain a reliable description of the ternary Ag-Al-Cu system, the thermodynamic evaluations of the constituent binaries Ag-Al, Ag-Cu and Al-Cu are revised by modelling the Gibbs energy of all individual phases using the CALPHAD approach. The model parameters have been evaluated using a computer optimisation technique based on the established descriptions of the systems, taking into account the data on thermodynamic properties and phase equilibria both reported in recent publications and obtained by our own measurements. The phase diagrams and the thermodynamic properties calculated with the evaluated parameters are in good agreement with the corresponding experimental data.

  18. Summary of experimental insertions workshop

    International Nuclear Information System (INIS)

    Sandweiss, J.; Month, M.

    1976-01-01

    The last ISABELLE workshop of the summer 1976 series, which was held at Brookhaven, August 16-20, focused on the design and utilization of the experimental insertions. The goals of the workshop, which were somewhat more general than might be suggested by the title, are: (1) review the ISABELLE proposal from the point of view of experimental use; (2) contribute useful information on the "open questions" in the ISABELLE design; (3) develop data for experimental equipment and operating cost estimates; and (4) project a first approximation to ISABELLE operating modes.

  19. Theory-laden experimentation

    DEFF Research Database (Denmark)

    Schindler, Samuel

    2013-01-01

    The thesis of theory-ladenness of observations, in its various guises, is widely considered as either ill-conceived or harmless to the rationality of science. The latter view rests partly on the work of the proponents of New Experimentalism who have argued, among other things, that experimental practices are efficient in guarding against any epistemological threat posed by theory-ladenness. In this paper I show that one can generate a thesis of theory-ladenness for experimental practices from an influential New Experimentalist account. The notion I introduce for this purpose is the concept ... light bending in 1919 by Eddington and others) to show that TDRs are used by scientists to resolve data conflicts. I argue that the rationality of the practices which employ TDRs can be saved if the independent support of the theories driving TDRs is construed in a particular way.

  20. Evaluated experimental database on critical heat flux in WWER FA models

    International Nuclear Information System (INIS)

    Artamonov, S.; Sergeev, V.; Volkov, S.

    2015-01-01

    The paper presents a description of the evaluated experimental database on critical heat flux in WWER FA models of new designs. This database was developed on the basis of the experimental data obtained in the years 2009-2012. In the course of its development, the database was reviewed in terms of completeness of the information about the experiments and its compliance with the requirements of Rostekhnadzor regulatory documents. The description of the experimental FA model characteristics and experimental conditions was specified. Besides, the experimental data were statistically processed with the aim of rejecting incorrect data, and the sets of experimental data on critical heat flux (CHF) were compared for different FA models. As a result, for the first time, an evaluated database on CHF in FA models of new designs was developed and complemented with analysis functions; its main purpose is to be used in the process of development, verification and upgrading of calculation techniques. The developed database incorporates the data of 4183 experimental conditions obtained in 53 WWER FA models of various designs. Keywords: WWER reactor, fuel assembly, CHF, evaluated experimental data, database, statistical analysis. (author)