WorldWideScience

Sample records for experimental data

  1. PROTEUS Experimental data

    International Nuclear Information System (INIS)

    Perret, G.

    2013-01-01

This presentation gives an overview of the PROTEUS experimental programme performed at PSI over more than 30 years. In the 1970s the Gas-Cooled Fast Reactor (GCFR) experiments were essentially designed to improve the nuclear data in the fast energy range. The light water reactor experiments performed in the 1980s (HCLWR) and until 2006 (LWR-PROTEUS, Phases I, II and III) made it possible to study various configurations for PWR and BWR. More information is available on the PROTEUS web site at http://proteus.web.psi.ch

  2. Experimental data and EXFOR

    International Nuclear Information System (INIS)

    Plompen, A.

    2012-01-01

Nuclear data needs are first of all determined by the applications in which they are used. In the field of nuclear fission energy, recent developments have led to a renewed emphasis on nuclear safety and security, including the issue of nuclear waste, and to a reduced emphasis on the energy sustainability and economic viability of the various options in nuclear energy. In practice, this requires a shift in attention: more emphasis on data needs related to light-water reactors, both currently operating and under construction; continuing emphasis on data related to the minimization of high-level nuclear waste; and reduced emphasis on innovative options for nuclear energy sustainability such as fast reactors. The increased emphasis on the safety of nuclear systems places high demands on the predictability of their performance and the quality of their safety assessments. Verification and validation schemes for safety assessments and design methods require nuclear data that allow establishing the margins associated with estimates of diverse quantities such as reactivity and reactivity coefficients, shielding, inventory build-up, and radiation dose. Sensitivity and uncertainty analyses for key nuclear system parameters point to strict requirements on the uncertainties of important nuclear data. In particular, these analyses help prioritize nuclear data development by isotope, reaction and energy range; a key asset at a time when resources for research in the nuclear field are under strain, while the demands for reliability and accuracy are higher than ever.

  3. Covariance data evaluation for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin

    1993-01-01

Some methods and codes have been developed and utilized for the covariance evaluation of experimental data, including parameter analysis, physical analysis, spline fitting, etc. These methods and codes can be used in many different cases.

  4. Data archiving in experimental physics

    International Nuclear Information System (INIS)

    Dalesio, L.R.; Watson, W. III; Bickley, M.; Clausen, M.

    1998-01-01

    In experimental physics, data is archived from a wide variety of sources and used for a wide variety of purposes. In each of these environments, trade-offs are made between data storage rate, data availability, and retrieval rate. This paper presents archive alternatives in EPICS, the overall archiver design and details on the data collection and retrieval requirements, performance studies, design choices, design alternatives, and measurements made on the beta version of the archiver

  5. Analysis of DCA experimental data

    International Nuclear Information System (INIS)

    Min, B. J.; Kim, S. Y.; Ryu, S. J.; Seok, H. C.

    2000-01-01

The lattice characteristics of DCA are calculated with the WIMS-ATR code, using experimental data of DCA at JNC, to validate the WIMS-AECL code for the lattice analysis of the CANDU core. Analytical studies of some critical experiments were performed to analyze the effects of fuel composition. Different reactor physics quantities such as the local power peaking factor (LPF), the effective multiplication factor (Keff) and the coolant void reactivity were calculated for two coolant void fractions (0% and 100%). LPFs calculated by the WIMS-ATR code are in close agreement with the experimental results. LPFs calculated by the WIMS-AECL code with the WINFRITH and ENDF/B-V libraries have similar values for both libraries, but the differences between the experimental data and the WIMS-AECL results are larger than those of the WIMS-ATR code. The maximum difference between the values calculated by WIMS-ATR and the experimental values of the LPFs is within 1.3%. The coupled WIMS-ATR and CITATION code systems used in this analysis predict Keff within 1% ΔK and coolant void reactivity within 4% ΔK/K in all cases. The coolant void reactivity of uranium fuel is found to be positive. To validate the WIMS-AECL code further, the core characteristics of DCA will be calculated with the WIMS-AECL and CITATION codes in the future.

  6. TFTR Experimental Data Analysis Collaboration

    International Nuclear Information System (INIS)

    Callen, J.D.

    1993-01-01

    The research performed under the second year of this three-year grant has concentrated on a few key TFTR experimental data analysis issues: MHD mode identification and effects on supershots; identification of new MHD modes; MHD mode theory-experiment comparisons; local electron heat transport inferred from impurity-induced cool pulses; and some other topics. Progress in these areas and activities undertaken in conjunction with this grant are summarized briefly in this report

  7. Covariance matrices of experimental data

    International Nuclear Information System (INIS)

    Perey, F.G.

    1978-01-01

    A complete statement of the uncertainties in data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of 238 U
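The combination procedure for data with a non-diagonal covariance matrix can be sketched as a generalized least-squares (best linear unbiased) average; the two measurements and covariance values below are illustrative, not taken from the paper:

```python
import numpy as np

# Two correlated measurements x of the same quantity, with covariance V.
x = np.array([10.2, 9.8])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])

# Generalized least-squares (BLUE) combination:
#   mu_hat = (1^T V^-1 x) / (1^T V^-1 1),  var(mu_hat) = 1 / (1^T V^-1 1)
ones = np.ones_like(x)
Vinv = np.linalg.inv(V)
var = 1.0 / (ones @ Vinv @ ones)
mu = var * (ones @ Vinv @ x)
print(mu, np.sqrt(var))
```

Because the off-diagonal term is included, the combined uncertainty is smaller than either individual standard deviation would suggest from a naive weighted average.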

  8. Acquisition and treatment systems for experimental data

    International Nuclear Information System (INIS)

    Bouard, E.

    1988-01-01

The data acquisition and processing system for experimental data was designed to meet the experimental requirements of a research reactor such as OSIRIS. Its objective is to acquire and process all the information coming from one or more experiments, to archive useful data for later processing, and to give the experimenter a set of tools for better monitoring of the experiment. Its main characteristics are given in this text.

  9. The computer library of experimental neutron data

    International Nuclear Information System (INIS)

    Bychkov, V.M.; Manokhin, V.N.; Surgutanov, V.V.

    1976-05-01

    The paper describes the computer library of experimental neutron data at the Obninsk Nuclear Data Centre. The format of the library (EXFOR) and the system of programmes for supplying the library are briefly described. (author)

  10. Data Analysis in Experimental Biomedical Research

    DEFF Research Database (Denmark)

    Markovich, Dmitriy

    This thesis covers two non-related topics in experimental biomedical research: data analysis in thrombin generation experiments (collaboration with Novo Nordisk A/S), and analysis of images and physiological signals in the context of neurovascular signalling and blood flow regulation in the brain...... to critically assess and compare obtained results. We reverse engineered the data analysis performed by CAT, a de facto standard assay in the field. This revealed a number of possibilities to improve its methods of data analysis. We found that experimental calibration data is described well with textbook...

  11. Developing Phenomena Models from Experimental Data

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2003-01-01

    A systematic approach for developing phenomena models from experimental data is presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression, and it is shown how these techniques can be used to uncover...... unknown functionality behind various phenomena in first engineering principles models using experimental data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes. To illustrate the performance...... of the approach, a case study is presented, which shows how an appropriate phenomena model for the growth rate of biomass in a fed-batch bioreactor can be inferred from data....

  13. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. Its disadvantage appears when a priori information is lacking: very often, for example, sums of peaks unresolved by a detector are replaced with one peak during data processing, introducing uncontrolled errors into the determination of the physical quantities. In practice the problem can be handled only by experienced personnel whose skills far exceed the challenge. We propose a set of non-parametric techniques which allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional which includes both the experimental data and the a priori information; the minimum of this functional is reached on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ² distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
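The elementary non-parametric smoothers named in the abstract (moving averages, exponential smoothing) can be sketched as follows; the noisy sine data is an illustrative stand-in for an experimental sample:

```python
import numpy as np

def moving_average(y, window=5):
    """Centered moving average: each point is replaced by the mean
    of its neighbors within the window (zero-padded at the edges)."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

def exponential_smoothing(y, alpha=0.3):
    """Recursive smoother: s[t] = alpha*y[t] + (1-alpha)*s[t-1]."""
    s = np.empty_like(y, dtype=float)
    s[0] = y[0]
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + rng.normal(0, 0.2, x.size)
sm = moving_average(noisy, window=9)
es = exponential_smoothing(noisy, alpha=0.3)
# Smoothing should reduce the residual scatter around the true curve.
print(np.std(noisy - np.sin(x)), np.std(sm - np.sin(x)))
```

Note the trade-off the abstract alludes to: these smoothers use no a priori information about the curve, so they cannot preserve the area under narrow peaks the way constrained least-squares schemes can.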

  14. Construction of covariance matrix for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhang Jianhua

    1992-01-01

For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the ²³Na(n,2n) cross section is constructed. A reasonable result is obtained
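The linear propagation used to build a covariance matrix for an indirectly measured quantity can be sketched as C_y = J C_x Jᵀ, with J the Jacobian of the measurement function; the ratio model and numbers below are illustrative, not from the paper:

```python
import numpy as np

# Indirect measurement: y = f(x1, x2) = x1 / x2 (e.g. a ratio of counts).
# Its covariance follows by linear propagation: C_y = J C_x J^T.
x = np.array([100.0, 4.0])
Cx = np.array([[100.0, 2.0],
               [2.0, 0.04]])
J = np.array([[1.0 / x[1], -x[0] / x[1] ** 2]])  # [df/dx1, df/dx2]
Cy = J @ Cx @ J.T
print(float(Cy[0, 0]))  # variance of y = x1/x2 -> 1.5625
```

With several derived quantities, J gains one row per output and the same product yields the full (generally non-diagonal) covariance matrix.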

  15. Experimental data base for containment thermalhydraulic analysis

    International Nuclear Information System (INIS)

    Cheng, X.; Bazin, P.; Cornet, P.; Hittner, D.; Jackson, J.D.; Lopez Jimenez, J.; Naviglio, A.; Oriolo, F.; Petzold, H.

    2001-01-01

This paper describes the joint research project DABASCO, which is supported by the European Community under a cost-shared contract and in which nine European institutions participate. The main objective of the project is to provide a generic experimental data base for the development of physical models and correlations for containment thermalhydraulic analysis. The project consists of seven separate-effects experimental programs which deal with new innovative conceptual features, e.g. passive decay heat removal and spray systems. The results of the various stages of the test programs will be assessed by industrial partners in relation to their applicability to reactor conditions

  16. Modeling of Experimental Adsorption Isotherm Data

    Directory of Open Access Journals (Sweden)

    Xunjun Chen

    2015-01-01

Full Text Available Adsorption is considered to be one of the most effective technologies widely used in global environmental protection. Modeling of experimental adsorption isotherm data is an essential way of predicting the mechanisms of adsorption, which will lead to improvements in the area of adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data obtained by batch tests in the lab. The linearized and non-linearized isotherm models were compared and discussed. In order to determine the best-fit isotherm model, the correlation coefficient (r²) and standard errors (S.E.) for each parameter were used to evaluate the data. The modeling results showed that the non-linear Langmuir model fit the data better than the others, with relatively higher r² values and smaller S.E. The linear Langmuir model had the highest value of r²; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
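A non-linear Langmuir fit of the kind the paper compares can be sketched with `scipy.optimize.curve_fit`; the equilibrium data below are hypothetical, not one of the four data sets from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative equilibrium data (Ce in mg/L, qe in mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.1, 15.3, 22.8, 29.5, 34.2, 36.9])

popt, pcov = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.1])
qmax, KL = popt
residuals = qe - langmuir(Ce, *popt)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(qmax, KL, r2)
```

The linearized variant instead regresses Ce/qe on Ce; as the paper notes, that transformation reweights the errors and can bias the estimated maximum capacity qmax.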

  17. Adjustment model of thermoluminescence experimental data

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.; Moreno B, A.

    2002-01-01

This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i · exp(-(T - c_i)²/b_i), where a_i, b_i and c_i are the parameters of the i-th peak, each adjusted to a Gaussian curve. The curve adjustment can be performed manually or analytically using the macro function and the Solver add-in (solver.xla) installed beforehand in the computational system. In this work it is shown: 1. the experimental data of a LiF curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated as a macro; 2. a LiF curve of four peaks obtained from Harshaw data simulated in Microsoft Excel, discussed in previous works, used as a reference without macros. (Author)
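Evaluating such a multi-peak Gaussian glow-curve model can be sketched outside Excel as well (Python here, purely for illustration); the peak parameters are hypothetical, not the LiF values from the paper:

```python
import numpy as np

def glow_curve(T, params):
    """Sum of Gaussian peaks: I(T) = sum_i a_i * exp(-(T - c_i)**2 / b_i).
    params is a list of (a_i, b_i, c_i) tuples; values are illustrative."""
    I = np.zeros_like(T, dtype=float)
    for a, b, c in params:
        I += a * np.exp(-(T - c) ** 2 / b)
    return I

T = np.linspace(300.0, 550.0, 500)                   # temperature grid (K)
peaks = [(1.0, 400.0, 380.0), (1.8, 600.0, 460.0)]   # two hypothetical peaks
I = glow_curve(T, peaks)
print(T[np.argmax(I)])  # temperature of the dominant peak
```

A fit then varies the (a_i, b_i, c_i) triples to minimize the squared difference between this model and the measured curve, which is what the Solver add-in does in the spreadsheet version.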

  18. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data of Large Helical Device (LHD) experiments. Many data acquisition systems are currently in operation, and they produce files of various formats. It has therefore been difficult to analyze different types of acquired data at the same time, because the data of each system must be read in a particular manner. To facilitate the use of these data by researchers, the authors have developed a new server system which provides a unified data format and a single data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was remote use of the server; the current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible enough to satisfy future demands. Therefore, the authors decided to develop the new server using a three-tier structure

  19. Collective states in 230Th: experimental data

    Directory of Open Access Journals (Sweden)

    A. I. Levon

    2009-12-01

Full Text Available The excitation spectra in the deformed nucleus 230Th were studied by means of the (p,t) reaction, using the Q3D spectrograph facility at the Munich Tandem accelerator. The angular distributions of tritons were measured for about 200 excitations seen in the triton spectra up to 3.3 MeV. Firm 0+ assignments are made for 16 excited states by comparing the experimental angular distributions with those calculated with the CHUCK3 code, and relatively firm assignments for 4 states. Assignments up to spin 6+ are made for the other states. An analysis of the obtained data will be presented in a forthcoming paper.

  20. Experimental data and dose-response models

    International Nuclear Information System (INIS)

    Ullrich, R.L.

    1985-01-01

Dose-response relationships for radiation carcinogenesis have been of interest to biologists, modelers, and statisticians for many years. Despite this interest there are few instances in which there are sufficient experimental data to allow the fitting of various dose-response models. In those experimental systems for which data are available, the dose-response curves for tumor induction cannot be described by a single model. Dose-response models observed following acute exposures to gamma rays include threshold, quadratic, and linear models. Data on sex, age, and environmental influences suggest a strong role of host factors in the dose response. With decreasing dose rate, the effectiveness of gamma-ray irradiation tends to decrease in essentially every instance. In those cases in which the high-dose-rate response could be described by a quadratic model, the effect of dose rate is consistent with predictions based on radiation effects on the induction of initial events. Whether the observed dose-rate effect results from effects on the induction of initial events or from effects on subsequent steps in the carcinogenic process is unknown. Information on the dose response for tumor induction by high-LET (linear energy transfer) radiations such as neutrons is even more limited. The observed dose and dose-rate data for tumor induction following neutron exposure are complex and do not appear to be consistent with predictions based on models for the induction of initial events

  1. A memory module for experimental data handling

    Science.gov (United States)

    De Blois, J.

    1985-02-01

A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.

  2. A memory module for experimental data handling

    International Nuclear Information System (INIS)

    Blois, J. de

    1985-01-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need of direct memory access in computer controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment-, a list- and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory by an internal connector. The memory unit will be used in Moessbauer experiments and in thermal neutron scattering experiments. (orig.)

  3. Status of experimental data for neutron induced reactions

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Mamoru [Tohoku Univ., Sendai (Japan)

    1998-11-01

    A short review is presented on the status of experimental data for neutron induced reactions above 20 MeV based on the EXFOR data base and journals. Experimental data which were obtained in a systematic manner and/or by plural authors are surveyed and tabulated for the nuclear data evaluation and the benchmark test of the evaluated data. (author). 61 refs.

  4. Development of the NSRR experimental data bank system, (1)

    International Nuclear Information System (INIS)

    Ishijima, Kiyomi; Uemura, Mutsumi; Ohnishi, Nobuaki

    1981-01-01

To promote the collection, arrangement, and utilization of the NSRR experimental data, the NSRR experimental data bank system has been developed. The fundamental parts of the system, including the processing program DTBNK, have been completed. Data from the experiments performed so far have been collected and stored. The outline of the processing program, the method of utilization, and the present status of the data bank system are discussed. (author)

  5. Neridronate: From Experimental Data to Clinical Use

    Directory of Open Access Journals (Sweden)

    Addolorata Corrado

    2017-09-01

    Full Text Available Neridronate is an amino-bisphosphonate that has been officially approved as a treatment for osteogenesis imperfecta, Paget’s disease of bone and type I complex regional pain syndrome in Italy. Neridronate is administered either intravenously or intramuscularly; thus, it represents a valid option for both cases with contraindications to the use of oral bisphosphonates and cases with contraindications or an inability to receive an intravenous administration of these drugs. Furthermore, although the official authorized use of neridronate is limited to only 3 bone diseases, many experimental and clinical studies support the rationale for its use and provide evidence of its effectiveness in other pathologic bone conditions that are characterized by altered bone remodelling.

  6. Comparison of TRAC calculations with experimental data

    International Nuclear Information System (INIS)

    Jackson, J.F.; Vigil, J.C.

    1980-01-01

    TRAC is an advanced best-estimate computer code for analyzing postulated accidents in light water reactors. This paper gives a brief description of the code followed by comparisons of TRAC calculations with data from a variety of separate-effects, system-effects, and integral experiments. Based on these comparisons, the capabilities and limitations of the early versions of TRAC are evaluated

  7. Fitting experimental data by using weighted Monte Carlo events

    International Nuclear Information System (INIS)

    Stojnev, S.

    2003-01-01

A method for fitting experimental data using a modified Monte Carlo (MC) sample is developed. It is intended for cases where a single finite MC sample must be fit to experimental data in order to extract parameters of an underlying theory. The extraction of the parameters, the error estimation and the goodness-of-fit testing are based on the binned maximum likelihood method
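The binned maximum likelihood fit of a weighted-MC template can be sketched as follows; the exponential "MC" sample, the uniform event weights, and the one-parameter normalization scan are illustrative assumptions, not the paper's full machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

# "MC" sample with per-event weights, binned into a template histogram.
mc = rng.exponential(2.0, 5000)
w = np.full(mc.size, 0.5)          # illustrative event weights
edges = np.linspace(0.0, 10.0, 21)
template, _ = np.histogram(mc, bins=edges, weights=w)

# Pseudo-data drawn from the same shape, 1500 events.
data, _ = np.histogram(rng.exponential(2.0, 1500), bins=edges)

def nll(scale):
    """Binned Poisson negative log-likelihood; expected = scale * template."""
    mu = np.clip(scale * template, 1e-9, None)
    return np.sum(mu - data * np.log(mu))

scales = np.linspace(0.1, 2.0, 1901)
best = scales[np.argmin([nll(s) for s in scales])]
print(best)  # for a pure normalization, the MLE is sum(data)/sum(template)
```

In the full method each theory parameter reshapes the per-event weights rather than just scaling the template, but the Poisson likelihood over bins is the same ingredient.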

  8. Collection of experimental data for fusion neutronics benchmark

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Yamamoto, Junji; Ichihara, Chihiro; Ueki, Kotaro; Ikeda, Yujiro.

    1994-02-01

During the last ten years or more, many benchmark experiments for fusion neutronics have been carried out at two principal D-T neutron sources, FNS at JAERI and OKTAVIAN at Osaka University, and valuable experimental data have been accumulated. Under an activity of the Fusion Reactor Physics Subcommittee of the Reactor Physics Committee, these experimental data are compiled in this report. (author)

  9. Insights in Experimental Data : Interactive Statistics with the ILLMO Program

    NARCIS (Netherlands)

    Martens, J.B.O.S.

    2017-01-01

    Empirical researchers turn to statistics to assist them in drawing conclusions, also called inferences, from their collected data. Often, this data is experimental data, i.e., it consists of (repeated) measurements collected in one or more distinct conditions. The observed data can hence be

  10. Steam as turbine blade coolant: Experimental data generation

    Energy Technology Data Exchange (ETDEWEB)

    Wilmsen, B.; Engeda, A.; Lloyd, J.R. [Michigan State Univ., East Lansing, MI (United States)

    1995-10-01

Steam is a possible option for cooling blades in high-temperature gas turbines. However, practically no experimental data exist to quantify steam as a coolant. This work deals with an attempt to generate such data and with the design of the experimental setup used for the purpose. Initially, in order to guide the direction of the experiments, a preliminary theoretical and empirical prediction of the expected experimental data is performed and presented here. This initial analysis also compares the coolant properties of steam and air.

  11. User's manual of JT-60 experimental data analysis system

    International Nuclear Information System (INIS)

    Hirayama, Takashi; Morishima, Soichi; Yoshioka, Yuji

    2010-02-01

    In the Japan Atomic Energy Agency Naka Fusion Institute, a lot of experiments have been conducted by using the large tokamak device JT-60 aiming to realize fusion power plant. In order to optimize the JT-60 experiment and to investigate complex characteristics of plasma, JT-60 experimental data analysis system was developed and used for collecting, referring and analyzing the JT-60 experimental data. Main components of the system are a data analysis server and a database server for the analyses and accumulation of the experimental data respectively. Other peripheral devices of the system are magnetic disk units, NAS (Network Attached Storage) device, and a backup tape drive. This is a user's manual of the JT-60 experimental data analysis system. (author)

  12. Application of covariance analysis to feed/ration experimental data

    African Journals Online (AJOL)

    Prince Acheampong

ABSTRACT. The use of Analysis of Covariance (ANOCOVA) on feed/ration experimental data for birds was examined. Correlation and regression analyses were used to adjust for the covariate, the initial weight of the experimental birds. The Fisher's F statistic for the straightforward Analysis of Variance (ANOVA) showed ...

  13. Figure output program for JFT-2M experimental data

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Mori, Masahiro; Matsuda, Toshiaki; Takada, Susumu.

    1991-11-01

The software for the figure output of JFT-2M experimental data is reported. Since the configuration of a figure is determined by a few simple input parameters, any output format for each experiment can be configured freely with this software. (author)

  14. Experimental data base for gamma-ray strength functions

    International Nuclear Information System (INIS)

    Kopecky, J.

    1999-01-01

Theoretical and experimental knowledge of γ-ray strength functions is a very important ingredient for the description and calculation of photon production data in all reaction channels. This study focuses on experimental γ-ray strength functions, collected over a period of about 40 years and based on measurements of partial radiative widths

  15. Improving plant bioaccumulation science through consistent reporting of experimental data

    DEFF Research Database (Denmark)

    Fantke, Peter; Arnot, Jon A.; Doucette, William J.

    2016-01-01

    Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new...... experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never......-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent...

  16. Contribution of computer science to the evaluation of experimental data

    International Nuclear Information System (INIS)

    Steuerwald, J.

    1978-11-01

    The GALE data acquisition system and EDDAR data processing system, used at Max-Planck-Institut fuer Plasmaphysik, serve to illustrate some of the various ways in which computer science plays a major role in developing the evaluation of experimental data. (orig.) [de

  17. Detection of outliers in gas centrifuge experimental data

    International Nuclear Information System (INIS)

    Andrade, Monica C.V.; Nascimento, Claudio A.O.

    2005-01-01

    Isotope separation in a gas centrifuge is a very complex process. Development and optimization of a gas centrifuge requires experimentation. These data contain experimental errors, and like other experimental data, there may be some gross errors, also known as outliers. The detection of outliers in gas centrifuge experimental data may be quite complicated because there is not enough repetition for precise statistical determination and the physical equations may be applied only on the control of the mass flows. Moreover, the concentrations are poorly predicted by phenomenological models. This paper presents the application of a three-layer feed-forward neural network to the detection of outliers in a very extensive experiment for the analysis of the separation performance of a gas centrifuge. (author)
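The residual-screening idea behind such outlier detection can be sketched as follows; note that a simple polynomial regressor stands in here for the paper's three-layer feed-forward network, and the data and injected gross errors are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative performance data with a smooth trend and two deliberately
# corrupted points standing in for gross experimental errors ("outliers").
x = np.linspace(0.0, 1.0, 40)
y = 3.0 * x - 2.0 * x**2 + rng.normal(0.0, 0.02, x.size)
y[7] += 0.4
y[25] -= 0.5

coeffs = np.polyfit(x, y, deg=3)          # surrogate model of the trend
resid = y - np.polyval(coeffs, x)
z = (resid - resid.mean()) / resid.std()  # standardized residuals
outliers = np.flatnonzero(np.abs(z) > 2.5)
print(outliers)                           # indices of flagged points
```

The neural network in the paper plays the same role as the polynomial here: it learns the smooth dependence from all runs, so that points with large residuals can be flagged even when repetitions are too few for classical statistical tests.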

  18. Detection of outliers in a gas centrifuge experimental data

    Directory of Open Access Journals (Sweden)

    M. C. V. Andrade

    2005-09-01

    Isotope separation with a gas centrifuge is a very complex process. Development and optimization of a gas centrifuge requires experimentation. These data contain experimental errors, and like other experimental data, there may be some gross errors, also known as outliers. The detection of outliers in gas centrifuge experimental data is quite complicated because there is not enough repetition for precise statistical determination, and the physical equations can be applied only to the control of the mass flow. Moreover, the concentrations are poorly predicted by phenomenological models. This paper presents the application of a three-layer feed-forward neural network to the detection of outliers in the analysis of the separation performance of a gas centrifuge, performed on a very extensive experiment.

  19. Data base of reactor physics experimental results in Kyoto University critical assembly experimental facilities

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Fujine, Shigenori; Hayashi, Masatoshi

    1986-01-01

    The Kyoto University critical assembly experimental facilities belong to the Kyoto University Research Reactor Institute and constitute a versatile critical assembly constructed for the experimental study of reactor physics and reactor engineering. The facilities are available for common utilization by universities throughout Japan. In the more than ten years since the initial criticality in 1974, various experiments on reactor physics and reactor engineering have been carried out using many experimental facilities, such as two solid-moderated cores, a light water-moderated core and a neutron generator. The experiments carried out were diverse in kind, and finding the required data among them is very troublesome; accordingly, it has become necessary to build a data base, processable by computer, of the data accumulated over the past more than ten years. The outline of the data base, the data base CAEX using personal computers, the data base supported by a large computer and so on are reported. (Kako, I.)

  20. Reconstruction of dynamic structures of experimental setups based on measurable experimental data only

    Science.gov (United States)

    Chen, Tian-Yu; Chen, Yang; Yang, Hu-Jiang; Xiao, Jing-Hua; Hu, Gang

    2018-03-01

    Nowadays, massive amounts of data have been accumulated in wide and varied fields, and analyzing existing data to extract as much useful information as possible has become one of the central issues in interdisciplinary fields. Often the output data of systems are measurable while the dynamic structures producing these data are hidden, and thus studies that reveal system structures by analyzing available data, i.e., reconstructions of systems, have become one of the most important tasks of information extraction. In the past, most work in this respect was based on theoretical analyses and numerical verifications; direct analyses of experimental data are very rare. In physical science, most analyses of experimental setups have been based on the first principles of physics laws, i.e., so-called top-down analyses. In this paper, we conducted an experiment on the “Boer resonant instrument for forced vibration” (BRIFV) and inferred the dynamic structure of the experimental set purely from the analysis of the measurable experimental data, i.e., by applying the bottom-up strategy. The dynamics of the experimental set is strongly nonlinear and chaotic, and it is subject to inevitable noise. We propose using high-order correlation computations to treat the nonlinear dynamics, and two-time correlations to treat noise effects. By applying these approaches, we have successfully reconstructed the structure of the experimental setup, and the dynamic system reconstructed from the measured data reproduces the experimental results well over a wide range of parameters.
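The role of two-time correlations in such bottom-up reconstruction can be illustrated on a toy linear stochastic system. This is a hedged sketch, not the BRIFV analysis: the coupling matrix, noise level, and lag-1 estimator below are all illustrative assumptions.

```python
import numpy as np

# For a linear stochastic map x_{t+1} = A x_t + noise, the two-time
# correlation C1 = <x_{t+1} x_t^T> and the equal-time correlation
# C0 = <x_t x_t^T> recover the hidden structure as A = C1 @ inv(C0),
# using only the measured time series.

rng = np.random.default_rng(1)
A_true = np.array([[0.90, 0.10],
                   [-0.20, 0.70]])        # hidden structure to be recovered

# Simulate a long "measured" time series driven by noise.
T = 100_000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.normal(size=2)

C0 = x[:-1].T @ x[:-1] / (T - 1)          # equal-time correlation
C1 = x[1:].T @ x[:-1] / (T - 1)           # two-time (lag-1) correlation
A_est = C1 @ np.linalg.inv(C0)

print(np.round(A_est, 2))
```

Because the driving noise at step t is uncorrelated with x_t, its contribution averages out of C1, which is why the estimator tolerates measurement-like noise.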

  1. Improving plant bioaccumulation science through consistent reporting of experimental data.

    Science.gov (United States)

    Fantke, Peter; Arnot, Jon A; Doucette, William J

    2016-10-01

    Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent data collection and reporting requirements, the information generated is often less useful than it could be for direct applications in chemical assessments and for model development and refinement. We review existing testing guidelines, common data reporting practices, and provide recommendations for revising testing guidelines and reporting requirements to improve bioaccumulation knowledge and models. This analysis provides a list of experimental parameters that will help to develop high quality datasets and support modeling tools for assessing bioaccumulation of organic chemicals in plants and ultimately addressing uncertainty in ecological and human health risk assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria for the validation of experimental and evaluated covariance data are reviewed, in particular: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation was considered and four particular points were examined. First, preserving the positive definiteness of covariance matrices under arbitrary transformations of a random vector was considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least squares method) were studied. Secondly, an algorithm for the comparison of experimental and estimated 'integral' uncertainties was developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of 'integral' uncertainty for vectors of experimental and estimated values. Thirdly, a set of statistical invariants (values which are preserved in statistical processing) was presented. Fourthly, an inequality is given that signals correlations between experimental errors leading to unphysical values. An application is given concerning the cross-section of the (n,t) reaction on Li-6 for incident neutron energies between 1 and 100 keV.
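Two of these criteria are easy to state concretely. The following sketch, using an illustrative 2x2 matrix rather than real nuclear data, checks positive definiteness via a Cholesky factorization and computes the square root of the determinant as the scalar 'integral' uncertainty measure recommended above.

```python
import numpy as np

def is_positive_definite(cov):
    """A covariance matrix is valid only if symmetric positive definite."""
    if not np.allclose(cov, cov.T):
        return False
    try:
        np.linalg.cholesky(cov)   # succeeds iff symmetric positive definite
        return True
    except np.linalg.LinAlgError:
        return False

def integral_uncertainty(cov):
    """Square root of the determinant as an overall uncertainty measure."""
    return np.sqrt(np.linalg.det(cov))

good = np.array([[4.0, 1.0], [1.0, 2.0]])   # valid covariance matrix
bad = np.array([[1.0, 3.0], [3.0, 1.0]])    # implied correlation > 1: unphysical

print(is_positive_definite(good), is_positive_definite(bad))
print(integral_uncertainty(good))
```

The second matrix is symmetric but has a negative eigenvalue, which is exactly the kind of unphysical correlation structure criterion d) is meant to catch.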

  3. Graphic display of spatially distributed binary-state experimental data

    International Nuclear Information System (INIS)

    Watson, B.L.

    1981-01-01

    Experimental data collected from a large number of transducers spatially distributed throughout a three-dimensional volume have typically posed a difficult interpretation task for the analyst. This paper describes one approach to alleviating this problem by presenting color graphic displays of experimental data; specifically, data representing the dynamic three-dimensional distribution of cooling fluid collected during the reflood and refill of simulated nuclear reactor vessels. Color-coded binary data (wet/dry) are integrated with a graphic representation of the reactor vessel and displayed on a high-resolution color CRT. The display is updated with successive data sets and made into 16-mm movies for distribution and analysis. Specific display formats are presented and extensions to other applications are discussed

  4. Thermodynamic properties of caffeine: Reconciliation of available experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Emel'yanenko, Vladimir N. [Department of Physical Chemistry, University of Rostock, Hermannstrasse 14, D-18051 Rostock (Germany); Verevkin, Sergey P. [Department of Physical Chemistry, University of Rostock, Hermannstrasse 14, D-18051 Rostock (Germany)], E-mail: sergey.verevkin@uni-rostock.de

    2008-12-15

    Molar enthalpies of sublimation of two crystal forms of caffeine were obtained from the temperature dependence of the vapour pressure measured by the transpiration method. A large number of primary experimental results on the temperature dependences of vapour pressure and phase transitions have been collected from the literature and have been treated in a uniform manner in order to derive sublimation enthalpies of caffeine at T = 298.15 K. This collection together with the new experimental results reported here has helped to resolve contradictions in the available sublimation enthalpies data and to recommend a consistent and reliable set of sublimation and formation enthalpies for both crystal forms under study. Ab initio calculations of the gaseous molar enthalpy of formation of caffeine have been performed using the G3MP2 method and the results are in excellent agreement with the selected experimental data.
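The route from vapour-pressure measurements to a sublimation enthalpy rests on the Clausius-Clapeyron relation, ln p = ln A - ΔsubH/(RT), so the enthalpy follows from the slope of ln p against 1/T. A minimal sketch, using synthetic pressures generated from an assumed enthalpy rather than the caffeine measurements:

```python
import numpy as np

# Clausius-Clapeyron sketch: recover a sublimation enthalpy from the
# temperature dependence of vapour pressure. All numbers are illustrative.

R = 8.314462618            # J/(mol K)
dH_assumed = 110_000.0     # J/mol, assumed value for the synthetic data

T = np.linspace(320.0, 360.0, 9)            # measurement temperatures, K
p = 1e12 * np.exp(-dH_assumed / (R * T))    # synthetic vapour pressures

# Linear fit of ln p vs 1/T; the slope is -dH/R.
slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)
dH_fit = -slope * R
print(f"fitted sublimation enthalpy: {dH_fit / 1000:.1f} kJ/mol")
```

In practice each measured pressure carries an uncertainty, and the fit would be weighted accordingly before quoting the enthalpy at T = 298.15 K.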

  5. Thermodynamic properties of caffeine: Reconciliation of available experimental data

    International Nuclear Information System (INIS)

    Emel'yanenko, Vladimir N.; Verevkin, Sergey P.

    2008-01-01

    Molar enthalpies of sublimation of two crystal forms of caffeine were obtained from the temperature dependence of the vapour pressure measured by the transpiration method. A large number of primary experimental results on the temperature dependences of vapour pressure and phase transitions have been collected from the literature and have been treated in a uniform manner in order to derive sublimation enthalpies of caffeine at T = 298.15 K. This collection together with the new experimental results reported here has helped to resolve contradictions in the available sublimation enthalpies data and to recommend a consistent and reliable set of sublimation and formation enthalpies for both crystal forms under study. Ab initio calculations of the gaseous molar enthalpy of formation of caffeine have been performed using the G3MP2 method and the results are in excellent agreement with the selected experimental data

  6. Experimental (Network) and Evaluated Nuclear Reaction Data at NDS

    International Nuclear Information System (INIS)

    Otsuka, N.; Semkova, V.; Simakov, S.P.; Zerkin, V.

    2011-01-01

    Dr Simakov of Nuclear Data Services Unit in the Nuclear Data Section (NDS) gave a brief overview of the data compilation and evaluation activities in the nuclear data community: experimental nuclear reaction data (EXFOR, http://www-nds.iaea.org/exfor/) and evaluated nuclear reaction data (ENDF, http://www-nds.iaea.org/endf). The International Network of Nuclear Reaction Data Centres (NRDC) coordinated by NDS includes 14 Centres in 8 Countries (China, Hungary, India, Japan, Korea, Russian, Ukraine, USA) and 2 International Organizations (NEA, IAEA). It had the first meeting of four core centres (Brookhaven, Saclay, Obninsk, Vienna) in 1966 and the EXFOR was adopted as an official data exchange format. In 2000, IAEA implemented the EXFOR database as a relational multiform database and the EXFOR is a trusted, increasing and living database with 19100 experimental works (as of September 2011) and 141600 data tables. The EXFOR provides a compilation control system for selection of articles and compilation of data and the NRDC home page provides manuals, documents and codes. The nuclear data can be retrieved by the web-retrieval system or distributed on a DVD on request. The EXFOR data play a critical role in the development of evaluated nuclear reaction data. There are several major general purpose libraries: ENDF (US), CENDL (China), JEFF (EU), JENDL (Japan) and RUSFOND (Russia). In addition, there are special libraries for particular applications: EAF (European Activation File), FENDL (Fusion Evaluated Nuclear Data Library for ITER neutronics), IBANDL (Ion Beam Analysis Nuclear Data Library for surface analysis of solids), IRDF, DXS (Dosimetry, radiation damage and gas production data) and Medical portal. Dr V. Zerkin of NDS demonstrated the data retrieval from the EXFOR database and the ENDF library.

  7. Experimental (Network) and Evaluated Nuclear Reaction Data at NDS

    Energy Technology Data Exchange (ETDEWEB)

    Otsuka, N; Semkova, V; Simakov, S P; Zerkin, V [Nuclear Data Services Unit, Nuclear Data Section, IAEA, Vienna (Austria)

    2011-11-15

    Dr Simakov of Nuclear Data Services Unit in the Nuclear Data Section (NDS) gave a brief overview of the data compilation and evaluation activities in the nuclear data community: experimental nuclear reaction data (EXFOR, http://www-nds.iaea.org/exfor/) and evaluated nuclear reaction data (ENDF, http://www-nds.iaea.org/endf). The International Network of Nuclear Reaction Data Centres (NRDC) coordinated by NDS includes 14 Centres in 8 Countries (China, Hungary, India, Japan, Korea, Russian, Ukraine, USA) and 2 International Organizations (NEA, IAEA). It had the first meeting of four core centres (Brookhaven, Saclay, Obninsk, Vienna) in 1966 and the EXFOR was adopted as an official data exchange format. In 2000, IAEA implemented the EXFOR database as a relational multiform database and the EXFOR is a trusted, increasing and living database with 19100 experimental works (as of September 2011) and 141600 data tables. The EXFOR provides a compilation control system for selection of articles and compilation of data and the NRDC home page provides manuals, documents and codes. The nuclear data can be retrieved by the web-retrieval system or distributed on a DVD on request. The EXFOR data play a critical role in the development of evaluated nuclear reaction data. There are several major general purpose libraries: ENDF (US), CENDL (China), JEFF (EU), JENDL (Japan) and RUSFOND (Russia). In addition, there are special libraries for particular applications: EAF (European Activation File), FENDL (Fusion Evaluated Nuclear Data Library for ITER neutronics), IBANDL (Ion Beam Analysis Nuclear Data Library for surface analysis of solids), IRDF, DXS (Dosimetry, radiation damage and gas production data) and Medical portal. Dr V. Zerkin of NDS demonstrated the data retrieval from the EXFOR database and the ENDF library.

  8. Analysis of experimental data sets for local scour depth around ...

    African Journals Online (AJOL)

    The performance of soft computing techniques to analyse and interpret the experimental data of local scour depth around bridge abutment, measured at different laboratory conditions and environment, is presented. The scour around bridge piers and abutments is, in the majority of cases, the main reason for bridge failures.

  9. Experimental animal data and modeling of late somatic effects

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    1988-01-01

    This section is restricted to radiation-induced life shortening and cancer, and mainly to studies with external radiation. The emphasis will be on the experimental data that are available and the experimental systems that could provide the type of data with which to either formulate or test models. Genetic effects, which are of concern, are not discussed in this section. Experimental animal radiation studies fall into those that establish general principles and those that demonstrate mechanisms. General principles include the influence of dose, radiation quality, dose rate, fractionation, protraction and such biological factors as age and gender. The influence of these factors is considered a matter of general principles because they are independent, at least qualitatively, of the species studied. For example, if an increase in the LET of the radiation causes an increased effectiveness in cancer induction in a mouse, a comparable increase in effectiveness can be expected in humans. Thus, models, whether empirical or mechanistic, formulated from experimental animal data should be generally applicable

  10. Status of experimental data for the VHTR core design

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Seok; Chang, Jong Hwa; Park, Chang Kue

    2004-05-01

    The VHTR (Very High Temperature Reactor) is emerging as a next-generation nuclear reactor to demonstrate emission-free nuclear-assisted electricity and hydrogen production. The VHTR could be either a prismatic or a pebble type helium cooled, graphite moderated reactor; the final decision will be made after the completion of the pre-conceptual design for each type. Computational tools are being developed for the pre-conceptual designs of both types, and experimental data are required to validate them. Many experiments on HTGR (High Temperature Gas-cooled Reactor) cores have been performed to confirm the design data and to validate the design tools. This report investigates the applicability and availability of the existing experimental data for the VHTR core design.

  11. Error bounds for molecular Hamiltonians inverted from experimental data

    International Nuclear Information System (INIS)

    Geremia, J.M.; Rabitz, Herschel

    2003-01-01

    Inverting experimental data provides a powerful technique for obtaining information about molecular Hamiltonians. However, rigorously quantifying how laboratory error propagates through the inversion algorithm has always presented a challenge. In this paper, we develop an inversion algorithm that realistically treats experimental error. It propagates the distribution of observed laboratory measurements into a family of Hamiltonians that are statistically consistent with the distribution of the data. This algorithm is built upon the formalism of map-facilitated inversion to alleviate computational expense and permit the use of powerful nonlinear optimization algorithms. Its capabilities are demonstrated by identifying inversion families for the X¹Σg⁺ and a³Σu⁺ states of Na₂ that are consistent with the laboratory data
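The idea of propagating measurement error into a family of consistent parameters, rather than a single best fit, can be mimicked with a simple Monte Carlo resampling loop. This is an illustrative sketch with a toy linear model; it is not the map-facilitated inversion of the paper, and all numbers are assumptions.

```python
import numpy as np

# Toy analogue of error propagation into a parameter family: resample the
# noisy measurements within their error bar and refit each time, yielding a
# distribution of parameters statistically consistent with the data.

rng = np.random.default_rng(2)

x = np.linspace(0.0, 1.0, 20)
k_true, sigma = 3.0, 0.05
y_obs = k_true * x + 0.5 + sigma * rng.normal(size=x.size)  # one measured set

def fit_k(y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

# Family of parameters consistent with the measurement distribution.
family = np.array([fit_k(y_obs + sigma * rng.normal(size=y_obs.size))
                   for _ in range(500)])

print(f"k = {family.mean():.2f} +/- {family.std():.2f}")
```

The spread of the family, not just its mean, is the quantity of interest: it bounds how well the data constrain the parameter.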

  12. Experimental software for modeling and interpreting educational data analysis processes

    Directory of Open Access Journals (Sweden)

    Natalya V. Zorina

    2017-12-01

    The problems, tasks and processes of educational data mining are considered in this article. The objective is to create a fundamentally new information system for the university using the results of educational data analysis. One of the functions of such a system is knowledge extraction from the data accumulated during operation. The creation of a national system of this type is an iterative and time-consuming process requiring preliminary studies and incremental prototyping of modules. Since few systems have been developed using this methodology, a number of experiments were carried out in order to collect data, choose appropriate methods for the study, and interpret the results. For the experiment, the authors used the data sources available in the information environment of their home university: semester performance data obtained from the information system of the training department of the Institute of IT, MTU MIREA; data obtained from the independent work of students; and data collected using specially designed Google forms. To automate the collection of information and the analysis of educational data, an experimental software package was created. As the development methodology for the experimental software complex, the methodologies of rational-empirical complexes (REX) and single-experimentation program technologies (TPEI) were adopted. The details of the program implementation of the complex are described, and conclusions are drawn about the availability of the data sources used and about the prospects for further development.

  13. The visualization and availability of experimental research data at Elsevier

    Science.gov (United States)

    Keall, Bethan

    2014-05-01

    In the digital age, the visualization and availability of experimental research data is an increasingly prominent aspect of the research process and of the scientific output that researchers generate. We expect that the importance of data will continue to grow, driven by technological advancements, requirements from funding bodies to make research data available, and a developing research data infrastructure that is supported by data repositories, science publishers, and other stakeholders. Elsevier is actively contributing to these efforts, for example by setting up bidirectional links between online articles on ScienceDirect and relevant data sets on trusted data repositories. A key aspect of Elsevier's "Article of the Future" program, these links enrich the online article and make it easier for researchers to find relevant data and articles and help place data in the right context for re-use. Recently, we have set up such links with some of the leading data repositories in Earth Sciences, including the British Geological Survey, Integrated Earth Data Applications, the UK Natural Environment Research Council, and the Oak Ridge National Laboratory DAAC. Building on these links, Elsevier has also developed a number of data integration and visualization tools, such as an interactive map viewer that displays the locations of relevant data from PANGAEA next to articles on ScienceDirect. In this presentation we will give an overview of these and other capabilities of the Article of the Future, focusing on how they help advance communication of research in the digital age.

  14. BirdsEyeView (BEV): graphical overviews of experimental data

    Directory of Open Access Journals (Sweden)

    Zhang Lifeng

    2012-09-01

    Background: Analyzing global experimental data can be tedious and time-consuming. Thus, helping biologists see results as quickly and easily as possible can facilitate biological research, and this is the purpose of the software we describe. Results: We present BirdsEyeView, a software system for visualizing experimental transcriptomic data using different views that users can switch among and compare. BirdsEyeView graphically maps data to three views: Cellular Map (currently a plant cell), Pathway Tree with dynamic mapping, and Gene Ontology (http://www.geneontology.org) Biological Processes and Molecular Functions. By displaying color-coded values for transcript levels across different views, BirdsEyeView can assist users in developing hypotheses about their experimental results. Conclusions: BirdsEyeView is a software system, available as a Java Webstart package, for visualizing transcriptomic data in the context of different biological views to assist biologists in investigating experimental results. BirdsEyeView can be obtained from http://metnetdb.org/MetNet_BirdsEyeView.htm.

  15. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
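The distinction between a cumulative and a median-based uncertainty metric can be made concrete on a toy data set. The definitions below (RMS of relative deviations, and their median) are plausible illustrations, not necessarily the exact metrics used for the NUCFRG2 and QMSFRG comparisons.

```python
import numpy as np

# Toy sparse "experimental" cross sections and model predictions.
# All values are illustrative, not from the 3600-point database.
experiment = np.array([120.0, 85.0, 40.0, 230.0, 15.0])   # measured values
model = np.array([110.0, 90.0, 38.0, 250.0, 18.0])        # model predictions

rel_err = np.abs(model - experiment) / experiment

# Cumulative metric: aggregate relative deviation over the whole database,
# sensitive to every point including the worst ones.
cumulative = np.sqrt(np.mean(rel_err ** 2))

# Median metric: the typical deviation, robust to a few bad points, which
# is useful when tracking model development over time.
median = np.median(rel_err)

print(f"cumulative: {cumulative:.3f}, median: {median:.3f}")
```

On this toy set the worst point (20% off) inflates the cumulative metric well above the median, which is exactly the contrast the two metrics are designed to expose.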

  16. WPS criterion proposition based on experimental data base interpretation

    International Nuclear Information System (INIS)

    Chapuliot, S.; Izard, J.P.; Moinereau, D.; Marie, S.

    2011-01-01

    This article gives the background and the methodology developed to define a K_J-based criterion for brittle fracture of a Reactor Pressure Vessel (RPV) submitted to a Pressurized Thermal Shock (PTS), taking into account the Warm Pre-Stressing (WPS) effect. The first step of this methodology is the constitution of an experimental data base. This work was performed through bibliography and partnerships, and allowed merging experimental results dealing with: -) various ferritic steels; -) various material states (as received, thermally aged, irradiated...); -) various modes of fracture (cleavage, inter-granular, mixed mode); -) various specimen geometries and sizes (CT, SENB, mock-ups); -) various thermo-mechanical transients. Based on this experimental data base, a simple K_J-based limit is proposed and compared to experimental results. Parametric studies are performed in order to define the main parameters of the problem. Finally, a simple proposition based on a detailed analysis of the test results is made. Since this proposition gives satisfactory results in all cases, it constitutes a good candidate for integration into the French RSE-M code for in-service assessment. (authors)

  17. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB memory) can be used at low cost for experimental data processing. This report surveys the important techniques for A/D and D/A conversion, and for the display, storage and transfer of experimental data. The items to be considered in the software are also discussed. Practical software programmed in BASIC and Assembler language is given as examples. Here, we present some techniques for obtaining faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, comparable to or better than Assembler even on a personal computer. (author)

  18. Data taking and processing system for nuclear experimental physics study

    International Nuclear Information System (INIS)

    Nagashima, Y.; Kimura, H.; Katori, K.; Kuriyama, K.

    1979-01-01

    A multi-input, multi-mode, multi-user data taking and processing system was developed. This system has the following special features. 1) It is a multi-computer system constituted of two special processors and two minicomputers. 2) Pseudo devices are introduced to make operating procedures simple and easy. In particular, the selection or modification of a 1-8 coincidence mode can be done very easily and quickly. 3) A 16 Kch spectrum storage has 8 partitions. All partitions, which have floating sizes, are handled automatically by the data taking software SHINE. 4) On-line real-time data processing can be done. Using the FORTRAN language, users may prepare the processing software apart from the data taking software. Under the RSX-11D system software, this software runs concurrently with the data taking software in a multi-programming mode. 5) Data communication between arbitrary external devices and this system is possible. With these communication procedures, not only data transfer between computers but also control of the experimental devices is realized. Like the real-time processing software, this software can be prepared by users and run concurrently with other software. 6) For data monitoring, two different graphic displays are used complementarily. One is a refresh-type high-speed display; the other is a storage-type large-screen display. Raw data are displayed on the former. Processed data or multi-parametric large-volume data are displayed on the latter. (author)

  19. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models depends upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.

  20. Confrontation of thermoluminescence models in lithium fluoride with experimental data

    International Nuclear Information System (INIS)

    Niewiadomski, T.

    1976-12-01

    The thermoluminescent properties of lithium fluoride depend on numerous factors and are much more complex than those of other phosphors. The fragmentary models developed so far are meant to explain the relationships between the crystal defect structure and the processes involved in TL. An attempt has been made to compare these models with the verified experimental data and to point out the observations which are inconsistent with the models. (author)

  1. An Experimental Metagenome Data Management and Analysis System

    Energy Technology Data Exchange (ETDEWEB)

    Markowitz, Victor M.; Korzeniewski, Frank; Palaniappan, Krishna; Szeto, Ernest; Ivanova, Natalia N.; Kyrpides, Nikos C.; Hugenholtz, Philip

    2006-03-01

    The application of shotgun sequencing to environmental samples has revealed a new universe of microbial community genomes (metagenomes) involving previously uncultured organisms. Metagenome analysis, which is expected to provide a comprehensive picture of the gene functions and metabolic capacity of a microbial community, needs to be conducted in the context of a comprehensive data management and analysis system. We present in this paper IMG/M, an experimental metagenome data management and analysis system that is based on the Integrated Microbial Genomes (IMG) system. IMG/M provides tools and viewers for analyzing both metagenomes and isolate genomes individually or in a comparative context.

  2. Revisiting dibenzothiophene thermochemical data: Experimental and computational studies

    International Nuclear Information System (INIS)

    Freitas, Vera L.S.; Gomes, Jose R.B.; Ribeiro da Silva, Maria D.M.C.

    2009-01-01

    Thermochemical data of dibenzothiophene were studied in the present work by experimental techniques and computational calculations. The standard (p° = 0.1 MPa) molar enthalpy of formation, at T = 298.15 K, in the gaseous phase, was determined from the enthalpies of combustion and sublimation, obtained by rotating-bomb calorimetry in oxygen and by Calvet microcalorimetry, respectively. This value was compared with estimates from G3(MP2)//B3LYP computations and with other results available in the literature.
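    The route to the gaseous-phase value is a Hess's-law combination: the standard molar enthalpy of formation of the crystal (derived from the combustion measurement) plus the standard molar enthalpy of sublimation. A trivial sketch with purely illustrative numbers, not the measured dibenzothiophene values:

```python
def gas_phase_formation_enthalpy(dfH_crystal_kJmol, dsubH_kJmol):
    """Hess's law: Delta_f H(g, 298.15 K) = Delta_f H(cr) + Delta_sub H."""
    return dfH_crystal_kJmol + dsubH_kJmol

# Illustrative placeholder values in kJ/mol, not the paper's results:
dfH_gas = gas_phase_formation_enthalpy(120.0, 85.0)
```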

  3. Analysis and discussion on the experimental data of electrolyte analyzer

    Science.gov (United States)

    Dong, XinYu; Jiang, JunJie; Liu, MengJun; Li, Weiwei

    2018-06-01

    In the subsequent verification of electrolyte analyzers, we found that the instruments achieve good repeatability and stability in repeated measurements over a short period of time, in line with the requirements of the verification regulation for linear error and cross-contamination rate. However, large indication errors are very common, and the measurement results from different manufacturers differ greatly. In order to identify and solve this problem, help enterprises improve product quality, and obtain accurate and reliable measurement data, we conducted an experimental evaluation of the electrolyte analyzer and analyzed the data statistically.

  4. Neutron cross section and covariance data evaluation of experimental data for {sup 27}Al

    Energy Technology Data Exchange (ETDEWEB)

    Chunjuan, Li; Jianfeng, Liu [Physics Department, Zhengzhou Univ., Zhengzhou (China)]; Tingjin, Liu [China Nuclear Data Center, China Inst. of Atomic Energy, Beijing (China)]

    2006-07-15

    The evaluation of neutron cross section and covariance data for {sup 27}Al in the energy range from 210 keV to 20 MeV was carried out on the basis of the experimental data, mainly taken from the EXFOR library. After the experimental data and their errors were analyzed, selected and corrected, the SPCC code was used to fit the data and merge the covariance matrix. The evaluated neutron cross section data and covariance matrix for {sup 27}Al can be included in the evaluated library and can also serve as a basis for related theoretical calculations. (authors)

  5. Neutron cross section and covariance data evaluation of experimental data for 27Al

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Jianfeng; Liu Tingjin

    2006-01-01

    The evaluation of neutron cross section and covariance data for 27 Al in the energy range from 210 keV to 20 MeV was carried out on the basis of the experimental data, mainly taken from the EXFOR library. After the experimental data and their errors were analyzed, selected and corrected, the SPCC code was used to fit the data and merge the covariance matrix. The evaluated neutron cross section data and covariance matrix for 27 Al can be included in the evaluated library and can also serve as a basis for related theoretical calculations. (authors)

  6. Application of data base management systems for developing experimental data base using ES computers

    International Nuclear Information System (INIS)

    Vasil'ev, V.I.; Karpov, V.V.; Mikhajlyuk, D.N.; Ostroumov, Yu.A.; Rumyantsev, A.N.

    1987-01-01

    Modern data base management systems (DBMS) are widely used for the development and operation of various data bases in data processing systems for economics, planning and management. Up to now, however, the development and operation of experimental physics data sets on ES computers has been based mainly on the traditional technology of sequential or indexed-sequential files. The principal conditions for the applicability of DBMS technology to compiling and operating data bases of physics experiment data are formulated, based on an analysis of DBMS capabilities. It is shown that application of a DBMS makes it possible to substantially reduce the overall computational cost of developing and operating data bases and to decrease the volume of stored experimental data when analyzing the information content of the data

  7. Increasing process understanding by analyzing complex interactions in experimental data

    DEFF Research Database (Denmark)

    Naelapaa, Kaisa; Allesø, Morten; Kristensen, Henning Gjelstrup

    2009-01-01

    There is a recognized need for new approaches to understand unit operations with pharmaceutical relevance. A method for analyzing complex interactions in experimental data is introduced. Higher-order interactions do exist between process parameters, which complicate the interpretation and understanding of a coating process. It was possible to model the response, that is, the amount of drug released, using both mentioned techniques. However, the ANOVA model was difficult to interpret as several interactions between process parameters existed. In contrast to ANOVA, GEMANOVA is especially suited for modeling complex interactions and making easily understandable models of these. GEMANOVA modeling allowed a simple visualization of the entire experimental space. Furthermore, information was obtained on how relative changes in the settings of process parameters influence the film quality and thereby drug release.

  8. Regularization of the double period method for experimental data processing

    Science.gov (United States)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
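    The regularization step described above reduces to a linear solve: minimizing the misfit sum((f_i - y_i)^2) plus Tikhonov's stabilizer lam*||D2 f||^2, with D2 the discrete second derivative, gives (I + lam*D2^T D2) f = y. A minimal dense sketch of that idea (not the authors' double period implementation):

```python
def second_difference_matrix(n):
    """(n-2) x n matrix applying the discrete second derivative."""
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    return D

def tikhonov_smooth(y, lam):
    """Solve (I + lam * D^T D) f = y by Gaussian elimination with pivoting."""
    n = len(y)
    D = second_difference_matrix(n)
    A = [[(1.0 if i == j else 0.0) +
          lam * sum(D[k][i] * D[k][j] for k in range(n - 2))
          for j in range(n)] for i in range(n)]
    b = list(y)
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    f = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        f[i] = (b[i] - sum(A[i][j] * f[j] for j in range(i + 1, n))) / A[i][i]
    return f
```

    With lam = 0 the data are reproduced exactly; as lam grows the solution approaches the least-squares straight line, which is how the stabilizer suppresses spurious oscillations.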

  9. Sequences by Metastable Attractors: Interweaving Dynamical Systems and Experimental Data

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    2017-05-01

    Metastable attractors and heteroclinic orbits are present in the dynamics of various complex systems. Although their occurrence is well-known, their identification and modeling is a challenging task. The present work briefly reviews the literature and proposes a novel combination of their identification in experimental data and their modeling by dynamical systems. This combination applies recurrence structure analysis, permitting the derivation of an optimal symbolic representation of metastable states and their dynamical transitions. To derive heteroclinic sequences of metastable attractors in various experimental conditions, the work introduces a Hausdorff clustering algorithm for symbolic dynamics. The application to brain signals (event-related potentials) utilizing neural field models illustrates the methodology.

  10. A Small Guide to Generating Covariances of Experimental Data

    International Nuclear Information System (INIS)

    Mannhart, Wolf

    2011-05-01

    A complete description of the uncertainties of an experiment can only be realized by a detailed list of all the uncertainty components, their values and a specification of the existing correlations between the data. Based on such information the covariance matrix can be generated, which is necessary for any further proceeding with the experimental data. It is not necessary, and not recommended, that an experimenter evaluates this covariance matrix. The reason is that an incorrectly evaluated final covariance matrix can never be corrected if the details are not given. (Such obviously wrong covariance matrices have occasionally been found in the literature recently.) Hence quotation of a covariance matrix is an additional step which should not occur without quoting a detailed list of the various uncertainty components and their correlations as well. It must be hoped that editors of journals will understand these necessary requirements. The generalized least squares procedure shown permits an easy interchange of data D0 with parameter estimates P. This means new data can easily be combined with an earlier evaluation. However, this is only valid as long as the new data have no correlation with any of the older data of the prior evaluation. Otherwise the old data which show correlation with new data have to be extracted from the evaluation and then, together with the new data and taking account of the correlation, have again to be added to the reduced evaluation. In most cases this step cannot be performed and the evaluation has to be completely redone. A partial way out is given if the evaluation is performed step by step and the results of each step are stored. Then the evaluation need only be repeated from the step which contains correlated data for the first time, while all earlier steps remain unchanged. Finally it should be noted that the addition of a small set of new data to a prior evaluation consisting of a large number of
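    A common construction consistent with the guide's premise, assumed here for illustration: each datum i carries partial uncertainties u_ki from several components k, and each component has a correlation coefficient c_k between different data points (1 for fully correlated normalization-type errors, 0 for purely statistical ones). The covariance matrix then follows directly:

```python
def covariance_matrix(partials, correlations):
    """
    partials[k][i]:  partial uncertainty of datum i from component k.
    correlations[k]: correlation of component k between different data points
                     (1.0 fully correlated / systematic, 0.0 uncorrelated).
    Returns V with V[i][j] = sum_k c_k(i,j) * u_ki * u_kj, where c = 1 on
    the diagonal.
    """
    n = len(partials[0])
    V = [[0.0] * n for _ in range(n)]
    for u, c in zip(partials, correlations):
        for i in range(n):
            for j in range(n):
                V[i][j] += (1.0 if i == j else c) * u[i] * u[j]
    return V

# Two data points with a 2% statistical (uncorrelated) and a 1.5%
# normalization (fully correlated) component; illustrative numbers only.
V = covariance_matrix([[2.0, 2.0], [1.5, 1.5]], [0.0, 1.0])
```

    Quoting the partials and the c_k, as the guide recommends, lets any reader regenerate (or correct) V; the matrix alone does not.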

  11. Comparison of ATHENA/RELAP results against ice experimental data

    CERN Document Server

    Moore-Richard, L

    2002-01-01

    In order to demonstrate the adequacy of the International Thermonuclear Experimental Reactor design from a safety standpoint and to investigate the behavior of two-phase flow phenomena during an ingress-of-coolant event, an integrated ICE test facility was constructed in Japan. The data generated at the ICE facility offer a valuable means to validate computer codes such as ATHENA/RELAP5, one of the codes used at the Idaho National Engineering and Environmental Laboratory (INEEL) to evaluate the safety of various fusion reactor concepts. In this paper we compare numerical results generated by the ATHENA code with the corresponding test data from the ICE facility. Overall we found good agreement between the test data and the predicted results.

  12. Acquisition of reactor experimental data; Akviziter reaktorskih eksperimentalnih podataka

    Energy Technology Data Exchange (ETDEWEB)

    Petrovic, M; Tasic, A [Institut za nuklearne nauke 'Boris Kidric', Vinca, Belgrade (Yugoslavia)]

    1966-07-01

    This paper includes an analysis of possible experiments and the corresponding experimental devices for the detection, recording and analysis of excitation and response signals. It presents the concept of our system for detection and recording of data, which is appropriate for our research program. Non-typical details of certain acquisition circuits are described as well. (author)

  13. Experimental validation of incomplete data CT image reconstruction techniques

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Hsiao, M.L.; Tam, K.C.

    1989-01-01

    X-ray CT inspection of large metal parts is often limited by x-ray penetration problems along many of the ray paths required for a complete CT data set. In addition, because of the complex geometry of many industrial parts, manipulation difficulties often prevent scanning over some range of angles. CT images reconstructed from these incomplete data sets contain a variety of artifacts which limit their usefulness in part quality determination. Over the past several years, the authors' company has developed two new methods of incorporating a priori information about the parts under inspection to significantly improve incomplete-data CT image quality. This work reviews the methods that were developed and presents experimental results which confirm the effectiveness of the techniques. The new methods for dealing with incomplete CT data sets rely on a priori information from part blueprints (in electronic form), outer boundary information from touch sensors, estimates of part outer boundaries from available x-ray data, and linear x-ray attenuation coefficients of the part. The two methods make use of this information in different fashions. The relative performance of the two methods in detecting various flaw types is compared. Methods for accurately registering a priori information with x-ray data are also described. These results are critical to a new industrial x-ray inspection cell built for inspection of large aircraft engine parts

  14. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.
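    One standard ingredient of such analyses, when screening hundreds of proteins for differential regulation, is a multiple-testing correction. The Benjamini-Hochberg step-up procedure is sketched below as a generic example; it is not claimed to be the chapter's specific recommendation:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha
    by the Benjamini-Hochberg step-up procedure."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * alpha / m:
            k_max = rank          # largest rank passing its threshold
    return sorted(order[:k_max])  # reject everything up to that rank

# Five hypothetical protein-level p-values; proteins 0 and 2 survive.
rejected = benjamini_hochberg([0.001, 0.20, 0.008, 0.04, 0.90])
```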

  15. Prediction of sonic boom from experimental near-field overpressure data. Volume 2: Data base construction

    Science.gov (United States)

    Glatt, C. R.; Reiners, S. J.; Hague, D. S.

    1975-01-01

    A computerized method for storing, updating and augmenting experimentally determined overpressure signatures has been developed. A data base of pressure signatures for a shuttle type vehicle has been stored. The data base has been used for the prediction of sonic boom with the program described in Volume I.

  16. Data acquisition, processing and display of experimental data for the Tokamak de Varennes

    International Nuclear Information System (INIS)

    Robins, E.S.; Larsen, J.M.; Lee, A.; Somers, G.

    1985-01-01

    The Tokamak de Varennes is to be a national facility for research into magnetic nuclear fusion. A centralised computer system is currently under development to facilitate the remote control, acquisition, processing and display of experimental data. The software (GALE-V) consists of a set of tasks to build data structures which mirror the physical arrangement of each experiment and provide the bases for the interpretation and presentation of the data to each experimenter. Data retrieval is accomplished through the graphics subsystem, and an interface for user-written data processing programs allows for the varied needs of data analysis of each experiment. Other facilities being developed provide the tools for a user to retrieve, process and view the data in a simple manner

  17. Normalization and experimental design for ChIP-chip data

    Directory of Open Access Journals (Sweden)

    Alekseyenko Artyom A

    2007-06-01

    Background: Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been widely used to investigate the DNA binding sites for a variety of proteins on a genome-wide scale. However, several issues in the processing and analysis of ChIP-chip data have not been resolved fully, including the effect of background (mock control) subtraction and normalization within and across arrays. Results: The binding profiles of the Drosophila male-specific lethal (MSL) complex on a tiling array provide a unique opportunity for investigating these topics, as it is known to bind on the X chromosome but not on the autosomes. These large bound and control regions on the same array allow clear evaluation of analytical methods. We introduce a novel normalization scheme specifically designed for ChIP-chip data from dual-channel arrays and demonstrate that this step is critical for correcting the systematic dye-bias that may exist in the data. Subtraction of the mock (non-specific antibody or no antibody) control data is generally needed to eliminate the bias, but appropriate normalization obviates the need for mock experiments and increases the correlation among replicates. The idea underlying the normalization can be used subsequently to estimate the background noise level in each array for normalization across arrays. We demonstrate the effectiveness of the methods with the MSL complex binding data and other publicly available data. Conclusion: Proper normalization is essential for ChIP-chip experiments. The proposed normalization technique can correct systematic errors and compensate for the lack of mock control data, thus reducing the experimental cost and producing more accurate results.
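    The paper's normalization scheme is specific and novel; as a generic stand-in that shows the kind of correction involved, the sketch below simply median-centres the log2(IP/control) ratios of a dual-channel array, removing a constant dye bias. It is an illustration of within-array normalization in general, not the authors' method:

```python
import math

def median(xs):
    """Median of a non-empty sequence."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def center_log_ratios(ip, control):
    """Median-center log2(IP/control) so a constant array-wide dye bias
    cancels out. A generic dual-channel correction sketch."""
    ratios = [math.log2(a / b) for a, b in zip(ip, control)]
    m = median(ratios)
    return [r - m for r in ratios]
```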

  18. New System For Tokamak T-10 Experimental Data Acquisition, Data Handling And Remote Access

    International Nuclear Information System (INIS)

    Sokolov, M. M.; Igonkina, G. B.; Koutcherenko, I. Yu.; Nurov, D. N.

    2008-01-01

    For carrying out the experiments on nuclear fusion devices in the Institute of Nuclear Fusion, Moscow, a system for experimental data acquisition, data handling and remote access (hereafter 'DAS-T10') was developed and has been used in the Institute since 2000. The DAS-T10 maintains the whole cycle of experimental data handling: from configuration of the data measuring equipment and acquisition of raw data from the fusion device (the Device), to presentation of math-processed data and support of the experiment data archive. The DAS-T10 provides facilities for researchers to access the data both at early stages of an experiment and well afterwards, locally from within the experiment network and remotely over the Internet. The DAS-T10 has been undergoing modernization since 2007. The new version of the DAS-T10 will accommodate modern data measuring equipment and will implement improved architectural solutions. The innovations will allow the DAS-T10 to produce and handle larger amounts of experimental data, thus providing the opportunities to intensify and extend the fusion researches. The new features of the DAS-T10, along with the existing design principles, are reviewed in this paper

  19. Covariance data evaluation of some experimental data for n + 65,63,NatCu

    International Nuclear Information System (INIS)

    Jia Min; Liu Jianfeng; Liu Tingjin

    2003-01-01

    The evaluation of covariance data for 65,63,Nat Cu in the energy range from 99.5 keV to 20 MeV was carried out using the EXPCOV and SPC codes, based on the experimental data available. The data can serve as part of covariance File 33 in the evaluated library in ENDF/B-6 format for the corresponding nuclides, and can also be used as a basis for related theoretical calculations. (authors)

  20. Estimation of covariance matrix on the experimental data for nuclear data evaluation

    International Nuclear Information System (INIS)

    Murata, T.

    1985-01-01

    In order to evaluate fission and capture cross sections of some U and Pu isotopes for JENDL-3, we plan to evaluate them simultaneously with a least-squares method. For the simultaneous evaluation, a covariance matrix is required for each experimental data set. In the present work, we have studied procedures for deriving the covariance matrix from the error data given in the experimental papers. The covariance matrices were obtained using the partial errors and estimated correlation coefficients between partial errors of the same type at different neutron energies. Some examples of the covariance matrix estimation are explained and the preliminary results of the simultaneous evaluation are presented. (author)
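    In its simplest instance, the least-squares machinery above combines two measurements y1, y2 of the same quantity with covariance matrix [[v11, v12], [v12, v22]] by generalized least squares. A sketch of that textbook case, illustrative rather than the evaluation code actually used:

```python
def combine_two(y1, y2, v11, v22, v12):
    """Generalized-least-squares average of two measurements of the same
    quantity, given their covariance matrix [[v11, v12], [v12, v22]]."""
    det = v11 * v22 - v12 * v12
    # elements of the inverse covariance matrix
    w11, w22, w12 = v22 / det, v11 / det, -v12 / det
    s = w11 + w22 + 2.0 * w12                     # 1^T V^-1 1
    xhat = (y1 * (w11 + w12) + y2 * (w22 + w12)) / s
    var = 1.0 / s
    return xhat, var
```

    With v12 = 0 this reduces to the familiar inverse-variance weighted mean; a positive correlation enlarges the combined variance, which is exactly why the correlation coefficients must be estimated and not ignored.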

  1. Clustering of experimental data and its application to nuclear data evaluation

    International Nuclear Information System (INIS)

    Abboud, A.; Rashed, R.; Ibrahim, M.

    1997-01-01

    A semi-automatic pre-processing technique has been proposed by Iwasaki to classify the experimental data for a reaction into one or a small number of large data groups, called main cluster(s), and to eliminate some data which deviate from the main body of the data. The classifying method is based on a technique like pattern clustering in the information processing domain. Tests of the data clustering formed reasonable main clusters for three activation cross-sections. This technique is a helpful tool in neutron cross-section evaluation. (author). 4 refs, 1 fig., 3 tabs
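    The main-cluster idea can be illustrated in one dimension: sort the data, split wherever the gap between neighbours exceeds a threshold, and keep the largest group while discarding the rest as deviating points. This toy sketch is only an analogue of Iwasaki's pattern-clustering technique, not a reimplementation:

```python
def main_cluster(values, gap):
    """Split the sorted values wherever neighbours differ by more than
    `gap`; return the largest group (the 'main cluster')."""
    s = sorted(values)
    clusters, current = [], [s[0]]
    for a, b in zip(s, s[1:]):
        if b - a > gap:
            clusters.append(current)
            current = []
        current.append(b)
    clusters.append(current)
    return max(clusters, key=len)

# Four consistent measurements and one outlier (illustrative values).
kept = main_cluster([9.8, 10.1, 10.0, 14.2, 9.9], gap=1.0)
```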

  2. Clustering of experimental data and its application to nuclear data evaluation

    International Nuclear Information System (INIS)

    Abboud, A.; Rashed, R.; Ibrahim, M.

    1998-01-01

    A semi-automatic pre-processing technique has been proposed by Iwasaki to classify the experimental data for a reaction into one or a small number of large data groups, called main cluster(s), and to eliminate some data which deviate from the main body of the data. The classifying method is based on a technique like pattern clustering in the information processing domain. Tests of the data clustering formed reasonable main clusters for three activation cross-sections. This technique is a helpful tool in neutron cross-section evaluation

  3. Theoretical interpretation of experimental data from direct dark matter detection

    Energy Technology Data Exchange (ETDEWEB)

    Chung-Lin, Shan

    2007-10-15

    I derive expressions that allow one to reconstruct the normalized one-dimensional velocity distribution function of halo WIMPs and to determine its moments from the recoil energy spectrum as well as from experimental data directly. The reconstruction of the velocity distribution function is further extended to take into account the annual modulation of the event rate. All these expressions are independent of the as yet unknown WIMP density near the Earth as well as of the WIMP-nucleus cross section. The only information about the nature of halo WIMPs which one needs is the WIMP mass. I also present a method for the determination of the WIMP mass by combining two (or more) experiments with different detector materials. This method is independent not only of the model of the Galactic halo but also of that of WIMPs. (orig.)
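    Once a velocity distribution has been reconstructed on a grid, its moments follow by simple quadrature. The trapezoidal sketch below is generic numerics, not Shan's analytic expressions:

```python
def trapezoid(xs, ys):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return sum(0.5 * (y0 + y1) * (x1 - x0)
               for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))

def moments(v, f, orders):
    """Moments <v^n> of a sampled (not necessarily normalized)
    one-dimensional distribution f(v)."""
    norm = trapezoid(v, f)
    return {n: trapezoid(v, [fi * vi ** n for vi, fi in zip(v, f)]) / norm
            for n in orders}
```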

  4. Theoretical interpretation of experimental data from direct dark matter detection

    International Nuclear Information System (INIS)

    Shan Chung-Lin

    2007-10-01

    I derive expressions that allow one to reconstruct the normalized one-dimensional velocity distribution function of halo WIMPs and to determine its moments from the recoil energy spectrum as well as from experimental data directly. The reconstruction of the velocity distribution function is further extended to take into account the annual modulation of the event rate. All these expressions are independent of the as yet unknown WIMP density near the Earth as well as of the WIMP-nucleus cross section. The only information about the nature of halo WIMPs which one needs is the WIMP mass. I also present a method for the determination of the WIMP mass by combining two (or more) experiments with different detector materials. This method is independent not only of the model of the Galactic halo but also of that of WIMPs. (orig.)

  5. Universal Implicatures and Free Choice Effects: Experimental Data

    Directory of Open Access Journals (Sweden)

    Emmanuel Chemla

    2009-05-01

    Universal inferences like (i) have been taken as evidence for a local/syntactic treatment of scalar implicatures (i.e. theories where the enrichment of "some" into "some but not all" can happen sub-sententially): (i) Everybody read some of the books --> Everybody read [some but not all the books]. In this paper, I provide experimental evidence which casts doubt on this argument. The counter-argument relies on a new set of data involving free choice inferences (a sub-species of scalar implicatures) and negative counterparts of (i), namely sentences with the quantifier "no" instead of "every". The results show that the globalist account of scalar implicatures is incomplete (mainly because of free choice inferences) but that the distribution of universal inferences made available by the localist move remains incomplete as well (mainly because of the negative cases). doi:10.3765/sp.2.2

  6. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modeling of the adult larynx, but the mechanisms of the newborn's voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent(R) with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  7. Status of experimental data related to Be in ITER materials R and D data bank

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Shigeru [ITER Joint Central Team, Muenchen (Germany)]

    1998-01-01

    To keep traceability of the many valuable raw data that were experimentally obtained in the ITER Technology R and D Tasks related to materials for in-vessel components (divertor, first wall, blanket, vacuum vessel, etc.) and to easily make the best use of these data in the ITER design activities, the 'ITER Materials R and D Data Bank' has been built up with the use of Excel{sup TM} spreadsheets. The paper describes the status of the experimental data collected in this data bank on thermo-mechanical properties of unirradiated and neutron-irradiated Be, on plasma-material interactions of Be, on mechanical properties of various kinds of Be/Cu joints (including plasma-sprayed Be), and on thermal fatigue tests of Be/Cu mock-ups. (author)

  8. Computational reverse shoulder prosthesis model: Experimental data and verification.

    Science.gov (United States)

    Martins, A; Quental, C; Folgado, J; Ambrósio, J; Monteiro, J; Sarmento, M

    2015-09-18

    The reverse shoulder prosthesis aims to restore the stability and function of pathological shoulders, but the biomechanical aspects of the geometrical changes induced by the implant are yet to be fully understood. Considering a large-scale musculoskeletal model of the upper limb, the aim of this study is to evaluate how the Delta reverse shoulder prosthesis influences the biomechanical behavior of the shoulder joint. In this study, the kinematic data of an unloaded abduction in the frontal plane and an unloaded forward flexion in the sagittal plane were experimentally acquired through video-imaging for a control group, composed of 10 healthy shoulders, and a reverse shoulder group, composed of 3 reverse shoulders. Synchronously, the EMG data of 7 superficial muscles were also collected. The muscle force sharing problem was solved through the minimization of the metabolic energy consumption. The evaluation of the shoulder kinematics shows an increase in the lateral rotation of the scapula in the reverse shoulder group, and an increase in the contribution of the scapulothoracic joint to the shoulder joint. Regarding the muscle force sharing problem, the musculoskeletal model estimates an increased activity of the deltoid, teres minor, clavicular fibers of the pectoralis major, and coracobrachialis muscles in the reverse shoulder group. The comparison between the muscle forces predicted and the EMG data acquired revealed a good correlation, which provides further confidence in the model. Overall, the shoulder joint reaction force was lower in the reverse shoulder group than in the control group. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis

    Science.gov (United States)

    Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.

    2017-06-01

    Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. R&T elaboration to extract the absorption spectra was conducted by two approximated methods, single pass analysis (SPA) and double pass analysis (DPA), followed by Eg evaluation through linear fit of Tauc or Cody plots. Direct fitting of R&T spectra through a Tauc-Lorentz oscillator model is used as comparison. Methods and data are discussed also in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach joined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
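    The final step of either analysis path, the linear fit of a Tauc plot, can be sketched as follows: plot (alpha*E)**r against photon energy E, fit the linear region, and read Eg off the x-axis intercept (r = 1/2 for indirect allowed transitions, r = 2 for direct ones). Synthetic data only; none of the paper's measurements are reproduced:

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def tauc_bandgap(E, alpha, r=0.5):
    """Tauc plot: fit (alpha*E)**r versus E in its linear region and
    return the x-axis intercept -b/a as the optical bandgap."""
    a, b = linear_fit(E, [(al * e) ** r for al, e in zip(E, alpha)])
    return -b / a
```

    For real spectra one would restrict the fit to the linear region above the absorption onset; here the synthetic data are linear throughout.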

  10. Status of experimental data of proton-induced reactions for intermediate-energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Yukinobu; Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Yamano, Naoki; Fukahori, Tokio

    1998-11-01

    The present status of experimental data of proton-induced reactions is reviewed, with particular attention to total reaction cross section, elastic and inelastic scattering cross section, double-differential particle production cross section, isotope production cross section, and activation cross section. (author)

  11. Discussions on the experimental data covering (EDC) procedure

    International Nuclear Information System (INIS)

    Lee, S.Y.; Ban, C.H.

    2004-01-01

In describing step 9 of TRAC-CSAU, there is a statement that there is no clear guide with which one can determine the code uncertainty parameters and their statistics using integral and/or separate effects tests. On the other hand, there is an important requirement stating that the code uncertainty should be evaluated through direct data comparison with the relevant integral systems and separate-effects experiments at different scales. In the best-estimate LOCA methodologies there are some efforts to use the IET and SET data for determining the code uncertainty parameters and their statistics, but it is hard to find a systematic way to relate the code uncertainty parameters and/or their statistics to the results of the direct data comparison, especially for components with multiple code uncertainty parameters. It is essential to develop a procedure that implements the requirement of direct data comparison with SETs and IETs in determining the code uncertainty. In this paper, the Code Accuracy Based Uncertainty Estimation (CABUE) technique is introduced with an emphasis on the role of Experimental Data Coverings in connecting the code accuracy with the overall uncertainty of a code prediction. In contrast to TRAC-CSAU, where the code accuracy is used only to confirm conservatism, in CABUE the code accuracy is represented by the scalable code parameters and their statistics through extensive EDC calculations, which brings several benefits. Since the code accuracy becomes the measure for determining the statistics of the code parameters, a code with better accuracy naturally provides a smaller overall calculational uncertainty. This is the full implementation of the requirement of 'direct data comparison'. 
Adopting the EDC procedure in CABUE, where a distribution-free percentile estimation technique with simple random sampling calculations is applied uniformly at the various levels, makes it easy to choose the number of code uncertainty
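The "distribution-free percentile estimation technique with simple random sampling" mentioned in this record is commonly implemented via Wilks' order-statistic formula, which fixes the number of code runs needed for a given percentile/confidence pair. The paper's exact procedure is not given here, so the sketch below shows only the textbook first-order, one-sided form.

```python
import math

def wilks_runs(percentile=0.95, confidence=0.95):
    """Smallest number n of random code runs such that the sample maximum
    bounds the given percentile with the given confidence, i.e.
    1 - percentile**n >= confidence (first-order, one-sided Wilks formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(percentile))

print(wilks_runs())            # the classic 95%/95% case gives 59 runs
print(wilks_runs(0.99, 0.95))  # a tighter percentile needs many more runs
```

Because the bound is distribution-free, the required run count depends only on the percentile and confidence, not on the shape of the code-output distribution.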

  12. Deriving Structural Information from Experimentally Measured Data on Biomolecules.

    Science.gov (United States)

    van Gunsteren, Wilfred F; Allison, Jane R; Daura, Xavier; Dolenc, Jožica; Hansen, Niels; Mark, Alan E; Oostenbrink, Chris; Rusu, Victor H; Smith, Lorna J

    2016-12-23

During the past half century, the number and accuracy of experimental techniques that can deliver values of observables for biomolecular systems have been steadily increasing. The conversion of a measured value Q_exp of an observable quantity Q into structural information is, however, a task beset with theoretical and practical problems: 1) insufficient or inaccurate values of Q_exp, 2) inaccuracies in the function Q(r⃗) used to relate the quantity Q to the structure r⃗, 3) how to account for the averaging inherent in the measurement of Q_exp, 4) how to handle the possible multiple-valuedness of the inverse r⃗(Q) of the function Q(r⃗), to mention a few. These apply to a variety of observable quantities Q and measurement techniques such as X-ray and neutron diffraction, small-angle and wide-angle X-ray scattering, free-electron laser imaging, cryo-electron microscopy, nuclear magnetic resonance, electron paramagnetic resonance, infrared and Raman spectroscopy, circular dichroism, Förster resonance energy transfer, atomic force microscopy and ion-mobility mass spectrometry. The process of deriving structural information from measured data is reviewed with an eye to non-experts and newcomers in the field, using examples from the literature of the effect of the various choices and approximations involved in the process. A list of choices to be avoided is provided. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Analysis of Elektrogorsk 108 test facility experimental data

    International Nuclear Information System (INIS)

    Urbonas, R.

    2001-01-01

In the paper an evaluation of experimental data obtained at the Russian Elektrogorsk 108 (E-108) test facility is presented. The E-108 facility is a scaled model of the Russian RBMK reactor design. An attempt was made to validate state-of-the-art thermal hydraulic codes on the basis of the E-108 test facility. Originally these codes were developed and validated for BWRs and PWRs; since they are now widely used for simulation of RBMK reactors, further implementation and validation of the codes is required. The facility was modelled with the RELAP5 (INEEL, USA) best-estimate thermal hydraulic system analysis code. The results show a dependence on the number of nodes used in the heated channels and on the frictional and form losses employed. The observed oscillatory behaviour results from density waves and critical heat flux. It is shown that the codes predict well both the thermal hydraulic instability and the sudden heat structure temperature excursion when the critical heat flux is approached. In addition, an uncertainty analysis of one of the experiments was performed with the GRS-developed System for Uncertainty and Sensitivity Analysis (SUSA). This was one of the first attempts to use this statistics-based methodology in Lithuania. (author)

  14. Network inference from functional experimental data (Conference Presentation)

    Science.gov (United States)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows the observation of calcium or electrical activity of thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method were determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. 
We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic
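The simplest of the statistical methods named in this record, cross-correlation, can be sketched in a few lines. This is a generic illustration on toy data, not the authors' software (which also handles directed measures such as transfer entropy); the threshold and the common-drive toy model are illustrative assumptions.

```python
import numpy as np

def correlation_connectivity(traces, threshold=0.5):
    """Infer an undirected functional connectivity matrix by thresholding
    pairwise Pearson correlations of fluorescence traces.

    traces: array of shape (n_neurons, n_frames).
    Returns a boolean adjacency matrix with the diagonal forced to False."""
    corr = np.corrcoef(traces)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)
    return adj

# Toy example: neurons 0 and 1 share a common drive; neuron 2 is independent.
rng = np.random.default_rng(0)
drive = rng.standard_normal(1000)
traces = np.vstack([
    drive + 0.1 * rng.standard_normal(1000),
    drive + 0.1 * rng.standard_normal(1000),
    rng.standard_normal(1000),
])
print(correlation_connectivity(traces).astype(int))
```

Being symmetric, a correlation matrix cannot indicate direction; that is why the abstract contrasts it with Granger causality and transfer entropy.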

  15. LHC experimental data from today's data challenges to the promise of tomorrow

    CERN Multimedia

    CERN. Geneva; Panzer-Steindel, Bernd; Rademakers, Fons

    2003-01-01

The LHC experiments constitute a challenge in several disciplines in both High Energy Physics and Information Technologies. This is definitely the case for data acquisition, processing and analysis. This challenge has been addressed by many years of R&D activity during which prototypes of components or subsystems have been developed. This prototyping phase is now culminating with an evaluation of the prototypes in large-scale tests (appropriately called "Data Challenges"). In a period of restricted funding, the expectation is to realize the LHC data acquisition and computing infrastructures by making extensive use of standard and commodity components. The lectures will start with a brief overview of the requirements of the LHC experiments in terms of data acquisition and computing. The different tasks of the experimental data chain will also be explained: data acquisition, selection, storage, processing and analysis. The major trends of the computing and networking industries will then be indicated with pa...

  16. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed, high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for the two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions. Graduate Research Assistant, Department of Mechanical Engineering.
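Of the two techniques mentioned in this record, nudging is the simpler: a relaxation term proportional to the model-observation mismatch is added to the governing equations, with the relaxation time scale acting as the tunable weighting. A scalar toy sketch under illustrative assumptions (the study itself nudges a three-component LES velocity field, not a scalar):

```python
def nudged_step(x, x_obs, f, dt, tau):
    """One explicit-Euler step of dx/dt = f(x) + (x_obs - x)/tau.

    The nudging term (x_obs - x)/tau relaxes the model state toward the
    observation; a smaller tau means stronger assimilation."""
    return x + dt * (f(x) + (x_obs - x) / tau)

# Toy model: linear decay, with the observation held at 1.0.
decay = lambda x: -0.5 * x
x = 0.0
for _ in range(2000):
    x = nudged_step(x, x_obs=1.0, f=decay, dt=0.01, tau=0.1)
# Steady state solves -0.5*x + (1 - x)/0.1 = 0, i.e. x = 10/10.5
print(round(x, 3))
```

The compromise is visible in the steady state: the nudged solution sits between the model attractor (0) and the observation (1), weighted by tau relative to the model time scale.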

  17. Examination of Experimental Data for Irradiation - Creep in Nuclear Graphite

    Science.gov (United States)

    Mobasheran, Amir Sassan

The objective of this dissertation was to establish credibility and confidence levels for the observed behavior of nuclear graphite in a neutron irradiation environment. Available experimental data associated with the OC-series irradiation-induced creep experiments performed at the Oak Ridge National Laboratory (ORNL) were examined. Pre- and postirradiation measurement data were studied considering "linear" and "nonlinear" creep models. The nonlinear creep model considers the creep coefficient to vary with neutron fluence due to the densification of graphite with neutron irradiation. Within the range of neutron fluence involved (up to 0.53 × 10^26 neutrons/m^2, E > 50 keV), both models were capable of explaining about 96% and 80% of the variation of the irradiation-induced creep strain with neutron fluence at temperatures of 600 °C and 900 °C, respectively. Temperature and reactor power data were analyzed to determine the best estimates for the actual irradiation temperatures. It was determined from thermocouple readouts that the best-estimate values for the irradiation temperatures were well within ±10 °C of the design temperatures of 600 °C and 900 °C. The dependence of the secondary creep coefficients (for both linear and nonlinear models) on irradiation temperature was determined assuming that the variation of the creep coefficient with temperature, in the temperature range studied, is reasonably linear. It was concluded that the variability in the estimates of the creep coefficients is definitely not the result of temperature fluctuations in the experiment. The coefficients for the constitutive equation describing the overall growth of grade H-451 graphite were also studied. It was revealed that the modulus of elasticity and the shear modulus are not affected by creep and that the electrical resistivity is only slightly (less than 5%) changed by creep. However, the coefficient of thermal expansion does change with creep. 
The consistency of
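The variance-explained figures quoted in this record come from regressing creep strain against neutron fluence. A hedged sketch of the linear-model case on synthetic data (the creep coefficient, noise level, and fluence range below are illustrative, not the OC-series values):

```python
import numpy as np

def fit_linear_creep(fluence, strain):
    """Least-squares fit of creep strain vs neutron fluence through the
    origin; returns (creep coefficient, R^2 fraction of variance explained)."""
    k = np.dot(fluence, strain) / np.dot(fluence, fluence)
    ss_res = np.sum((strain - k * fluence) ** 2)
    ss_tot = np.sum((strain - strain.mean()) ** 2)
    return k, 1.0 - ss_res / ss_tot

# Synthetic data: strain = 2.0 * fluence (fluence in units of 10^26 n/m^2),
# plus measurement scatter.
rng = np.random.default_rng(1)
fluence = np.linspace(0.05, 0.53, 12)
strain = 2.0 * fluence + rng.normal(0.0, 0.02, 12)
k, r2 = fit_linear_creep(fluence, strain)
print(round(k, 2), round(r2, 3))
```

The nonlinear model of the dissertation would replace the constant k by a fluence-dependent coefficient; the R^2 comparison between the two fits is what the 96%/80% figures summarize.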

  18. Comparison of Laboratory Experimental Data to XBeach Numerical Model Output

    Science.gov (United States)

    Demirci, Ebru; Baykal, Cuneyt; Guler, Isikhan; Sogut, Erdinc

    2016-04-01

generating data sets for testing and validation of sediment transport relationships for sand transport in the presence of waves and currents. In these series, there is no structure in the basin. The second and third series of experiments were designed to generate data sets for the development of tombolos in the lee of a detached 4-m-long rubble-mound breakwater located 4 m from the initial shoreline. The fourth series of experiments was conducted to investigate tombolo development in the lee of a 4-m-long T-head groin with the head section in the same location as in the second and third tests. The fifth series of experiments was used to investigate tombolo development in the lee of a 3-m-long rubble-mound breakwater positioned 1.5 m offshore of the initial shoreline. In this study, the data collected from the above-mentioned five experiments are used to compare the experimental data with XBeach numerical model results, both for the "no-structure" and "with-structure" cases, with regard to sediment transport relationships in the presence of waves and currents as well as the shoreline changes around the detached breakwater and the T-groin. The main purpose is to investigate the similarities and differences between the laboratory experimental data and the XBeach numerical model outputs for these five cases. References: Baykal, C., Sogut, E., Ergin, A., Guler, I., Ozyurt, G.T., Guler, G., and Dogan, G.G. (2015). Modelling Long Term Morphological Changes with XBeach: Case Study of Kızılırmak River Mouth, Turkey, European Geosciences Union, General Assembly 2015, Vienna, Austria, 12-17 April 2015. Gravens, M.B. and Wang, P. (2007). "Data report: Laboratory testing of longshore sand transport by waves and currents; morphology change behind headland structures." Technical Report, ERDC/CHL TR-07-8, Coastal and Hydraulics Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS. 
Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de

  19. Synthesis of analytical and experimental data, capacity evaluation

    International Nuclear Information System (INIS)

    Lin Chiwenn

    2001-01-01

This part of the presentation deals with the synthesis of analytical and experimental data and with capacity evaluation. First, a typical test flow diagram will be discussed to identify key aspects of the test program where analysis is to be performed. Next, actual component test and analysis programs will be presented to illustrate some important parameters to be considered in the modelling process. Then, two combined test and analysis projects will be reviewed to demonstrate the potential use of substructuring in model testing to reduce the size of the model to be tested. This will be followed by an inelastic response spectral reactor coolant loop analysis, which was used to study a high-level seismic test conducted for a PWR reactor coolant system. The potential use of an improved impact calculation method will be discussed after that. As a closure to the test and analysis synthesis process, a reactor internals qualification process will be discussed. Finally, capacity evaluation will be discussed, following the requirements of the ASME Section III code for class 1 pressure vessels, class 1 piping (which includes the reactor coolant loop piping), and the reactor internals. The subsections included in this part of the presentation, which cover the above-mentioned subjects, are: typical component test and analysis results; combined test and analysis process; a simplified inelastic response spectral analysis of the reactor coolant loop; an improved impact analysis methodology; reactor coolant system and core internals qualification process; ASME Section III code, design by analysis of class 1 pressure vessels; design by analysis of class 1 piping; ASME Section III code, design by analysis of reactor core internals

  20. Methods of experimental settlement of contradicting data in evaluated nuclear data libraries

    Directory of Open Access Journals (Sweden)

    V. A. Libman

    2016-12-01

The latest versions of the evaluated nuclear data libraries (ENDLs) contain contradictory data on neutron cross sections. To resolve these contradictions, we propose a method of experimental verification based on the use of filtered neutron beams and subsequent measurements of appropriate samples. The basic idea of the method is to modify a suitable filtered neutron beam so that the differences between the neutron cross sections given by different ENDLs become measurable. The method is demonstrated with the example of cerium, whose total neutron cross section differs significantly among the latest versions of four ENDLs.
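The "measurable difference" criterion in this record can be illustrated with the exponential transmission law T = exp(−Nσt) that underlies filtered-beam measurements: a cross-section discrepancy between libraries translates into a transmission difference that either is or is not resolvable experimentally. The numbers below are purely hypothetical, not the cerium data of the paper.

```python
import math

def transmission(sigma_barn, n_per_cm3, thickness_cm):
    """Neutron transmission through a slab: T = exp(-N * sigma * t),
    with sigma given in barns (1 b = 1e-24 cm^2)."""
    return math.exp(-n_per_cm3 * sigma_barn * 1e-24 * thickness_cm)

# Hypothetical case: two evaluated libraries quote 5.0 b vs 5.5 b for the
# same nuclide; is the difference measurable on a 2 cm sample?
N = 2.9e22  # atoms/cm^3, an illustrative number density
t1 = transmission(5.0, N, 2.0)
t2 = transmission(5.5, N, 2.0)
print(round(t1, 4), round(t2, 4), round(abs(t1 - t2) / t1 * 100, 1))
```

A few percent difference in transmission is readily measurable, which is the kind of leverage a well-chosen filter and sample thickness are meant to provide.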

  1. Multi-window dialogue system of data processing for experimental setup VASSILISSA in PAW environment

    International Nuclear Information System (INIS)

    Andreev, A.N.; Vakatov, D.V.; Veselski, M.; Eremin, A.V.; Ivanov, V.V.; Khasanov, A.M.

    1992-01-01

Multi-window dialogue system for processing data acquired from the experimental setup VASSILISSA is presented. The system provides a user-friendly interface for experimental data conversion, selection, and preparation for graphic analysis with PAW. 7 refs.; 5 figs.; 1 tab

  2. Experimental Advanced Airborne Research Lidar (EAARL) Data Processing Manual

    Science.gov (United States)

    Bonisteel, Jamie M.; Nayegandhi, Amar; Wright, C. Wayne; Brock, John C.; Nagle, David

    2009-01-01

    The Experimental Advanced Airborne Research Lidar (EAARL) is an example of a Light Detection and Ranging (Lidar) system that utilizes a blue-green wavelength (532 nanometers) to determine the distance to an object. The distance is determined by recording the travel time of a transmitted pulse at the speed of light (fig. 1). This system uses raster laser scanning with full-waveform (multi-peak) resolving capabilities to measure submerged topography and adjacent coastal land elevations simultaneously (Nayegandhi and others, 2009). This document reviews procedures for the post-processing of EAARL data using the custom-built Airborne Lidar Processing System (ALPS). ALPS software was developed in an open-source programming environment operated on a Linux platform. It has the ability to combine the laser return backscatter digitized at 1-nanosecond intervals with aircraft positioning information. This solution enables the exploration and processing of the EAARL data in an interactive or batch mode. ALPS also includes modules for the creation of bare earth, canopy-top, and submerged topography Digital Elevation Models (DEMs). The EAARL system uses an Earth-centered coordinate and reference system that removes the necessity to reference submerged topography data relative to water level or tide gages (Nayegandhi and others, 2006). The EAARL system can be mounted in an array of small twin-engine aircraft that operate at 300 meters above ground level (AGL) at a speed of 60 meters per second (117 knots). While other systems strive to maximize operational depth limits, EAARL has a narrow transmit beam and receiver field of view (1.5 to 2 milliradians), which improves the depth-measurement accuracy in shallow, clear water but limits the maximum depth to about 1.5 Secchi disk depth (~20 meters) in clear water. 
The laser transmitter [Continuum EPO-5000 yttrium aluminum garnet (YAG)] produces up to 5,000 short-duration (1.2 nanosecond), low-power (70 microjoules) pulses each second
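The travel-time-to-distance conversion described at the start of this record is simple: range = c·t/2 for the two-way path, so the 1-ns digitization interval corresponds to roughly 15 cm of range per sample in air, and less in water, where light travels more slowly. A hedged sketch with illustrative constants (not ALPS code):

```python
# Range from lidar pulse travel time: d = c * t / 2 (two-way path).
C_AIR = 2.997e8         # m/s, approximately the vacuum speed of light
C_WATER = C_AIR / 1.33  # refractive index of water ~ 1.33

def sample_to_range(n_samples, dt_ns=1.0, c=C_AIR):
    """Convert a waveform sample offset (dt_ns spacing) to range in meters."""
    t = n_samples * dt_ns * 1e-9
    return c * t / 2.0

print(round(sample_to_range(1), 3))               # meters per 1-ns sample in air
print(round(sample_to_range(100, c=C_WATER), 2))  # a submerged return 100 samples later
```

A real bathymetric processor must also locate the surface and bottom peaks in the full waveform and refract the beam at the air-water interface; this sketch covers only the time-to-range step.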

  3. Experimental CFD grade data for stratified two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Vallee, Christophe, E-mail: c.vallee@fzd.d [Forschungszentrum Dresden-Rossendorf e.V., Institute of Safety Research, D-01314 Dresden (Germany); Lucas, Dirk; Beyer, Matthias; Pietruske, Heiko; Schuetz, Peter; Carl, Helmar [Forschungszentrum Dresden-Rossendorf e.V., Institute of Safety Research, D-01314 Dresden (Germany)

    2010-09-15

    1:3. The investigations focus on the flow regimes observed in the region of the elbow and of the steam generator inlet chamber, which are equipped with glass side walls. An overview of the experimental methodology and of the acquired data is given. These cover experiments without water circulation, which can be seen as test cases for CFD development, as well as counter-current flow limitation experiments, representing transient validation cases of a typical nuclear reactor safety issue.

  4. Experimental CFD grade data for stratified two-phase flows

    International Nuclear Information System (INIS)

    Vallee, Christophe; Lucas, Dirk; Beyer, Matthias; Pietruske, Heiko; Schuetz, Peter; Carl, Helmar

    2010-01-01

1:3. The investigations focus on the flow regimes observed in the region of the elbow and of the steam generator inlet chamber, which are equipped with glass side walls. An overview of the experimental methodology and of the acquired data is given. These cover experiments without water circulation, which can be seen as test cases for CFD development, as well as counter-current flow limitation experiments, representing transient validation cases of a typical nuclear reactor safety issue.

  5. Experimental data available for radiation damage modelling in reactor materials

    International Nuclear Information System (INIS)

    Wollenberger, H.

    Radiation damage modelling requires rate constants for production, annihilation and trapping of defects. The literature is reviewed with respect to experimental determination of such constants. Useful quantitative information exists only for Cu and Al. Special emphasis is given to the temperature dependence of the rate constants

  6. Towards Coordination Patterns for Complex Experimentations in Data Mining

    NARCIS (Netherlands)

    F. Arbab (Farhad); C. Diamantini (Claudia); D. Potena (Domenico); E. Storti (Emanuele)

    2010-01-01

In order to support the management of experimental activities in a networked scientific community, the exploitation of the service-oriented paradigm and technologies is a hot research topic in E-science. In particular, scientific workflows can be modeled by resorting to the notion of process.

  7. Management, Analysis, and Visualization of Experimental and Observational Data -- The Convergence of Data and Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Greenwald, Martin; Kleese van Dam, Kersten; Parashar, Manish; Wild, Stefan, M.; Wiley, H. Steven

    2016-10-27

    Scientific user facilities---particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more---operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time there is a growing community of experimentalists that require real-time data analysis feedback, to enable them to steer their complex experimental instruments to optimized scientific outcomes and new discoveries. Recent efforts in DOE-SC have focused on articulating the data-centric challenges and opportunities facing these science communities. Key challenges include difficulties coping with data size, rate, and complexity in the context of both real-time and post-experiment data analysis and interpretation. Solutions will require algorithmic and mathematical advances, as well as hardware and software infrastructures that adequately support data-intensive scientific workloads. This paper presents the summary findings of a workshop held by DOE-SC in September 2015, convened to identify the major challenges and the research that is needed to meet those challenges.

  8. Can Experimental Scientists, Data Evaluators and Compilers, and Nuclear Data Users Understand One Another?

    International Nuclear Information System (INIS)

    Usachev, L.N.

    1966-01-01

    The International Atomic Energy Agency organizes conferences on a wide variety of scientific subjects, all of which are of fundamental importance for the development of nuclear power. These include the technology of fuel elements, their stability in neutron fields, and chemical reprocessing as well as reactor physics, mathematical computational methods and the problems of protection and dosimetry. The problem of microscopic nuclear data, an essential aspect of reactor work, is just one of these many subjects. On the other hand, it should be remembered that the possibility of releasing nuclear energy was established in the first place by obtaining nuclear data on the fission process occurring in the uranium nucleus following the capture of a neutron and on the escape of the 2-3 secondary fission neutrons. In early nuclear power work the information provided by nuclear data was of considerable, even of decisive, importance. For example, the information available on the neutron balance in fast reactors showed that such reactors could operate as breeders and thus that it was worth while developing them. Strictly speaking, it is of course difficult to speak of a knowledge of nuclear data at this early period. It is perhaps more accurate to speak of the understanding of and the feeling for such data which grew up on the basis of the existing physical ideas on the fission of the nucleus, radiative capture and neutron scattering. Experimental data were very scanty but for that reason they were particularly valuable

  9. Can Experimental Scientists, Data Evaluators and Compilers, and Nuclear Data Users Understand One Another?

    Energy Technology Data Exchange (ETDEWEB)

    Usachev, L. N. [Institute of Physics and Energetics, Obninsk, USSR (Russian Federation)

    1966-07-01

    The International Atomic Energy Agency organizes conferences on a wide variety of scientific subjects, all of which are of fundamental importance for the development of nuclear power. These include the technology of fuel elements, their stability in neutron fields, and chemical reprocessing as well as reactor physics, mathematical computational methods and the problems of protection and dosimetry. The problem of microscopic nuclear data, an essential aspect of reactor work, is just one of these many subjects. On the other hand, it should be remembered that the possibility of releasing nuclear energy was established in the first place by obtaining nuclear data on the fission process occurring in the uranium nucleus following the capture of a neutron and on the escape of the 2-3 secondary fission neutrons. In early nuclear power work the information provided by nuclear data was of considerable, even of decisive, importance. For example, the information available on the neutron balance in fast reactors showed that such reactors could operate as breeders and thus that it was worth while developing them. Strictly speaking, it is of course difficult to speak of a knowledge of nuclear data at this early period. It is perhaps more accurate to speak of the understanding of and the feeling for such data which grew up on the basis of the existing physical ideas on the fission of the nucleus, radiative capture and neutron scattering. Experimental data were very scanty but for that reason they were particularly valuable.

  10. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  11. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

The ATLAS experiment has been taking data steadily since Autumn 2009, and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  12. Techniques for data compression in experimental nuclear physics problems

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.

    1984-01-01

Techniques and methods for data compression in physical experiments are evaluated. Data compression algorithms are divided into three groups: the first includes algorithms based on coding, which are characterized only by average indices over data files; the second includes algorithms with data processing elements; the third, algorithms for storage of converted data. The techniques based on data conversion are concluded to hold the greatest promise: they combine high compression efficiency with fast response, and they permit the storage of information close to the source data
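As a hedged illustration of the first group in this record (coding-based compression, characterized only by average indices over data files), here is a minimal run-length encoder of the kind applicable to detector data with long runs of identical samples; it is generic and not taken from the paper.

```python
def rle_encode(data):
    """Run-length encode a byte string into flat (value, count) byte pairs."""
    out = []
    for b in data:
        if out and out[-1][0] == b and out[-1][1] < 255:
            out[-1][1] += 1        # extend the current run
        else:
            out.append([b, 1])     # start a new run
    return bytes(v for pair in out for v in pair)

def rle_decode(encoded):
    """Invert rle_encode: expand each (value, count) pair."""
    return bytes(
        b for v, n in zip(encoded[::2], encoded[1::2]) for b in [v] * n
    )

# Sparse detector-like record: long runs of zeros around a short pulse.
raw = b"\x00" * 50 + b"\x07\x07\x01" + b"\x00" * 20
enc = rle_encode(raw)
print(len(raw), len(enc))  # the encoded form is much shorter for long runs
```

The average compression ratio of such a coder depends only on file-level run statistics, which is exactly the "average indices" limitation the abstract attributes to this group.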

  13. Data collection and evaluation for experimental computer science research

    Science.gov (United States)

    Zelkowitz, Marvin V.

    1983-01-01

The Software Engineering Laboratory has been monitoring software development at NASA Goddard Space Flight Center since 1976. The data collection activities of the Laboratory and some of the difficulties of obtaining reliable data are described. In addition, the application of this data collection process to a current prototyping experiment is reviewed.

  14. Modeling Aerobic Carbon Source Degradation Processes using Titrimetric Data and Combined Respirometric-Titrimetric Data: Experimental Data and Model Structure

    DEFF Research Database (Denmark)

    Gernaey, Krist; Petersen, B.; Nopens, I.

    2002-01-01

    Experimental data are presented that resulted from aerobic batch degradation experiments in activated sludge with simple carbon sources (acetate and dextrose) as substrates. Data collection was done using combined respirometric-titrimetric measurements. The respirometer consists of an open aerated....... For acetate, protons were consumed during aerobic degradation, whereas for dextrose protons were produced. For both carbon sources, a linear relationship was found between the amount of carbon source added and the amount of protons consumed (in case of acetate: 0.38 meq/mmol) or produced (in case of dextrose...

  15. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in several phases. The numerical finite element method provides a useful instrument which becomes increasingly effective as the geometrical and numerical modelling becomes more accurate. The definition of the constitutive model plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic-plasticity constitutive model can simulate a cyclic load experiment. The estimation of component life is the subsequent phase, and it requires complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus establishing the effectiveness of the models in accounting for all the phenomena that actually influence the life of the component. To this end, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibration of both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity was developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests.
The stress, strain, thermal results from the thermo

  16. Numerical treatment of experimental data in calibration procedures

    International Nuclear Information System (INIS)

    Moreno, C.

    1993-06-01

    A discussion of a numerical procedure to find the proportionality factor between two measured quantities is given in the framework of the least-squares method. Variable, as well as constant, amounts of experimental uncertainties are considered for each variable along their measured range. The variance of the proportionality factor is explicitly given as a closed analytical expression valid for the general case. Limits of the results obtained here have been studied allowing comparisons with those obtained using classical least-squares expressions. Analytical and numerical examples are also discussed. (author). 11 refs, 1 fig., 1 tab
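
    The simplest instance of such a calibration, sketched here for the case of uncertainties on y only (the paper's general treatment also handles uncertainties on x), is the weighted least-squares estimate of the proportionality factor k in y = k·x, with its closed-form variance:

```python
import numpy as np

def prop_factor(x, y, sigma_y):
    # Weighted least-squares fit of y = k*x with per-point
    # uncertainties sigma_y; returns k and its variance.
    # k = sum(w x y) / sum(w x^2),  var(k) = 1 / sum(w x^2),  w = 1/sigma^2
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    S = np.sum(w * x ** 2)
    k = np.sum(w * x * y) / S
    return k, 1.0 / S
```

    The variance expression shows directly how points with large x and small uncertainty dominate the calibration.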

  17. Experimental data on PCI and PCMI within the IFPE data base

    International Nuclear Information System (INIS)

    Killeen, J.C.; Sartori, E.; Turnbull, J.A.

    2005-01-01

    Following the conclusions reached at the end of the FUMEX-I code comparison exercise, the International Fuel Performance Experimental Database (IFPE) gave priority to collecting and assembling data sets addressing: thermal performance, fission gas release and pellet-clad mechanical interaction (PCMI). The data available that address the last topic are the subject of the current paper. The data on mechanical interaction in fuel rods fall into three broad categories: - Fuel rod diameter changes caused by periods spent at higher than normal power. - The result of power ramp testing to define a failure threshold. - Single effects studies to measure changes in gaseous porosity causing fuel swelling during controlled test conditions. In the first category, the fuel remained un-failed at the end of the test and the resulting permanent clad strain was due to PCMI caused by thermal expansion of the pellet and gaseous fuel swelling. Some excellent data in this category come from the last two Riso Fission Gas Release projects. The second category, namely, failure by pellet-clad interaction (PCI) and stress corrosion cracking (SCC) involves the simultaneous imposition of stress and the availability of corrosive fission products. A comprehensive list of tests carried out in the Swedish Studsvik reactor is included in the database. The third category is a recent acquisition to the database and comprises data on fuel swelling obtained from ramp tests on AGR fuel and carried out in the Halden BWR. This data set contains a wealth of well-qualified data which are invaluable for the development and validation of fuel swelling models. (authors)

  18. Ionizing radiation-induced cancers. Experimental and clinical data

    International Nuclear Information System (INIS)

    Joveniaux, Alain.

    1978-03-01

    This work attempts to give an overview of radiocarcinogenesis, both experimental and clinical. Experimentally, the possibility of radio-induced cancer formation has considerable doctrinal importance, since it proves beyond question the carcinogenic effect of radiation and also yields basic information on the essential constants implicated in its occurrence: the need for a latency time varying with the animal species and technique used, but quite long in relation to the specific lifetime of each species; the importance of a massive irradiation, more conducive to cancerisation as long as it produces no necroses liable to stop the formation of any subsequent neoplasia; and, finally, the rarity of its occurrence. Clinically, although the cause-and-effect relationship between treatment and cancer is sometimes difficult to establish categorically, the fact is that hundreds of particularly disturbing observations remain, and from their number often emerges, under well-defined circumstances, an undeniable clinical certainty. Most importantly, these observations fix the criteria necessary for a radio-induced cancer to arise, i.e.: the notion of a prior irradiation; the appearance of a cancer in the irradiation area; serious tissue damage related to an excessive radiation dose; and a long latency period between irradiation and appearance of the cancer [fr]

  19. Simulation of FRET dyes allows quantitative comparison against experimental data

    Science.gov (United States)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.

  20. Experimental data on fission and (n,xn) reactions

    International Nuclear Information System (INIS)

    Belier, G.; Chatillon, A.; Granier, T.; Laborie, J.M.; Laurent, B.; Ledoux, X.; Taieb, J.; Varignon, C.; Bauge, E.; Bersillon, O.; Aupiais, J.; Le Petit, G.; Authier, N.; Casoli, P.

    2011-01-01

    Investigations on neutron-induced fission of actinides and the deuteron breakup are presented. Neutron-induced fission has been studied for 10 years at the WNR (Weapons Neutron Research) neutron facility of the Los Alamos Neutron Science Center (LANSCE). Thanks to this white neutron source the evolution of the prompt fission neutron energy spectra as a function of the incident neutron energy has been characterized in a single experiment up to 200 MeV incident energy. For some isotopes the prompt neutron multiplicity has been extracted. These experimental results demonstrated the effect on the mean neutron energy of the neutron emission before scission for energies higher than the neutron binding energy. This extensive program (²³⁵U, ²³⁸U, ²³⁹Pu, ²³⁷Np and ²³²Th were measured) is completed by neutron spectra measurements on the CEA 4 MV accelerator. The D(n,2n) reaction is studied both theoretically and experimentally. The cross-section was calculated for several nucleon-nucleon interactions including the AV18 interaction. It has also been measured on the CEA 7 MV tandem accelerator at incident neutron energies up to 25 MeV. Uncertainties lower than 8% between 5 and 10 MeV were obtained. In particular these experiments have extended the measured domain for cross sections. (authors)

  1. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10⁸ instructions all the way down to a few times 10⁴ instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this, yet preserve the necessary portability of the code, I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme makes it possible to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture, and thus to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)

  2. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10⁸ instructions all the way down to a few times 10⁴ instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this, yet preserve the necessary portability of the code, I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme makes it possible to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture, and thus to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).
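
    The data-driven SPMD idea can be sketched in miniature (this is an illustration, not the paper's code): identical workers pull partitions of a central data structure from a work queue, and a lock per partition stands in for the abstract's "simple lock constructs". Load balancing falls out of the queue.

```python
import threading
import queue

# Central data structure: a list of "events", partitioned per item.
events = [list(range(i, i + 4)) for i in range(0, 32, 4)]
locks = [threading.Lock() for _ in events]      # one lock per partition
results = [None] * len(events)

work = queue.Queue()
for idx in range(len(events)):
    work.put(idx)                               # data-driven scheduling

def worker():
    # Same program runs in every worker (SPMD); concurrency is driven
    # purely by which partitions remain in the queue.
    while True:
        try:
            i = work.get_nowait()
        except queue.Empty:
            return
        with locks[i]:                          # guard this partition only
            results[i] = sum(events[i])         # stand-in for event analysis

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Because each worker locks only the partition it processes, synchronization overhead scales with the chosen granularity rather than with the whole data structure.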

  3. Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data

    DEFF Research Database (Denmark)

    Zugal, Stefan; Pinggera, Jakob; Neurauter, Manuel

    2017-01-01

    –tracking devices led to increased attention for objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP–Web, an open source platform for providing state-of-the-art data...

  4. The experimental operation of a seismological data centre at Blacknest

    International Nuclear Information System (INIS)

    Grover, F.H.

    1978-10-01

    A short account is given of the development and operation of a unit within Blacknest which acts as a centre for handling data received from overseas seismological array stations and stations in the British Isles and also exchanges data with other centres. The work has been carried out as a long-term experiment to assess the capability of small networks of existing research and development stations to participate in the monitoring of a possible future Comprehensive Test Ban treaty (CTB) and to gain experience of the operational requirements for Data Centres. A preliminary assessment of a UK National Technical Means (NTM) for verifying a CTB is obtained inter alia. (author)

  5. Improved experimental determination of critical-point data for tungsten

    International Nuclear Information System (INIS)

    Fucke, W.; Seydel, U.

    1980-01-01

    It is shown that under certain conditions in resistive pulse-heating experiments, refractory liquid metals can be heated up to the limit of thermodynamic stability (spinodal) of the superheated liquid. Here, an explosion-like decomposition takes place which is directly monitored by measurements of expansion, surface radiation, and electric resistivity, thus allowing the determination of the temperature-pressure dependence of the spinodal transition. A comparison of the spinodal equation obtained this way with theoretical models yields the critical temperature T_c, pressure p_c, and volume v_c. A completely experimentally-determined set of the critical parameters for tungsten is presented: T_c = (13400 ± 1400) K, p_c = (3370 ± 850) bar, v_c = (43 ± 4) cm³ mol⁻¹. (author)

  6. Hadronic models and experimental data for the neutrino beam production

    CERN Document Server

    Collazuol, G; Guglielmi, A M; Sala, P R

    2000-01-01

    The predictions of meson production by 450 GeV/c protons on Be using the Monte Carlo FLUKA standalone and GEANT-FLUKA and GEANT-GHEISHA in GEANT are compared with available experimental measurements. The comparison highlights the improvements of the hadronic generator models of the present standalone code FLUKA with respect to the 1992 version which is embedded into GEANT-FLUKA. Worse results were obtained with the GHEISHA package. A complete simulation of the SPS neutrino beam line at CERN showed significant variations in the intensity and composition of the neutrino beam when FLUKA standalone instead of the GEANT-FLUKA package is used to simulate particle production in the Be target.

  7. Hadronic models and experimental data for the neutrino beam production

    International Nuclear Information System (INIS)

    Collazuol, G.; Ferrari, A.; Guglielmi, A.; Sala, P.R.

    2000-01-01

    The predictions of meson production by 450 GeV/c protons on Be using the Monte Carlo FLUKA standalone and GEANT-FLUKA and GEANT-GHEISHA in GEANT are compared with available experimental measurements. The comparison highlights the improvements of the hadronic generator models of the present standalone code FLUKA with respect to the 1992 version which is embedded into GEANT-FLUKA. Worse results were obtained with the GHEISHA package. A complete simulation of the SPS neutrino beam line at CERN showed significant variations in the intensity and composition of the neutrino beam when FLUKA standalone instead of the GEANT-FLUKA package is used to simulate particle production in the Be target.

  8. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure at a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, was investigated during the period 22-28 June 1993 from both the experimental and the modelling points of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes for estimating the mixing layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, the model outputs were analyzed against radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the predicted mixed layer depths were compared with the estimates obtained from a simple box model, whose input requires hourly measurements of air concentrations and ground flux of ²²²Rn. (LN)

  9. Experimental data and boundary conditions for a Double - Skin Facade building in transparent insulation mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    was carried out in a full scale test facility ‘The Cube’, in order to compile three sets of high quality experimental data for validation purposes. The data sets are available for preheating mode, external air curtain mode and transparent insulation mode. The objective of this article is to provide the reader...... with all information about the experimental data and measurements, necessary to complete an independent empirical validation of any simulation tool. The article includes detailed information about the experimental apparatus, experimental principles and experimental full-scale test facility ‘The Cube...

  10. Passive and Active Observation: Experimental Design Issues in Big Data

    OpenAIRE

    Pesce, Elena; Riccomagno, Eva; Wynn, Henry P.

    2017-01-01

    Data can be collected in scientific studies via a controlled experiment or passive observation. Big data is often collected in a passive way, e.g. from social media. Understanding the difference between active and passive observation is critical to the analysis. For example in studies of causation great efforts are made to guard against hidden confounders or feedback which can destroy the identification of causation by corrupting or omitting counterfactuals (controls). Various solutions of th...

  11. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
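
    The simplest of the statistics the report discusses is the sample mean: when each measurement is known only as an interval [lo, hi], the mean is itself an interval, obtained by averaging the endpoints separately. A minimal sketch (illustrative, not the report's code):

```python
def interval_mean(intervals):
    # Bounds on the sample mean of interval-valued data:
    # the tightest enclosure is [mean of lower endpoints,
    # mean of upper endpoints].
    n = len(intervals)
    lo = sum(a for a, b in intervals) / n
    hi = sum(b for a, b in intervals) / n
    return lo, hi
```

    The mean is the easy case; as the report notes, other statistics such as the variance of interval data are much harder to bound and can require combinatorial algorithms.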

  12. 40 CFR 158.210 - Experimental use permit data requirements for product chemistry.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit data requirements for product chemistry. 158.210 Section 158.210 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for product chemistry. All product chemistry data, as described in...

  13. 40 CFR 158.270 - Experimental use permit data requirements for residue chemistry.

    Science.gov (United States)

    2010-07-01

    ... requirements for residue chemistry. 158.270 Section 158.270 Protection of Environment ENVIRONMENTAL PROTECTION... Experimental use permit data requirements for residue chemistry. All residue chemistry data, as described in... section 408(r) is sought. Residue chemistry data are not required for an experimental use permit issued on...

  14. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate a contribution of unrecognized experimental errors into the total experimental uncertainty as well as to include it in analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data has been presented. ► The procedure estimates a contribution of unrecognized errors in the total experimental uncertainty. ► The procedure was applied for processing half-life discrepant experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
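
    For orientation, here is the textbook Mandel–Paule procedure the paper builds on (the paper itself describes a modified variant): find the unrecognized-error variance s² at which the weighted sum of squared residuals equals n − 1, then form the weighted average with weights 1/(uᵢ² + s²).

```python
import numpy as np

def mandel_paule(x, u, tol=1e-10):
    # Classic Mandel-Paule consensus mean: solve chi2(s2) = n - 1 for the
    # between-measurement (unrecognized-error) variance s2 by bisection,
    # then return the weighted mean, its uncertainty, and s2.
    x, u = np.asarray(x, float), np.asarray(u, float)
    n = len(x)

    def chi2(s2):
        w = 1.0 / (u ** 2 + s2)
        xbar = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - xbar) ** 2)

    if chi2(0.0) <= n - 1:
        s2 = 0.0                      # data consistent: plain weighted mean
    else:
        lo, hi = 0.0, n * np.var(x) + 1e-12
        while chi2(hi) > n - 1:       # chi2 decreases monotonically in s2
            hi *= 2.0
        while hi - lo > tol * (1.0 + hi):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if chi2(mid) > n - 1 else (lo, mid)
        s2 = 0.5 * (lo + hi)

    w = 1.0 / (u ** 2 + s2)
    xbar = np.sum(w * x) / np.sum(w)
    return xbar, np.sqrt(1.0 / np.sum(w)), s2
```

    When the data are discrepant, s² > 0 inflates every weight's denominator, which is exactly why the resulting uncertainty exceeds the naive internal uncertainty of the weighted average.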

  15. Experimental protocol for packaging and encrypting multiple data

    International Nuclear Information System (INIS)

    Barrera, John Fredy; Trejos, Sorayda; Tebaldi, Myrian; Torroba, Roberto

    2013-01-01

    We present a novel single optical packaging and encryption (SOPE) procedure for multiple inputs. This procedure is based on a merging of a 2f scheme with a digital holographic technique to achieve efficient handling of multiple data. Through the 2f system with a random phase mask attached in its input plane, and the holographic technique, we obtain each processed input. A posteriori filtering and repositioning protocol on each hologram followed by an addition of all processed data, allows storing these data to form a single package. The final package is digitally multiplied by a second random phase mask acting as an encryption mask. In this way, the final user receives only one encrypted information unit and a single key, instead of a conventional multiple-image collecting method and several keys. Processing of individual images is cast into an optimization problem. The proposed optimization aims to simplify the handling and recovery of images while packing all of them into a single unit. The decoding process does not have the usual cross-talk or noise problems involved in other methods, as filtering and repositioning precedes the encryption step. All data are recovered in just one step at the same time by applying a simple Fourier transform operation and the decoding key. The proposed protocol takes advantage of optical processing and the versatility of the digital format. Experiments have been conducted using a Mach–Zehnder interferometer. An application is subsequently demonstrated to illustrate the feasibility of the SOPE procedure. (paper)
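
    A schematic digital analogue of one channel of this scheme (a sketch under stated assumptions, not the SOPE implementation, and without the multiplexing of several inputs into one package): a random phase mask at the input plane, a Fourier transform standing in for the 2f system, and a second random phase acting as the encryption key. Decoding is a single inverse transform with the key.

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(img, phase_in, phase_key):
    # Input-plane random phase mask, then the Fourier transform
    # (the "2f" step), then a second random phase as the key.
    return np.fft.fft2(img * np.exp(1j * phase_in)) * np.exp(1j * phase_key)

def decrypt(cipher, phase_key):
    # Remove the key, invert the transform; the modulus (intensity)
    # recovers the original image.
    return np.abs(np.fft.ifft2(cipher * np.exp(-1j * phase_key)))

img = rng.random((8, 8))                          # toy "input"
p_in = rng.uniform(0.0, 2 * np.pi, img.shape)
p_key = rng.uniform(0.0, 2 * np.pi, img.shape)    # the single key
recovered = decrypt(encrypt(img, p_in, p_key), p_key)
```

    As in the abstract, recovery is a one-step operation: one inverse Fourier transform with the single decoding key.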

  16. Distilling free-form natural laws from experimental data.

    Science.gov (United States)

    Schmidt, Michael; Lipson, Hod

    2009-04-03

    For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the "alphabet" used to describe those systems.
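
    One ingredient of such a search can be sketched simply (an illustration of the idea, not the authors' algorithm): score candidate expressions by how constant they remain along an observed trajectory, so that conserved quantities stand out. For harmonic-oscillator tracking data, the energy-like combination v² + x² wins.

```python
import numpy as np

# Simulated motion-tracking data for a harmonic oscillator.
t = np.linspace(0.0, 10.0, 500)
x = np.cos(t)
v = -np.sin(t)

# Candidate "laws" built from the observed variables.
candidates = {
    "v**2 + x**2": v ** 2 + x ** 2,
    "v + x": v + x,
    "v * x": v * x,
}

def variation(q):
    # Coefficient of variation: small means nearly conserved.
    return np.std(q) / (np.abs(np.mean(q)) + 1e-12)

best = min(candidates, key=lambda name: variation(candidates[name]))
```

    A real search must also penalize trivial invariants (constants, tautologies), which is the nontriviality principle the paper introduces.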

  17. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass appearances. In fact, statistics present a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  18. Experimental Study of Concealment Data in Video Sequences MPEG-2

    Directory of Open Access Journals (Sweden)

    A. A. Alimov

    2011-03-01

    Full Text Available MPEG-2 uses lossy video compression based on applying the discrete cosine transform (DCT) to small blocks of the encoded image. The result is a set of coefficients, each of which corresponds to a frequency index of the encoded block. The human eye, owing to its natural approximation, does not perceive the difference when the high-frequency DCT coefficients change. The investigated algorithm uses this feature of human vision to embed the required data invisibly in the video stream.
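
    A minimal sketch of the underlying idea (illustrative only; the paper's scheme, operating on an actual MPEG-2 stream, is more involved): hide one bit in the parity of a quantized high-frequency DCT coefficient of an 8×8 block. The orthonormal 8×8 DCT is built directly with NumPy.

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II matrix: rows are frequencies, columns are samples.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(b):
    return C @ b @ C.T

def idct2(d):
    return C.T @ d @ C

def embed_bit(block, bit, pos=(6, 7), q=8.0):
    # Force the parity of the quantized coefficient at a
    # high-frequency position to carry the hidden bit.
    d = dct2(block.astype(float))
    level = int(np.round(d[pos] / q))
    if (level & 1) != bit:
        level += 1
    d[pos] = level * q
    return idct2(d)

def extract_bit(block, pos=(6, 7), q=8.0):
    return int(np.round(dct2(block)[pos] / q)) & 1
```

    Because only one high-frequency coefficient moves by at most one quantization step, the pixel-domain change stays visually negligible, which is exactly the property the abstract exploits.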

  19. Experimental determination of critical data of liquid molybdenum

    International Nuclear Information System (INIS)

    Seydel, U.; Fucke, W.

    1978-01-01

    The submicrosecond resistive pulse heating of wire-shaped metallic samples in a highly incompressible medium leads to a thermodynamic state very close to the critical point of the liquid metal. The additional application of a static pressure may result in a critical or supercritical transition. First results on the critical data of molybdenum are reported: T_c = (11 150 ± 550) K, p_c = (5460 ± 1160) bar, v_c = (36.5 ± 3.5) cm³ mol⁻¹. (author)

  20. AKK update. Improvements from new theoretical input and experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Albino, S.; Kniehl, B.A.; Kramer, G. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    2008-06-15

    We perform a number of improvements to the previous AKK extraction of fragmentation functions for π±, K±, p/p̄, K⁰_S and Λ/Λ̄ particles at next-to-leading order. Inclusive hadron production measurements from pp(p̄) reactions at BRAHMS, CDF, PHENIX and STAR are added to the data sample. We use the charge-sign asymmetry of the produced hadrons in pp reactions to constrain the valence quark fragmentations. Data from e⁺e⁻ reactions in regions of smaller x and lower √s are added. Hadron mass effects are treated for all observables and, for each particle, the hadron mass used for the description of the e⁺e⁻ reaction is fitted. The baryons' fitted masses are found to be only around 1% above their true masses, while the values of the mesons' fitted masses have the correct order of magnitude. Large-x resummation is applied in the coefficient functions of the e⁺e⁻ reactions, and also in the evolution of the fragmentation functions, which in most cases results in a significant reduction of the minimized χ². To further exploit the data, all published normalization errors are incorporated via a correlation matrix. (orig.)

  1. The standard and degenerate primordial nucleosynthesis versus recent experimental data

    International Nuclear Information System (INIS)

    Esposito, S.; Mangano, G.; Miele, G.; Pisanti, O.

    2000-01-01

    We report the results on Big Bang Nucleosynthesis (BBN) based on an updated code, with accuracy of the order of 0.1% on the ⁴He abundance, compared with the predictions of other recent similar analyses. We discuss the compatibility of the theoretical results, for vanishing neutrino chemical potentials, with the observational data. Bounds on the number of relativistic neutrinos and the baryon abundance are obtained by a likelihood analysis. We also analyze the effect of large neutrino chemical potentials on primordial nucleosynthesis, motivated by the recent results on the Cosmic Microwave Background Radiation spectrum. The BBN exclusion plots for the electron neutrino chemical potential and the effective number of relativistic neutrinos are reported. We find that the standard BBN seems to be only marginally in agreement with the recent BOOMERANG and MAXIMA-1 results, while the agreement is much better for degenerate BBN scenarios with a large effective number of neutrinos, N_ν ∼ 10. (author)

  2. QCD-based pion distribution amplitudes confronting experimental data

    International Nuclear Information System (INIS)

    Bakulev, A.P.; Mikhajlov, S.V.; Stefanis, N.G.

    2001-01-01

    We use QCD sum rules with nonlocal condensates to recalculate more accurately the moments and their confidence intervals of the twist-2 pion distribution amplitude, including radiative corrections. We are thus able to construct an admissible set of pion distribution amplitudes which define a reliability region in the (a₂, a₄) plane of the Gegenbauer polynomial expansion coefficients. We emphasize that models like that of Chernyak and Zhitnitsky, as well as the asymptotic solution, are excluded from this set. We show that the determined (a₂, a₄) region strongly overlaps with that extracted from the CLEO data by Schmedding and Yakovlev, and that this region is also not far from the results of the first direct measurement of the pion valence quark momentum distribution by the Fermilab E791 collaboration. Comparisons with recent lattice calculations and instanton-based models are briefly discussed.
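    For reference, the expansion underlying the (a₂, a₄) parameterization is, in its standard truncated Gegenbauer form (a textbook expression, assumed here rather than quoted from the paper):

```latex
\varphi_\pi(x,\mu^2) = 6x(1-x)\left[1 + a_2(\mu^2)\,C_2^{3/2}(2x-1) + a_4(\mu^2)\,C_4^{3/2}(2x-1)\right],
```

    so that each admissible model corresponds to a point in the $(a_2, a_4)$ plane.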

  3. PARALLEL ITERATIVE RECONSTRUCTION OF PHANTOM CATPHAN ON EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    M. A. Mirzavand

    2016-01-01

    Full Text Available The principles of fast parallel iterative algorithms based on the use of graphics accelerators and the OpenGL library are considered in the paper. The proposed approach provides simultaneous minimization of the residuals of the desired solution and of the total variation of the reconstructed three-dimensional image. The number of necessary input data, i.e. conical X-ray projections, can be reduced several times, which allows a corresponding reduction of the radiation exposure to the patient while maintaining the necessary contrast and spatial resolution of the three-dimensional image. The heuristic iterative algorithm can be used as an alternative to the well-known three-dimensional Feldkamp algorithm.
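    The simultaneous minimization of the residual and the total variation described above can be sketched with a simple (sub)gradient-descent loop; this is an illustrative one-dimensional toy, not the paper's parallel GPU/OpenGL implementation, and all names are ours:

```python
def tv(x):
    # total variation of a 1-D signal
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def reconstruct(A, b, lam=0.01, step=0.01, iters=500):
    """Minimise 0.5*||A x - b||^2 + lam*TV(x) by (sub)gradient descent."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
        # gradient of the data term: A^T r
        g = [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]
        # subgradient of the TV term
        for i in range(n - 1):
            s = lam * (1 if x[i + 1] > x[i] else -1 if x[i + 1] < x[i] else 0)
            g[i + 1] += s
            g[i] -= s
        x = [x[j] - step * g[j] for j in range(n)]
    return x
```

    For a piecewise-constant signal and a well-conditioned system matrix, a small `lam` keeps the data term dominant while the TV term suppresses oscillations.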

  4. HZETRN radiation transport validation using balloon-based experimental data

    Science.gov (United States)

    Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.

    2018-05-01

    The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that
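    The scalar summaries quoted above (median relative difference, share of uncertainties beyond ±40%, share of under-predictions) reduce to a few lines of code; a minimal sketch that ignores the measurement-uncertainty intervals folded into the actual validation metric:

```python
def relative_differences(model, measured):
    """Relative model-measurement difference (model - data) / data for each point."""
    return [(m - d) / d for m, d in zip(model, measured)]

def summarize(rel):
    """Median, fraction with |rel| > 40%, and fraction of under-predictions."""
    rel_sorted = sorted(rel)
    n = len(rel_sorted)
    median = (rel_sorted[n // 2] if n % 2 else
              0.5 * (rel_sorted[n // 2 - 1] + rel_sorted[n // 2]))
    frac_large = sum(1 for r in rel if abs(r) > 0.40) / n
    frac_under = sum(1 for r in rel if r < 0) / n  # model under-predicts
    return median, frac_large, frac_under
```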

  5. Detection of damage in welded structure using experimental modal data

    Energy Technology Data Exchange (ETDEWEB)

    Abu Husain, N [Transportation Research Alliance, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Ouyang, H, E-mail: nurul@fkm.utm.my, E-mail: h.ouyang@liv.ac.uk [Department of Engineering, Harrison-Hughes Building, University of Liverpool, Brownlow Hill, Liverpool L69 3GH (United Kingdom)

    2011-07-19

    A typical automotive structure could contain thousands of spot weld joints that contribute significantly to the vehicle's structural stiffness and dynamic characteristics. However, some of these joints may be imperfect or even absent after the manufacturing process, and they are also highly susceptible to damage due to operational and environmental conditions during the vehicle lifetime. Therefore, early detection and estimation of damage are important so that necessary actions can be taken to avoid further problems. Changes in physical parameters due to the existence of damage in a structure often lead to alteration of vibration modes, demonstrating the dependency between the vibration characteristics and the physical properties of structures. A sensitivity-based model updating method, performed using a combination of MATLAB and NASTRAN, has been selected for the purpose of this work. The updating procedure is regarded as parameter identification, which aims to bring the numerical predictions as close as possible to the measured natural frequencies and mode shape data of the damaged structure in order to identify the damage parameters (characterised by reductions in the Young's modulus of the weld patches to indicate the loss of material/stiffness in the damaged region).

  6. STATISTICS, Program System for Statistical Analysis of Experimental Data

    International Nuclear Information System (INIS)

    Helmreich, F.

    1991-01-01

    1 - Description of problem or function: The package is composed of 83 routines, the most important of which are the following: BINDTR: Binomial distribution; HYPDTR: Hypergeometric distribution; POIDTR: Poisson distribution; GAMDTR: Gamma distribution; BETADTR: Beta-1 and Beta-2 distributions; NORDTR: Normal distribution; CHIDTR: Chi-square distribution; STUDTR: Distribution of Student's t; FISDTR: Distribution of F; EXPDTR: Exponential distribution; WEIDTR: Weibull distribution; FRAKTIL: Calculation of the fractiles of the normal, chi-square, Student's, and F distributions; VARVGL: Test for equality of variance for several sample observations; ANPAST: Kolmogorov-Smirnov test and chi-square test of goodness of fit; MULIRE: Multiple linear regression analysis for a dependent variable and a set of independent variables; STPRG: Performs a stepwise multiple linear regression analysis for a dependent variable and a set of independent variables. At each step, the variable entered into the regression equation is the one which has the greatest amount of variance in common with the dependent variable. Any independent variable can be forced into or deleted from the regression equation, irrespective of its contribution to the equation. LTEST: Tests the hypothesis of linearity of the data. SPRANK: Calculates the Spearman rank correlation coefficient. 2 - Method of solution: VARVGL: Bartlett's, Cochran's, and Hartley's tests are performed in the program. MULIRE: The Gauss-Jordan method is used in the solution of the normal equations. STPRG: The abbreviated Doolittle method is used to (1) determine the variables to enter into the regression, and (2) complete the regression coefficient calculation. 3 - Restrictions on the complexity of the problem: VARVGL: Hartley's test is only performed if the sample observations are all of the same size.
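    As an illustration of the VARVGL-style tests for equality of variances, Bartlett's statistic can be computed directly from its textbook definition; this is a pure-Python sketch, not the package's original implementation:

```python
import math

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def bartlett_statistic(groups):
    """Bartlett's test statistic for equality of variances across groups.
    Under H0 it is approximately chi-square distributed with k-1 degrees of freedom."""
    k = len(groups)
    n = [len(g) for g in groups]
    s2 = [sample_variance(g) for g in groups]
    N = sum(n)
    sp2 = sum((ni - 1) * si for ni, si in zip(n, s2)) / (N - k)  # pooled variance
    num = (N - k) * math.log(sp2) - sum((ni - 1) * math.log(si)
                                        for ni, si in zip(n, s2))
    corr = 1 + (sum(1 / (ni - 1) for ni in n) - 1 / (N - k)) / (3 * (k - 1))
    return num / corr
```

    The statistic is compared against the chi-square critical value with k−1 degrees of freedom; values near zero indicate homogeneous variances.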

  7. Root plasticity buffers competition among plants: theory meets experimental data.

    Science.gov (United States)

    Schiffers, Katja; Tielbörger, Katja; Tietjen, Britta; Jeltsch, Florian

    2011-03-01

    Morphological plasticity is a striking characteristic of plants in natural communities. In the context of foraging behavior particularly, root plasticity has been documented for numerous species. Root plasticity is known to mitigate competitive interactions by reducing the overlap of the individuals' rhizospheres. But despite its obvious effect on resource acquisition, plasticity has been generally neglected in previous empirical and theoretical studies estimating interaction intensity among plants. In this study, we developed a semi-mechanistic model that addresses this shortcoming by introducing the idea of compensatory growth into the classical zone-of-influence (ZOI) and field-of-neighborhood (FON) approaches. The model parameters describing the belowground plastic sphere of influence (PSI) were parameterized using data from an accompanying field experiment. Measurements of the uptake of a stable nutrient analogue at distinct distances to the neighboring plants showed that the study species responded plastically to belowground competition by avoiding overlap of individuals' rhizospheres. An unexpected finding was that the sphere of influence of the study species Bromus hordeaceus could be best described by a unimodal function of distance to the plant's center, and not by a continuously decreasing function as commonly assumed. We employed the parameterized model to investigate the interplay between plasticity and two other important factors determining the intensity of competitive interactions: overall plant density and the distribution of individuals in space. The simulation results confirm that the reduction of competition intensity due to morphological plasticity strongly depends on the spatial structure of the competitive environment. We advocate the use of semi-mechanistic simulations that explicitly consider morphological plasticity to improve our mechanistic understanding of plant interactions.
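    The core ZOI/PSI idea, that plastic displacement of the spheres of influence reduces rhizosphere overlap, can be illustrated with a toy one-dimensional overlap calculation; the unimodal kernel and the `shift` parameter are our illustrative assumptions, not the paper's fitted model:

```python
import math

def psi_unimodal(r):
    # unimodal sphere of influence: peaks away from the stem, zero at the centre
    return r * math.exp(-r)

def overlap(pos1, pos2, psi, shift=0.0, dx=0.01, extent=10.0):
    """Overlap of two plants' influence along a 1-D transect.
    `shift` moves each plant's influence away from its neighbour (plastic response)."""
    total = 0.0
    x = -extent
    while x <= extent:
        f1 = psi(abs(x - (pos1 - shift)))   # plant 1 shifts left, away from plant 2
        f2 = psi(abs(x - (pos2 + shift)))   # plant 2 shifts right
        total += min(f1, f2) * dx
        x += dx
    return total
```

    Increasing `shift` separates the two influence fields, so the min-overlap integral (a crude proxy for competition intensity) decreases.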

  8. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  9. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    International Nuclear Information System (INIS)

    Geng, S.M.; Tew, R.C.

    1994-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement

  10. 40 CFR 158.2172 - Experimental use permit microbial pesticides residue data requirements table.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit microbial....2172 Experimental use permit microbial pesticides residue data requirements table. (a) General. Sections 158.100 through 158.130 describe how to use this table to determine the residue chemistry data...

  11. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    Full Text Available The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial, and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.
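    Treating the trial as a fully randomised complete block design leads to the usual two-way ANOVA decomposition; a minimal sketch with one observation per treatment-block cell (a real analysis of the EBP data would also include the suggested covariates):

```python
def rcbd_anova(table):
    """ANOVA for a randomised complete block design.
    `table[t][b]` is the response of treatment t in block b (one observation per cell)."""
    t, b = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (t * b)
    treat_means = [sum(row) / b for row in table]
    block_means = [sum(table[i][j] for i in range(t)) / t for j in range(b)]
    ss_treat = b * sum((m - grand) ** 2 for m in treat_means)
    ss_block = t * sum((m - grand) ** 2 for m in block_means)
    ss_total = sum((table[i][j] - grand) ** 2 for i in range(t) for j in range(b))
    ss_error = ss_total - ss_treat - ss_block      # residual after removing block effects
    df_treat, df_error = t - 1, (t - 1) * (b - 1)
    f_treat = (ss_treat / df_treat) / (ss_error / df_error)
    return ss_treat, ss_block, ss_error, f_treat
```

    The F statistic for treatments is then referred to an F distribution with (t−1, (t−1)(b−1)) degrees of freedom.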

  12. Experimental data for the slug two-phase flow characteristics in horizontal pipeline

    Directory of Open Access Journals (Sweden)

    Abdalellah O. Mohmmed

    2018-02-01

    Full Text Available The data presented in this article were the basis for the study reported in the research article entitled “Statistical assessment of experimental observation on the slug body length and slug translational velocity in a horizontal pipe” (Al-Kayiem et al., 2017) [1], which presents an experimental investigation of the slug velocity and slug body length for air-water two-phase flow in a horizontal pipe. Here, the experimental set-up and the major instruments used for obtaining the computed data are explained in detail. The data are presented in the form of tables and videos.

  13. Experimental data at high PT and its interpretation: the role of theory

    Energy Technology Data Exchange (ETDEWEB)

    Belonoshko, A. B.; Rosengren, A.

    2011-07-01

    Experiments relevant for planetary science are often performed under extreme conditions of pressure and temperature. This makes them technically difficult. The results are often difficult to interpret correctly, especially in cases where experimental data are scarce and experimental trends are difficult to establish. Theory, while normally inferior in the precision of the data it delivers, is superior in providing the big picture and the details behind materials' behavior. We consider the experiments performed for deuterium, Mo, and Fe. We demonstrate that when experimental data are verified by theory, significant insight can be gained. (Author) 26 refs.

  14. Operation and management manual of JT-60 experimental data analysis system

    International Nuclear Information System (INIS)

    Hirayama, Takashi; Morishima, Soichi

    2014-03-01

    In the Japan Atomic Energy Agency Naka Fusion Institute, many experiments have been conducted using the large tokamak device JT-60, aiming at the realization of a fusion power plant. In order to optimize the JT-60 experiment and to investigate complex characteristics of plasma, the JT-60 experimental data analysis system was developed and used for collecting, referencing and analyzing the JT-60 experimental data. The main components of the system are a data analysis server and a database server, for the analysis and accumulation of the experimental data, respectively. Other peripheral devices of the system are magnetic disk units, an NAS (Network Attached Storage) device, and a backup tape drive. This is an operation and management manual for the JT-60 experimental data analysis system. (author)

  15. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  16. Experimental and evaluated data on the discrete level excitation function of the 238U(n,n') reaction

    International Nuclear Information System (INIS)

    Simakov, S.P.

    1991-01-01

    Experimental data on the ²³⁸U(n,n') excitation function are compiled and analyzed. The experimental data are compared with the evaluated data from the BNAB, ENDF/B-IV and ENDL-78 evaluated data libraries. It is shown that the BNAB evaluated data are in good agreement with the existing experimental data, including new results from recent experiments. (author). 26 refs, 2 figs, 2 tabs

  17. Containment accident analysis using CONTEMPT4/MOD2 compared with experimental data

    International Nuclear Information System (INIS)

    Metcalfe, L.J.; Hargroves, D.W.; Wells, R.A.

    1978-01-01

    CONTEMPT4/MOD2 is a new computer program developed to predict the long-term thermal hydraulic behavior of light-water reactor and experimental containment systems during postulated loss-of-coolant accident (LOCA) conditions. Improvements over previous containment codes include multicompartment capability and ice condenser analytical models. A program description and comparisons of calculated results with experimental data are presented

  18. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop

    International Nuclear Information System (INIS)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for single-phase natural convection in an experimental sodium loop. The tests were conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) Facility, an engineering-scale high-temperature sodium facility at the Oak Ridge National Laboratory used for thermal-hydraulic testing of simulated LMFBR subassemblies at normal and off-normal operating conditions. Heat generation in the 19-pin assembly during these tests was typical of decay heat levels. Tests were conducted both with zero initial forced flow and with a small initial forced flow. The bypass line was closed in most tests, but open in one. The computer code used to analyze these tests, LONAC (LOw flow and NAtural Convection), is an ORNL-developed, fast-running, one-dimensional, single-phase finite-difference model for simulating forced- and free-convection transients in the THORS loop.

  19. Outline and handling manual of experimental data time slice monitoring software 'SLICE'

    International Nuclear Information System (INIS)

    Shirai, Hiroshi; Hirayama, Toshio; Shimizu, Katsuhiro; Tani, Keiji; Azumi, Masafumi; Hirai, Ken-ichiro; Konno, Satoshi; Takase, Keizou.

    1993-02-01

    We have developed a software package, 'SLICE', which maps various kinds of plasma experimental data measured at different geometrical positions in JT-60U and JFT-2M onto the equilibrium magnetic configuration and treats them as functions of the volume-averaged minor radius ρ. Experimental data can thus be handled uniformly. 'SLICE' provides a rich set of commands that make it easy to process the mapped data. Experimental data measured as line-integrated values are transformed by Abel inversion. The mapped data are fitted to a functional form and saved to the database 'MAPDB'. 'SLICE' can read the data back from 'MAPDB' and re-display and transform them. In addition, 'SLICE' creates run data for the orbit-following Monte Carlo code 'OFMC' and for the tokamak predictive and interpretation code system 'TOPICS'. This report summarizes the outline and usage of 'SLICE'. (author)
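    The Abel-inversion step for line-integrated measurements can be illustrated with an onion-peeling discretisation for an axisymmetric profile; this is a generic scheme shown for illustration, and is not claimed to be the algorithm implemented in 'SLICE':

```python
import math

def onion_peel(projection, dr=1.0):
    """Invert chord-integrated (line-of-sight) data to a radial profile by onion peeling.
    projection[i] is the line integral along the chord at impact parameter y = i*dr,
    assuming an axisymmetric emitter divided into N concentric rings of width dr."""
    n = len(projection)

    def length(i, j):
        # path length of chord i through ring j (edges at R_j = j*dr and R_{j+1})
        y = i * dr
        outer = math.sqrt(((j + 1) * dr) ** 2 - y * y)
        inner = math.sqrt((j * dr) ** 2 - y * y) if j * dr > y else 0.0
        return 2.0 * (outer - inner)

    f = [0.0] * n
    for i in range(n - 1, -1, -1):          # peel from the outermost ring inwards
        s = sum(length(i, j) * f[j] for j in range(i + 1, n))
        f[i] = (projection[i] - s) / length(i, i)
    return f
```

    Because the system is triangular, the reconstruction is exact for a profile that is constant within each ring.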

  20. STRAIN-CONTROLLED BIAXIAL TENSION OF NATURAL RUBBER: NEW EXPERIMENTAL DATA

    KAUST Repository

    Pancheri, Francesco Q.; Dorfmann, Luis

    2014-01-01

    model to derive stress-stretch relations to validate the experimental data. The material model parameters are determined using the primary loading path in uniaxial and equibiaxial tension. Excellent agreement is found when the model is used to predict

  1. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the Federal...

  2. LBA-ECO LC-02 Forest Flammability Data, Catuaba Experimental Farm, Acre, Brazil: 1998

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set provides the results of controlled burns conducted to assess the flammability of mature forests on the Catuaba Experimental Farm of the...

  3. A semantic web approach applied to integrative bioinformatics experimentation: a biological use case with genomics data.

    NARCIS (Netherlands)

    Post, L.J.G.; Roos, M.; Marshall, M.S.; van Driel, R.; Breit, T.M.

    2007-01-01

    The numerous public data resources make integrative bioinformatics experimentation increasingly important in life sciences research. However, it is severely hampered by the way the data and information are made available. The semantic web approach enhances data exchange and integration by providing

  4. Updating and using the international non-neutron experimental nuclear data base in ''Generalized EXFOR'' format

    International Nuclear Information System (INIS)

    Zhuravleva, G.M.; Chukreev, F.E.

    1985-10-01

    A software system for the automatic preparation of non-formalized textual information for the international exchange of nuclear data in the ''Generalized Exchange Format (EXFOR)'' is described. The ''Generalized EXFOR'' format is briefly outlined and data are given on the size of the international non-neutron experimental data base in this format. (author)

  5. Summary of climatic data for the Bonanza Creek Experimental Forest, interior Alaska.

    Science.gov (United States)

    Richard J. Barney; Erwin R. Berglund

    1973-01-01

    A summary of climatic data during the 1968-71 growing seasons is presented for the subarctic Bonanza Creek Experimental Forest located near Fairbanks, Alaska. Data were obtained from three weather station sites at elevations of 1,650, 1,150, and 550 feet from May until September each year. Data are for relative humidity, rainfall, and maximum, minimum, and mean...

  6. 78 FR 18576 - Agency Information Collection Activities; Comment Request; Experimental Sites Data Collection...

    Science.gov (United States)

    2013-03-27

    ... and provide the requested data in the desired format. ED is soliciting comments on the proposed... specific information/performance data for analysis of the experiments. This effort will assist the...; Comment Request; Experimental Sites Data Collection Instrument AGENCY: Department of Education (ED...

  7. Challenges in Data Collection and Analysis in Multi-National Experimentation

    Science.gov (United States)

    2007-06-01

    sampling of personnel when individual interviews would be labor intensive and time consuming. Ideally surveys contribute to the cognitive aspect of...the experimental data for the data collection plan. In addition to gaining data concerning the cognitive aspect , surveys can be used when no other

  8. Mathematical processing of experimental data on neutron yield from separate fission fragments

    International Nuclear Information System (INIS)

    Basova, B.G.; Rabinovich, A.D.; Ryazanov, D.K.

    1975-01-01

    The algorithm is described for processing multi-dimensional experiments on measurements of prompt neutron emission from separate fission fragments. In processing the data, the effects of a number of experimental corrections are correctly taken into account: random coincidence background, neutron spectrum, neutron detector efficiency, and instrumental angular resolution. On the basis of the described algorithm, a program for the BESM-4 computer was implemented, and the treatment of experimental data was performed for the spontaneous fission of ²⁵²Cf

  9. The importance of the accuracy of the experimental data for the prediction of solubility

    Directory of Open Access Journals (Sweden)

    SLAVICA ERIĆ

    2010-04-01

    Full Text Available Aqueous solubility is an important factor influencing several aspects of the pharmacokinetic profile of a drug. Numerous publications present different methodologies for the development of reliable computational models for the prediction of solubility from structure. The quality of such models can be significantly affected by the accuracy of the employed experimental solubility data. In this work, the importance of the accuracy of the experimental solubility data used for model training was investigated. Three data sets were used as training sets: data set 1, containing solubility data collected from various literature sources using a few criteria (n = 319); data set 2, created by substituting 28 values from data set 1 with uniformly determined experimental data from one laboratory (n = 319); and data set 3, created by adding to data set 2 a further 56 compounds for which the solubility was also determined under uniform conditions in the same laboratory (n = 375). The selection of the most significant descriptors was performed by the heuristic method, using one-parameter and multi-parameter analysis. The correlations between the most significant descriptors and solubility were established using multi-linear regression analysis (MLR) for all three investigated data sets. Notable differences were observed between the equations corresponding to the different data sets, suggesting that models updated with new experimental data need to be additionally optimized. It was successfully shown that the inclusion of uniform experimental data consistently leads to an improvement in the correlation coefficients. These findings contribute to an emerging consensus that improving the reliability of solubility prediction requires the inclusion in the data set of many diverse compounds for which solubility was measured under standardized conditions.
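    The MLR step amounts to an ordinary least-squares fit of solubility against the selected descriptors; a self-contained sketch via the normal equations, using placeholder data rather than any of the actual training sets:

```python
def mlr_fit(X, y):
    """Ordinary least-squares fit y ~ b0 + b1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    p = len(rows[0])
    # normal equations: (A^T A) b = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for c in range(p):
        pivot = max(range(c, p), key=lambda r: abs(ata[r][c]))
        ata[c], ata[pivot] = ata[pivot], ata[c]
        aty[c], aty[pivot] = aty[pivot], aty[c]
        for r in range(c + 1, p):
            f = ata[r][c] / ata[c][c]
            for k in range(c, p):
                ata[r][k] -= f * ata[c][k]
            aty[r] -= f * aty[c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (aty[r] - sum(ata[r][k] * beta[k]
                                for k in range(r + 1, p))) / ata[r][r]
    return beta
```

    Refitting the same model on data sets with different experimental accuracy then shows up directly as shifts in the fitted coefficients, which is the effect the study quantifies.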

  10. An Open-Source Data Storage and Visualization Back End for Experimental Data

    DEFF Research Database (Denmark)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert

    2014-01-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of the experiment, and interfering with the experiment if needed. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The logging of data from independent clients leads to high…
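    One common design for continuously logged system parameters is a criterion-based logger that stores a reading only when it changes appreciably or a timeout expires; the following sketch is a hypothetical illustration of that idea, with class and parameter names invented here rather than taken from the described system:

```python
import time

class CriterionLogger:
    """Buffer continuous readings; keep only points that differ from the last
    stored value by more than `delta`, or that arrive after `timeout` seconds."""

    def __init__(self, delta=0.5, timeout=600.0):
        self.delta = delta
        self.timeout = timeout
        self.stored = []           # (timestamp, value) rows destined for the database
        self._last = None          # last stored (timestamp, value)

    def add(self, value, timestamp=None):
        t = time.time() if timestamp is None else timestamp
        if (self._last is None
                or abs(value - self._last[1]) > self.delta
                or t - self._last[0] > self.timeout):
            self.stored.append((t, value))
            self._last = (t, value)
```

    Filtering at the client keeps the central database small while still recording every significant change and a periodic heartbeat.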

  11. Analysis of experimental data on relativistic nuclear collisions in the Lobachevsky space

    International Nuclear Information System (INIS)

    Baldin, A.A.; Baldina, Eh.G.; Kladnitskaya, E.N.; Rogachevskij, O.V.

    2004-01-01

    Relativistic nuclear collisions are considered in terms of the relative 4-velocity and rapidity space (the Lobachevsky space). The connection between geometric relations in the Lobachevsky space and measurable (experimentally determined) kinematic characteristics (transverse momentum, longitudinal rapidity, square relative 4-velocity b_ik, etc.) is discussed. The experimental data obtained using the propane bubble chamber are analyzed on the basis of triangulation in the Lobachevsky space. General properties of the distributions of relativistic invariants characterizing the geometric positions of particles in the Lobachevsky space are discussed. The transition energy region is considered on the basis of a relativistic approach to experimental data on multiparticle processes. Possible applications of the obtained results to the planning of experimental research and the analysis of data on multiple particle production are discussed
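    For reference, the square relative 4-velocity mentioned above is conventionally built from the 4-velocities u_i = p_i/m_i and is directly related to distance in the Lobachevsky (rapidity) space; this is the standard definition, assumed here to match the paper's convention:

```latex
b_{ik} = -\left(u_i - u_k\right)^2 = 2\left(u_i \cdot u_k - 1\right),
\qquad
\cosh\rho_{ik} = u_i \cdot u_k,
```

    where $\rho_{ik}$ is the Lobachevsky-space distance between the two particles, so that triangulation in that space reduces to evaluating pairwise invariants $u_i \cdot u_k$.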

  12. BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Jones, Peter L

    2012-04-15

    Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.

  13. Experimental determination of (p, ρ, T) data for binary mixtures of methane and helium

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Segovia, J.J.; Chamorro, C.R.

    2016-01-01

    Highlights: • Accurate density data for two binary mixtures of methane and helium are presented. • Experimental data are compared with the densities calculated from different EOS. • Deviations from GERG-2008 exceeded 3% for some points. • Deviations from AGA8-DC92 did not exceed 0.3% at any experimental point. • The relative deviations are clearly higher for GERG-2008 than for AGA8-DC92. - Abstract: The basis for the development and evaluation of equations of state for mixtures is experimental data for several thermodynamic properties. The quality and the availability of experimental data limit the achievable accuracy of the equation. Referring to the fundamentals of the GERG-2008 wide-range equation of state, no suitable data were available for many mixtures containing secondary natural gas components. This work provides accurate experimental (p, ρ, T) data for two binary mixtures of methane with helium (0.95 (amount-of-substance fraction) CH₄ + 0.05 He and 0.90 CH₄ + 0.10 He). Density measurements were performed at temperatures between (250 and 400) K and pressures up to 20 MPa by using a single-sinker densimeter with magnetic suspension coupling. Experimental data were compared with the corresponding densities calculated from the GERG-2008 and the AGA8-DC92 equations of state. Deviations from GERG-2008 were found within a 2% band for the (0.95 CH₄ + 0.05 He) mixture but exceeded the 3% limit for the (0.90 CH₄ + 0.10 He) mixture. The highest deviations were observed at T = 250 K and pressures between (17 and 19) MPa. Values calculated from AGA8-DC92, however, deviated from the experimental data by only 0.1% at high pressures and exceeded the 0.2% limit only at temperatures of 300 K and above, for the (0.90 CH₄ + 0.10 He) mixture.
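    The percentage deviations quoted above correspond to the usual relative-deviation measure between experimental and calculated densities; a trivial sketch of that bookkeeping:

```python
def percent_deviation(rho_exp, rho_calc):
    """Relative deviation of an equation-of-state density from experiment, in percent."""
    return 100.0 * (rho_exp - rho_calc) / rho_exp

def max_abs_deviation(exp_points, calc_points):
    """Largest absolute percentage deviation over a set of (p, T) state points."""
    return max(abs(percent_deviation(e, c)) for e, c in zip(exp_points, calc_points))
```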

  14. EXFOR – a global experimental nuclear reaction data repository: Status and new developments

    Directory of Open Access Journals (Sweden)

    Semkova Valentina

    2017-01-01

    Full Text Available Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data assembled and incorporated within the EXFOR database. Recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.

  15. GeneLab Phase 2: Integrated Search Data Federation of Space Biology Experimental Data

    Science.gov (United States)

    Tran, P. B.; Berrios, D. C.; Gurram, M. M.; Hashim, J. C. M.; Raghunandan, S.; Lin, S. Y.; Le, T. Q.; Heher, D. M.; Thai, H. T.; Welch, J. D.

    2016-01-01

    The GeneLab project is a science initiative to maximize the scientific return of omics data collected from spaceflight and from ground simulations of microgravity and radiation experiments, supported by a data system for a public bioinformatics repository and collaborative analysis tools for these data. The mission of GeneLab is to maximize the utilization of the valuable biological research resources aboard the ISS by collecting genomic, transcriptomic, proteomic and metabolomic (so-called omics) data to enable the exploration of the molecular network responses of terrestrial biology to space environments using a systems biology approach. All GeneLab data are made available to a worldwide network of researchers through its open-access data system. GeneLab is currently being developed by NASA to support Open Science biomedical research in order to enable the human exploration of space and improve life on Earth. Open access to Phase 1 of the GeneLab Data Systems (GLDS) was implemented in April 2015. Download volumes have grown steadily, mirroring the growth in curated space biology research data sets (61 as of June 2016), now exceeding 10 TB/month, with over 10,000 file downloads since the start of Phase 1. For the period April 2015 to May 2016, most frequently downloaded were data from studies of Mus musculus (39%) followed closely by Arabidopsis thaliana (30%), with the remaining downloads roughly equally split across 12 other organisms (each <10% of total downloads). GLDS Phase 2 is focusing on interoperability, supporting data federation, including integrated search capabilities, of GLDS-housed data sets with external data sources, such as gene expression data from NIH/NCBI's Gene Expression Omnibus (GEO), proteomic data from EBI's PRIDE system, and metagenomic data from Argonne National Laboratory's MG-RAST. GEO and MG-RAST employ specifications for investigation metadata that are different from those used by the GLDS and PRIDE (e.g., ISA-Tab).
The GLDS Phase 2 system

  16. 1988 Progress report of the EDF department for the analysis of experimental data and measurements

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The 1988 activity report of the department for the analysis of experimental data and measurements (Department of Retour d'Experience Mesures-Essais of EDF, France) is presented. The mission of the department is to collect and investigate data from nuclear power plant operations. Investigations started before 1988 were continued through 1988. The department's main activities are: technology and information transfer from experimental activities, the construction of a standard data acquisition and processing system, the actions involving the N4 turbine, and the modelling and development of new non-destructive testing methods. The most important facts and activities carried out in 1988 are presented [fr

  17. Assessment CANDU physics codes using experimental data - part 1: criticality measurement

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Choi, Hang Bok; Jeong, Chang Joon

    2001-08-01

    In order to assess the applicability of the MCNP-4B code to the heavy-water-moderated, light-water-cooled, pressure-tube-type reactor, MCNP-4B physics calculations have been carried out for the Deuterium Critical Assembly (DCA), and the results were compared with experimental data. In this study, key safety parameters such as the multiplication factor, void coefficient, local power peaking factor and bundle power distribution in the scattered core are simulated. In order to use the cross-section data consistently for the fuels to be analyzed in the future, new MCNP libraries have been generated from ENDF/B-VI release 3. In general, the MCNP-4B calculation results show good agreement with the experimental data of the DCA core. After benchmarking MCNP-4B against available experimental data, it will be used as the reference tool to benchmark design and analysis codes for advanced CANDU fuels
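
    Benchmark comparisons of the kind described above are commonly summarised as C/E (calculated-over-experimental) ratios. A minimal illustration with hypothetical numbers, not the actual DCA results:

```python
# C/E (calculated-over-experimental) ratios, a standard summary of
# code-vs-measurement agreement for reactor physics benchmarks.
# The numbers below are hypothetical placeholders, not the DCA results.

calculated   = {"k_eff": 0.9982, "void_coeff": -1.05, "peaking": 1.32}
experimental = {"k_eff": 1.0000, "void_coeff": -1.00, "peaking": 1.30}

ce = {name: calculated[name] / experimental[name] for name in calculated}
for name, ratio in sorted(ce.items()):
    print(f"{name:>12s}: C/E = {ratio:.4f}")  # 1.0 means perfect agreement
```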

  18. Development of experimental data bank on heat transfer crisis under stationary conditions

    International Nuclear Information System (INIS)

    Koshtyalek, Ya.

    1982-01-01

    The development of an experimental data bank on heat-transfer crisis under stationary conditions is discussed. The work is being carried out under the auspices of the CMEA in compliance with the resolution of the CMEA countries' experts meeting held in Moscow in January 1981. The data bank is to be formed as a sequential set of available experimental data on regimes with heat-transfer crisis, recorded on a standard magnetic tape for the ES or IBM computer families. All operations with the bank are to be performed via the computer. Recommendations are given on the record structure to be used, and an example of a code is suggested for a user to extract data from the bank in accordance with various criteria. At present, parameters of more than 12000 experimental regimes are prepared for the bank and some 3000 more are being processed [ru

  19. Experimental data from a full-scale facility investigating radiant and convective terminals

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Jensen, Rasmus Lund

    The objective of this technical report is to provide information on the accuracy of the experiments performed in “the Cube” (part I, II and III). Moreover, this report lists the experimental data, which have been monitored in the test facility (part IV). These data are available online and can be...

  20. 40 CFR 158.2080 - Experimental use permit data requirements-biochemical pesticides.

    Science.gov (United States)

    2010-07-01

    ... requirements-biochemical pesticides. 158.2080 Section 158.2080 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Biochemical Pesticides § 158.2080 Experimental use permit data requirements—biochemical pesticides. (a) Sections 158.2081...

  1. 40 CFR 158.2170 - Experimental use permit data requirements-microbial pesticides.

    Science.gov (United States)

    2010-07-01

    ... requirements-microbial pesticides. 158.2170 Section 158.2170 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158.2170 Experimental use permit data requirements—microbial pesticides. (a) For all microbial pesticides. (1) The...

  2. Determination of the angle of attack on the mexico rotor using experimental data

    DEFF Research Database (Denmark)

    Yang, Hua; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2010-01-01

    characteristics from experimental data on the MEXICO (Model Experiments in controlled Conditions) rotor. Detailed surface pressure and Particle Image Velocimetry (PIV) flow field at different rotor azimuth positions were examined for determining the sectional airfoil data. It is worthwhile noting that the present...

  3. Curation of Laboratory Experimental Data as Part of the Overall Data Lifecycle

    Directory of Open Access Journals (Sweden)

    Jeremy Frey

    2008-08-01

    Full Text Available The explosion in the production of scientific data in recent years is placing strains upon conventional systems supporting integration, analysis, interpretation and dissemination of data and thus constraining the whole scientific process. Support for handling large quantities of diverse information can be provided by e-Science methodologies and the cyber-infrastructure that enables collaborative handling of such data. Regard needs to be taken of the whole process involved in scientific discovery. This includes the consideration of the requirements of the users and consumers further down the information chain and what they might ideally prefer to impose on the generators of those data. As the degree of digital capture in the laboratory increases, it is possible to improve the automatic acquisition of the ‘context of the data’ as well as the data themselves. This process provides an opportunity for the data creators to ensure that many of the problems they often encounter in later stages are avoided. We wish to elevate curation to an operation to be considered by the laboratory scientist as part of good laboratory practice, not a procedure of concern merely to the few specialising in archival processes. Designing curation into experiments is an effective solution to the provision of high-quality metadata that leads to better, more re-usable data and to better science.

  4. Bank of experimental data on heat transfer crisis at water boiling in circular tubes

    International Nuclear Information System (INIS)

    Sedova, T.K.; Smolin, V.N.; Shpanskij, S.V.

    1982-01-01

    Basic principles and structure of an automated information system (bank) are described. The system is intended to accumulate and store experimental data on heat-transfer crisis in boiling water flow within tubular fuel elements. For each experimental section registered in the bank there is a certain amount of information, including both geometry and design characteristics (dimensions, heat release distribution, number of registered regimes and so on) and the investigated operation regimes. Each regime is characterized by values of pressure, outlet enthalpy, critical power, coolant flow rate and others. The search programme screens the available lists of experimental sections and regimes, transferring the information to subprogrammes in which, on the basis of the user request, a particular section and regime are selected. A brief analysis of accumulated experimental data from 26 Soviet and foreign sources is given [ru
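
    The criterion-based selection the abstract describes can be sketched as a simple filter over regime records. Field names and values below are hypothetical, for illustration only:

```python
# Sketch of criterion-based selection from a heat-transfer-crisis data bank.
# Record fields and values are hypothetical, for illustration only.

regimes = [
    {"section": "T-01", "pressure_MPa": 7.0,  "mass_flux": 1000, "crit_power_kW": 85.0},
    {"section": "T-01", "pressure_MPa": 12.0, "mass_flux": 1500, "crit_power_kW": 120.0},
    {"section": "T-02", "pressure_MPa": 16.0, "mass_flux": 2000, "crit_power_kW": 140.0},
]

def select(records, **criteria):
    """Return records whose fields fall within the given (lo, hi) ranges."""
    def ok(rec):
        return all(lo <= rec[field] <= hi for field, (lo, hi) in criteria.items())
    return [rec for rec in records if ok(rec)]

hits = select(regimes, pressure_MPa=(10.0, 18.0))
print([r["section"] for r in hits])  # -> ['T-01', 'T-02']
```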

  5. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for a single-phase natural convection test in an experimental sodium loop. The test was conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility, an engineering-scale high temperature sodium loop at the Oak Ridge National Laboratory (ORNL) used for thermal-hydraulic testing of simulated Liquid Metal Fast Breeder Reactor (LMFBR) subassemblies at normal and off-normal operating conditions. Heat generation in the 19 pin assembly during the test was typical of decay heat levels. The test chosen for analysis in this paper was one of seven natural convection runs conducted in the facility using a variety of initial conditions and testing parameters. Specifically, in this test the bypass line was open to simulate a parallel heated assembly and the test was begun with a pump coastdown from a small initial forced flow. The computer program used to analyze the test, LONAC (LOw flow and NAtural Convection) is an ORNL-developed, fast-running, one-dimensional, single-phase, finite-difference model used for simulating forced and free convection transients in the THORS loop.

  6. Comparison of numerical results with experimental data for single-phase natural convection in an experimental sodium loop

    International Nuclear Information System (INIS)

    Ribando, R.J.

    1979-01-01

    A comparison is made between computed results and experimental data for a single-phase natural convection test in an experimental sodium loop. The test was conducted in the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility, an engineering-scale high temperature sodium loop at the Oak Ridge National Laboratory (ORNL) used for thermal-hydraulic testing of simulated Liquid Metal Fast Breeder Reactor (LMFBR) subassemblies at normal and off-normal operating conditions. Heat generation in the 19 pin assembly during the test was typical of decay heat levels. The test chosen for analysis in this paper was one of seven natural convection runs conducted in the facility using a variety of initial conditions and testing parameters. Specifically, in this test the bypass line was open to simulate a parallel heated assembly and the test was begun with a pump coastdown from a small initial forced flow. The computer program used to analyze the test, LONAC (LOw flow and NAtural Convection) is an ORNL-developed, fast-running, one-dimensional, single-phase, finite-difference model used for simulating forced and free convection transients in the THORS loop

  7. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
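
    The convergence idea above (acquire data until the statistical model stops changing by more than an area-metric threshold) can be sketched with empirical CDFs. This is an illustrative simplification, not the authors' SSM/KDE procedure; the threshold and batch size are arbitrary:

```python
import random

def cdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def area_metric(a, b):
    """Approximate area between the empirical CDFs of samples a and b."""
    grid = sorted(set(a) | set(b))
    return sum(abs(cdf(a, lo) - cdf(b, lo)) * (hi - lo)
               for lo, hi in zip(grid, grid[1:]))

# Acquire data in batches; stop once the model built from the data changes
# by less than the area-metric threshold when a new batch arrives.
random.seed(1)
data, prev = [], None
while len(data) < 300:
    data += [random.gauss(0.0, 1.0) for _ in range(10)]
    if prev is not None and area_metric(prev, data) < 0.05:
        print(f"sufficient sample size: {len(data)} points")
        break
    prev = list(data)
```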

  8. The new STRESA tool for preservation of thermalhydraulic experimental data produced in the European Commission

    International Nuclear Information System (INIS)

    Pla, Patricia; Pascal, Ghislain; Tanarro, Jorge; Annunziato, Alessandro

    2015-01-01

    Highlights: • ITF and severe accident data are of high importance for validating thermal-hydraulic codes for NPPs. • LOBI, FARO, KROTOS and STORM produced a large amount of TH and SA experimental data. • The JRC facilities' data were stored in the JRC STRESA database developed by JRC. • The paper presents the new JRC STRESA database developed by JRC in 2014–2015. • The long-term importance of well-maintained ITF databases (like STRESA) is demonstrated. - Abstract: The experimental data recorded in Integral Effect Test Facilities (ITFs) are traditionally used in order to validate best-estimate (BE) system codes and to investigate the behaviour of nuclear power plants (NPPs) under accident scenarios. In the same way, facilities dedicated to specific thermal-hydraulic (TH) severe accident (SA) phenomena are used for the development and improvement of specific analytical models and codes used in SA analysis for light water reactors (LWRs). The extent to which the existing reactor safety experimental databases are preserved is well known and has frequently been debated and questioned in the nuclear community. The Joint Research Centre (JRC) of the European Commission (EC) has been deeply involved in several projects for experimental data production and preservation. In this context, the STRESA (Storage of Thermal REactor Safety Analysis Data) web-based informatics platform was developed by JRC-Ispra in the year 2000. At present, the JRC STRESA database is hosted and maintained by JRC-Petten. The Nuclear Reactor Safety Assessment Unit (NRSA) of JRC-Petten is engaged in the administration of a new STRESA tool that secures EU storage for SA experimental data and calculations. The development of this new STRESA tool was completed by early 2015 and published on 25/06/2015 at the URL http://stresa.jrc.ec.europa.eu/. The target was to keep the main features of the original STRESA structure but using the new informatics technologies that are nowadays

  9. Experimental benchmark data for PWR rod bundle with spacer-grids

    International Nuclear Information System (INIS)

    Dominguez-Ontiveros, Elvis E.; Hassan, Yassin A.; Conner, Michael E.; Karoutas, Zeses

    2012-01-01

    In numerical simulations of fuel rod bundle flow fields, the unsteady Navier–Stokes equations have to be solved in order to determine the time (phase) dependent characteristics of the flow. In order to validate the simulation results, detailed comparison with experimental data must be performed. Experiments investigating complex flows in rod bundles with spacer grids that have mixing devices (such as flow mixing vanes) have mostly been performed using single-point measurements. In order to obtain more details and insight on the discrepancies between experimental and numerical data, as well as to obtain a global understanding of the causes of these discrepancies, comparisons of the distributions of complete phase-averaged velocity and turbulence fields for various locations near spacer grids should be performed. The experimental technique Particle Image Velocimetry (PIV) is capable of providing such benchmark data. This paper describes an experimental database obtained using two-dimensional Time Resolved Particle Image Velocimetry (TR-PIV) measurements within a 5 × 5 PWR rod bundle with spacer grids that have flow mixing vanes. A unique characteristic of this set-up is the Matched Index of Refraction technique employed in this investigation, which allows complete optical access to the rod bundle. This feature allows flow visualization and measurement within the bundle without rod obstruction. This approach also allows the use of high temporal and spatial non-intrusive dynamic measurement techniques, namely TR-PIV, to investigate the flow evolution below and immediately above the spacer. The experimental data presented in this paper include an explanation of the various cases tested, such as test rig dimensions, measurement zones, the test equipment and the boundary conditions, in order to provide appropriate data for comparison with Computational Fluid Dynamics (CFD) simulations. Turbulence parameters of the obtained data are presented in order to gain

  10. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  11. Probing the Structure and Dynamics of Proteins by Combining Molecular Dynamics Simulations and Experimental NMR Data.

    Science.gov (United States)

    Allison, Jane R; Hertig, Samuel; Missimer, John H; Smith, Lorna J; Steinmetz, Michel O; Dolenc, Jožica

    2012-10-09

    NMR experiments provide detailed structural information about biological macromolecules in solution. However, the amount of information obtained is usually much less than the number of degrees of freedom of the macromolecule. Moreover, the relationships between experimental observables and structural information, such as interatomic distances or dihedral angle values, may be multiple-valued and may rely on empirical parameters and approximations. The extraction of structural information from experimental data is further complicated by the time- and ensemble-averaged nature of NMR observables. Combining NMR data with molecular dynamics simulations can elucidate and alleviate some of these problems, as well as allow inconsistencies in the NMR data to be identified. Here, we use a number of examples from our work to highlight the power of molecular dynamics simulations in providing a structural interpretation of solution NMR data.

  12. Correction of Magnetic Optics and Beam Trajectory Using LOCO Based Algorithm with Expanded Experimental Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.

    2017-03-28

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsim simulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.

  13. Analysis of shallow water experimental acoustic data including normal mode model comparisons

    NARCIS (Netherlands)

    McHugh, R.; Simons, D.G.

    2000-01-01

    As part of a propagation model validation exercise, experimental acoustic and oceanographic data were collected from a shallow-water, long-range channel off the west coast of Scotland. Temporal variability effects in this channel were assessed through visual inspection of stacked plots, each of which

  14. 40 CFR 158.2174 - Experimental use permit microbial pesticides nontarget organisms and environmental fate data...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Experimental use permit microbial... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS... controls the target insect pest by a mechanism of infectivity; i.e., may create an epizootic condition in...

  15. Adjustments in Almod3W2 transient analysis code to fit Angra 1 NPP experimental data

    International Nuclear Information System (INIS)

    Madeira, A.A.; Camargo, C.T.M.

    1988-01-01

    Some minor modifications were introduced into the ALMOD3W2 code as a consequence of the interest in reproducing the full-load rejection test at Angra 1 NPP. These modifications proved adequate when code results were compared with experimental data. (author) [pt

  16. Comparison of existing plastic collapse load solutions with experimental data for 90° elbows

    International Nuclear Information System (INIS)

    Han, Jae-Jun; Lee, Kuk-Hee; Kim, Nak-Hyun; Kim, Yun-Jae; Jerng, Dong Wook; Budden, Peter J.

    2012-01-01

    This paper compares published experimental plastic collapse loads for 90° elbows with existing closed-form solutions. A total of 46 experimental data are considered, covering pure bending (in-plane closing, in-plane opening and out-of-plane bending) and combined pressure and bending loads. The plastic collapse load solutions considered are from the ASME code, the Ductile Fracture handbook of Zahoor, by Chattopadhyay and co-workers, and by Y.-J. Kim and co-workers. Comparison with the experimental data shows that the ASME code solution is conservative by a factor of 2 on collapse load for in-plane closing bending, 2.3 for out-of-plane bending, and 3 for in-plane opening bending. The solutions given by Kim and co-workers give the least conservative estimates of plastic collapse loads, although they provide slightly non-conservative estimates for some data. - Highlights: ► We compare 46 published experimental plastic collapse loads for 90° elbows with four existing plastic collapse load solutions. ► We find that the ASME code solution is conservative by a factor of 2–3, depending on the loading mode. ► We find that the solutions given by Kim and co-workers give the least conservative estimates of plastic collapse loads.

  17. Experimental data base of turbulent flow in rod bundles using laser doppler velocimeter

    International Nuclear Information System (INIS)

    Chung, Moon Ki; Yang, Sun Kyu; Chung, Heung June; Won, Soon Yeun; Kim, Bok Deuk; Cho, Young Rho

    1992-01-01

    This report presents in detail measurements of hydraulic characteristics in subchannels of rod bundles using a one-component LDV (Laser Doppler Velocimeter). In particular, it presents the figures and tabulations of the resulting data. Detailed explanations of these results are given in references published or presented at conferences. Four kinds of experimental work have been performed so far. (Author)

  18. Summary Report of the Workshop on the Experimental Nuclear Reaction Data Database

    International Nuclear Information System (INIS)

    Semkova, V.; Pritychenko, B.

    2014-12-01

    The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here. (author)

  19. Summary Report of the Workshop on The Experimental Nuclear Reaction Data Database

    Energy Technology Data Exchange (ETDEWEB)

    Semkova, V. [IAEA Nuclear Data Section, Vienna (Austria); Pritychenko, B. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2014-12-01

    The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here.

  20. Experimental Data Does Not Violate Bell's Inequality for "Right Kolmogorov Space''

    DEFF Research Database (Denmark)

    Fischer, Paul; Avis, David; Hilbert, Astrid

    2008-01-01

    of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent...... probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory....

  1. Experimental data of the static behavior of reinforced concrete beams at room and low temperature.

    Science.gov (United States)

    Mirzazadeh, M Mehdi; Noël, Martin; Green, Mark F

    2016-06-01

    This article provides data on the static behavior of reinforced concrete at room and low temperature, including strength, ductility, and crack widths of the reinforced concrete. Experimental data on the application of digital image correlation (DIC) or particle image velocimetry (PIV) in measuring crack widths, and on the accuracy and precision of the DIC/PIV method under temperature variations when used for measuring strains, are provided as well.

  2. A compilation of experimental burnout data for axial flow of water in rod bundles

    International Nuclear Information System (INIS)

    Chapman, A.G.; Carrard, G.

    1981-02-01

    A compilation has been made of burnout (critical heat flux) data from the results of more than 12,000 tests on 321 electrically heated, water-cooled experimental assemblies, each simulating, to some extent, the operating or postulated accident conditions in the fuel elements of water-cooled nuclear power reactors. The main geometric characteristics of the assemblies are listed and references are given for the sources of information from which the data were gathered

  3. Chemometrics in analytical chemistry-part I: history, experimental design and data analysis tools.

    Science.gov (United States)

    Brereton, Richard G; Jansen, Jeroen; Lopes, João; Marini, Federico; Pomerantsev, Alexey; Rodionova, Oxana; Roger, Jean Michel; Walczak, Beata; Tauler, Romà

    2017-10-01

    Chemometrics has achieved major recognition and progress in the analytical chemistry field. In the first part of this tutorial, major achievements and contributions of chemometrics to some of the more important stages of the analytical process, like experimental design, sampling, and data analysis (including data pretreatment and fusion), are summarised. The tutorial is intended to give a general updated overview of the chemometrics field to further contribute to its dissemination and promotion in analytical chemistry.

  4. Benchmarking Experimental and Computational Thermochemical Data: A Case Study of the Butane Conformers.

    Science.gov (United States)

    Barna, Dóra; Nagy, Balázs; Csontos, József; Császár, Attila G; Tasi, Gyula

    2012-02-14

    Due to its crucial importance, numerous studies have been conducted to determine the enthalpy difference between the conformers of butane. However, it is shown here that the most reliable experimental values are biased due to the statistical model utilized during the evaluation of the raw experimental data. In this study, using the appropriate statistical model, both the experimental expectation values and the associated uncertainties are revised. For the 133-196 and 223-297 K temperature ranges, 668 ± 20 and 653 ± 125 cal mol⁻¹, respectively, are recommended as reference values. Furthermore, to show that present-day quantum chemistry is a favorable alternative to experimental techniques in the determination of enthalpy differences of conformers, a focal-point analysis, based on coupled-cluster electronic structure computations, has been performed that included contributions of up to perturbative quadruple excitations as well as small correction terms beyond the Born-Oppenheimer and nonrelativistic approximations. For the 133-196 and 223-297 K temperature ranges, in exceptional agreement with the corresponding revised experimental data, our computations yielded 668 ± 3 and 650 ± 6 cal mol⁻¹, respectively. The most reliable enthalpy difference values for 0 and 298.15 K are also provided by the computational approach, 680.9 ± 2.5 and 647.4 ± 7.0 cal mol⁻¹, respectively.
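
    The temperature dependence underlying such evaluations is often summarised with a van't Hoff analysis: the slope of ln K against 1/T gives −ΔH/R. The sketch below fits synthetic data generated from an assumed 668 cal mol⁻¹ enthalpy difference (entropy contribution set to zero for simplicity); it illustrates the idea only and is not the authors' statistical model or focal-point analysis:

```python
import math

R = 1.98720425  # gas constant, cal mol^-1 K^-1

# Synthetic equilibrium constants for a two-state conformer model with an
# assumed (hypothetical) enthalpy difference; the entropy term is set to zero,
# which only shifts the van't Hoff intercept, not the slope.
dH_true = 668.0  # cal mol^-1
temps = [140.0, 160.0, 180.0, 196.0]  # K
Ks = [math.exp(-dH_true / (R * T)) for T in temps]

# van't Hoff: ln K = -dH/(R*T) + dS/R, so a least-squares line fit of
# ln K against 1/T recovers dH from the slope.
xs = [1.0 / T for T in temps]
ys = [math.log(K) for K in Ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
dH_fit = -slope * R
print(f"fitted enthalpy difference: {dH_fit:.1f} cal/mol")  # ~668.0
```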

  5. Intuitive web-based experimental design for high-throughput biomedical data.

    Science.gov (United States)

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data are accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be of interest for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information, we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
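The factor-based design idea can be sketched as a full factorial crossing of user-defined factors into sample-sheet rows. The factor names and ID scheme below are hypothetical, not the system's actual format:

```python
import itertools

# Hypothetical factors; in the real system users define these in a web UI.
factors = {
    "genotype": ["wild-type", "mutant"],
    "treatment": ["control", "drug"],
    "timepoint_h": [0, 24],
}

def sample_sheet(factors):
    """Full factorial crossing: one row, with an identifier, per combination."""
    names = list(factors)
    rows = []
    for i, combo in enumerate(itertools.product(*factors.values()), start=1):
        rows.append({"sample_id": f"S{i:03d}", **dict(zip(names, combo))})
    return rows

rows = sample_sheet(factors)
print(len(rows))  # 2 * 2 * 2 = 8 combinations
```

Each row then carries the metadata needed to link measured data files back to the design.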

  6. From experimental zoology to big data: Observation and integration in the study of animal development.

    Science.gov (United States)

    Bolker, Jessica; Brauckmann, Sabine

    2015-06-01

    The founding of the Journal of Experimental Zoology in 1904 was inspired by a widespread turn toward experimental biology in the 19th century. The founding editors sought to promote experimental, laboratory-based approaches, particularly in developmental biology. This agenda raised key practical and epistemological questions about how and where to study development: Does the environment matter? How do we know that a cell or embryo isolated to facilitate observation reveals normal developmental processes? How can we integrate descriptive and experimental data? R.G. Harrison, the journal's first editor, grappled with these questions in justifying his use of cell culture to study neural patterning. Others confronted them in different contexts: for example, F.B. Sumner insisted on the primacy of fieldwork in his studies on adaptation, but also performed breeding experiments using wild-collected animals. The work of Harrison, Sumner, and other early contributors exemplified both the power of new techniques, and the meticulous explanation of practice and epistemology that was marshaled to promote experimental approaches. A century later, experimentation is widely viewed as the standard way to study development; yet at the same time, cutting-edge "big data" projects are essentially descriptive, closer to natural history than to the approaches championed by Harrison et al. Thus, the original questions about how and where we can best learn about development are still with us. Examining their history can inform current efforts to incorporate data from experiment and description, lab and field, and a broad range of organisms and disciplines, into an integrated understanding of animal development. © 2015 Wiley Periodicals, Inc.

  7. Three-dimensional inviscid analysis of radial turbine flow and a limited comparison with experimental data

    Science.gov (United States)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126, resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of the rotor-exit to turbine-inlet total pressure ratio is underpredicted, due to errors in the finite-difference approximations in the regions of rapid flow adjustment and to the relatively coarse grid in the middle of the blade region, where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison, due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  8. Preliminary Validation of the MATRA-LMR Code Using Existing Sodium-Cooled Experimental Data

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Kim, Sangji

    2014-01-01

    The main objective of the SFR prototype plant is to verify TRU metal fuel performance, reactor operation, and the transmutation ability of high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during whole-plant operation. The fuel design limit is highly dependent on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, an accurate temperature calculation in each subassembly is highly important to assure safe and reliable operation of the reactor systems. The current core thermal-hydraulic design is mainly performed using the SLTHEN (Steady-State LMR Thermal-Hydraulic Analysis Code Based on ENERGY Model) code, which has already been validated using existing sodium-cooled experimental data. In addition to the SLTHEN code, a detailed analysis is performed using the MATRA-LMR (Multichannel Analyzer for Transient and steady-state in Rod Array-Liquid Metal Reactor) code. In this work, the MATRA-LMR code is validated for a single-subassembly evaluation using the existing sodium-cooled experimental data. The results demonstrate that the design code appropriately predicts the temperature distributions compared with the experimental values. Major differences are observed in the experiments with large pin numbers, due to differences in radial mixing.

  9. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

    The theory of treating experimental data of the inverse kinetic method by unitary linear regression analysis was described. Not only the reactivity, but also the effective neutron source intensity could be calculated by this method. A computer code was written based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method with unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
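Unitary (simple) linear regression is the workhorse of this method; a minimal ordinary least-squares sketch is given below. How the fitted intercept and slope map onto reactivity and effective source intensity depends on the point-kinetics formulation used in the paper, so that mapping is only indicated as a hypothetical comment:

```python
import numpy as np

def unitary_linear_regression(x, y):
    """Ordinary least-squares fit of y ~ a + b*x; returns (a, b)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    return a, b

# Hypothetical mapping: in the inverse kinetic method, suitably transformed
# flux-history variables are regressed so that the intercept and slope yield
# the reactivity and the effective source term.
a, b = unitary_linear_regression([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.0, 8.1])
```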

  10. Data handling at EBR-II [Experimental Breeder Reactor II] for advanced diagnostics and control work

    International Nuclear Information System (INIS)

    Lindsay, R.W.; Schorzman, L.W.

    1988-01-01

    Improved control and diagnostics systems are being developed for nuclear and other applications. The Experimental Breeder Reactor II (EBR-II) Division of Argonne National Laboratory has embarked on a project to upgrade the EBR-II control and data handling systems. The nature of the work at EBR-II requires that reactor plant data be readily available to experimenters, and that the plant control systems be flexible enough to accommodate testing and development needs. In addition, operational concerns require that improved operator interfaces and computerized diagnostics be included in the reactor plant control system. The EBR-II systems have been upgraded to incorporate new data handling computers and new digital plant process controllers; new displays and diagnostics are being developed and tested for permanent use. In addition, improved engineering surveillance will be possible with the new systems.

  11. Absorber and regenerator models for liquid desiccant air conditioning systems. Validation and comparison using experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Krause, M.; Heinzen, R.; Jordan, U.; Vajen, K. [Kassel Univ., Inst. of Thermal Engineering, Kassel (Germany); Saman, W.; Halawa, E. [Sustainable Energy Centre, Univ. of South Australia, Mawson Lakes, Adelaide (Australia)

    2008-07-01

    Solar assisted air conditioning systems using liquid desiccants represent a promising option to decrease the high summer energy demand caused by electrically driven vapor compression machines. The main components of liquid desiccant systems are absorbers for dehumidifying and cooling of supply air, and regenerators for concentrating the desiccant. However, highly efficient, validated, and reliable components are required, and the design and operation have to be adjusted to each respective building design, location, and user demand. Simulation tools can help to optimize component and system design. The present paper presents newly developed numerical models for absorbers and regenerators, as well as experimental data from a regenerator prototype. The models have been compared with a finite-difference method model as well as with experimental data. The data are taken from the regenerator prototype presented here and from an absorber reported in the literature. (orig.)

  12. Code "Repol" to fit experimental data with a polynomial and its graphics plotting

    International Nuclear Information System (INIS)

    Travesi, A.; Romero, L.

    1983-01-01

    The "Repol" code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the least squares criterion. Further, it plots the fitted polynomial, in the appropriate coordinate axes system, on a plotter. An additional option also allows the graphic plotting of the experimental data used for the fit. The data necessary to execute this code are requested from the operator on the screen, in an iterative way, through a screen-operator dialogue, and the values are introduced through the keyboard. This code is written in Fortran IV and, because of its structured programming in subroutine blocks, can be adapted to any computer with a graphic screen and keyboard terminal and a serially connected plotter whose software supports the Hewlett Packard "Graphics 1000". (author)
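A modern equivalent of the least-squares fit that Repol performs (the original is Fortran IV; this NumPy sketch reproduces only the fitting step, not the plotting):

```python
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares polynomial fit, degree limited to 10 as in Repol."""
    if not 0 <= degree <= 10:
        raise ValueError("degree must be between 0 and 10")
    coeffs = np.polyfit(x, y, degree)   # highest power first
    return np.poly1d(coeffs)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 7.0, 13.0, 21.0])   # exactly 1 + x + x^2
p = fit_polynomial(x, y, 2)
print(round(p(5.0), 6))  # evaluates the fitted polynomial at x = 5: 31.0
```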

  13. A computer program to evaluate the experimental data in instrumental multielement neutron activation analysis

    International Nuclear Information System (INIS)

    Greim, L.; Motamedi, K.; Niedergesaess, R.

    1976-01-01

    A computer code evaluating experimental data of neutron activation analysis (NAA) for the determination of atomic abundances is described. The experimental data are, besides a probe designation, the probe weight, the irradiation parameters, and a Ge(Li) pulse-height spectrum from the activity measurement. The organisation of the necessary nuclear data, covering all methods of activation in reactor irradiations, is given. Furthermore, the automatic evaluation of spectra, the assignment of the resulting peaks to nuclei, and the calculation of atomic abundances are described. The complete evaluation of a spectrum with many lines, e.g. 100 lines of 20 nuclei, takes less than 1 minute of machine time on the TR 440 computer. (orig.) [de

  14. Code REPOL to fit experimental data with a polynomial, and its graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    The REPOL code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the least squares criterion. Further, it plots the fitted polynomial, in the appropriate coordinate axes system, on a plotter. An additional option also allows the graphic plotting of the experimental data used for the fit. The data necessary to execute this code are requested from the operator on the screen, in an iterative way, through a screen-operator dialogue, and the values are introduced through the keyboard. This code is written in Fortran IV and, because of its structured programming in subroutine blocks, can be adapted to any computer with a graphic screen and keyboard terminal and a serially connected plotter whose software supports the Hewlett Packard Graphics 1000. (Author) 5 refs

  15. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, C.; Gehrchen, P.M.; Kiaer, T.

    2008-01-01

    STUDY DESIGN: A finite element analysis-based bone remodeling study in human was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws. OBJECTIVE......: The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies...... operated on with pedicle screws between L4 and L5. The stress shielding effect was also examined. The bone remodeling results were compared with prospective bone mineral content measurements of 4 patients. They were measured after surgery, 3-, 6- and 12-months postoperatively. RESULTS: After 1 year...

  16. STRAIN-CONTROLLED BIAXIAL TENSION OF NATURAL RUBBER: NEW EXPERIMENTAL DATA

    KAUST Repository

    Pancheri, Francesco Q.

    2014-03-01

    We present a new experimental method and provide data showing the response of 40A natural rubber in uniaxial, pure shear, and biaxial tension. Real-time biaxial strain control allows for independent and automatic variation of the velocity of extension and retraction of each actuator to maintain the preselected deformation rate within the gage area of the specimen. We also focus on the Valanis-Landel hypothesis, which is used to verify and validate the consistency of the data. We use a three-term Ogden model to derive stress-stretch relations to validate the experimental data. The material model parameters are determined using the primary loading path in uniaxial and equibiaxial tension. Excellent agreement is found when the model is used to predict the response in biaxial tension for different maximum in-plane stretches. The application of the Valanis-Landel hypothesis also results in excellent agreement with the theoretical prediction.
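For reference, the nominal uniaxial stress of an incompressible Ogden material follows from differentiating the strain energy with respect to the stretch. The sketch below uses illustrative parameters, not the paper's fitted values:

```python
import numpy as np

def ogden_uniaxial_nominal_stress(lam, mu, alpha):
    """Nominal stress in incompressible uniaxial tension for an Ogden model:
    P = sum_p mu_p * (lam**(a_p - 1) - lam**(-a_p/2 - 1))."""
    lam = np.asarray(lam, float)
    return sum(m * (lam ** (a - 1.0) - lam ** (-a / 2.0 - 1.0))
               for m, a in zip(mu, alpha))

# Illustrative three-term parameter set (hypothetical, rubber-like magnitudes)
mu = [0.3, 0.02, 0.001]      # MPa
alpha = [1.3, 5.0, -2.0]
P = ogden_uniaxial_nominal_stress(2.0, mu, alpha)   # stress at stretch 2
```

At the undeformed state (stretch 1) the stress vanishes identically, which is a quick sanity check on any implementation.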

  17. Within-subject mediation analysis for experimental data in cognitive psychology and neuroscience.

    Science.gov (United States)

    Vuorre, Matti; Bolger, Niall

    2017-12-15

    Statistical mediation allows researchers to investigate potential causal effects of experimental manipulations through intervening variables. It is a powerful tool for assessing the presence and strength of postulated causal mechanisms. Although mediation is used in certain areas of psychology, it is rarely applied in cognitive psychology and neuroscience. One reason for the scarcity of applications is that these areas of psychology commonly employ within-subjects designs, and mediation models for within-subjects data are considerably more complicated than for between-subjects data. Here, we draw attention to the importance and ubiquity of mediational hypotheses in within-subjects designs, and we present a general and flexible software package for conducting Bayesian within-subjects mediation analyses in the R programming environment. We use experimental data from cognitive psychology to illustrate the benefits of within-subject mediation for theory testing and comparison.
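A crude two-step sketch conveys the within-subject idea: fit the mediation paths per subject and average the per-subject indirect effects. The authors' package is a Bayesian multilevel implementation, which this is not; the variable names and synthetic data are hypothetical:

```python
import numpy as np

def within_subject_indirect(X, M, Y):
    """Per-subject mediation sketch: slope a_i from M ~ X, coefficient b_i on M
    from Y ~ X + M, then the mean of the per-subject indirect effects a_i*b_i."""
    effects = []
    for x, m, y in zip(X, M, Y):
        a = np.polyfit(x, m, 1)[0]                     # slope of M on X
        A = np.column_stack([np.ones_like(x), x, m])   # design matrix [1, X, M]
        b = np.linalg.lstsq(A, y, rcond=None)[0][2]    # coefficient on M
        effects.append(a * b)
    return float(np.mean(effects))

# Synthetic within-subject data: X drives M, M drives Y (true indirect = 2 * 3)
rng = np.random.default_rng(1)
X = [rng.standard_normal(50) for _ in range(8)]        # 8 subjects, 50 trials
M = [2.0 * x + 0.3 * rng.standard_normal(50) for x in X]
Y = [3.0 * m + 0.3 * rng.standard_normal(50) for m in M]
est = within_subject_indirect(X, M, Y)                 # should land near 6
```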

  18. Analysis of cerebral vessels dynamics using experimental data with missed segments

    Science.gov (United States)

    Pavlova, O. N.; Abdurashitov, A. S.; Ulanova, M. V.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.; Pavlov, A. N.

    2018-04-01

    Physiological signals often contain various bad segments that occur due to artifacts, failures of the recording equipment, or varying experimental conditions. The related experimental data need to be preprocessed to exclude such parts of the recordings. When there are only a few bad segments, they can simply be removed from the signal before analysis. However, when many segments are extracted, the internal structure of the analyzed physiological process may be destroyed, and it is unclear whether such a signal can be used in diagnostics-related studies. In this paper we address this problem for the case of cerebral vessel dynamics. We analyze simulated data in order to reveal general features of quantifying the scaling properties of complex signals with distinct correlation properties, and show that the effects of data loss differ significantly between experimental data with long-range correlations and anti-correlations. We conclude that cerebral vessel dynamics is significantly less sensitive to missing data fragments than signals with anti-correlated statistics.

  19. iLAP: a workflow-driven software for experimental protocol development, data acquisition and analysis

    Directory of Open Access Journals (Sweden)

    McNally James

    2009-01-01

    Background: In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way, and has developed laboratory information management systems (LIMS) for both raw and processed data. On the other hand, electronic notebooks were developed to record and manage scientific data and to facilitate data sharing. Software which enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques. Results: We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/. Conclusion: iLAP is a flexible and versatile information management system which has the potential to close the gap between electronic notebooks and LIMS, and can therefore be of great value for a broad scientific community.

  20. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro

    2001-01-01

    The validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear data bases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in algorithms for both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a prediction accuracy of decay heat within ±10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the 92Mo(n,2n)91gMo reaction in FENDL, and a lack of activation cross section data, e.g., the 138Ba(n,2n)137mBa reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in the decay data. (author)

  1. Experimental validation of decay heat calculation codes and associated nuclear data libraries for fusion energy

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Wada, Masayuki; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    The validity of decay heat calculations for safety designs of fusion reactors was investigated by using decay heat experimental data on thirty-two fusion reactor relevant materials obtained at the 14-MeV neutron source facility of FNS in JAERI. Calculation codes developed in Japan, ACT4 and CINAC version 4, and nuclear data bases such as JENDL/Act-96, FENDL/A-2.0 and Lib90 were used for the calculation. Although several corrections in algorithms for both calculation codes were needed, it was shown by comparing calculated results with the experimental data that most of the activation cross sections and decay data were adequate. In the cases of type 316 stainless steel and copper, which are important for ITER, a prediction accuracy of decay heat within {+-}10% was confirmed. However, it was pointed out that there were some problems in parts of the data, such as improper activation cross sections, e.g., the {sup 92}Mo(n, 2n){sup 91g}Mo reaction in FENDL, and a lack of activation cross section data, e.g., the {sup 138}Ba(n, 2n){sup 137m}Ba reaction in JENDL. Modifications of cross section data were recommended for 19 reactions in JENDL and FENDL. It was also pointed out that X-ray and conversion electron energies should be included in the decay data. (author)

  2. Combining Simulated and Experimental Data to Simulate Ultrasonic Array Data From Defects in Materials With High Structural Noise.

    Science.gov (United States)

    Bloxham, Harry A; Velichko, Alexander; Wilcox, Paul David

    2016-12-01

    Ultrasonic nondestructive testing inspections using phased arrays are performed on a wide range of components and materials. All real inspections suffer, to varying extents, from coherent noise, including image artifacts and speckle caused by complex geometries and grain scatter, respectively. By its nature, this noise is not reduced by averaging; however, it degrades the signal-to-noise ratio of defects and ultimately limits their detectability. When evaluating the effectiveness of an inspection, a large pool of data from samples containing a range of different defects is important for estimating the probability of detection of defects and helping to characterize them. For a given inspection, coherent noise is easy to measure experimentally but hard to model realistically. Conversely, the ultrasonic response of defects can be simulated relatively easily. This paper proposes a novel method of simulating realistic array data by combining noise-free simulations of defect responses with coherent noise taken from experimental data. This removes the need for costly physical samples with known defects to be made, and allows large data sets to be created easily.
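The hybrid-data idea reduces to superposing a noise-free simulated defect response on measured coherent noise. In this sketch the "experimental" noise is a random stand-in; the paper uses real grain noise recorded from a sample:

```python
import numpy as np

def hybrid_frame(defect_response, noise_frame):
    """Superpose a noise-free simulated defect response on a measured
    coherent-noise frame of the same shape (e.g. full-matrix-capture traces)."""
    assert defect_response.shape == noise_frame.shape
    return defect_response + noise_frame

# 4x4 transmit-receive pairs, 128 time samples (illustrative sizes)
sim = np.zeros((4, 4, 128))
sim[:, :, 60] = 1.0                                  # idealized defect echo
rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(sim.shape)         # stand-in for measured grain noise
data = hybrid_frame(sim, noise)
```

Because the superposition is linear, one simulated defect can be combined with many independent noise records to build a large data set cheaply.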

  3. New amides for uranium extraction: comparison between in silico predictions and experimental data

    International Nuclear Information System (INIS)

    Klimshuk, O.; Ouadi, A.; Billard, I.; Varnek, A.; Fourches, D.; Solov'ev, V.

    2006-01-01

    New methods and original software tools for computer-aided molecular design have been used to develop in silico new monoamides that efficiently extract U(VI). A set of available experimental values of the uranyl partition coefficient (log D) in the water/toluene system for 22 monoamides was used by the ISIDA program to establish quantitative relationships between the structure of the molecules and their extraction properties. The developed structure-property models were then applied to screen a virtual combinatorial library containing more than 2000 molecules. Selected hits were synthesized and studied experimentally as extractants, using the same protocol as for the molecules from the initial data set. The comparison between predicted and experimentally obtained log D values for the new extractants is discussed. (author)

  4. SADE: system of acquisition of experimental data. Definition and analysis of an experiment description language

    International Nuclear Information System (INIS)

    Gagniere, Jean-Michel

    1983-01-01

    This research thesis presents a computer system for the acquisition of experimental data. It is aimed at acquiring, processing, and storing information from particle detectors. The acquisition configuration is described by an experiment description language. The system comprises a lexical analyser, a syntactic analyser, a translator, and a data processing module. It also comprises a control language and a statistics management and plotting module. The translator builds up a series of tables which allow different sequences to be executed during an experiment: running the experiment, performing calculations on the data, and building up statistics. Short execution time and ease of use were primary design goals. [fr

  5. Experimental data of thermal cracking of soybean oil and blends with hydrogenated fat

    Directory of Open Access Journals (Sweden)

    R.F. Beims

    2018-04-01

    This article presents experimental data on the thermal cracking of soybean oil and blends with hydrogenated fat. Thermal cracking experiments were carried out in a plug flow reactor with pure soybean oil and with two blends with hydrogenated fat, to reduce the degree of unsaturation of the feedstock. The same operational conditions were used in all experiments. The data obtained showed a reduction of the total aromatics content by 14% for the feedstock with the lowest degree of unsaturation. Other physicochemical data are presented, such as iodine index, acid index, density, and kinematic viscosity. A distillation curve was measured and compared with the curve for a petroleum sample.

  6. Assessment of electronic component failure rates on the basis of experimental data

    International Nuclear Information System (INIS)

    Nitsch, R.

    1991-01-01

    Assessment and prediction of failure rates of electronic systems are made using experimental data derived from laboratory-scale tests or from field experience, for instance from component failure rate statistics or component repair statistics. Some problems and uncertainties encountered in the evaluation of such field data are discussed in the paper. In order to establish a sound basis for the comparative assessment of data from various sources, the items of comparison and the procedure in case of doubt have to be defined. The paper explains two standard methods proposed for practical failure rate definition. (orig.) [de

  7. The use of the normalized residual in averaging experimental data and in treating outliers

    International Nuclear Information System (INIS)

    James, M.F.; Mills, R.W.; Weaver, D.R.

    1992-01-01

    In comparing and averaging different measurements of a particular quantity, the problem frequently arises of treating discrepant data. Ideally the evaluator should then study the experimental methods in detail, to try to resolve the discrepancies. This however is often either impractical or unsuccessful. An alternative statistical approach is outlined here, using the "normalized residual". The theoretical probability distribution of this quantity is compared with that observed for fission chain yield data from a recent evaluation by the authors. The use of the normalized residual in treating discrepant data is explained and compared with alternative methods. (author)
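One common definition of the normalized residual divides each point's deviation from the weighted mean by that deviation's own standard deviation, which is reduced because the point itself contributes to the mean. A sketch with illustrative (not the paper's) data values:

```python
import numpy as np

def normalized_residuals(x, sigma):
    """Normalized residuals about the weighted mean. Since x_i contributes to
    the mean, Var(x_i - mean) = sigma_i^2 - 1/sum(w), with weights w = 1/sigma^2."""
    x = np.asarray(x, float)
    sigma = np.asarray(sigma, float)
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    return (x - mean) / np.sqrt(sigma**2 - 1.0 / np.sum(w))

# Three consistent measurements and one discrepant outlier
x = np.array([10.0, 10.2, 9.9, 12.5])
sigma = np.array([0.2, 0.2, 0.2, 0.3])
r = normalized_residuals(x, sigma)   # the discrepant point stands out
```

Points whose normalized residual exceeds some threshold (often around 2-3) are candidates for down-weighting or closer scrutiny.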

  8. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  9. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  10. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  11. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with sparse experimental time-course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of sibling and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.


  12. Gastric bypass: why Roux-en-Y? A review of experimental data.

    Science.gov (United States)

    Collins, Brendan J; Miyashita, Tomoharu; Schweitzer, Michael; Magnuson, Thomas; Harmon, John W

    2007-10-01

    To highlight the clinical and experimental rationales that support why the Roux-en-Y limb is an important surgical principle for bariatric gastric bypass. We reviewed PubMed citations for open Roux-en-Y gastric bypass (RYGBP), laparoscopic RYGBP, loop gastric bypass, chronic alkaline reflux gastritis, and duodenoesophageal reflux. We reviewed clinical and experimental articles. Clinical articles included prospective, retrospective, and case series of patients undergoing RYGBP, laparoscopic RYGBP, or loop gastric bypass. Experimental articles that were reviewed included in vivo and in vitro models of chronic duodenoesophageal reflux and its effect on carcinogenesis. No formal data extraction was performed. We reviewed published operative times, lengths of stay, and anastomotic leak rates for laparoscopic RYGBP and loop gastric bypass. For in vivo and in vitro experimental models of duodenoesophageal reflux, we reviewed the kinetics and potential molecular mechanisms of carcinogenesis. Recent data suggest that laparoscopic loop gastric bypass, performed without the creation of a Roux-en-Y gastroenterostomy, is a faster surgical technique that confers similarly robust weight loss compared with RYGBP or laparoscopic RYGBP. In the absence of a Roux limb, the long-term effects of chronic alkaline reflux are unknown. Animal models and in vitro analyses of chronic alkaline reflux suggest a carcinogenic effect.

  13. The art of collecting experimental data internationally: EXFOR, CINDA and the NRDC network

    International Nuclear Information System (INIS)

    Henriksson, H.; Schwerer, O.; Rochman, D.; Mikhaylyukova, M.V.; Otuka, N.

    2008-01-01

    The world-wide network of nuclear reaction data centres (NRDC) has, for about 40 years, provided data services to the scientific community. This network covers all types of nuclear reaction data, including neutron-induced, charged-particle-induced, and photonuclear data, used in a wide range of applications, such as fission reactors, accelerator driven systems, fusion facilities, nuclear medicine, materials analysis, environmental monitoring, and basic research. The now 13 nuclear data centres included in the NRDC are dividing the efforts of compilation and distribution for particular types of reactions and/or geographic regions all over the world. A central activity of the network is the collection and compilation of experimental nuclear reaction data and the related bibliographic information in the EXFOR and CINDA databases. Many of the individual data centres also distribute other types of nuclear data information, including evaluated data libraries, nuclear structure and decay data, and nuclear data reports. The network today ensures the world-wide transfer of information and coordinated evolution of an important source of nuclear data for current and future nuclear applications. (authors)

  14. The art of collecting experimental data internationally: EXFOR, CINDA and the NRDC network

    International Nuclear Information System (INIS)

    Henriksson, H.; Schwerer, O.; Rochman, D.; Mikhaylyukova, M.V.; Otuka, N.

    2007-01-01

    The world-wide network of nuclear reaction data centers (NRDC) has, for about 40 years, provided data services to the scientific community. This network covers all types of nuclear reaction data, including neutron-induced, charged-particle-induced, and photonuclear data, used in a wide range of applications, such as fission reactors, accelerator driven systems, fusion facilities, nuclear medicine, materials analysis, environmental monitoring, and basic research. The now 13 nuclear data centers included in the NRDC are dividing the efforts of compilation and distribution for particular types of reactions and/or geographic regions all over the world. A central activity of the network is the collection and compilation of experimental nuclear reaction data and the related bibliographic information in the EXFOR and CINDA databases. Many of the individual data centers also distribute other types of nuclear data information, including evaluated data libraries, nuclear structure and decay data, and nuclear data reports. The network today ensures the world-wide transfer of information and coordinated evolution of an important source of nuclear data for current and future nuclear applications

  15. DEAR Monte Carlo simulation versus experimental data in measurements with the DEAR NTP setup

    International Nuclear Information System (INIS)

    Bragadireanu, A.M.; Iliescu, M.; Petrascu, C.; Ponta, T.

    1999-01-01

    The DEAR NTP setup was installed in DAΦNE and has been taking background data since February 1999. The goal of this work is to compare the measurements, in terms of charged-particle hits (clusters), with the DEAR Monte Carlo simulation, taking into account the main effects through which particles are lost from the circulating beams: the Touschek effect and beam-gas interaction. It should be mentioned that, during this period, no collisions between electrons and positrons were achieved at the DEAR Interaction Point (IP), and consequently there are no experimental data on the hadronic background coming from φ-decays directly or as secondary products of hadronic interactions. The NTP setup was shielded with lead and copper, which gives a shielding factor of about 4. In parallel with the NTP setup, the signals from two scintillator slabs (150 x 80 x 2 mm) collected by 4 PMTs, positioned below the NTP setup and facing the IP, were digitized and counted using a National Instruments Timer/Counter Card. To compare experimental data with the results of the Monte Carlo simulation, periods with only one circulating beam (electrons or positrons) were selected in order to have a clean data set, and only data files with CCD occupancy lower than 5% were used. As for the X-rays, the statistics were too poor to perform any quantitative comparison. The comparison between Monte Carlo, CCD data and kaon-monitor data for the two beams is shown. The agreement is fairly good and promising as a check of the routines that describe the experimental setup and the physical processes occurring in the accelerator environment. (authors)

  16. The essential value of long-term experimental data for hydrology and water management

    Science.gov (United States)

    Tetzlaff, D.; Carey, S. K.; McNamara, J. P.; Laudon, H.; Soulsby, C.

    2017-12-01

    Observations and data from long-term experimental watersheds are the foundation of hydrology as a geoscience. They allow us to benchmark process understanding, observe trends and natural cycles, and are pre-requisites for testing predictive models. Long-term experimental watersheds also are places where new measurement technologies are developed. These studies offer a crucial evidence base for understanding and managing the provision of clean water supplies, predicting and mitigating the effects of floods, and protecting ecosystem services provided by rivers and wetlands. They also show how to manage land and water in an integrated, sustainable way that reduces environmental and economic costs. We present a number of compelling examples illustrating how hydrologic process understanding has been generated through comparing hypotheses to data, and how this understanding has been essential for managing water supplies, floods, and ecosystem services today.

  17. DNB Mechanistic model assessment based on experimental data in narrow rectangular channel

    International Nuclear Information System (INIS)

    Zhou Lei; Yan Xiao; Huang Yanping; Xiao Zejun; Huang Shanfang

    2011-01-01

    Departure from nucleate boiling (DNB) is important for the safety of a PWR. Without assessment against experimental data, it is doubtful whether the existing models can be applied to narrow rectangular channels. Based on experimental data points in narrow rectangular channels, two classical kinds of DNB model, the liquid sublayer dryout model (LSDM) and the bubble crowding model (BCM), were assessed. The results show that the BCM has a much wider application range than the LSDM. Several thermal parameters have systematic influences on the results calculated by the models. The performance of all the models deteriorates as the void fraction increases. The reason may be attributed to the geometrical differences between a circular tube and a narrow rectangular channel. (authors)
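How such an assessment is typically quantified (a generic sketch, not the authors' code): each model's predicted critical heat flux is compared with the measured value point by point, and the mean and RMS deviation of the predicted-to-measured ratio summarize how well the model covers the data. The numbers below are illustrative only:

```python
# Generic model-assessment statistics for CHF/DNB predictions: the
# predicted-to-measured (P/M) ratio per data point, its mean, and the
# RMS deviation of the ratio from the ideal value of 1.
import math

def assess(predicted, measured):
    ratios = [p / m for p, m in zip(predicted, measured)]
    mean = sum(ratios) / len(ratios)
    rms = math.sqrt(sum((r - 1.0) ** 2 for r in ratios) / len(ratios))
    return mean, rms

# Illustrative values, not data from the paper:
mean, rms = assess([1.9, 2.1, 2.05], [2.0, 2.0, 2.0])
```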

  18. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Bhandarkar, Manisha, E-mail: manisha@ipr.res.in; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-11-15

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 is envisaged to make the SST-1 storage system centrally available, so that authenticated users can archive and retrieve experimental data 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include: handling the varied data-writing speeds of different subsystems to central storage; simultaneous read access to the bulk experimental data as well as the essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast it will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State Superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a SAN based data storage system of ∼10 TB (terabytes) capacity has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 is planned for later, to store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  19. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    International Nuclear Information System (INIS)

    Bhandarkar, Manisha; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-01-01

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 is envisaged to make the SST-1 storage system centrally available, so that authenticated users can archive and retrieve experimental data 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include: handling the varied data-writing speeds of different subsystems to central storage; simultaneous read access to the bulk experimental data as well as the essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast it will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State Superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a SAN based data storage system of ∼10 TB (terabytes) capacity has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 is planned for later, to store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  20. Existing experimental criticality data applicable to nuclear-fuel-transportation systems

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1983-02-01

    Analytical techniques are generally relied upon in making criticality evaluations involving nuclear material outside reactors. For these evaluations to be accepted, the calculations must be validated by comparison with experimental data for a known set of conditions having physical and neutronic characteristics similar to those being evaluated analytically. The purpose of this report is to identify existing experimental data that are suitable for use in verifying criticality calculations on nuclear fuel transportation systems. In addition, near-term needs for additional data in this area are identified. Of the considerable amount of existing criticality data applicable to non-reactor systems, those particularly suitable for use in support of nuclear material transportation systems have been identified and catalogued into the following groups: (1) critical assemblies of fuel rods in water; (2) critical assemblies of fuel rods in water containing soluble neutron absorbers; (3) critical assemblies containing solid neutron absorbers; (4) critical assemblies of fuel rods in water with heavy metal reflectors; and (5) critical assemblies of fuel rods in water with irregular features. A listing of the current near-term needs for additional data in each of the groups has been developed for future use in planning criticality research in support of nuclear fuel transportation systems. The criticality experiments needed to provide these data are briefly described and identified according to priority and the relative cost of performing the experiments

  1. Archival and Dissemination of the U.S. and Canadian Experimental Nuclear Reaction Data (EXFOR Project)

    Science.gov (United States)

    Pritychenko, Boris; Hlavac, Stanislav; Schwerer, Otto; Zerkin, Viktor

    2017-09-01

    The Exchange Format (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to a wealth of low- and intermediate-energy nuclear reaction physics data. This resource includes numerical data sets and bibliographical information for more than 22,000 experiments since the beginning of nuclear science. Analysis of the experimental data sets, their recovery and archiving will be discussed. Examples of recent developments in data renormalization, uploads and inverse-reaction calculations for nuclear science and technology applications will be presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development and research activities. It is publicly available at the National Nuclear Data Center website http://www.nndc.bnl.gov/exfor and the International Atomic Energy Agency mirror site http://www-nds.iaea.org/exfor. This work was sponsored in part by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.

  2. Experimental vapor-liquid equilibria data for binary mixtures of xylene isomers

    Directory of Open Access Journals (Sweden)

    W.L. Rodrigues

    2005-09-01

    Separation of aromatic C8 compounds by distillation is a difficult task due to the low relative volatilities of the compounds and the high degree of purity required of the final commercial products. For rigorous simulation and optimization of this separation, a model capable of describing vapor-liquid equilibria accurately is necessary. Nevertheless, experimental data are not available for all binaries at atmospheric pressure. Vapor-liquid equilibria data for binary mixtures were obtained isobarically with a modified Fischer cell at 100.65 kPa. The vapor and liquid phase compositions were analyzed with a gas chromatograph. The methodology was initially tested on cyclohexane + n-heptane data; the results obtained are similar to other data in the literature. Data for xylene binary mixtures were then obtained and, after testing, were considered to be thermodynamically consistent. The experimental data were regressed with Aspen Plus® 10.1, and binary interaction parameters were reported for the most frequently used activity coefficient models and for the classic mixing rules of two cubic equations of state.
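A hedged illustration of the kind of activity-coefficient model regressed against such T-x-y data: the two-parameter Margules form below and the numerical parameters are generic textbook examples, not the models or values reported in the paper:

```python
# Two-parameter Margules activity-coefficient model and the modified-Raoult
# bubble-pressure expression P = x1*g1*Psat1 + x2*g2*Psat2.
# All numeric parameters below are illustrative, not fitted values.
import math

def margules(x1, a12, a21):
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1 ** 2 * (a21 + 2.0 * (a12 - a21) * x2)
    return math.exp(ln_g1), math.exp(ln_g2)

def bubble_pressure(x1, psat1, psat2, a12, a21):
    g1, g2 = margules(x1, a12, a21)
    return x1 * g1 * psat1 + (1.0 - x1) * g2 * psat2

# Near-ideal pairs such as xylene isomers have small Margules parameters,
# so the bubble pressure stays close to Raoult's law:
p = bubble_pressure(0.5, 100.0, 98.0, a12=0.05, a21=0.03)
```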

  3. Theoretical bases and possibilities of program BRASIER for experimental data fitting and management

    International Nuclear Information System (INIS)

    Quintero, B.; Santos, J.; Garcia Yip, F.; Lopez, I.

    1992-01-01

    In this paper the theoretical bases and primary capabilities of the program BRASIER, developed for the management and fitting of experimental data, are presented. Its relevant characteristics are: use of several regression methods, error treatment, a "Point-Drop" technique, multidimensional fitting, friendly interactivity, graphical capabilities and file management. The use of several regression methods gives a greater likelihood of convergence than in similar programs that rely on a single algorithm

  4. Unfolding of true distributions from experimental data distorted by detectors with finite resolutions

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    1993-01-01

    A new procedure for unfolding the true distribution from experimental data distorted by a detector is proposed. For the given detector a result can be found by the least squares method, hence, without bias and involving minimal statistical errors. Stability of the result is achieved at the expense of its information content and/or using additional information on the shape of the distributions to be measured. The method may be applied for detectors with linear or nonlinear distortions. 8 refs.; 5 figs
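A compact numerical sketch of the unfolding idea in the abstract: the measured spectrum y = Rx is inverted by least squares, with an optional curvature penalty tau that buys stability at the expense of information content, as the abstract notes. The response matrix below is an illustrative nearest-neighbour smearing, not one from the paper:

```python
# Least-squares unfolding with optional Tikhonov (second-difference)
# regularization: minimize ||R x - y||^2 + tau * ||L x||^2.
import numpy as np

def unfold(R, y, tau=0.0):
    n = R.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)  # curvature operator as regularizer
    A = R.T @ R + tau * L.T @ L
    return np.linalg.solve(A, R.T @ y)

# Toy 3-bin spectrum smeared by nearest-neighbour migration (illustrative):
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
x_true = np.array([1.0, 2.0, 3.0])
y = R @ x_true
x_hat = unfold(R, y)  # tau=0 recovers x_true exactly for noise-free y
```

With noisy data, increasing `tau` suppresses the unstable oscillating components of the solution, which is the stability/information trade-off the abstract describes.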

  5. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
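A generic permutation sketch in the spirit of the procedures described (not the authors' exact tests): the "average absolute deviation" between data and model predictions serves as the discrepancy statistic, and its null distribution is built by random relabelling. All data values are invented:

```python
# Permutation goodness-of-fit test using average absolute deviation:
# shuffle the observations relative to the model predictions and count
# how often the shuffled discrepancy is at least as large as the observed one.
import random

def avg_abs_dev(observed, expected):
    return sum(abs(o - e) for o, e in zip(observed, expected)) / len(observed)

def permutation_p_value(observed, expected, n_perm=2000, seed=0):
    rng = random.Random(seed)
    stat = avg_abs_dev(observed, expected)
    count = 0
    for _ in range(n_perm):
        perm = observed[:]
        rng.shuffle(perm)  # relabel observations against model predictions
        if avg_abs_dev(perm, expected) >= stat:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction

p = permutation_p_value([0.52, 0.49, 0.61], [0.50, 0.50, 0.60])
```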

  6. Collection of creep fatigue laws and their comparison with experimental data

    International Nuclear Information System (INIS)

    Rieunier, J.B.; Dufresne, J.

    1982-07-01

    A systematic investigation has been undertaken to collect the main models describing creep-fatigue interaction phenomena. A total of 13 models was collected. Simultaneously, 660 experimental data points on 304 stainless steel were collected and compared with the results obtained from the theoretical models. The conclusions are that none of these models correctly describes all the phenomena considered (imposed strain or stress, hold time, two strain levels, etc.), but each of these phenomena is well represented by some of the laws

  7. Experimental and numerical analysis for potential heat reuse in liquid cooled data centres

    International Nuclear Information System (INIS)

    Carbó, Andreu; Oró, Eduard; Salom, Jaume; Canuto, Mauro; Macías, Mario; Guitart, Jordi

    2016-01-01

    Highlights: • The potential for heat reuse in a liquid-cooled data centre has been characterized. • The dynamic behaviour of a liquid-cooled data centre has been studied. • A dynamic energy model of liquid-cooled data centres is developed. • The dynamic energy model has been validated with experimental data. • A relation between server usage and consumption was developed for different IT loads. - Abstract: The rapid growth of the data centre industry has stimulated the interest of both researchers and professionals in reducing the energy consumption and carbon footprint of these unique infrastructures. The implementation of energy efficiency strategies and the use of renewables play an important role in reducing the overall data centre energy demand. Information Technology (IT) equipment produces vast amounts of heat which must be removed, and therefore waste heat recovery is a promising energy efficiency strategy to study in detail. To evaluate the potential of heat reuse, a unique liquid-cooled data centre test bench was designed and built. An extensive thermal characterization under different scenarios was performed. The effective liquid cooling capacity is affected by the inlet water temperature: the lower the inlet water temperature, the higher the liquid cooling capacity; however, the outlet water temperature will then also be low. Therefore, the requirements of the heat reuse application play an important role in the optimization of the cooling configuration. The experimental data were then used to validate a dynamic energy model developed in TRNSYS. This model is able to predict the behaviour of liquid-cooled data centres and can be used to study the potential compatibility between large data centres and different heat reuse applications. The model also incorporates normalized power consumption profiles for heterogeneous workloads that have been derived from realistic IT loads.

  8. Fatigue crack extension in nozzle junctions; comparison of analytical approximations with experimental data

    International Nuclear Information System (INIS)

    Broekhoven, M.J.G.; Ruijtenbeek, M.G. van de

    1975-01-01

    The fracture-mechanics-based stress intensity factor (K-factor) concept has obtained widespread acceptance as a tool for quantitative analysis of both fatigue crack growth and unstable fracture. The present study discusses the applicability of various simple analytical approximations by comparing their results with experimental data. A semi-analytical procedure has been developed whose main characteristics are: the true stress distribution perpendicular to the crack plane in the uncracked structure is used as input data; an extended version of the Shah and Kobayashi solution for elliptical cracks, loaded on their surfaces by tractions described by fourth-order doubly symmetrical polynomials fitted through the data of the previous step, is used to calculate the full K-factor variation along the crack fronts; and several corrections, among others for free surfaces and for a corner radius, are incorporated. The experiments involve careful monitoring of crack growth rates (da/dN) under uniaxial fatigue loading of precracked nozzle-on-plate models, among other means using a closed-circuit TV. The resulting da/dN versus crack length (a) curves are converted into K versus a curves using da/dN versus ΔK curves for the same material (ASTM A 508 C12) obtained by standard procedures. Comparison of theoretical and experimental data yields the conclusions that: simple analytical approximations, as sometimes recommended in the literature, may largely overestimate or underestimate K-factors for nozzle corner cracks; and a computer program based on the semi-analytical procedure yields results within seconds of CPU time once the input data have been generated. These results compare well with experimental and available finite element data for the range of crack depths of practical concern

  9. The impact of retirement on health: quasi-experimental methods using administrative data.

    Science.gov (United States)

    Horner, Elizabeth Mokyr; Cullen, Mark R

    2016-02-19

    Is retirement good or bad for health? Disentangling causality is difficult. Much of the previous quasi-experimental research on the effect of retirement on health used self-reported health and relied upon discontinuities in public retirement incentives across Europe. The current study investigated the effect of retirement on health by exploiting discontinuities in private retirement incentives, using a quasi-experimental study design. Secondary data (1997-2009) on a cohort of male manufacturing workers in a United States setting were used. Health status was determined using claims data from private insurance and Medicare. Analyses used employer-based administrative and claims data and claims data from Medicare. Widely used selection-on-observables models overstate the negative impact of retirement due to the endogeneity of the decision to retire. In addition, health status as measured by administrative claims data provides some advantages over the more commonly used survey items. Using an instrument and administrative health records, we find null to positive effects of retirement on all fronts, with a possible exception of increased risk for diabetes. This study provides evidence that retirement is not detrimental and may be beneficial to health for a sample of manufacturing workers. In addition, it supports previous research indicating that quasi-experimental methodologies are necessary to evaluate the relationship between retirement and health, as any selection-on-observables model will overstate the negative relationship of retirement to health. Further, it provides a model for how such research could be implemented in countries like the United States that do not have a strong public pension program. Finally, it demonstrates that such research need not rely upon survey data, which has certain shortcomings and is not always available for homogeneous samples.
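A minimal instrumental-variables (two-stage least squares) sketch of the quasi-experimental idea in the abstract: the endogenous treatment (retirement) is instrumented by an exogenous incentive so that the estimated health effect is not biased by who chooses to retire. The data, variable names and coefficients below are entirely invented for illustration:

```python
# Two-stage least squares (2SLS) on simulated data with an unobserved
# confounder: plain OLS of y on x would be biased, but instrumenting x
# with z recovers the true causal effect.
import numpy as np

def two_stage_least_squares(y, x, z):
    # Stage 1: project the endogenous regressor x onto the instrument z.
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the outcome y on the fitted values x_hat.
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]  # slope on x_hat

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                       # instrument: exogenous incentive
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + 0.5 * rng.normal(size=n)   # endogenous treatment
y = 2.0 * x - 1.5 * u + 0.5 * rng.normal(size=n)  # outcome; true effect is 2.0
beta = two_stage_least_squares(y, x, z)      # close to 2.0 despite confounding
```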

  10. Compilation of reactor-physical data of the AVR experimental reactor for 1982

    International Nuclear Information System (INIS)

    Werner, H.; Wawrzik, U.; Grotkamp, T.; Buettgen, I.

    1983-12-01

    Since the end of 1981 the calculation model AVR-80 has been taken as the basis for compiling reactor-physical data of the AVR experimental reactor. A brief outline of the operating history of 1982 is given, including the beginning of a large-scale experiment on the change-over from high-enriched to low-enriched uranium. Calculations of spectral shift, diffusion, temperature, burnup, and recirculation of the fuel elements are described in brief. The essential results of the neutron-physical and thermodynamic calculations and the characteristic data of the various types of fuel used are shown in tables and illustrations. (RF)

  11. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    Science.gov (United States)

    2017-03-02

    AFRL-AFOSR-UK-TR-2017-0020. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data. Philip Walther, Universität Wien. Final report under grant FA9550-16-1-0004 (program element 61102F), sponsored/monitored by EOARD, Unit 4515, APO AE 09421-4515.

  12. CDApps: integrated software for experimental planning and data processing at beamline B23, Diamond Light Source.

    Science.gov (United States)

    Hussain, Rohanah; Benning, Kristian; Javorfi, Tamas; Longo, Edoardo; Rudd, Timothy R; Pulford, Bill; Siligardi, Giuliano

    2015-03-01

    The B23 Circular Dichroism beamline at Diamond Light Source has been operational since 2009 and has seen visits from more than 200 user groups, who have generated large amounts of data. Based on the experience of overseeing users' progress at B23, four key areas requiring the most assistance are identified: planning of experiments and note-keeping; designing titration experiments; processing and analysis of the collected data; and production of experimental reports. To streamline these processes an integrated software package has been developed and made available for the users. This article summarizes the main features of the software.

  13. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Crecy, F de

    1994-12-31

    A mathematical tool, using pseudo-cubic thin-plate-type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with associated uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.
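The smoothing-parameter selection described above can be illustrated with a small sketch. This is not the authors' pseudo-cubic thin-plate spline: it uses a Gaussian radial-basis kernel ridge smoother (a different linear smoother) on invented 1-D data, but the generalized cross validation criterion, GCV(lam) = n * ||(I - H)y||^2 / tr(I - H)^2 for a linear smoother y_hat = H(lam) y, is the same idea. Kernel width, data, and lambda grid are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experimental" points: smooth trend plus noise.
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + 0.15 * rng.standard_normal(x.size)

# Gaussian RBF kernel ridge smoother: y_hat = H(lam) @ y with
# H = K (K + lam I)^-1, a linear smoother like a smoothing spline.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))

def gcv_score(lam):
    """Generalized cross validation score for smoothing parameter lam."""
    H = K @ np.linalg.inv(K + lam * np.eye(x.size))
    resid = y - H @ y
    n = x.size
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

# Pick the smoothing parameter minimizing the GCV score over a grid.
lams = np.logspace(-6, 2, 50)
best_lam = min(lams, key=gcv_score)
H = K @ np.linalg.inv(K + best_lam * np.eye(x.size))
y_smooth = H @ y

# How close the GCV-chosen smoother gets to the noise-free trend.
rms_fit = np.sqrt(np.mean((y_smooth - np.sin(2 * np.pi * x)) ** 2))
print(best_lam, rms_fit)
```

The key property exploited here is that for a linear smoother the hat matrix H is available in closed form, so the GCV score can be evaluated cheaply for each candidate smoothing parameter.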

  14. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate-type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with associated uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs

  15. Distribution of iodine between water and steam: a reassessment of experimental data on hypoiodous acid

    International Nuclear Information System (INIS)

    Turner, D.J.

    1978-01-01

    A re-analysis has been made of published data on the steam/water distribution of iodine between 118 °C and 287 °C. The analysis assumes that the principal reactions are I₂ + H₂O = HIO + H⁺ + I⁻ and 3I₂ + 3H₂O = IO₃⁻ + 5I⁻ + 6H⁺, for which the equilibrium constants are respectively K₂ and K₅. The analysis of the experimental data was supported by using empirically and theoretically based equations which describe the temperature dependence of equilibrium constants and by comparing predicted behaviour with the observations reported from a number of boiling water reactors. (author)
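The temperature dependence of equilibrium constants such as K2 is commonly represented by a van 't Hoff form, ln K = dS/R - dH/(R T), which is linear in 1/T. A minimal sketch of such a fit; the K2 values below are invented for illustration and are not Turner's re-analysed data.

```python
import numpy as np

# Hypothetical equilibrium-constant values at several temperatures;
# illustrative numbers only, not the data re-analysed in the paper.
T = np.array([391.15, 423.15, 473.15, 523.15, 560.15])   # 118-287 C in kelvin
K2 = np.array([3.2e-12, 1.1e-11, 6.0e-11, 2.4e-10, 6.5e-10])

# van 't Hoff: ln K = dS/R - dH/(R T)  ->  linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(K2), 1)

R = 8.314              # gas constant, J/(mol K)
dH = -slope * R        # reaction enthalpy, J/mol
dS = intercept * R     # reaction entropy, J/(mol K)

# Interpolate K2 at an intermediate temperature from the fitted line.
K2_500 = np.exp(intercept + slope / 500.0)
print(dH, dS, K2_500)
```

Because ln K is linear in 1/T under this model, a two-parameter least-squares fit both smooths the scattered values and lets the constant be evaluated at any temperature in the range.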

  16. Integral method of treatment of experimental data from radiochemical solar neutrino detectors

    International Nuclear Information System (INIS)

    Gavrin, V.N.; Kopylov, A.V.; Streltsov, A.V.

    1985-01-01

    An analysis is made of the statistical errors in solar neutrino detection by radiochemical detectors for different exposure times. It is shown that short exposures (τₑ = one-half to one half-life) give the minimal one-year error. The possibility of detecting variation of the solar neutrino flux due to annual changes of the Earth-Sun distance is considered. The integral method of treatment of the experimental data is described. Results are given of the statistical treatment of computer-simulated data

  17. The upgrade of the J-TEXT experimental data access and management system

    International Nuclear Information System (INIS)

    Yang, C.; Zhang, M.; Zheng, W.; Liu, R.; Zhuang, G.

    2014-01-01

    Highlights: • The J-TEXT DAMS is developed on the B/S (browser/server) model, which makes the system convenient to access. • JWeb-Scope adopts a segment strategy for reading data, which improves reading speed. • DAMS integrates data management with JWeb-Scope, giving visitors easy access to experiment data. • JWeb-Scope can be accessed from anywhere in the world to plot experiment data and zoom in or out smoothly. - Abstract: The experimental data of the J-TEXT tokamak are stored in an MDSplus database. The old J-TEXT data access system was based on the tools provided by MDSplus. Since the number of signals is huge, data retrieval for an experiment was difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS), based on MDSplus, has been developed. The DAMS leaves the old MDSplus system unchanged while providing new tools that help users handle all signals and retrieve the signals they need according to their information requirements. The DAMS also offers users a way to create jScope configuration files which can be downloaded to the local computer. In addition, the DAMS provides the JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their choice and zoom in or out smoothly. The whole system is based on the B/S model, so users need only a browser to access the DAMS. The DAMS has been tested and provides a good user experience. It will later be integrated into the J-TEXT remote participation system

  18. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations

    Directory of Open Access Journals (Sweden)

    Andrea Stocco

    2018-04-01

    This article describes the data analyzed in the paper “Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model” (Stocco et al., 2017) [1]. The data include behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompass individual data files (in both preprocessed and native output formats) as well as group-level summary files. The simulation data include the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  19. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    Science.gov (United States)

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  20. A Harmony Search Algorithm for the Reproduction of Experimental Data in the Social Force Model

    Directory of Open Access Journals (Sweden)

    Osama Moh'd Alia

    2014-01-01

    Crowd dynamics is a discipline dealing with the management and flow of crowds in congested places and circumstances. Pedestrian congestion is a pressing issue to which crowd dynamics models can be applied. The reproduction of experimental data (the velocity-density relation and the specific flow rate) is a major component of the validation and calibration of such models. In the social force model (SFM), researchers have proposed various techniques to adjust the essential parameters governing the repulsive social force in an effort to reproduce such experimental data. Despite these and various other efforts, optimal reproduction of the real-life data has not been achieved. In this paper, a harmony search-based technique called HS-SFM is proposed to overcome the difficulties of the calibration process for the SFM, where the fundamental diagram of the velocity-density relation and the specific flow rate are reproduced in conformance with the related empirical data. The improvisation process of harmony search (HS) is modified by incorporating the global-best particle concept from particle swarm optimization (PSO) to increase the convergence rate and overcome the high computational demands of HS-SFM. Simulation results have shown HS-SFM's ability to produce near-optimal SFM parameter values, which makes it possible for the SFM to almost reproduce the related empirical data.
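A bare-bones harmony search optimizer can illustrate the calibration idea. This sketch is not the paper's HS-SFM: it omits the global-best PSO modification, and a toy relative-error objective stands in for the social-force-model simulation. The target parameter values, bounds, and HS settings are all invented.

```python
import random

# Toy objective standing in for the SFM calibration error: distance of the
# candidate parameter vector from a "target" that reproduces the empirical
# fundamental diagram. The real objective would run a social-force simulation.
TARGET = [2000.0, 0.08]   # hypothetical repulsive-force strength and range

def objective(params):
    return sum((p - t) ** 2 / t ** 2 for p, t in zip(params, TARGET))

def harmony_search(bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=1):
    rng = random.Random(seed)
    # Harmony memory: random initial candidate solutions, best first.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=objective)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                val = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    val += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                   # random consideration
                val = rng.uniform(lo, hi)
            new.append(min(max(val, lo), hi))
        if objective(new) < objective(memory[-1]):  # replace worst harmony
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

best = harmony_search([(500.0, 5000.0), (0.01, 1.0)])
print(best, objective(best))
```

Each improvisation draws a component either from memory (possibly pitch-adjusted) or at random, so the search balances exploitation of good harmonies against exploration of the bounds.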

  1. Code REX to fit experimental data to exponential functions and graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    The REX code, written in Fortran IV, fits a set of experimental data to different kinds of functions: straight line (Y = A + B·X) and various exponential types (Y = A·B^X, Y = A·X^B, Y = A·exp(B·X)), using the least-squares criterion. The fitting can be done either for one selected function or for the four simultaneously, which allows choosing the function that best fits the data, since the statistics of all the fits are presented. Further, it plots the fitted function in the appropriate coordinate-axis system. An additional option also allows graphic plotting of the experimental data used for the fitting. All the data necessary to execute the code are requested from the operator at the terminal screen, in an interactive screen-operator dialogue, and the values are entered through the keyboard. The code can be executed on any computer provided with a graphic screen and keyboard terminal, with an X-Y plotter serially connected to the graphics terminal. (Author) 5 refs
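The four REX function forms can all be fitted by linearisation (taking logarithms where needed) and compared by residual sum of squares. A sketch in Python rather than Fortran IV, on invented data; note that Y = A·B^X and Y = A·exp(B·X) are reparameterisations of the same model, so their fits coincide.

```python
import numpy as np

# Synthetic data following Y = A * exp(B * X), the kind of fit REX performs.
rng = np.random.default_rng(2)
x = np.linspace(1.0, 5.0, 25)
y = 2.0 * np.exp(0.7 * x) * np.exp(0.02 * rng.standard_normal(x.size))

def fit_all(x, y):
    """Least-squares fit of the four REX function forms via linearisation."""
    fits = {}
    # Y = A + B X  (plain straight line)
    B, A = np.polyfit(x, y, 1)
    fits["Y=A+BX"] = (A, B, y - (A + B * x))
    # Y = A B^X    ->  ln Y = ln A + X ln B
    b, a = np.polyfit(x, np.log(y), 1)
    fits["Y=A*B^X"] = (np.exp(a), np.exp(b), y - np.exp(a) * np.exp(b) ** x)
    # Y = A X^B    ->  ln Y = ln A + B ln X
    b, a = np.polyfit(np.log(x), np.log(y), 1)
    fits["Y=A*X^B"] = (np.exp(a), b, y - np.exp(a) * x ** b)
    # Y = A e^{BX} ->  ln Y = ln A + B X
    b, a = np.polyfit(x, np.log(y), 1)
    fits["Y=A*exp(BX)"] = (np.exp(a), b, y - np.exp(a) * np.exp(b * x))
    return fits

# Choose the form with the smallest residual sum of squares, as REX's
# fitting statistics allow the user to do.
fits = fit_all(x, y)
best = min(fits, key=lambda k: np.sum(fits[k][2] ** 2))
A, B, _ = fits[best]
print(best, A, B)
```

The log transforms turn each nonlinear form into a straight-line fit, which is exactly the kind of computation that was practical in Fortran IV; the price is that the least-squares criterion is applied in the transformed variable rather than in Y itself.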

  2. Control and data acquisition systems for the Fermi Elettra experimental stations

    International Nuclear Information System (INIS)

    Borghes, R.; Chenda, V.; Curri, A.; Gaio, G.; Kourousias, G.; Lonza, M.; Passos, G.; Passuello, R.; Pivetta, L.; Prica, M.; Pugliese, R.; Strangolino, G.

    2012-01-01

    FERMI-Elettra is a single-pass Free Electron Laser (FEL) user-facility covering the wavelength range from 100 nm to 4 nm. The facility is located in Trieste, Italy, nearby the third-generation synchrotron light source Elettra. Three experimental stations, dedicated to different scientific areas, have been installed in 2011: Low Density Matter (LDM), Elastic and Inelastic Scattering (EIS) and Diffraction and Projection Imaging (DiProI). The experiment control and data acquisition system is the natural extension of the machine control system. It integrates a shot-by-shot data acquisition framework with a centralized data storage and analysis system. Low-level applications for data acquisition and online processing have been developed using the Tango framework on Linux platforms. High-level experimental applications can be developed on both Linux and Windows platforms using C/C++, Python, LabView, IDL or Matlab. The Elettra scientific computing portal allows remote access to the experiment and to the data storage system. (authors)

  3. Review of experimental data for modelling LWR fuel cladding behaviour under loss of coolant accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Massih, Ali R. [Quantum Technologies AB, Uppsala Science Park (Sweden)]

    2007-02-15

    An extensive range of experiments has been conducted in the past to identify and quantitatively understand fuel rod behaviour under loss-of-coolant accident (LOCA) conditions in light water reactors (LWRs). The experimental data obtained provide the basis for the current emergency core cooling system acceptance criteria under LOCA conditions for LWRs. The results of recent experiments indicate that cladding alloy composition and high-burnup effects influence LOCA acceptance criteria margins. In this report, we review some important past and recent experimental results. We first discuss the background to the acceptance criteria for LOCA, namely clad embrittlement phenomenology, clad embrittlement criteria (limitations on maximum clad oxidation and peak clad temperature) and the experimental bases for the criteria. Two broad kinds of test have been carried out under LOCA conditions: (i) separate-effect tests, studying clad oxidation, clad deformation and rupture, and zirconium-alloy allotropic phase transition during LOCA; (ii) integral LOCA tests, in which the entire LOCA sequence is simulated on a single rod or a multi-rod array in a fuel bundle, in a laboratory or in a test reactor. The tests and results are discussed, and empirical correlations deduced from these tests, together with quantitative models, are considered. In particular, the impact of niobium in zirconium-based cladding, and of the hydrogen content of the clad, on the allotropic phase transformation during LOCA and on the burst stress is discussed. We review some recent LOCA integral test results with emphasis on thermal shock tests. Finally, suggestions for modelling and further evaluation of certain experimental results are made.

  4. Analysis of experimental air-detritiation data using TSOAK-M1

    International Nuclear Information System (INIS)

    Land, R.H.; Maroni, V.A.; Minkoff, M.

    1980-01-01

    A computer code (TSOAK-M1) has been developed which permits the determination of tritium reaction (T₂ → HTO)/adsorption/release and instrument correction parameters from enclosure (building) detritiation test data. The code is based on a simplified model which treats each parameter as a normalized time-independent constant throughout the data-unfolding steps. TSOAK-M1 was used to analyze existing small-cubicle test data with good success, and the resulting normalized parameters were employed to evaluate hypothetical reactor-building detritiation scenarios. It was concluded from the latter evaluation that the complications associated with moisture formation, adsorption, and release, particularly in terms of extended cleanup times, may not be as great as was previously thought. It is recommended that the validity of the TSOAK-M1 model be tested using data from detritiation tests conducted on large experimental enclosures (5 to 10 m³) and, if possible, actual facility buildings
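A minimal sketch of the kind of simplified model described above, with conversion, cleanup, and wall release treated as time-independent first-order constants. The equations, rate constants, and release term below are invented stand-ins, not TSOAK-M1's actual formulation.

```python
# Hypothetical first-order enclosure balance (not TSOAK-M1's exact equations):
#   d[T2]/dt  = -(k_clean + k_conv) [T2]
#   d[HTO]/dt =  k_conv [T2] - k_clean [HTO] + r_release
k_clean = 2.0      # cleanup rate, 1/h (enclosure volume changes per hour)
k_conv = 0.05      # T2 -> HTO conversion rate, 1/h
r_release = 1e-3   # wall release of HTO, concentration units per hour

def simulate(t2_0, hto_0, t, dt=1e-3):
    """Explicit-Euler integration of the two-species balance."""
    t2, hto = t2_0, hto_0
    for _ in range(int(t / dt)):
        dt2 = -(k_clean + k_conv) * t2
        dhto = k_conv * t2 - k_clean * hto + r_release
        t2 += dt * dt2
        hto += dt * dhto
    return t2, hto

t2, hto = simulate(t2_0=1.0, hto_0=0.0, t=5.0)
print(t2, hto)
# Long after the T2 has been cleaned up, the HTO concentration approaches
# the steady state set by wall release: hto -> r_release / k_clean.
```

With all parameters constant, the forward model is linear, which is what makes unfolding the constants from measured concentration histories tractable.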

  5. Experimental data processing technique for nonstationary heat transfer on fuel rod simulators

    International Nuclear Information System (INIS)

    Nikonov, S.P.; Nikonov, A.P.; Belyukin, V.A.

    1982-01-01

    Non-stationary heat-transfer data processing is considered in connection with experimental studies of emergency cooling, in which fuel rod simulators with both direct and indirect shell heating were used. The objective of the data processing was to obtain the temperature distribution within the simulator, the heat flux removed by the coolant, and the shell-coolant heat-transfer coefficient. Special attention is paid to the calculation of the temperature distribution when processing data from the reflooding experiments. In this case two quantities are assumed to be known: the time dependence of the temperature at a certain point within the simulator cross-section, and the heat flux at some point of the same cross-section. The preparation of the initial data for the calculations, employing smoothing by cubic spline functions, is considered as well, applying an algorithm reported in the literature that is efficient for the given functional dependency, where the deviation at each point is known

  6. Inference of missing data and chemical model parameters using experimental statistics

    Science.gov (United States)

    Casey, Tiernan; Najm, Habib

    2017-11-01

    A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NCSA under contract DE-NA-0003525.
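A rejection-sampling sketch of approximate Bayesian computation for Arrhenius parameters: draw (ln A, Ea) from a prior, simulate a noisy hypothetical data set, and keep the draws whose simulated summaries agree with the target statistics. The priors, temperatures, nominal parameters, noise level, and tolerance are all invented, and the paper's maximum-entropy formulation is not reproduced here.

```python
import math
import random

R = 8.314                                # gas constant, J/(mol K)
TEMPS = [800.0, 1000.0, 1200.0]          # K, hypothetical target experiments

# "Reported statistics": nominal Arrhenius parameters, invented for
# illustration (the paper infers these from literature reports).
LNA_TRUE, EA_TRUE = 20.0, 1.2e5          # ln(A) and Ea in J/mol

def ln_k(lnA, Ea, T):
    """Arrhenius rate in log form: ln k = ln A - Ea / (R T)."""
    return lnA - Ea / (R * T)

# Target summaries: ln k at each temperature for the nominal parameters.
target = [ln_k(LNA_TRUE, EA_TRUE, T) for T in TEMPS]

def abc_rejection(n_keep=200, tol=0.3, seed=3):
    """Rejection ABC: keep prior draws whose simulated data lie near target."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_keep:
        lnA = rng.uniform(15.0, 25.0)            # flat priors (assumed)
        Ea = rng.uniform(0.8e5, 1.6e5)
        # Propose one noisy hypothetical data set.
        sim = [ln_k(lnA, Ea, T) + rng.gauss(0.0, 0.05) for T in TEMPS]
        dist = max(abs(s - t) for s, t in zip(sim, target))
        if dist < tol:
            accepted.append((lnA, Ea))
    # The accepted sample represents the joint density on (lnA, Ea).
    return accepted

post = abc_rejection()
mean_lnA = sum(p[0] for p in post) / len(post)
mean_Ea = sum(p[1] for p in post) / len(post)
print(mean_lnA, mean_Ea)
```

The accepted pairs exhibit the strong lnA-Ea correlation typical of Arrhenius fits, which is exactly the joint-density structure the inference is meant to recover.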

  7. Seven challenges for model-driven data collection in experimental and observational studies

    Directory of Open Access Journals (Sweden)

    J. Lessler

    2015-03-01

    Infectious disease models are both concise statements of hypotheses and powerful techniques for creating tools from hypotheses and theories. As such, they have tremendous potential for guiding data collection in experimental and observational studies, leading to more efficient testing of hypotheses and more robust study designs. In numerous instances, infectious disease models have played a key role in informing data collection, including the Garki project studying malaria, the response to the 2009 pandemic of H1N1 influenza in the United Kingdom, and studies of T-cell immunodynamics in mammals. However, such synergies remain the exception rather than the rule, and a close marriage of dynamic modeling and empirical data collection is far from the norm in infectious disease research. Overcoming the challenges to using models to inform data collection has the potential to accelerate innovation and to improve practice in how we deal with infectious disease threats.

  8. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Sitting-acquired deep tissue injuries (SADTI) are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI a new approach is under development: a musculo-skeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for different chair postures well. The correlation coefficients between experiment and model for the seat angle, backrest angle, and footrest height were 0.93, 0.96, and 0.95, respectively. The study shows good agreement between experimental data and model predictions of the forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.

  9. Stereochemical analysis of (+)-limonene using theoretical and experimental NMR and chiroptical data

    Science.gov (United States)

    Reinscheid, F.; Reinscheid, U. M.

    2016-02-01

    Using limonene as a test molecule, the success and the limitations of three chiroptical methods (optical rotatory dispersion (ORD), and electronic and vibrational circular dichroism, ECD and VCD) could be demonstrated. At quite low levels of theory (mpw1pw91/cc-pvdz, IEFPCM (integral equation formalism polarizable continuum model)) the experimental ORD values differ by less than 10 units from the calculated values. Modelling in the condensed phase still represents a challenge, so experimental NMR data were used to test for aggregation and solvent-solute interactions. After establishing a reasonable structural model, only the ECD spectra prediction showed a decisive dependence on the basis set: only augmented (in the case of Dunning's basis sets) or diffuse (in the case of Pople's basis sets) basis sets predicted the position and shape of the ECD bands correctly. Based on these results, we propose a procedure to assign the absolute configuration (AC) of an unknown compound using the comparison between experimental and calculated chiroptical data.

  10. Statistical analysis of correlated experimental data and neutron cross section evaluation

    International Nuclear Information System (INIS)

    Badikov, S.A.

    1998-01-01

    A technique for the evaluation of neutron cross sections on the basis of statistical analysis of correlated experimental data is presented. The most important stages of the evaluation, from compilation of the correlation matrix of measurement uncertainties to representation of the analysis results in ENDF-6 format, are described in detail. Special attention is paid to the physically motivated restriction (positive definiteness) on the covariance matrix of the parameter uncertainties generated by the least-squares fit. The requirements on the source experimental data that ensure satisfaction of this restriction are formulated; in particular, the correlation matrices of the measurement uncertainties should themselves be positive definite. Variants of modelling positive-definite correlation matrices of measurement uncertainties, for situations where their consistent calculation on the basis of experimental information is impossible, are discussed. The technique described is used to create a new generation of evaluated dosimetric reaction cross sections for the first version of the Russian dosimetric file (including nontrivial covariance information)
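The least-squares step with correlated measurement uncertainties is generalised least squares: beta = (X^T C^-1 X)^-1 X^T C^-1 y, with parameter covariance (X^T C^-1 X)^-1, which is positive definite whenever C is positive definite and X has full rank. A sketch on an invented linear "cross-section" model with a statistical-plus-normalisation covariance; the data and model are illustrative, not an actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "cross-section" model: sigma(E) = b0 + b1 * E.
E = np.linspace(1.0, 10.0, 8)
X = np.column_stack([np.ones_like(E), E])
beta_true = np.array([5.0, 0.4])
mu = X @ beta_true

# Measurement covariance: uncorrelated statistical part plus a fully
# correlated normalisation component, a common source of correlation
# in experimental cross-section data.
sig_stat = 0.2
f_norm = 0.03                              # 3 % normalisation uncertainty
C = sig_stat ** 2 * np.eye(E.size) + np.outer(f_norm * mu, f_norm * mu)

# One simulated correlated measurement.
y = rng.multivariate_normal(mu, C)

# Generalised least squares fit and parameter covariance.
Ci = np.linalg.inv(C)
cov_beta = np.linalg.inv(X.T @ Ci @ X)
beta_hat = cov_beta @ X.T @ Ci @ y

# The parameter covariance must come out positive definite.
eigvals = np.linalg.eigvalsh(cov_beta)
print(beta_hat, eigvals)
```

Checking the eigenvalues of the parameter covariance is the direct numerical counterpart of the positive-definiteness restriction discussed in the abstract.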

  11. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
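A regression model search of this kind can be sketched as greedy forward selection over a pool of candidate terms, adding at each step the term that most reduces the residual sum of squares. This is far simpler than the NASA algorithm (no statistical quality constraints or balance-calibration specifics); the data and candidate terms are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Candidate regressor terms for a two-variable calibration-style data set.
n = 200
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
terms = {
    "x1": x1, "x2": x2, "x1*x2": x1 * x2,
    "x1^2": x1 ** 2, "x2^2": x2 ** 2,
}
# True response uses only an intercept, x1, and the x1*x2 cross term.
y = 3.0 + 2.0 * x1 - 1.5 * x1 * x2 + 0.05 * rng.standard_normal(n)

def forward_search(terms, y, max_terms=3):
    """Greedy forward selection: add the term that most reduces the SSE."""
    chosen, cols = [], [np.ones_like(y)]
    for _ in range(max_terms):
        best_name, best_sse = None, None
        for name, col in terms.items():
            if name in chosen:
                continue
            X = np.column_stack(cols + [col])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ beta) ** 2)
            if best_sse is None or sse < best_sse:
                best_name, best_sse = name, sse
        chosen.append(best_name)
        cols.append(terms[best_name])
    return chosen

model = forward_search(terms, y)
print(model)
```

Each candidate addition costs one least-squares solve, so the search visits only a small fraction of all possible term subsets; that pruning is the source of the CPU-time savings a simplified search trades against optimality guarantees.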

  12. In situ impulse test: an experimental and analytical evaluation of data interpretation procedures

    International Nuclear Information System (INIS)

    1975-08-01

    Special experimental field testing and analytical studies were undertaken at Fort Lawton in Seattle, Washington, to study ''close-in'' wave propagation and evaluate data interpretation procedures for a new in situ impulse test. This test was developed to determine the shear wave velocity and dynamic modulus of soils underlying potential nuclear power plant sites. The test is different from conventional geophysical testing in that the velocity variation with strain is determined for each test. In general, strains between 10⁻¹ and 10⁻³ percent are achieved. The experimental field work consisted of performing special tests in a large test sand fill to obtain detailed ''close-in'' data. Six recording transducers were placed at various points on the energy source, while approximately 37 different transducers were installed within the soil fill, all within 7 feet of the energy source. Velocity measurements were then taken simultaneously under controlled test conditions to study shear wave propagation phenomenology and help evaluate data interpretation procedures. Typical test data are presented along with detailed descriptions of the results

  13. Comparison of a fuel sheath failure model with published experimental data

    International Nuclear Information System (INIS)

    Varty, R.L.; Rosinger, H.E.

    1982-01-01

    A fuel sheath failure model has been compared with the published results of experiments in which a Zircaloy-4 fuel sheath was subjected to a temperature ramp and a differential pressure until failure occurred. The model assumes that the deformation of the sheath is controlled by steady-state creep and that there is a relationship between tangential stress and temperature at the instant of failure. The sheath failure model predictions agree reasonably well with the experimental data. The burst temperature is slightly overpredicted by the model. The burst strain is overpredicted for small experimental burst strains but is underpredicted otherwise. The reasons for these trends are discussed and the extremely wide variation in burst strain reported in the literature is explained using the model

  14. Computational study of a low head draft tube and validation with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Henau, V De; Payette, F A; Sabourin, M [Alstom Power Systems, Hydro 1350 chemin Saint-Roch, Sorel-Tracy (Quebec), J3R 5P9 (Canada); Deschenes, C; Gagnon, J M; Gouin, P, E-mail: vincent.dehenau@power.alstom.co [Hydraulic Machinery Laboratory, Laval University 1065 ave. de la Medecine, Quebec (Canada)

    2010-08-15

    The objective of this paper is to investigate methodologies to improve the reliability of CFD analysis of low head turbine draft tubes. When only the draft tube performance is investigated, the study indicates that draft tube only simulations with an adequate treatment of the inlet boundary conditions for velocity and turbulence are a good alternative to rotor/stator (stage) simulations. The definition of the inlet velocity in the near wall regions is critical to get an agreement between the stage and draft tube only solutions. An average turbulent kinetic energy intensity level and average turbulent kinetic energy dissipation length scale are sufficient as turbulence inlet conditions as long as these averages are coherent with the stage solution. Comparisons of the rotor/stator simulation results to the experimental data highlight some discrepancies between the predicted draft tube flow and the experimental observations.

  15. The use of experimental data in an MTR-type nuclear reactor safety analysis

    Science.gov (United States)

    Day, Simon E.

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core.

  16. The use of experimental data in an MTR-type nuclear reactor safety analysis

    International Nuclear Information System (INIS)

    Day, S.E.

    2006-01-01

    Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of this is unprotected RIAs in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of this experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data with suitable account taken for differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise response of the system as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects including the dependence on coolant flow are also examined. A rigorous curve fitting approach and error assessment is used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core. (author)

  17. The use of experimental data in an MTR-type nuclear reactor safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Day, S.E.

    2006-07-01

Reactivity initiated accidents (RIAs) are a category of events required for research reactor safety analysis. A subset of these is unprotected RIAs, in which mechanical systems or human intervention are not credited in the response of the system. Light-water cooled and moderated MTR-type (i.e., aluminum-clad uranium plate fuel) reactors are self-limiting up to some reactivity insertion limit, beyond which fuel damage occurs. This characteristic was studied in the Borax and Spert reactor tests of the 1950s and 1960s in the USA. This thesis considers the use of these experimental data in generic MTR-type reactor safety analysis. The approach presented herein is based on fundamental phenomenological understanding and uses correlations in the reactor test data, with suitable account taken of differences in important system parameters. Specifically, a semi-empirical approach is used to quantify the relationship between the power, energy and temperature rise responses of the system, as well as parametric dependencies on void coefficient and the degree of subcooling. Secondary effects, including the dependence on coolant flow, are also examined. A rigorous curve-fitting approach and error assessment are used to quantify the trends in the experimental data. In addition to the initial power burst stage of an unprotected transient, the longer-term stability of the system is considered with a stylized treatment of characteristic power/temperature oscillations (chugging). A bridge from the HEU-based experimental data to the LEU fuel cycle is assessed and outlined based on existing simulation results presented in the literature. A cell-model based parametric study is included. The results are used to construct a practical safety analysis methodology for determining reactivity insertion safety limits for a light-water moderated and cooled MTR-type core. (author)

  18. Validation of the CATHARE2 code against experimental data from Brayton-cycle plants

    International Nuclear Information System (INIS)

    Bentivoglio, Fabrice; Tauveron, Nicolas; Geffraye, Genevieve; Gentner, Herve

    2008-01-01

In recent years the Commissariat a l'Energie Atomique (CEA) has commissioned a wide range of feasibility studies of future advanced nuclear reactors, in particular gas-cooled reactors (GCR). The thermohydraulic behaviour of these systems is a key issue for, among other things, the design of the core, the assessment of thermal stresses, and the design of decay heat removal systems. These studies therefore require efficient and reliable simulation tools capable of modelling the whole reactor, including the core, the core vessel, piping, heat exchangers and turbo-machinery. CATHARE2 is a 1D thermal-hydraulic reference safety code developed and extensively validated for the French pressurized water reactors. It has recently been adapted to deal also with gas-cooled reactor applications. In order to validate CATHARE2 for these new applications, CEA has initiated an ambitious long-term experimental program. The foreseen experimental facilities range from small-scale loops for physical correlations to component technology and system demonstration loops. In the short term, CATHARE2 is being validated against existing experimental data, in particular from the German power plants Oberhausen I and II. These facilities were both operated by the German utility Energie Versorgung Oberhausen (E.V.O.), and their power conversion systems resemble those of high-temperature reactor concepts: Oberhausen I is a 13.75-MWe Brayton-cycle air turbine plant, and Oberhausen II is a 50-MWe Brayton-cycle helium turbine plant. The paper presents these two plants, the adopted CATHARE2 modelling, and a comparison between experimental data and code results for both steady state and transient cases

  19. An assessment of prediction methods of CHF in tubes with a large experimental data bank

    International Nuclear Information System (INIS)

    Leung, L.K.H.; Groeneveld, D.C.

    1993-01-01

An assessment of prediction methods of CHF in tubes has been carried out using an expanded CHF data bank at Chalk River Laboratories (CRL). It includes eight different CHF look-up tables (two AECL versions and six USSR (or Russian) versions) and three empirical correlations. These prediction methods were developed from relatively large data bases and therefore have a wide range of application. Some limitations, however, were imposed in this study to avoid invalid predictions due to extrapolation of these methods. The comparisons are therefore limited to the specific data base tailored to suit the range of each individual method, which results in a different number of data points being used in each case. The comparison of predictions against the experimental data is based on the constant inlet-condition approach (i.e., the pressure, mass flux, inlet fluid temperature and tube geometry are the primary parameters). Overall, the AECL tables have the widest range of application. They are assessed with 21 771 data points and the root-mean-square error is only 8.3%. About 60% of these data were used in the development of the AECL tables. The best version of the USSR/Russian CHF table is valid for 13 300 data points with a root-mean-square error of 8.8%. The USSR/Russian table that has the widest range of application covers a total of 18 800 data points, but the error increases to 9.3%. The range of application of the empirical correlations, however, is generally much narrower than that of the CHF tables, so the number of data points used to assess them is further limited. Among the tested correlations, the Becker and Persson correlation covers the least data (only 7 499 data points) but has the best accuracy of the correlations (a root-mean-square error of 9.71%). 33 refs., 2 figs., 3 tabs
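The root-mean-square error statistic used above to rank the prediction methods can be sketched in a few lines. The CHF values below are invented for illustration; they are not taken from the CRL data bank:

```python
import math

def rms_percent_error(predicted, measured):
    """Root-mean-square of the relative prediction error, in percent,
    as used to compare CHF prediction methods against a data bank."""
    ratios = [(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * math.sqrt(sum(r * r for r in ratios) / len(ratios))

# Hypothetical CHF values in kW/m^2 (illustration only)
measured  = [1500.0, 2200.0, 3100.0, 4000.0]
predicted = [1450.0, 2310.0, 3050.0, 4120.0]
rms = rms_percent_error(predicted, measured)
```

A table or correlation with a wider range of application is assessed on more points, so the quoted errors (8.3% vs 9.71%) are not computed over the same subsets.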

  20. Development of advanced methods for analysis of experimental data in diffusion

    Science.gov (United States)

    Jaques, Alonso V.

There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed; these steps require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating the differentiation operation on the data, i.e., for estimating the concentration gradient term, which is central to determining the diffusivity. This approach therefore has the potential to be less subjective, and in numerical simulations it shows an increased accuracy in the estimated diffusion coefficients. We also present a regression approach to estimating linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix
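Regularized differentiation of a noisy profile, as described above, can be sketched with a simple Whittaker-style smoother followed by a finite-difference gradient. This is a generic stand-in for the thesis's method, not its actual algorithm; the sigmoidal profile, noise level and penalty weight below are illustrative assumptions:

```python
import numpy as np

def regularized_gradient(x, c, lam=1.0):
    """Estimate dC/dx from a noisy profile: minimize
    ||s - c||^2 + lam * ||D2 s||^2 (D2 = second-difference operator),
    then differentiate the smoothed profile s."""
    n = len(c)
    D2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
    s = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, c)
    return np.gradient(s, x)

# Synthetic noisy sigmoidal concentration profile (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
c_true = 1.0 / (1.0 + np.exp(-20.0 * (x - 0.5)))
c_noisy = c_true + 0.02 * rng.standard_normal(x.size)
g = regularized_gradient(x, c_noisy, lam=5.0)
```

Directly differencing `c_noisy` would amplify the noise by the inverse grid spacing; the penalty on second differences trades a small bias for a large variance reduction, which is the point made above about confidence in the estimated D(C).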

  1. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    Science.gov (United States)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.

  2. Detection of outliers by neural network on the gas centrifuge experimental data of isotopic separation process

    International Nuclear Information System (INIS)

    Andrade, Monica de Carvalho Vasconcelos

    2004-01-01

This work presents and discusses a neural network technique aimed at the detection of outliers in a set of gas centrifuge isotope separation experimental data. To evaluate the new technique, the detection results are compared with those of a statistical analysis combined with cluster analysis. This method for the detection of outliers shows considerable potential in the field of data analysis; it is easier and faster to use and requires much less knowledge of the physics involved in the process. This work established a procedure for detecting experiments suspected of containing gross errors within a data set where the usual techniques for identifying such errors cannot be applied, or where their use would demand excessively long work. (author)

  3. Utilizing experimentally derived multi-channel gamma-ray spectra for the analysis of airborne data

    International Nuclear Information System (INIS)

    Grasty, R.L.

    1982-01-01

Gamma-ray spectra derived from measurements on radioactive concrete calibration pads, using plywood sheets to simulate the attenuation effect of air, have been successfully tested on airborne data. Cesium-137 at 662 keV, from atomic weapons tests, was found to contribute significantly to the airborne spectrum. By fitting the experimental spectra, above the cesium energy, to airborne data, significant increases in accuracy were obtained for the measurement of uranium and thorium, compared to the standard 3-window method. By including a cesium spectrum in the analysis of gamma-ray data from a survey carried out in Saskatchewan, it was found that background radiation due to atmospheric bismuth-214 could be measured more reliably than by using a constant over-water background. Similar results were obtained by monitoring low-energy lead-214 gamma-rays at 352 keV

  4. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    Science.gov (United States)

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. Bucketization did not cause cells to overlap

  5. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL]; Kulesz, James J [ORNL]; Abercrombie, Robert K [ORNL]; Kruse, Kara L [ORNL]

    2015-01-01

A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data come from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation on world-wide scientific literature, and recommends a system housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  6. Invited review: Experimental design, data reporting, and sharing in support of animal systems modeling research.

    Science.gov (United States)

    McNamara, J P; Hanigan, M D; White, R R

    2016-12-01

    The National Animal Nutrition Program "National Research Support Project 9" supports efforts in livestock nutrition, including the National Research Council's committees on the nutrient requirements of animals. Our objective was to review the status of experimentation and data reporting in animal nutrition literature and to provide suggestions for the advancement of animal nutrition research and the ongoing improvement of field-applied nutrient requirement models. Improved data reporting consistency and completeness represent a substantial opportunity to improve nutrition-related mathematical models. We reviewed a body of nutrition research; recorded common phrases used to describe diets, animals, housing, and environmental conditions; and proposed equivalent numerical data that could be reported. With the increasing availability of online supplementary material sections in journals, we developed a comprehensive checklist of data that should be included in publications. To continue to improve our research effectiveness, studies utilizing multiple research methodologies to address complex systems and measure multiple variables will be necessary. From the current body of animal nutrition literature, we identified a series of opportunities to integrate research focuses (nutrition, reproduction and genetics) to advance the development of nutrient requirement models. From our survey of current experimentation and data reporting in animal nutrition, we identified 4 key opportunities to advance animal nutrition knowledge: (1) coordinated experiments should be designed to employ multiple research methodologies; (2) systems-oriented research approaches should be encouraged and supported; (3) publication guidelines should be updated to encourage and support sharing of more complete data sets; and (4) new experiments should be more rapidly integrated into our knowledge bases, research programs and practical applications. Copyright © 2016 American Dairy Science Association

  7. Calculation and comparison with experimental data of cascade curves for liquid xenon

    International Nuclear Information System (INIS)

    Strugal'skij, Z.S.; Yablonskij, Z.

    1975-01-01

Cascade curves calculated by different methods are compared with the experimental data for showers caused by gamma-quanta with energies from 40 to 2000 MeV in liquid xenon. The minimum energy of shower electrons (cut-off energy) taken into account in the experiment amounts to 3.1 ± 1.2 MeV, whereas the calculated cascade curves are given for energies ranging from 40 to 4000 MeV at cut-off energies of 2.3, 3.5 and 4.7 MeV. The depth of the shower development is reckoned from the point of generation of the gamma-quanta which create the showers. Cascade curves are calculated by the moment method with consideration of three moments. The following physical processes are taken into account: generation of electron-positron pairs; Compton effect; bremsstrahlung; ionization losses. The dependences of the mean number of particles on the depth of the shower development are obtained from measurements of photographs taken with a xenon bubble chamber. Similar dependences calculated by the moment and Monte-Carlo methods are presented. From the data analysis it follows that the calculation provides the correct position of the shower development maximum, but at small and large depths of shower development the different calculation methods yield drastically different results. The Monte-Carlo method provides better agreement with the experimental data

  8. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    Science.gov (United States)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

This paper shows a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of the calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters respectively; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of QM.
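The correction defined above is a one-line computation. A minimal sketch, with hypothetical calibration pairs that are not from the LAT 104 archive:

```python
def correction_percent(q_ref, q_meter):
    """Correction C: reference flow rate minus meter reading,
    expressed as a percentage of the meter reading."""
    return 100.0 * (q_ref - q_meter) / q_meter

# Hypothetical (reference, meter) pairs in l/s, for illustration only
pairs = [(10.00, 9.97), (40.00, 40.10), (80.00, 79.85)]
corrections = [correction_percent(q, qm) for q, qm in pairs]
mean_c = sum(corrections) / len(corrections)
```

A statistical analysis like the paper's would then look at the distribution of C per meter type and compare it against the facility's 0.2% CMC.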

  9. Experimental Comparison of 56 Gbit/s PAM-4 and DMT for Data Center Interconnect Applications

    DEFF Research Database (Denmark)

    Eiselt, Nicklas; Dochhan, Annika; Griesser, Helmut

    2016-01-01

Four-level pulse amplitude modulation (PAM-4) and discrete multi-tone transmission (DMT) in combination with intensity modulation and direct-detection are two promising approaches for a low-power and low-cost solution for the next generation of data center interconnect applications. We experimentally investigate and compare both modulation formats at a data rate of 56 Gb/s and a transmission wavelength of 1544 nm using the same experimental setup. We show that PAM-4 outperforms double sideband DMT and also vestigial sideband DMT for the optical back-to-back (b2b) case as well as for a transmission distance of 80 km SSMF in terms of required OSNR at a FEC threshold of 3.8e-3. However, it is also pointed out that both versions of DMT do not require any optical dispersion compensation to transmit over 80 km SSMF, while this is essential for PAM-4. Thus, implementation effort and cost may...

  10. Use of the dynamic stiffness method to interpret experimental data from a nonlinear system

    Science.gov (United States)

    Tang, Bin; Brennan, M. J.; Gatti, G.

    2018-05-01

The interpretation of experimental data from nonlinear structures is challenging, primarily because of the dependency on the types and levels of excitation, and because of coupling issues with the test equipment. In this paper, the dynamic stiffness method, which is commonly used in the analysis of linear systems, is applied to interpret the data from a vibration test of a controllable compressed beam structure coupled to a test shaker. For a single mode of the system, this method facilitates the separation of mass, stiffness and damping effects, including nonlinear stiffness effects. It also allows the separation of the dynamics of the shaker from the structure under test. The approach needs to be used with care, and is only suitable if the nonlinear system has a response that is predominantly at the excitation frequency. For the structure under test, the raw experimental data revealed little about the underlying causes of the dynamic behaviour. However, the dynamic stiffness approach allowed the effects due to the nonlinear stiffness to be easily determined.
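For a linear single mode with viscous damping, the separation the dynamic stiffness method relies on is straightforward: the real part of F/X is k − mω² and the imaginary part is cω. A synthetic sketch with made-up parameters (the nonlinear case discussed above would add an amplitude-dependent term to k):

```python
import numpy as np

# Synthetic single-mode dynamic stiffness data (assumed values)
m, c, k = 2.0, 8.0, 5.0e4                      # mass, damping, stiffness
w = np.linspace(20.0, 300.0, 200)              # excitation frequency, rad/s
K = (k - m * w**2) + 1j * c * w                # dynamic stiffness F/X

# Separate the effects: fit Re(K) = k - m*w^2 and Im(K) = c*w
A = np.column_stack([np.ones_like(w), -w**2])
k_fit, m_fit = np.linalg.lstsq(A, K.real, rcond=None)[0]
c_fit = np.linalg.lstsq(w[:, None], K.imag, rcond=None)[0][0]
```

With measured FRF data one would replace the synthetic K by the measured F/X, and the shaker dynamics would appear as extra terms that this formulation lets one subtract out.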

  11. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    Science.gov (United States)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

Electrical impedance tomography (EIT) produces an image of the internal conductivity distribution in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and the image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and the robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing with known electrode errors, which require rigorous regularization and cause the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
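The robustness argument can be illustrated outside EIT with a toy regression: a gross outlier (such as a failed electrode measurement) drags an ℓ2 fit away, while an ℓ1 misfit largely ignores it. The sketch below solves the ℓ1 problem by iteratively reweighted least squares, a simple generic stand-in for the paper's PDIPM solver; all data are synthetic:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Minimize ||A x - b||_1 by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)   # downweight large residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line fit with one gross outlier (synthetic data)
t = np.linspace(0.0, 1.0, 20)
b = 2.0 * t + 1.0
b[10] += 50.0                                   # outlier mimicking an electrode error
A = np.column_stack([t, np.ones_like(t)])
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]     # pulled toward the outlier
x_l1 = irls_l1(A, b)                            # stays near slope 2, intercept 1
```

The ℓ2 solution shifts its intercept by roughly b_outlier/n-scale amounts, whereas the ℓ1 solution recovers the clean line; this is the same mechanism that makes the ℓ1 data term robust to electrode errors.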

  12. Radiological and environmental studies at uranium mills: a comparison of theoretical and experimental data

    International Nuclear Information System (INIS)

    Momeni, M.H.; Kisieleski, W.E.; Yuan, Y.; Roberts, C.J.

    1978-01-01

Evaluation of the radiological risk of uranium milling is based on identification and quantification of the sources of release and consideration of the dynamic coupling among the meteorological, physiographical and hydrological environments and the affected individuals. Dispersion pathways of radionuclides are through air, soil, and water, each demanding locally tailored procedures for estimation of the rate of release of radioactivity and the pattern of biological uptake and exposure. The Uranium Dispersion and Dosimetry Code (UDAD), a comprehensive method for estimating the concentrations of the released radionuclides, dose rates, doses, and radiological health effects, is described. Predicted concentrations and exposure rates are compared with experimental data obtained from field research at active mills and abandoned tailings

  13. Using simulation to interpret experimental data in terms of protein conformational ensembles.

    Science.gov (United States)

    Allison, Jane R

    2017-04-01

In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averaged values or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling, and comparison of the more diverse conformational ensembles thus generated, have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Diffusion model analyses of the experimental data of ¹²C + ²⁷Al and ¹²C + ⁴⁰Ca dissipative collisions

    International Nuclear Information System (INIS)

    SHEN Wen-qing; QIAO Wei-min; ZHU Yong-tai; ZHAN Wen-long

    1985-01-01

Assuming that the intermediate system decays with a statistical lifetime, the general behavior of the threefold differential cross section d³σ/(dZ dE dθ) in the dissipative collisions of the 68 MeV ¹²C + ²⁷Al and 68.6 MeV ¹²C + ⁴⁰Ca systems is analyzed in the diffusion model framework. The lifetime of the intermediate system and the separation distance for the completely damped deep-inelastic component are obtained. The calculated results are compared with the experimental data for the angular distributions and Wilczynski plots. The probable reasons for the differences between them are briefly discussed

  15. Kinetic energy in the collective quadrupole Hamiltonian from the experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Jolos, R.V., E-mail: jolos@theor.jinr.ru [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Dubna State University, 141980 Dubna (Russian Federation); Kolganova, E.A. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Dubna State University, 141980 Dubna (Russian Federation)

    2017-06-10

The dependence of the kinetic energy term of the collective nuclear Hamiltonian on the collective momentum is considered. It is shown that the term of fourth order in the collective momentum in the collective quadrupole Hamiltonian generates a sizable effect on the excitation energies and the matrix elements of the quadrupole moment operator. It is demonstrated that the calculated results are sensitive to the values of some matrix elements of the quadrupole moment. This stresses the importance, for a concrete nucleus, of having experimental data for the reduced matrix elements of the quadrupole moment operator taken between all low-lying states with angular momenta not exceeding 4.

  16. Comparison of GEANT4 very low energy cross section models with experimental data in water

    DEFF Research Database (Denmark)

    Incerti, S; Ivanchenko, A; Karamitros, M

    2010-01-01

The GEANT4 general-purpose Monte Carlo simulation toolkit is able to simulate physical interaction processes of electrons, hydrogen and helium atoms with charge states (H0, H+) and (He0, He+, He2+), respectively, in liquid water, the main component of biological systems, down to the electron volt scale […] of electromagnetic interactions within the GEANT4 toolkit framework (since GEANT4 version 9.3 beta). This work presents a quantitative comparison of these physics models with a collection of experimental data in water collected from the literature.

  17. Using numerical simulations to extract parameters of toroidal electron plasmas from experimental data

    DEFF Research Database (Denmark)

    Ha, B. N.; Stoneking, M. R.; Marler, Joan

    2009-01-01

Measurements of the image charge induced on electrodes provide the primary means of diagnosing plasmas in the Lawrence Non-neutral Torus II (LNT II) [Phys. Rev. Lett. 100, 155001 (2008)]. Therefore, it is necessary to develop techniques that determine characteristics of the electron plasma from the image charge signals […], as in the cylindrical case. In the toroidal case, additional information about the m=1 motion of the plasma can be obtained by analysis of the image charge signal amplitude and shape. Finally, results from the numerical simulations are compared to experimental data from the LNT II and plasma characteristics...

  18. Electromagnetic effects and scattering lengths extraction from experimental data on K → 3π decays

    International Nuclear Information System (INIS)

    Gevorkyan, S.R.; Madigozhin, D.T.; Tarasov, A.V.; Voskresenskaya, O.O.

    2008-01-01

The final state interactions in K± → π±π⁰π⁰ decays are considered using the methods of non-relativistic quantum mechanics. We show how to take into account the largest electromagnetic effect in the analysis of experimental data, using the amplitudes calculated earlier. We propose the relevant expressions for the amplitude corrections, valid both above and below the two-charged-pion production threshold M(π⁰π⁰) = 2m(π±), including the average effect for the threshold bin. These formulae can be used in the procedure of measuring the pion scattering lengths from the M(π⁰π⁰) spectrum

  19. Acquiring, recording, and analyzing pathology data from experimental mice: an overview.

    Science.gov (United States)

    Scudamore, Cheryl L

    2014-03-21

    Pathology is often underutilized as an end point in mouse studies in academic research because of a lack of experience and expertise. The use of traditional pathology techniques including necropsy and microscopic analysis can be useful in identifying the basic processes underlying a phenotype and facilitating comparison with equivalent human diseases. This overview aims to provide a guide and reference to the acquisition, recording, and analysis of high-quality pathology data from experimental mice in an academic research setting. Copyright © 2014 John Wiley & Sons, Inc.

  20. CFD and experimental data of closed-loop wind tunnel flow

    Directory of Open Access Journals (Sweden)

    John Kaiser Calautit

    2016-06-01

The data presented in this article were the basis for the study reported in the research article entitled ‘A validated design methodology for a closed loop subsonic wind tunnel’ (Calautit et al., 2014) [1], which presented a systematic investigation into the design, simulation and analysis of flow parameters in a wind tunnel using Computational Fluid Dynamics (CFD). The authors evaluated the accuracy of replicating the flow characteristics for which the wind tunnel was designed using numerical simulation. Here, we detail the numerical and experimental set-up for the analysis of the closed-loop subsonic wind tunnel with an empty test section.

  1. Experimental demonstration of optical data links using a hybrid CAP/QAM modulation scheme.

    Science.gov (United States)

    Wei, J L; Ingham, J D; Cheng, Q; Cunningham, D G; Penty, R V; White, I H

    2014-03-15

    The first known experimental demonstrations of a 10 Gb/s hybrid CAP-2/QAM-2 and a 20 Gb/s hybrid CAP-4/QAM-4 transmitter/receiver-based optical data link are performed. Successful transmission over 4.3 km of standard single-mode fiber (SMF) is achieved, with a link power penalty of ∼0.4 dBo for CAP-2/QAM-2 and ∼1.5 dBo for CAP-4/QAM-4 at a BER of 10⁻⁹.

  2. Evaluation of experimental data for wax and diamondoids solubility in gaseous systems

    DEFF Research Database (Denmark)

    Mohammadi, Amir H.; Gharagheizi, Farhad; Eslamimanesh, Ali

    2012-01-01

    The Leverage statistical approach is herein applied for evaluation of experimental data of the paraffin waxes/diamondoids solubility in gaseous systems. The calculation steps of this algorithm consist of determination of the statistical Hat matrix, sketching the Williams Plot, and calculation......-Santiago and Teja correlations are used to calculate/estimate the solubility of paraffin waxes (including n-C24H50 to n-C33H68) and diamondoids (adamantane and diamantane) in carbon dioxide/ethane gases, respectively. It can be interpreted from the obtained results that the applied equations for calculation...
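The Hat-matrix step at the core of the Leverage approach can be sketched numerically. Everything below is a hypothetical illustration: the three-parameter linear model, the synthetic data, and the common |standardized residual| > 3 and H* = 3p/n cut-offs stand in for the paper's actual correlations and wax/diamondoid data set.

```python
import numpy as np

# Synthetic data set: n observations of a response y modelled by p = 3 inputs
# (intercept, a temperature-like and a pressure-like variable; invented values).
rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(300, 350, n), rng.uniform(10, 30, n)])
y = X @ np.array([1.0, 0.01, -0.05]) + rng.normal(0.0, 0.1, n)

# Hat (projection) matrix of the least-squares fit: H = X (X^T X)^-1 X^T.
# Its diagonal entries are the leverages of the individual data points.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

# Standardized residuals, as plotted against leverage in a Williams plot.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
std_resid = resid / resid.std(ddof=X.shape[1])

# Warning leverage H* = 3p/n (one common convention); points outside either
# threshold are flagged as probable outliers / out of the applicability domain.
H_star = 3.0 * X.shape[1] / n
suspect = (np.abs(std_resid) > 3.0) | (leverage > H_star)
```

A Williams plot is then simply a scatter of `std_resid` versus `leverage` with horizontal lines at ±3 and a vertical line at `H_star`.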

  3. Identification of material properties of orthotropic composite plate using experimental frequency response function data

    Science.gov (United States)

    Tam, Jun Hui; Ong, Zhi Chao; Ismail, Zubaidah; Ang, Bee Chin; Khoo, Shin Yee

    2018-05-01

    The demand for composite materials is increasing due to their superior material properties, e.g., light weight, high strength and high corrosion resistance. As a result, the invention of composite materials of diverse properties is becoming prevalent, thus leading to the development of material identification methods for composite materials. Conventional identification methods are destructive, time-consuming and costly. Therefore, an accurate identification approach is proposed to circumvent these drawbacks, involving the use of a Frequency Response Function (FRF) error function defined by the correlation discrepancy between experimental and finite-element-generated FRFs. A square E-glass epoxy composite plate is investigated under several different configurations of boundary conditions. Notably, the experimental FRFs are used as the correlation reference, such that, during computation, the predicted FRFs are continuously updated with reference to the experimental FRFs until a solution is achieved. The final identified elastic properties, namely the in-plane elastic moduli, Ex and Ey, the in-plane shear modulus, Gxy, and the major Poisson's ratio, vxy, of the composite plate are subsequently compared to the benchmark parameters as well as with those obtained using a modal-based approach. Compared to the modal-based approach, the proposed method is found to yield relatively better results. This can be explained by the direct employment of raw data in the proposed method, which avoids errors that might occur during the stage of modal extraction.
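The underlying idea of FRF-based updating (adjust model parameters until predicted FRFs match measured ones) can be illustrated on a toy two-degree-of-freedom spring-mass system. The masses, stiffnesses, and frequency grid are invented, and scipy's generic least-squares solver stands in for the paper's updating scheme; this is a sketch of the principle, not the authors' method.

```python
import numpy as np
from scipy.optimize import least_squares

def frf(k, omegas, m=(1.0, 1.0)):
    """Driving-point receptance FRF H11(w) of an undamped 2-DOF spring-mass chain."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag(m)
    return np.array([np.linalg.inv(K - w**2 * M)[0, 0] for w in omegas])

# Frequencies kept below the first resonance so the toy problem stays well behaved.
omegas = np.linspace(0.2, 0.8, 25)
k_true = np.array([4.0, 2.0])
H_exp = frf(k_true, omegas)          # stands in for measured (experimental) FRFs

# Update the stiffnesses by minimising the predicted-vs-measured FRF discrepancy,
# starting from a deliberately wrong initial model.
res = least_squares(lambda k: frf(k, omegas) - H_exp, x0=[3.0, 1.5])
k_identified = res.x
```

With noiseless synthetic "measurements" the solver recovers the true stiffnesses; with real data the residual reflects measurement noise and model error.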

  4. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, no available tool calculates heavy ion cross sections based on measured proton cross sections without knowledge of the technology. We based our approach on the diffusion-collection model with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results allow defining an empirical rule to identify the transient currents that are responsible for an SEU. The SEU cross section can then be calculated for any kind of particle and any energy with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  5. SOFC regulation at constant temperature: Experimental test and data regression study

    International Nuclear Information System (INIS)

    Barelli, L.; Bidini, G.; Cinti, G.; Ottaviano, A.

    2016-01-01

    Highlights: • SOFC operating temperature impacts strongly on its performance and lifetime. • Experimental tests were carried out varying electric load and feeding mixture gas. • Three different anodic inlet gases were tested maintaining constant temperature. • Cathodic air flow rate was used to maintain a constant operating temperature. • Regression law was defined from experimental data to regulate the air flow rate. - Abstract: The operating temperature of a solid oxide fuel cell (SOFC) stack is an important parameter to be controlled, which impacts the SOFC performance and its lifetime. Rapid temperature change implies significant temperature differences between the surface and the mean body, leading to a state of thermal shock. Thermal shock and thermal cycling introduce stress in a material due to temperature differences between the surface and the interior, or between different regions of the cell. In this context, in order to determine a control law that permits the fuel cell temperature to be kept constant while varying the electrical load and the in-fed fuel mixture, an experimental activity was carried out on a planar SOFC short stack to analyse stack temperature. Specifically, three different anodic inlet gas compositions were tested: pure hydrogen, and reformed natural gas with steam-to-carbon ratio equal to 2 and 2.5. By processing the obtained results, a regression law was defined to regulate the air flow rate to be provided to the fuel cell so as to keep its operating temperature constant as its operating conditions vary.

  6. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  7. Development and implementation of application systems for reduction and analysis of experimental data in the Nuclear Physics area

    International Nuclear Information System (INIS)

    Cardoso Junior, J.L.; Schelin, H.R.; Lemos, B.J.K.C.; Tanaka, E.H.; Castro, A.T.C.G.

    1984-01-01

    Several application systems for the reduction and analysis of experimental data are described. These codes were developed and/or implemented in the IEAv/CTA CYBER 170/750 system. A brief description of the experimental data acquisition modes and the reduction necessary for analysis is given. Information on the purposes, uses and access of the codes is given [pt

  8. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  9. An Experimental Seismic Data and Parameter Exchange System for Tsunami Warning Systems

    Science.gov (United States)

    Hoffmann, T. L.; Hanka, W.; Saul, J.; Weber, B.; Becker, J.; Heinloo, A.; Hoffmann, M.

    2009-12-01

    For several years GFZ Potsdam has been operating a global earthquake monitoring system. Since the beginning of 2008, this system has also been used as an experimental seismic background data center for two different regional Tsunami Warning Systems (TWS), the IOTWS (Indian Ocean) and the interim NEAMTWS (NE Atlantic and Mediterranean). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project and capable of acquiring, archiving and processing real-time data feeds, was extended for export and import of individual processing results within the two clusters of connected SC3 systems. Therefore not only real-time waveform data are routed to the attached warning centers through GFZ but also processing results. While the current experimental NEAMTWS cluster consists of SC3 systems in six designated national warning centers in Europe, the IOTWS cluster presently includes seven centers, with another three likely to join in 2009/10. For NEAMTWS purposes, the GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) in Europe was substantially extended by adding many stations from Western European countries, optimizing the station distribution. In parallel to the data collection over the Internet, a GFZ VSAT hub for secured data collection of the EuroMED GEOFON and NEAMTWS backbone network stations became operational, and first data links were established through this backbone. For the Southeast Asia region, a VSAT hub was established in Jakarta already in 2006, with some other partner networks connecting to this backbone via the Internet. Since its establishment, the experimental system has had the opportunity to prove its performance in a number of relevant earthquakes. Reliable solutions derived from a minimum of 25 stations were very promising in terms of speed. For important events, automatic alerts were released and disseminated by email and SMS. Manually verified solutions are added as soon as they become

  10. The Particle Physics Playground website: tutorials and activities using real experimental data

    Science.gov (United States)

    Bellis, Matthew; CMS Collaboration

    2016-03-01

    The CERN Open Data Portal provides access to data from the LHC experiments to anyone with the time and inclination to learn the analysis procedures. The CMS experiment has made a significant amount of data available in basically the same format the collaboration itself uses, along with software tools and a virtual environment in which to run those tools. These same data have also been mined for educational exercises that range from very simple .csv files that can be analyzed in a spreadsheet to more sophisticated formats that use ROOT, a dominant software package in experimental particle physics but not used as much in the general computing community. This talk will present the Particle Physics Playground website (http://particle-physics-playground.github.io/), a project that uses data from the CMS experiment, as well as the older CLEO experiment, in tutorials and exercises aimed at high school and undergraduate students and other science enthusiasts. The data are stored as text files, and the users are provided with starter Python/Jupyter notebook programs and accessor functions which can be modified to perform fairly high-level analyses. The status of the project, success stories, and future plans for the website will be presented. This work was supported in part by NSF Grant PHY-1307562.
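A typical exercise of the kind hosted on such sites computes the invariant mass of a pair of particle candidates from four-momentum columns of a .csv file. The muon four-vectors below are made-up numbers for illustration, not actual CMS or CLEO data:

```python
import math

# Two muon candidates from a hypothetical CSV row: (E, px, py, pz) in GeV.
mu1 = (45.0, 20.0, 30.0, 25.0)
mu2 = (48.0, -18.0, -28.0, 27.0)

def invariant_mass(p4s):
    """Invariant mass of a set of four-vectors: M^2 = (sum E)^2 - |sum p|^2."""
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))  # clamp tiny negative m2 from rounding

m = invariant_mass([mu1, mu2])
```

Histogramming `m` over many such rows is exactly how resonance peaks are found in the spreadsheet-level exercises.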

  11. Prediction of the health effects of inhaled transuranium elements from experimental animal data

    International Nuclear Information System (INIS)

    Bair, W.J.; Thomas, J.M.

    1976-01-01

    Although animal experiments are conducted to obtain data that can be used to predict the consequences of exposure to alpha-emitting elements on human health, scientists have been hesitant to project the results of animal experiments to man. However, since a human data base does not exist for inhaled transuranics, the animal data cannot be overlooked. The paper describes the derivation of linear non-threshold response relationships for lung cancer in rats after inhalation of alpha-emitting transuranium elements. These relationships were used to calculate risk estimates, which were then compared with a value calculated from the incidence of lung cancer in humans who had been exposed to sources of radiation other than the transuranics. Both estimates were compared with the estimated cancer risk associated with the annual whole-body dose limit of 5 rems for occupational exposure. The rat data suggest that the risk from a working lifetime exposure of 15 rem/a to the lungs from transuranium elements may be 5 times the risk incurred with a whole-body exposure of 5 rem/a, while the human data suggest the risk may be less. Since the histological type of plutonium-induced lung cancer that occurs in experimental animals is rare in man, the use of animal data to estimate risks may be conservative. Risk estimates calculated directly from the results of experiments in which animals actually inhaled transuranic particles circumvent such controversial issues as 'hot particles'. (author)

  12. System identification by experimental data processing, application to turbulent transport of a tracer in pipe flow

    International Nuclear Information System (INIS)

    Burgos, Manuel; Getto, Daniel; Berne, Philippe

    2005-01-01

    System identification is the first, and probably the most important, step in detecting abnormal behavior, designing a control system or improving performance. Data analysis is performed for studying plant behavior, the sensitivity of operation procedures and several other goals. In all these cases, the observed data are the convolution of an input function and the system's impulse response. Practical discrete-time convolutions may be performed by multiplying a matrix built from the impulse response by the input vector, but deconvolution requires inverting this matrix, which is singular for a causal system. Another method for deconvolution is by means of Fourier transforms. Actual readings are usually corrupted by noise and, besides, their transform shows large low-frequency components and high-frequency ones mainly due to additive noise. Subjective decisions such as the cut-off frequency must be taken as well. This paper proposes a deconvolution method based on parameter fitting of suitable models, where they exist, and estimation of values where analytical forms are not available. It is based on global, non-linear fitting with a maximum-likelihood criterion. An application of the method is shown using data from two fluid-flow experiments. The experimental test rigs basically consist of a long section of straight pipe in which fluid is flowing. A pulse of tracer is injected at the entrance and detected at various locations along the pipe. Signals from successive probes are deconvolved using a classical model describing the flow of tracer as a plug moving with the average fluid velocity, plus some axial dispersion. The parameters are, for instance, the velocity of the plug and a dispersion coefficient. After parameter fitting, the model is found to reproduce the experimental data well. The flow rates deduced from the adjusted travel times are in very good agreement with the actual values. In addition, the flow dispersion coefficient is obtained
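The plug-plus-axial-dispersion fit described above can be sketched as follows. The pipe length, velocity, dispersion coefficient, and noiseless synthetic detector trace are all illustrative assumptions, and scipy's generic `curve_fit` stands in for the paper's maximum-likelihood fit:

```python
import numpy as np
from scipy.optimize import curve_fit

L_PIPE = 10.0  # m, detector distance downstream of the injection point (assumed)

def tracer_response(t, u, D):
    """Impulse response of a dispersed plug flow: a plug moving at mean
    velocity u, spread by an axial dispersion coefficient D."""
    return np.exp(-(L_PIPE - u * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

t = np.linspace(1.0, 30.0, 200)               # s, sampling times
u_true, D_true = 0.8, 0.15
signal = tracer_response(t, u_true, D_true)   # stands in for a measured trace

# Fit the model parameters to the "measured" signal.
(u_fit, D_fit), _ = curve_fit(tracer_response, t, signal, p0=[0.7, 0.2])
transit_time = L_PIPE / u_fit                 # adjusted travel time -> flow rate
```

With two probes, fitting each trace and differencing the adjusted travel times gives the probe-to-probe transit time without ever inverting the convolution matrix.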

  13. Comparison of experimental data with results of some drying models for regularly shaped products

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Ahmet [Aksaray University, Department of Mechanical Engineering, Aksaray (Turkey); Aydin, Orhan [Karadeniz Technical University, Department of Mechanical Engineering, Trabzon (Turkey); Dincer, Ibrahim [University of Ontario Institute of Technology, Faculty of Engineering and Applied Science, Oshawa, ON (Canada)

    2010-05-15

    This paper presents an experimental and theoretical investigation of the drying of moist slab, cylindrical and spherical products to study dimensionless moisture content distributions and their comparisons. The experimental study includes the measurement of the moisture content distributions of slab and cylindrical carrot, slab and cylindrical pumpkin and spherical blueberry during drying at various temperatures (e.g., 30, 40, 50 and 60 °C) at a specific constant velocity (U = 1 m/s) and relative humidity φ = 30%. In the theoretical analysis, two moisture transfer models are used to determine drying process parameters (e.g., drying coefficient and lag factor) and moisture transfer parameters (e.g., moisture diffusivity and moisture transfer coefficient), and to calculate the dimensionless moisture content distributions. The calculated results are then compared with the experimental moisture data. A considerably high agreement is obtained between the calculations and experimental measurements for the cases considered. The effective diffusivity values were evaluated between 0.741 × 10⁻⁵ and 5.981 × 10⁻⁵ m²/h for slab products, 0.818 × 10⁻⁵ and 6.287 × 10⁻⁵ m²/h for cylindrical products and 1.213 × 10⁻⁷ and 7.589 × 10⁻⁷ m²/h for spherical products using model-I, and 0.316 × 10⁻⁵ to 5.072 × 10⁻⁵ m²/h for slab products, 0.580 × 10⁻⁵ to 9.587 × 10⁻⁵ m²/h for cylindrical products and 1.408 × 10⁻⁷ to 13.913 × 10⁻⁷ m²/h for spherical products using model-II. (orig.)

  14. Comparison of experimental data with results of some drying models for regularly shaped products

    Science.gov (United States)

    Kaya, Ahmet; Aydın, Orhan; Dincer, Ibrahim

    2010-05-01

    This paper presents an experimental and theoretical investigation of the drying of moist slab, cylindrical and spherical products to study dimensionless moisture content distributions and their comparisons. The experimental study includes the measurement of the moisture content distributions of slab and cylindrical carrot, slab and cylindrical pumpkin and spherical blueberry during drying at various temperatures (e.g., 30, 40, 50 and 60°C) at a specific constant velocity (U = 1 m/s) and relative humidity φ = 30%. In the theoretical analysis, two moisture transfer models are used to determine drying process parameters (e.g., drying coefficient and lag factor) and moisture transfer parameters (e.g., moisture diffusivity and moisture transfer coefficient), and to calculate the dimensionless moisture content distributions. The calculated results are then compared with the experimental moisture data. A considerably high agreement is obtained between the calculations and experimental measurements for the cases considered. The effective diffusivity values were evaluated between 0.741 × 10⁻⁵ and 5.981 × 10⁻⁵ m²/h for slab products, 0.818 × 10⁻⁵ and 6.287 × 10⁻⁵ m²/h for cylindrical products and 1.213 × 10⁻⁷ and 7.589 × 10⁻⁷ m²/h for spherical products using Model-I, and 0.316 × 10⁻⁵ to 5.072 × 10⁻⁵ m²/h for slab products, 0.580 × 10⁻⁵ to 9.587 × 10⁻⁵ m²/h for cylindrical products and 1.408 × 10⁻⁷ to 13.913 × 10⁻⁷ m²/h for spherical products using Model-II.
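The drying-coefficient-to-diffusivity step used by such models can be sketched for the slab geometry, using the standard first-term Fick's-law solution. The half-thickness, diffusivity, and noise-free moisture data below are invented for illustration, not the paper's carrot/pumpkin/blueberry measurements:

```python
import numpy as np

# First-term Fick's-law solution for an infinite slab of half-thickness Lc:
#   MR(t) = G * exp(-S t), lag factor G = 8/pi^2,
#   drying coefficient S = pi^2 * D / (4 Lc^2)  =>  D = 4 Lc^2 S / pi^2.
Lc = 0.005          # m, half-thickness (illustrative)
D_true = 2.0e-9     # m^2/s, effective moisture diffusivity (illustrative)
t = np.linspace(600.0, 36000.0, 50)          # s, drying times
S_true = np.pi**2 * D_true / (4.0 * Lc**2)
MR = (8.0 / np.pi**2) * np.exp(-S_true * t)  # stands in for measured moisture ratio

# Recover S and the lag factor from a linear fit of ln(MR) versus t,
# then back out the effective moisture diffusivity.
slope, intercept = np.polyfit(t, np.log(MR), 1)
S_fit = -slope
D_eff = 4.0 * Lc**2 * S_fit / np.pi**2
```

For cylinders and spheres only the geometric constants change (the eigenvalue and lag factor of the first series term), which is why the papers report separate diffusivity ranges per shape.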

  15. Study of neutron spectra in extended uranium target. New experimental data

    Directory of Open Access Journals (Sweden)

    Paraipan M.

    2017-01-01

    Full Text Available The spatial distribution of neutron fluences in the extended uranium target (“Quinta” assembly) irradiated with 0.66 GeV proton, 4 AGeV deuteron and carbon beams was studied using reactions with different threshold energies (Eth). The data sets were obtained with 59Co samples. The accumulation rates for the following isotopes: 60Co (Eth = 0 MeV), 59Fe (Eth = 3 MeV), 58Co (Eth = 10 MeV), 57Co (Eth = 20 MeV), 56Co (Eth = 32 MeV), 47Sc (Eth = 55 MeV), and 48V (Eth = 70 MeV) were measured with an HPGe spectrometer. The experimental accumulation rates were compared with the predictions of simulations with the Geant4 code. The substantial difference between the reconstructed and the simulated data for the hard part of the neutron spectrum was analyzed.

  16. Modeling long-term yield trends of Miscanthus × giganteus using experimental data from across Europe

    DEFF Research Database (Denmark)

    Lesur, Claire; Jeuffroy, Marie-Hélène; Makowski, David

    2013-01-01

    M. giganteus is known to have an establishment phase during which annual yields increase as a function of crop age, followed by a ceiling phase, the duration of which is unknown. We built a database including 16 European long-term experiments (i) to describe the yield evolution during the establishment and the ceiling phases and (ii) to determine whether the M. giganteus ceiling phase is followed by a decline phase where yields decrease across years. Data were analyzed through comparisons between a set of statistical growth models. The model that best fitted the experimental data included a decline phase. The decline intensity and the values of several other model parameters, such as the maximum yield reached during the ceiling phase or the duration of the establishment phase, were highly variable. The highest maximum yields were obtained in the experiments located in the southern part of the studied area.

  17. First experimental data on the FEL - RF interaction at the Jefferson Lab IRFEL

    International Nuclear Information System (INIS)

    L. Merminga; P. Alexeev; S.V. Benson; A. Bolshakov; L.R. Doolittle; D.R. Douglas; C. Hovater; G.R. Neil

    1999-01-01

    High power FELs driven by recirculating, energy-recovering linacs can exhibit instabilities in the beam energy and laser output power. Fluctuations in the accelerating cavity fields can cause beam loss on apertures, phase oscillations and optical cavity detuning. These can affect the laser power and in turn the beam-induced voltage to further enhance the fluctuations of the rf fields. A theoretical model was developed to study the dynamics of the coupled system and was presented last year. Recently, a first set of experimental data was obtained at the Jefferson Lab IRFEL for direct comparisons with the model. The authors describe the experiment, present the data together with the modeling predictions and outline future directions

  18. Radiological characteristics of light-water reactor spent fuel: A literature survey of experimental data

    International Nuclear Information System (INIS)

    Roddy, J.W.; Mailen, J.C.

    1987-12-01

    This survey brings together the experimentally determined light-water reactor spent fuel data comprising radionuclide composition, decay heat, and photon and neutron generation rates as identified in a literature survey. Many citations compare these data with values calculated using a radionuclide generation and depletion computer code, ORIGEN, and these comparisons have been included. ORIGEN is a widely recognized method for estimating the actinide, fission product, and activation product contents of irradiated reactor fuel, as well as the resulting heat generation and radiation levels. These estimates are used as source terms in safety evaluations of operating reactors, for evaluation of fuel behavior and regulation of the at-reactor storage, for transportation studies, and for evaluation of the ultimate geologic storage of spent fuel. 82 refs., 4 figs., 17 tabs

  19. Development of programs for the control of experiment electronics and for data acquisition

    International Nuclear Information System (INIS)

    Kraemer-Flecken, A.

    1988-01-01

    The current state of the art allows the experimental electronics to be constructed substantially faster through the application of ECL modules. By using the established CAMAC standard it is possible to calibrate experiment configurations by means of a computer. New techniques in the fabrication of microprocessors and memory ICs allow the use of microprocessors for the control of the experiment electronics, and contribute to the creation of a modular, transportable computer system that is independent of large mainframes. For the calibration of complex detector systems, new CAMAC plug-ins exist which allow data acquisition on the CAMAC bus. With the new eightfold ADCs, precision measurements can be performed. An upgrade of such small data acquisition systems to include the VME bus will soon be realizable. This will make nuclear spectroscopy experiments considerably simpler to perform. (HSI)

  20. Measurement of aerosol size distribution by impaction and sedimentation: An experimental study and data reduction

    International Nuclear Information System (INIS)

    Diouri, Mohamed.

    1981-09-01

    This study essentially concerns solid aerosols produced by combustion, and more particularly the aerosol liberated by a sodium fire, which is taken into account in safety studies related to sodium-cooled nuclear reactors. The accurate determination of the aerosol size distribution depends on the selection device used. An experimental study of the parameters affecting the solid-aerosol collection efficiency was made with the Andersen Mark II cascade impactor (blow-off and bounce, electrical charge of particles, wall loss). A sedimentation chamber was built and calibrated for the range between 4 and 10 μm. The second part describes a comparative study of different data reduction methods for the impactor, and a new method for establishing the aerosol size distribution from data obtained with the sedimentation chamber [fr

  1. Experimental research data on stress state of salt rock mass around an underground excavation

    Science.gov (United States)

    Baryshnikov, VD; Baryshnikov, DV

    2018-03-01

    The paper presents the experimental stress state data obtained in the surrounding salt rock mass around an excavation in Mir Mine, ALROSA. The deformation characteristics and the values of stresses in the adjacent rock mass are determined. Using the method of drilling a pair of parallel holes in a stressed area, the authors construct a linear relationship for the radial displacements of the stress measurement hole boundaries under the short-term loading of the perturbing hole. The resultant elasticity moduli of the rocks are comparable with the laboratory core test data. Pre-estimates of actual stresses point to the presence of a plasticity zone in the vicinity of the underground excavation. The stress state behavior at a distance from the excavation boundary disagrees with the Dinnik–Geim hypothesis.

  2. Geophysical data collection using an interactive personal computer system. Part 1: Experimental monitoring of Suwanosejima volcano

    Energy Technology Data Exchange (ETDEWEB)

    Iguchi, M. (Kyoto University, Kyoto (Japan). Disaster Prevention Research Institute)

    1991-10-15

    In this article, a computer-communication system was developed in order to collect geophysical data from remote volcanoes via a public telephone network. The system is composed of a host personal computer at an observatory and several personal computers as terminals at remote stations. Each terminal acquires geophysical data, such as seismic, infrasonic, and ground deformation data. These data are stored in the terminals temporarily, and transmitted to the host computer upon command from the host computer. Experimental monitoring was conducted between Sakurajima Volcanological Observatory and several stations in the Satsunan Islands and southern Kyushu. The seismic and eruptive activities of Suwanosejima volcano were monitored by this system. Consequently, earthquakes and air-shocks accompanying the explosive activity were observed. B-type earthquakes occurred prior to the relatively prolonged eruptive activity. Intermittent occurrences of volcanic tremors were also clearly recognized from the change in mean amplitudes of seismic waves. 7 refs., 10 figs., 2 tabs.

  3. TREAT experimental data base regarding fuel dispersals in LMFBR loss-of-flow accidents

    International Nuclear Information System (INIS)

    Simms, R.; Fink, C.L.; Stanford, G.S.; Regis, J.P.

    1981-01-01

    The reactivity feedback from fuel relocation is a central issue in the analysis of loss-of-flow (LOF) accidents in LMFBRs. Fuel relocation has been studied in a number of LOF simulations in the TREAT reactor. In this paper the results of these tests are analyzed, using, as the principal figure of merit, the changes in equivalent fuel worth associated with the fuel motion. The equivalent fuel worth was calculated from the measured axial fuel distributions by weighting the data with a typical LMFBR fuel-worth function. At nominal power, the initial fuel relocation resulted in increases in equivalent fuel worth. Above nominal power the fuel motion was dispersive, but the dispersive driving forces could not unequivocally be identified from the experimental data
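The figure of merit described, an equivalent fuel worth obtained by weighting the measured axial fuel distribution with a reactor worth function, can be sketched numerically. The chopped-cosine worth shape and the synthetic pre/post-transient distributions below are illustrative, not TREAT data:

```python
import numpy as np

# Chopped-cosine worth shape over a normalised core height (illustrative).
H = 1.0
z = np.linspace(0.0, H, 101)
dz = z[1] - z[0]
worth = np.cos(np.pi * (z - H / 2.0))      # peaks at the core midplane

rho_before = np.ones_like(z)               # flat pre-transient fuel density
# Post-transient: fuel dispersed away from the midplane (mass-conserving shift).
rho_after = rho_before - 0.3 * (worth - worth.mean())

# Equivalent fuel worth = worth-weighted integral of the axial fuel density.
w_before = np.sum(rho_before * worth) * dz
w_after = np.sum(rho_after * worth) * dz
delta_worth = w_after - w_before           # negative for dispersive fuel motion
```

A negative change signals dispersive (reactivity-removing) motion; an increase, as seen at nominal power in the tests, signals fuel compaction toward high-worth regions.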

  4. Processing and analyses of the pulsed-neutron experimental data of the YALINA facility

    International Nuclear Information System (INIS)

    Cao, Y.; Gohar, Y.; Smith, D.; Talamo, A.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: The YALINA subcritical assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus, has been utilized to study the physics parameters of accelerator-driven systems (ADS) with high-intensity Deuterium-Tritium and Deuterium-Deuterium pulsed neutron sources. In particular, with the fast and thermal neutron zones of the YALINA-Booster subcritical assembly, pulsed neutron experiments have been utilized to evaluate the pulsed-neutron methods for determining the reactivity of the subcritical system. In this paper, the pulsed-neutron experiments performed in the YALINA-Booster 1141 configuration with 90% enriched U-235 fuel and the 1185 configuration with 36% and 21% enriched uranium fuel are examined and analyzed. The Sjöstrand area-ratio method is utilized to determine the reactivities of the subcritical assembly configurations. The linear regression method is applied to obtain the prompt neutron decay constants from the pulsed-neutron experimental data. The reactivity values obtained from the experimental data are shown to depend on the detector locations and also on the detector types. The large discrepancies between the reactivity values given by the detectors in the fast neutron zone were reduced by spatial correction methods, and the estimated reactivities after the spatial corrections are almost spatially independent.
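The Sjöstrand area-ratio method mentioned above infers the reactivity in dollars from the ratio of the prompt to the delayed area of the pulsed-neutron counting histogram. The decay constant, amplitudes, and assumed-known flat delayed background below are illustrative values, not YALINA measurements:

```python
import numpy as np

# Synthetic counting histogram over one pulse period: a prompt exponential
# decay sitting on a flat delayed-neutron background (illustrative values).
alpha = 500.0                        # prompt neutron decay constant, 1/s
A, B = 1.0e6, 2.0e3                  # prompt amplitude and delayed level, counts/s
t = np.linspace(0.0, 0.02, 2001)     # s, one 20 ms pulse period
counts = A * np.exp(-alpha * t) + B

# Trapezoidal area under the full histogram.
total_area = np.sum(0.5 * (counts[1:] + counts[:-1]) * np.diff(t))
delayed_area = B * (t[-1] - t[0])    # delayed background level taken as known here
prompt_area = total_area - delayed_area

# Area-ratio method: reactivity in dollars = -(prompt area) / (delayed area).
rho_dollars = -prompt_area / delayed_area
```

The detector-position dependence discussed in the paper enters through spatial variation of the prompt area, which is what the spatial correction factors adjust.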

  5. Combining experimental and simulation data of molecular processes via augmented Markov models.

    Science.gov (United States)

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field errors and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.
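A heavily simplified, stationary-only version of this idea (reweight the simulation's state populations so an ensemble-averaged observable matches an experimental value, in the spirit of maximum-entropy reweighting) can be sketched as follows. The four-state distribution, per-state observable values, and measured average are invented, and the real AMM estimator also corrects the kinetics, which this sketch omits:

```python
import numpy as np

# Simulation-derived stationary distribution over 4 conformational states and a
# per-state observable (e.g. a NOE-like quantity); values are illustrative.
pi_sim = np.array([0.4, 0.3, 0.2, 0.1])
a = np.array([1.0, 2.0, 3.0, 4.0])
a_exp = 2.5                          # experimentally measured ensemble average

def reweighted_avg(lam):
    """Ensemble average of a under exponentially tilted weights pi_i * exp(lam*a_i)."""
    w = pi_sim * np.exp(lam * a)
    w /= w.sum()
    return w @ a

# Solve <a>_lambda = a_exp by bisection; the tilted average is monotone in lambda.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if reweighted_avg(mid) < a_exp:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

pi_corrected = pi_sim * np.exp(lam * a)
pi_corrected /= pi_corrected.sum()   # corrected stationary distribution
```

The corrected weights stay as close as possible (in relative-entropy terms) to the simulation while exactly reproducing the measured average, which is the core intuition behind a-posteriori weight correction.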

  6. Validation of NEPTUNE-CFD two-phase flow models using experimental data

    International Nuclear Information System (INIS)

    Perez-Manes, Jorge; Sanchez Espinoza, Victor Hugo; Bottcher, Michael; Stieglitz, Robert; Sergio Chiva Vicent

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BWR BFBT and PSBT benchmarks. Since the two-phase flow models of CFD codes are being extensively improved, validation is a key step for the acceptability of such codes. The validation work was performed within the framework of the European NURISP Project and focused on the steady-state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Due to the coupling of the heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is improved significantly. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvement of the NEPTUNE-CFD code are also identified. (authors)

  7. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    International Nuclear Information System (INIS)

    Ould-Dada, Z.; Carini, F.; Eged, K.; Kis, Z.; Linkov, I.; Mitchell, N.G.; Mourlon, C.; Robles, B.; Sweeck, L.; Venter, A.

    2006-01-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models of varying structure and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements of the transfer of 134Cs and 85Sr via leaf-to-fruit and soil-to-fruit pathways in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was applied at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, whereas in the case of soil contamination the models tend to under-predict by up to three orders of magnitude for 134Cs, while the differences for 85Sr are smaller. The performance of the models against the experimental data is discussed, together with the lessons learned from this exercise.

  8. New experimental data on the influence of extranuclear factors on the probability of radioactive decay

    CERN Document Server

    Bondarevskij, S I; Skorobogatov, G A

    2002-01-01

    New experimental data on the influence of various extranuclear factors on the probability (λ) of radioactive decay are presented. During redox processes in solutions containing 139Ce, the relative change in λ measured by the ΔI/I method was [I(CeIV) - I(CeIII)]/Imean = +(1.4±0.6)×10^-4. Using a modification of the method based on displacement of the secular radioactive equilibrium, a growth of λ of the tellurium nuclear isomer by (0.04±0.02)% was detected when a MgO(121mTe) source was cooled to 78 K. New experimental data on the increase in the gamma-radioactivity of a Be(123mTe) sample due to a low-temperature induced reaction, i.e. collective nuclear superluminescence, are also provided

  9. 232Th and 238U neutron emission cross section calculations and analysis of experimental data

    International Nuclear Information System (INIS)

    Tel, E.

    2004-01-01

    In this study, pre-equilibrium neutron-emission spectra produced by (n,xn) reactions on 232Th and 238U nuclei have been calculated. Angle-integrated cross sections for neutron-induced reactions on 232Th and 238U targets have been calculated at bombarding energies up to 18 MeV. We have investigated the multiple pre-equilibrium matrix element constant for internal transitions from the 232Th (n,xn) neutron emission spectra. In the calculations, the geometry-dependent hybrid model and the cascade exciton model including pre-equilibrium effects have been used. In addition, we have described how multiple pre-equilibrium emissions can be included in the Feshbach-Kerman-Koonin (FKK) fully quantum-mechanical theory. By analyzing the (n,xn) reactions on 232Th and 238U at incident energies from 2 MeV to 18 MeV, the importance of multiple pre-equilibrium emission can be seen clearly. All calculated results have been discussed and compared with the available experimental data and found to be in agreement

  10. Numerical prediction of the cyclic behaviour of metallic polycrystals and comparison with experimental data

    International Nuclear Information System (INIS)

    Sauzay, M.; Ferrie, E.; Steckmeyer, A.

    2010-01-01

    Grain size seems to have only a minor influence on the cyclic stress-strain curves (CSSCs) of metallic polycrystals of medium to high stacking fault energy (SFE). Many authors have therefore tried to deduce the macroscopic CSSCs from the single-crystal ones. Crystals oriented either for single slip or for multiple slip were considered. In addition, a scale transition law must be used (from the grain scale to the macroscopic scale). The Sachs rule (homogeneous stress, single slip) or the Taylor rule (homogeneous plastic strain, multiple slip) was usually used. But the predicted macroscopic CSSCs do not generally agree with the experimental data for metals and alloys with various SFE values. In order to avoid the choice of a particular scale transition rule, many finite element (FE) computations were carried out using meshes of polycrystals including more than one hundred grains without texture. This allows the study of the influence of the crystalline constitutive laws on the macroscopic CSSCs. Activation of a secondary slip system in grains oriented for single slip is either allowed or hindered (slip planarity), which strongly affects the macroscopic CSSCs. The more planar the slip, the higher the predicted macroscopic stress amplitudes. If grains oriented for single slip obey slip planarity and two crystalline CSSCs are used (one for single-slip grains and one for multiple-slip grains), then the predicted macroscopic CSSCs agree well with the experimental data provided the SFE is not too low (austenitic steel 316L, copper, nickel, aluminium). (authors)

  11. A unified approach to linking experimental, statistical and computational analysis of spike train data.

    Directory of Open Access Journals (Sweden)

    Liang Meng

    Full Text Available A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach, which links statistical, computational, and experimental neuroscience, provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
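A minimal bootstrap particle filter conveys the sequential Monte Carlo idea used above. The latent random-walk model and all noise levels below are invented stand-ins for the biophysical state-space models of the paper, chosen only so the sketch is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(obs, n_particles=2000, proc_sd=0.1, obs_sd=0.5):
    """Minimal bootstrap (sequential Monte Carlo) filter for a latent
    random walk observed with Gaussian noise: propagate particles,
    weight by the likelihood of each observation, then resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles += rng.normal(0.0, proc_sd, n_particles)   # propagate
        logw = -0.5 * ((y - particles) / obs_sd) ** 2        # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))              # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)      # resample
        particles = particles[idx]
    return np.array(estimates)

# Synthetic latent trajectory and noisy observations of it
truth = np.cumsum(rng.normal(0.0, 0.1, 100))
obs = truth + rng.normal(0.0, 0.5, 100)
est = bootstrap_particle_filter(obs)
print(np.mean(np.abs(est - truth)))   # tracking error well below obs noise
```

In the paper's setting the latent state is a vector of membrane currents with conductance-based dynamics, and the observation model is a point process on the spike times rather than a Gaussian, but the propagate-weight-resample loop is the same.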

  12. Assessment of the PIUS physics and thermal-hydraulic experimental data bases

    International Nuclear Information System (INIS)

    Boyack, B.E.

    1993-01-01

    The PIUS reactor utilizes simplified, inherent, passive, or other innovative means to accomplish safety functions. Accordingly, the PIUS reactor is subject to the requirements of 10CFR52.47(b)(2)(i)(A). This regulation requires that the applicant adequately demonstrate the performance of each safety feature, interdependent effects among the safety features, and a sufficient data base on the safety features of the design to assess the analytical tools used for safety analysis. Los Alamos has assessed the quality and completeness of the existing and planned data bases used by Asea Brown Boveri to validate its safety analysis codes and other relevant data bases. Only a limited data base of separate effect and integral tests exists at present. This data base is not adequate to fulfill the requirements of 10CFR52.47(b)(2)(i)(A). Asea Brown Boveri has stated that it plans to conduct more separate effect and integral test programs. If appropriately designed and conducted, these test programs have the potential to satisfy most of the data base requirements of 10CFR52.47(b)(2)(i)(A) and remedy most of the deficiencies of the currently existing combined data base. However, the most important physical processes in PIUS are related to reactor shutdown because the PIUS reactor does not contain rodded shutdown and control systems. For safety-related reactor shutdown, PIUS relies on negative reactivity insertions from the moderator temperature coefficient and from boron entering the core from the reactor pool. Asea Brown Boveri has neither developed a direct experimental data base for these important processes nor provided a rationale for indirect testing of these key PIUS processes. This is assessed as a significant shortcoming. In preparing the conclusions of this report, test documentation and results have been reviewed for only one integral test program, the small-scale integral tests conducted in the ATLE facility

  13. Experimental device, corresponding forward model and processing of the experimental data using wavelet analysis for tomographic image reconstruction applied to eddy current nondestructive evaluation

    International Nuclear Information System (INIS)

    Joubert, P.Y.; Madaoui, N.

    1999-01-01

    In the context of eddy current non-destructive evaluation using a tomographic image reconstruction process, the success of the reconstruction depends not only on the choice of the forward model and of the inversion algorithms, but also on the ability to extract the pertinent data from the raw signal provided by the sensor. We present in this paper an experimental device designed for imaging purposes, the corresponding forward model, and a pre-processing of the experimental data using wavelet analysis. These three steps, combined with an inversion algorithm, will in the future allow image reconstruction of 3-D flaws. (authors)

  14. Laboratory-based Interpretation of Seismological Models: Dealing with Incomplete or Incompatible Experimental Data (Invited)

    Science.gov (United States)

    Jackson, I.; Kennett, B. L.; Faul, U. H.

    2009-12-01

    In parallel with cooperative developments in seismology during the past 25 years, there have been phenomenal advances in mineral/rock physics, making laboratory-based interpretation of seismological models increasingly useful. However, the assimilation of diverse experimental data into a physically sound framework for seismological application is not without its challenges, as demonstrated by two examples. In the first example, that of equation-of-state and elasticity data, an appropriate, thermodynamically consistent framework involves finite-strain expansion of the Helmholtz free energy incorporating the Debye approximation to the lattice vibrational energy, as advocated by Stixrude and Lithgow-Bertelloni. Within this context, pressure, specific heat and entropy, thermal expansion, elastic constants and their adiabatic and isothermal pressure derivatives are all calculable without further approximation in an internally consistent manner. The opportunities and challenges of assimilating a wide range of sometimes marginally incompatible experimental data into a single model of this type will be demonstrated with reference to MgO, unquestionably the most thoroughly studied mantle mineral. A neighbourhood-algorithm inversion has identified a broadly satisfactory model, but uncertainties in key parameters associated particularly with pressure calibration remain sufficiently large as to preclude definitive conclusions concerning lower-mantle chemical composition and departures from adiabaticity. The second example is the much less complete dataset concerning seismic-wave dispersion and attenuation emerging from low-frequency forced-oscillation experiments. Significant progress has been made during the past decade towards an understanding of high-temperature, micro-strain viscoelastic relaxation in upper-mantle materials, especially as regards the roles of oscillation period, temperature, grain size and melt fraction.
However, the influence of other potentially important
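As a hedged illustration of the finite-strain framework invoked above (and not the authors' full thermodynamic model), the isothermal compression part of such an expansion is often taken in the third-order Birch-Murnaghan form,

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
       \left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\},
```

where K_0 is the zero-pressure isothermal bulk modulus, K_0' its pressure derivative and V_0 the zero-pressure volume; the thermal pressure from the Debye vibrational energy is then added on top of this cold compression curve, which is how the elastic constants and their derivatives become calculable in a single internally consistent scheme.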

  15. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which entails many disadvantages. With the development of multi-core processors and new scheduling algorithms, Linux OS exhibits performance in real-time applications similar to that of VxWorks. It has been successfully used even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory buffer for DMA transfer is allocated by slightly modifying the Linux kernel (version 2.6) source code. To increase the throughput of network transmission, the user software is designed to exploit parallelism. To achieve high performance in real-time data transfer from hardware to software, mapping techniques must be used to avoid unnecessary data copying. A simplified readout system is implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions on the use of this method, hardware or software, which means that it can be easily migrated to other interrupt-related applications.
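The pipeline described above (preallocated DMA-style buffers, parallel read and send stages, no data copying) can be mimicked in a toy form. Queue depths, block sizes and stage names below are illustrative, not taken from the paper, and Python threads stand in for the real-time tasks only to show the buffer-recycling pattern.

```python
import threading, queue

# Two-stage readout pipeline: a reader fills preallocated buffers
# (memoryview assignment reuses the same memory, mimicking zero-copy
# mapping) and a sender forwards and recycles them in parallel.
BUF_SIZE, N_BLOCKS = 4096, 8
free_bufs = queue.Queue()
full_bufs = queue.Queue()
for _ in range(4):                            # fixed pool of buffers
    free_bufs.put(bytearray(BUF_SIZE))

forwarded = []

def reader():
    for i in range(N_BLOCKS):
        buf = free_bufs.get()                 # reuse a preallocated buffer
        memoryview(buf)[:] = bytes([i % 256]) * BUF_SIZE   # "DMA" fill
        full_bufs.put(buf)
    full_bufs.put(None)                       # end-of-stream marker

def sender():
    while (buf := full_bufs.get()) is not None:
        forwarded.append(buf[0])              # "forward" to the data center
        free_bufs.put(buf)                    # recycle the buffer

t1, t2 = threading.Thread(target=reader), threading.Thread(target=sender)
t1.start(); t2.start(); t1.join(); t2.join()
print(forwarded)   # block tags arrive in order: [0, 1, 2, 3, 4, 5, 6, 7]
```

The real system replaces the reader with a kernel driver exposing the contiguous DMA buffer via memory mapping, and the sender with parallel Gigabit Ethernet transmission threads.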

  16. Program PLOTC4 (Version 86-1). Plot evaluated data from the ENDF/B format and/or experimental data which is in a computation format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1986-09-01

    Experimental and evaluated nuclear reaction data are compiled worldwide in the EXFOR and ENDF formats, respectively. The computer program PLOTC4 described in the present document plots data from both formats; EXFOR data must first be converted to a ''computation format''. The program is available free of charge from the IAEA Nuclear Data Section, upon request. (author)

  17. Watchdog - a workflow management system for the distributed analysis of large-scale experimental data.

    Science.gov (United States)

    Kluge, Michael; Friedel, Caroline C

    2018-03-13

    The development of high-throughput experimental technologies, such as next-generation sequencing, has led to new challenges for handling, analyzing and integrating the resulting large and diverse datasets. Bioinformatical analysis of these data commonly requires a number of mutually dependent steps applied to numerous samples for multiple conditions and replicates. To support these analyses, a number of workflow management systems (WMSs) have been developed to allow automated execution of corresponding analysis workflows. Major advantages of WMSs are the easy reproducibility of results as well as the reusability of workflows or their components. In this article, we present Watchdog, a WMS for the automated analysis of large-scale experimental data. Main features include straightforward processing of replicate data, support for distributed computer systems, customizable error detection and manual intervention into workflow execution. Watchdog is implemented in Java and thus platform-independent and allows easy sharing of workflows and corresponding program modules. It provides a graphical user interface (GUI) for workflow construction using pre-defined modules as well as a helper script for creating new module definitions. Execution of workflows is possible using either the GUI or a command-line interface, and a web-interface is provided for monitoring the execution status and intervening in case of errors. To illustrate its potential on a real-life example, a comprehensive workflow and modules for the analysis of RNA-seq experiments were implemented and are provided with the software in addition to simple test examples. Watchdog is a powerful and flexible WMS for the analysis of large-scale high-throughput experiments. We believe it will greatly benefit both users with and without programming skills who want to develop and apply bioinformatical workflows with reasonable overhead. The software, example workflows and comprehensive documentation are freely available
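The core service a WMS provides, executing mutually dependent steps in the right order, can be sketched in a few lines. The task graph below is an invented RNA-seq-flavoured example; it illustrates the dependency-resolution idea only and has nothing to do with Watchdog's actual XML module format.

```python
def run_workflow(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisite names.
    Runs each task exactly once, after all of its prerequisites."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for d in deps.get(name, []):
            run(d)              # ensure prerequisites ran first
        tasks[name]()
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

log = []
tasks = {s: (lambda s=s: log.append(s)) for s in
         ["align", "trim", "count", "diffexp"]}   # invented step names
deps = {"align": ["trim"], "count": ["align"], "diffexp": ["count"]}
print(run_workflow(tasks, deps))   # ['trim', 'align', 'count', 'diffexp']
```

A real WMS layers per-sample fan-out, distributed scheduling, error detection and resumption on top of exactly this dependency ordering.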

  18. Meteorological and snow distribution data in the Izas Experimental Catchment (Spanish Pyrenees) from 2011 to 2017

    Science.gov (United States)

    Revuelto, Jesús; Azorin-Molina, Cesar; Alonso-González, Esteban; Sanmiguel-Vallelado, Alba; Navarro-Serrano, Francisco; Rico, Ibai; López-Moreno, Juan Ignacio

    2017-12-01

    This work describes the snow and meteorological data set available for the Izas Experimental Catchment in the Central Spanish Pyrenees, from the 2011 to 2017 snow seasons. The experimental site is located on the southern side of the Pyrenees between 2000 and 2300 m above sea level, covering an area of 55 ha. The site is a good example of a subalpine environment in which the evolution of snow accumulation and melt are of major importance in many mountain processes. The climatic data set consists of (i) continuous meteorological variables acquired from an automatic weather station (AWS), (ii) detailed information on snow depth distribution collected with a terrestrial laser scanner (TLS, lidar technology) for certain dates across the snow season (between three and six TLS surveys per snow season) and (iii) time-lapse images showing the evolution of the snow-covered area (SCA). The meteorological variables acquired at the AWS are precipitation, air temperature, incoming and reflected solar radiation, infrared surface temperature, relative humidity, wind speed and direction, atmospheric air pressure, surface temperature (snow or soil surface), and soil temperature; all were taken at 10 min intervals. Snow depth distribution was measured during 23 field campaigns using a TLS, and daily information on the SCA was also retrieved from time-lapse photography. The data set (https://doi.org/10.5281/zenodo.848277) is valuable since it provides high-spatial-resolution information on the snow depth and snow cover, which is particularly useful when combined with meteorological variables to simulate snow energy and mass balance. This information has already been analyzed in various scientific studies on snow pack dynamics and its interaction with the local climatology or topographical characteristics. 
However, the database generated has great potential for understanding other environmental processes from a hydrometeorological or ecological perspective in which snow dynamics play a

  19. Achieving graphical excellence: suggestions and methods for creating high-quality visual displays of experimental data.

    Science.gov (United States)

    Schriger, D L; Cooper, R J

    2001-01-01

    Graphics are an important means of communicating experimental data and results. There is evidence, however, that many of the graphics printed in scientific journals contain errors and redundancies and lack clarity. Perhaps more important, many graphics fail to portray data at an appropriate level of detail, presenting summary statistics rather than underlying distributions. We seek to aid investigators in the production of high-quality graphics that do their investigations justice by providing the reader with optimum access to the relevant aspects of the data. The depiction of by-subject data, the signification of pairing when present, and the use of symbolic dimensionality (graphing different symbols to identify relevant subgroups) and small multiples (the presentation of an array of similar graphics, each depicting one group of subjects) to portray stratification are stressed. Step-by-step instructions for the construction of high-quality graphics are offered. We hope that authors will incorporate these suggestions when developing graphics to accompany their manuscripts and that this process will lead to improvements in the graphical literacy of scientific journals. We also hope that journal editors will keep these principles in mind when refereeing manuscripts submitted for peer review.

  20. Use of air monitoring and experimental aerosol data for intake assessment for Mayak plutonium workers

    International Nuclear Information System (INIS)

    Zaytseva, Yekaterina V.; Tretyakov, Fyodor D.; Romanov, Sergey A.; Miller, Guthrie; Bertelli, Luiz; Guilmette, Raymond A.

    2007-01-01

    One of the major uncertainties in reconstructing doses to Mayak plutonium (Pu) workers is the unknown exposure patterns experienced by individuals. These uncertainties include the amounts of Pu inhaled, the temporal pattern of Pu air concentration, and the particle-size distribution and solubility of the inhaled aerosols. To date, little individual and workplace-specific information has been used to assess these parameters for the Mayak workforce. However, an extensive workplace-specific alpha-activity air monitoring data set has been collated which, if coupled with individual occupational histories, can potentially provide customized intake scenarios for individual Mayak workers. The most widely available Pu air concentration data are annual averages, which exist for over 100 defined work stations at the radiochemical and chemical-metallurgical manufacturing facilities and for essentially the whole period of Mayak production operations. Much sparser but more accurate data on Pu concentrations in workers' breathing zones are available for some major workplaces and occupations. The latter demonstrate that, within a working shift, Pu concentrations varied over a range of several orders of magnitude depending on the nature of the operations performed. An approach to using the collated data set for individual intake reconstruction is formulated and its practical application is demonstrated. Initial results of an ongoing experimental study on historical particle sizes at the Mayak PA and their implications for intake estimation are presented. (authors)

  1. Relevance and reliability of experimental data in human health risk assessment of pesticides.

    Science.gov (United States)

    Kaltenhäuser, Johanna; Kneuer, Carsten; Marx-Stoelting, Philip; Niemann, Lars; Schubert, Jens; Stein, Bernd; Solecki, Roland

    2017-08-01

    Evaluation of data relevance, reliability and contribution to uncertainty is crucial in regulatory health risk assessment if robust conclusions are to be drawn. Whether a specific study is used as key study, as additional information or not accepted depends in part on the criteria according to which its relevance and reliability are judged. In addition to GLP-compliant regulatory studies following OECD Test Guidelines, data from peer-reviewed scientific literature have to be evaluated in regulatory risk assessment of pesticide active substances. Publications should be taken into account if they are of acceptable relevance and reliability. Their contribution to the overall weight of evidence is influenced by factors including test organism, study design and statistical methods, as well as test item identification, documentation and reporting of results. Various reports make recommendations for improving the quality of risk assessments and different criteria catalogues have been published to support evaluation of data relevance and reliability. Their intention was to guide transparent decision making on the integration of the respective information into the regulatory process. This article describes an approach to assess the relevance and reliability of experimental data from guideline-compliant studies as well as from non-guideline studies published in the scientific literature in the specific context of uncertainty and risk assessment of pesticides. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. SPECT reconstruction of combined cone beam and parallel hole collimation with experimental data

    International Nuclear Information System (INIS)

    Li, Jianying; Jaszczak, R.J.; Turkington, T.G.; Greer, K.L.; Coleman, R.E.

    1993-01-01

    The authors have developed three methods to combine parallel and cone beam (P and CB) SPECT data using modified Maximum Likelihood-Expectation Maximization (ML-EM) algorithms. The first combination method applies both parallel and cone beam data sets to reconstruct a single intermediate image after each iteration using the ML-EM algorithm. The other two iterative methods combine the intermediate parallel beam (PB) and cone beam (CB) source estimates to enhance the uniformity of the images. These two methods are ad hoc methods. Earlier studies using Monte Carlo computer simulations suggested that improved images might be obtained by reconstructing combined P and CB SPECT data. These combined collimation methods are qualitatively evaluated using experimental data. Attenuation compensation is performed by including the effects of attenuation in the transition matrix as a multiplicative factor. The combined P and CB images are compared with CB-only images, and the results indicate that the combined P and CB approaches suppress artifacts caused by truncated projections and correct for the distortions of the CB-only images
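For reference, the unmodified ML-EM update on which such combined reconstructions build is x ← (x / Aᵀ1) · Aᵀ(y / Ax), applied elementwise. The sketch below runs it on a tiny synthetic system matrix; it is not the authors' combined P and CB algorithm, and the attenuation factors they fold into the transition matrix are omitted.

```python
import numpy as np

def ml_em(A, y, n_iter=500):
    """Plain ML-EM for emission tomography on a small dense system:
    x_{k+1} = (x_k / A^T 1) * A^T (y / A x_k), preserving positivity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward-project estimate
        x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
    return x

A = np.array([[1.0, 0.5, 0.0],                # toy 3x3 "system matrix"
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                                # noise-free projections
print(ml_em(A, y))                            # converges toward x_true
```

The combined methods of the abstract interleave updates of this form driven by the parallel-beam and cone-beam projection sets, which is what suppresses the truncation artifacts of a CB-only reconstruction.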

  3. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment
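The uncertainty-reduction step at the heart of such experiment optimization can be illustrated with a generalized least-squares (Bayesian) covariance update: an experiment with sensitivity matrix S and measurement covariance V shrinks a prior parameter covariance C to C_post = (C⁻¹ + SᵀV⁻¹S)⁻¹. The prior covariance, sensitivity and measurement uncertainty below are invented numbers, not from the report.

```python
import numpy as np

# Bayesian/GLS assimilation of one experiment into a two-parameter
# nuclear-data covariance: the posterior covariance is
#   C_post = (C^{-1} + S^T V^{-1} S)^{-1},
# and its reduced diagonal is what allows design margins to shrink.
C = np.diag([0.04, 0.09])            # prior parameter variances
S = np.array([[1.0, 0.5]])           # experiment sensitivity to parameters
V = np.array([[0.01]])               # experimental measurement variance
C_post = np.linalg.inv(np.linalg.inv(C) + S.T @ np.linalg.inv(V) @ S)
print(np.diag(C))                    # prior variances
print(np.diag(C_post))               # smaller posterior variances
```

An experiment optimizer in this spirit would search over candidate designs (different S and V) for the one whose posterior covariance yields the cheapest acceptable reactor design.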

  4. Experimental Space Shuttle Orbiter Studies to Acquire Data for Code and Flight Heating Model Validation

    Science.gov (United States)

    Wadhams, T. P.; Holden, M. S.; MacLean, M. G.; Campbell, Charles

    2010-01-01

    In an experimental study to obtain detailed heating data over the Space Shuttle Orbiter, CUBRC has completed an extensive matrix of experiments using three distinct models and two unique hypervelocity wind tunnel facilities. This detailed data set will be employed to assess heating augmentation due to boundary layer transition on the Orbiter wing leading edge and windside acreage, with comparisons to computational methods and flight data obtained during the Orbiter Entry Boundary Layer Flight Experiment and HYTHIRM during STS-119 reentry. These comparisons will facilitate critical updates to the engineering tools employed to make assessments about natural and tripped boundary layer transition during Orbiter reentry. To achieve the goals of this study, data were obtained over a range of Mach numbers from 10 to 18, with flight-scaled Reynolds numbers and model attitudes representing key points on the Orbiter reentry trajectory. The first of these studies was performed as an integral part of Return to Flight activities following the accident that occurred during the reentry of the Space Shuttle Columbia (STS-107) in February of 2003. This accident was caused by debris, which originated from the foam covering the external tank bipod fitting ramps, striking and damaging critical wing leading edge heating tiles that reside in the Orbiter bow shock/wing interaction region. During the accident investigation, aeroheating team members discovered that only a limited amount of experimental wing leading edge data existed in this critical peak heating area, and a need arose to acquire a detailed dataset of heating in this region. This new dataset was acquired in three phases consisting of a risk mitigation phase employing a 1.8% scale Orbiter model with special temperature sensitive paint covering the wing leading edge, a 0.9% scale Orbiter model with high resolution thin-film instrumentation in the span direction, and the primary 1.8% scale Orbiter model with detailed

  5. Assessment of CANDU physics codes using experimental data - II: CANDU core physics measurements

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Jeong, Chang Joon; Choi, Hang Bok

    2001-11-01

    Benchmark calculations of the advanced CANDU reactor analysis tools (WIMS-AECL, SHETAN and RFSP) and the Monte Carlo code MCNP-4B have been performed using Wolsong Units 2 and 3 Phase-B measurement data. In this study, the benchmark calculations have been done for the criticality, boron worth, reactivity device worth, reactivity coefficient, and flux scan. For the validation of the WIMS-AECL/SHETAN/RFSP code system, the lattice parameters of the fuel channel were generated by the WIMS-AECL code, and incremental cross sections of reactivity devices and structural material were generated by the SHETAN code. The results have shown that the criticality is under-predicted by -4 mk. The reactivity device worths are generally consistent with the measured data except for strong absorbers such as the shutoff rods and the mechanical control absorbers. The heat transport system temperature coefficient and flux distributions are in good agreement with the measured data. However, the moderator temperature coefficient has shown a relatively large error, which could be caused by the incremental cross-section generation methodology for the reactivity devices. For the MCNP-4B benchmark calculation, cross section libraries were newly generated from ENDF/B-VI release 3 through the NJOY97.114 data processing system and a three-dimensional full core model was developed. The simulation results have shown that the criticality is estimated within 4 mk and the estimated reactivity worths of the control devices are generally consistent with the measurement data, which implies that the MCNP code is valid for CANDU core analysis. In the future, therefore, the MCNP code could be used as a reference tool to benchmark design and analysis codes for advanced fuels for which experimental data are not available

  6. Effect of impurities and post-experimental purification in SAD phasing with serial femtosecond crystallography data.

    Science.gov (United States)

    Zhang, Tao; Gu, Yuanxin; Fan, Haifu

    2016-06-01

    In serial crystallography (SX) with either an X-ray free-electron laser (XFEL) or synchrotron radiation as the light source, huge numbers of micrometre-sized crystals are used in diffraction data collection. For a SAD experiment using a derivative with introduced heavy atoms, it is difficult to completely exclude crystals of the native protein from the sample. In this paper, simulations were performed to study how the inclusion of native crystals in the derivative sample could affect the result of SAD phasing and how the post-experimental purification proposed by Zhang et al. [(2015), Acta Cryst. D71, 2513-2518] could be used to remove the impurities. A gadolinium derivative of lysozyme and the corresponding native protein were used in the test. Serial femtosecond crystallography (SFX) diffraction snapshots were generated by CrystFEL. SHELXC/D, Phaser, DM, ARP/wARP and REFMAC were used for automatic structure solution. It is shown that a small amount of impurities (snapshots from native crystals) in the set of derivative snapshots can strongly affect the SAD phasing results. On the other hand, post-experimental purification can efficiently remove the impurities, leading to results similar to those from a pure sample.

  7. Comparison of GEANT4 Simulations with Experimental Data for Thick Al Absorbers

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Lopes, Ricardo; Hormaza, Joel

    2009-01-01

    Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Therefore, relatively small differences in the total proton stopping power given, for example, by the different models provided by GEANT4 can lead to significant disagreements in the final proton energy spectra when integrated along lengthy proton trajectories. This work presents proton energy spectra obtained by GEANT4.8.2 simulations using the ICRU49, Ziegler1985 and Ziegler2000 models for 19.68 MeV protons passing through Al absorbers of various thicknesses. The spectra were compared with experimental data, with TRIM/SRIM2008 and MCNPX2.4.0 simulations, and with the Payne analytical solution for the transport equation in the Fokker-Planck approximation. It is shown that the MCNPX simulations reproduce all experimental spectra reasonably well. For the relatively thin targets all the methods give practically identical results, but this is not the case for the thick absorbers. It should be noted that all the spectra were measured at proton energies significantly above 2 MeV, i.e., in the so-called 'Bethe-Bloch region'. Therefore the disagreements observed between GEANT4 results simulated with different models are somewhat unexpected. Further studies are necessary for better understanding and definitive conclusions.

  8. Antenatal environmental stress and maturation of the breathing control, experimental data.

    Science.gov (United States)

    Cayetanot, F; Larnicol, N; Peyronnet, J

    2009-08-31

    The nervous respiratory system undergoes postnatal maturation and yet must be functional at birth. Any suboptimal antenatal environment can disrupt either its prenatal construction and/or its maturation after birth. Here, we briefly summarize some of the major stresses that can occur during pregnancy and lead to clinical postnatal respiratory dysfunction; we then relate them to experimental models that have been developed to better understand the underlying mechanisms implicated in the respiratory dysfunctions observed in neonatal care units. Four sections review our current knowledge based on experimental data. The first deals with metabolic factors such as oxygen and glucose; the second with consumption of psychotropic substances (nicotine, cocaine, alcohol, morphine, cannabis and caffeine); the third with psychoactive molecules commonly consumed by pregnant women within a therapeutic context and/or delivered to premature neonates in critical care units (benzodiazepines, caffeine). In the fourth section, we consider care protocols involving extended maternal-infant separation due to isolation in incubators. The effects of this stress potentially add to those previously described.

  9. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    Directory of Open Access Journals (Sweden)

    F. Terzuoli

    2008-01-01

    Full Text Available Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFD) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in the PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX) and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. The obtained results suggest the relevance of three-dimensional effects and stress the importance of suitable interface drag modelling.

  10. CFD Code Validation against Stratified Air-Water Flow Experimental Data

    International Nuclear Information System (INIS)

    Terzuoli, F.; Galassi, M.C.; Mazzini, D.; D'Auria, F.

    2008-01-01

    Pressurized thermal shock (PTS) modelling has been identified as one of the most important industrial needs related to nuclear reactor safety. A severe PTS scenario limiting the reactor pressure vessel (RPV) lifetime is the cold water emergency core cooling (ECC) injection into the cold leg during a loss of coolant accident (LOCA). Since it represents a big challenge for numerical simulations, this scenario was selected within the European Platform for Nuclear Reactor Simulations (NURESIM) Integrated Project as a reference two-phase problem for computational fluid dynamics (CFD) code validation. This paper presents a CFD analysis of a stratified air-water flow experimental investigation performed at the Institut de Mécanique des Fluides de Toulouse in 1985, which shares some common physical features with the ECC injection in the PWR cold leg. Numerical simulations have been carried out with two commercial codes (Fluent and Ansys CFX), and a research code (NEPTUNE CFD). The aim of this work, carried out at the University of Pisa within the NURESIM IP, is to validate the free surface flow model implemented in the codes against experimental data, and to perform code-to-code benchmarking. Obtained results suggest the relevance of three-dimensional effects and stress the importance of a suitable interface drag modelling

  11. Hemorheological changes in ischemia-reperfusion: an overview on our experimental surgical data.

    Science.gov (United States)

    Nemeth, Norbert; Furka, Istvan; Miko, Iren

    2014-01-01

    Blood vessel occlusions of various origins, depending on their duration and extension, result in tissue damage, causing ischemic or ischemia-reperfusion injuries. Necessary surgical clamping of vessels in vascular, gastrointestinal or parenchymal organ surgery, flap preparation and transplantation in reconstructive surgery, as well as traumatological vascular occlusions, all present special aspects. Ischemia and reperfusion affect the hemorheological state in numerous ways: besides the local metabolic and micro-environmental changes, through hemodynamic alterations, free-radical and inflammatory pathways, acute phase reactions and coagulation changes. These processes may be harmful for red blood cells, impairing their deformability and influencing their aggregation behavior. However, there are still many unsolved or incompletely answered questions on the relation between hemorheology and ischemia-reperfusion. How do ischemia-reperfusion processes of different duration and temperature in various organs (liver, kidney, small intestine) or limbs affect the hemorheological factors? What is the expected magnitude and dynamics of these alterations? Where is the border of irreversibility? How can hemorheological investigations be applied to experimental models using laboratory animals with respect to inter-species differences? This paper gives a summary of some of our research data on organ/tissue ischemia-reperfusion, hemorheology and microcirculation, related to surgical research and experimental microsurgery.

  12. Thermodynamic analysis of chromium solubility data in liquid lithium containing nitrogen: Comparison between experimental data and computer simulation

    International Nuclear Information System (INIS)

    Krasin, Valery P.; Soyustova, Svetlana I.

    2015-01-01

    A mathematical formalism for describing solute interactions in dilute solutions of chromium and nitrogen in liquid lithium has been applied to calculate the temperature dependence of the solubility of chromium in liquid lithium with various nitrogen contents. It is shown that the derived equations are useful for understanding the relationship between thermodynamic properties and local ordering in the Li–Cr–N melt. Comparison between theory and data reported in the literature for the solubility of chromium in nitrogen-contaminated liquid lithium made it possible to explain the deviation of the experimental semi-logarithmic plot of chromium content in liquid lithium as a function of reciprocal temperature from a straight line. - Highlights: • The activity coefficient of chromium in the ternary melt can be obtained by integrating the Gibbs–Duhem equation. • In lithium with a high nitrogen content, the dependence of the logarithm of chromium solubility on reciprocal temperature has an essentially nonlinear character. • At temperatures below a certain threshold, the dissolution of chromium in lithium is controlled by the equilibrium concentration of nitrogen required for the formation of the ternary nitride Li9CrN5 at a given temperature.

  13. Texas Panhandle soil-crop-beef food chain for uranium: a dynamic model validated by experimental data

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Wallwork-Barber, K.M.; Rodgers, J.C.; Gallegos, A.F.

    1982-01-01

    Long-term simulations of uranium transport in the soil-crop-beef food chain were performed using the BIOTRAN model. Experimental data means from an extensive Pantex beef cattle study are presented. Experimental data were used to validate the computer model. Measurements of uranium in air, soil, water, range grasses, feed, and cattle tissues are compared to simulated uranium output values in these matrices when the BIOTRAN model was set at the measured soil and air values. The simulations agreed well with experimental data even though metabolic details for ruminants and uranium chemical form in the environment remain to be studied

  14. Beauty photoproduction at HERA. k{sub T}-factorization versus experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Lipatov, A.V.; Zotov, N.P. [M.V. Lomonosov Moscow State Univ., Moscow (Russian Federation). D.V. Skobeltsyn Institute of Nuclear Physics

    2006-05-15

    We present calculations of the beauty photoproduction at HERA collider in the framework of the k{sub T}-factorization approach. Both direct and resolved photon contributions are taken into account. The unintegrated gluon densities in a proton and in a photon are obtained from the full CCFM, from unified BFKL-DGLAP evolution equations as well as from the Kimber-Martin-Ryskin prescription. We investigate different production rates (both inclusive and associated with hadronic jets) and compare our theoretical predictions with the recent experimental data taken by the H1 and ZEUS collaborations. Special attention is put on the x{sup obs}{sub {gamma}} variable which is sensitive to the relative contributions to the beauty production cross section. (Orig.)

  15. Intermediate Radical Termination Theory in Elucidation of RAFT Kinetics and Comparison to Experimental Data

    Directory of Open Access Journals (Sweden)

    M. Baqeri-Jagharq

    2008-12-01

    Full Text Available In the current work a comprehensive mechanism based on intermediate radical termination theory is assumed for RAFT polymerization of styrene with cumyl dithiobenzoate as the RAFT agent. Rate constants for the addition (ka) and fragmentation (kf) reactions are set to 6×10⁶ and 5×10⁴, respectively, which leads to an equilibrium constant value of K = ka/kf = 1.2×10². The method of moment equations was used to model this mechanism, and the results were compared to experimental data to verify the modeling. The effects of changing RAFT agent concentration on conversion, molecular weight and polydispersity index of the final product were investigated through the modeling. According to the results, the likelihood of living polymerization increases with increasing RAFT agent concentration, which leads to linearity of the conversion and molecular weight curves and therefore lowers the polydispersity index and narrows the molecular weight distribution.
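Reading the abstract's rate constants as 6×10⁶ and 5×10⁴ (the exponents consistent with the quoted equilibrium constant), the quoted K follows directly; a minimal arithmetic check:

```python
# Rate constants as quoted in the abstract (units omitted here);
# the garbled exponents are read as 10^6 and 10^4, which is the
# reading consistent with the quoted K = 1.2 x 10^2.
ka = 6e6   # addition rate constant
kf = 5e4   # fragmentation rate constant

K = ka / kf
print(K)  # 120.0, i.e. 1.2 x 10^2 as stated in the abstract
```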

  16. Comparison of SRIM, MCNPX and GEANT simulations with experimental data for thick Al absorbers

    International Nuclear Information System (INIS)

    Evseev, Ivan G.; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Yevseyeva, Olga; Assis, Joaquim T. de; Hormaza, Joel M.; Diaz, Katherin S.; Lopes, Ricardo T.

    2010-01-01

    Proton computerized tomography deals with relatively thick targets like the human head or trunk. In this case a precise analytical calculation of the proton final energy is a rather complicated task, so Monte Carlo simulation stands out as a solution. We used the GEANT4.8.2 code to calculate the proton final energy spectra after passing through a thick Al absorber and compared them with experimental data obtained under the same conditions. The ICRU49, Ziegler85 and Ziegler2000 models from the low energy extension pack were used. The results were also compared with SRIM2008 and MCNPX2.4 simulations, and with solutions of the Boltzmann transport equation in the Fokker-Planck approximation.

  17. Comparison of SRIM, MCNPX and GEANT simulations with experimental data for thick Al absorbers

    Energy Technology Data Exchange (ETDEWEB)

    Evseev, Ivan G. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil); Schelin, Hugo R. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil)], E-mail: schelin@utfpr.edu.br; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P. [Federal University of Technology-Parana-UTFPR, Av.7 de Setembro 3165, Curitiba-PR (Brazil); Yevseyeva, Olga; Assis, Joaquim T. de [Instituto Politecnico da UERJ, Rua Alberto Rangel s/n, Nova Friburgo-RJ (Brazil); Hormaza, Joel M. [Instituto de Biociencias da UNESP, Distrito de Rubiao Junior s/n, Botucatu-SP (Brazil); Diaz, Katherin S. [CEADEN, Calle 30 502 e/5ta y 7ma Avenida, Playa, Ciudad Habana (Cuba); Lopes, Ricardo T. [Laboratorio de Instrumentacao Nuclear, COPPE, UFRJ, Rio de Janeiro-RJ (Brazil)

    2010-04-15

    Proton computerized tomography deals with relatively thick targets like the human head or trunk. In this case a precise analytical calculation of the proton final energy is a rather complicated task, so Monte Carlo simulation stands out as a solution. We used the GEANT4.8.2 code to calculate the proton final energy spectra after passing through a thick Al absorber and compared them with experimental data obtained under the same conditions. The ICRU49, Ziegler85 and Ziegler2000 models from the low energy extension pack were used. The results were also compared with SRIM2008 and MCNPX2.4 simulations, and with solutions of the Boltzmann transport equation in the Fokker-Planck approximation.

  18. Adsorption of cellulases onto sugar beet shreds and modeling of the experimental data

    Directory of Open Access Journals (Sweden)

    Ivetić Darjana Ž.

    2014-01-01

    Full Text Available This study investigated the adsorption of cellulases onto sugar beet shreds. The experiments were carried out using untreated, as well as dried and not dried dilute-acid and steam pretreated sugar beet shreds at different initial enzyme loads. Both dilute-acid and steam pretreatment were beneficial with respect to cellulase adsorption, providing 8 and 9 times higher amounts of adsorbed proteins, respectively, in comparison to the results obtained with the untreated substrate. Although the higher solids load enabled by drying of the pretreated substrates could be beneficial for process productivity, it also decreases the adsorption of enzymes. The obtained experimental data were fitted to five adsorption models; the Langmuir model, which had the lowest residual sum of squares, was used to determine the adsorption parameters, which were in turn used to calculate the strength of cellulase binding to the substrates. [Project of the Ministry of Science of the Republic of Serbia, no. TR 31002]
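A Langmuir fit of the kind described above can be sketched with a linearised least-squares fit; the isotherm form is the standard Langmuir equation, while the concentration values below are illustrative stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical free-enzyme concentrations C (mg/mL) and adsorbed amounts q
# (mg/g substrate); generated from a known isotherm so the fit can be checked.
C = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
q_max_true, K_true = 50.0, 4.0
q = q_max_true * K_true * C / (1.0 + K_true * C)   # Langmuir: q = q_max*K*C/(1+K*C)

# Linearised Langmuir form:  C/q = C/q_max + 1/(q_max*K)
# so a straight-line fit of C/q against C yields both parameters.
slope, intercept = np.polyfit(C, C / q, 1)
q_max = 1.0 / slope
K = 1.0 / (q_max * intercept)
print(q_max, K)  # recovers q_max = 50, K = 4
```

In practice one would fit the nonlinear form directly and compare residual sums of squares across candidate isotherms, as the paper does.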

  19. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    Science.gov (United States)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

    Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid-converged turbulent predictions using the wall damping formulation (original model) and the local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., at the low T(sub w)/T(sub t) ratios typical of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in the design and analysis of Hyper-X and future hypersonic vehicles.

  20. A note on the analysis of germination data from complex experimental designs

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Andreasen, Christian; Streibig, Jens Carl

    2017-01-01

    In recent years germination experiments have become more and more complex. Typically, they are replicated in time as independent runs and at each time point they involve hierarchical, often factorial experimental designs, which are now commonly analysed by means of linear mixed models. However, in order to characterize germination in response to time elapsed, specific event-time models are needed, and mixed model extensions of these models are not readily available, neither in theory nor in practice. As a practical workaround we propose a two-step approach that combines and weighs together results from event-time models fitted separately to data from each germination test by means of meta-analytic random effects models. We show that this approach provides a more appropriate appreciation of the sources of variation in hierarchically structured germination experiments, as both between- and within-experiment variation are taken into account.
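The second, meta-analytic step of such a two-step approach can be sketched with DerSimonian-Laird random-effects pooling; the per-run estimates and standard errors below are hypothetical, and the actual models and weighting used in the paper may differ:

```python
import numpy as np

# Step 1 (assumed already done): an event-time model fitted separately to each
# run yields an estimate (e.g. a log median germination time) and its standard
# error. These four runs are hypothetical illustrations.
est = np.array([2.10, 2.35, 1.95, 2.20])
se = np.array([0.10, 0.12, 0.09, 0.15])

# Step 2: DerSimonian-Laird random-effects pooling.
w = 1.0 / se**2                       # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * est) / np.sum(w)   # fixed-effect pooled estimate
Q = np.sum(w * (est - mu_fe) ** 2)    # Cochran's Q heterogeneity statistic
df = len(est) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-run variance
w_re = 1.0 / (se**2 + tau2)           # random-effects weights
mu_re = np.sum(w_re * est) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(mu_re, se_re)
```

The between-run variance component tau2 is what captures the "between-experiment" variation that a naive pooled analysis would ignore.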

  1. Soil class around the Serpong experimental power reactor (EPR) site plan based on micro tremor data

    International Nuclear Information System (INIS)

    Marjiyono; Soehaimi A; Hadi Suntoko; Yuliastuti; Syaeful H

    2015-01-01

    Surface geological characteristics have an important role in site response analysis for a region. In connection with the experimental power reactor (EPR) construction plan in Serpong, subsurface modeling from combined array and single-station micro tremor data was carried out. The array and single-station micro tremor measurements were performed at 9 and 90 sites, respectively, within a ± 1 km radius around the EPR site plan. The Vs30 value was calculated from the shear wave velocity structure of the investigated area. The soil classification based on Vs30 in the investigated area generally consists of the SD (medium soil) and SC (soft rock) classes. The EPR site plan itself lies in the SD class region. (author)
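The SD/SC labels above are Vs30-based site classes; a sketch of such a classifier follows, using NEHRP-style class boundaries that are assumed here rather than taken from the study:

```python
def site_class_from_vs30(vs30):
    """Return a NEHRP-style site class for a Vs30 value in m/s.

    The boundary values (1500, 760, 360, 180 m/s) follow the common
    NEHRP convention; the study may use a slightly different scheme.
    """
    if vs30 > 1500:
        return "SA (hard rock)"
    if vs30 > 760:
        return "SB (rock)"
    if vs30 > 360:
        return "SC (soft rock / very dense soil)"
    if vs30 > 180:
        return "SD (medium soil)"
    return "SE (soft soil)"

print(site_class_from_vs30(300))  # a 300 m/s site falls in class SD
```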

  2. Application of artificial neural networks in analysis of CHF experimental data in round tubes

    International Nuclear Information System (INIS)

    Huang Yanping; Chen Bingde; Lang Xuemei; Wang Xiaojun; Shan Jianqiang; Jia Dounan

    2004-01-01

    Artificial neural networks (ANNs) are applied successfully to analyze critical heat flux (CHF) experimental data from round tubes in this paper. A software package applying the artificial neural network method to predict CHF in round tubes was developed, together with a CHF database. Compared with common CHF correlations and the CHF look-up table, the ANN method has stronger fault tolerance and robustness. The CHF prediction software based on artificial neural network technology can improve prediction accuracy over a wider parameter range, and is easier to update and to use. The artificial neural network method used in this paper can be applied to similar physical problems. (authors)
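As an illustration of the general approach (not the authors' software), a minimal feed-forward network trained by full-batch gradient descent on synthetic stand-in data can be sketched in plain NumPy; the input features and target function are hypothetical placeholders for (pressure, mass flux, quality) and measured CHF:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: three inputs scaled to [0, 1) and a smooth
# synthetic target standing in for measured CHF -- not the paper's database.
X = rng.random((200, 3))
y = (2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 1] ** 2).reshape(-1, 1)

# One hidden layer (tanh), linear output, mean-squared-error loss.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    pred = H @ W2 + b2                     # network output
    err = pred - y
    # Backpropagate the MSE gradient through both layers.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0, keepdims=True)
    dH = (err @ W2.T) * (1 - H ** 2)       # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float((err ** 2).mean())
print(mse)  # training error, should be well below the variance of y
```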

  3. Experimental Data and Guidelines for Stone Masonry Structures: a Comparative Review

    International Nuclear Information System (INIS)

    Romano, Alessandra

    2008-01-01

    Indications about the mechanical properties of masonry structures contained in many Italian guidelines are based on different aspects concerning both the constituent materials (units and mortar) and their assemblage. Indeed, the documents define different classes (depending on the type, the arrangement and the unit properties) and suggest the use of amplification coefficients to take into account the influence of different factors on the mechanical properties of masonry. In this paper, a critical discussion of the indications proposed by some Italian guidelines for stone masonry structures is presented. Particular attention is paid to the classification criteria for the masonry type and to the choice of the amplification factors. Finally, a detailed analytical comparison between the suggested values and some relevant experimental data recently published is performed

  4. Comparative study of methods on outlying data detection in experimental results

    International Nuclear Information System (INIS)

    Oliveira, P.M.S.; Munita, C.S.; Hazenfratz, R.

    2009-01-01

    The interpretation of experimental results through multivariate statistical methods might reveal the existence of outliers, which is rarely taken into account by analysts. However, their presence can influence the interpretation of the results, generating false conclusions. This paper shows the importance of outlier determination for a database of 89 samples of ceramic fragments analyzed by neutron activation analysis. The results were submitted to five procedures to detect outliers: Mahalanobis distance, cluster analysis, principal component analysis, factor analysis, and standardized residuals. The results showed that although cluster analysis is one of the procedures most used to identify outliers, it can fail by not showing samples that are easily identified as outliers by other methods. In general, the statistical procedures for the identification of outliers are little known by analysts. (author)
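The first of the five procedures, Mahalanobis distance screening, can be sketched as follows; the data are synthetic stand-ins for the 89 ceramic-fragment compositions, and the chi-squared cutoff is one common convention rather than the paper's stated threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic elemental concentrations for 89 samples and 5 elements, with two
# deliberately shifted rows standing in for outlying fragments.
X = rng.normal(0.0, 1.0, (89, 5))
X[3] += 6.0
X[57] -= 6.0

# Squared Mahalanobis distance of each sample from the data centroid.
mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = X - mean
md2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)   # d_i^T * Sigma^-1 * d_i

# A common cutoff: chi-squared quantile with dof = number of variables;
# the 99% quantile for 5 degrees of freedom is about 15.09.
outliers = np.where(md2 > 15.09)[0]
print(outliers)  # includes the two shifted samples
```

Robust variants re-estimate the mean and covariance without the suspected outliers, since extreme points inflate the classical covariance and mask each other.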

  5. Verification of experimental modal modeling using HDR (Heissdampfreaktor) dynamic test data

    International Nuclear Information System (INIS)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1983-01-01

    Experimental modal modeling involves the determination of the modal parameters of a model of a structure from recorded input-output data from dynamic tests. Though commercial modal analysis algorithms are widely used in many industries, their ability to identify a set of reliable modal parameters of an as-built nuclear power plant structure has not been systematically verified. This paper describes an effort to verify MODAL-PLUS, a widely used modal analysis code, using recorded data from dynamic tests performed on the reactor building of the Heissdampfreaktor, situated near Frankfurt, Federal Republic of Germany. In the series of dynamic tests on the HDR in 1979, the reactor building was subjected to forced vibrations from different types and levels of dynamic excitation. Two sets of HDR containment building input-output data were chosen for MODAL-PLUS analyses. To reduce the influence of nonlinear behavior on the results, these sets were chosen so that the levels of excitation were relatively low and about the same in the two sets. The attempted verification was only partially successful: only one modal model, with a limited range of validity, could be synthesized, and the goodness of fit could be verified only within that limited range

  6. The new real-time control and data acquisition system for an experimental tritium removal facility

    International Nuclear Information System (INIS)

    Stefan, Iuliana; Stefan, Liviu; Retevoi, Carmen; Balteanu, Ovidiu; Bucur, Ciprian

    2006-01-01

    Full text: The purpose of this paper is to present a real-time control and data acquisition system based on virtual instrumentation (LabView, Compact I/O) applicable to an experimental heavy water detritiation plant. The initial data acquisition system, based on analogue instruments, has now been upgraded to a fully digital system because of the greater flexibility and capability of digital hardware, which allows easy modification of the control system. Virtual instrumentation has lately become widely used for monitoring and controlling operational parameters in plants. In the specific case of the ETRF there are many process parameters that have to be monitored and controlled. The essential improvement in the new system is the collection of all signals and control functions by a PC, which makes any change in configuration easy. A PC with embedded controllers was selected as the most cost-effective system hardware. The LabView platform provides faster program development and a convenient user interface. The system provides independent digital control of each parameter and records process data. The system is flexible and has the advantage of further extensibility. (authors)

  7. Observed versus simulated meteorological data: a comparison study for Centro Experimental Aramar

    International Nuclear Information System (INIS)

    Beu, Cássia Maria Leme

    2017-01-01

    Centro Experimental Aramar (CEA) is a campus of the Centro Tecnológico da Marinha em São Paulo (CTMSP), responsible for carrying out the Brazilian Navy's Nuclear Program. As a nuclear facility, the atmosphere is one of the environmental parameters that must be monitored, both in normal operation and in accidental situations. Atmospheric dispersion models are powerful tools in this direction, but their results strongly depend on the quality of the input data. Therefore, good information must be provided to the dispersion models, and data from weather forecast models can be suitable for this role. The purpose of this work is to evaluate the performance of regional weather forecast models for the CEA site. CEA is located in a complex terrain area, which can add complexity to the air fluxes and reduce forecast accuracy; this may be critical during an accidental situation. For this work, two regional atmospheric models were chosen: BRAMS and Eta. These models have been intensively improved by Brazilian researchers for South American characteristics, are free software, and offer the possibility to run locally with higher resolution than is currently available from research organizations. Basic variables (temperature, relative humidity, and wind speed and direction) from 48-hour simulations with Eta and BRAMS were compared with CEA observed data. Results from this work will guide the next steps toward running atmospheric dispersion models on an operational basis for the CEA site. (author)

  8. Plutonium chemistry: a synthesis of experimental data and a quantitative model for plutonium oxide solubility

    International Nuclear Information System (INIS)

    Haschke, J.M.; Oversby, V.M.

    2002-01-01

    The chemistry of plutonium is important for assessing the potential behavior of radioactive waste under conditions of geologic disposal. This paper reviews experimental data on the dissolution of plutonium oxide solids, describes a hybrid kinetic-equilibrium model for predicting steady-state Pu concentrations, and compares laboratory results with predicted Pu concentrations and oxidation-state distributions. The model is based on oxidation of PuO2 by water to produce PuO2+x, an oxide that can release Pu(V) to solution. Kinetic relationships between formation of PuO2+x, dissolution of Pu(V), disproportionation of Pu(V) to Pu(IV) and Pu(VI), and reduction of Pu(VI) are given and used in model calculations. Data from tests of pyrochemical salt wastes in brines are discussed and interpreted using the conceptual model. Essential data for quantitative modeling at conditions relevant to nuclear waste repositories are identified, and laboratory experiments to determine rate constants for use in the model are discussed

  9. Tau-U: A Quantitative Approach for Analysis of Single-Case Experimental Data in Aphasia.

    Science.gov (United States)

    Lee, Jaime B; Cherney, Leora R

    2018-03-01

    Tau-U is a quantitative approach for analyzing single-case experimental design (SCED) data. It combines nonoverlap between phases with intervention-phase trend and can correct for a baseline trend (Parker, Vannest, & Davis, 2011). We demonstrate the utility of Tau-U by comparing it with the standardized mean difference approach (Busk & Serlin, 1992) that is widely reported within the aphasia SCED literature. Repeated writing measures from 3 participants with chronic aphasia who received computer-based writing treatment are analyzed visually and quantitatively using both Tau-U and the standardized mean difference approach. Visual analysis alone was insufficient for determining an effect between the intervention and writing improvement. The standardized mean difference yielded effect sizes ranging from 4.18 to 26.72 for trained items and 1.25 to 3.20 for untrained items. Tau-U yielded significant results for data from 2 of 3 participants. Tau-U has the unique advantage of allowing for the correction of an undesirable baseline trend. Although further study is needed, Tau-U shows promise as a quantitative approach to augment visual analysis of SCED data in aphasia.
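
    The core of the Tau-U family is a pairwise nonoverlap count between phases. A minimal sketch of the basic A-versus-B Tau component, without the baseline-trend correction that full Tau-U adds; the data values are invented for illustration, not taken from the study:

```python
def tau_ab(baseline, intervention):
    """Basic Tau: pairwise nonoverlap between phase A and phase B.

    Tau = (improving pairs - deteriorating pairs) / (nA * nB).
    Full Tau-U additionally subtracts baseline-trend pairs; that
    correction is omitted here for brevity.
    """
    pos = sum(1 for a in baseline for b in intervention if b > a)
    neg = sum(1 for a in baseline for b in intervention if b < a)
    return (pos - neg) / (len(baseline) * len(intervention))

# Hypothetical writing-accuracy scores, not the study's data:
print(tau_ab([2, 3, 2], [5, 6, 7]))  # 1.0 (complete nonoverlap)
```

    A Tau of 1.0 means every intervention point exceeds every baseline point; values near 0 indicate heavy overlap between phases.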

  10. Observed versus simulated meteorological data: a comparison study for Centro Experimental Aramar

    Energy Technology Data Exchange (ETDEWEB)

    Beu, Cássia Maria Leme, E-mail: cassia.beu@marinha.mil.br [Centro Tecnológico da Marinha em São Paulo (CEA/CTMSP), Iperó, SP (Brazil). Centro Experimental Aramar

    2017-07-01

    Centro Experimental Aramar (CEA) is a campus of the Centro Tecnológico da Marinha em São Paulo (CTMSP), responsible for carrying out the Brazilian Navy's Nuclear Program. As a nuclear facility, the atmosphere is one of the environmental parameters that must be monitored, both for normal operation and for accident situations. Atmospheric dispersion models are powerful tools for this purpose, but their results depend strongly on the quality of the input data. Therefore, good information must be provided to the dispersion models, and data from weather forecast models can be suitable for this role. The purpose of this work is to evaluate the performance of regional weather forecast models for the CEA site. CEA is located in a complex terrain area, which can add complexity to the air flows and reduce forecast accuracy, which may be critical during an accident. For this work, two regional atmospheric models were chosen: BRAMS and Eta. These models have been intensively improved by Brazilian researchers for South American characteristics, are free software, and offer the possibility of running locally with higher resolution than is currently available from research organizations. Basic variables (temperature, relative humidity, and wind speed and direction) from 48-hour Eta and BRAMS simulations were compared with CEA observed data. The results of this work will guide the next steps toward running atmospheric dispersion models on an operational basis for the CEA site. (author)
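
    A model-versus-observation comparison of this kind typically reduces to paired error statistics such as mean bias and root-mean-square error. A minimal sketch with invented temperature values, not CEA observations or Eta/BRAMS output:

```python
import math

def bias_rmse(forecast, observed):
    """Mean bias (forecast minus observed) and root-mean-square error."""
    errors = [f - o for f, o in zip(forecast, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Hypothetical 2-m temperatures in deg C, purely illustrative:
fc = [21.0, 22.5, 24.0, 23.0]
obs = [20.0, 22.0, 25.0, 23.5]
b, r = bias_rmse(fc, obs)
```

    Bias exposes a systematic offset of the model, while RMSE captures the typical magnitude of the forecast error regardless of sign.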

  11. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate, using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of the 99mTc-DTPA organ imaging agent, based on uptake data from 19 subjects, is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry, including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
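
    The procedure described can be sketched in a few lines: resample the measurements with replacement many times, recompute the statistic on each replicate, and read the confidence limits from the percentiles of the replicates. The data values below are invented, not the study's uptake measurements:

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical uptake measurements (arbitrary units), purely illustrative:
uptake = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 3.4, 2.7, 3.3, 3.6]
lo, hi = bootstrap_ci(uptake, mean)   # approximate 95% CI for the mean
```

    Because `stat` is an arbitrary function, the same routine yields intervals for medians, ratios, or any derived dosimetric quantity without new derivations.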

  12. Examining dynamic interactions among experimental factors influencing hydrologic data assimilation with the ensemble Kalman filter

    Science.gov (United States)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.

    2017-11-01

    The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
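
    For readers unfamiliar with the EnKF machinery being tuned here, a toy analysis step for a scalar state illustrates the role of the observation perturbation: each ensemble member is updated against its own noisy copy of the observation, with the gain set by the ratio of ensemble spread to observation-error variance. All numbers are illustrative, not from the Xiangxi River application:

```python
import random

def enkf_update(ensemble, obs, obs_err_sd, seed=0):
    """One EnKF analysis step for a scalar state observed directly.

    Each member is nudged toward its own *perturbed* copy of the
    observation; the gain K = P / (P + R) weights forecast spread P
    against observation-error variance R.
    """
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)   # forecast variance
    gain = p / (p + obs_err_sd ** 2)
    return [x + gain * (obs + rng.gauss(0.0, obs_err_sd) - x)
            for x in ensemble]

prior = [8.0, 10.0, 12.0, 9.0, 11.0]   # hypothetical streamflow ensemble
posterior = enkf_update(prior, obs=15.0, obs_err_sd=1.0)
```

    The magnitude of the perturbation (`obs_err_sd`) and the ensemble size are exactly the experimental factors whose interactions the RDAS design examines.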

  13. Statistical Multipath Model Based on Experimental GNSS Data in Static Urban Canyon Environment

    Directory of Open Access Journals (Sweden)

    Yuze Wang

    2018-04-01

    Full Text Available A deep understanding of multipath characteristics is essential for designing signal simulators and receivers in global navigation satellite system applications. As new constellations are deployed and more applications appear in the urban environment, the statistical multipath models of navigation signals need further study. In this paper, we present statistical distribution models of multipath time delay, multipath power attenuation, and multipath fading frequency based on experimental data collected in an urban canyon environment. The raw multipath characteristics are obtained by processing real navigation signals to study their statistical distributions. Fitting the statistical data shows that the probability distribution of the time delay follows a gamma distribution, which is related to the waiting time of Poisson-distributed events. The fading frequency follows an exponential distribution, and the mean multipath power attenuation decreases linearly with increasing time delay. In addition, the detailed statistical characteristics for satellites at different elevations and in different orbits are studied, and the parameters of each distribution are quite different. The research results give useful guidance for navigation simulator and receiver designers.
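
    Fitting the exponential law reported for the fading frequency amounts to a one-parameter maximum-likelihood estimate, since the MLE of an exponential rate is simply the reciprocal of the sample mean. A self-contained sketch on synthetic data; the rate value is assumed for illustration, not taken from the paper:

```python
import random

# Synthetic multipath fading-frequency samples drawn from an exponential
# law, followed by maximum-likelihood recovery of the rate parameter.
rng = random.Random(42)
true_rate = 0.5                                    # assumed, illustrative
samples = [rng.expovariate(true_rate) for _ in range(20000)]
mle_rate = len(samples) / sum(samples)             # MLE: 1 / sample mean
```

    The same one-line estimator, applied per elevation bin or per orbit class, would reproduce the kind of parameter tables the paper reports.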

  14. Integral analyses of fission product retention at mitigated thermally-induced SGTR using ARTIST experimental data

    International Nuclear Information System (INIS)

    Rýdl, Adolf; Lind, Terttaliisa; Birchley, Jonathan

    2016-01-01

    Highlights: • Source term analyses of a mitigated thermally-induced SGTR scenario in a PWR performed. • Experimental ARTIST program results on aerosol scrubbing efficiency used in analyses. • Results demonstrate enhanced aerosol retention in a flooded steam generator. • High aerosol retention cannot be predicted by current theoretical scrubbing models. - Abstract: Integral source-term analyses are performed using MELCOR for a PWR Station Blackout (SBO) sequence leading to induced steam generator tube rupture (SGTR). In the absence of any mitigation measures, such a sequence can result in a containment bypass in which radioactive materials are released directly to the environment. In some SGTR scenarios, flooding the faulted SG secondary side with water can mitigate both the accident escalation and the release of aerosol-borne and volatile radioactive materials. Data on the efficiency of aerosol scrubbing in an SG tube bundle were obtained in the international ARTIST project. In this paper, ARTIST data are used directly in parametric MELCOR analyses of a mitigated SGTR sequence to provide more realistic estimates of the releases to the environment in this type of scenario. Comparison is made with predictions using the default scrubbing model in MELCOR, as representative of the aerosol scrubbing models in current integral codes. Specifically, simulations are performed for an unmitigated sequence and two cases in which the SG secondary was refilled at different times after the tube rupture. The results, reflecting the experimental observations from ARTIST, demonstrate enhanced aerosol retention in the highly turbulent two-phase flow conditions caused by the complex geometry of the SG secondary side. This effect is not captured by any of the models currently available. The underlying physics remains only partly understood, indicating the need for further studies to support a more mechanistic treatment of the retention process.
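
    Scrubbing efficiency in such analyses is commonly quoted as a decontamination factor (DF), the ratio of aerosol mass entering the pool to the mass escaping it. A trivial sketch of the standard conversion between DF and retained fraction; the numbers are illustrative, not ARTIST results:

```python
def retention_fraction(mass_in, mass_out):
    """Retained fraction implied by a decontamination factor DF = in/out:
    retention = 1 - 1/DF = 1 - out/in."""
    return 1.0 - mass_out / mass_in

# Illustrative masses only: DF = 50 corresponds to 98% retention.
print(retention_fraction(100.0, 2.0))  # 0.98
```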

  15. Comparison of various structural damage tracking techniques with unknown excitations based on experimental data

    Science.gov (United States)

    Huang, Hongwei; Yang, Jann N.; Zhou, Li

    2009-03-01

    An early detection of structural damages is critical for the decision making of repair and replacement maintenance in order to guarantee a specified structural reliability. Consequently, the structural damage detection, based on vibration data measured from the structural health monitoring (SHM) system, has received considerable attention recently. The traditional time-domain analysis techniques, such as the least square estimation (LSE) method and the extended Kalman filter (EKF) approach, require that all the external excitations (inputs) be available, which may not be the case for some SHM systems. Recently, these two approaches have been extended to cover the general case where some of the external excitations (inputs) are not measured, referred to as the LSE with unknown inputs (LSE-UI) and the EKF with unknown inputs (EKF-UI). Also, new analysis methods, referred to as the sequential non-linear least-square estimation with unknown inputs and unknown outputs (SNLSE-UI-UO) and the quadratic sum-square error with unknown inputs (QSSE-UI), have been proposed for the damage tracking of structures when some of the acceleration responses are not measured and the external excitations are not available. In this paper, these newly proposed analysis methods will be compared in terms of accuracy, convergence and efficiency, for damage identification of structures based on experimental data obtained through a series of experimental tests using a small-scale 3-story building model with white noise excitation. The capability of the LSE-UI, EKF-UI, SNLSE-UI-UO and QSSE-UI approaches in tracking the structural damages will be demonstrated.
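
    As background to the methods being compared, the plain LSE approach (with known inputs) reduces to a linear regression of the measured force balance. Below is a toy single-degree-of-freedom sketch with synthetic, noise-free data; the cited papers treat multi-DOF systems and unknown inputs/outputs, which this sketch does not attempt, and all numbers are invented:

```python
import random

def lse_kc(x, v, a, f, m):
    """Least-squares estimate of stiffness k and damping c from samples of
    the force balance m*a + c*v + k*x = f, i.e. k*x + c*v = f - m*a."""
    y = [fi - m * ai for fi, ai in zip(f, a)]
    sxx = sum(xi * xi for xi in x)
    svv = sum(vi * vi for vi in v)
    sxv = sum(xi * vi for xi, vi in zip(x, v))
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    svy = sum(vi * yi for vi, yi in zip(v, y))
    det = sxx * svv - sxv * sxv          # 2x2 normal-equations determinant
    return (svv * sxy - sxv * svy) / det, (sxx * svy - sxv * sxy) / det

# Synthetic, noise-free response samples of a single-DOF model:
rng = random.Random(0)
m_true, k_true, c_true = 1.0, 400.0, 2.0
x = [rng.uniform(-1, 1) for _ in range(200)]       # displacement
v = [rng.uniform(-5, 5) for _ in range(200)]       # velocity
a = [rng.uniform(-50, 50) for _ in range(200)]     # acceleration
f = [m_true * ai + c_true * vi + k_true * xi for xi, vi, ai in zip(x, v, a)]
k_est, c_est = lse_kc(x, v, a, f, m_true)
```

    Damage tracking then amounts to watching the identified stiffness drop over successive data windows; the UI/UO variants replace the measured `f` with estimated unknown inputs.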

  16. Spontaneous burp phenomenon: analysis of experimental data and approach to theoretical interpretation

    International Nuclear Information System (INIS)

    Shabalin, E.

    2004-01-01

    Cases of spontaneous release of energy (''burp'') have been observed numerously in solid methane, water ice and other hydrogenous compounds irradiated in fast neutron fields with absorbed doses of 2-10 MGy. The nature of this phenomenon is that the accumulation of radicals, combined with the thermal instability of a sample at a sufficient radical concentration, culminates in an autocatalytic recombination reaction. The most odious feature of this phenomenon is that the irradiation time before a burp occurs varies significantly even under identical irradiation conditions, and the amount of released energy varies accordingly. For example, in the URAM-2 experiments with water ice, the irradiation time before a spontaneous burp varied from 5 hours to 11 hours for identical samples, and up to 20 hours for the smallest sample. Another feature of the phenomenon is that regularities for the occurrence of spontaneous energy release (that is, its dependence on temperature, sample size and absorbed dose) cannot be extracted explicitly from the experimental data when applying the known relations for the critical concentration of radicals, namely: Semenov's and Frank-Kamenetskii's conditions for the thermal instability of a sample, Jackson's relation for the chain recombination of uniformly distributed radicals, the critical condition based on the acceleration of recombination in regions of micro-cracks, and, finally, the critical condition for irregularly distributed radicals proposed by the author of this article. None of these theories predicts the random character of a burp on a macro-scale basis. Thus, notwithstanding the fair amount of experimental data on spontaneous burps, it is still impossible to derive a well-balanced theory of this phenomenon. One attempt to understand the reason for this strange behavior is a probabilistic, cluster model of spontaneous energy release, developed in chapter 4 of this paper. 
Spontaneous burps observed during URAM-2 project and in solid methane
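
    For orientation, the Frank-Kamenetskii thermal-instability condition mentioned above is usually quoted in the following textbook form for a body of characteristic half-thickness a (this is the generic criterion, not a relation derived in the paper):

```latex
\delta \;=\; \frac{Q\,E\,a^{2}\,c\,k_{0}}{\lambda\,R\,T_{0}^{2}}
\exp\!\left(-\frac{E}{R\,T_{0}}\right),
\qquad \delta > \delta_{\mathrm{cr}} \;\Rightarrow\; \text{thermal runaway},
```

    where Q is the heat released per recombination event, c the radical concentration, k0 the rate prefactor, E the activation energy, λ the thermal conductivity and T0 the ambient temperature; δ_cr ≈ 0.88, 2.00 and 3.32 for the slab, infinite cylinder and sphere, respectively. The point of the abstract is precisely that deterministic criteria of this type fail to reproduce the observed scatter in burp onset times.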

  17. Correlation between the meteorological data acquisition systems of the Centro Experimental ARAMAR

    International Nuclear Information System (INIS)

    Oliveira, Rando M.; Beu, Cássia M.L.

    2017-01-01

    Centro Experimental ARAMAR (CEA) is a Brazilian Navy Technological Center located in the rural area of Iperó (São Paulo State), about 10 km from the nearest urban area. One of the most important activities at CEA is nuclear fuel cycle research, as well as the development of a small-scale pressurized water reactor (PWR) land-based prototype. The Laboratório Radioecológico (LARE) is responsible for the meteorological observation program, which relies on an automatic data collection system. The following variables are continuously measured: pressure, precipitation, wind speed, wind direction, temperature and relative humidity. The obtained data are refined and used in the annual reports to the Comissão Nacional de Energia Nuclear (CNEN), and are an important input for atmospheric dispersion models. Due to the construction of the Laboratório de Geração Nucleoelétrica (LABGENE), it will be necessary to change the location of the towers and meteorological sensors. Thus, since 2014, a new set of towers and sensors (Torre Nova) has been in operation. The new location is 900 m from the old set (Torre Velha). Therefore, CEA currently has two meteorological data acquisition systems that have been operating concurrently for approximately three years. The present work aims to compare the meteorological data of both systems in order to verify their agreement. The meteorological time series of both systems were submitted to a statistical analysis to evaluate their correlation. The results of this work confirm the compatibility of the two systems, showing that the Torre Velha can be deactivated without impairment to the meteorological time series. (author)

  18. Correlation between the meteorological data acquisition systems of the Centro Experimental ARAMAR

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Rando M.; Beu, Cássia M.L., E-mail: rando.oliveira@marinha.mil.br, E-mail: cassia.beu@marinha.mil.br [Centro Tecnológico da Marinha em São Paulo (CEA/CTMSP), Iperó, SP (Brazil). Centro Experimental ARAMAR

    2017-07-01

    Centro Experimental ARAMAR (CEA) is a Brazilian Navy Technological Center located in the rural area of Iperó (São Paulo State), about 10 km from the nearest urban area. One of the most important activities at CEA is nuclear fuel cycle research, as well as the development of a small-scale pressurized water reactor (PWR) land-based prototype. The Laboratório Radioecológico (LARE) is responsible for the meteorological observation program, which relies on an automatic data collection system. The following variables are continuously measured: pressure, precipitation, wind speed, wind direction, temperature and relative humidity. The obtained data are refined and used in the annual reports to the Comissão Nacional de Energia Nuclear (CNEN), and are an important input for atmospheric dispersion models. Due to the construction of the Laboratório de Geração Nucleoelétrica (LABGENE), it will be necessary to change the location of the towers and meteorological sensors. Thus, since 2014, a new set of towers and sensors (Torre Nova) has been in operation. The new location is 900 m from the old set (Torre Velha). Therefore, CEA currently has two meteorological data acquisition systems that have been operating concurrently for approximately three years. The present work aims to compare the meteorological data of both systems in order to verify their agreement. The meteorological time series of both systems were submitted to a statistical analysis to evaluate their correlation. The results of this work confirm the compatibility of the two systems, showing that the Torre Velha can be deactivated without impairment to the meteorological time series. (author)
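
    The correlation analysis described can be sketched with a plain Pearson coefficient on paired samples from the two towers. The values below are invented for illustration, not CEA measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical hourly temperatures (deg C) from two nearby towers:
torre_velha = [18.2, 19.1, 21.4, 23.0, 24.5, 23.8]
torre_nova  = [18.0, 19.3, 21.1, 23.2, 24.9, 23.5]
r = pearson_r(torre_velha, torre_nova)
```

    An r close to 1 for each variable is the quantitative basis for concluding that one tower can be deactivated without degrading the series.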

  19. Experimental Energy Consumption of Frame Slotted ALOHA and Distributed Queuing for Data Collection Scenarios

    Directory of Open Access Journals (Sweden)

    Pere Tuset-Peiro

    2014-07-01

    Full Text Available Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori, because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot easily be anticipated, leading to additional packet collisions and higher energy consumption due to retransmissions. In contrast, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. Such a schedule ensures that there are no packet collisions during data transmission, leading to an energy consumption reduction of more than 10% compared to FSA.

  20. Experimental energy consumption of Frame Slotted ALOHA and Distributed Queuing for data collection scenarios.

    Science.gov (United States)

    Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier

    2014-07-24

    Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori, because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot easily be anticipated, leading to additional packet collisions and higher energy consumption due to retransmissions. In contrast, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. Such a schedule ensures that there are no packet collisions during data transmission, leading to an energy consumption reduction of more than 10% compared to FSA.
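
    The state-machine energy model described boils down to a weighted sum: expected visits to each state times the energy cost per visit. A minimal sketch with invented state names and numbers (chosen so that DQ's collision-free schedule beats FSA's retransmissions by more than 10%, mirroring the reported trend, but in no way the paper's measurements):

```python
def energy_per_packet(visits, energy_mj):
    """Expected energy: sum over states of (visits) x (energy per visit)."""
    return sum(visits[s] * energy_mj[s] for s in visits)

# All state names and numbers below are illustrative assumptions.
energy_mj = {"sleep": 0.01, "rx": 0.50, "tx": 0.90}   # mJ per visit
fsa = {"sleep": 10, "rx": 2.0, "tx": 2.0}   # extra tx visits: collisions
dq = {"sleep": 10, "rx": 3.0, "tx": 1.0}    # more rx, but collision-free tx

e_fsa = energy_per_packet(fsa, energy_mj)   # 2.9 mJ
e_dq = energy_per_packet(dq, energy_mj)     # 2.5 mJ
```

    Plugging measured per-state costs and visit counts into this sum is exactly how the per-packet figures for FSA and DQ are compared.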

  1. Evaluation of CHF experimental data for non-square lattice 7-rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Hyun; Yoo, Y. J.; Kim, K. K.; Zee, S. Q

    2001-01-01

    A series of CHF experiments was conducted on 7-rod hexagonal test bundles in order to investigate the CHF characteristics of self-sustained square finned (SSF) rod bundles. The experiments were performed in the freon loop and water loop located at IPPE in Russia, and 609 freon-12 data points and 229 water data points were obtained from 7 kinds of test bundles classified by the combination of heated length and axial/radial power distributions. In an evaluation of four representative CHF correlations, the EPRI-1 correlation shows good prediction capability for SSF test bundles. The inlet-parameter CHF correlation suggested by IPPE yields a mean and standard deviation of P/M for uniformly heated test bundles of 1.002 and 0.049, respectively. In spite of its excellent accuracy, the correlation has a discontinuity at the boundary between the low-velocity and high-velocity conditions. KAERI's inlet-parameter correlation eliminates this defect by introducing a complete evaporation model at the low-velocity condition, and yields a mean and standard deviation of P/M of 0.095 and 0.062 for 496 uniformly heated data points, respectively. The mean/standard deviation of the local-parameter CHF correlations suggested by IPPE and KAERI are evaluated as 1.023/0.178 and 1.002/0.158, respectively. The inlet-parameter correlation developed from uniformly heated test bundles tends to under-predict CHF by about 3% for axially non-uniformly heated test bundles. On the other hand, the local-parameter correlation shows large scattering of P/M and requires re-optimization for non-uniform axial power distributions. Analysis of the experimental data reveals that the correction model for axial power shapes suggested by IPPE is applicable to the inlet-parameter correlations. For the test bundle with a non-uniform radial power distribution, physically unexpected results were obtained at some experimental conditions. In addition
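
    The P/M (predicted-to-measured) statistics quoted throughout are straightforward to compute: form the ratio for each data point, then take the mean and sample standard deviation. A minimal sketch with invented CHF values, not the IPPE/KAERI data:

```python
import math

def pm_stats(predicted, measured):
    """Mean and sample standard deviation of the P/M ratios used to judge
    a CHF correlation's accuracy (mean near 1, small spread is best)."""
    pm = [p / m for p, m in zip(predicted, measured)]
    mean = sum(pm) / len(pm)
    sd = math.sqrt(sum((x - mean) ** 2 for x in pm) / (len(pm) - 1))
    return mean, sd

# Hypothetical CHF values in kW/m^2, purely illustrative:
pred = [1020.0, 980.0, 1510.0, 730.0]
meas = [1000.0, 1000.0, 1500.0, 750.0]
m, s = pm_stats(pred, meas)   # mean P/M = 0.995
```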

  2. An Experimental Seismic Data and Parameter Exchange System for Interim NEAMTWS

    Science.gov (United States)

    Hanka, W.; Hoffmann, T.; Weber, B.; Heinloo, A.; Hoffmann, M.; Müller-Wrana, T.; Saul, J.

    2009-04-01

    In 2008 GFZ Potsdam started to operate its global earthquake monitoring system as an experimental seismic background data centre for the interim NEAMTWS (NE Atlantic and Mediterranean Tsunami Warning System). The SeisComP3 (SC3) software, developed within the GITEWS (German Indian Ocean Tsunami Early Warning System) project, was extended to test the export and import of individual processing results within a cluster of SC3 systems. The initiated NEAMTWS SC3 cluster presently consists of the 24/7 seismic services at IMP, IGN, LDG/EMSC and KOERI, whereas INGV and NOA are still pending. The GFZ virtual real-time seismic network (GEOFON Extended Virtual Network - GEVN) was substantially extended by many stations from Western European countries, optimizing the station distribution for NEAMTWS purposes. To amend the public seismic network (VEBSN - Virtual European Broadband Seismic Network), some attached centres provided additional private stations for NEAMTWS usage. In parallel to the data collection by Internet, the GFZ VSAT hub for the secured data collection of the EuroMED GEOFON and NEAMTWS backbone network stations became operational and the first data links were established. In 2008 the experimental system could already prove its performance, since a number of relevant earthquakes happened in the NEAMTWS area. The results are very promising in terms of speed, as the automatic alerts (reliable solutions based on a minimum of 25 stations and disseminated by emails and SMS) were issued within 2 1/2 to 4 minutes for Greece and 5 minutes for Iceland. They are also promising in terms of accuracy, since the epicenter coordinates, depth and magnitude estimates were sufficiently accurate from the very beginning, usually did not differ substantially from the final solutions, and provided a good starting point for the operations of the interim NEAMTWS. 
However, although an automatic seismic system is a good first step, 24/7 manned RTWCs are mandatory for regular manual verification

  3. Experimental investigation of auroral generator regions with conjugate Cluster and FAST data

    Directory of Open Access Journals (Sweden)

    O. Marghitu

    2006-03-01

    Full Text Available Here and in the companion paper, Hamrin et al. (2006), we present experimental evidence for the crossing of auroral generator regions, based on conjugate Cluster and FAST data. To our knowledge, this is the first investigation that concentrates on the evaluation of the power density, E·J, in auroral generator regions, by using in-situ measurements. The Cluster data we discuss were collected within the Plasma Sheet Boundary Layer (PSBL), during a quiet magnetospheric interval, as judged from the geophysical indices, and several minutes before the onset of a small substorm, as indicated by the FAST data. Even at quiet times, the PSBL is an active location: electric fields are associated with plasma motion, caused by the dynamics of the plasma-sheet/lobe interface, while electrical currents are induced by pressure gradients. In the example we show, these ingredients do indeed sustain the conversion of mechanical energy into electromagnetic energy, as proved by the negative power density, E·J<0. The plasma characteristics in the vicinity of the generator regions indicate a complicated 3-D wavy structure of the plasma sheet boundary. Consistent with this structure, we suggest that at least part of the generated electromagnetic energy is carried away by Alfvén waves, to be dissipated in the ionosphere, near the polar cap boundary. Such a scenario is supported by the FAST data, which show energetic electron precipitation conjugated with the generator regions crossed by Cluster. A careful examination of the conjunction timing contributes to the validation of the generator signatures.

  4. Experimental investigation of auroral generator regions with conjugate Cluster and FAST data

    Directory of Open Access Journals (Sweden)

    O. Marghitu

    2006-03-01

    Full Text Available Here and in the companion paper, Hamrin et al. (2006), we present experimental evidence for the crossing of auroral generator regions, based on conjugate Cluster and FAST data. To our knowledge, this is the first investigation that concentrates on the evaluation of the power density, E·J, in auroral generator regions, by using in-situ measurements. The Cluster data we discuss were collected within the Plasma Sheet Boundary Layer (PSBL), during a quiet magnetospheric interval, as judged from the geophysical indices, and several minutes before the onset of a small substorm, as indicated by the FAST data. Even at quiet times, the PSBL is an active location: electric fields are associated with plasma motion, caused by the dynamics of the plasma-sheet/lobe interface, while electrical currents are induced by pressure gradients. In the example we show, these ingredients do indeed sustain the conversion of mechanical energy into electromagnetic energy, as proved by the negative power density, E·J<0. The plasma characteristics in the vicinity of the generator regions indicate a complicated 3-D wavy structure of the plasma sheet boundary. Consistent with this structure, we suggest that at least part of the generated electromagnetic energy is carried away by Alfvén waves, to be dissipated in the ionosphere, near the polar cap boundary. Such a scenario is supported by the FAST data, which show energetic electron precipitation conjugated with the generator regions crossed by Cluster. A careful examination of the conjunction timing contributes to the validation of the generator signatures.
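
    The sign test at the heart of the analysis is just the dot product of the measured electric field and current density. A trivial sketch with invented vectors, not Cluster measurements:

```python
def power_density(e_field, current_density):
    """E . J: negative in a generator region (mechanical to electromagnetic
    energy conversion), positive where electromagnetic energy is dissipated."""
    return sum(e * j for e, j in zip(e_field, current_density))

# Invented field (V/m) and current density (A/m^2) vectors:
ej = power_density([-2.0e-3, 1.0e-3, 0.0], [4.0e-9, 1.0e-9, 0.0])  # < 0
```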

  5. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: index of experimental data

    International Nuclear Information System (INIS)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-01-01

    Indexes to the neutron-induced interaction data in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has two arrangements: isotope (ZA) order and reaction-number order

  6. Integrated system for production of neutronics and photonics calculational constants. Neutron-induced interactions: index of experimental data

    Energy Technology Data Exchange (ETDEWEB)

    MacGregor, M.H.; Cullen, D.E.; Howerton, R.J.; Perkins, S.T.

    1976-07-04

    Indexes to the neutron-induced interaction data in the Experimental Cross Section Information Library (ECSIL) as of July 4, 1976 are tabulated. The tabulation has two arrangements: isotope (ZA) order and reaction-number order.

  7. Experimental data on compressive strength and durability of sulfur concrete modified by styrene and bitumen.

    Science.gov (United States)

    Dehestani, M; Teimortashlu, E; Molaei, M; Ghomian, M; Firoozi, S; Aghili, S

    2017-08-01

    In this data article, experimental data on the compressive strength and the durability of styrene- and bitumen-modified sulfur concrete against acidic water and ignition are presented. The percentage of sulfur cement and the gradation of the aggregates used are according to ACI 548.2R-93 and ASTM 3515, respectively. For the styrene-modified sulfur concrete, different percentages of styrene were used; for the bitumen-modified sulfur concrete, different percentages of bitumen and the emulsifying agent (Triton X-100) were utilized. From each batch, three 10×10×10 cm cubic samples were cast. One of the samples was used for the compressive strength test on the second day after casting, and one on the twenty-eighth day. These two samples were then put under the high-pressure flame of burning liquid gas for thirty seconds and their ignition resistance was observed. The third sample was immersed in acidic water for twenty-eight days and then dried at ambient temperature. After drying, its compressive strength was evaluated.
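
    For cubic samples like those described, compressive strength is simply the peak load divided by the loaded face area. A minimal sketch; the load value is invented, not from the data article:

```python
def cube_strength_mpa(max_load_kn, side_mm=100.0):
    """Compressive strength of a cubic sample: peak load over loaded area.
    kN per mm^2 equals GPa, so multiply by 1000 to express the result in MPa."""
    return max_load_kn * 1000.0 / (side_mm * side_mm)

# Hypothetical peak load for a 10x10x10 cm cube:
print(cube_strength_mpa(350.0))  # 35.0 (MPa)
```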

  8. Performance Evaluation of Large Aperture 'Polished Panel' Optical Receivers Based on Experimental Data

    Science.gov (United States)

    Vilnrotter, Victor

    2013-01-01

    Recent interest in hybrid RF/Optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, and to compare that performance with the ultimate quantum bound on detection for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.

  9. Investigation of intermittency in simulated and experimental turbulence data by wavelet analysis

    International Nuclear Information System (INIS)

    Mahdizadeh, N.; Ramisch, M.; Stroth, U.; Lechte, C.; Scott, B.D.

    2004-01-01

    Turbulent transport in magnetized plasmas has an intermittent nature. Peaked probability density functions and a 1/frequency decay of the power spectra have been interpreted as signs of self-organized criticality generated, similar to a sand pile, by the critical gradients of ion-temperature-gradient (ITG) or electron-temperature-gradient (ETG) driven instabilities. In order to investigate the degree of intermittency in toroidally confined plasmas in the absence of critical pressure or temperature gradients, data from the drift-Alfvén-wave turbulence code DALF3 [B. Scott, Plasma Phys. Controlled Fusion 39, 1635 (1997)], running with a fixed background pressure gradient, and from a weakly driven low-temperature plasma are analyzed. The intermittency is studied on different temporal scales, which are separated by a wavelet transform. Simulated and experimental data reproduce the results on intermittent transport found in fusion plasmas. It can therefore be expected that in fusion plasmas, too, a substantial fraction of the bursty nature of turbulent transport is not related to avalanches caused by a critical gradient as generated by ITG or ETG turbulence.
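    As a generic illustration of the scale-separated intermittency analysis the abstract describes (this is not the authors' DALF3 analysis; the Haar transform, the surrogate signals and the random seed are all assumptions for the sketch), one can decompose a time series into wavelet scales and quantify intermittency through the flatness of the detail coefficients, which equals 3 for Gaussian statistics and grows for bursty signals:

```python
import numpy as np

def haar_details(signal, levels):
    """Haar wavelet decomposition: list of detail-coefficient arrays, one per scale."""
    s = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        n = (len(s) // 2) * 2
        pairs = s[:n].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))  # detail
        s = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)              # approximation
    return details

def flatness(c):
    """Fourth moment over squared second moment; equals 3 for Gaussian data."""
    c = c - c.mean()
    return float(np.mean(c**4) / np.mean(c**2)**2)

rng = np.random.default_rng(0)
gauss = rng.normal(size=4096)                         # non-intermittent reference
bursty = gauss * rng.lognormal(sigma=1.0, size=4096)  # heavy-tailed surrogate
f_g = flatness(haar_details(gauss, 1)[0])   # close to 3 (Gaussian)
f_b = flatness(haar_details(bursty, 1)[0])  # markedly larger: intermittent
```

    Comparing the flatness across several decomposition levels, as done with the simulated and experimental data in the paper, shows on which temporal scales the burstiness resides.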

  10. The modelling of condensation in horizontal tubes and the comparison with experimental data

    Directory of Open Access Journals (Sweden)

    Bryk Rafał

    2017-01-01

    The condensation in horizontal tubes plays an important role in determining the operation mode of passive safety systems of modern nuclear power plants. In this paper, two different approaches for modelling this phenomenon are compared and verified against experimental data. The first approach is based on the flow regime map developed by Tandon. Depending on the regime, the heat transfer coefficient is calculated according to the corresponding semi-empirical correlation. The second approach uses a general, fully empirical correlation proposed by Shah. Both models are developed with utilization of the object-oriented, equation-based Modelica language and the open-source OpenModelica environment. The results are compared with data obtained during a large-scale integral test, simulating a Loss of Coolant Accident scenario, performed at the dedicated Integral Test Facility Karlstein (INKA), which was built at the Components Testing Department of AREVA in Karlstein, Germany. The INKA facility was designed to test the performance of the passive safety systems of KERENA, the new AREVA boiling water reactor design. INKA represents the KERENA containment with a volume scaling of 1:24. Component heights and elevations above ground are at full scale. The comparison of simulation results shows a good agreement.
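    For orientation, one commonly cited statement of Shah's fully empirical condensation correlation (the 1979 form) is a two-phase multiplier applied to a Dittus-Boelter liquid-only coefficient. The sketch below assumes that form; the function name and every numerical input are invented for illustration and are not INKA test conditions:

```python
def shah_htc(G, D, x, k_l, mu_l, pr_l, p_red):
    """Shah (1979) condensation heat transfer coefficient [W/(m^2 K)].

    G: total mass flux [kg/(m^2 s)], D: tube inner diameter [m],
    x: vapour quality [-], k_l: liquid thermal conductivity [W/(m K)],
    mu_l: liquid dynamic viscosity [Pa s], pr_l: liquid Prandtl number [-],
    p_red: reduced pressure p/p_crit [-].
    """
    re_l = G * D / mu_l                            # Reynolds number, all-liquid flow
    h_l = 0.023 * re_l**0.8 * pr_l**0.4 * k_l / D  # Dittus-Boelter, liquid only
    # two-phase multiplier: reduces to 1 at x = 0 (pure liquid)
    return h_l * ((1.0 - x)**0.8
                  + 3.8 * x**0.76 * (1.0 - x)**0.04 / p_red**0.38)

# purely illustrative, roughly water-like inputs
h = shah_htc(G=300.0, D=0.02, x=0.5, k_l=0.68, mu_l=2.8e-4, pr_l=1.75, p_red=0.05)
```

    In a Modelica implementation such as the one described above, this correlation would be evaluated locally along the discretized tube as the quality x changes.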

  11. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim; Harbi, Razen; Khayyat, Zuhair; Kalnis, Panos

    2017-01-01

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  12. Comparison of Instrumentation and Control Parameters Based on Simulation and Experimental Data for Reactor TRIGA PUSPATI

    International Nuclear Information System (INIS)

    Anith Khairunnisa Ghazali; Mohd Sabri Minhat

    2015-01-01

    Reactor TRIGA PUSPATI (RTP), the only research reactor in Malaysia, has undergone safe operation for more than 30 years. The main safety feature of the Instrumentation and Control (I&C) system design is that any failure in the electronics, or in associated components, does not lead to an uncontrolled rate of reactivity insertion. No reference RTP simulation model has so far been designed for study and research. A comparison of I&C parameters is therefore essential in order to design, in MATLAB/Simulink, an RTP model that is as close as possible to the real RTP. A simulation of a TRIGA-type reactor has already been developed using a desktop reactor simulator, the Personal Computer Transient Analyzer (PCTRAN). The experimental data from RTP and the PCTRAN simulation show some similarities and differences due to certain limitations. Currently, a structured RTP simulation has been designed using MATLAB and Simulink tools, consisting of an ideal fission chamber, a controller, the control rod position, a height-to-worth relation and an RTP model. This paper focuses on a comparison between real data from RTP and simulation results from PCTRAN for I&C parameters such as water level, fuel temperature, bulk temperature, rated power and rod position. An error analysis of the similarities and differences in the I&C parameters is obtained and analysed. The results will be used as a reference for the proposed new structure of the RTP model. (author)

  13. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-10-19

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  14. Application of a two-region kinetic model for reflected reactors to experimental data

    International Nuclear Information System (INIS)

    Busch, R.D.; Spriggs, G.D.; Williams, J.G.

    1996-01-01

    Reflected reactors constitute one of the most important classes of nuclear reactors. Yet, during the past 50 yr, a plethora of experimental data involving reflected systems has been reported in the literature that cannot be satisfactorily explained using the "standard" (i.e., one-region) point-kinetic model. In particular, many have observed that the prompt-decay α curves obtained from Rossi-α and pulsed-neutron experiments can exhibit multiple decay modes in the vicinity of delayed critical in some types of reflected systems. When analyzed using theories based on the standard point-kinetic model, these data yielded system lifetimes that do not always agree well with the lifetimes predicted by numerical solutions of the multigroup, multidimensional diffusion or transport equations. In several cases, when the longest-lived decay mode (i.e., the dominant root) was plotted as a function of reactivity, the α curve intercepted the reactivity axis at a reactivity significantly greater than 1$. Brunson dubbed this seemingly inexplicable behavior the "dollar discrepancy." Furthermore, it has also been observed that the kinetic behavior of some reflected fast-burst assemblies exhibits a very pronounced nonlinear relationship between reactivity and the initial inverse period for reactivity insertions > 1$.

  15. A multi-agent architecture for sharing knowledge and experimental data about waste water treatment plants through the Internet

    International Nuclear Information System (INIS)

    Abu Yaman, I. R.; Kerckhoffs, J. E.

    1998-01-01

    In this paper, we present a first prototype of a local multi-agent architecture for sharing knowledge and experimental data about waste water treatment plants through the Internet, or more specifically the WWW. Using a web browser such as Netscape, a user can access a CLIPS expert system (advising on waste water cleaning technologies) and experimental data files. The local prototype discussed is part of a proposed global agent architecture. (authors)

  16. Calculating the parameters of experimental data Gauss distribution using the least square fit method and evaluation of their accuracy

    International Nuclear Information System (INIS)

    Guseva, E.V.; Peregudov, V.N.

    1982-01-01

    The FITGAV program for calculating the parameters of the Gauss curve describing experimental data is considered. The calculations are based on the least-squares fit method. Estimates of the errors in the parameter determination, as a function of the experimental sample size, and of their statistical significance are obtained. A fit of 100 points takes less than 1 s on an SM-4 type computer.
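    The FITGAV source itself is not reproduced in this record, but the underlying technique can be sketched. The snippet below is an illustrative reconstruction, not the original SM-4 code (the function name and sample data are invented): for strictly positive, noise-free samples, a least-squares parabola fit to log y yields the Gaussian amplitude, mean and width in closed form.

```python
import numpy as np

def fit_gauss(x, y):
    """Least-squares fit of y = A * exp(-(x - mu)**2 / (2 * sigma**2)).

    For strictly positive, (nearly) noise-free data, log(y) is a parabola,
    so an ordinary polynomial least-squares fit recovers the parameters.
    """
    c2, c1, c0 = np.polyfit(x, np.log(y), 2)   # log y = c2*x^2 + c1*x + c0
    sigma = np.sqrt(-1.0 / (2.0 * c2))         # c2 = -1/(2*sigma^2)
    mu = c1 * sigma**2                         # c1 = mu/sigma^2
    A = np.exp(c0 + mu**2 / (2.0 * sigma**2))  # c0 = ln A - mu^2/(2*sigma^2)
    return A, mu, sigma

# 100-point synthetic sample, echoing the abstract's timing example
x = np.linspace(-3.0, 7.0, 100)
y = 5.0 * np.exp(-(x - 2.0)**2 / (2.0 * 1.5**2))
A, mu, sigma = fit_gauss(x, y)  # recovers A=5, mu=2, sigma=1.5
```

    With noisy data, the same least-squares machinery applies, but the error estimates discussed in the abstract then depend on the sample size and the noise level.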

  17. An easy-to-build remote laboratory with data transfer using the Internet School Experimental System

    International Nuclear Information System (INIS)

    Schauer, Frantisek; Ozvoldova, Miroslava; Lustig, Frantisek; Dvorak, JirI

    2008-01-01

    The present state of information communication technology makes it possible to devise and run computer-based e-laboratories accessible to any user with a connection to the Internet, equipped with very simple technical means and making full use of web services. Thus, the way is open for a new strategy of physics education with strongly global features, based on experiment and experimentation. We name this strategy integrated e-learning, and remote experiments across the Internet are the foundation for this strategy. We present both pedagogical and technical reasoning for the remote experiments and outline a simple system based on a server-client approach, and on web services and Java applets. We give here an outline of the prospective remote laboratory system with data transfer using the Internet School Experimental System (ISES) as hardware and the ISES WEB Control kit as software. This approach enables the simple construction of remote experiments without building any hardware and with virtually no programming, using a paste-and-copy approach with typical prebuilt blocks such as a camera view, controls, graphs, displays, etc. We have set up and operate at present seven experiments, running round the clock, with more than 12 000 connections since 2005. The experiments are widely used in practical teaching of both university and secondary level physics. The recording of the detailed steps the experimenter takes during the measurement enables detailed study of the psychological aspects of running the experiments. The system is ready for a network of universities to start covering the basic set of physics experiments. In conclusion we summarize the results achieved and experiences of using remote experiments built on the ISES hardware system.

  18. An easy-to-build remote laboratory with data transfer using the Internet School Experimental System

    Science.gov (United States)

    Schauer, František; Lustig, František; Dvořák, Jiří; Ožvoldová, Miroslava

    2008-07-01

    The present state of information communication technology makes it possible to devise and run computer-based e-laboratories accessible to any user with a connection to the Internet, equipped with very simple technical means and making full use of web services. Thus, the way is open for a new strategy of physics education with strongly global features, based on experiment and experimentation. We name this strategy integrated e-learning, and remote experiments across the Internet are the foundation for this strategy. We present both pedagogical and technical reasoning for the remote experiments and outline a simple system based on a server-client approach, and on web services and Java applets. We give here an outline of the prospective remote laboratory system with data transfer using the Internet School Experimental System (ISES) as hardware and the ISES WEB Control kit as software. This approach enables the simple construction of remote experiments without building any hardware and with virtually no programming, using a paste-and-copy approach with typical prebuilt blocks such as a camera view, controls, graphs, displays, etc. We have set up and operate at present seven experiments, running round the clock, with more than 12 000 connections since 2005. The experiments are widely used in practical teaching of both university and secondary level physics. The recording of the detailed steps the experimenter takes during the measurement enables detailed study of the psychological aspects of running the experiments. The system is ready for a network of universities to start covering the basic set of physics experiments. In conclusion we summarize the results achieved and experiences of using remote experiments built on the ISES hardware system.

  19. An easy-to-build remote laboratory with data transfer using the Internet School Experimental System

    Energy Technology Data Exchange (ETDEWEB)

    Schauer, Frantisek; Ozvoldova, Miroslava [Trnava University, Faculty of Pedagogy, Department of Physics, Trnava (Slovakia); Lustig, Frantisek; Dvorak, JirI [Charles University, Faculty of Mathematics and Physics, Department of Didactics of Physics, Prague (Czech Republic)], E-mail: fschauer@ft.utb.cz

    2008-07-15

    The present state of information communication technology makes it possible to devise and run computer-based e-laboratories accessible to any user with a connection to the Internet, equipped with very simple technical means and making full use of web services. Thus, the way is open for a new strategy of physics education with strongly global features, based on experiment and experimentation. We name this strategy integrated e-learning, and remote experiments across the Internet are the foundation for this strategy. We present both pedagogical and technical reasoning for the remote experiments and outline a simple system based on a server-client approach, and on web services and Java applets. We give here an outline of the prospective remote laboratory system with data transfer using the Internet School Experimental System (ISES) as hardware and the ISES WEB Control kit as software. This approach enables the simple construction of remote experiments without building any hardware and with virtually no programming, using a paste-and-copy approach with typical prebuilt blocks such as a camera view, controls, graphs, displays, etc. We have set up and operate at present seven experiments, running round the clock, with more than 12 000 connections since 2005. The experiments are widely used in practical teaching of both university and secondary level physics. The recording of the detailed steps the experimenter takes during the measurement enables detailed study of the psychological aspects of running the experiments. The system is ready for a network of universities to start covering the basic set of physics experiments. In conclusion we summarize the results achieved and experiences of using remote experiments built on the ISES hardware system.

  20. Experimental aerodynamic and acoustic model testing of the Variable Cycle Engine (VCE) testbed coannular exhaust nozzle system: Comprehensive data report

    Science.gov (United States)

    Nelson, D. P.; Morris, P. M.

    1980-01-01

    The component detail design drawings of the one-sixth-scale model of the variable cycle engine testbed demonstrator exhaust system that was tested are presented. Also provided are the basic acoustic and aerodynamic data acquired during the experimental model tests. The model drawings, an index to the acoustic data, an index to the aerodynamic data, tabulated and graphical acoustic data, and the tabulated aerodynamic data and graphs are discussed.

  1. Using the Øresund experimental data to evaluate the ARAC emergency response models

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Gryning, S.E.

    1988-07-01

    A series of meteorological and tracer experiments was conducted during May and June 1984 over the 20-km-wide Øresund strait between Denmark and Sweden for the purpose of studying atmospheric dispersion processes over cold water and warm land surfaces and providing the data needed to evaluate meso-scale models in a coastal environment. In concert with these objectives, the data from these experiments have been used as part of a continuing effort to evaluate the capability of the three-dimensional MATHEW/ADPIC (M/A) atmospheric dispersion models to simulate pollutant transport and diffusion characteristics of the atmosphere during a wide variety of meteorological conditions. Since previous studies have focused primarily on M/A model evaluations over rolling and complex terrain at inland sites, the Øresund experiments provide a unique opportunity to evaluate the models in a coastal environment. The M/A models are used by the Atmospheric Release Advisory Capability (ARAC), developed by the Lawrence Livermore National Laboratory, for performing real-time assessments of the environmental consequences of potential or actual releases of radioactivity into the atmosphere. These assessments include estimation of radiation doses to nearby population centers and of the extent of surface contamination. Model evaluations, using field experimental data such as those generated by the Øresund experiments, serve as a basis for providing emergency response managers with estimates of the uncertainties associated with accident consequence assessments. This report provides a brief description of the Øresund experiments, the current understanding of the meteorological processes governing pollutant dispersion over the Øresund strait, and the results of the M/A model simulations of these experiments. 11 refs., 7 figs., 1 tab

  2. Evaluation of medical countermeasures against organophosphorus compounds: the value of experimental data and computer simulations.

    Science.gov (United States)

    Worek, Franz; Aurbek, Nadine; Herkert, Nadja M; John, Harald; Eddleston, Michael; Eyer, Peter; Thiermann, Horst

    2010-09-06

    Despite extensive research for more than six decades on medical countermeasures against poisoning by organophosphorus compounds (OP) the treatment options are meagre. The presently established acetylcholinesterase (AChE) reactivators (oximes), e.g. obidoxime and pralidoxime, are insufficient against a number of nerve agents and there is ongoing debate on the benefit of oxime treatment in human OP pesticide poisoning. Up to now, the therapeutic efficacy of oximes was mostly evaluated in animal models but substantial species differences prevent direct extrapolation of animal data to humans. Hence, it was considered essential to establish relevant experimental in vitro models for the investigation of oximes as antidotes and to develop computer models for the simulation of oxime efficacy in different scenarios of OP poisoning. Kinetic studies on the various interactions between erythrocyte AChE from various species, structurally different OP and different oximes provided a basis for the initial assessment of the ability of oximes to reactivate inhibited AChE. In the present study, in vitro enzyme-kinetic and pharmacokinetic data from a minipig model of dimethoate poisoning and oxime treatment were used to calculate dynamic changes of AChE activities. It could be shown that there is a close agreement between calculated and in vivo AChE activities. Moreover, computer simulations provided insight into the potential and limitations of oxime treatment. In the end, such data may be a versatile tool for the ongoing discussion of the pros and cons of oxime treatment in human OP pesticide poisoning. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  3. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.-M.

    2008-01-01

    CFD code validation requires experimental data that characterize the distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The paper reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state-of-the-art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of other non

  4. Novel experimental measuring techniques required to provide data for CFD validation

    International Nuclear Information System (INIS)

    Prasser, H.M.

    2007-01-01

    CFD code validation requires experimental data that characterize distributions of parameters within large flow domains. On the other hand, the development of geometry-independent closure relations for CFD codes has to rely on instrumentation and experimental techniques appropriate for the phenomena that are to be modelled, which usually requires high spatial and time resolution. The presentation reports on the use of wire-mesh sensors to study turbulent mixing processes in single-phase flow as well as to characterize the dynamics of the gas-liquid interface in a vertical pipe flow. Experiments at a pipe of a nominal diameter of 200 mm are taken as the basis for the development and test of closure relations describing bubble coalescence and break-up, interfacial momentum transfer and turbulence modulation for a multi-bubble-class model. This is done by measuring the evolution of the flow structure along the pipe. The transferability of the extended CFD code to more complicated 3D flow situations is assessed against measured data from tests involving two-phase flow around an asymmetric obstacle placed in a vertical pipe. The obstacle, a half-moon-shaped diaphragm, is movable in the direction of the pipe axis; this allows the 3D gas fraction field to be recorded without changing the sensor position. In the outlook, the pressure chamber of TOPFLOW is presented, which will be used as the containment for a test facility in which experiments can be conducted in pressure equilibrium with the inner atmosphere of the tank. In this way, flow structures can be observed by optical means through large-scale windows even at pressures of up to 5 MPa. The so-called 'Diving Chamber' technology will be used for Pressurized Thermal Shock (PTS) tests. Finally, some important trends in instrumentation for multi-phase flows will be given. This includes the state-of-the-art of X-ray and gamma tomography, new multi-component wire-mesh sensors, and a discussion of the potential of

  5. Analysis of the experimental data of air pollution using atmospheric dispersion modeling and rough set

    International Nuclear Information System (INIS)

    Halfa, I.K.I

    2008-01-01

    This thesis contains four chapters and a list of references. In chapter 1, we give a brief survey of the atmospheric concepts and the topological methods for data analysis. In section 1.1, we give a general introduction. We recall some atmospheric fundamentals in section 1.2. Section 1.3 presents the concepts of modern topological methods for data analysis. In chapter 2, we study the properties of the atmosphere and focus on the concept of the rough set and its properties. The rough set concept is applied to analyze the atmospheric data. In section 2.1, we give a general introduction to the rough set concept and the properties of the atmosphere. Section 2.2 focuses on the concept of the rough set and its properties, and on the generalization of the approximations of rough set theory using topological spaces. In section 2.3, we study the stability of the atmosphere at the Inshas location for all seasons using different schemes and compare these schemes using statistical and rough set methods. In section 2.4, we introduce the mixing height of the plume for all seasons. Section 2.5 introduces seasonal surface-layer turbulence processes for the Inshas location. Section 2.6 gives a comparison between the seasonal surface-layer turbulence processes for the Inshas location and for different locations using rough set theory. In chapter 3, we focus on the concept of the variable precision rough set (VPRS) and its properties, and use it to compare the estimated and observed concentrations of air pollution for the Inshas location. In section 3.1, we give a general introduction to VPRS and air pollution. In section 3.2, we focus on the concept and properties of VPRS. In section 3.3, we introduce a method to estimate the concentration of air pollution for the Inshas location using the Gaussian plume model. Section 3.4 presents the experimental data. The estimated data are compared with the observed data using statistical methods in section 3.5. In Section 3
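    Since the thesis leans on rough set approximations, a minimal, generic sketch of the lower and upper approximations of a set with respect to an equivalence partition may help; the function name, universe and equivalence classes below are invented examples, not the Inshas data:

```python
def rough_approximations(partition, target):
    """Lower/upper approximation of `target` w.r.t. an equivalence partition.

    partition: iterable of disjoint sets (equivalence classes) covering the universe.
    target: the set of elements to approximate.
    Returns (lower, upper) with lower <= target <= upper.
    """
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:    # class lies entirely inside the target
            lower |= block
        if block & target:     # class intersects the target
            upper |= block
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5, 6}]  # invented equivalence classes
lower, upper = rough_approximations(blocks, {1, 2, 3})
# lower == {1, 2}; upper == {1, 2, 3, 4}; the difference is the boundary region
```

    The boundary region (upper minus lower) measures how "roughly" the available attributes describe the target set, which is the quantity such analyses of atmospheric data exploit.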

  6. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using a dynamic atlas of experimental data and of the mathematical models describing them in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The ability to display large numbers of experimental data points together with the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope, and its interface with the experimental data and with the codes of the adjusted parametric models carrying the best-fit parameters, is shown schematically. The DaMoScope codes are freely available.

  7. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using a dynamic atlas of experimental data and of the mathematical models describing them in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The ability to display large numbers of experimental data points together with the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope, and its interface with the experimental data and with the codes of the adjusted parametric models carrying the best-fit parameters, is shown schematically. The DaMoScope codes are freely available.

  8. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using a dynamic atlas of experimental data and of the mathematical models describing them in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The ability to display large numbers of experimental data points together with the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope, and its interface with the experimental data and with the codes of the adjusted parametric models carrying the best-fit parameters, is shown schematically. The DaMoScope codes are freely available.

  9. Program PLOTC4. (Version 87-1). Plot evaluated data from the ENDF/B format and/or experimental data which is in a computation format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1987-06-01

Experimental and evaluated nuclear reaction data are compiled worldwide in the EXFOR format (see document IAEA-NDS-1) and the ENDF format (see document IAEA-NDS-10), respectively. The computer program PLOTC4 described in the present document plots data from both formats; EXFOR data must first be converted to a ''computation format'' (see document IAEA-NDS-80). The program is available free of charge upon request from the IAEA Nuclear Data Section. (author)

  10. Experimental data and theoretical predictions for the rate of electrophoretic clarification of colloidal suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, T.J.; Davis, E.J.

    2000-05-01

An experimental and theoretical investigation of the electrophoretic clarification rate of colloidal suspensions was conducted. The suspensions included a coal-washing effluent and a model system of TiO{sub 2} particles. A parametric study of TiO{sub 2} suspensions was performed to validate an analysis of the electrophoretic motion of the clarification front formed between a clear zone and the suspension. To measure the electric field strength needed in the prediction of the location of the front, a moveable probe and salt bridge were connected to a reference electrode. Using the measured electric field strengths, it was found that the numerical solution to the unit cell electrophoresis model agrees with the measured clarification rates. For suspensions with moderately thick electric double layers and high particle volume fractions the deviations from classical Smoluchowski theory are substantial, and the numerical analysis is in somewhat better agreement with the data than a prior solution of the problem. The numerical model reduces to the predictions of previous theories as the thickness of the electric double layer decreases, and it is in good agreement with the clarification rate measured for a coal-washing effluent suspension with thin electric double layers.

  11. Experimental data and theoretical predictions of the rate of electrophoretic clarification of colloidal suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, T.J.; Davis, E.J. [University of Washington, Seattle, WA (USA). Dept. of Chemical Engineering

    2000-05-01

    An experimental and theoretical investigation of the electrophoretic clarification rate of colloidal suspensions was conducted. The suspensions included a coal-washing effluent and a model system of TiO{sub 2} particles. A parametric study of TiO{sub 2} suspensions was performed to validate an analysis of the electrophoretic motion of the clarification front formed between a clear zone and the suspension. To measure the electric field strength needed in the prediction of the location of the front, a moveable probe and salt bridge were connected to a reference electrode. Using the measured electric field strength, it was found that the numerical solution to the unit cell electrophoresis model agrees with the measured clarification rates. For suspensions with moderately thick electric double layers and high particle volume fractions the deviations from classical Smoluchowski theory are substantial, and the numerical analysis is in somewhat better agreement with the data than a prior solution of the problem. The numerical model reduces to the predictions of previous theories as the thickness of the electric double layer decreases, and it is in good agreement with the clarification rate measured for a coal-washing effluent suspension with thin electric double layers. 21 refs., 8 figs., 4 tabs.

  12. Extrapolation of experimental data on late effects of low-dose radionuclides in man

    International Nuclear Information System (INIS)

    Kalistratova, V.S.; Nisimov, P.G.

    1997-01-01

The exposure conditions of populations living in areas contaminated with radionuclides were simulated in an experimental study using strainless white rats of different ages. The significance of age for the late stochastic effects of internal contamination with low doses of 131I, 137Cs, 144Ce and 106Ru was studied. Some common regularities and differences in the formation of late effects depending on age were found. The results showed that the number of tumours increased in the groups of animals exposed at the youngest ages. The younger the animal at the moment of internal radionuclide contamination, the higher the percentage of malignant tumours that appeared; this was especially so for tumours of the endocrine glands (pituitary, suprarenal and thyroid). Differences in the formation of late effects related to the different types of radionuclide distribution within the body were estimated. On the basis of extrapolation, it was concluded that the human organism is most radiosensitive when exposed in the early postnatal or pubertal period (1.5-2.0, or sometimes even 3-5, times more sensitive than adults). The data confirm the opinion that children are the most critical part of the population even in the case of low-dose radiation exposure. (author)

  13. Radiation effects modeling and experimental data on I2L devices

    International Nuclear Information System (INIS)

    Long, D.M.; Repper, C.J.; Ragonese, L.J.; Yang, N.T.

    1976-01-01

This paper reports on an Integrated Injection Logic (I2L) radiation effects model which includes radiation effects phenomena. Twenty-five individual current components were identified for an I2L logic gate by assuming wholly vertical or wholly horizontal current flow. Equations were developed for each component in terms of basic parameters such as doping profiles, distances, and diffusion lengths, and set up on a computer for specific logic cell configurations. For neutron damage, the model shows excellent agreement with experimental data. Reactor test results on GE I2L samples showed a neutron hardness level in the range of 6 x 10^12 to 3 x 10^13 n/cm^2 (1 MeV Eq), and cobalt-60 tests showed a total dose hardness of 6 x 10^4 to greater than 1 x 10^6 Rads(Si) (all device types at an injection current of 50 microamps per gate). It was found that significant hardness improvements could be achieved by: (a) diffusion profile variation, (b) utilizing a tight N+ collar around the cell, and (c) locating the collector close to the injector. Flash X-ray tests showed a transient logic upset threshold of 1 x 10^9 Rads(Si)/sec for a 28 ns pulse, and a survival level greater than 2 x 10^12 Rads(Si)/sec

  14. Experimental and theoretical data on ion-molecule-reactions relevant for plasma modelling

    International Nuclear Information System (INIS)

    Hansel, A.; Praxmarer, C.; Lindinger, W.

    1995-01-01

Despite the fact that the rate coefficients of hundreds of ion-molecule reactions have been published in the literature, many more data are required for the purpose of plasma modelling. Many ion-molecule reactions have rate coefficients, k, as large as the collisional limiting value, k_c, i.e. the rate coefficients at which ion-neutral collision complexes are formed are close to the actual rate coefficients observed. In the case of the interaction of an ion with a non-polar molecule, k_c is given by the Langevin limiting value k_L, typically 10^-9 cm^3 s^-1. However, when ions react with polar molecules, k_c is predicted by the average dipole orientation (ADO) theory. These classical theories yield accurate rate coefficients at thermal and elevated temperatures for practically all proton transfer reactions as well as for many charge transfer and hydrogen abstraction reactions. The agreement between experimental and calculated values is usually better than ±20%, and in the case of proton transfer reactions the agreement seems to be even better, as recent investigations have shown. Even the interaction of the permanent ion dipole with non-polar and polar neutrals can be taken into account to predict reaction rate coefficients, as has been shown very recently in reactions of the highly polar ion ArH3+ with various neutrals
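As a back-of-the-envelope check on the Langevin limiting value quoted above (typically around 10^-9 cm^3 s^-1), the following minimal sketch evaluates k_L = 2*pi*e*sqrt(alpha/mu) in Gaussian units; the polarizability and reduced mass are illustrative values, not taken from the record:

```python
import math

ESU = 4.803e-10   # elementary charge in esu (Gaussian units)
AMU = 1.6605e-24  # atomic mass unit in grams

def langevin_rate(alpha_cm3, mu_amu):
    """Langevin capture rate coefficient k_L = 2*pi*e*sqrt(alpha/mu), in cm^3/s.

    alpha_cm3: neutral's polarizability volume in cm^3; mu_amu: reduced mass in amu.
    """
    mu_g = mu_amu * AMU
    return 2.0 * math.pi * ESU * math.sqrt(alpha_cm3 / mu_g)

# Typical magnitudes: polarizability ~1 A^3 = 1e-24 cm^3, reduced mass ~10 amu
kL = langevin_rate(1.0e-24, 10.0)   # comes out in the 1e-9 cm^3/s range
```

For these inputs k_L evaluates to roughly 7e-10 cm^3 s^-1, consistent with the order of magnitude stated in the abstract.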

  15. Spontaneous Time Symmetry Breaking in System with Mixed Strategy Nash Equilibrium: Evidences in Experimental Economics Data

    Science.gov (United States)

    Wang, Zhijian; Xu, Bin; Zhejiang Collaboration

    2011-03-01

In social science, laboratory experiments with interacting human subjects are a standard test-bed for studying social processes at the micro level. Usually, as in physics, processes near equilibrium are treated as stochastic processes with time-reversal symmetry (TRS). To the best of our knowledge, the breaking of time symmetry near equilibrium, as well as the existence of robust time-anti-symmetric processes, has not been clearly reported in experimental economics until now. By employing a Markov transition method to analyze data from human-subject 2x2 games with a wide range of parameters and mixed Nash equilibria, we study the time symmetry of the social interaction process near the Nash equilibrium. We find that time symmetry is broken and that robust time-anti-symmetric processes exist. We also report the weight of the time-anti-symmetric processes in the total processes of each of the games. Evidence from laboratory marketing experiments is also provided as a one-dimensional case; there, too, time-anti-symmetric cycles can be captured. The proportion of time-anti-symmetric processes is small, but the cycles are distinguishable.
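The Markov transition analysis described here amounts to counting transitions in the observed state sequence and checking detailed balance: a nonzero net flux between a pair of states signals time-reversal asymmetry. The sketch below is a hypothetical minimal illustration of that idea, not the authors' code:

```python
from collections import Counter

def cycle_flux_asymmetry(states):
    """Net empirical probability flux i->j minus j->i for each state pair.

    A nonzero entry means the pair violates detailed balance, i.e. the
    observed process is not time-reversal symmetric.
    """
    pairs = Counter(zip(states, states[1:]))   # counts of one-step transitions
    n = len(states) - 1
    keys = {tuple(sorted(k)) for k in pairs}   # unordered state pairs seen
    return {(i, j): (pairs.get((i, j), 0) - pairs.get((j, i), 0)) / n
            for i, j in keys}

# A deterministic 3-cycle 0 -> 1 -> 2 -> 0 is maximally time-anti-symmetric:
seq = [0, 1, 2] * 100
f = cycle_flux_asymmetry(seq)
```

For the cyclic sequence, the flux is strongly positive along 0->1 and 1->2 and strongly negative along 0->2 (i.e. the 2->0 leg), exposing the cycle; a time-reversible chain would give fluxes near zero.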

  16. Straw combustion on slow-moving grates - a comparison of model predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Kaer, S.K. [Aalborg Univ. (Denmark). Inst. of Energy Technology

    2005-03-01

    Combustion of straw in grate-based boilers is often associated with high emission levels and relatively poor fuel burnout. A numerical grate combustion model was developed to assist in improving the combustion performance of these boilers. The model is based on a one-dimensional ''walking-column'' approach and includes the energy equations for both the fuel and the gas accounting for heat transfer between the two phases. The model gives important insight into the combustion process and provides inlet conditions for a computational fluid dynamics analysis of the freeboard. The model predictions indicate the existence of two distinct combustion modes. Combustion air temperature and mass flow-rate are the two parameters determining the mode. There is a significant difference in reaction rates (ignition velocity) and temperature levels between the two modes. Model predictions were compared to measurements in terms of ignition velocity and temperatures for five different combinations of air mass flow and temperature. In general, the degree of correspondence with the experimental data is favorable. The largest difference between measurements and predictions occurs when the combustion mode changes. The applicability to full-scale is demonstrated by predictions made for an existing straw-fired boiler located in Denmark. (author)

  17. Experimental neutron capture data of $^{58}$Ni from the CERN n_TOF facility

    CERN Document Server

    Žugec, P.; Colonna, N.; Bosnar, D.; Altstadt, S.; Andrzejewski, J.; Audouin, L.; Bécares, V.; Bečvář, F.; Belloni, F.; Berthoumieux, E.; Billowes, J.; Boccone, V.; Brugger, M.; Calviani, M.; Calviño, F.; Cano-Ott, D.; Carrapiço, C.; Cerutti, F.; Chiaveri, E.; Chin, M.; Cortés, G.; Cortés-Giraldo, M.A.; Diakaki, M.; Domingo-Pardo, C.; Duran, I.; Dzysiuk, N.; Eleftheriadis, C.; Ferrari, A.; Fraval, K.; Ganesan, S.; García, A.R.; Giubrone, G.; Gómez-Hornillos, M.B.; Gonçalves, I.F.; González-Romero, E.; Griesmayer, E.; Guerrero, C.; Gunsing, F.; Gurusamy, P.; Jenkins, D.G.; Jericha, E.; Kadi, Y.; Käppeler, F.; Karadimos, D.; Koehler, P.; Kokkoris, M.; Krtička, M.; Kroll, J.; Langer, C.; Lederer, C.; Leeb, H.; Leong, L.S.; Losito, R.; Manousos, A.; Marganiec, J.; Martìnez, T.; Massimi, C.; Mastinu, P.F.; Mastromarco, M.; Meaze, M.; Mendoza, E.; Mengoni, A.; Milazzo, P.M.; Mingrone, F.; Mirea, M.; Mondalaers, W.; Paradela, C.; Pavlik, A.; Perkowski, J.; Pignatari, M.; Plompen, A.; Praena, J.; Quesada, J.M.; Rauscher, T.; Reifarth, R.; Riego, A.; Roman, F.; Rubbia, C.; Sarmento, R.; Schillebeeckx, P.; Schmidt, S.; Tagliente, G.; Tain, J.L.; Tarrío, D.; Tassan-Got, L.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Versaci, R.; Vermeulen, M.J.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Ware, T.; Weigand, M.; Weiß, C.; Wright, T.

    2013-01-01

The $^{58}$Ni $(n,\\gamma)$ cross section has been measured at the neutron time-of-flight facility n_TOF at CERN, in the energy range from 27 meV up to 400 keV. In total, 51 resonances have been analyzed up to 122 keV. Maxwellian averaged cross sections (MACS) have been calculated for stellar temperatures of kT$=$5-100 keV with uncertainties of less than 6%, showing fair agreement with recent experimental and evaluated data up to kT = 50 keV. The MACS extracted in the present work at 30 keV is 34.2$\\pm$0.6$_\\mathrm{stat}\\pm$1.8$_\\mathrm{sys}$ mb, in agreement with the latest results and evaluations, but 12% lower than the recent KADoNIS compilation of astrophysical cross sections. When included in models of s-process nucleosynthesis in massive stars, this change results in a 60% increase of the abundance of $^{58}$Ni, with negligible propagation to heavier isotopes. The reason is that, with either the old or the new MACS, $^{58}$Ni is efficiently depleted by neutron captures.
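For reference, the Maxwellian-averaged cross section quoted above is defined as MACS(kT) = (2/sqrt(pi)) (kT)^-2 * integral of sigma(E) E exp(-E/kT) dE. A minimal numerical sketch of that average follows; the constant cross section is purely illustrative, not the n_TOF resonance data:

```python
import math

def macs(sigma, kT, emax_factor=30.0, n=20000):
    """Maxwellian-averaged cross section via the trapezoidal rule.

    sigma: cross section as a function of energy (same units as the result);
    kT: thermal energy (same units as E). Integrates sigma(E)*E*exp(-E/kT).
    """
    emax = emax_factor * kT
    h = emax / n
    total = 0.0
    for i in range(n + 1):
        E = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        total += w * sigma(E) * E * math.exp(-E / kT)
    total *= h
    return (2.0 / math.sqrt(math.pi)) * total / kT**2

# Sanity check: for a constant cross section the average reduces to
# (2/sqrt(pi)) * sigma, independent of kT.
m = macs(lambda E: 34.2, 30.0)
```

With sigma = 34.2 (in mb, say) the result is (2/sqrt(pi))*34.2, approximately 38.6, which confirms the normalization of the quadrature.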

  18. State residence restrictions and forcible rape rates: a multistate quasi-experimental analysis of UCR data.

    Science.gov (United States)

    Socia, Kelly M

    2015-04-01

    This study examines whether the presence of state residence restrictions resulted in changes in statewide rates of forcible rape. It builds on the limited geographic coverage of prior studies by including state-level Uniform Crime Report (UCR) data across 19 years for 49 states and the District of Columbia. It uses a quasi-experimental research method based on a longitudinal fixed-effects panel model design, which can help control for relatively static differences between states. Results indicate that when a state residence restriction was present, regardless of how it was measured, rates of UCR forcible rape were higher in the state than when the policy was not present. This suggests that residence restrictions, at least at the state level, are not useful as an overall crime prevention measure, but may be useful for increasing detection or reporting levels of such crimes. However, results also suggest that the size of the increase varied by whether the policy only applied to offenders with child victims or also included those with adult victims. Implications for research and policy are discussed. © The Author(s) 2013.
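The longitudinal fixed-effects panel design described here controls for relatively static differences between states by demeaning each variable within each state (the "within" estimator), so that baseline level differences drop out of the slope estimate. A hypothetical toy sketch with made-up numbers, not the study's UCR data:

```python
def within_estimator(panel):
    """Fixed-effects (within) estimator for y = beta*x + state effect.

    panel: one list of (x, y) observations per state. Demean x and y within
    each state, pool the demeaned data, and regress through the origin.
    """
    xs, ys = [], []
    for state_obs in panel:
        mx = sum(x for x, _ in state_obs) / len(state_obs)
        my = sum(y for _, y in state_obs) / len(state_obs)
        for x, y in state_obs:
            xs.append(x - mx)
            ys.append(y - my)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Two states with very different baseline rates but the same policy effect
# (+2 when the policy indicator x switches from 0 to 1):
panel = [
    [(0, 10.0), (1, 12.0), (1, 12.0)],   # state A, high baseline
    [(0, 3.0), (0, 3.0), (1, 5.0)],      # state B, low baseline
]
beta = within_estimator(panel)
```

Despite the 7-point gap in baselines, the within estimator recovers the common effect of 2, illustrating why the design is robust to static cross-state differences.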

  19. Pitting corrosion and structural reliability of corroding RC structures: Experimental data and probabilistic analysis

    International Nuclear Information System (INIS)

    Stewart, Mark G.; Al-Harthy, Ali

    2008-01-01

    A stochastic analysis is developed to assess the temporal and spatial variability of pitting corrosion on the reliability of corroding reinforced concrete (RC) structures. The structure considered herein is a singly reinforced RC beam with Y16 or Y27 reinforcing bars. Experimental data obtained from corrosion tests are used to characterise the probability distribution of pit depth. The RC beam is discretised into a series of small elements and maximum pit depths are generated for each reinforcing steel bar in each element. The loss of cross-sectional area, reduction in yield strength and reduction in flexural resistance are then inferred. The analysis considers various member spans, loading ratios, bar diameters and numbers of bars in a given cross-section, and moment diagrams. It was found that the maximum corrosion loss in a reinforcing bar conditional on beam collapse was no more than 16%. The probabilities of failure considering spatial variability of pitting corrosion were up to 200% higher than probabilities of failure obtained from a non-spatial analysis after 50 years of corrosion. This shows the importance of considering spatial variability in a structural reliability analysis for deteriorating structures, particularly for corroding RC beams in flexure
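The spatial scheme described above (generate a maximum pit depth for each small element, then let the deepest pit govern the section) can be sketched as a small Monte Carlo. The Gumbel parameters, the pit-area formula, and the 10% capacity-loss threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import math, random

def sample_max_pit(mu, beta):
    """Sample a maximum pit depth (mm) from a Gumbel (Type-I extreme value)
    distribution -- a common choice for maxima; parameters are illustrative."""
    u = random.random()
    return mu - beta * math.log(-math.log(u))

def beam_capacity_ratio(n_elements, bar_diam_mm, mu, beta):
    """Remaining-capacity ratio of a bar, governed by the element with the
    deepest pit along the span (weakest link)."""
    area0 = math.pi * (bar_diam_mm / 2.0) ** 2
    worst = max(sample_max_pit(mu, beta) for _ in range(n_elements))
    # hemispherical pit: lost cross-sectional area ~ pi*p^2/2 (illustrative)
    lost = min(math.pi * worst**2 / 2.0, area0)
    return (area0 - lost) / area0

random.seed(1)
ratios = [beam_capacity_ratio(50, 16.0, 1.0, 0.3) for _ in range(2000)]
p_fail = sum(r < 0.9 for r in ratios) / len(ratios)   # P(>10% capacity loss)
```

The point of the exercise is that the spatial maximum over 50 elements is systematically deeper than any single-element pit, so the spatial model yields a higher exceedance probability than a non-spatial one, in the spirit of the paper's 200% finding.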

  20. Recent developments in identification of kinetic and transport models from experimental data. Contributed Paper IT-08

    International Nuclear Information System (INIS)

    Bhatt, Nirav P.

    2014-01-01

In this presentation, we discuss recent developments in the identification of kinetic and transport models from experimental data and their importance in spent fuel reprocessing. The traditional kinetic modelling approaches, the differential and integral methods, are presented first to set the stage. Two frameworks for identifying kinetic and transport models are then presented in detail: (i) simultaneous or global model identification (SMI), and (ii) incremental model identification (IMI). In the SMI framework, as the name indicates, the rate expressions of all reactions are integrated to predict concentrations, which are fitted to measured values simultaneously via a least-squares problem. Alternatively, the identification task can be split into a sequence of sub-problems, such as the identification of stoichiometry and of rate expressions. For each sub-problem, the number of model candidates can be kept small. In addition, the information available at a given step can be used to refine the model in subsequent steps. Further, the advantages and disadvantages of these frameworks are presented.
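The contrast between the two frameworks can be made concrete on the simplest possible case, first-order decay A -> B with rate r = k*c. This is a hypothetical sketch of the two identification styles, not material from the presentation: the incremental route differentiates the data and fits the rate expression, while the simultaneous route integrates the model and fits the concentrations:

```python
import math

def simulate(k, c0, ts):
    """Analytical concentration profile for first-order decay A -> B."""
    return [c0 * math.exp(-k * t) for t in ts]

def incremental_fit(ts, cs):
    """Incremental identification (IMI style): estimate rates by finite
    differences, then fit r = k*c by least squares through the origin."""
    rates, mids = [], []
    for i in range(len(ts) - 1):
        rates.append(-(cs[i + 1] - cs[i]) / (ts[i + 1] - ts[i]))
        mids.append(0.5 * (cs[i] + cs[i + 1]))
    return sum(r * c for r, c in zip(rates, mids)) / sum(c * c for c in mids)

def simultaneous_fit(ts, cs, k_grid):
    """Simultaneous identification (SMI style): integrate the model for each
    trial k and pick the one minimizing the concentration residuals."""
    def sse(k):
        return sum((m - c) ** 2 for m, c in zip(simulate(k, cs[0], ts), cs))
    return min(k_grid, key=sse)

ts = [0.1 * i for i in range(11)]
cs = simulate(0.8, 1.0, ts)                 # noise-free synthetic data, k = 0.8
k_inc = incremental_fit(ts, cs)             # slightly biased by differentiation
k_sim = simultaneous_fit(ts, cs, [0.01 * i for i in range(1, 201)])
```

On noise-free data both routes recover k close to 0.8; the incremental estimate carries a small finite-difference error, which is exactly the trade-off the two frameworks negotiate (cheap decomposed sub-problems versus a single exact but global fit).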

  1. Experimental data showing the thermal behavior of a flat roof with phase change material.

    Science.gov (United States)

    Tokuç, Ayça; Başaran, Tahsin; Yesügey, S Cengiz

    2015-12-01

The selection and configuration of building materials for optimal energy efficiency in a building require some assumptions and models for the thermal behavior of the utilized materials. Although the models for many materials can be considered acceptable for simulation and calculation purposes, work on modeling the real-time behavior of phase change materials is still under development. The data given in this article show the thermal behavior of a flat roof element with a phase change material (PCM) layer. The temperature and the energy given to and taken from the building element are reported. In addition, the solid-liquid behavior of the PCM is tracked through images. The resulting thermal behavior of the phase change material is discussed and simulated in [1] A. Tokuç, T. Başaran, S.C. Yesügey, An experimental and numerical investigation on the use of phase change materials in building elements: the case of a flat roof in Istanbul, Energy Build., vol. 102, 2015, pp. 91-104.

  2. Experimental data showing the thermal behavior of a flat roof with phase change material

    Directory of Open Access Journals (Sweden)

    Ayça Tokuç

    2015-12-01

The selection and configuration of building materials for optimal energy efficiency in a building require some assumptions and models for the thermal behavior of the utilized materials. Although the models for many materials can be considered acceptable for simulation and calculation purposes, work on modeling the real-time behavior of phase change materials is still under development. The data given in this article show the thermal behavior of a flat roof element with a phase change material (PCM) layer. The temperature and the energy given to and taken from the building element are reported. In addition, the solid–liquid behavior of the PCM is tracked through images. The resulting thermal behavior of the phase change material is discussed and simulated in [1] A. Tokuç, T. Başaran, S.C. Yesügey, An experimental and numerical investigation on the use of phase change materials in building elements: the case of a flat roof in Istanbul, Energy Build., vol. 102, 2015, pp. 91–104.

  3. Estimation of surface absorptivity in laser surface heating process with experimental data

    International Nuclear Information System (INIS)

    Chen, H-T; Wu, X-Y

    2006-01-01

    This study applies a hybrid technique of the Laplace transform and finite-difference methods in conjunction with the least-squares method and experimental temperature data inside the test material to predict the unknown surface temperature, heat flux and absorptivity for various surface conditions in the laser surface heating process. In this study, the functional form of the surface temperature is unknown a priori and is assumed to be a function of time before performing the inverse calculation. In addition, the whole time domain is divided into several analysis sub-time intervals and then these unknown estimates on each analysis interval can be predicted. In order to show the accuracy of the present inverse method, comparisons are made among the present estimates, direct results and previous results, showing that the present estimates agree with the direct results for the simulated problem. However, the present estimates of the surface absorptivity deviate slightly from previous estimated results under the assumption of constant thermal properties. The effect of the surface conditions on the surface absorptivity and temperature is not negligible

  4. New types of experimental data shape the use of enzyme kinetics for dynamic network modeling.

    Science.gov (United States)

    Tummler, Katja; Lubitz, Timo; Schelker, Max; Klipp, Edda

    2014-01-01

Since the publication of Leonor Michaelis and Maud Menten's paper on the reaction kinetics of the enzyme invertase in 1913, molecular biology has evolved tremendously. New measurement techniques allow in vivo characterization of the whole genome, proteome or transcriptome of cells, whereas the classical enzyme assay only allows determination of the two Michaelis-Menten parameters, Vmax and Km. Nevertheless, Michaelis-Menten kinetics are still commonly used, not only in the in vitro context of enzyme characterization but also as a rate law for enzymatic reactions in larger biochemical reaction networks. In this review, we give an overview of the historical development of kinetic rate laws originating from Michaelis-Menten kinetics over the past 100 years. Furthermore, we briefly summarize the experimental techniques used for the characterization of enzymes, and discuss web resources that systematically store kinetic parameters and related information. Finally, we describe the novel opportunities that arise from using these data in dynamic mathematical modeling. In this framework, traditional in vitro approaches may be combined with modern genome-scale measurements to foster thorough understanding of the underlying complex mechanisms. © 2013 FEBS.
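The classical assay mentioned above yields the two Michaelis-Menten parameters; a minimal sketch of the textbook Lineweaver-Burk (double-reciprocal) estimate on noise-free synthetic data follows. The substrate concentrations and parameter values are illustrative, not taken from the review:

```python
def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

def fit_lineweaver_burk(S_vals, v_vals):
    """Estimate Vmax and Km from the linearization
    1/v = (Km/Vmax)*(1/S) + 1/Vmax via ordinary least squares."""
    xs = [1.0 / s for s in S_vals]
    ys = [1.0 / v for v in v_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# Noise-free synthetic assay data generated with Vmax = 2.0, Km = 1.5:
S = [0.5, 1.0, 2.0, 5.0, 10.0]
v = [michaelis_menten(s, Vmax=2.0, Km=1.5) for s in S]
Vmax_hat, Km_hat = fit_lineweaver_burk(S, v)
```

On exact data the linearization recovers the parameters perfectly; with real measurement noise the reciprocal transform amplifies errors at low v, which is one reason direct nonlinear fitting is preferred in modern practice.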

  5. Comparison of Heavy Water Reactor Thermalhydraulic Code Predictions with Small Break LOCA Experimental Data

    International Nuclear Information System (INIS)

    2012-08-01

    Activities within the frame of the IAEA's Technical Working Group on Advanced Technologies for HWRs (TWG-HWR) are conducted in a project within the IAEA's subprogramme on nuclear power reactor technology development. The objective of the activities on HWRs is to foster, within the frame of the TWG-HWR, information exchange and cooperative research on technology development for current and future HWRs, with an emphasis on safety, economics and fuel resource sustainability. One of the activities recommended by the TWG-HWR was an international standard problem exercise entitled Intercomparison and Validation of Computer Codes for Thermalhydraulics Safety Analyses. Intercomparison and validation of computer codes used in different countries for thermalhydraulics safety analyses will enhance the confidence in the predictions made by these codes. However, the intercomparison and validation exercise needs a set of reliable experimental data. Two RD-14M small break loss of coolant accident (SBLOCA) tests, simulating HWR LOCA behaviour, conducted by Atomic Energy of Canada Ltd (AECL), were selected for this validation project. This report provides a comparison of the results obtained from eight participating organizations from six countries (Argentina, Canada, China, India, Republic of Korea, and Romania), utilizing four different computer codes (ATMIKA, CATHENA, MARS-KS, and RELAP5). General conclusions are reached and recommendations made.

  6. Confronting Theoretical Predictions With Experimental Data; Fitting Strategy For Multi-Dimensional Distributions

    Directory of Open Access Journals (Sweden)

    Tomasz Przedziński

    2015-01-01

After a Resonance Chiral Lagrangian (RχL) model was developed to describe hadronic τ lepton decays [18], the model was confronted with experimental data. This was accomplished using a fitting framework developed to take into account the complexity of the model and to ensure numerical stability for the algorithms used in the fitting. Since the model used in the fit contained 15 parameters and only three 1-dimensional distributions were available, we could expect multiple local minima or even whole regions of equal potential to appear. Our methods had to explore the whole parameter space thoroughly and ensure, as well as possible, that the result is a global minimum. This paper focuses on the technical aspects of the fitting strategy used. The first approach was based on the re-weighting algorithm published in [17] and produced results in around two weeks. A later approach, with an improved theoretical model and a simple parallelization algorithm based on the Inter-Process Communication (IPC) methods of the UNIX system, reduced the computation time to 2-3 days. Additional approximations introduced to the model decreased the time to obtain preliminary results to 8 hours. This allowed better validation of the results, leading to the more robust analysis published in [12].

  7. Deposition behaviour of model biofuel ash in mixtures with quartz sand. Part 1: Experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Mischa Theis; Christian Mueller; Bengt-Johan Skrifvars; Mikko Hupa; Honghi Tran [Aabo Akademi Process Chemistry Centre, Aabo (Finland). Combustion and Materials Chemistry

    2006-10-15

    Model biofuel ash of well-defined size and melting properties was fed into an entrained flow reactor (EFR) to simulate the deposition behaviour of commercially applied biofuel mixtures in large-scale boilers. The aim was to obtain consistent experimental data that can be used for validation of computational fluid dynamics (CFD)-based deposition models. The results showed that while up to 80 wt% of the feed was lost to the EFR wall, the composition of the model ash particles collected at the reactor exit did not change. When model ashes were fed into the reactor individually, the ash particles were found to be sticky when they contained more than 15 wt% molten phase. When model ashes were fed in mixtures with silica sand, it was found that only a small amount of sand particles was captured in the deposits; the majority rebounded upon impact. The presence of sand in the feed mixture reduced the deposit buildup by more than could be expected from linear interpolation between the model ash and the sand. The results suggested that sand addition to model ash may prevent deposit buildup through erosion. 22 refs., 6 figs., 3 tabs.

  8. Optimization technique applied to interpretation of experimental data and research of constitutive laws

    International Nuclear Information System (INIS)

    Grossette, J.C.

    1982-01-01

The feasibility of an identification technique applied to one-dimensional numerical analysis of the split-Hopkinson pressure bar experiment is demonstrated. A general 1-D elastic-plastic-viscoplastic computer program was written to give an adequate solution for the elastic-plastic-viscoplastic response of a pressure bar subjected to a general Heaviside step loading function in time applied over one end of the bar. Special emphasis is placed on the response of the specimen during the first microseconds, where no equilibrium conditions can be assumed. During this transient phase, discontinuity conditions related to wave propagation are encountered and must be carefully taken into account. Having derived an adequate numerical model, the Pontryagin identification technique was applied in such a way that the unknowns are physical parameters. The solutions depend mainly on the selection of a class of proper objective functionals (cost functions), which may be combined to obtain a convenient numerical objective function. A number of significant questions arising in the choice of parameter-adjustment algorithms are discussed. In particular, this technique leads to a two-point boundary-value problem, which was solved using an iterative gradient-like technique usually referred to as a double-operator gradient method. This method combines the classical Fletcher-Powell technique and a partial quadratic technique with automatic step-size selection, and is much more efficient than the usual ones. Numerical experimentation with simulated data was performed to test the accuracy and stability of the identification algorithm and to determine the most adequate type and quantity of data for estimation purposes

  9. Data Processing and Experimental Design for Micrometeorite Impacts in Small Bodies

    Science.gov (United States)

    Jensen, E.; Lederer, S.; Smith, D.; Strojia, C.; Cintala, M.; Zolensky, M.; Keller, L.

    2014-01-01

as whole mineral rocks to investigate the differences in shock propagation when voids are present. By varying velocity, ambient temperature, and porosity, we can investigate different variables affecting impacts in the solar system. Data indicate that there is a non-linear relationship between peak shock pressure and the variation in infrared spectral absorbances caused by the distorted crystal structure. The maximum variability occurs around 37 GPa in enstatite and forsterite. The particle size distribution of the impacted material similarly changes with velocity/peak shock pressure. The experiments described above are designed to measure the near- to mid-IR effects of these changes to the mineral structure. See Lederer et al., this meeting, for additional experimental results.

  10. Experimental Seismic Event-screening Criteria at the Prototype International Data Center

    Science.gov (United States)

    Fisk, M. D.; Jepsen, D.; Murphy, J. R.

Experimental seismic event-screening capabilities are described, based on the difference of body- and surface-wave magnitudes (denoted as Ms:mb) and event depth. These capabilities have been implemented and tested at the prototype International Data Center (PIDC), based on recommendations by the IDC Technical Experts on Event Screening in June 1998. Screening scores are presented that indicate numerically the degree to which an event meets, or does not meet, the Ms:mb and depth screening criteria. Seismic events are also categorized as onshore, offshore, or mixed, based on their 90% location error ellipses and an onshore/offshore grid with five-minute resolution, although this analysis is not used at this time to screen out events. Results are presented of applications to almost 42,000 events with mb>=3.5 in the PIDC Standard Event Bulletin (SEB) and to 121 underground nuclear explosions (UNEs) at the U.S. Nevada Test Site (NTS), the Semipalatinsk and Novaya Zemlya test sites in the Former Soviet Union, the Lop Nor test site in China, and the Indian, Pakistan, and French Polynesian test sites. The screening criteria appear to be quite conservative. None of the known UNEs are screened out, while about 41 percent of the presumed earthquakes in the SEB with mb>=3.5 are screened out. UNEs at the Lop Nor, Indian, and Pakistan test sites on 8 June 1996, 11 May 1998, and 28 May 1998, respectively, have among the lowest Ms:mb scores of all events in the SEB. To assess the validity of the depth screening results, comparisons are presented of SEB depth solutions to those in other bulletins that are presumed to be reliable and independent. Using over 1600 events, the comparisons indicate that the SEB depth confidence intervals are consistent with or shallower than over 99.8 percent of the corresponding depth estimates in the other bulletins. Concluding remarks are provided regarding the performance of the experimental event-screening criteria, and plans for future

  11. Sixty years of research, 60 years of data: long-term US Forest Service data management on the Penobscot Experimental Forest

    Science.gov (United States)

    Matthew B. Russell; Spencer R. Meyer; John C. Brissette; Laura Kenefic

    2014-01-01

    The U.S. Department of Agriculture, Forest Service silvicultural experiment on the Penobscot Experimental Forest (PEF) in Maine represents 60 years of research in the northern conifer and mixedwood forests of the Acadian Forest Region. The objective of this data management effort, which began in 2008, was to compile, organize, and archive research data collected in the...

  12. Reservoir capacity estimates in shale plays based on experimental adsorption data

    Science.gov (United States)

    Ngo, Tan

    Fine-grained sedimentary rocks are characterized by a complex porous framework containing pores in the nanometer range that can store a significant amount of natural gas (or any other fluids) through adsorption processes. Although the adsorbed gas can account for a major fraction of the total gas-in-place in these reservoirs, the ability to produce it is limited, and the current technology focuses primarily on the free gas in the fractures. A better understanding and quantification of adsorption/desorption mechanisms in these rocks is therefore required, in order to allow for a more efficient and sustainable use of these resources. Additionally, while water is still predominantly used to fracture the rock, other fluids, such as supercritical CO2, are being considered; here, the idea is to reproduce a similar strategy as for the enhanced recovery of methane in deep coal seams (ECBM). Also in this case, the feasibility of CO2 injection and storage in hydrocarbon shale reservoirs requires a thorough understanding of the rock behavior when exposed to CO2, thus including its adsorption characteristics. The main objectives of this Master's Thesis are as follows: (1) to identify the main controls on gas adsorption in mudrocks (TOC, thermal maturity, clay content, etc.); (2) to create a library of adsorption data measured on shale samples at relevant conditions and to use them for estimating GIP and gas storage in shale reservoirs; (3) to build an experimental apparatus to measure adsorption properties of supercritical fluids (such as CO2 or CH4) in microporous materials; (4) to measure adsorption isotherms on microporous samples at various temperatures and pressures. The main outcomes of this Master's Thesis are summarized as follows. A review of the literature has been carried out to create a library of methane and CO2 adsorption isotherms on shale samples from various formations worldwide. Large discrepancies have been found between estimates of the adsorbed gas density...

  13. The picture of the nuclei disintegration mechanism - from nucleus-nucleus collision experimental data at high energies

    International Nuclear Information System (INIS)

    Strugalska-Gola, E.; Strugalski, Z.

    1997-01-01

    Experimental data on nuclear collisions at high energies, mainly obtained from photographic emulsions, are considered from the point of view of the experimentally prompted picture of the nuclear collision mechanism. In fact, the disintegration products of each nucleus involved in a nuclear collision are, in its own rest frame, similar to those produced by the impact of a number of nucleons with velocity equal to that of the moving primary nucleus

  14. Promoting the experimental dialogue between working memory and chunking: Behavioral data and simulation.

    Science.gov (United States)

    Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît

    2016-04-01

    Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM consists, classically, in rehearsing or refreshing it. Chunking could also be considered as a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking, by studying the effect of acronyms in a computer-paced complex span task paradigm and (2) to formalize explicitly this dialogue within a computational model. Young adults performed a WM complex span task in which they had to maintain series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and its interplay with long-term memory. Indeed, the behavioral WM performance lends evidence to the functional characteristics of chunking that seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at hand in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.

  15. Evaluation of supercritical CO2 centrifugal compressor experimental data by CFD analysis

    International Nuclear Information System (INIS)

    Takagi, Kazuhisa; Muto, Yasushi; Ishizuka, Takao; Watanabe, Noriyuki; Aritomi, Masanori

    2011-01-01

    A supercritical CO2 gas turbine operating at 20 MPa is suitable for coupling with a Na-cooled fast reactor, since the Na-CO2 reaction is mild at the outlet temperature of 800 K, the cycle thermal efficiency is relatively high, and the CO2 gas turbine is very compact. In this gas turbine cycle, the compressor operates near the critical point, where the properties of CO2, and hence the behavior of the compressible flow, change very sharply. So far, such behavior has not been examined sufficiently, so it is important to clarify compressible flow near the critical point. In this paper, experimental data from the centrifugal supercritical CO2 compressor have been evaluated by CFD analyses using the computer code 'CFX'. In the analyses, the real-gas properties of CO2 were represented by simulating the density. The test compressor was fitted with three impellers: impeller A has 16 blades and an overall diameter of 110 mm, impeller B has 16 blades and an overall diameter of 76 mm, and impeller C has 12 blades and an overall diameter of 56 mm. Each impeller has its own diffuser, so CFD analysis was conducted for each impeller-diffuser set, and the results for the three sets were compared and evaluated. The main output of the calculation is the total pressure at the diffuser outlet, which agreed very well with the experiment. Total and static pressure distributions, relative velocity distributions, and temperature distributions around the impeller and diffuser were obtained, and the adiabatic efficiency was also evaluated. (author)

  16. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    Science.gov (United States)

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
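The workflow described in this record hinges on every data file sharing one fixed layout, so that a single script can process all compounds. A minimal sketch of that idea, using an invented CSV layout and a simple log-linear interpolation of the 50% response level (both are assumptions for illustration, not the ACuteTox format or its analysis scripts):

```python
import csv
import io
import math
from collections import defaultdict

# Hypothetical standardized format: one row per measurement, with the
# same column names across every compound and every laboratory.
RAW = """compound,concentration,response
A,0.1,98
A,1.0,85
A,10.0,40
A,100.0,5
B,0.1,99
B,1.0,97
B,10.0,90
B,100.0,60
"""

def ic50(points):
    """Interpolate the 50% response crossing on a log10(concentration)
    axis, assuming responses decrease with increasing concentration."""
    pts = sorted(points)
    for (c1, r1), (c2, r2) in zip(pts, pts[1:]):
        if r1 >= 50 >= r2:
            f = (r1 - 50) / (r1 - r2)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    return None  # the 50% level is not bracketed by the data

# Because the layout is fixed, one loop handles every compound.
by_compound = defaultdict(list)
for row in csv.DictReader(io.StringIO(RAW)):
    by_compound[row["compound"]].append(
        (float(row["concentration"]), float(row["response"])))

results = {name: ic50(pts) for name, pts in by_compound.items()}
```

Returning `None` when the 50% level is never reached mirrors the practical need, noted in the record, for consistent handling of incomplete concentration-response curves across hundreds of files.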

  17. LCA of management strategies for RDF incineration and gasification bottom ash based on experimental leaching data.

    Science.gov (United States)

    Di Gianfilippo, Martina; Costa, Giulia; Pantini, Sara; Allegrini, Elisa; Lombardi, Francesco; Astrup, Thomas Fruergaard

    2016-01-01

    The main characteristics and environmental properties of the bottom ash (BA) generated from thermal treatment of waste may vary significantly depending on the type of waste and thermal technology employed. Thus, to ensure that the strategies selected for the management of these residues do not cause adverse environmental impacts, the specific properties of BA, in particular its leaching behavior, should be taken into account. This study focuses on the evaluation of potential environmental impacts associated with two different management options for BA from thermal treatment of Refuse Derived Fuel (RDF): landfilling and recycling as a filler for road sub bases. Two types of thermal treatment were considered: incineration and gasification. Potential environmental impacts were evaluated by life-cycle assessment (LCA) using the EASETECH model. Both non-toxicity related impact categories (i.e. global warming and mineral abiotic resource depletion) and toxic impact categories (i.e. human toxicity and ecotoxicity) were assessed. The system boundaries included BA transport from the incineration/gasification plants to the landfills and road construction sites, leaching of potentially toxic metals from the BA, the avoided extraction, crushing, transport and leaching of virgin raw materials for the road scenarios, and material and energy consumption for the construction of the landfills. To provide a quantitative assessment of the leaching properties of the two types of BA, experimental leaching data were used to estimate the potential release from each of the two types of residues. Specific attention was placed on the sensitivity of leaching properties and the determination of emissions by leaching, including: leaching data selection, material properties and assumptions related to emission modeling. The LCA results showed that for both types of BA, landfilling was associated with the highest environmental impacts in the non-toxicity related categories. For the toxicity

  18. Is there any evidence that cerebral protection is beneficial? Experimental data.

    Science.gov (United States)

    Macdonald, S

    2006-04-01

    This article presents the available experimental data from the world literature on the use of cerebral protection devices during carotid artery stenting (CAS). Clinical studies relying on surrogate markers of cerebral embolisation in place of neurological event rate as primary outcome measures are evaluated alongside bench-top and animal studies. These surrogate markers include evaluations of outcomes using procedural transcranial Doppler (TCD) and diffusion-weighted magnetic resonance imaging of brain (DWI). Pathological analyses of debris retrieved from in-vivo analyses of protection devices are also included in this review because although the focus of these studies was primarily clinical, the laboratory data will be preferentially presented and it provides interesting insights. It can be shown that each of the three philosophies of cerebral protection, namely flow arrest (proximal or distal), flow reversal and distal filtration is capable of the entrapment of sizeable debris that would logically threaten devastating stroke if it embolized to the brain. Whilst balloon occlusion significantly reduces the procedural microembolic load (particles less than 60 μm) and flow reversal may be the first means to entirely eliminate it, filters may be associated with increased microembolization. This has been described by some workers as controlled embolization. Certainly, particles smaller than the pore size of currently available filters (60-140 μm) will readily evade capture due to filter periflow and through-flow. There is evidence to suggest that tens of thousands of particles of this size may be released during CAS and there is some evidence that this may be associated with more new white lesions on DWI of brain. The clinical consequences of this controlled embolization however, remain unclear and sophisticated neuropsychometric test batteries may need to be applied at later time points to detect subtle injury that may be compounded by a late inflammatory response...

  19. Overview of Experimental Pulse-Doppler Radar Data Collected Oct 1999

    National Research Council Canada - National Science Library

    Hughes, Steven

    2000-01-01

    The Defence Research Establishment Ottawa has designed and constructed an experimental air-to-air radar system as the first step in demonstrating an air-to-air surveillance capability for the Canadian...

  20. Pore Size Distribution Influence on Suction Properties of Calcareous Stones in Cultural Heritage: Experimental Data and Model Predictions

    Directory of Open Access Journals (Sweden)

    Giorgio Pia

    2016-01-01

    Water sorptivity is an important property associated with the preservation of porous construction materials. Water movement into the microstructure is responsible for the deterioration of different types of materials and, consequently, for the worsening of indoor comfort. In this context, experimental sorptivity tests are often impractical, because they require large quantities of material in order to statistically validate the results. For these reasons, the development of an analytical procedure for indirect sorptivity estimation from mercury intrusion porosimetry (MIP) data would be highly beneficial. In this work, an Intermingled Fractal Units (IFU) model has been proposed to evaluate the sorptivity coefficient of the calcareous stones most used in the historical buildings of Cagliari, Sardinia. The results are compared with experimental data as well as with two other models found in the literature. The IFU model fits the experimental data better than the other two models, and it represents an important tool for estimating the service life of porous building materials.
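Sorptivity itself is defined by cumulative capillary uptake growing linearly with the square root of time, i = S·√t. Estimating S from uptake measurements can be sketched as a least-squares slope through the origin (the units and the synthetic data below are illustrative; this is not the IFU model of the record):

```python
import math

def sorptivity(times_s, uptake_mm):
    """Least-squares slope, through the origin, of cumulative uptake
    i [mm] versus sqrt(t) [s^0.5], i.e. the S in i = S * sqrt(t).
    The units mm / s^0.5 are an assumption for illustration."""
    roots = [math.sqrt(t) for t in times_s]
    num = sum(r * i for r, i in zip(roots, uptake_mm))
    den = sum(r * r for r in roots)
    return num / den

# Synthetic check: uptake generated with a known S = 0.05 mm/s^0.5.
times = [60, 240, 540, 960]
uptake = [0.05 * math.sqrt(t) for t in times]
S = sorptivity(times, uptake)
```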

  1. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  2. MONJU experimental data analysis and its feasibility evaluation to build up the standard data base for large FBR nuclear core design

    International Nuclear Information System (INIS)

    Sugino, K.; Iwai, T.

    2006-01-01

    MONJU experimental data analysis was performed by using the detailed calculation scheme for fast reactor cores developed in Japan. Subsequently, the feasibility of using the MONJU integral data was evaluated by the cross-section adjustment technique for FBR nuclear core design. It is concluded that the MONJU integral data is quite valuable for building up the standard data base for large FBR nuclear core design. In addition, it is found that applying the updated data base could considerably improve the prediction accuracy of neutronic parameters for MONJU. (authors)

  3. Comparisons of experimental beta-ray spectra important to decay heat predictions with ENSDF [Evaluated Nuclear Structure Data File] evaluations

    International Nuclear Information System (INIS)

    Dickens, J.K.

    1990-03-01

    Graphical comparisons of recently obtained experimental beta-ray spectra with predicted beta-ray spectra based on the Evaluated Nuclear Structure Data File are exhibited for 77 fission products having masses 79--99 and 130--146 and lifetimes between 0.17 and 23650 sec. The comparisons range from very poor to excellent. For beta decay of 47 nuclides, estimates are made of ground-state transition intensities. For 14 cases the value in ENSDF gives results in very good agreement with the experimental data. 12 refs., 77 figs., 1 tab

  4. Current status of the European contribution to the Remote Data Access System of the ITER Remote Experimentation Centre

    International Nuclear Information System (INIS)

    De Tommasi, G.; Manduchi, G.; Muir, D.G.; Ide, S.; Naito, O.; Urano, H.; Clement-Lorenzo, S.; Nakajima, N.; Ozeki, T.; Sartori, F.

    2015-01-01

    The ITER Remote Experimentation Centre (REC) is one of the projects under implementation within the BA agreement. The final objective of the REC is to allow researchers to take part in the experimentation on ITER from a remote location. Before ITER first operations, the REC will be used to evaluate ITER-relevant technologies for remote participation. Among the different software tools needed for remote participation, an important one is the Remote Data Access System (RDA), which provides a single software infrastructure to access data stored at the remotely participating experiment, regardless of the geographical location of the users. This paper introduces the European contribution to the RDA system for the REC.

  5. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); and others

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
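Of the three reconstruction algorithms named in this record, Center-of-Gravity is the simplest: the event position is the signal-weighted centroid of the sensor positions, and the summed signal serves as a crude energy estimate. A minimal sketch (the geometry and signals are invented for illustration; the ANTS module implements far more than this):

```python
def center_of_gravity(signals, positions):
    """Center-of-Gravity (centroid) estimate for an Anger-type camera.

    `signals` and `positions` are parallel lists: one signal amplitude
    per sensor and one (x, y) sensor position, in arbitrary units.
    """
    total = sum(signals)
    x = sum(s * p[0] for s, p in zip(signals, positions)) / total
    y = sum(s * p[1] for s, p in zip(signals, positions)) / total
    return x, y, total  # the summed signal doubles as an energy proxy

# Four sensors on a unit square; an event near the right edge lights
# the right-hand sensors more strongly.
pmts = [(0, 0), (1, 0), (0, 1), (1, 1)]
sig = [1.0, 3.0, 1.0, 3.0]
x, y, energy = center_of_gravity(sig, pmts)
```

The well-known limitation of this estimator, and the reason the record also offers Maximum Likelihood and Least Squares, is that the centroid is biased toward the detector center for events near the edges.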

  6. Experimental simulation: using generative modelling and palaeoecological data to understand human-environment interactions

    Directory of Open Access Journals (Sweden)

    George Perry

    2016-10-01

    The amount of palaeoecological information available continues to grow rapidly, providing improved descriptions of the dynamics of past ecosystems and enabling them to be seen from new perspectives. At the same time, there has been concern over whether palaeoecological enquiry needs to move beyond descriptive inference to a more hypothesis-focussed or experimental approach; however, the extent to which conventional hypothesis-driven scientific frameworks can be applied to historical contexts (i.e., the past) is the subject of ongoing debate. In other disciplines concerned with human-environment interactions, including physical geography and archaeology, there has been growing use of generative simulation models, typified by agent-based approaches. Generative modelling encourages counter-factual questioning (what if...?), a mode of argument that is particularly important in systems and time-periods, such as the Holocene and now the Anthropocene, where the effects of humans and other biophysical processes are deeply intertwined. However, palaeoecologically focused simulation of the dynamics of the ecosystems of the past either seems to be conducted to assess the applicability of some model to the future or treats humans simplistically as external forcing factors. In this review we consider how generative simulation-modelling approaches could contribute to our understanding of past human-environment interactions. We consider two key issues: the need for null models for understanding past dynamics and the need to be able to learn more from pattern-based analysis. In this light, we argue that there is considerable scope for palaeoecology to benefit from developments in generative models and their evaluation. We discuss the view that simulation is a form of experiment and, by using case studies, consider how the many patterns available to palaeoecologists can support model evaluation in a way that moves beyond simplistic pattern-matching and how such models...

  7. Navier-Stokes analysis and experimental data comparison of compressible flow within ducts

    Science.gov (United States)

    Harloff, G. J.; Reichert, B. A.; Sirbaugh, J. R.; Wellborn, S. R.

    1992-01-01

    Many aircraft employ ducts with centerline curvature or changing cross-sectional shape to join the engine with inlet and exhaust components. S-ducts convey air to the engine compressor from the intake and often decelerate the flow to achieve an acceptable Mach number at the engine compressor by increasing the cross-sectional area downstream. Circular-to-rectangular transition ducts are used on aircraft with rectangular exhaust nozzles to connect the engine and nozzle. To achieve maximum engine performance, the ducts should minimize flow total pressure loss and total pressure distortion at the duct exit. Changes in the curvature of the duct centerline or the duct cross-sectional shape give rise to streamline curvature which causes cross stream pressure gradients. Secondary flows can be caused by deflection of the transverse vorticity component of the boundary layer. This vortex tilting results in counter-rotating vortices. Additionally, the adverse streamwise pressure gradient caused by increasing cross-sectional area can lead to flow separation. Vortex pairs have been observed in the exit planes of both duct types. These vortices are due to secondary flows induced by pressure gradients resulting from streamline curvature. Regions of low total pressure are produced when the vortices convect boundary layer fluid into the main flow. The purpose of the present study is to predict the measured flow field in a diffusing S-duct and a circular-to-rectangular transition duct with a full Navier-Stokes computer program, PARC3D, and to compare the numerical predictions with new detailed experimental measurements. The work was undertaken to extend previous studies and to provide additional CFD validation data needed to help model flows with strong secondary flow and boundary layer separation. The S-duct computation extends the studies of Smith et al. and Harloff et al., which concluded that the computation might be improved by using a finer grid and more advanced turbulence models.

  8. Experimental data and boundary conditions for a Double - Skin Facade building in preheating mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    Frequent discussions of double skin façade energy performance have started a dialogue about the methods, models and tools for simulation of double façade systems and reliability of their results. Their reliability will increase with empirical validation of the software. Detailed experimental work......’. This covers such problem areas as measurements of naturally induced air flow, measurements of air temperature under direct solar radiation exposure, etc. Finally, in order to create a solid foundation for software validation, the uncertainty and limitations in the experimental results are discussed. In part...

  9. Experimental Demonstration of 32 Gbaud 4-PAM for Data Center Interconnections of up to 320 km

    DEFF Research Database (Denmark)

    Madsen, Peter; Suhr, Lau Frejstrup; Clausen, Anders

    2017-01-01

    This paper presents experimental results demonstrating a 64 Gbps 4-PAM transmission over a 320 km SSMF span employing standard 80 km fiber spans for metro links. The receiver consists of a LPF and a DFE utilizing the DD-LMS algorithm.
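The DD-LMS idea referenced here, using the receiver's own symbol decisions as the reference signal for LMS tap updates, can be sketched for 4-PAM with a plain linear equalizer (the record's receiver uses a DFE; the channel, tap count, and step size below are invented for illustration):

```python
# Nominal 4-PAM symbol alphabet.
LEVELS = [-3, -1, 1, 3]

def decide(x):
    """Slice a sample to the nearest 4-PAM level."""
    return min(LEVELS, key=lambda s: abs(s - x))

def lms_equalize(rx, n_taps=5, mu=0.01):
    """Decision-directed LMS with a linear feed-forward filter.

    The error driving the tap update is (decision - output), so no
    training sequence is needed once decisions are mostly correct.
    """
    w = [0.0] * n_taps
    w[n_taps // 2] = 1.0          # centre-spike initialisation
    buf = [0.0] * n_taps
    out = []
    for sample in rx:
        buf = [sample] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        d = decide(y)              # decision-directed reference
        e = d - y
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        out.append(d)
    return out

# Toy check: symbols through a mild ISI channel rx[k] = s[k] + 0.1*s[k-1].
sym = [3, -1, 1, -3, 3, 1, -1, -3, 1, 3]
rx = [s + 0.1 * p for s, p in zip(sym, [0] + sym[:-1])]
out = lms_equalize(rx)   # decisions are delayed by n_taps // 2 samples
```

Decision-directed operation works here because the residual ISI (at most 0.3) is well inside the decision margin of 1; with heavier distortion, a trained start-up would be needed before switching to DD mode.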

  10. Experimental data and boundary conditions for a Double-Skin Facade building in external air curtain mode

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Heiselberg, Per; Jensen, Rasmus Lund

    Frequent discussions of double skin façade energy performance have started a dialogue about the methods, models and tools for simulation of double façade systems and reliability of their results. Their reliability will increase with empirical validation of the software. Detailed experimental work...... was carried out in a full scale test facility ‘The Cube’, in order to compile three sets of high quality experimental data for validation purposes. The data sets are available for preheating mode, external air curtain mode and transparent insulation mode. The objective of this article is to provide the reader......’. This covers such problem areas as measurements of naturally induced air flow, measurements of air temperature under direct solar radiation exposure, etc. Finally, in order to create a solid foundation for software validation, the uncertainty and limitations in the experimental results are discussed. In part...

  11. Neutron Elastic Scattering Cross Sections Experimental Data and Optical Model Cross Section Calculations. A Compilation of Neutron Data from the Studsvik Neutron Physics Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Holmqvist, B; Wiedling, T

    1969-06-15

    Neutron elastic scattering cross section measurements have been going on for a long period at the Studsvik Van de Graaff laboratory. The cross sections of a range of elements have been investigated in the energy interval 1.5 to 8 MeV. The experimental data have been compared with cross sections calculated with the optical model when using a local nuclear potential.

  12. Extraction of potential energy in charge asymmetry coordinate from experimental fission data

    Energy Technology Data Exchange (ETDEWEB)

    Pasca, H. [Joint Institute for Nuclear Research, Dubna (Russian Federation); "Babes-Bolyai" Univ., Cluj-Napoca (Romania); Andreev, A.V.; Adamian, G.G. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Antonenko, N.V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Tomsk Polytechnic Univ. (Russian Federation). Mathematical Physics Dept.

    2016-12-15

    For fissioning isotopes of Ra, Ac, Th, Pa, and U, the potential energies as a function of the charge asymmetry coordinate are extracted from the experimental charge distributions of the fission fragments and compared with the calculated scission-point driving potentials. The role of the potential energy surfaces in the description of the fission charge distribution is discussed. (orig.)

  13. Randomization and Data-Analysis Items in Quality Standards for Single-Case Experimental Studies

    Science.gov (United States)

    Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick

    2015-01-01

    Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…

  14. Computer subroutines to aid analysis of experimental data from thermocouples and pressure transducers

    International Nuclear Information System (INIS)

    Durham, M.E.

    1976-08-01

    Three subroutines (CALSET, CALBR8 and PTRCAL) have been written to provide a convenient system for converting experimental measurements obtained from thermocouples and pressure transducers to temperatures and pressures. The method of operation and the application of the subroutines are described. (author)
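A calibration-conversion subroutine of this kind typically interpolates a measured EMF in a stored calibration table. A minimal sketch of that approach in Python (the table values are invented for illustration; a real routine would embed the standard reference function for the specific thermocouple type):

```python
from bisect import bisect_left

# Hypothetical calibration table (mV -> °C), monotonic in both columns.
CAL_MV = [0.0, 1.0, 2.0, 4.0, 8.0]
CAL_C = [0.0, 25.0, 49.0, 97.0, 196.0]

def mv_to_celsius(mv):
    """Piecewise-linear conversion of a thermocouple EMF reading to
    temperature, analogous in spirit to what CALBR8 does for raw
    measurements (the table here is illustrative, not a real one)."""
    if not CAL_MV[0] <= mv <= CAL_MV[-1]:
        raise ValueError("reading outside calibration range")
    i = bisect_left(CAL_MV, mv)
    if CAL_MV[i] == mv:
        return CAL_C[i]
    x0, x1 = CAL_MV[i - 1], CAL_MV[i]
    y0, y1 = CAL_C[i - 1], CAL_C[i]
    return y0 + (mv - x0) * (y1 - y0) / (x1 - x0)
```

Rejecting out-of-range readings, rather than extrapolating, is the safer convention for experimental data reduction, since a reading outside the calibrated span usually signals a faulty channel.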

  15. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Sher, R.; Fiarman, S.

    1976-10-01

    Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT(D2O) and TRX benchmark lattices, have been re-examined and revised. The revisions correct several systematic errors that have been previously ignored or considered insignificant. These systematic errors are discussed in detail. The final corrected values are presented

  16. Numerical analysis of an experimental data base for tubes pulled in flexion

    International Nuclear Information System (INIS)

    Langlois, R.

    1998-01-01

    The aim of this study is the simulation and interpretation of experimental results on the maximum loading that tubes are able to carry. The tubes are components of the primary circuit of German light-water power reactors, both boiling and non-boiling. The crack propagation under loading is evaluated. (A.L.B.)

  17. End-to-side nerve neurorrhaphy: critical appraisal of experimental and clinical data.

    Science.gov (United States)

    Fernandez, E; Lauretti, L; Tufo, T; D'Ercole, M; Ciampini, A; Doglietto, F

    2007-01-01

    End-to-side neurorrhaphy (ESN) or terminolateral neurorrhaphy consists of connecting the distal stump of a transected nerve, named the recipient nerve, to the side of an intact adjacent nerve, named the donor nerve, "in which only an epineurial window is performed". This procedure was reintroduced in 1994 by Viterbo, who presented a report on an experimental study in rats. Several experimental and clinical studies followed this report with various and sometimes conflicting results. In this paper we present a review of the pertinent literature. Our personal experience using a sort of end-to-side nerve anastomosis, in which the donor nerve is partially transected, is also presented and compared with ESN as defined above. When the proximal nerve stump of a transected nerve is not available, ESN, which is claimed to permit anatomic and functional preservation of the donor nerve, seems an attractive technique, though not yet proven to be effective. Deliberate axotomy of the donor nerve yields results that are proportional to the extent of axotomy, but such a technique, though resembling ESN, is an end-to-end neurorrhaphy. Neither experimental nor clinical evidence supports liberalizing the clinical use of ESN, a procedure with only an epineurial window in the donor nerve and without deliberate axotomy. Much more experimental investigation needs to be done to explain the ability of normal, intact nerves to sprout laterally. Such a procedure appears justified only in an investigational setting.

  18. COMPARISON OF EXPERIMENTAL-DESIGNS COMBINING PROCESS AND MIXTURE VARIABLES .2. DESIGN EVALUATION ON MEASURED DATA

    NARCIS (Netherlands)

    DUINEVELD, C. A. A.; Smilde, A. K.; Doornbos, D. A.

    1993-01-01

    The construction of a small experimental design for a combination of process and mixture variables is a problem which has not been solved completely by now. In a previous paper we evaluated some designs with theoretical measures. This second paper evaluates the capabilities of the best of these

  20. Presentation and comparison of experimental critical heat flux data at conditions prototypical of light water small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Greenwood, M.S., E-mail: greenwoodms@ornl.gov; Duarte, J.P.; Corradini, M.

    2017-06-15

    Highlights: • Low mass flux and moderate to high pressure CHF experimental results are presented. • Facility uses a chopped-cosine heater profile in a 2 × 2 square bundle geometry. • The EPRI, CISE-GE, and W-3 CHF correlations provide reasonable average CHF prediction. • Neural network analysis predicts experimental data and demonstrates the utility of the method. - Abstract: The critical heat flux (CHF) is a two-phase flow phenomenon which rapidly decreases the efficiency of heat transfer at a heated surface. This phenomenon is one of the limiting criteria in the design and operation of light water reactors. Deviations of operating parameters greatly alter the CHF condition, which must be experimentally determined for any new parameters such as those proposed in small modular reactors (SMRs) (e.g. moderate to high pressure and low mass fluxes). The current open literature provides too little data for functional use at the proposed conditions of prototypical SMRs. This paper presents a brief summary of CHF data acquired from an experimental facility at the University of Wisconsin-Madison designed and built to study CHF at high pressure and low mass flux ranges in a 2 × 2 chopped-cosine rod bundle prototypical of conceptual SMR designs. The experimental CHF test inlet conditions range over pressures of 8–16 MPa, mass fluxes of 500–1600 kg/m2 s, and inlet water subcooling from 250 to 650 kJ/kg. The experimental data are also compared against several accepted prediction methods whose application ranges are most similar to the test conditions.
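    Correlations such as EPRI and W-3 take local conditions as input; the local equilibrium quality follows from a simple channel heat balance. The sketch below is illustrative only: the geometry and fluid-property values are assumptions, not parameters of the facility described above.

```python
# Heat-balance estimate of the local equilibrium quality at the bundle
# exit, the kind of local condition fed into CHF correlations such as
# EPRI and W-3. Every numerical value below is an illustrative
# assumption, not data from the experiment.

def equilibrium_quality(q_flux, heated_perimeter, length, mass_flux, area,
                        inlet_subcooling, h_fg):
    """x_e = (q'' * P_h * L / (G * A) - dh_sub) / h_fg"""
    enthalpy_rise = q_flux * heated_perimeter * length / (mass_flux * area)
    return (enthalpy_rise - inlet_subcooling) / h_fg

x_exit = equilibrium_quality(
    q_flux=1.0e6,            # heat flux, W/m^2 (assumed)
    heated_perimeter=0.12,   # m (assumed for a 2 x 2 bundle)
    length=2.0,              # heated length, m (assumed)
    mass_flux=1000.0,        # kg/m^2 s, within the tested 500-1600 range
    area=5.0e-4,             # flow area, m^2 (assumed)
    inlet_subcooling=400e3,  # J/kg, within the tested 250-650 kJ/kg range
    h_fg=1.0e6,              # latent heat of vaporization, J/kg (assumed)
)
print(round(x_exit, 3))
```

    A positive exit quality, as here, indicates bulk boiling at the outlet; a negative value would indicate the channel is still subcooled.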

  1. Numerical Validation of a Vortex Model against Experimental Data on a Straight-Bladed Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Eduard Dyachuk

    2015-10-01

    Cyclic blade motion during operation of vertical axis wind turbines (VAWTs) imposes challenges on simulation models of VAWT aerodynamics. A two-dimensional vortex model is validated against new experimental data from a 12-kW straight-bladed VAWT operated at an open site. The results for the normal force on one blade are analyzed. The model is assessed against the measured data over a wide range of tip speed ratios, from 1.8 to 4.6. The predicted results within one revolution have a similar shape and magnitude to the measured data, though the model does not reproduce every detail of the experimental data. The present model can be used when dimensioning the turbine for maximum loads.

  2. Experimental data from irradiation of physical detectors disclose weaknesses in basic assumptions of the δ ray theory of track structure

    DEFF Research Database (Denmark)

    Olsen, K. J.; Hansen, Jørgen-Walther

    1985-01-01

    The applicability of track structure theory has been tested by comparing predictions based on the theory with experimental high-LET dose-response data for the amino acid alanine and a nylon-based radiochromic dye film radiation detector. The linear energy transfer, LET, has been varied from 28...

  3. Proposal for the transmittal of data to LASL and the reporting of TRAC analyses for the multinational reflood experimental program

    International Nuclear Information System (INIS)

    Bleiweis, P.B.; Kirchner, W.L.; Sicilian, J.M.

    1979-04-01

    The proposed form of the digital tape containing the reduced experimental data from any of the 2D/3D facilities (CCTF, SCTF, UPTF, and possibly PKL Core-II) and the procedures which LASL will use in performing TRAC calculations and reporting results are described in this document

  4. Development and testing of an Internet-based data collection technique for simulator and real world experimentation

    International Nuclear Information System (INIS)

    Droeivoldsmo, Asgeir; Johnsen, Terje

    2005-09-01

    With experience from many years of data collection in the Man-Machine and Virtual Reality Laboratories at the OECD Halden Reactor Project, an evident need for more efficient handling of questionnaire data was documented. A working prototype on-line system for World Wide Web (www) questionnaire generation and data collection was developed and tested. This paper discusses the use of www-based data collection and the need for system functionality in experiments and surveys. Insights from the development of the system are reported, together with experiences using such tools in simulation and realistic field experimentation. (Author)

  5. Airborne release fractions/rates and respirable fractions for nonreactor nuclear facilities. Volume 1, Analysis of experimental data

    International Nuclear Information System (INIS)

    1994-12-01

    This handbook contains (1) a systematic compilation of airborne release and respirable fraction experimental data for nonreactor nuclear facilities, (2) assessments of the data, and (3) values derived from assessing the data that may be used in safety analyses when the data are applicable. To assist in consistent and effective use of this information, the handbook provides: identification of a consequence determination methodology in which the information can be used; discussion of the applicability of the information and its general technical limits; identification of specific accident phenomena of interest for which the information is applicable; and examples of use of the consequence determination methodology and airborne release and respirable fraction information
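    The consequence determination methodology that such airborne release and respirable fraction data feed into is commonly expressed as a five-factor product for the respirable source term. A minimal sketch, with all numeric inputs invented for illustration rather than taken from the handbook:

```python
# Five-factor source-term formula commonly used together with airborne
# release fraction (ARF) and respirable fraction (RF) data:
#   ST = MAR * DR * ARF * RF * LPF
# All numeric values below are invented for illustration; they are not
# bounding values from the handbook.

def respirable_source_term(mar_g, damage_ratio, arf, rf, lpf=1.0):
    """Respirable source term in grams: material-at-risk x damage ratio
    x airborne release fraction x respirable fraction x leak path factor."""
    return mar_g * damage_ratio * arf * rf * lpf

st = respirable_source_term(mar_g=1000.0,   # material at risk, g (assumed)
                            damage_ratio=0.5,
                            arf=1e-3,
                            rf=0.5,
                            lpf=1.0)
print(st)
```

    Each factor is bounded between 0 and 1 except the material at risk, so the product can only reduce the inventory considered available for release.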

  6. System for the experimental data acquisition, processing and output on the base of the double-input CAMAC modules

    International Nuclear Information System (INIS)

    Avramenko, A.E.; Ariskin, N.I.; Samojlov, V.V.

    1983-01-01

    A system for experimental data acquisition, processing and output developed on the basis of the double-input CAMAC module is described. Use of a double-input on-line memory unit with a capacity of up to 64 kbytes for experimental data storage, together with an external input controller, yields a data input/output cycle time in the storage of 1.6 μs. The rates of experimental data acquisition and output do not depend on the computer response time or the CAMAC cycle duration; they are determined only by the capabilities of the functional modules. Overlapping of data acquisition, processing and output operations is possible. A library of subroutines supporting processing in an on-line system with the SM-4, SM-3 and "Electronika-60" computers has been developed for the system. Subroutines of this library can be called from code written in FORTRAN and MACROASSEMBLER, and they provide: input/output to/from the computer buffer storage, synchronization of input/output operations, readout from the buffer storage to the computer storage, and recording of data from the storage to the buffer storage
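    The overlap of acquisition and readout that a double-input memory permits can be pictured as a ping-pong buffer scheme. The sketch below is a pure software analogy: the bank size, event loop and helper names are invented for illustration, and no real CAMAC I/O is performed.

```python
# Software analogy of a double-input ("ping-pong") memory scheme: while
# the "hardware" fills one memory bank, the computer reads out the other,
# so acquisition and readout can overlap. Purely illustrative.

BANK_SIZE = 8  # words per memory bank (illustrative)

def acquire(bank, event_no):
    """Stand-in for the external input controller filling a bank."""
    for i in range(BANK_SIZE):
        bank[i] = event_no * 100 + i

def readout(bank):
    """Stand-in for the computer reading a bank into its own memory."""
    return list(bank)

banks = [[0] * BANK_SIZE, [0] * BANK_SIZE]
collected = []
for event in range(4):
    active = event % 2               # bank currently being filled
    acquire(banks[active], event)
    if event > 0:                    # previous event sits in the other bank
        collected.extend(readout(banks[1 - active]))
collected.extend(readout(banks[1]))  # drain the bank filled by the last event
print(len(collected))                # 32 words: 4 events x 8 words each
```

    In the real system the two sides run concurrently in hardware; the sequential loop here only shows the bank-swapping bookkeeping.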

  7. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DEFF Research Database (Denmark)

    Morell, William C.; Birkel, Garrett W.; Forrer, Mark

    2017-01-01

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high quality data to be parametrized and tested, which are not gener...... algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  8. Loss of vacuum accident (LOVA): Comparison of computational fluid dynamics (CFD) flow velocities against experimental data for the model validation

    International Nuclear Information System (INIS)

    Bellecci, C.; Gaudio, P.; Lupelli, I.; Malizia, A.; Porfiri, M.T.; Quaranta, R.; Richetta, M.

    2011-01-01

    A recognized safety issue for future fusion reactors fueled with deuterium and tritium is the generation of sizeable quantities of dust. Several mechanisms resulting from material response to plasma bombardment in normal and off-normal conditions are responsible for generating dust of micron and sub-micron length scales inside the VV (Vacuum Vessel) of experimental fusion facilities. Loss of coolant accidents (LOCA), loss of coolant flow accidents (LOFA) and loss of vacuum accidents (LOVA) are types of accidents, expected in experimental fusion reactors like ITER, that may jeopardize component and plasma vessel integrity and cause dust mobilization posing risks to workers and the public. The air velocity is the driving parameter for dust resuspension, and its characterization in the very first phase of the accident is critical for the dust release. To study the air velocity trend, a small facility, Small Tank for Aerosol Removal and Dust (STARDUST), was set up at the University of Rome 'Tor Vergata', in collaboration with the ENEA Frascati laboratories. It simulates a low pressurization rate (300 Pa/s) LOVA event in ITER due to a small air inlet from two different leak positions: at the equatorial port level and at the divertor port level. The velocity magnitude in STARDUST was investigated in order to map the velocity field by means of a punctual capacitive transducer placed inside STARDUST without obstacles. FLUENT was used to simulate the flow behavior for the same LOVA scenarios used during the experimental tests. The results of these simulations were compared against the experimental data for CFD code validation. For validation purposes, the CFD simulation data were extracted at the same locations as the experimental data were collected, for the first four seconds, because the maximum velocity values (which could cause almost complete dust mobilization) were measured at the beginning of the experiments. In this paper the authors present and discuss the

  9. Three-dimensional inviscid analysis of radial-turbine flow and a limited comparison with experimental data

    Science.gov (United States)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  10. Some experimental data on accommodation coefficients for the noble ions on metal surfaces

    International Nuclear Information System (INIS)

    Gusev, K.I.; Rijov, Y.A.; Shkarban, I.I.

    1974-01-01

    Methods and results of experimental measurements of the energy accommodation for Ar+, Kr+, and Xe+ ions with initial energy E0 = 100–500 eV bombarding Cu, Mo, Ag and other foil targets (including a Mo monocrystal) are presented. The angular dependencies of the energy accommodation coefficient are obtained within the range phi = 0–70 deg (phi is the angle between the target surface normal and the beam direction)

  11. A local effect model-based interpolation framework for experimental nanoparticle radiosensitisation data

    OpenAIRE

    Brown, Jeremy M. C.; Currell, Fred J.

    2017-01-01

    A local effect model (LEM)-based framework capable of interpolating nanoparticle-enhanced photon-irradiated clonogenic cell survival fraction measurements as a function of nanoparticle concentration was developed and experimentally benchmarked for gold nanoparticle (AuNP)-doped bovine aortic endothelial cells (BAECs) under superficial kilovoltage X-ray irradiation. For three different superficial kilovoltage X-ray spectra, the BAEC survival fraction response was predicted for two different Au...

  12. Comparison between 2D turbulence model ESEL and experimental data from AUG and COMPASS tokamaks

    DEFF Research Database (Denmark)

    Ondac, Peter; Horacek, Jan; Seidl, Jakub

    2015-01-01

    In this article we have used the 2D fluid turbulence numerical model, ESEL, to simulate turbulent transport in edge tokamak plasma. Basic plasma parameters from the ASDEX Upgrade and COMPASS tokamaks are used as input for the model, and the output is compared with experimental observations obtain...... for an extension of the ESEL model from 2D to 3D to fully resolve the parallel dynamics, and the coupling from the plasma to the sheath....

  13. On the uncertainty of experimental nuclear data. Taking a lesson from the other

    International Nuclear Information System (INIS)

    Harada, Hideo

    2013-01-01

    Possible paths to obtain the nuclear data with the required target accuracy are discussed based on the lessons from the research field of fundamental physical constants and recent advancements on nuclear data measurement techniques. (author)

  14. Component failure-rate data with potential applicability to the hot experimental facility. Technical information

    International Nuclear Information System (INIS)

    Dexter, A.H.

    1980-12-01

    A literature search, that was aided by computer searches of a number of data bases, resulted in the compilation of approximately 1223 pieces of component failure-rate data under 136 subject categories. The data bank can be provided upon request as a punched-card deck or on magnetic tape

  15. JEF-PC 2.0. A PC program for viewing evaluated and experimental data

    International Nuclear Information System (INIS)

    Konieczny, M.

    1997-01-01

    In an attempt to make nuclear data more easily accessible to a wider user community, as well as providing a useful tool for experienced users, the NEA has supported the development of PC software for accessing and displaying nuclear data in a user-friendly and intuitive manner. The data contained in JEF-PC version 2.0 is predominantly taken from the Joint Evaluated File (JEF-2.2). The JEF-2.2 library comprises sets of evaluated nuclear data, mainly for fission reactor applications; it contains a number of different types of data, including neutron interaction data, radioactive decay data and fission yield data. The package consists of a central 'driver' program displaying an electronic representation of the Chart of the Nuclides, from which a target nuclide is selected. Through this interface a number of peripheral database modules, containing different categories of basic nuclear data, can be accessed. Cross section data, radioactive decay data, and fission yield data are available in separate modules named CROSS, DECAY and FISSION respectively. (K.A.)

  16. KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems.

    Science.gov (United States)

    Costa, Rafael S; Veríssimo, André; Vinga, Susana

    2014-08-13

    The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, along with estimating parameter values from kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists present only a summary of the obtained results for publication; the experimental data files are usually not submitted to any public repository, or are simply not available at all. In order to automate as much as possible the steps of building kinetic models, there is a growing requirement in the systems biology community for easily exchanging data in combination with models, which represents the main motivation for the development of KiMoSys. KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data of metabolites and enzymes and flux data. It was designed to ensure data management, storage and sharing for a wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates some tools to facilitate the kinetic model construction process for large-scale metabolic networks, especially when systems biologists perform computational research.
    KiMoSys is a web-based system that integrates a public data and associated model(s) repository with computational tools, providing the systems biology community with a novel application facilitating data storage and sharing, thus supporting the construction of ODE-based kinetic models and collaborative research projects. The web application implemented using Ruby
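    A minimal example of the kind of ODE-based kinetic model such a repository is meant to support: a single Michaelis-Menten conversion S → P, with invented parameter values and a simple forward-Euler integrator (not any model from the repository itself).

```python
# Minimal ODE-based kinetic model: one Michaelis-Menten reaction S -> P,
# integrated with forward Euler. All parameter values are illustrative
# assumptions, not repository data.

def michaelis_menten_rate(s, vmax, km):
    """Reaction rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def simulate(s0, vmax, km, dt=0.01, t_end=10.0):
    """Integrate dS/dt = -v(S), dP/dt = +v(S); return final (S, P)."""
    s, p = s0, 0.0
    for _ in range(int(t_end / dt)):
        v = michaelis_menten_rate(s, vmax, km)
        s -= v * dt
        p += v * dt
    return s, p

s_final, p_final = simulate(s0=5.0, vmax=1.0, km=0.5)
print(round(s_final + p_final, 6))  # mass is conserved: S + P stays at 5.0
```

    Fitting Vmax and Km to concentration time series like those stored in the repository is exactly the parameter-estimation step described in the abstract.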

  17. Experimental data of biomaterial derived from Malva sylvestris and charcoal tablet powder for Hg2+ removal from aqueous solutions

    Directory of Open Access Journals (Sweden)

    Alireza Rahbar

    2016-09-01

    In this experimental data article, a novel biomaterial was prepared from Malva sylvestris and its properties were characterized using various instrumental techniques. The effects of the operating parameters, pH and adsorbent dose, on Hg2+ adsorption from aqueous solution using M. sylvestris powder (MSP) were compared with those of charcoal tablet powder (CTP), a medicinal drug. The data acquired showed that M. sylvestris is a viable and very promising alternative adsorbent for Hg2+ removal from aqueous solutions. The experimental data suggest that MSP is a potential adsorbent for use in medicine for the treatment of poisoning with heavy metals; however, application in animal models is a necessary step before the eventual application of MSP in situations involving humans. Keywords: Adsorption, Biomaterial, Hg2+ ion, Malva sylvestris, Charcoal tablet

  18. Comparison of Monte Carlo simulation of gamma ray attenuation coefficients of amino acids with XCOM program and experimental data

    Science.gov (United States)

    Elbashir, B. O.; Dong, M. G.; Sayyed, M. I.; Issa, Shams A. M.; Matori, K. A.; Zaid, M. H. M.

    2018-06-01

    The mass attenuation coefficients (μ/ρ), effective atomic numbers (Zeff) and electron densities (Ne) of some amino acids obtained experimentally by other researchers have been calculated using MCNP5 simulations in the energy range 0.122–1.330 MeV. The simulated values of μ/ρ, Zeff, and Ne were compared with the previous experimental work for the amino acid samples, and good agreement was noted. Moreover, the values of the mean free path (MFP) for the samples were calculated using the MCNP5 program and compared with the theoretical results obtained with XCOM. The investigation of the μ/ρ, Zeff, Ne and MFP values of amino acids using MCNP5 simulations at various photon energies, when compared with the XCOM values and previous experimental data, revealed that the MCNP5 code provides accurate photon interaction parameters for amino acids.
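    XCOM-style values for compounds follow from the elemental mixture rule, (μ/ρ)compound = Σ wᵢ(μ/ρ)ᵢ, with MFP = 1/(μ/ρ · ρ). A sketch for an amino-acid composition: the glycine weight fractions are computed from standard atomic masses, but the elemental μ/ρ values are rough placeholders, not tabulated XCOM data.

```python
# Mixture rule behind XCOM-type compound values:
#   (mu/rho)_compound = sum_i w_i * (mu/rho)_i
# Glycine (C2H5NO2) weight fractions from standard atomic masses; the
# elemental mu/rho values are placeholders, NOT tabulated XCOM data.

def mixture_mu_rho(weight_fractions, elemental_mu_rho):
    """Mass attenuation coefficient of a compound, cm^2/g."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-3
    return sum(w * elemental_mu_rho[el] for el, w in weight_fractions.items())

def mean_free_path(mu_rho, density):
    """MFP = 1 / (mu/rho * rho); cm when mu/rho is cm^2/g, rho is g/cm^3."""
    return 1.0 / (mu_rho * density)

glycine_w = {"H": 0.06714, "C": 0.32001, "N": 0.18659, "O": 0.42626}
mu_rho_el = {"H": 0.154, "C": 0.077, "N": 0.077, "O": 0.078}  # placeholders

mu_rho = mixture_mu_rho(glycine_w, mu_rho_el)
mfp = mean_free_path(mu_rho, density=1.16)  # assumed density, g/cm^3
print(round(mu_rho, 5), round(mfp, 2))
```

    The same weighting generalizes to Zeff and Ne, which are likewise built from elemental quantities and the composition's weight fractions.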

  19. Experimental Peptide Identification Repository (EPIR): an integrated peptide-centric platform for validation and mining of tandem mass spectrometry data

    DEFF Research Database (Denmark)

    Kristensen, Dan Bach; Brønd, Jan Christian; Nielsen, Peter Aagaard

    2004-01-01

    LC MS/MS has become an established technology in proteomic studies, and with the maturation of the technology the bottleneck has shifted from data generation to data validation and mining. To address this bottleneck we developed Experimental Peptide Identification Repository (EPIR), which...... is an integrated software platform for storage, validation, and mining of LC MS/MS-derived peptide evidence. EPIR is a cumulative data repository where precursor ions are linked to peptide assignments and protein associations returned by a search engine (e.g. Mascot, Sequest, or PepSea). Any number of datasets can...

  20. Quarks, QCD [quantum chromodynamics] and the real world of experimental data

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1987-07-01

    The experimental evidence that supports quantum chromodynamics as the theory that describes how the quarks interact is briefly discussed. The indications of the existence of quarks are reviewed, and calculation of hadron masses is discussed. Additional evidence of hadron substructure as seen in the antiproton is reviewed. Arguments for the existence of color as the ''charge'' carried by quarks by which they interact are given. Hadron masses and the hyperfine interaction are presented, followed by more exotic quark systems and a study of multiquark systems. Weak interactions in the quark model are discussed

  1. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S. [Paul Scherrer Institute, Wuerenlingen and Villigen (Switzerland)]; Arbuzov, A. [Joint Institute for Nuclear Research, Dubna (Russian Federation), Bogoliubov Lab. of Theoretical Physics]; Balossini, G. [Pavia Univ. (Italy), Dipt. di Fisica Nucleare e Teorica; INFN, Pavia (Italy)]; and others

    2009-12-15

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on τ decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements, obtained with energy scans as well as with radiative return, to determine luminosities and τ decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  2. Electromagnetic three-dimensional reconstruction of targets from free space experimental data

    International Nuclear Information System (INIS)

    Geffrin, J.-M.; Chaumet, P. C.; Eyraud, C.; Belkebir, K.; Sabouroux, P.

    2008-01-01

    This paper deals with the problem of reconstructing the relative permittivity of three-dimensional targets using experimental scattered fields. The fields concerned were measured in an anechoic chamber on the surface of a sphere surrounding the target. The inverse scattering problem is reformulated as an optimization problem that is iteratively solved thanks to a conjugate gradient method and by using the coupled dipoles method as a forward problem solver. The measurement technique and the inversion procedure are briefly described with the inversion results. This work demonstrates the reliability of the experiments and the efficiency of the proposed inverse scattering scheme

  3. CORRELATION OF EXPERIMENTAL AND THEORETICAL DATA FOR MANTLE TANKS USED IN LOW FLOW SDHW SYSTEMS

    DEFF Research Database (Denmark)

    Shah, Louise Jivan; Furbo, Simon

    1997-01-01

    ...calculations, a detailed analysis of the heat transfer from the solar collector fluid to the wall of the hot water tank is performed. The analysis has resulted in a correlation for the heat transfer between the solar collector fluid and the wall of the hot water...... The model is validated against the experimental tests, and good agreement between measured and calculated results is achieved. The results from the CFD calculations are used to illustrate the thermal behaviour and the fluid dynamics in the mantle and in the hot water tank. With the CFD...

  4. Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data

    International Nuclear Information System (INIS)

    Actis, S.; Arbuzov, A.

    2009-12-01

    We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e + e - colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on τ decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements, obtained with energy scans as well as with radiative return, to determine luminosities and τ decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed. (orig.)

  5. A new model for the inference of population characteristics from experimental data using uncertainties

    International Nuclear Information System (INIS)

    Cofino, Wim P.; Stokkum, Ivo H.M. van; Steenwijk, Jaap van; Wells, David E.

    2005-01-01

    This paper extends a recent report on a model to establish population characteristics so as to include censored data. The theoretical background is given. The application given in this paper is limited to left-censored data, i.e. "less than" values, but the principles can also be adopted for other types of censored data. The model gives robust estimates of population characteristics for datasets with complicated underlying distributions, including "less than" values of different magnitude and "less than" values exceeding the values of numerical data. The extended model is illustrated with simulated datasets, data from interlaboratory studies and temporal trend data on dissolved cadmium in the Rhine river. The calculations confirm that inclusion of left-censored values in the computation of population characteristics improves assessment procedures
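    For intuition on why left-censored values matter, the generic sketch below contrasts a naive mean that treats detection limits as data with the common half-limit substitution. This is not the model of the paper, and all numbers are invented.

```python
# Illustrative handling of left-censored ("less than") values: a naive
# mean that treats reported detection limits as data vs. the common
# substitution of half the detection limit. A generic sketch, not the
# model described in the abstract; all numbers are invented.

def mean_with_substitution(values, censored_flags, factor=0.5):
    """Replace each censored value (recorded as its detection limit)
    by factor * limit before averaging."""
    adjusted = [v * factor if c else v for v, c in zip(values, censored_flags)]
    return sum(adjusted) / len(adjusted)

# Four numeric results and two "less than 0.2" results (limit stored)
vals = [0.35, 0.41, 0.28, 0.52, 0.2, 0.2]
cens = [False, False, False, False, True, True]

naive = sum(vals) / len(vals)                     # limits treated as data
substituted = mean_with_substitution(vals, cens)  # half-limit substitution
print(round(naive, 4), round(substituted, 4))
```

    Substitution is only a crude fix; the point of models like the one in the paper is to estimate population characteristics without such arbitrary replacement rules.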

  6. Software systems for processing and analysis of experimental data at the Nova laser facility

    International Nuclear Information System (INIS)

    Auerbach, J.M.; McCauley, E.W.; Stone, G.F.; Montgomery, D.S.

    1986-01-01

    A typical laser-plasma interaction experiment at the Nova laser facility produces in excess of 20 megabytes of digitized data. Extensive processing and analysis of these raw data from a wide variety of instruments is necessary to produce data that can be readily used to interpret the experiment. The authors describe how, using VAX-based computer hardware, a software system has been set up to convert the digitized instrument output to physics quantities describing the experiment. A relational database management system is used to coordinate all levels of processing and analysis. Extensive databases of instrument response and set-up parameters are used at all levels of processing and archiving. An extensive set of programs is used to handle the large amounts of X, Y, Z data recorded on film by the bulk of Nova diagnostics. Software development emphasizes structured design, flexibility, automation and ease of use

  7. Animal mortality resulting from uniform exposures to photon radiations: Calculated LD50s and a compilation of experimental data

    International Nuclear Information System (INIS)

    Jones, T.D.; Morris, M.D.; Wells, S.M.; Young, R.W.

    1986-12-01

    Studies conducted during the 1950s and 1960s of radiation-induced mortality in diverse animal species under various exposure protocols were compiled into a mortality data base. Some 24 variables were extracted and recomputed from each of the published studies, which were collected from a variety of available sources, primarily journal articles. Two features of this compilation effort are (1) an attempt to give an estimate of the uniform dose received by the bone marrow in each treatment so that interspecies differences due to body size were minimized, and (2) a recomputation of the LD50 where sufficient experimental data are available. Exposure rates varied in magnitude from about 10⁻² to 10³ R/min. This report describes the data base, the sources of data, and the data-handling techniques; presents a bibliography of studies compiled; and tabulates data from each study. 103 refs., 44 tabs
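    As a simple stand-in for the probit analyses typically used in such LD50 recomputations, linear interpolation of a dose-mortality curve illustrates the idea; the doses and mortality fractions below are invented, not taken from the compilation.

```python
# Illustrative LD50 estimate by linear interpolation of a dose-mortality
# curve: a simple stand-in for the probit fits normally used in such
# recomputations. Doses and mortality fractions below are invented.

def ld50_by_interpolation(doses, mortality):
    """Dose at which linearly interpolated mortality crosses 50%."""
    for i in range(len(doses) - 1):
        d0, d1 = doses[i], doses[i + 1]
        m0, m1 = mortality[i], mortality[i + 1]
        if m0 <= 0.5 <= m1:
            return d0 + (0.5 - m0) * (d1 - d0) / (m1 - m0)
    raise ValueError("mortality never crosses 50%")

doses = [2.0, 4.0, 6.0, 8.0]          # marrow dose, Gy (invented)
mortality = [0.05, 0.30, 0.70, 0.95]  # mortality fractions (invented)
print(ld50_by_interpolation(doses, mortality))
```

    Expressing the dose axis as uniform marrow dose, as the compilation does, is what makes LD50 values comparable across species of different body size.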

  8. Comparison study on transformation of iron oxyhydroxides: Based on theoretical and experimental data

    International Nuclear Information System (INIS)

    Lu Bin; Guo Hui; Li Ping; Liu Hui; Wei Yu; Hou Denglu

    2011-01-01

    We have investigated the catalytic transformation of ferrihydrite, feroxyhyte, and lepidocrocite in the presence of Fe(II). In this paper, the transformation from akaganeite and goethite to hematite in the presence of trace Fe(II) was studied in detail. The result indicates that trace Fe(II) can accelerate the transformation of akaganeite and goethite. Compared with the transformation of other iron oxyhydroxides (e.g., ferrihydrite, feroxyhyte, lepidocrocite, and akaganeite), a complete transformation from goethite to hematite was not observed in the presence of Fe(II). On the basis of our earlier and present experimental results, the transformation of various iron oxyhydroxides was compared based on their thermodynamic stability, crystalline structure, transformation mechanism, and transformation time. - Graphical abstract: The transformation of various iron oxyhydroxides in the presence of trace Fe(II) was compared based on experimental results, thermodynamic stability, crystalline structure, and transformation mechanism. Highlights: → Fe(II) can accelerate the transformation from akaganeite to hematite. → Small particles of goethite can transform to hematite in the presence of Fe(II). → Some hematite particles were found to be embedded within the crystal of goethite. → The relationship between structure and transformation mechanism was revealed.

  9. Enthalpies of formation of dihydroxybenzenes revisited: Combining experimental and high-level ab initio data

    International Nuclear Information System (INIS)

    Gonçalves, Elsa M.; Agapito, Filipe; Almeida, Tânia S.; Martinho Simões, José A.

    2014-01-01

    Highlights: • Thermochemistry of hydroxyphenols probed by experimental and theoretical methods. • A new paradigm for obtaining enthalpies of formation of crystalline compounds. • High-level ab initio results for the thermochemistry of gas-phase hydroxyphenols. • Sublimation enthalpies of hydroxyphenols determined by Calvet microcalorimetry. - Abstract: Accurate values of standard molar enthalpies of formation in condensed phases can be obtained by combining high-level quantum chemistry calculations of gas-phase enthalpies of formation with experimentally determined enthalpies of sublimation or vaporisation. The procedure is illustrated for catechol, resorcinol, and hydroquinone. Using W1-F12, the gas-phase enthalpies of formation of these compounds at T = 298.15 K were computed as (−270.6, −269.4, and −261.0) kJ·mol⁻¹, respectively, with an uncertainty of ∼0.4 kJ·mol⁻¹. Using well characterised solid samples, the enthalpies of sublimation were determined with a Calvet microcalorimeter, leading to the following values at T = 298.15 K: (88.3 ± 0.3) kJ·mol⁻¹, (99.7 ± 0.4) kJ·mol⁻¹, and (102.0 ± 0.9) kJ·mol⁻¹, respectively. It is shown that these results are consistent with the crystalline structures of the compounds.
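    The combination scheme this abstract describes reduces to one subtraction per compound, ΔfH°(cr) = ΔfH°(g) − ΔsubH°. A minimal sketch using the values quoted above (the dictionary layout and names are illustrative, not from the paper):

```python
# Combine computed gas-phase enthalpies of formation with measured
# sublimation enthalpies to obtain crystal-phase enthalpies of formation:
#   ΔfH°(cr, 298.15 K) = ΔfH°(g, 298.15 K) − ΔsubH°(298.15 K)

data = {
    # compound: (ΔfH°(g) / kJ·mol⁻¹ from W1-F12, ΔsubH° / kJ·mol⁻¹ from Calvet microcalorimetry)
    "catechol":     (-270.6,  88.3),
    "resorcinol":   (-269.4,  99.7),
    "hydroquinone": (-261.0, 102.0),
}

for name, (dfh_gas, dsub) in data.items():
    dfh_crystal = dfh_gas - dsub
    print(f"{name}: ΔfH°(cr) ≈ {dfh_crystal:.1f} kJ/mol")
```

For catechol this gives −270.6 − 88.3 = −358.9 kJ·mol⁻¹ for the crystal; the quoted uncertainties would combine in quadrature.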

  10. Review of nuclear data improvement needs for nuclear radiation measurement techniques used at the CEA experimental reactor facilities

    Directory of Open Access Journals (Sweden)

    Destouches Christophe

    2016-01-01

    The constant improvement of the neutron and gamma calculation codes used for experimental nuclear reactors goes hand in hand with that of the associated nuclear data libraries. The validation of these calculation schemes always requires confrontation with integral experiments performed in experimental reactors. Nuclear data of interest, whether direct, such as cross sections, or derived, such as reactivity, are always obtained from a reaction rate measurement, which is the only parameter measurable by a nuclear sensor. Thus, in order to derive physical parameters from the electrical signal of the sensor, one needs specific nuclear data libraries. This paper successively presents the main features of the measurement techniques used in the CEA experimental reactor facilities for on-line and off-line neutron/gamma flux characterization: reactor dosimetry, neutron flux measurements with miniature fission chambers and Self-Powered Neutron Detectors (SPNDs), and gamma flux measurements with ionization chambers and TLDs. For each technique, the nuclear data necessary for its interpretation are presented, the main needs for improvement identified, and their impact on the quality of the measurement analysed. Finally, a synthesis of the study is given.

  11. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    Science.gov (United States)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on the sequential hypotheses testing (SHT) framework, for detecting material-based threats using data acquired on an experimental X-ray testbed system. This testbed employs a 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquired multiple-view projection data for 200 bags. We consider an adaptive measurement design in which the X-ray projection measurements are acquired sequentially and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline, which corresponds to a sequential measurement using all available sources/views. We also show that adaptive measurements make it possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
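    The stopping rule behind sequential testing can be illustrated with Wald's classical SPRT, which SHT frameworks generalize: accumulate a log-likelihood ratio per measurement and stop as soon as it crosses a threshold set by the target error rates. This is a generic sketch with toy Bernoulli observations, not the testbed's actual detection statistic:

```python
import math

def sprt(observations, p0, p1, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test between H0: p = p0 (benign)
    and H1: p = p1 (threat) for 0/1 observations.
    Returns (decision, number of measurements used)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood ratio increment for one Bernoulli observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "threat", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(observations)

# Toy per-view detector outputs (1 = suspicious response); illustrative only.
views = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
decision, n_views = sprt(views, p0=0.2, p1=0.8)
print(decision, n_views)
```

The test typically terminates well before all views are used, which is the mechanism behind the reported reduction in the number of sources/views.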

  12. Modeling the basin of attraction as a two-dimensional manifold from experimental data: Applications to balance in humans

    Science.gov (United States)

    Zakynthinaki, Maria S.; Stirling, James R.; Cordente Martínez, Carlos A.; Díaz de Durana, Alfonso López; Quintana, Manuel Sillero; Romo, Gabriel Rodríguez; Molinuevo, Javier Sampedro

    2010-03-01

    We present a method of modeling the basin of attraction, from experimental time series data, as a three-dimensional function describing a two-dimensional manifold on which the dynamics of the system evolves. Our method is based on the density of the data set and uses numerical optimization and data modeling tools. We also show how to obtain analytic curves that describe both the contours and the boundary of the basin. Our method is applied to the problem of regaining balance after perturbation from quiet vertical stance, using data from an elite athlete. Our method goes beyond the statistical description of the experimental data, providing a function that describes the shape of the basin of attraction. To test its robustness, our method has also been applied to two different data sets from a second subject, and no significant differences were found between the contours of the calculated basins of attraction for the different data sets. The proposed method has many uses in a wide variety of areas, not just human balance, for which there are many applications in medicine, rehabilitation, and sport.

  13. Data acquisition and online processing requirements for experimentation at the superconducting super collider

    International Nuclear Information System (INIS)

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1990-01-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. (orig.)

  14. Data acquisition and online processing requirements for experimentation at the Superconducting Super Collider

    International Nuclear Information System (INIS)

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1989-07-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. 9 refs., 3 figs

  15. Using regional broccoli trial data to select experimental hybrids for input into advanced yield trials

    Science.gov (United States)

    A large amount of phenotypic trait data are being generated in regional trials that are implemented as part of the Specialty Crop Research Initiative (SCRI) project entitled “Establishing an Eastern Broccoli Industry”. These data are used to identify the best entries in the trials for inclusion in ...

  16. SPoRT: Transitioning NASA and NOAA Experimental Data to the Operational Weather Community

    Science.gov (United States)

    Jedlovec, Gary J.

    2013-01-01

    Established in 2002 to demonstrate the weather and forecasting applications of real-time EOS measurements, the NASA Short-term Prediction Research and Transition (SPoRT) program has grown into an end-to-end research-to-operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral data from EOS satellites to improve short-term weather forecasts on regional and local scales. With the ever-broadening application of real-time high-resolution satellite data from current EOS, Suomi NPP, and planned JPSS and GOES-R sensors to weather forecast problems, significant challenges arise in the acquisition, delivery, and integration of the new capabilities into the decision-making process of the operational weather community. For polar-orbiting sensors such as MODIS, AIRS, VIIRS, and CrIS, the use of direct-broadcast ground stations is key to delivering the data and derived products in a timely fashion. With the ABI on the geostationary GOES-R satellite, data volumes will likely increase by a factor of 5-10 over current data streams. However, the high data volume and limited bandwidth of end-user facilities present a formidable obstacle to timely access to the data. This challenge can be addressed through the use of subsetting techniques, innovative web services, and the judicious selection of data formats. Many of these approaches have been implemented by SPoRT for the delivery of real-time products to NWS forecast offices and other weather entities. Once available in decision support systems like AWIPS II, these new data and products must be integrated into existing and new displays that allow their combination with existing operational products in these systems. SPoRT is leading the way in demonstrating this enhanced capability. This paper will highlight the ways SPoRT is overcoming many of the challenges presented by the enormous data

  17. On the calibration strategies of the Johnson–Cook strength model: Discussion and applications to experimental data

    International Nuclear Information System (INIS)

    Gambirasio, Luca; Rizzi, Egidio

    2014-01-01

    The present paper aims at assessing the various procedures adoptable for calibrating the parameters of the so-called Johnson–Cook strength model, expressing the deviatoric behavior of elastoplastic materials, with particular reference to the description of High Strain Rate (HSR) phenomena. The procedures rely on input experimental data corresponding to a set of hardening functions recorded at different equivalent plastic strain rates and temperatures. After a brief review of the main characteristics of the Johnson–Cook strength model, five different calibration strategies are framed and described in detail. The assessment is implemented through a systematic application of each calibration strategy to three different real materials, i.e. a DH-36 structural steel, a commercially pure niobium, and an AL-6XN stainless steel. Experimental data available in the literature are considered. Results are presented as plots of the predicted Johnson–Cook hardening functions against the experimental trends, together with tables describing the fitting problems that arise in each case, assessing the errors introduced in both the lower yield stress and the overall plastic flow. The consequences of each calibration approach are then carefully compared and evaluated. A discussion of the positive and negative aspects of each strategy is presented, and some suggestions on how to choose the best calibration approach are outlined, considering the available experimental data and the objectives of the subsequent modeling process. The proposed considerations should provide a useful guideline for determining the best Johnson–Cook parameters in each specific situation in which the model is to be adopted. A last section introduces some considerations about the calibration of the Johnson–Cook strength model through experimental data different from those consisting in a set of hardening functions relative to different equivalent plastic strain
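    For reference, the Johnson–Cook flow stress being calibrated has the standard multiplicative form σ = (A + Bε_pⁿ)(1 + C ln ε̇*)(1 − T*ᵐ). The sketch below evaluates it at one state point; the parameter values are illustrative placeholders, not fitted constants from the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T, A, B, n, C, m,
                        eps_rate_ref=1.0, T_ref=293.0, T_melt=1773.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_ref) / (T_melt - T_ref)."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return ((A + B * eps_p ** n)                              # strain hardening
            * (1.0 + C * math.log(eps_rate / eps_rate_ref))   # strain-rate hardening
            * (1.0 - T_star ** m))                            # thermal softening

# Illustrative parameters (MPa) for a generic steel-like material -- placeholders.
sigma = johnson_cook_stress(eps_p=0.1, eps_rate=1000.0, T=500.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
print(f"flow stress ≈ {sigma:.1f} MPa")
```

Because the three bracketed factors multiply, fitting errors in one factor (e.g. the lower yield stress entering A) propagate into the others, which is why the choice among the five calibration strategies matters.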

  18. Experimental processing of a model data set using Geobit seismic software

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Sang Yong [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    A seismic data processing software package, Geobit, has been developed and is continuously updated to implement newer processing techniques and to support more hardware platforms. Geobit is intended to support all Unix platforms, ranging from PC to CRAY. The current version supports two platforms, i.e., PC/Linux and Sun Sparc-based SunOS 4.1.x. PC/Linux attracted geophysicists in some universities, who tried to install Geobit in their laboratories to use as a research tool. However, one of the problems is the difficulty of obtaining seismic data. The primary reason is its huge volume. The field data are too bulky to fit on relatively small storage media, such as PC disks. To solve the problem, KIGAM released a model seismic data set via ftp.kigam.re.kr. This study has two aims. The first is to test the Geobit software for its suitability for seismic data processing. The test includes reproducing the model through seismic data processing. If it fails to reproduce the original model, the software is considered buggy and incomplete. However, if it can successfully reproduce the input model, I would be proud of what I have accomplished over the last few years in writing Geobit. The second aim is to give a guide on Geobit usage by providing an example set of job files needed to process a given data set. This example will help scientists lacking Geobit experience to concentrate on their study more easily. Once they know the Geobit processing techniques, and later on Geobit programming, they can implement their own processing ideas, contributing newer technologies to Geobit. The complete set of Geobit job files needed to process the model data is written in the following job sequence: (1) data loading, (2) CDP sort, (3) decon analysis, (4) velocity analysis, (5) decon verification, (6) stack, (7) filter analysis, (8) filtered stack, (9) time migration, (10) depth migration. The control variables in the job files are discussed. (author). 10 figs., 1 tab.

  19. Experimental test of the variability of G using Viking lander ranging data

    International Nuclear Information System (INIS)

    Hellings, R.W.; Adams, P.J.; Anderson, J.D.; Keesey, M.S.; Lau, E.L.; Standish, E.M.; Canuto, V.M.; Goldman, I.

    1983-01-01

    Results are presented from the analysis of solar system astrometric data, notably the range data to the Viking landers on Mars. A least-squares fit of the parameters of the solar system model to these data limits a simple time variation in the effective Newtonian gravitational constant to (0.2 ± 0.4) × 10⁻¹¹ yr⁻¹ and a rate of drift of atomic clocks relative to the implicit clock of relativistic dynamics to (0.1 ± 0.8) × 10⁻¹¹ yr⁻¹. The error limits quoted are the result of uncertainties in the masses of the asteroids.
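    Bounds of this kind come from least-squares fits of a drift term to timing residuals. A self-contained sketch of the simplest version, an ordinary least-squares fit of a linear drift with its standard error, using synthetic numbers rather than the Viking ranging data:

```python
def fit_linear_drift(t, r):
    """Ordinary least squares fit of r(t) = a + b*t.
    Returns the slope b (the drift rate) and its standard error."""
    n = len(t)
    tbar = sum(t) / n
    rbar = sum(r) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (ri - rbar) for ti, ri in zip(t, r)) / sxx
    a = rbar - b * tbar
    # residual variance about the fitted line -> standard error of the slope
    s2 = sum((ri - (a + b * ti)) ** 2 for ti, ri in zip(t, r)) / (n - 2)
    return b, (s2 / sxx) ** 0.5

# Hypothetical residuals in arbitrary units -- illustrative data only.
years = [0, 1, 2, 3, 4, 5]
residuals = [0.1, -0.2, 0.15, -0.1, 0.05, 0.0]
slope, stderr = fit_linear_drift(years, residuals)
print(f"drift = {slope:.3f} ± {stderr:.3f} per year")
```

A quoted limit such as (0.2 ± 0.4) × 10⁻¹¹ yr⁻¹ is a slope of this kind that is consistent with zero; in the actual analysis the drift parameter is fitted jointly with the full solar system model, so correlations with other parameters (e.g. asteroid masses) inflate the error.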

  20. Portable audio magnetotellurics - experimental measurements and joint inversion with radiomagnetotelluric data from Gotland, Sweden

    Science.gov (United States)

    Shan, Chunling; Kalscheuer, Thomas; Pedersen, Laust B.; Erlström, Mikael; Persson, Lena

    2017-08-01

    Field setup of an audio magnetotelluric (AMT) station is very time consuming and involves a heavy workload. In contrast, radio magnetotelluric (RMT) equipment is more portable and faster to deploy, but has a shallower investigation depth owing to its higher signal frequencies. To increase the efficiency of the acquisition of AMT data from 10 to 300 Hz, we introduce a modification of the AMT method, called portable audio magnetotellurics (PAMT), that uses a lighter AMT field system and (owing to the disregard of signals at frequencies of less than 10 Hz) a shortened data acquisition time. PAMT uses three magnetometers pre-mounted on a rigid frame to measure magnetic fields and steel electrodes to measure electric fields. Field tests proved that the system is stable enough to measure AMT fields in the given frequency range. A PAMT test measurement was carried out on Gotland, Sweden, along a 3.5 km profile to study the ground conductivity and to map shallow Silurian marlstone and limestone formations, deeper Silurian, Ordovician and Cambrian sedimentary structures, and crystalline basement. RMT data collected along a coincident profile and regional airborne very low frequency (VLF) data support the interpretation of our PAMT data. While only the RMT and VLF data constrain a shallow (∼20-50 m deep) transition between Silurian conductive marlstone and resistive (∼1000 Ωm resistivity) limestone, the single-method inversion models of both the PAMT and the RMT data show a transition into a conductive layer of 3 to 30 Ωm resistivity at ∼80 m depth, suggesting the compatibility of the two data sets. This conductive layer is interpreted as a saltwater-saturated succession of Silurian, Ordovician and Cambrian sedimentary units. Towards the lower boundary of this succession (at ∼600 m depth according to boreholes), only the PAMT data constrain the structure. As supported by modelling tests and sensitivity analysis, the PAMT data contain only a vague indication of the underlying crystalline basement. A PAMT and RMT