WorldWideScience

Sample records for broadline simulation methodology

  1. Broad-line high-excitation gas in the elliptical galaxy NGC5128

    International Nuclear Information System (INIS)

    Phillips, M.M.; Taylor, K.; Axon, D.J.; Atherton, P.D.; Hook, R.N.

    1984-01-01

    A faint but extensive component of broad-line ionized gas has been discovered in the peculiar giant elliptical galaxy NGC5128. This component has a radically different spatial distribution from the well-studied rotating photoionized gas associated with the dust lane, although the velocity fields of the two components are similar. The origin of the broad-line gas is considered, and its possible relation to the active nucleus and the X-ray jet is discussed. (author)

  2. SN 2009bb: A PECULIAR BROAD-LINED TYPE Ic SUPERNOVA

    International Nuclear Information System (INIS)

    Pignata, Giuliano; Stritzinger, Maximilian; Phillips, M. M.; Morrell, Nidia; Boldt, Luis; Campillay, Abdo; Contreras, Carlos; Gonzalez, Sergio; Krzeminski, Wojtek; Roth, Miguel; Salgado, Francisco; Soderberg, Alicia; Mazzali, Paolo; Anderson, J. P.; Folatelli, Gaston; Foerster, Francisco; Hamuy, Mario; Maza, Jose; Levesque, Emily M.; Rest, Armin

    2011-01-01

    Ultraviolet, optical, and near-infrared photometry and optical spectroscopy of the broad-lined Type Ic supernova (SN) 2009bb are presented, following the flux evolution from -10 to +285 days past B-band maximum. Thanks to the very early discovery, it is possible to place tight constraints on the SN explosion epoch. The expansion velocities measured from near-maximum spectra are found to be only slightly smaller than those measured from spectra of the prototype broad-lined SN 1998bw associated with GRB 980425. Fitting an analytical model to the pseudobolometric light curve of SN 2009bb suggests that 4.1 ± 1.9 M_sun of material was ejected, with 0.22 ± 0.06 M_sun of it being ^56Ni. The resulting kinetic energy is 1.8 ± 0.7 × 10^52 erg. This, together with an absolute peak magnitude of M_B = -18.36 ± 0.44, places SN 2009bb on the energetic and luminous end of the broad-lined Type Ic (SN Ic) sequence. Detection of helium in the early-time optical spectra, accompanied by strong radio emission and the high metallicity of its environment, makes SN 2009bb a peculiar object. Similar to the case for gamma-ray bursts (GRBs), we find that the bulk explosion parameters of SN 2009bb cannot account for the copious energy coupled to relativistic ejecta, and conclude that another energy reservoir (a central engine) is required to power the radio emission. Nevertheless, the analysis of the SN 2009bb nebular spectrum suggests that the failed GRB detection is not imputable to a large angle between the line of sight and the GRB beamed radiation. Therefore, if a GRB was produced during the SN 2009bb explosion, it was below the threshold of the current generation of γ-ray instruments.
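    The kinetic energy quoted above follows from the ejecta mass and a characteristic expansion velocity: for uniform-density, homologously expanding ejecta, E_k = (3/10) M_ej v^2. A minimal sketch of the arithmetic, where the characteristic velocity is an assumed round number and not a value taken from the paper:

```python
# Order-of-magnitude check of the SN 2009bb kinetic energy quoted above,
# using the standard uniform-density, homologous-expansion relation
# E_k = (3/10) * M_ej * v^2. The characteristic velocity below is an
# assumed round number for illustration, not a value from the paper.
M_SUN_G = 1.989e33          # solar mass in grams
M_ej = 4.1 * M_SUN_G        # ejecta mass from the abstract, in grams
v = 2.7e9                   # assumed characteristic ejecta velocity, cm/s

E_k = 0.3 * M_ej * v**2     # kinetic energy in erg
print(f"E_k ~ {E_k:.1e} erg")
```

    With this assumed velocity the relation reproduces the ~10^52 erg scale of the quoted result; the sketch checks orders of magnitude only, not the paper's light-curve fit.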

  3. Temperature dependence of broadline NMR spectra of water-soaked, epoxy-graphite composites

    Science.gov (United States)

    Lawing, David; Fornes, R. E.; Gilbert, R. D.; Memory, J. D.

    1981-10-01

    Water-soaked, epoxy resin-graphite fiber composites show a water line in their broadline proton NMR spectrum, indicating a state of mobility intermediate between the solid and free liquid water states. The line is still present at -42 °C but shows a reversible decrease in amplitude with decreasing temperature. The line is isotropic upon rotation of the fiber axis with respect to the external magnetic field.

  4. Optical Variability of Narrow-line and Broad-line Seyfert 1 Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Rakshit, Suvendu; Stalin, C. S., E-mail: suvenduat@gmail.com [Indian Institute of Astrophysics, Block II, Koramangala, Bangalore-560034 (India)

    2017-06-20

    We studied the optical variability (OV) of a large sample of narrow-line Seyfert 1 (NLSy1) and broad-line Seyfert 1 (BLSy1) galaxies with z < 0.8 to investigate any differences in their OV properties. Using archival optical V-band light curves from the Catalina Real Time Transient Survey that span 5–9 years and modeling them with a damped random walk, we estimated the amplitude of variability. We found that NLSy1 galaxies as a class show a lower amplitude of variability than their broad-line counterparts. In the sample of both NLSy1 and BLSy1 galaxies, radio-loud sources are found to have a higher variability amplitude than radio-quiet sources. Considering only sources that are detected in the X-ray band, NLSy1 galaxies are less optically variable than BLSy1 galaxies. The amplitude of variability in the sample of both NLSy1 and BLSy1 galaxies is found to be anti-correlated with Fe II strength but correlated with the width of the Hβ line. The well-known anti-correlations of variability with luminosity and with Eddington ratio are present in our data. Among the radio-loud sample, variability amplitude is found to be correlated with radio-loudness and radio power, suggesting that jets also play an important role in the OV of radio-loud objects, in addition to the Eddington ratio, which is the main driving factor of OV in radio-quiet sources.
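    The damped random walk used in the study above is an Ornstein-Uhlenbeck process, fully specified by a damping timescale τ and an asymptotic variability amplitude SF_∞. A minimal simulation sketch, with illustrative parameter values rather than fits from the paper:

```python
import numpy as np

# Minimal sketch of a damped random walk (Ornstein-Uhlenbeck process),
# the stochastic model used above to quantify optical variability.
# tau (damping timescale, days) and sf_inf (asymptotic variability
# amplitude, mag) are illustrative values, not fitted ones.
rng = np.random.default_rng(0)

def simulate_drw(n_epochs, dt, tau, sf_inf, mean_mag=17.0):
    """Generate a DRW light curve via the exact OU update rule."""
    sigma2 = 0.5 * sf_inf**2                 # stationary variance
    decay = np.exp(-dt / tau)                # correlation over one step
    mags = np.empty(n_epochs)
    mags[0] = mean_mag
    for i in range(1, n_epochs):
        step_var = sigma2 * (1.0 - decay**2)
        mags[i] = mean_mag + decay * (mags[i - 1] - mean_mag) \
                  + rng.normal(0.0, np.sqrt(step_var))
    return mags

lc = simulate_drw(n_epochs=2000, dt=3.0, tau=200.0, sf_inf=0.2)
print(f"sample std = {lc.std():.3f} mag (stationary value: {0.2/np.sqrt(2):.3f})")
```

    The sample standard deviation of a long light curve approaches SF_∞/√2, which is one way the amplitude of variability can be read off a fitted model.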

  5. Nuclear power plant simulation facility evaluation methodology

    International Nuclear Information System (INIS)

    Haas, P.M.; Carter, R.J.; Laughery, K.R. Jr.

    1985-01-01

    A methodology for evaluating nuclear power plant simulation facilities with regard to their acceptability for use in the US Nuclear Regulatory Commission (NRC) operator licensing exam is described. The evaluation is based primarily on simulator fidelity but incorporates some aspects of direct operator/trainee performance measurement. The panel presentation and paper discuss data requirements, data collection, data analysis, and criteria for conclusions regarding the fidelity evaluation, and summarize the proposed use of direct performance measurement. While field testing and refinement of the methodology are recommended, this initial effort provides a firm basis for NRC to fully develop the necessary methodology.

  6. Methodological issues in lipid bilayer simulations

    NARCIS (Netherlands)

    Anezo, C; de Vries, AH; Holtje, HD; Tieleman, DP; Marrink, SJ

    2003-01-01

    Methodological issues in molecular dynamics (MD) simulations, such as the treatment of long-range electrostatic interactions or the type of pressure coupling, have important consequences for the equilibrium properties observed. We report a series of long (up to 150 ns) MD simulations of

  7. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  8. SHIPBUILDING PRODUCTION PROCESS DESIGN METHODOLOGY USING COMPUTER SIMULATION

    OpenAIRE

    Marko Hadjina; Nikša Fafandjel; Tin Matulja

    2015-01-01

    In this research a shipbuilding production process design methodology, using computer simulation, is suggested. It is expected from suggested methodology to give better and more efficient tool for complex shipbuilding production processes design procedure. Within the first part of this research existing practice for production process design in shipbuilding was discussed, its shortcomings and problem were emphasized. In continuing, discrete event simulation modelling method, as basis of sugge...

  9. Optical Variability of Narrow-line and Broad-line Seyfert 1 Galaxies

    Science.gov (United States)

    Rakshit, Suvendu; Stalin, C. S.

    2017-06-01

    We studied the optical variability (OV) of a large sample of narrow-line Seyfert 1 (NLSy1) and broad-line Seyfert 1 (BLSy1) galaxies with z < 0.8 to investigate any differences in their OV properties. The amplitude of variability in the sample of both NLSy1 and BLSy1 galaxies is found to be anti-correlated with Fe II strength but correlated with the width of the Hβ line. The well-known anti-correlation of variability-luminosity and the variability-Eddington ratio is present in our data. Among the radio-loud sample, variability amplitude is found to be correlated with radio-loudness and radio-power, suggesting that jets also play an important role in the OV in radio-loud objects, in addition to the Eddington ratio, which is the main driving factor of OV in radio-quiet sources.

  10. An optical and near-infrared polarization survey of Seyfert and broad-line radio galaxies. Pt. 2

    International Nuclear Information System (INIS)

    Brindle, C.; Hough, J.H.; Bailey, J.A.; Axon, D.J.; Ward, M.J.; McLean, I.S.

    1990-01-01

    We discuss the wavelength dependence (0.44-2.2 μm) of polarization of the sample of 71 Seyfert and three broad-line radio galaxies presented in a previous paper. For four galaxies, 3A 0557-383, Fairall 51, IC 4392A and NGC 3783, we also present spectropolarimetry covering the wavelength range of 0.4-0.6 μm. (author)

  11. Methodology for the interactive graphic simulator construction

    International Nuclear Information System (INIS)

    Milian S, Idalmis; Rodriguez M, Lazaro; Lopez V, Miguel A.

    1997-01-01

    The PC-supported Interactive Graphic Simulators (IGS) have been used successfully for industrial training programs in many countries. This paper is intended to illustrate the general methodology applied by our research team for the construction of this kind of conceptual or small-scale simulator. The information and tools available to achieve this goal are also described. The applicability of the present methodology was confirmed with the construction of a set of IGS for nuclear power plant operators' training programs in Cuba. One of them, relating to reactor kinetics, is shown and briefly described in this paper. (author). 11 refs., 3 figs

  12. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  13. Methodology for Developing a Diesel Exhaust After Treatment Simulation Tool

    DEFF Research Database (Denmark)

    Christiansen, Tine; Jensen, Johanne; Åberg, Andreas

    2018-01-01

    A methodology for the development of catalyst models is presented. Also, a methodology of the implementation of such models into a modular simulation tool, which simulates the units in succession, is presented. A case study is presented illustrating how suitable models can be found and used for s...

  14. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data-based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
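    The event-chain Monte Carlo idea described above can be sketched in a few lines: each trial samples the sequence of events (tornado occurrence, missile injection, transport, impact, damage) and the risk probability is estimated as the fraction of damaging trials. Every probability and distribution below is invented for illustration and bears no relation to the actual TORMIS data:

```python
import random

# Toy Monte Carlo sketch of the event-chain sampling behind a tornado
# missile risk code: sample occurrence -> injection -> impact -> damage
# per trial, then estimate the damage probability as a trial fraction.
# All numbers here are invented for illustration.
random.seed(42)

def one_trial():
    if random.random() > 0.10:          # tornado misses the site
        return False
    n_missiles = random.randint(0, 50)  # missiles picked up by the wind field
    for _ in range(n_missiles):
        if random.random() < 0.02:      # missile strikes a critical structure
            if random.random() < 0.30:  # impact energy exceeds damage threshold
                return True
    return False

n = 100_000
hits = sum(one_trial() for _ in range(n))
print(f"estimated damage probability: {hits / n:.4f}")
```

    Sensitivity analysis in this framework amounts to re-running the estimator while perturbing one sampled distribution at a time.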

  15. Simulation Methodology in Nursing Education and Adult Learning Theory

    Science.gov (United States)

    Rutherford-Hemming, Tonya

    2012-01-01

    Simulation is often used in nursing education as a teaching methodology. Simulation is rooted in adult learning theory. Three learning theories, cognitive, social, and constructivist, explain how learners gain knowledge with simulation experiences. This article takes an in-depth look at each of these three theories as each relates to simulation.…

  16. Novel Methodology for Functional Modeling and Simulation of Wireless Embedded Systems

    Directory of Open Access Journals (Sweden)

    Sosa Morales Emma

    2008-01-01

    A novel methodology is presented for the modeling and the simulation of wireless embedded systems. Tight interaction between the analog and the digital functionality makes the design and verification of such systems a real challenge. The applied methodology brings together the functional models of the baseband algorithms written in C language with the circuit descriptions at behavioral level in Verilog or Verilog-AMS for the system simulations in a single kernel environment. The physical layer of an ultrawideband system has been successfully modeled and simulated. The results confirm that this methodology provides a standardized framework in order to efficiently and accurately simulate complex mixed signal applications for embedded systems.

  17. A Dynamic Defense Modeling and Simulation Methodology using Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Kangsun Lee

    2010-04-01

    Defense modeling and simulations require interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to dynamically adapt to various war-game events, commands, and controls. In this paper, we propose a semantic web service based methodology to develop war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator is constructed to demonstrate the methodology and successfully shows that the level of interoperability and autonomy can be greatly improved.

  18. Robust Optimization in Simulation : Taguchi and Response Surface Methodology

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, J.P.C.; Meloni, C.

    2008-01-01

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  19. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties encountered on currently available computers is given. Through the use of an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  20. A framework for using simulation methodology in ergonomics interventions in design projects

    DEFF Research Database (Denmark)

    Broberg, Ole; Duarte, Francisco; Andersen, Simone Nyholm

    2014-01-01

    The aim of this paper is to outline a framework of simulation methodology in design processes from an ergonomics perspective.

  1. An EPRI methodology for determining and monitoring simulator operating limits

    International Nuclear Information System (INIS)

    Eichelberg, R.; Pellechi, M.; Wolf, B.; Colley, R.

    1989-01-01

    Of paramount concern to nuclear utilities today is whether their plant-referenced simulator(s) comply with ANSI/ANS 3.5-1985. Of special interest is Section 4.3 of the Standard, which requires, in part, that a means be provided to alert the instructor when certain parameters approach values indicative of events beyond the implemented model or known plant behavior. EPRI established Research Project 2054-2 to develop a comprehensive plan for determining, monitoring, and implementing simulator operating limits. As part of the project, a survey was conducted to identify the current or anticipated approach each of the sampled utilities was using to meet the requirements of Section 4.3. A preliminary methodology was drafted and host utilities interviewed. The interview process led to redefining the methodology. This paper covers the objectives of the EPRI project, survey responses, an overview of the methodology, resource requirements, and conclusions.

  2. Unusual broad-line Mg II emitters among luminous galaxies in the baryon oscillation spectroscopic survey

    International Nuclear Information System (INIS)

    Roig, Benjamin; Blanton, Michael R.; Ross, Nicholas P.

    2014-01-01

    Many classes of active galactic nuclei (AGNs) have been observed and recorded since the discovery of Seyfert galaxies. In this paper, we examine the sample of luminous galaxies in the Baryon Oscillation Spectroscopic Survey. We find a potentially new observational class of AGNs, one with strong and broad Mg II λ2799 line emission, but very weak emission in other normal indicators of AGN activity, such as the broad-line Hα, Hβ, and the near-ultraviolet AGN continuum, leading to an extreme ratio of broad Hα/Mg II flux relative to normal quasars. Meanwhile, these objects' narrow-line flux ratios reveal AGN narrow-line regions with levels of activity consistent with the Mg II fluxes and in agreement with that of normal quasars. These AGN may represent an extreme case of the Baldwin effect, with very low continuum and high equivalent width relative to typical quasars, but their ratio of broad Mg II to broad Balmer emission remains very unusual. They may also be representative of a class of AGN where the central engine is observed indirectly with scattered light. These galaxies represent a small fraction of the total population of luminous galaxies (≅ 0.1%), but are more likely (about 3.5 times) to have AGN-like nuclear line emission properties than other luminous galaxies. Because Mg II is usually inaccessible for the population of nearby galaxies, there may exist a related population of broad-line Mg II emitters in the local universe which is currently classified as narrow-line emitters (Seyfert 2 galaxies) or low ionization nuclear emission-line regions.

  3. The Different Nature in Seyfert 2 Galaxies With and Without Hidden Broad-Line Regions

    OpenAIRE

    Wu, Yu-Zhong; Zhang, En-Peng; Liang, Yan-Chun; Zhang, Cheng-Min; Zhao, Yong-Heng

    2011-01-01

    We compile a large sample of 120 Seyfert 2 galaxies (Sy2s) which contains 49 hidden broad-line region (HBLR) Sy2s and 71 non-HBLR Sy2s. From the difference in the power sources between the two groups, we test if HBLR Sy2s are dominated by active galactic nuclei (AGNs), and if non-HBLR Sy2s are dominated by starbursts. We show that: (1) HBLR Sy2s have larger accretion rates than non-HBLR Sy2s; (2) HBLR Sy2s have larger [Ne V] λ14.32/[Ne II] λ12.81 and [O IV] λ25.89/[Ne II] λ...

  4. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical issue aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, aimed at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
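    Both verification steps named above can be illustrated on the simplest possible "code", a second-order central difference for u''(x). Method of manufactured solutions: pick u(x) = sin(x) so the exact answer is known analytically; Richardson-style convergence check: halve the grid spacing and infer the observed order of accuracy from the error ratio. This is a toy sketch of the general idea, not the GBS procedure itself:

```python
import numpy as np

# Code verification sketch: a manufactured solution u(x) = sin(x) gives
# the exact second derivative -sin(x); comparing errors at two grid
# spacings (ratio 2) yields the observed order of accuracy, which should
# match the scheme's formal order (2 for central differences).
def second_derivative(u, h):
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

def max_error(n):
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    approx = second_derivative(np.sin(x), h)
    exact = -np.sin(x[1:-1])
    return np.abs(approx - exact).max()

e1, e2 = max_error(101), max_error(201)   # h and h/2
order = np.log(e1 / e2) / np.log(2.0)
print(f"observed order of accuracy ~ {order:.2f} (formal order: 2)")
```

    If the observed order fails to match the formal order, the discretization is implemented incorrectly; that is exactly the signal code verification looks for.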

  5. Application of the spine-layer jet radiation model to outbursts in the broad-line radio galaxy 3C 120

    Science.gov (United States)

    Janiak, M.; Sikora, M.; Moderski, R.

    2016-05-01

    We present a detailed Fermi/LAT data analysis for the broad-line radio galaxy 3C 120. This source has recently entered a state of increased γ-ray activity, which manifested itself in two major flares detected by Fermi/LAT in 2014 September and 2015 April with no significant flux changes reported at other wavelengths. We analyse the available data, focusing our attention on the aforementioned outbursts. We find a very fast variability time-scale during the flares (of the order of hours) together with a significant γ-ray flux increase. We show that the ~6.8 yr averaged γ-ray emission of 3C 120 is likely a sum of the external-radiation Compton and the synchrotron self-Compton radiative components. To address the problem of the violent γ-ray flares and fast variability, we model the jet radiation by dividing the jet structure into two components: a wide and relatively slow outer layer and a fast, narrow spine. We show that with the addition of the fast spine, occasionally bent towards the observer, we are able to explain the observed spectral energy distribution of 3C 120 during flares, with Compton-upscattered broad-line region and dusty torus photons as the main γ-ray emission mechanism.

  6. CAGE IIIA Distributed Simulation Design Methodology

    Science.gov (United States)

    2014-05-01

    … Guide for Understanding and Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology are understanding how to design it and to define the operation so that it is available in the other nation's simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this

  7. Neutrino-heated stars and broad-line emission from active galactic nuclei

    Science.gov (United States)

    Macdonald, James; Stanev, Todor; Biermann, Peter L.

    1991-01-01

    Nonthermal radiation from active galactic nuclei indicates the presence of highly relativistic particles. The interaction of these high-energy particles with matter and photons gives rise to a flux of high-energy neutrinos. In this paper, the influence of the expected high neutrino fluxes on the structure and evolution of single, main-sequence stars is investigated. Sequences of models of neutrino-heated stars in thermal equilibrium are presented for masses of 0.25, 0.5, 0.8, and 1.0 solar mass. In addition, a set of evolutionary sequences for a mass of 0.5 solar mass has been computed for different assumed values of the incident neutrino energy flux. It is found that winds driven by the heating of the outer layers of neutrino-bloated stars by high-energy particles and hard electromagnetic radiation may satisfy the requirements of the model of Kazanas (1989) for the broad-line emission clouds in active galactic nuclei.

  8. Using soft systems methodology to develop a simulation of out-patient services.

    Science.gov (United States)

    Lehaney, B; Paul, R J

    1994-10-01

    Discrete event simulation is an approach to modelling a system as a set of mathematical equations and logical relationships; it is usually used for complex problems that are difficult to address with analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the out-patients' department at a local hospital. The long-term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
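    A discrete event simulation of an out-patient department can be sketched in a few lines: patients arrive at random, queue for one of a few consulting rooms, and the model reports the mean waiting time. The arrival rate, service time, and room count below are invented for illustration; deciding which activities and boundaries such a model should contain is precisely what the soft systems methodology above is used for:

```python
import heapq
import random

# Minimal discrete event simulation sketch of an out-patient clinic.
# All rates and counts are invented for illustration.
random.seed(1)
N_ROOMS, N_PATIENTS = 3, 1000

# Sample Poisson arrivals (exponential inter-arrival times, mean 5 min).
arrivals, t = [], 0.0
for _ in range(N_PATIENTS):
    t += random.expovariate(1 / 5.0)
    arrivals.append(t)

room_free = [0.0] * N_ROOMS        # time at which each room next frees up
heapq.heapify(room_free)
total_wait = 0.0
for arrive in arrivals:
    free_at = heapq.heappop(room_free)      # earliest-available room
    start = max(arrive, free_at)            # wait if no room is free yet
    total_wait += start - arrive
    service = random.expovariate(1 / 12.0)  # mean 12 min consultation
    heapq.heappush(room_free, start + service)

print(f"mean wait: {total_wait / N_PATIENTS:.1f} min")
```

    Varying the room count or appointment policy and re-running gives the kind of what-if comparison the modelling exercise is meant to support.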

  9. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  10. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers, although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, then the simulation is deemed "not invalid". If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine whether they indicate a modeling error or a modeling inadequacy, and whether a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model, and defined the limits of application. The tested simulation model is found to be acceptable but valid only in a certain dynamical range. Several insights into the deficiencies of the model are reported in the analysis, but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time and cost efficient simulation projects with
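    The per-maneuver pass/fail idea can be sketched as a validation metric compared against a declared criterion. The metric here (RMS error normalized by the signal's RMS), the stand-in signals, and the 15% threshold are all invented for illustration; the thesis declares its own metrics and criteria per maneuver:

```python
import numpy as np

# Sketch of a maneuver-based validation check: compute a metric over the
# measured and simulated signals, compare it against a declared criterion,
# and deem the model "not invalid" or invalid. Signals, metric choice,
# and the threshold are illustrative assumptions.
def relative_rms_error(measured, simulated):
    return np.sqrt(np.mean((simulated - measured) ** 2)) / \
           np.sqrt(np.mean(measured ** 2))

t = np.linspace(0.0, 4.0, 400)
measured = np.sin(2 * np.pi * 0.5 * t)          # stand-in for test data
simulated = 1.05 * np.sin(2 * np.pi * 0.5 * t)  # model over-predicts by 5%

metric = relative_rms_error(measured, simulated)
CRITERION = 0.15                                # declared acceptance threshold
verdict = "not invalid" if metric < CRITERION else "invalid"
print(f"metric = {metric:.3f} -> model deemed {verdict}")
```

    The asymmetric wording ("not invalid" rather than "valid") mirrors the methodology's stance that passing a criterion on a test maneuver never proves validity in general.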

  11. INVESTIGATING THE COMPLEX X-RAY SPECTRUM OF A BROAD-LINE 2MASS RED QUASAR: XMM-NEWTON OBSERVATION OF FTM 0830+3759

    International Nuclear Information System (INIS)

    Piconcelli, Enrico; Nicastro, Fabrizio; Fiore, Fabrizio; Vignali, Cristian; Bianchi, Stefano; Miniutti, Giovanni

    2010-01-01

    We report results from a 50 ks XMM-Newton observation of the dust-reddened broad-line quasar FTM 0830+3759 (z = 0.413) selected from the Faint Images of the Radio Sky at Twenty cm/Two Micron All Sky Survey red quasar survey. For this active galactic nucleus (AGN), a very short 9 ks Chandra exposure had suggested a feature-rich X-ray spectrum, and Hubble Space Telescope images revealed a very disturbed host galaxy morphology. Contrary to classical, optically selected quasars, the X-ray properties of red (i.e., with J - K_s > 1.7 and R - K_s > 4.0) broad-line quasars are still quite unexplored, although there is a growing consensus that, due to moderate obscuration, these objects can offer a unique view of spectral components typically swamped by the AGN light in normal, blue quasars. The XMM-Newton observation discussed here has definitely confirmed the complexity of the X-ray spectrum, revealing the presence of a cold (or mildly ionized) absorber with N_H ∼ 10^22 cm^-2 along the line of sight to the nucleus and a Compton reflection component accompanied by an intense Fe Kα emission line in this quasar with L_2-10 keV ∼ 5 × 10^44 erg s^-1. A soft-excess component is also required by the data. The match between the column density derived from our spectral analysis and that expected on the basis of reddening due to the dust suggests the possibility that both absorptions occur in the same medium. FTM 0830+3759 is characterized by an extinction/absorption-corrected X-ray-to-optical flux ratio α_ox = -2.3, which is steeper than expected on the basis of its UV luminosity. These findings indicate that the X-ray properties of FTM 0830+3759 differ from those typically observed for optically selected broad-line quasars with comparable hard X-ray luminosity.

  12. A methodological proposal for ancient kiln simulation analyzed by Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Flores, E.; Fernandes, B.; Flores, E.; Fernandez, B.

    1988-01-01

    In previous papers, scientists from different countries have reported incomplete methodologies for simulating ancient kiln firing conditions. Results from clays fired under a first working hypothesis are presented, leading to a methodological proposal for the simulation. This study was carried out using Moessbauer spectroscopy. The fired clay presented a spectrum of Fe2+ and Fe3+ in octahedral sites. The Moessbauer parameters, complemented by other studies, indicate that illite is the predominant clay mineral. The behaviour of the parameters as a function of firing temperature is reported. (author)

  13. A methodology to simulate the cutting process for a nuclear dismantling simulation based on a digital manufacturing platform

    International Nuclear Information System (INIS)

    Hyun, Dongjun; Kim, Ikjune; Lee, Jonghwan; Kim, Geun-Ho; Jeong, Kwan-Seong; Choi, Byung Seon; Moon, Jeikwon

    2017-01-01

    Highlights: • Goal is to provide existing tech. with cutting function handling dismantling process. • Proposed tech. can handle various cutting situations in the dismantlement activities. • Proposed tech. can be implemented in existing graphical process simulation software. • Simulation results have demonstrated that the proposed technology achieves its goal. • Proposed tech. enlarges application of graphic simulation into dismantlement activity. - Abstract: This study proposes a methodology to simulate the cutting process in a digital manufacturing platform for the flexible planning of nuclear facility decommissioning. During the planning phase of decommissioning, visualization and verification using process simulation can be powerful tools for the flexible planning of the dismantling process of highly radioactive, large and complex nuclear facilities. However, existing research and commercial solutions are not sufficient for such a situation because complete segmented digital models for the dismantling objects such as the reactor vessel, internal assembly, and closure head must be prepared before the process simulation. The preparation work has significantly impeded the broad application of process simulation due to the complexity and workload. The methodology of process simulation proposed in this paper can flexibly handle various dismantling processes including repetitive object cuttings over heavy and complex structures using a digital manufacturing platform. The proposed methodology, which is applied to dismantling scenarios of a Korean nuclear power plant in this paper, is expected to reduce the complexity and workload of nuclear dismantling simulations.

  14. Broad-lined Supernova 2016coi with a Helium Envelope

    Energy Technology Data Exchange (ETDEWEB)

    Yamanaka, Masayuki [Department of Physics, Faculty of Science and Engineering, Konan University, Okamoto, Kobe, Hyogo 658-8501 (Japan); Nakaoka, Tatsuya; Kawabata, Miho [Department of Physical Science, Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima 739-8526 (Japan); Tanaka, Masaomi [National Astronomical Observatory of Japan, National Institutes of Natural Sciences, Osawa, Mitaka, Tokyo 181-8588 (Japan); Maeda, Keiichi [Department of Astronomy, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Honda, Satoshi; Hosoya, Kensuke; Karita, Mayu; Morihana, Kumiko [Nishi-Harima Astronomical Observatory, Center for Astronomy, University of Hyogo, 407-2 Nishigaichi, Sayo-cho, Sayo, Hyogo 679-5313 (Japan); Hanayama, Hidekazu [Ishigakijima Astronomical Observatory, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 1024-1 Arakawa, Ishigaki, Okinawa 907-0024 (Japan); Morokuma, Tomoki [Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Imai, Masataka [Department of Cosmosciences, Graduate School of Science, Hokkaido University, Kita 10 Nishi8, Kita-ku, Sapporo 060-0810 (Japan); Kinugasa, Kenzo [Nobeyama Radio Observatory, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 462-2 Nobeyama, Minamimaki, Minamisaku, Nagano 384-1305 (Japan); Murata, Katsuhiro L. [Department of Astrophysics, Nagoya University, Chikusa-ku, Nagoya 464-8602 (Japan); Nishimori, Takefumi; Gima, Hirotaka; Ito, Ayano; Morikawa, Yuto; Murakami, Kotone [Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065 (Japan); Hashimoto, Osamu, E-mail: yamanaka@center.konan-u.ac.jp [Gunma Astronomical Observatory, Takayama, Gunma 377-0702 (Japan); and others

    2017-03-01

    We present the early-phase spectra and the light curves of the broad-lined (BL) supernova (SN) 2016coi from t = 7 to 67 days after the estimated explosion date. This SN was initially reported as a BL Type SN Ic (SN Ic-BL). However, we found that spectra up to t = 12 days exhibited the He i λ 5876, λ 6678, and λ 7065 absorption lines. We show that the smoothed and blueshifted spectra of normal SNe Ib are remarkably similar to the observed spectrum of SN 2016coi. The line velocities of SN 2016coi were similar to those of SNe Ic-BL and significantly faster than those of SNe Ib. Analyses of the line velocity and light curve suggest that the kinetic energy and the total ejecta mass of SN 2016coi are similar to those of SNe Ic-BL. Together with BL SNe 2009bb and 2012ap, for which the detection of He i was also reported, these SNe could be transitional objects between SNe Ic-BL and SNe Ib, and be classified as BL Type “Ib” SNe (SNe “Ib”-BL). Our work demonstrates the diversity of the outermost layer in BL SNe, which should be related to the variety of the evolutionary paths.

  15. EDDINGTON RATIO DISTRIBUTION OF X-RAY-SELECTED BROAD-LINE AGNs AT 1.0 < z < 2.2

    International Nuclear Information System (INIS)

    Suh, Hyewon; Hasinger, Günther; Steinhardt, Charles; Silverman, John D.; Schramm, Malte

    2015-01-01

    We investigate the Eddington ratio distribution of X-ray-selected broad-line active galactic nuclei (AGNs) in the redshift range 1.0 < z < 2.2, where the number density of AGNs peaks. Combining the optical and Subaru/Fiber Multi Object Spectrograph near-infrared spectroscopy, we estimate black hole masses for broad-line AGNs in the Chandra Deep Field South (CDF-S), Extended Chandra Deep Field South (E-CDF-S), and the XMM-Newton Lockman Hole (XMM-LH) surveys. AGNs with similar black hole masses show a broad range of AGN bolometric luminosities, which are calculated from X-ray luminosities, indicating that the accretion rate of black holes is widely distributed. We find a substantial fraction of massive black holes accreting significantly below the Eddington limit at z ≲ 2, in contrast to what is generally found for luminous AGNs at high redshift. Our analysis of observational selection biases indicates that the “AGN cosmic downsizing” phenomenon can be simply explained by the strong evolution of the comoving number density at the bright end of the AGN luminosity function, together with the corresponding selection effects. However, one might need to consider a correlation between the AGN luminosity and the accretion rate of black holes, in which luminous AGNs have higher Eddington ratios than low-luminosity AGNs, in order to understand the relatively small fraction of low-luminosity AGNs with high accretion rates in this epoch. Therefore, the observed downsizing trend could be interpreted as massive black holes with low accretion rates, which are relatively fainter than less-massive black holes with efficient accretion.
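    Eddington ratios of the kind surveyed above follow directly from a black hole mass and a bolometric luminosity. A minimal sketch, assuming the standard Eddington luminosity for ionized hydrogen; the example numbers are illustrative, not taken from the survey:

    ```python
    # Eddington ratio lambda_Edd = L_bol / L_Edd, with
    # L_Edd = 1.26e38 erg/s * (M_BH / M_sun) for ionized hydrogen.
    def eddington_ratio(l_bol_erg_s: float, m_bh_msun: float) -> float:
        """Bolometric luminosity in units of the Eddington luminosity."""
        l_edd = 1.26e38 * m_bh_msun  # erg/s
        return l_bol_erg_s / l_edd

    # A massive 1e9 M_sun black hole radiating at 1e46 erg/s accretes at
    # only ~8% of its Eddington limit, i.e., significantly sub-Eddington.
    ratio = eddington_ratio(1e46, 1e9)
    ```

    With these illustrative inputs the ratio is about 0.08, the kind of sub-Eddington value the abstract reports for massive black holes at z ≲ 2.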

  16. Evolutionary-Simulative Methodology in the Management of Social and Economic Systems

    Directory of Open Access Journals (Sweden)

    Konyavskiy V.A.

    2017-01-01

    The article outlines the main provisions of the evolutionary-simulative methodology (ESM), which is a methodology for mathematical modeling of equilibrium random processes (CPR) widely used in the economy. It discusses the basic directions of using ESM solutions for the management of social and economic systems.

  17. An automated methodology development. [software design for combat simulation]

    Science.gov (United States)

    Hawley, L. R.

    1985-01-01

    The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development efforts. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real-time, and one-to-one correlation between the object states and real world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem data base directly to enhance the code efficiency by, e.g., eliminating non-used subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.

  18. Methodology for eliciting, encoding and simulating human decision making behaviour

    OpenAIRE

    Rider, Conrad Edgar Scott

    2012-01-01

    Agent-based models (ABM) are an increasingly important research tool for describing and predicting interactions among humans and their environment. A key challenge for such models is the ability to faithfully represent human decision making with respect to observed behaviour. This thesis aims to address this challenge by developing a methodology for empirical measurement and simulation of decision making in humanenvironment systems. The methodology employs the Beliefs-Desires-I...

  19. Validation of response simulation methodology of Albedo dosemeter

    International Nuclear Information System (INIS)

    Freitas, B.M.; Silva, A.X. da

    2016-01-01

    The Instituto de Radioprotecao e Dosimetria developed and runs a neutron TLD albedo individual monitoring service. To optimize the dose calculation algorithm and to infer new calibration factors, the response of this dosemeter was simulated. To validate the employed methodology, it was applied to a problem from the QUADOS (Quality Assurance of Computational Tools for Dosimetry) intercomparison, which was designed to evaluate dosimetric problems, one of them being the calculation of the response of a generic albedo dosemeter. The obtained results were compared with those of other models and with the reference one, with good agreement. (author)

  20. THE COMPLEX CIRCUMNUCLEAR ENVIRONMENT OF THE BROAD-LINE RADIO GALAXY 3C 390.3 REVEALED BY CHANDRA HETG

    Energy Technology Data Exchange (ETDEWEB)

    Tombesi, F.; Kallman, T.; Leutenegger, M. A. [X-ray Astrophysics Laboratory, NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Reeves, J. N. [Center for Space Science and Technology, University of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250 (United States); Reynolds, C. S.; Mushotzky, R. F.; Behar, E. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Braito, V. [INAF—Osservatorio Astronomico di Brera, Via Bianchi 46, I-23807 Merate (Italy); Cappi, M., E-mail: francesco.tombesi@nasa.gov, E-mail: ftombesi@astro.umd.edu [Department of Physics, Technion 32000, Haifa 32000 (Israel)

    2016-10-20

    We present the first high spectral resolution X-ray observation of the broad-line radio galaxy 3C 390.3, obtained with the high-energy transmission grating spectrometer on board the Chandra X-ray Observatory. The spectrum shows complex emission and absorption features in both the soft X-rays and the Fe K band. We detect emission and absorption lines in the energy range E = 700–1000 eV associated with ionized Fe L transitions (Fe XVII–XX). An emission line at the energy E ≃ 6.4 keV consistent with Fe Kα is also observed. Our best-fit model requires at least three different components: (i) a hot emission component likely associated with the hot interstellar medium in this elliptical galaxy, with temperature kT = 0.5 ± 0.1 keV; (ii) a warm absorber with ionization parameter log ξ = 2.3 ± 0.5 erg s^-1 cm, column density log N_H = 20.7 ± 0.1 cm^-2, and outflow velocity v_out < 150 km s^-1; and (iii) a low-ionization reflection component in the Fe K band likely associated with the optical broad-line region or the outer accretion disk. This evidence suggests the possibility that we are looking directly down the ionization cone of this active galaxy and that the central X-ray source photoionizes only along the unobscured cone. This is overall consistent with the angle-dependent unified picture of active galactic nuclei.

  1. SN 2010ay IS A LUMINOUS AND BROAD-LINED TYPE Ic SUPERNOVA WITHIN A LOW-METALLICITY HOST GALAXY

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M.; Foley, R. J.; Chornock, R.; Chomiuk, L.; Berger, E. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Valenti, S.; Smartt, S.; Botticella, M. T. [Astrophysics Research Centre, School of Maths and Physics, Queen' s University, Belfast BT7 1NN (United Kingdom); Hurley, K. [Space Sciences Laboratory, University of California Berkeley, 7 Gauss Way, Berkeley, CA 94720 (United States); Barthelmy, S. D.; Gehrels, N.; Cline, T. [NASA Goddard Space Flight Center, Code 661, Greenbelt, MD 20771 (United States); Levesque, E. M. [CASA, Department of Astrophysical and Planetary Sciences, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Narayan, G. [Department of Physics, Harvard University, Cambridge, MA 02138 (United States); Briggs, M. S.; Connaughton, V. [CSPAR, University of Alabama in Huntsville, Huntsville, AL (United States); Terada, Y. [Department of Physics, Saitama University, Shimo-Okubo, Sakura-ku, Saitama-shi, Saitama 338-8570 (Japan); Golenetskii, S.; Mazets, E., E-mail: nsanders@cfa.harvard.edu [Ioffe Physico-Technical Institute, Laboratory for Experimental Astrophysics, 26 Polytekhnicheskaya, St. Petersburg 194021 (Russian Federation); and others

    2012-09-10

    We report on our serendipitous pre-discovery detection and follow-up observations of the broad-lined Type Ic supernova (SN Ic) 2010ay at z = 0.067 imaged by the Pan-STARRS1 3π survey just ∼4 days after explosion. The supernova (SN) had a peak luminosity, M_R ≈ -20.2 mag, significantly more luminous than known GRB-SNe and one of the most luminous SNe Ib/c ever discovered. The absorption velocity of SN 2010ay is v_Si ≈ 19 × 10^3 km s^-1 at ∼40 days after explosion, 2-5 times higher than other broad-lined SNe and similar to the GRB-SN 2010bh at comparable epochs. Moreover, the velocity declines ∼2 times slower than other SNe Ic-BL and GRB-SNe. Assuming that the optical emission is powered by radioactive decay, the peak magnitude implies the synthesis of an unusually large mass of ^56Ni, M_Ni = 0.9 M_Sun. Applying scaling relations to the light curve, we estimate a total ejecta mass, M_ej ≈ 4.7 M_Sun, and total kinetic energy, E_K ≈ 11 × 10^51 erg. The ratio of M_Ni to M_ej is ∼2 times as large for SN 2010ay as typical GRB-SNe and may suggest an additional energy reservoir. The metallicity (log(O/H)_PP04 + 12 = 8.19) of the explosion site within the host galaxy places SN 2010ay in the low-metallicity regime populated by GRB-SNe, and ∼0.5 (0.2) dex lower than that typically measured for the host environments of normal (broad-lined) SNe Ic. We constrain any gamma-ray emission with E_γ ≲ 6 × 10^48 erg (25-150 keV), and our deep radio follow-up observations with the Expanded Very Large Array rule out relativistic ejecta with energy E ≳ 10^48 erg. We therefore rule out the association of a relativistic outflow like those that accompanied SN 1998bw and traditional long-duration gamma-ray bursts (GRBs), but we place less
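    As a rough sanity check, the quoted ejecta mass and velocity are consistent with the quoted kinetic energy under the common uniform-density scaling E_K ≈ (3/10) M_ej v². That scaling is an assumption made here for illustration, not necessarily the relation the authors applied:

    ```python
    # Consistency check of the quoted E_K using the uniform-density scaling
    # E_K ~ (3/10) * M_ej * v^2 (an assumed relation, illustration only).
    M_SUN_G = 1.989e33           # solar mass in grams
    m_ej = 4.7 * M_SUN_G         # ejecta mass quoted in the abstract
    v = 19e3 * 1e5               # 19,000 km/s converted to cm/s
    e_k = 0.3 * m_ej * v**2      # erg; lands near the quoted 11e51 erg
    ```

    The result is about 1.0 × 10^52 erg, close to the abstract's E_K ≈ 11 × 10^51 erg.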

  2. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...
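    The idea of accounting for where and when soft errors strike can be sketched as a vulnerability-weighted sum: a raw memory soft error rate only contributes when the stored data is live, i.e., written and later read. All names and numbers below are hypothetical, not taken from the paper:

    ```python
    # Vulnerability-weighted soft error accumulation: each memory's raw SER
    # (in FIT) is scaled by the fraction of time its contents are "live".
    def effective_ser(memories):
        """memories: list of (raw_ser_fit, vulnerable_fraction) pairs."""
        return sum(ser * frac for ser, frac in memories)

    mems = [
        (1000.0, 0.10),  # L1 cache: data overwritten quickly, short exposure
        (500.0, 0.60),   # L2 cache: data lives longer before being read
        (2000.0, 0.05),  # register file: mostly dead values
    ]
    naive = sum(ser for ser, _ in mems)  # 3500 FIT: pessimistic accumulation
    weighted = effective_ser(mems)       # 500 FIT: timing/location-aware
    ```

    The gap between the naive 3500 FIT and the weighted 500 FIT is the pessimism the paper's methodology is designed to remove.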

  3. A simulation methodology of spacer grid residual spring deflection for predictive and interpretative purposes

    International Nuclear Information System (INIS)

    Kim, K. T.; Kim, H. K.; Yoon, K. H.

    1994-01-01

    In-reactor fuel rod support conditions against fretting wear-induced damage can be evaluated from the spacer grid residual spring deflection. In order to predict the spacer grid residual spring deflection as a function of burnup for various spring designs, a simulation methodology for spacer grid residual spring deflection has been developed and implemented in the GRIDFORCE program. The simulation methodology takes into account the cladding creep rate, initial spring deflection, initial spring force, and spring force relaxation rate as the key parameters affecting the residual spring deflection. The simulation methodology developed in this study can be utilized as an effective tool in evaluating the capability of a newly designed spacer grid spring to prevent fretting wear-induced damage.
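    The interplay of the four key parameters can be illustrated with a toy model. This is emphatically not the GRIDFORCE formulation; the functions, constant rates, and numbers are hypothetical, where real models would use burnup-dependent correlations:

    ```python
    # Toy model: residual spring deflection is consumed by inward cladding
    # creep, while stress relaxation erodes the force the spring can exert.
    def residual_deflection(burnup, d0, creep_rate):
        """d0: initial spring deflection (mm); creep_rate: mm per MWd/kgU."""
        return max(0.0, d0 - creep_rate * burnup)

    def spring_force(burnup, f0, relax_rate):
        """f0: initial spring force (N); relax_rate: fraction lost per MWd/kgU."""
        return f0 * max(0.0, 1.0 - relax_rate * burnup)

    # With d0 = 0.5 mm and creep_rate = 0.01 mm/(MWd/kgU), rod support is
    # lost (residual deflection reaches zero) at 50 MWd/kgU.
    ```

    A design comparison would then amount to sweeping burnup for each candidate spring and checking where the residual deflection crosses zero.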

  4. An optical and near-infrared polarization survey of Seyfert and broad-line radio galaxies. Pt. 1

    International Nuclear Information System (INIS)

    Brindle, C.; Hough, J.H.; Bailey, J.A.; Axon, D.J.; Ward, M.J.; McLean, I.S.

    1990-01-01

    We present new broad-band optical and near-infrared (0.44-2.2 μm) flux density and polarization measurements of a sample of 71 Seyfert galaxies and three broad-line radio galaxies. We confirm the results of earlier studies which show that the polarization of Seyferts is generally low in the V-band and at longer wavelengths, but in the B-band somewhat higher polarizations are commonly found. After correction has been made for the effects of stellar dilution, we find that Seyfert 2 nuclei are probably more highly polarized than Seyfert 1's. The small sample of Seyfert 2's selected using the 'warm' IRAS colour criterion tend to be more highly polarised than those selected by optical techniques. (author)

  5. Data-driven simulation methodology using DES 4-layer architecture

    Directory of Open Access Journals (Sweden)

    Aida Saez

    2016-05-01

    In this study, we present a methodology for building data-driven simulation models of manufacturing plants. We go further than other research proposals and suggest focusing simulation model development under a 4-layer architecture (network, logic, database and visual reality). The Network layer includes the system infrastructure. The Logic layer covers the operations planning and control system and the material handling equipment system. The Database holds all the information needed to perform the simulation, the results used for analysis, and the values that the Logic layer uses to manage the plant. Finally, the Visual Reality layer displays an augmented reality system including not only the machinery and its movement but also blackboards and other Andon elements. This architecture provides numerous advantages, as it helps to build a simulation model that consistently considers the internal logistics in a very flexible way.
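    The 4-layer split described above can be sketched as cooperating components; the class and method names below are illustrative assumptions, not interfaces from the paper:

    ```python
    # Minimal sketch of the 4-layer architecture: Network (infrastructure),
    # Logic (planning/control), Database (parameters and results), and
    # VisualReality (augmented-reality display). Hypothetical interfaces.
    class Database:
        def __init__(self):
            self.store = {}
        def write(self, table, key, value):
            self.store[(table, key)] = value
        def query(self, table, key):
            return self.store.get((table, key))

    class Logic:
        """Operations planning/control; reads its decisions from the Database."""
        def __init__(self, db):
            self.db = db
        def next_operation(self, machine):
            return self.db.query("plan", machine)

    class Network:
        """System infrastructure: moves items between stations."""
        def route(self, item, src, dst):
            return (item, src, dst)

    class VisualReality:
        """Augmented-reality view, including Andon boards."""
        def render(self, state):
            return f"rendering {len(state)} elements"

    db = Database()
    db.write("plan", "lathe-1", "turn shaft")
    op = Logic(db).next_operation("lathe-1")
    ```

    The point of the split is the one the abstract makes: the Logic layer manages the plant only through values held in the Database, which keeps each layer replaceable.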

  6. Collecting real-time data with a behavioral simulation: A new methodological trait

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    interactive methods of collecting data [1, 2]. To collect real-time data as opposed to retrospective data, new methodological traits are needed. The paper proposes that a behavioral simulation supported by Web technology is a valid new research strategy to handle the collection of real-time data. Adapting...... the knowledge on agent-based modeling [3, 4], a behavioral simulation synergizes the benefits of self-administered questionnaires and the experimental design, and furthermore, introduces role-playing [5, 6] and scenario [7-11] strategies as very effective methods to ensure high interaction with the respondents....... The Web technology is the key to make a simulation for data collection objectives 'light'. Additionally, Web technology can be a solution to some of the challenges facing the traditional research methodologies such as time, ease, flexibility and cost, but perhaps more interesting, a possible solution...

  7. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    Science.gov (United States)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent-oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are not many case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. It demonstrates how AOM is useful for epidemiology and ecology studies, and hence further validates the AOM in a qualitative manner.

  8. Simulation enabled safeguards assessment methodology

    International Nuclear Information System (INIS)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-01-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment Methodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  10. Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective

    International Nuclear Information System (INIS)

    Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang

    2017-01-01

    The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.
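    The virial factor discussed above connects the observed BLR kinematics to the black hole mass via M_BH = f V² R_BLR / G, so an independent dynamical mass fixes f directly. The numbers below are illustrative assumptions, not measurements from the paper:

    ```python
    # Solving M_BH = f * V^2 * R_BLR / G for the virial factor f, given an
    # independently measured (dynamical) black hole mass. Illustrative values.
    G = 6.674e-8                       # gravitational constant, cgs
    M_SUN = 1.989e33                   # solar mass in grams
    v = 3000e5                         # broad-line FWHM: 3000 km/s in cm/s
    r_blr = 10 * 86400 * 2.998e10      # BLR radius: 10 light-days in cm
    virial_product = v**2 * r_blr / G  # grams
    f = (1e7 * M_SUN) / virial_product # dynamical mass of 1e7 M_sun assumed
    # f comes out near 0.6, comparable to the sample mean quoted above.
    ```

    Small f values of this order imply V² R_BLR / G already nearly accounts for the dynamical mass, which is what motivates the thick-disk interpretation of the BLR.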

  11. Non-plant referenced simulator methodology to meet new 10 CFR 55.45 rule

    International Nuclear Information System (INIS)

    Ibarra, J.G.

    1988-01-01

    The new 10 CFR 55.45 rule on operating tests necessitates that simulators be upgraded to meet the new requirements. This paper presents the human factors work done on an NRC-approved guidance document, sponsored by four utilities, to develop a non-plant-referenced simulation facility. The human factors work developed the simulator process flow and criteria and integrated all of the development work into the simulation facility plan. It provided the mechanism to solidify ideas and the foundation for the simulator development methodology.

  12. A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators.

    Science.gov (United States)

    Sánchez, Borja Bordel; Alcarria, Ramón; Sánchez-Picot, Álvaro; Sánchez-de-Rivera, Diego

    2017-09-22

    Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of the lack of co-simulation instruments to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.

  14. Discontinuous Galerkin methodology for Large-Eddy Simulations of wind turbine airfoils

    DEFF Research Database (Denmark)

    Frére, A.; Sørensen, Niels N.; Hillewaert, K.

    2016-01-01

    This paper aims at evaluating the potential of the Discontinuous Galerkin (DG) methodology for Large-Eddy Simulation (LES) of wind turbine airfoils. The DG method has shown high accuracy, excellent scalability and capacity to handle unstructured meshes. It is however not used in the wind energy...... sector yet. The present study aims at evaluating this methodology on an application which is relevant for that sector and focuses on blade section aerodynamics characterization. To be pertinent for large wind turbines, the simulations would need to be at low Mach numbers (M ≤ 0.3) where compressible...... at low and high Reynolds numbers and compares the results to state-of-the-art models used in industry, namely the panel method (XFOIL with boundary layer modeling) and Reynolds Averaged Navier-Stokes (RANS). At low Reynolds number (Re = 6 × 104), involving laminar boundary layer separation and transition...

  15. 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Koziel, Slawomir; Kacprzyk, Janusz; Leifsson, Leifur; Ören, Tuncer

    2015-01-01

    This book includes extended and revised versions of a set of selected papers from the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2013), which was co-organized by Reykjavik University (RU) and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). SIMULTECH 2013 was held in cooperation with the ACM SIGSIM - Special Interest Group (SIG) on SImulation and Modeling (SIM), Movimento Italiano Modellazione e Simulazione (MIMOS) and the AIS Special Interest Group on Modeling and Simulation (AIS SIGMAS), and technically co-sponsored by the Society for Modeling & Simulation International (SCS), Liophant Simulation, Simulation Team and the International Federation for Information Processing (IFIP). These proceedings bring together researchers, engineers, applied mathematicians and practitioners working on advances and applications in the field of system simulation.

  16. 2014 International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Ören, Tuncer; Kacprzyk, Janusz; Filipe, Joaquim

    2015-01-01

    The present book includes a set of selected extended papers from the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014), held in Vienna, Austria, from 28 to 30 August 2014. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2014 received 167 submissions, from 45 countries, in all continents. After a double blind paper review performed by the Program Committee, 23% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2014. Commitment to high quality standards is a major concern of SIMULTEC...

  17. The SIMRAND methodology - Simulation of Research and Development Projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1984-01-01

    In research and development projects, a commonly occurring management decision is concerned with the optimum allocation of resources to achieve the project goals. Because of resource constraints, management has to make a decision regarding the set of proposed systems or tasks which should be undertaken. SIMRAND (Simulation of Research and Development Projects) is a methodology which was developed for aiding management in this decision. Attention is given to a problem description, aspects of model formulation, the reduction phase of the model solution, the simulation phase, and the evaluation phase. The implementation of the considered approach is illustrated with the aid of an example which involves a simplified network of the type used to determine the price of silicon solar cells.

  18. A methodology for thermodynamic simulation of high temperature, internal reforming fuel cell systems

    Science.gov (United States)

    Matelli, José Alexandre; Bazzo, Edson

    This work presents a methodology for the simulation of fuel cells to be used for power production in small on-site power/cogeneration plants that use natural gas as fuel. The methodology contemplates thermodynamic and electrochemical aspects related to molten carbonate and solid oxide fuel cells (MCFC and SOFC, respectively). Internal steam reforming of the natural gas hydrocarbons is considered for hydrogen production. From inputs such as cell potential, cell power, number of cells in the stack, ancillary systems power consumption, reformed natural gas composition and hydrogen utilization factor, the simulation gives the natural gas consumption, the anode and cathode stream gas temperatures and compositions, and the thermodynamic, electrochemical and practical efficiencies. Both energetic and exergetic methods are considered for performance analysis. The results obtained from the thermodynamic simulation of natural gas reforming show that hydrogen production is maximum around 700 °C for a steam/carbon ratio equal to 3. In agreement with the literature, the results indicate that the SOFC is more efficient than the MCFC.
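The "practical efficiency" mentioned above is not defined in the abstract; a common definition is net electrical output over the fuel's chemical energy input (lower heating value). A minimal sketch with purely illustrative numbers (the function name and figures are assumptions, not the authors' values):

```python
def electrical_efficiency(stack_power_w, ancillary_power_w, fuel_flow_kg_s, lhv_j_kg):
    """Net electrical efficiency: (stack output - ancillary draw) / fuel energy rate."""
    return (stack_power_w - ancillary_power_w) / (fuel_flow_kg_s * lhv_j_kg)

# Illustrative numbers: 250 kW stack, 15 kW ancillaries,
# 0.01 kg/s natural gas with LHV ~ 50 MJ/kg.
eta = electrical_efficiency(250e3, 15e3, 0.01, 50e6)
print(f"net electrical efficiency = {eta:.2f}")
```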

  19. Development and Application of a Clinical Microsystem Simulation Methodology for Human Factors-Based Research of Alarm Fatigue.

    Science.gov (United States)

    Kobayashi, Leo; Gosbee, John W; Merck, Derek L

    2017-07-01

    (1) To develop a clinical microsystem simulation methodology for alarm fatigue research with a human factors engineering (HFE) assessment framework and (2) to explore its application to the comparative examination of different approaches to patient monitoring and provider notification. Problems with the design, implementation, and real-world use of patient monitoring systems result in alarm fatigue. A multidisciplinary team is developing an open-source tool kit to promote bedside informatics research and mitigate alarm fatigue. Simulation, HFE, and computer science experts created a novel simulation methodology to study alarm fatigue. Featuring multiple interconnected simulated patient scenarios with scripted timeline, "distractor" patient care tasks, and triggered true and false alarms, the methodology incorporated objective metrics to assess provider and system performance. Developed materials were implemented during institutional review board-approved study sessions that assessed and compared an experimental multiparametric alerting system with a standard monitor telemetry system for subject response, use characteristics, and end-user feedback. A four-patient simulation setup featuring objective metrics for participant task-related performance and response to alarms was developed along with accompanying structured HFE assessment (questionnaire and interview) for monitor systems use testing. Two pilot and four study sessions with individual nurse subjects elicited true alarm and false alarm responses (including diversion from assigned tasks) as well as nonresponses to true alarms. In-simulation observation and subject questionnaires were used to test the experimental system's approach to suppressing false alarms and alerting providers. A novel investigative methodology applied simulation and HFE techniques to replicate and study alarm fatigue in controlled settings for systems assessment and experimental research purposes.

  20. DISCOVERY OF ULTRA-FAST OUTFLOWS IN A SAMPLE OF BROAD-LINE RADIO GALAXIES OBSERVED WITH SUZAKU

    International Nuclear Information System (INIS)

    Tombesi, F.; Sambruna, R. M.; Mushotzky, R. F.; Reeves, J. N.; Gofford, J.; Braito, V.; Ballo, L.; Cappi, M.

    2010-01-01

    We present the results of a uniform and systematic search for blueshifted Fe K absorption lines in the X-ray spectra of five bright broad-line radio galaxies observed with Suzaku. We detect, for the first time in radio-loud active galactic nuclei (AGNs) at X-rays, several absorption lines at energies greater than 7 keV in three out of five sources, namely, 3C 111, 3C 120, and 3C 390.3. The lines are detected with high significance according to both the F-test and extensive Monte Carlo simulations. Their likely interpretation as blueshifted Fe XXV and Fe XXVI K-shell resonance lines implies an origin from highly ionized gas outflowing with mildly relativistic velocities, in the range v ≅ 0.04-0.15c. A fit with specific photoionization models gives ionization parameters in the range log ξ ≅ 4-5.6 erg s^-1 cm and column densities of N_H ≅ 10^22-10^23 cm^-2. These characteristics are very similar to those of the ultra-fast outflows (UFOs) previously observed in radio-quiet AGNs. Their estimated location within ∼0.01-0.3 pc of the central supermassive black hole suggests an origin related to accretion disk winds/outflows. Depending on the absorber covering fraction, the mass outflow rate of these UFOs can be comparable to the accretion rate, and their kinetic power can correspond to a significant fraction of the bolometric luminosity and is comparable to their typical jet power. Therefore, these UFOs can play a significant role in the expected feedback from the AGN to the surrounding environment and can give us further clues on the relation between the accretion disk and the formation of winds/jets in both radio-quiet and radio-loud AGNs.
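As a rough illustration of how such outflow velocities follow from blueshifted line energies (not a calculation from the paper), the relativistic Doppler formula E_obs/E_rest = sqrt((1+β)/(1−β)) for motion toward the observer can be inverted for β = v/c; the observed energy below is a hypothetical value:

```python
def outflow_beta(e_obs_kev, e_rest_kev):
    """Return v/c for a line blueshifted from e_rest_kev to e_obs_kev.

    Inverts E_obs/E_rest = sqrt((1 + b) / (1 - b)):
    with r = (E_obs/E_rest)^2, b = (r - 1) / (r + 1).
    """
    r = (e_obs_kev / e_rest_kev) ** 2
    return (r - 1.0) / (r + 1.0)

# Fe XXVI Ly-alpha rest energy ~6.97 keV; 7.8 keV is an illustrative
# observed energy, yielding a mildly relativistic v/c ~ 0.11.
beta = outflow_beta(7.8, 6.97)
print(f"v/c = {beta:.3f}")
```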

  1. 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Kacprzyk, Janusz; Ören, Tuncer; Filipe, Joaquim

    2016-01-01

    The present book includes a set of selected extended papers from the 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2015), held in Colmar, France, from 21 to 23 July 2015. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2015 received 102 submissions, from 36 countries, in all continents. After a double blind paper review performed by the Program Committee, 19% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2015. Commitment to high quality standards is a major concern of SIMULTECH t...

  2. CHEMICAL EVOLUTION OF THE UNIVERSE AT 0.7 < z < 1.6 DERIVED FROM ABUNDANCE DIAGNOSTICS OF THE BROAD-LINE REGION OF QUASARS

    Energy Technology Data Exchange (ETDEWEB)

    Sameshima, H. [Laboratory of Infrared High-resolution Spectroscopy, Koyama Astronomical Observatory, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-ku, Kyoto 603-8555 (Japan); Yoshii, Y.; Kawara, K., E-mail: sameshima@cc.kyoto-su.ac.jp [Institute of Astronomy, School of Science, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan)

    2017-01-10

    We present an analysis of Mg ii λ2798 and Fe ii UV emission lines for archival Sloan Digital Sky Survey (SDSS) quasars to explore the diagnostics of the magnesium-to-iron abundance ratio in a broad-line region cloud. Our sample consists of 17,432 quasars selected from the SDSS Data Release 7 with a redshift range of 0.72 < z < 1.63. A strong anticorrelation between the Mg ii equivalent width (EW) and the Eddington ratio is found, while only a weak positive correlation is found between the Fe ii EW and the Eddington ratio. To investigate the origin of these differing behaviors of the Mg ii and Fe ii emission lines, we perform photoionization calculations using the Cloudy code, where constraints from recent reverberation mapping studies are considered. We find from the calculations that (1) Mg ii and Fe ii emission lines are created in different regions of a photoionized cloud, and (2) their EW correlations with the Eddington ratio can be explained by just changing the cloud gas density. These results indicate that the Mg ii/Fe ii flux ratio, which has been used as a first-order proxy for the Mg/Fe abundance ratio in chemical evolution studies with quasar emission lines, depends largely on the cloud gas density. By correcting for this density dependence, we propose new diagnostics of the Mg/Fe abundance ratio for a broad-line region cloud. Comparing the derived Mg/Fe abundance ratios with chemical evolution models, we suggest that α-enrichment by mass loss from metal-poor intermediate-mass stars occurred at z ∼ 2 or earlier.

  3. Long-Term Monitoring of the Broad-Line Region Properties in a Selected Sample of AGN

    Energy Technology Data Exchange (ETDEWEB)

    Ilić, Dragana [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Shapovalova, Alla I. [Special Astrophysical Observatory, Russian Academy of Sciences, Nizhnii Arkhyz (Russian Federation); Popović, Luka Č. [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Astronomical Observatory, Belgrade (Serbia); Chavushyan, Vahram [Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla (Mexico); Burenkov, Alexander N. [Special Astrophysical Observatory, Russian Academy of Sciences, Nizhnii Arkhyz (Russian Federation); Kollatschny, Wolfram [Institut fuer Astrophysik, Universitaet Goettingen, Göttingen (Germany); Kovačević, Andjelka [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Marčeta-Mandić, Sladjana [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Astronomical Observatory, Belgrade (Serbia); Rakić, Nemanja [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Faculty of Science, University of Banjaluka, Banjaluka, Republic of Srpska (Bosnia and Herzegovina); La Mura, Giovanni; Rafanelli, Piero, E-mail: dilic@math.rs [Department of Physics and Astronomy, University of Padova, Padova (Italy)

    2017-09-14

    We present the results of the long-term optical monitoring campaign of active galactic nuclei (AGN) coordinated by the Special Astrophysical Observatory of the Russian Academy of Sciences. This campaign has produced a remarkable set of optical spectra, since we have monitored, over several decades, different types of broad-line (type 1) AGN: a Seyfert 1, a double-peaked line emitter, radio-loud and radio-quiet AGN, and a supermassive binary black hole candidate. Our analysis of the properties of the broad-line region (BLR) of these objects is based on the variability of the broad emission lines. We give a comparative review of the variability properties of the broad emission lines and the BLR of seven different type 1 AGNs, emphasizing some important results, such as the variability rate, the BLR geometry, and the presence of the intrinsic Baldwin effect. We discuss the differences and similarities in the continuum and emission-line variability, focusing on the impact of our results on the determination of supermassive black hole masses from BLR properties.

  4. Long-Term Monitoring of the Broad-Line Region Properties in a Selected Sample of AGN

    Directory of Open Access Journals (Sweden)

    Dragana Ilić

    2017-09-01

    We present the results of the long-term optical monitoring campaign of active galactic nuclei (AGN) coordinated by the Special Astrophysical Observatory of the Russian Academy of Sciences. This campaign has produced a remarkable set of optical spectra, since we have monitored, over several decades, different types of broad-line (type 1) AGN: a Seyfert 1, a double-peaked line emitter, radio-loud and radio-quiet AGN, and a supermassive binary black hole candidate. Our analysis of the properties of the broad-line region (BLR) of these objects is based on the variability of the broad emission lines. We give a comparative review of the variability properties of the broad emission lines and the BLR of seven different type 1 AGNs, emphasizing some important results, such as the variability rate, the BLR geometry, and the presence of the intrinsic Baldwin effect. We discuss the differences and similarities in the continuum and emission-line variability, focusing on the impact of our results on the determination of supermassive black hole masses from BLR properties.

  5. Stability of the Broad-line Region Geometry and Dynamics in Arp 151 Over Seven Years

    Science.gov (United States)

    Pancoast, A.; Barth, A. J.; Horne, K.; Treu, T.; Brewer, B. J.; Bennert, V. N.; Canalizo, G.; Gates, E. L.; Li, W.; Malkan, M. A.; Sand, D.; Schmidt, T.; Valenti, S.; Woo, J.-H.; Clubb, K. I.; Cooper, M. C.; Crawford, S. M.; Hönig, S. F.; Joner, M. D.; Kandrashoff, M. T.; Lazarova, M.; Nierenberg, A. M.; Romero-Colmenero, E.; Son, D.; Tollerud, E.; Walsh, J. L.; Winkler, H.

    2018-04-01

    The Seyfert 1 galaxy Arp 151 was monitored as part of three reverberation mapping campaigns spanning 2008–2015. We present modeling of these velocity-resolved reverberation mapping data sets using a geometric and dynamical model for the broad-line region (BLR). By modeling each of the three data sets independently, we infer the evolution of the BLR structure in Arp 151 over a total of 7 yr and constrain the systematic uncertainties in nonvarying parameters such as the black hole mass. We find that the BLR geometry of a thick disk viewed close to face-on is stable over this time, although the size of the BLR grows by a factor of ∼2. The dynamics of the BLR are dominated by inflow, and the inferred black hole mass is consistent for the three data sets, despite the increase in BLR size. Combining the inference for the three data sets yields a black hole mass and statistical uncertainty of log10(M_BH/M_⊙) = 6.82 ± 0.09, with a standard deviation in individual measurements of 0.13 dex.
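Because the uncertainty is quoted in dex (a multiplicative factor), the linear-mass error bars implied by log10(M_BH/M_⊙) = 6.82 ± 0.09 are asymmetric; a minimal conversion sketch using the values from the abstract:

```python
# Convert the inferred log10 black hole mass (0.09 dex uncertainty)
# into solar masses; the dex interval maps to asymmetric linear errors.
log_mbh, sigma_dex = 6.82, 0.09

mbh = 10 ** log_mbh
lo, hi = 10 ** (log_mbh - sigma_dex), 10 ** (log_mbh + sigma_dex)
print(f"M_BH = {mbh:.2e} M_sun (+{hi - mbh:.1e} / -{mbh - lo:.1e})")
```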

  6. Development and validation of a new turbocharger simulation methodology for marine two stroke diesel engine modelling and diagnostic applications

    International Nuclear Information System (INIS)

    Sakellaridis, Nikolaos F.; Raptotasios, Spyridon I.; Antonopoulos, Antonis K.; Mavropoulos, Georgios C.; Hountalas, Dimitrios T.

    2015-01-01

    Engine cycle simulation models are increasingly used in diesel engine simulation and diagnostic applications, reducing experimental effort. Turbocharger simulation plays an important role in a model's ability to accurately predict engine performance and emissions. The present work describes the development of a complete engine simulation model for marine diesel engines based on a new methodology for turbocharger modelling that utilizes physically based meanline models for the compressor and turbine. Simulation accuracy is evaluated against engine bench measurements. The methodology was developed to overcome the limited availability of experimental maps for the compressor and turbine, a problem often encountered in large marine diesel engine simulation and diagnostic studies. Data from the engine bench are used to calibrate the models, as well as to estimate turbocharger shaft mechanical efficiency. The closed cycle and gas exchange are modelled using an existing multizone thermodynamic model. The proposed methodology is applied to a 2-stroke marine diesel engine and its evaluation is based on the comparison of predictions against measured engine data. The model is shown to predict the engine response to load variation, regarding both turbocharger performance and closed cycle parameters, as well as NOx emission trends, making it an effective tool for both engine diagnostic and optimization studies. - Highlights: • Marine two stroke diesel engine simulation model. • Turbine and compressor simulation using physical meanline models. • Methodology to derive T/C component efficiency and T/C shaft mechanical efficiency. • Extensive validation of predictions against experimental data.
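The abstract does not give the governing equations, but shaft mechanical efficiency in such turbocharger models is conventionally tied to the power balance P_compressor = η_mech · P_turbine; a hypothetical sketch under ideal-gas assumptions (all numbers and names are illustrative, not the authors' model):

```python
def compressor_power(m_dot, cp, t_in, pr, eta_c, gamma=1.4):
    """Ideal-gas compressor power [W]: m_dot [kg/s], inlet temperature t_in [K],
    pressure ratio pr, isentropic efficiency eta_c."""
    return m_dot * cp * t_in * (pr ** ((gamma - 1) / gamma) - 1) / eta_c

def eta_mech(p_comp, p_turb):
    """Shaft mechanical efficiency implied by the power balance."""
    return p_comp / p_turb

# Illustrative operating point only (not bench data from the paper).
p_c = compressor_power(m_dot=5.0, cp=1005.0, t_in=300.0, pr=3.5, eta_c=0.8)
print(f"compressor power ~ {p_c / 1e3:.0f} kW, eta_mech = {eta_mech(p_c, p_c / 0.97):.2f}")
```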

  7. Multiphysics Simulation of Welding-Arc and Nozzle-Arc System: Mathematical-Model, Solution-Methodology and Validation

    Science.gov (United States)

    Pawar, Sumedh; Sharma, Atul

    2018-01-01

    This work presents a mathematical model and solution methodology for a multiphysics engineering problem: arc formation during welding and inside a nozzle. The general-purpose commercial CFD solver ANSYS FLUENT 13.0.0 is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by the solution of the coupled Navier–Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated by excellent agreement with published results. The developed mathematical model and the user-defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.

  8. Concepts and methodologies for modeling and simulation a tribute to Tuncer Oren

    CERN Document Server

    Yilmaz, Levent

    2015-01-01

    This comprehensive text/reference presents cutting-edge advances in the theory and methodology of modeling and simulation (M&S), and reveals how this work has been influenced by the fundamental contributions of Professor Tuncer Ören to this field. Exploring the synergies among the domains of M&S and systems engineering (SE), the book describes how M&S and SE can help to address the complex problems identified as "Grand Challenges" more effectively under a model-driven and simulation-directed systems engineering framework. Topics and features: examines frameworks for the development of advan

  9. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    Science.gov (United States)

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME interventions in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. Of 1394 RCTs screened, 68 trials assessed an SBME intervention. They represent one quarter of our sample. Cardiopulmonary resuscitation (CPR) is the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials, respectively. Blinding of participants and assessors was performed correctly in 19% and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18. Only 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  10. Methodological Aspects of Modelling and Simulation of Robotized Workstations

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-05-01

    From the point of view of developing application and program products, the key directions to be respected in computer support for project activities are quite clearly specified. User interfaces with a high degree of graphical interactive convenience, together with two- and three-dimensional computer graphics, contribute greatly to streamlining project methodologies and procedures. This is mainly because a large share of the tasks solved in the modern design of robotic systems is clearly graphical in nature. Automation of graphical tasks is therefore a significant development direction for the subject area. The authors present the results of their research in the area of automation and computer-aided design of robotized systems. A new methodical approach to modelling robotic workstations is presented, consisting of ten steps incorporated into the four phases of the logistics process of creating and implementing a robotic workplace. The emphasis is placed on the modelling and simulation phase, with verification of the elaborated methodologies on specific projects or elements of a robotized welding plant in automotive production.

  11. Methodology for digital radiography simulation using the Monte Carlo code MCNPX for industrial applications

    International Nuclear Information System (INIS)

    Souza, E.M.; Correa, S.C.A.; Silva, A.X.; Lopes, R.T.; Oliveira, D.F.

    2008-01-01

    This work presents a methodology for the simulation of digital radiography for industrial applications using the MCNPX radiography tally. In order to perform the simulation, the energy-dependent response of a BaFBr imaging plate detector was modeled and introduced into the MCNPX radiography tally input. In addition, a post-processing program was used to convert the MCNPX radiography tally output into 16-bit digital images. Simulated and experimental images of a steel pipe containing corrosion alveoli and stress corrosion cracking were compared, and the results showed good agreement between the two images.
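The conversion of a floating-point tally map into a 16-bit image can be sketched as a linear rescale to the uint16 range; this is an assumed implementation, not the authors' post-processing program:

```python
import numpy as np

def tally_to_uint16(tally):
    """Linearly rescale a float tally array onto the full 16-bit range [0, 65535]."""
    t = np.asarray(tally, dtype=np.float64)
    lo, hi = t.min(), t.max()
    if hi == lo:                       # flat image: avoid division by zero
        return np.zeros(t.shape, dtype=np.uint16)
    return np.round((t - lo) / (hi - lo) * 65535).astype(np.uint16)

# Toy 2x2 tally map; real tallies would come from the MCNPX output file.
img = tally_to_uint16([[0.0, 0.5], [1.0, 0.25]])
print(img.dtype, img.min(), img.max())   # uint16 0 65535
```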

  12. GPS system simulation methodology

    Science.gov (United States)

    Ewing, Thomas F.

    1993-01-01

    The following topics are presented: background; Global Positioning System (GPS) methodology overview; the graphical user interface (GUI); current models; application to space nuclear power/propulsion; and interfacing requirements. The discussion is presented in vugraph form.

  13. The case for cases B and C: intrinsic hydrogen line ratios of the broad-line region of active galactic nuclei, reddenings, and accretion disc sizes

    Science.gov (United States)

    Gaskell, C. Martin

    2017-05-01

    Low-redshift active galactic nuclei (AGNs) with extremely blue optical spectral indices are shown to have a mean, velocity-averaged, broad-line Hα/Hβ ratio of ≈2.72 ± 0.04, consistent with a Baker-Menzel Case B value. Comparison of a wide range of properties of the very bluest AGNs with those of a luminosity-matched subset of the Dong et al. blue AGN sample indicates that the only difference is the internal reddening. Ultraviolet fluxes are brighter for the bluest AGNs by an amount consistent with the flat AGN reddening curve of Gaskell et al. The lack of a significant difference in the GALEX (far-ultraviolet minus near-ultraviolet) colour index strongly rules out a steep Small Magellanic Cloud-like reddening curve and also argues against an intrinsically harder spectrum for the bluest AGNs. For very blue AGNs, the Lyα/Hβ ratio is also consistent with the Case B value. The Case B ratios provide strong support for the self-shielded broad-line model of Gaskell, Klimek & Nazarova. It is proposed that the greatly enhanced Lyα/Hβ ratio at very high velocities is a consequence of continuum fluorescence in the Lyman lines (Case C). Reddenings of AGNs mean that the far-UV luminosity is often underestimated by up to an order of magnitude. This is a major factor causing the discrepancies between measured accretion disc sizes and the predictions of simple accretion disc theory. Dust covering fractions for most AGNs are lower than has been estimated. The total mass in lower mass supermassive black holes must be greater than hitherto estimated.
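The Case B ratio quoted above is the anchor of the standard Balmer-decrement reddening estimate: E(B−V) follows from the excess of the observed Hα/Hβ over the intrinsic ≈2.72. A minimal sketch; the extinction-curve coefficients k(Hβ) ≈ 3.61 and k(Hα) ≈ 2.53 are conventional Milky-Way-like values assumed here (the paper itself argues for flatter AGN reddening curves):

```python
import math

def ebv_from_balmer(ratio_obs, ratio_int=2.72, k_hbeta=3.61, k_halpha=2.53):
    """E(B-V) [mag] from the Balmer decrement:
    E(B-V) = 2.5 / (k(Hb) - k(Ha)) * log10(R_obs / R_int)."""
    return 2.5 / (k_hbeta - k_halpha) * math.log10(ratio_obs / ratio_int)

# An observed Ha/Hb of 3.5 implies roughly a quarter magnitude of reddening.
print(f"E(B-V) = {ebv_from_balmer(3.5):.2f} mag")
```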

  14. A systematic methodology to extend the applicability of a bioconversion model for the simulation of various co-digestion scenarios

    DEFF Research Database (Denmark)

    Kovalovszki, Adam; Alvarado-Morales, Merlin; Fotidis, Ioannis

    2017-01-01

    Detailed simulation of anaerobic digestion (AD) requires complex mathematical models and the optimization of numerous model parameters. By applying a systematic methodology and identifying the parameters with the highest impact on process variables in a well-established AD model, the model's applicability was extended to various co-digestion scenarios. More specifically, the application of the step-by-step methodology led to the estimation of a general and reduced set of parameters for the simulation of scenarios where either manure or wastewater was co-digested with different organic substrates. Validation … experimental data quite well, indicating that the model offers a reliable reference point for future simulations of anaerobic co-digestion scenarios.

  15. INTERACTION BETWEEN THE BROAD-LINED TYPE Ic SUPERNOVA 2012ap AND CARRIERS OF DIFFUSE INTERSTELLAR BANDS

    International Nuclear Information System (INIS)

    Milisavljevic, Dan; Margutti, Raffaella; Crabtree, Kyle N.; Soderberg, Alicia M.; Sanders, Nathan E.; Drout, Maria R.; Kamble, Atish; Chakraborti, Sayan; Kirshner, Robert P.; Foster, Jonathan B.; Fesen, Robert A.; Parrent, Jerod T.; Pickering, Timothy E.; Cenko, S. Bradley; Silverman, Jeffrey M.; Marion, G. H. Howie; Vinko, Jozsef; Filippenko, Alexei V.; Mazzali, Paolo; Maeda, Keiichi

    2014-01-01

    Diffuse interstellar bands (DIBs) are absorption features observed in optical and near-infrared spectra that are thought to be associated with carbon-rich polyatomic molecules in interstellar gas. However, because the central wavelengths of these bands do not correspond to electronic transitions of any known atomic or molecular species, their nature has remained uncertain since their discovery almost a century ago. Here we report on unusually strong DIBs in optical spectra of the broad-lined Type Ic supernova SN 2012ap that exhibit changes in equivalent width over short (≲ 30 days) timescales. The 4428 Å and 6283 Å DIB features get weaker with time, whereas the 5780 Å feature shows a marginal increase. These nonuniform changes suggest that the supernova is interacting with a nearby source of DIBs and that the DIB carriers possess high ionization potentials, such as small cations or charged fullerenes. We conclude that moderate-resolution spectra of supernovae with DIB absorptions obtained within weeks of outburst could reveal unique information about the mass-loss environment of their progenitor systems and provide new constraints on the properties of DIB carriers

  16. INTERACTION BETWEEN THE BROAD-LINED TYPE Ic SUPERNOVA 2012ap AND CARRIERS OF DIFFUSE INTERSTELLAR BANDS

    Energy Technology Data Exchange (ETDEWEB)

    Milisavljevic, Dan; Margutti, Raffaella; Crabtree, Kyle N.; Soderberg, Alicia M.; Sanders, Nathan E.; Drout, Maria R.; Kamble, Atish; Chakraborti, Sayan; Kirshner, Robert P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Foster, Jonathan B. [Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06520 (United States); Fesen, Robert A.; Parrent, Jerod T. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Lab, Hanover, NH 03755 (United States); Pickering, Timothy E. [Southern African Large Telescope, P.O. Box 9, Observatory 7935, Cape Town (South Africa); Cenko, S. Bradley [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Silverman, Jeffrey M.; Marion, G. H. Howie; Vinko, Jozsef [University of Texas at Austin, 1 University Station C1400, Austin, TX 78712-0259 (United States); Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Mazzali, Paolo [Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF (United Kingdom); Maeda, Keiichi, E-mail: dmilisav@cfa.harvard.edu [Department of Astronomy, Kyoto University Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); and others

    2014-02-10

    Diffuse interstellar bands (DIBs) are absorption features observed in optical and near-infrared spectra that are thought to be associated with carbon-rich polyatomic molecules in interstellar gas. However, because the central wavelengths of these bands do not correspond to electronic transitions of any known atomic or molecular species, their nature has remained uncertain since their discovery almost a century ago. Here we report on unusually strong DIBs in optical spectra of the broad-lined Type Ic supernova SN 2012ap that exhibit changes in equivalent width over short (≲ 30 days) timescales. The 4428 Å and 6283 Å DIB features get weaker with time, whereas the 5780 Å feature shows a marginal increase. These nonuniform changes suggest that the supernova is interacting with a nearby source of DIBs and that the DIB carriers possess high ionization potentials, such as small cations or charged fullerenes. We conclude that moderate-resolution spectra of supernovae with DIB absorptions obtained within weeks of outburst could reveal unique information about the mass-loss environment of their progenitor systems and provide new constraints on the properties of DIB carriers.

  17. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States); Denning, Richard [The Ohio State Univ., Columbus, OH (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States); Unwin, Stephen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-23

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  18. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    International Nuclear Information System (INIS)

    Aldemir, Tunc; Denning, Richard; Catalyurek, Umit; Unwin, Stephen

    2015-01-01

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
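The record's distinction between epistemic uncertainty (lack of knowledge about degradation rates) and aleatory uncertainty (randomness of failure) can be illustrated with a nested Monte Carlo sketch. The lognormal spread, the Weibull aging model and all rates below are invented placeholders, not the NGSC methodology:

```python
import math
import random

def failure_prob(t_years, rate, n_outer=2000, seed=1):
    """Outer loop: sample the epistemic uncertainty in the degradation rate.
    Inner step: the aleatory failure probability follows a Weibull aging
    model (shape 2) in closed form. Returns the mean and a 5-95 % band."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_outer):
        lam = rate * math.exp(rng.gauss(0.0, 0.3))   # epistemic sample
        p = 1.0 - math.exp(-(lam * t_years) ** 2)    # aleatory CDF at time t
        estimates.append(p)
    estimates.sort()
    mean = sum(estimates) / n_outer
    return mean, estimates[n_outer // 20], estimates[-n_outer // 20]
```

The epistemic band (not just the mean) is what an aging-management strategy would act on: testing and refurbishment narrow the band, maintenance shifts the aleatory model.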

  19. Methodology of the On-line Follow Simulation of Pebble-bed High-temperature Reactors

    International Nuclear Information System (INIS)

    Xia Bing; Li Fu; Wei Chunlin; Zheng Yanhua; Chen Fubing; Zhang Jian; Guo Jiong

    2014-01-01

    The on-line fuel management is an essential feature of pebble-bed high-temperature reactors (PB-HTRs), and it is strongly coupled with the normal operation of the reactor. For the on-line analysis of the continuous shuffling of numerous fuel pebbles, a follow simulation of the real operation is necessary for the PB-HTRs. In this work, the on-line follow simulation methodology for PB-HTR operation is described; it is characterized by the parallel treatment of neutronics analysis and fuel-cycling simulation. During the simulation, the operation history of the reactor is divided into a series of burn-up cycles according to the behavior of the operation data, in which the steady-state neutron transport equations are solved and diffusion theory is utilized to determine the physical features of the reactor core. The burn-up equations of heavy metals, fission products and neutron poisons including B-10, decoupled from the pebble-flow term, are solved to analyze the burn-up process within a single burn-up cycle. The effect of pebble flow is simulated separately through a discrete fuel shuffling pattern confined by curved pebble-flow channels, and the effect of multiple passes of the fuel is represented by logical batches within each spatial region of the core. The on-line thermal-hydraulics feedback is implemented for each burn-up cycle by using the real thermal-hydraulics data of the core operation. The treatment of control rods and absorber balls is carried out by utilizing a coupled neutron transport-diffusion calculation along with discontinuity factors. The physical models mentioned above are established mainly by using a revised version of the V.S.O.P. program system. The real operation data of HTR-10 are utilized to verify the methodology presented in this work, which gives good agreement between simulation results and operation data. (author)
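The decoupling described in this record — burn-up equations solved within a cycle, with the pebble-flow term handled separately — can be illustrated with a toy two-nuclide depletion chain at constant flux. All cross sections, fluxes and time steps below are hypothetical round numbers, not V.S.O.P. data:

```python
import math

def deplete(n1, n2, sigma1, sigma2, lam2, phi, dt, steps):
    """Toy depletion chain N1 --absorption--> N2 --decay/absorption-->,
    integrated over one burn-up cycle at constant flux phi. The pebble-flow
    term is left out, mirroring the decoupled treatment in the abstract."""
    for _ in range(steps):
        r1 = sigma1 * phi             # removal rate of the parent (1/s)
        r2 = sigma2 * phi + lam2      # removal rate of the daughter (1/s)
        prod = r1 * n1                # daughter production from parent burn-up
        n1 *= math.exp(-r1 * dt)      # exact exponential update for the parent
        n2 += (prod - r2 * n2) * dt   # explicit Euler for the daughter
    return n1, n2
```

In the full methodology this per-cycle solve alternates with the discrete shuffling step that moves pebbles along the flow channels.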

  20. Montecarlo simulation for a new high resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Full text. Spectra generated by binary, ternary and multielement matrices irradiated by a variable-energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy rises just above the absorption edge associated with each element, because of the emission of characteristic X-rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample into a 2π solid angle, associating a single multichannel-spectrometer channel with each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high-resolution spectroscopy methodology, for which a synchrotron would be an ideal source because of its high intensity and the ability to control the energy of the incident beam. The energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  1. Montecarlo simulation for a new high resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Full text. Spectra generated by binary, ternary and multielement matrices irradiated by a variable-energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate appear when the photon energy rises just above the absorption edge associated with each element, because of the emission of characteristic X-rays. For a given edge energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample into a 2π solid angle, associating a single multichannel-spectrometer channel with each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high-resolution spectroscopy methodology, for which a synchrotron would be an ideal source because of its high intensity and the ability to control the energy of the incident beam. The energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)
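The MCS-mode scan described in this record — one counter reading per incident energy, with a counting-rate jump at each element's absorption edge — can be sketched in a few lines. The flat scatter background, the yield factor and the concentrations are illustrative placeholders, not output of the PENELOPE-based code; only the edge energy (Fe K-edge, 7.112 keV) is a real physical constant:

```python
def mcs_scan(energies_kev, edge_kev, concentration, yield_factor=1000.0):
    """Toy MCS-mode energy scan: below the element's absorption edge only a
    flat scatter background is counted; above it, characteristic X-ray
    emission adds a jump whose net height scales with the concentration."""
    counts = []
    for e in energies_kev:
        scatter = 100.0                                       # background (a.u.)
        fluo = concentration * yield_factor if e >= edge_kev else 0.0
        counts.append(scatter + fluo)
    return counts
```

Scanning across 7.112 keV for an iron-bearing sample would show the step whose height tracks the iron concentration, which is the quantitative handle the proposed methodology exploits.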

  2. Simulation Environment Based on the Universal Verification Methodology

    CERN Document Server

    AUTHOR|(SzGeCERN)697338

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan by setting the verification goals. Those goals are then targeted by a developed testbench, which generates legal stimuli and sends them to a device under test (DUT). The progress is measured by coverage monitors added to the simulation environment. In this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC desi...

  3. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments
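The event sequence simulated by TORMIS — tornado occurrence, missile injection, transport and impact — reduces in its barest form to a Monte Carlo pass over conditional branches. The branch probabilities below are invented placeholders for illustration; in the actual methodology they come from the tornado records, wind-tunnel experiments and impact tests cited in the abstract:

```python
import random

def strike_probability(n_trials=200_000, p_tornado=0.1, p_injection=0.5,
                       p_impact=0.2, seed=7):
    """Bare-bones Monte Carlo over the event chain
    tornado occurrence -> missile injection -> transport and impact."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials)
               if rng.random() < p_tornado
               and rng.random() < p_injection
               and rng.random() < p_impact)
    return hits / n_trials
```

For independent branches the estimate converges to the product of the probabilities; the value of the full time-history simulation is that the real branches are correlated through the sampled tornado path and wind field.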

  4. THE SIZE, STRUCTURE, AND IONIZATION OF THE BROAD-LINE REGION IN NGC 3227

    International Nuclear Information System (INIS)

    Devereux, Nick

    2013-01-01

    Hubble Space Telescope spectroscopy of the Seyfert 1.5 galaxy, NGC 3227, confirms previous reports that the broad Hα emission line flux is time variable, decreasing by a modest ∼11% between 1999 and 2000 in response to a corresponding ∼37% decrease in the underlying continuum. Modeling the gas distribution responsible for the broad Hα, Hβ, and Hγ emission lines favors a spherically symmetric inflow as opposed to a thin disk. Adopting a central black hole mass of 7.6 × 10⁶ M☉, determined from prior reverberation mapping, leads to the following dimensions for the size of the region emitting the broad Hα line: an outer radius ∼90 lt-days and an inner radius ∼3 lt-days. Thus, the previously determined reverberation size for the broad-line region (BLR) consistently coincides with the inner radius of a much larger volume of ionized gas. However, the perceived size of the BLR is an illusion, a consequence of the fact that the emitting region is ionization bounded at the outer radius and diminished by Doppler broadening at the inner radius. The actual dimensions of the inflow remain to be determined. Nevertheless, the steady-state mass inflow rate is estimated to be ∼10⁻² M☉ yr⁻¹, which is sufficient to explain the X-ray luminosity of the active galactic nucleus (AGN) in terms of radiatively inefficient accretion. Collectively, the results challenge many preconceived notions concerning the nature of BLRs in AGNs.

  5. Integration of an iterative methodology for exergoeconomic improvement of thermal systems with a process simulator

    International Nuclear Information System (INIS)

    Vieira, Leonardo S.; Donatelli, Joao L.; Cruz, Manuel E.

    2004-01-01

    In this paper, we present the development and automated implementation of an iterative methodology for exergoeconomic improvement of thermal systems integrated with a process simulator, so as to be applicable to real, complex plants. The methodology combines recent available exergoeconomic techniques with new qualitative and quantitative criteria for the following tasks: (i) identification of decision variables that affect system total cost and exergetic efficiency; (ii) hierarchical classification of components; (iii) identification of predominant terms in the component total cost; and (iv) choice of main decision variables in the iterative process. To show the strengths and potential advantages of the proposed methodology, it is here applied to the benchmark CGAM cogeneration system. The results obtained are presented and discussed in detail and are compared to those reached using a mathematical optimization procedure

  6. Monte Carlo simulation methodology for the reliabilty of aircraft structures under damage tolerance considerations

    Science.gov (United States)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that a damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is ensured by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the current expression for the capacity CDF of a parallel system comprised of three elements to parallel systems comprised of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, by utilizing the residual strength of the fasteners subjected to various initial load distributions and then to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach.
The uncertainties associated with the time to crack initiation, the probability of crack detection, the
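The parallel-system model of this record — independent element strengths with equal load sharing — has a compact capacity formula that a direct Monte Carlo scheme can sample, which is exactly the kind of cross-check between closed form and simulation the thesis describes. The uniform(0, 1) strength distribution below is an assumption for illustration, not the thesis's strength model:

```python
import random

def bundle_capacity(strengths):
    # Equal-load-sharing parallel system: with strengths sorted ascending,
    # once the i weakest elements have failed the remaining n - i share the
    # load, so the sustainable total load is max over i of (n - i) * s[i].
    s = sorted(strengths)
    n = len(s)
    return max((n - i) * s[i] for i in range(n))

def failure_probability(load, n_elems, n_trials=20_000, seed=3):
    """Direct Monte Carlo estimate of P(system capacity < load) for
    statistically independent element strengths."""
    rng = random.Random(seed)
    fails = sum(bundle_capacity([rng.random() for _ in range(n_elems)]) < load
                for _ in range(n_trials))
    return fails / n_trials
```

For three to six elements the Monte Carlo estimate can be checked against the closed-form capacity CDF expressions developed in the thesis.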

  7. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Here, 'large-scale' means simulations that span such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. The analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations contrasted with deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  8. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of any national security emergency response plan, and the emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field combined with an event-driven model has been proposed, and it has been examined within the context of a case study involving evacuation from a commercial shopping mall. The event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrians' movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on this combined evacuation model, we can reflect the behavioral characteristics of customers and clerks in both normal and emergency situations. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
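The core update rule of a floor-field cellular automaton can be sketched very compactly: each pedestrian moves toward the neighbouring cell with the lowest static field value (distance to the exit), subject to cell exclusion. This is a minimal toy with sequential conflict resolution, not the four-layer model of the record, which adds the dynamic field and the event-driven customer/clerk behaviour on top:

```python
def evacuate_step(occupied, field, rows, cols):
    """Sequential update of a minimal floor-field CA: each pedestrian moves
    to the free 4-neighbour cell with the lowest static field value
    (distance to exit), if that improves on its current cell."""
    new_occ = set(occupied)
    for r, c in sorted(occupied):
        best, best_val = (r, c), field[r][c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in new_occ and field[nr][nc] < best_val:
                best, best_val = (nr, nc), field[nr][nc]
        if best != (r, c):
            new_occ.discard((r, c))
            new_occ.add(best)
    return new_occ
```

Iterating this step on a mall floor plan, and recording when each pedestrian reaches a field value of zero, yields the distribution of individual evacuation times studied in the paper.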

  9. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions about which subset of systems or tasks should be undertaken to achieve the goal of the project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for selecting the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
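The combination the record describes — alternative networks, expert-assessed distributions, and a cardinal utility function scored by Monte Carlo — can be reduced to a small skeleton. The uniform cost distributions and the exponential (risk-averse) utility below are illustrative assumptions, not the SIMRAND I inputs:

```python
import math
import random

def best_alternative(alternatives, n_trials=5000, scale=10.0, seed=11):
    """Score each alternative network by Monte Carlo expected utility.
    Each alternative is a list of samplers for its task variables
    (stand-ins for expert-assessed cumulative distribution functions)."""
    rng = random.Random(seed)
    scores = {}
    for name, samplers in alternatives.items():
        total = 0.0
        for _ in range(n_trials):
            cost = sum(draw(rng) for draw in samplers)   # one network pass
            total += math.exp(-cost / scale)             # risk-averse utility
        scores[name] = total / n_trials
    return max(scores, key=scores.get), scores
```

Because the utility is nonlinear, the ranking can differ from a simple comparison of expected costs, which is precisely why the methodology carries full distributions rather than point estimates through the simulation.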

  10. Towards an in-plane methodology to track breast lesions using mammograms and patient-specific finite-element simulations

    Science.gov (United States)

    Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya

    2017-11-01

    In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using a single or several imaging techniques. As x-ray-based mammography is widely used, a breast lesion is located in the same plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient’s breast was generated from magnetic resonance imaging to simulate mammographic compressions in cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each compression being simulated, a virtual mammogram was obtained and posteriorly superimposed to the corresponding real mammogram, by sharing the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for CC and MLO compression, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
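The registration step of this record — a two-dimensional rigid-body transformation applied before measuring the distance between tumor centroids — is straightforward to sketch. The coordinates in the example are illustrative values in mm, not data from the study:

```python
import math

def rigid_transform(points, theta, tx, ty):
    """2-D rigid-body transformation: rotation by theta then translation,
    the in-plane alignment applied when superimposing the virtual mammogram
    on the real one with the nipple as the shared feature."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def centroid_distance(pts_a, pts_b):
    # Error metric of the abstract: distance between the two lesion centroids.
    ax = sum(p[0] for p in pts_a) / len(pts_a)
    ay = sum(p[1] for p in pts_a) / len(pts_a)
    bx = sum(p[0] for p in pts_b) / len(pts_b)
    by = sum(p[1] for p in pts_b) / len(pts_b)
    return math.hypot(ax - bx, ay - by)
```

Fitting theta, tx and ty to minimize this centroid distance is what produced the 3.84 mm (CC) and 2.41 mm (MLO) residuals reported above.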

  11. General methodology for exergy balance in ProSimPlus® process simulator

    International Nuclear Information System (INIS)

    Ghannadzadeh, Ali; Thery-Hetreux, Raphaële; Baudouin, Olivier; Baudet, Philippe; Floquet, Pascal; Joulia, Xavier

    2012-01-01

    This paper presents a general methodology for exergy balance in chemical and thermal processes integrated in ProSimPlus ® as a well-adapted process simulator for energy efficiency analysis. In this work, as well as using the general expressions for heat and work streams, the whole exergy balance is presented within only one software in order to fully automate exergy analysis. In addition, after exergy balance, the essential elements such as source of irreversibility for exergy analysis are presented to help the user for modifications on either process or utility system. The applicability of the proposed methodology in ProSimPlus ® is shown through a simple scheme of Natural Gas Liquids (NGL) recovery process and its steam utility system. The methodology does not only provide the user with necessary exergetic criteria to pinpoint the source of exergy losses, it also helps the user to find the way to reduce the exergy losses. These features of the proposed exergy calculator make it preferable for its implementation in ProSimPlus ® to define the most realistic and profitable retrofit projects on the existing chemical and thermal plants. -- Highlights: ► A set of new expressions for calculation of exergy of material streams is developed. ► A general methodology for exergy balance in ProSimPlus ® is presented. ► A panel of solutions based on exergy analysis is provided to help the user for modifications on process flowsheets. ► The exergy efficiency is chosen as a variable in a bi-criteria optimization.
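The stream-wise relation underlying an exergy balance of material streams is the physical specific exergy, ex = (h − h0) − T0·(s − s0), evaluated against the dead state. This is a textbook relation, shown here as a sketch; the numbers in the example are generic steam-table-like values, not the NGL case study:

```python
def specific_physical_exergy(h, s, h0, s0, t0):
    """Physical specific exergy of a material stream,
    ex = (h - h0) - T0 * (s - s0).
    h, h0 in kJ/kg; s, s0 in kJ/(kg K); t0 (dead-state temperature) in K."""
    return (h - h0) - t0 * (s - s0)
```

Summing this quantity over all inlet and outlet streams of a unit, together with the heat- and work-stream terms, gives the irreversibility of that unit, which is the quantity the exergy calculator pinpoints for retrofit decisions.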

  12. The case for inflow of the broad-line region of active galactic nuclei

    Science.gov (United States)

    Gaskell, C. Martin; Goosmann, René W.

    2016-02-01

    The high-ionization lines of the broad-line region (BLR) of thermal active galactic nuclei (AGNs) show blueshifts of a few hundred to several thousand km/s with respect to the low-ionization lines. This has long been thought to be because the high-ionization lines of the BLR arise in a wind whose far side is blocked from our view by the accretion disc. Evidence for and against the disc-wind model is discussed. The biggest problem for the model is that velocity-resolved reverberation mapping repeatedly fails to show the expected kinematic signature of outflow of the BLR. The disc-wind model also cannot readily reproduce the red side of the line profiles of high-ionization lines. The rapidly falling density in an outflow makes it difficult to obtain high equivalent widths. We point out a number of major problems with associating the BLR with the outflows producing broad absorption lines. An explanation which avoids all these problems and satisfies the constraints of both the line profiles and velocity-resolved reverberation mapping is a model in which the blueshifting is due to scattering off material spiraling inwards with an inflow velocity of half the velocity of the blueshifting. We discuss how recent reverberation mapping results are consistent with the scattering-plus-inflow model but do not support a disc-wind model. We propose that the anti-correlation of the apparent redshifting of Hβ with the blueshifting of C iv is a consequence of contamination of the red wings of Hβ by the broad wings of [O iii].

  13. Methodological approach to simulation and choice of ecologically efficient and energetically economic wind turbines (WT)

    Science.gov (United States)

    Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya

    2017-10-01

    The use of wind energy is one of the most promising directions among renewable energy sources. This article reviews a methodological approach to the simulation and selection of ecologically efficient and energetically economic wind turbines (WTs) at the design stage, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load at the WT location.

  14. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet fidelity requirements applicable to a cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented into Kozloduy 6 full-scope replica control room simulator.

  15. Analysis and design of the SI-simulator software system for the VHTR-SI process by using the object-oriented analysis and object-oriented design methodology

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2008-01-01

    The SI-simulator is an application software system that simulates the dynamic behavior of the VHTR-SI process by the use of mathematical models. Object-oriented analysis (OOA) and object-oriented design (OOD) methodologies were employed for the SI-simulator system development. OOA is concerned with developing software engineering requirements and specifications expressed as a system's object model (composed of a population of interacting objects), as opposed to the traditional data or functional views of systems. OOD techniques are useful for the development of large, complex systems. An OOA/OOD methodology is also usually employed to maximize the reusability and extensibility of a software system. In this paper, we present the design features of the SI-simulator software system, developed by using the OOA and OOD methodologies

  16. Determination of phase diagrams via computer simulation: methodology and applications to water, electrolytes and proteins

    International Nuclear Information System (INIS)

    Vega, C; Sanz, E; Abascal, J L F; Noya, E G

    2008-01-01

    In this review we focus on the determination of phase diagrams by computer simulation, with particular attention to the fluid-solid and solid-solid equilibria. The methodology to compute the free energy of solid phases will be discussed. In particular, the Einstein crystal and Einstein molecule methodologies are described in a comprehensive way. It is shown that both methodologies yield the same free energies and that free energies of solid phases present noticeable finite size effects. In fact, this is the case for hard spheres in the solid phase. Finite size corrections can be introduced, although in an approximate way, to correct for the dependence of the free energy on the size of the system. The computation of free energies of solid phases can be extended to molecular fluids. The procedure to compute free energies of solid phases of water (ices) will be described in detail. The free energies of ices Ih, II, III, IV, V, VI, VII, VIII, IX, XI and XII will be presented for the SPC/E and TIP4P models of water. Initial coexistence points leading to the determination of the phase diagram of water for these two models will be provided. Other methods to estimate the melting point of a solid, such as the direct fluid-solid coexistence or simulations of the free surface of the solid, will be discussed. It will be shown that the melting points of ice Ih for several water models, obtained from free energy calculations, direct coexistence simulations and free surface simulations agree within their statistical uncertainty. Phase diagram calculations can indeed help to improve potential models of molecular fluids. For instance, for water, the potential model TIP4P/2005 can be regarded as an improved version of TIP4P. Here we will review some recent work on the phase diagram of the simplest ionic model, the restricted primitive model. Although originally devised to describe ionic liquids, the model is becoming quite popular to describe the behavior of charged colloids
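Both the Einstein crystal and Einstein molecule routes mentioned in this record ultimately reduce to a thermodynamic integration: the free-energy difference between the solid and the analytically solvable reference is the integral of the ensemble average ⟨dU/dλ⟩ over the coupling parameter. A minimal quadrature helper, with a synthetic integrand (not water or hard-sphere data):

```python
def thermodynamic_integration(lambdas, du_dlambda):
    """Trapezoidal evaluation of Delta F = integral_0^1 <dU/dlambda> dlambda,
    the quadrature step applied once the averages <dU/dlambda> have been
    obtained from simulations at each lambda point."""
    total = 0.0
    for i in range(len(lambdas) - 1):
        total += 0.5 * (du_dlambda[i] + du_dlambda[i + 1]) \
                     * (lambdas[i + 1] - lambdas[i])
    return total
```

The finite-size corrections discussed in the review enter afterwards, as corrections to the resulting free energy per particle before coexistence points are located.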

  17. Further developments of multiphysics and multiscale methodologies for coupled nuclear reactor simulations

    International Nuclear Information System (INIS)

    Gomez Torres, Armando Miguel

    2011-01-01

    This doctoral thesis describes the methodological development of coupled neutron-kinetics/thermal-hydraulics codes for the design and safety analysis of reactor systems, taking into account the feedback mechanisms at the fuel-rod level according to different approaches. A central part of this thesis is the development and validation of a high-fidelity simulation tool, DYNSUB, which results from the "two-way coupling" of DYN3D-SP3 and SUBCHANFLOW. It allows the determination of local safety parameters through a detailed description of the core behavior under stationary and transient conditions at the fuel-rod level.

  18. 3CE Methodology for Conducting a Modeling, Simulation, and Instrumentation Tool Capability Analysis

    Science.gov (United States)

    2010-05-01

    ...a modeling, simulation, and instrumentation (MS&I) environment. This methodology uses the DoDAF product set to document operational and systems... engineering process were identified and resolved, such as duplication of data elements derived from DoDAF operational and system views used to

  19. The End of the Lines for OX 169: No Binary Broad-Line Region

    Science.gov (United States)

    Halpern, J. P.; Eracleous, M.

    2000-03-01

    We show that unusual Balmer emission-line profiles of the quasar OX 169, frequently described as either self-absorbed or double peaked, are actually neither. The effect is an illusion resulting from two coincidences. First, the forbidden lines are quite strong and broad. Consequently, the [N II] λ6583 line and the associated narrow-line component of Hα present the appearance of twin Hα peaks. Second, the redshift of 0.2110 brings Hβ into coincidence with Na I D at zero redshift, and ISM absorption in Na I D divides the Hβ emission line. In spectra obtained over the past decade, we see no substantial change in the character of the line profiles and no indication of intrinsic double-peaked structure. The Hγ, Mg II, and Lyα emission lines are single peaked, and all of the emission-line redshifts are consistent once they are correctly attributed to their permitted and forbidden-line identifications. A systematic shift of up to 700 km s^-1 between broad and narrow lines is seen, but such differences are common and could be due to gravitational and transverse redshift in a low-inclination disk. Stockton & Farnham had called attention to an apparent tidal tail in the host galaxy of OX 169 and speculated that a recent merger had supplied the nucleus with a coalescing pair of black holes that was now revealing its existence in the form of two physically distinct broad-line regions. Although there is no longer any evidence for two broad emission-line regions in OX 169, binary black holes should form frequently in galaxy mergers, and it is still worthwhile to monitor the radial velocities of emission lines that could supply evidence of their existence in certain objects.
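The Hβ/Na I D coincidence is easy to verify: at z = 0.2110, rest-frame Hβ (4861.35 Å) is redshifted to about 5887 Å, within a few Ångströms of the Galactic Na I D doublet.

```python
# Observed wavelength of redshifted Hbeta vs. the Galactic Na I D doublet
HBETA_REST = 4861.35              # Angstrom, rest wavelength of Hbeta
NA_D2, NA_D1 = 5889.95, 5895.92   # Na I D doublet at z = 0
z = 0.2110

obs_hbeta = HBETA_REST * (1.0 + z)
print(f"Hbeta observed at {obs_hbeta:.1f} A; Na I D at {NA_D2}/{NA_D1} A")
# Hbeta lands within a few Angstroms of the Na I D lines, so zero-redshift
# ISM Na I D absorption cuts into the broad Hbeta profile.
```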

  20. Groundwater flow simulations in support of the Local Scale Hydrogeological Description developed within the Laxemar Methodology Test Project

    International Nuclear Information System (INIS)

    Follin, Sven; Svensson, Urban

    2002-05-01

    The deduced Site Descriptive Model of the Laxemar area has been parameterised from a hydraulic point of view and subsequently put into practice in terms of a numerical flow model. The intention of the subproject has been to explore the adaptation of a numerical flow model to site-specific surface and borehole data, and to identify potential needs for development and improvement in the planned modelling methodology and tools. The experiences made during this process and the outcome of the simulations have been presented to the methodology test project group in the course of the project. The discussion and conclusions made in this particular report mainly concern two issues: (i) the use of numerical simulations as a means of gaining credibility, e.g. discrimination between alternative geological models, and (ii) calibration and conditioning of probabilistic (Monte Carlo) realisations

  1. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Energy Technology Data Exchange (ETDEWEB)

    Izzuddin, Nur; Sunarsih,; Priyanto, Agoes [Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia)

    2015-05-15

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled to transmit power through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. Accordingly, this paper presents a real-time marine diesel engine simulator system that tracks the performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system in which fuel saving rate and propeller rotating speed represent the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users analyzing different vessel speeds to obtain better characteristics and hence optimize the fuel saving rate.
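The propeller side of such a model is commonly written in terms of the open-water coefficients K_T and K_Q evaluated at the advance ratio J. The sketch below uses illustrative linear coefficient fits and example operating numbers, not the paper's actual curves or vessel.

```python
# Open-water propeller model: T = K_T * rho * n^2 * D^4
#                             Q = K_Q * rho * n^2 * D^5
# with K_T, K_Q read from the open-water curve at J = Va / (n * D).
RHO = 1025.0          # sea-water density, kg/m^3

def advance_ratio(va, n, d):
    return va / (n * d)

def kt(j):   # illustrative linear open-water fit for K_T(J)
    return 0.45 - 0.35 * j

def kq(j):   # illustrative fit for K_Q(J)
    return 0.055 - 0.04 * j

def thrust_and_torque(va, n, d):
    j = advance_ratio(va, n, d)
    t = kt(j) * RHO * n**2 * d**4
    q = kq(j) * RHO * n**2 * d**5
    return t, q

# Example: 4 m propeller at 2 rev/s with 5 m/s speed of advance
T, Q = thrust_and_torque(va=5.0, n=2.0, d=4.0)
print(f"J={advance_ratio(5.0, 2.0, 4.0):.3f}  T={T/1e3:.1f} kN  Q={Q/1e3:.1f} kN m")
```

In the simulator loop these expressions close the engine model: Q loads the shaft, and T feeds the hull resistance balance that sets the vessel speed.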

  2. Probing the accretion flow and emission-line regions of M81, the nearest broad-lined low-luminosity AGN

    Science.gov (United States)

    Barth, Aaron

    2017-08-01

    The nucleus of M81 is an object of singular importance as a template for low-luminosity accretion flows onto supermassive black holes. We propose to obtain a complete, small-aperture, high S/N STIS UV/optical spectrum of the M81 nucleus and multi-filter WFC3 imaging covering the UV through near-IR. Such data have never previously been obtained with HST; the only prior archival UV/optical spectra of M81 have low S/N, incomplete wavelength coverage, and are strongly contaminated by starlight. Combined with new Chandra X-ray data, our proposed observations will comprise the definitive reference dataset on the spectral energy distribution of this benchmark low-luminosity AGN. These data will provide unique new constraints on the possible contribution of a truncated thin accretion disk to the AGN emission spectrum, clarifying a fundamental property of low-luminosity accretion flows. The data will additionally provide new insights into broad-line region structure and black hole mass scaling relationships at the lowest AGN luminosities, and spatially resolved diagnostics of narrow-line region excitation conditions at unprecedented spatial resolution to assess the impact of the AGN on the ionization state of the gas in the host galaxy bulge.

  3. Development of a methodology for simulation of gas cooled reactors with purpose of transmutation

    International Nuclear Information System (INIS)

    Silva, Clarysson Alberto da

    2009-01-01

    This work proposes a methodology for MHR (Modular Helium Reactor) simulation using the WIMSD-5B (Winfrith Improved Multigroup Scheme) nuclear code, validated against the MCNPX 2.6.0 (Monte Carlo N-Particle transport eXtended) nuclear code. The goal is to verify the capability of WIMSD-5B to simulate a reactor of the GT-MHR (Gas Turbine Modular Helium Reactor) type, considering all the fuel recharge possibilities, and to evaluate whether WIMSD-5B can adequately represent the fuel evolution during recharging. Initially, the WIMSD-5B capability to simulate the recharge specificities of this model was verified through analysis of neutronic parameters and isotopic composition during burnup. The model was then simulated using both the WIMSD-5B and MCNPX 2.6.0 codes, and the results for k_eff, neutron flux and isotopic composition were compared. The results show that the deterministic WIMSD-5B code can be applied to a qualitative evaluation, representing the core behavior during fuel recharges adequately and making it possible, in a short period of time, to assess the burned core which, once optimized, can be quantitatively evaluated by a code of the MCNPX 2.6.0 type. (author)
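Deterministic-versus-Monte-Carlo k_eff comparisons of this kind are usually quoted as a reactivity difference in pcm. A minimal helper, with made-up k_eff values that are not the thesis results:

```python
# Reactivity difference between two k_eff values, expressed in pcm
# (per cent mille, 1e-5), using the common convention
#   delta_rho [pcm] = (1/k_ref - 1/k_test) * 1e5
def delta_rho_pcm(k_ref, k_test):
    return (1.0 / k_ref - 1.0 / k_test) * 1e5

# Hypothetical example values (not from the thesis):
k_mcnpx, k_wimsd = 1.05231, 1.04987
drho = delta_rho_pcm(k_mcnpx, k_wimsd)
print(f"discrepancy: {drho:.1f} pcm")
```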

  4. Training simulators in nuclear power plants: Experience, programme design and assessment methodology. Proceedings of a specialists' meeting

    International Nuclear Information System (INIS)

    1997-11-01

    Simulators have become an indispensable part of training worldwide, so international exchange of information is important to share the experience gained in different countries and to assure high international standards. A second aspect is the tremendous evolution in the computing capacity of simulator hardware and the increasing functionality of simulator software. Against this background, the IAEA invited simulator experts to an exchange of experience, and the German Simulator Centre in Essen, operated by the companies KSG and GfS, was asked to host this Specialists' Meeting. The Specialists' Meeting on ''Training Simulators in Nuclear Power Plants: Experience, Programme Design and Assessment Methodology'' was organized by the IAEA in cooperation with the German Simulator Centre operated by KSG Kraftwerks-Simulator-Gesellschaft mbH and GfS Gesellschaft fuer Simulatorschulung mbH, and was held from 17-19 November 1997 in Essen, Germany. The meeting focused on developments in simulation technology, experiences with simulator upgrades, utilization of computerized tools as support for and complement of simulator training, and use of simulators for other purposes. The meeting was attended by 50 participants from 16 countries. In the course of four sessions, 21 technical presentations were made. The present volume contains the papers presented by national delegates at the Specialists' Meeting

  5. Training simulators in nuclear power plants: Experience, programme design and assessment methodology. Proceedings of a specialists' meeting

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    Simulators have become an indispensable part of training worldwide, so international exchange of information is important to share the experience gained in different countries and to assure high international standards. A second aspect is the tremendous evolution in the computing capacity of simulator hardware and the increasing functionality of simulator software. Against this background, the IAEA invited simulator experts to an exchange of experience, and the German Simulator Centre in Essen, operated by the companies KSG and GfS, was asked to host this Specialists' Meeting. The Specialists' Meeting on ''Training Simulators in Nuclear Power Plants: Experience, Programme Design and Assessment Methodology'' was organized by the IAEA in cooperation with the German Simulator Centre operated by KSG Kraftwerks-Simulator-Gesellschaft mbH and GfS Gesellschaft fuer Simulatorschulung mbH, and was held from 17-19 November 1997 in Essen, Germany. The meeting focused on developments in simulation technology, experiences with simulator upgrades, utilization of computerized tools as support for and complement of simulator training, and use of simulators for other purposes. The meeting was attended by 50 participants from 16 countries. In the course of four sessions, 21 technical presentations were made. The present volume contains the papers presented by national delegates at the Specialists' Meeting. Refs, figs, tabs

  6. Simulation environment based on the Universal Verification Methodology

    International Nuclear Information System (INIS)

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The CDV flow differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan that sets the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to the device under test (DUT). Progress is measured by coverage monitors added to the simulation environment, so that non-exercised functionality can be identified; in addition, scoreboards flag undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. Based on the experience from these projects, this paper briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.
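UVM itself is a SystemVerilog class library, but the CDV loop it standardizes (constrained-random stimulus, a self-checking scoreboard, and coverage collection) can be sketched generically. The "DUT" below is a trivial 8-bit adder stand-in, purely for illustration.

```python
import random

# Coverage-driven verification in miniature: random legal stimuli drive a
# DUT model, a scoreboard checks each response against a golden reference,
# and coverage bins record which parts of the input space were exercised.

def dut(a, b):                 # stand-in "device under test": 8-bit adder
    return (a + b) & 0xFF

def reference(a, b):           # golden model used by the scoreboard
    return (a + b) % 256

random.seed(42)
coverage_bins = {"low": 0, "mid": 0, "high": 0, "overflow": 0}
errors = 0

for _ in range(1000):
    a, b = random.randrange(256), random.randrange(256)   # legal stimulus
    if dut(a, b) != reference(a, b):                      # scoreboard check
        errors += 1
    s = a + b                                             # coverage monitor
    if s >= 256:
        coverage_bins["overflow"] += 1
    elif s < 86:
        coverage_bins["low"] += 1
    elif s < 171:
        coverage_bins["mid"] += 1
    else:
        coverage_bins["high"] += 1

hit = sum(1 for v in coverage_bins.values() if v > 0)
print(f"errors={errors}, functional coverage={100 * hit / len(coverage_bins):.0f}%")
```

An empty bin at the end of a regression is exactly the "non-exercised functionality" signal the abstract describes.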

  7. Wind Farm LES Simulations Using an Overset Methodology

    Science.gov (United States)

    Ananthan, Shreyas; Yellapantula, Shashank

    2017-11-01

    Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales: the computational domain can span several kilometers while requiring mesh resolutions of O(10^-6) m to adequately resolve the boundary layer on the blade surface. The overset mesh methodology offers an attractive option for addressing this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of the varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset meshes for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging-node) unstructured meshes. Turbine structures are modeled using both actuator-line approaches and fully resolved geometries to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
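The heart of any overset scheme is the fringe-point/donor-cell exchange between overlapping grids: for each fringe point of one mesh, locate the donor cell in the other mesh and interpolate the solution from it. A minimal 1D sketch (real codes do this in 3D, with hole cutting and higher-order weights):

```python
import numpy as np

# Minimal 1D overset interpolation: a coarse background grid and a fine
# embedded grid overlap; fringe points of the fine grid receive their
# values by linear interpolation from donor cells of the background grid.

background_x = np.linspace(0.0, 10.0, 21)        # coarse grid, dx = 0.5
background_u = np.sin(background_x)              # "solution" on background

fine_x = np.linspace(3.2, 6.8, 37)               # embedded grid, dx = 0.1
fringe = [fine_x[0], fine_x[-1]]                 # boundary (fringe) points

def donor_interpolate(xq, xg, ug):
    """Find the donor cell containing xq and linearly interpolate."""
    i = np.searchsorted(xg, xq) - 1              # donor cell index
    i = min(max(i, 0), len(xg) - 2)              # clamp to valid cells
    w = (xq - xg[i]) / (xg[i + 1] - xg[i])       # linear weight
    return (1.0 - w) * ug[i] + w * ug[i + 1]

for xq in fringe:
    uq = donor_interpolate(xq, background_x, background_u)
    print(f"fringe x={xq:.2f}: interpolated u={uq:.4f}, exact={np.sin(xq):.4f}")
```

The dynamic hole cutting mentioned in the abstract amounts to redoing this donor search every time step as the embedded mesh moves.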

  8. Simulating fission product transients via the history-based local-parameter methodology

    International Nuclear Information System (INIS)

    Jenkins, D.A.; Rouben, B.; Salvatore, M.

    1993-01-01

    This paper describes the fission-product-calculation capability of the history-based local-parameter methodology for evaluating lattice properties for use in core-tracking calculations in CANDU reactors. In addition to taking into account the individual past history of each bundle (the flux/power level, fuel temperature, and coolant density and temperature that the bundle has seen during its stay in the core), the latest refinement of the history-based method provides a fission-product-driver capability. It allows the bundle-specific concentrations of the three basic groups of saturating fission products to be calculated in steady state or following a power transient, including long shutdowns. The new capability is illustrated by simulating the startup period following a typical long shutdown, starting from a snapshot in the Point Lepreau operating history. 9 refs., 7 tabs
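The dominant saturating fission product is Xe-135, whose behaviour after a power change follows the standard iodine-xenon balance equations. The sketch below uses textbook decay constants and yields with an illustrative burnup term; it shows the generic physics, not the lattice code's actual formulation.

```python
# Xenon-135 / iodine-135 balance after a power (flux) step:
#   dI/dt = gI*F           - lI*I
#   dX/dt = gX*F + lI*I - (lX + sX*phi)*X
# with the fission rate F = Sigma_f * phi in arbitrary units.

L_I = 2.87e-5       # I-135 decay constant, 1/s (textbook value)
L_X = 2.09e-5       # Xe-135 decay constant, 1/s (textbook value)
G_I = 0.0639        # I-135 cumulative fission yield
G_X = 0.00237       # Xe-135 direct fission yield
SIG_X_PHI = 3.5e-5  # Xe burnup rate sigma_Xe*phi at full power, 1/s (illustrative)

def step(i135, xe135, fission_rate, burnup, dt):
    di = G_I * fission_rate - L_I * i135
    dx = G_X * fission_rate + L_I * i135 - (L_X + burnup) * xe135
    return i135 + di * dt, xe135 + dx * dt

def run(hours, fission_rate, burnup, i0=0.0, x0=0.0, dt=60.0):
    i135, xe135 = i0, x0
    for _ in range(int(hours * 3600 / dt)):
        i135, xe135 = step(i135, xe135, fission_rate, burnup, dt)
    return i135, xe135

# Startup to equilibrium at constant full-power fission rate F = 1
i_eq, x_eq = run(hours=400, fission_rate=1.0, burnup=SIG_X_PHI)
print(f"I-135 equilibrium ~ {i_eq:.1f}  (analytic {G_I / L_I:.1f})")
print(f"Xe-135 equilibrium ~ {x_eq:.1f}  (analytic {(G_I + G_X) / (L_X + SIG_X_PHI):.1f})")
```

Setting `fission_rate` and `burnup` to zero for a long interval and then restoring them reproduces the shutdown/startup xenon transient the paper simulates.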

  9. Baccalaureate nursing students' perspectives of peer tutoring in simulation laboratory, a Q methodology study.

    Science.gov (United States)

    Li, Ting; Petrini, Marcia A; Stone, Teresa E

    2018-02-01

    The study aim was to identify the perspectives of baccalaureate nursing students toward peer tutoring in the simulation laboratory. Insight into the nursing students' experiences and baseline data related to their perception of peer tutoring will assist in improving nursing education. Q methodology was applied to explore the students' perspectives of peer tutoring in the simulation laboratory, using a convenience P-sample of 40 baccalaureate nursing students. Fifty-eight selected Q statements were classified by each participant into the shape of a normal distribution using an 11-point bipolar scale ranging from -5 to +5, and the PQMethod software analyzed the collected data. Three discrete factors emerged: Factor I ("Facilitate or empower" knowledge acquisition), Factor II ("Safety net" supportive environment), and Factor III ("Mentoring" learning how to learn). The findings indicate that peer tutoring is an effective supplementary strategy to promote baccalaureate students' knowledge acquisition, establish a supportive safety net, and facilitate their ability to learn in the simulation laboratory. Copyright © 2017 Elsevier Ltd. All rights reserved.
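In a Q sort, the 58 statements are forced into a quasi-normal distribution over the 11 columns from -5 to +5. The column counts below are one plausible forced grid (the study's exact grid is not given in the abstract); by construction of the symmetric grid, every completed sort has a mean score of zero.

```python
# A forced quasi-normal distribution of 58 statements over 11 columns
# (-5 ... +5). These column counts are illustrative; they sum to 58.
columns = list(range(-5, 6))
counts = [2, 3, 4, 6, 8, 12, 8, 6, 4, 3, 2]   # quasi-normal shape

total = sum(counts)
mean = sum(c * n for c, n in zip(columns, counts)) / total
print(f"statements: {total}, grid mean: {mean:.2f}")
```

Because every participant fills the same grid, sorts differ only in which statement lands in which cell, which is what the subsequent factor analysis (e.g. in PQMethod) correlates.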

  10. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    Energy Technology Data Exchange (ETDEWEB)

    Negrete, C. Alenka [Instituto Nacional de Astrofisica, Optica y Electronica (Mexico); Dultzin, Deborah [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico (Mexico); Marziani, Paola [INAF, Astronomical Observatory of Padova, I-35122 Padova (Italy); Sulentic, Jack W., E-mail: cnegrete@inaoep.mx, E-mail: deborah@astro.unam.mx, E-mail: paola.marziani@oapd.inaf.it, E-mail: sulentic@iaa.es [Instituto de Astrofisica de Andalucia, E-18008 Granada (Spain)

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation, involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 A (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the width of the UV intermediate emission lines is consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
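The "photoionization" distance follows directly from the definition of the ionization parameter, U = Q(H)/(4π r² c n_H), so r_BLR = [Q(H)/(4π c (n_H U))]^{1/2} once the line ratios fix the product n_H U. A sketch with hypothetical input values (not taken from the paper's sample):

```python
import math

C = 2.998e10            # speed of light, cm/s
LIGHT_DAY_CM = 2.59e15  # one light day in cm

def r_blr_cm(q_h, n_u):
    """BLR radius from the ionization-parameter definition.
    q_h : ionizing photon rate Q(H) [photons/s]
    n_u : product n_H * U [cm^-3], the quantity the line ratios constrain
    """
    return math.sqrt(q_h / (4.0 * math.pi * C * n_u))

# Hypothetical quasar: Q(H) = 1e56 photons/s, n_H * U = 10^9.5 cm^-3
r = r_blr_cm(1e56, 10**9.5)
print(f"r_BLR ~ {r:.2e} cm = {r / LIGHT_DAY_CM:.0f} light days")
```

The whole leverage of the method sits in `n_u`: the diagnostic ratios constrain n_H and U separately, and only their product enters the distance.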

  11. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    Science.gov (United States)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) of the Universal Soil Loss Equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the Central European region, while also taking into account the different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and fortuity of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique; the presentation includes a more detailed account of the methods selected. Because of the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations, so we decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, as well as several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally; the remaining values will be determined by interpolation or by model analogy. There are several methods used for calculating the C-factor from measured experimental data, some of which are not suitable for the type of data gathered. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.
The problems concerning the selection of a relevant measurement method as well as the final
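At plot scale, each C-factor entry is essentially a soil-loss ratio (SLR): sediment yield of the cropped plot divided by that of a clean-tilled fallow reference under the same simulated rainfall, weighted over crop stages by the fraction of annual rainfall erosivity falling in each stage. A sketch with invented stage measurements:

```python
# Soil-loss ratio per crop stage and the erosivity-weighted C-factor.
def soil_loss_ratio(loss_crop, loss_fallow):
    return loss_crop / loss_fallow

# Hypothetical stage data: (loss_crop, loss_fallow, EI30 fraction of year)
stages = [
    (0.80, 1.00, 0.15),   # seedbed
    (0.45, 1.00, 0.35),   # establishment
    (0.10, 1.00, 0.50),   # full canopy
]
c_factor = sum(soil_loss_ratio(lc, lf) * f for lc, lf, f in stages)
print(f"C-factor ~ {c_factor:.3f}")
```

The rainfall simulator's role is exactly to supply the `loss_crop`/`loss_fallow` pairs at controlled intensity in the targeted phenophase, which natural rainfall rarely allows.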

  12. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
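The elastic-rebound probability in any renewal model is the conditional probability P = (F(t+Δt) − F(t)) / (1 − F(t)), where F is the CDF of the recurrence-interval distribution and t is the open interval since the last event. A sketch using a lognormal recurrence model parameterized by mean interval and aperiodicity (the paper's actual renewal-model choices and parameters may differ):

```python
import math

# Conditional probability of the next rupture within dt years, given
# t years elapsed, for a lognormal recurrence model.

def lognormal_cdf(x, mu, sigma):
    if x <= 0.0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(t_elapsed, dt, mean_ri, aperiodicity):
    # convert mean recurrence interval + aperiodicity (COV) to mu, sigma
    sigma = math.sqrt(math.log(1.0 + aperiodicity**2))
    mu = math.log(mean_ri) - 0.5 * sigma**2
    f_t = lognormal_cdf(t_elapsed, mu, sigma)
    f_t2 = lognormal_cdf(t_elapsed + dt, mu, sigma)
    return (f_t2 - f_t) / (1.0 - f_t)

# Hypothetical fault: mean recurrence 200 yr, aperiodicity 0.5,
# 150 yr elapsed, 30-yr forecast window
p = conditional_prob(150.0, 30.0, 200.0, 0.5)
print(f"30-yr conditional probability: {p:.3f}")
```

The methodology's along-fault averaging supplies the `mean_ri`-like parameters; the "probability of what" reframing then applies this conditional form to whole ruptures rather than to offsets at a point.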

  13. A methodology for determining the dynamic exchange of resources in nuclear fuel cycle simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gidden, Matthew J., E-mail: gidden@iiasa.ac.at [International Institute for Applied Systems Analysis, Schlossplatz 1, A-2361 Laxenburg (Austria); University of Wisconsin – Madison, Department of Nuclear Engineering and Engineering Physics, Madison, WI 53706 (United States); Wilson, Paul P.H. [University of Wisconsin – Madison, Department of Nuclear Engineering and Engineering Physics, Madison, WI 53706 (United States)

    2016-12-15

    Highlights: • A novel fuel cycle simulation entity interaction mechanism is proposed. • A framework and implementation of the mechanism is described. • New facility outage and regional interaction scenario studies are described and analyzed. - Abstract: Simulation of the nuclear fuel cycle can be performed using a wide range of techniques and methodologies. Past efforts have focused on specific fuel cycles or reactor technologies. The CYCLUS fuel cycle simulator seeks to separate the design of the simulation from the fuel cycle or technologies of interest. In order to support this separation, a robust supply–demand communication and solution framework is required. Accordingly, an agent-based supply-chain framework, the Dynamic Resource Exchange (DRE), has been designed and implemented in CYCLUS. It supports the communication of complex resources, namely isotopic compositions of nuclear fuel, between fuel cycle facilities and their managers (e.g., institutions and regions). Instances of supply and demand are defined as an optimization problem and solved for each timestep. Importantly, the DRE allows each agent in the simulation to independently indicate preference for specific trading options in order to meet both physics requirements and satisfy constraints imposed by potential socio-political models. To display the variety of possible simulations that the DRE enables, example scenarios are formulated and described. Important features include key fuel-cycle facility outages, introduction of external recycled fuel sources (similar to the current mixed oxide (MOX) fuel fabrication facility in the United States), and nontrivial interactions between fuel cycles existing in different regions.
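The DRE resolves each timestep's supply-demand instance as an optimization problem over agents' stated preferences. A much-simplified greedy stand-in conveys the preference-ordered matching idea; the real DRE formulates and solves a proper exchange program, and all names and quantities below are invented.

```python
# Toy preference-ordered supply/demand matching: each consumer ranks
# suppliers, and demand is filled greedily in preference order subject
# to supplier capacity. (Sketch only, not the CYCLUS DRE solver.)

supply = {"mine_A": 10.0, "recycle_B": 4.0}     # available resource, kg
requests = [
    # (consumer, amount requested, ordered supplier preference)
    ("reactor_1", 6.0, ["recycle_B", "mine_A"]),
    ("reactor_2", 7.0, ["mine_A", "recycle_B"]),
]

trades = []
for consumer, amount, prefs in requests:
    remaining = amount
    for supplier in prefs:
        if remaining <= 0:
            break
        take = min(remaining, supply[supplier])
        if take > 0:
            trades.append((supplier, consumer, take))
            supply[supplier] -= take
            remaining -= take

for s, c, q in trades:
    print(f"{s} -> {c}: {q} kg")
```

A facility outage in this picture is simply a supplier whose capacity drops to zero for some timesteps, forcing demand onto less-preferred sources, which is the kind of scenario the abstract's examples exercise.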

  14. A methodology for determining the dynamic exchange of resources in nuclear fuel cycle simulation

    International Nuclear Information System (INIS)

    Gidden, Matthew J.; Wilson, Paul P.H.

    2016-01-01

    Highlights: • A novel fuel cycle simulation entity interaction mechanism is proposed. • A framework and implementation of the mechanism is described. • New facility outage and regional interaction scenario studies are described and analyzed. - Abstract: Simulation of the nuclear fuel cycle can be performed using a wide range of techniques and methodologies. Past efforts have focused on specific fuel cycles or reactor technologies. The CYCLUS fuel cycle simulator seeks to separate the design of the simulation from the fuel cycle or technologies of interest. In order to support this separation, a robust supply–demand communication and solution framework is required. Accordingly, an agent-based supply-chain framework, the Dynamic Resource Exchange (DRE), has been designed and implemented in CYCLUS. It supports the communication of complex resources, namely isotopic compositions of nuclear fuel, between fuel cycle facilities and their managers (e.g., institutions and regions). Instances of supply and demand are defined as an optimization problem and solved for each timestep. Importantly, the DRE allows each agent in the simulation to independently indicate preference for specific trading options in order to meet both physics requirements and satisfy constraints imposed by potential socio-political models. To display the variety of possible simulations that the DRE enables, example scenarios are formulated and described. Important features include key fuel-cycle facility outages, introduction of external recycled fuel sources (similar to the current mixed oxide (MOX) fuel fabrication facility in the United States), and nontrivial interactions between fuel cycles existing in different regions.

  15. De-individualized psychophysiological strain assessment during a flight simulation test—Validation of a space methodology

    Science.gov (United States)

    Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen

    For the evaluation of an operator's skill, reliability indicators of work quality as well as of psychophysiological states during the work have to be considered. The methodology and measurement equipment presented here were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. In this study, however, the method was applied to a comparable terrestrial task - the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV), which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load; its changes and increases are interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT), and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method may promote a wide range of future applications in aviation and space psychology.
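A baseline-referenced arousal index of the kind described can be sketched as per-channel z-scores against the individual's calibration session, combined into one scalar. The channel names and the simple averaging rule below are illustrative stand-ins, not the published PAV construction.

```python
# Toy "arousal vector": z-score each physiological channel against the
# individual's calibration baseline, then average into a strain index.

def arousal_index(sample, baseline_mean, baseline_sd):
    z = {k: (sample[k] - baseline_mean[k]) / baseline_sd[k] for k in sample}
    return sum(z.values()) / len(z), z

# Hypothetical calibration statistics for one applicant
baseline_mean = {"heart_rate": 70.0, "resp_rate": 14.0, "skin_cond": 5.0}
baseline_sd = {"heart_rate": 5.0, "resp_rate": 2.0, "skin_cond": 1.0}

# Hypothetical readings during a simulator task
during_task = {"heart_rate": 85.0, "resp_rate": 18.0, "skin_cond": 7.5}
index, z = arousal_index(during_task, baseline_mean, baseline_sd)
z_round = {k: round(v, 1) for k, v in z.items()}
print(f"strain index: {index:.2f}, per-channel z: {z_round}")
```

The per-individual calibration step is what lets the index absorb the different autonomic reaction patterns (the "autonomic outlet type") before readings are compared across applicants.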

  16. A REVISED BROAD-LINE REGION RADIUS AND BLACK HOLE MASS FOR THE NARROW-LINE SEYFERT 1 NGC 4051

    International Nuclear Information System (INIS)

    Denney, K. D.; Watson, L. C.; Peterson, B. M.

    2009-01-01

    We present the first results from a high sampling rate, multimonth reverberation mapping campaign undertaken primarily at MDM Observatory with supporting observations from telescopes around the world. The primary goal of this campaign was to obtain either new or improved Hβ reverberation lag measurements for several relatively low luminosity active galactic nuclei (AGNs). We feature results for NGC 4051 here because, until now, this object has been a significant outlier from AGN scaling relationships; e.g., it was previously a ~2-3σ outlier on the relationship between the broad-line region (BLR) radius and the optical continuum luminosity (the R_BLR-L relationship). Our new measurements of the lag time between variations in the continuum and Hβ emission line made from spectroscopic monitoring of NGC 4051 lead to a measured BLR radius of R_BLR = 1.87 (+0.54/-0.50) light days and black hole mass of M_BH = 1.73 (+0.55/-0.52) x 10^6 M_sun. This radius is consistent with that expected from the R_BLR-L relationship, based on the present luminosity of NGC 4051 and the most current calibration of the relation by Bentz et al. We also present a preliminary look at velocity-resolved Hβ light curves and time delay measurements, although we are unable to reconstruct an unambiguous velocity-resolved reverberation signal.
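The black hole mass follows from the virial relation M_BH = f·cτ·(Δv)²/G, with cτ the BLR radius from the lag and f a virial factor of order a few. With the 1.87-day lag and a hypothetical line dispersion of 900 km/s (the paper's measured width is not quoted in the abstract), the mass comes out of the same order as the quoted value.

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m/s
M_SUN = 1.989e30         # kg
DAY = 86400.0            # s

def virial_mass(lag_days, sigma_kms, f=5.5):
    """M_BH = f * (c * tau) * sigma^2 / G, in solar masses."""
    r = C * lag_days * DAY          # BLR radius from the reverberation lag
    v = sigma_kms * 1e3             # line dispersion in m/s
    return f * r * v**2 / G / M_SUN

# Lag as in the abstract; sigma and f are illustrative assumptions
m_bh = virial_mass(lag_days=1.87, sigma_kms=900.0)
print(f"M_BH ~ {m_bh:.2e} M_sun")
```

Most of the systematic uncertainty in such masses sits in the virial factor f, which is calibrated statistically rather than measured per object.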

  17. Methodology for the LABIHS PWR simulator modernization

    Energy Technology Data Exchange (ETDEWEB)

    Jaime, Guilherme D.G.; Oliveira, Mauro V., E-mail: gdjaime@ien.gov.b, E-mail: mvitor@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a set of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for graphical user interface (GUI) design; control room and interface evaluations considering both ergonomics and human factors aspects; interaction analysis between operators and the various systems operated by them; and human reliability analysis in scenarios covering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HPC3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in the HP Fortran-77 programming language, with a shared memory used to exchange data among all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and the lack of an option to run multiple simulation instances simultaneously. Thus, there is a need to propose and implement solutions so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e., personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to

  18. Methodology for the LABIHS PWR simulator modernization

    Energy Technology Data Exchange (ETDEWEB)

    Jaime, Guilherme D.G.; Oliveira, Mauro V., E-mail: gdjaime@ien.gov.b, E-mail: mvitor@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a variety of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for the design of graphical user interfaces (GUIs); evaluation of control rooms and interfaces from both ergonomics and human factors perspectives; analysis of the interaction between operators and the various systems they operate; and human reliability analysis in scenarios covering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HPC3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in HP Fortran-77, with a shared memory used to exchange data among all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and the impossibility of running multiple simulation instances simultaneously. Hence, solutions must be proposed and implemented so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e., personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to

  19. Methodology for the LABIHS PWR simulator modernization

    International Nuclear Information System (INIS)

    Jaime, Guilherme D.G.; Oliveira, Mauro V.

    2011-01-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a variety of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for the design of graphical user interfaces (GUIs); evaluation of control rooms and interfaces from both ergonomics and human factors perspectives; analysis of the interaction between operators and the various systems they operate; and human reliability analysis in scenarios covering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HPC3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in HP Fortran-77, with a shared memory used to exchange data among all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and the impossibility of running multiple simulation instances simultaneously. Hence, solutions must be proposed and implemented so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e., personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to
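The abstract's Fortran-77 shared-memory exchange, and the proposed port to Linux/x86, can be illustrated with a small sketch using Python's `multiprocessing.shared_memory` (available on Linux since Python 3.8). The variable layout and values below are assumptions for illustration only, not taken from the LABIHS design.

```python
from multiprocessing import shared_memory
import numpy as np

# Hedged sketch: a POSIX-style shared block standing in for the HP-UX
# shared memory that the simulator modules use to exchange data.
N_VARS = 4  # e.g. pressure, temperature, flow, rod position (assumed layout)

# The simulator core creates the block and publishes plant variables...
shm = shared_memory.SharedMemory(create=True, size=N_VARS * 8)
state = np.ndarray((N_VARS,), dtype=np.float64, buffer=shm.buf)
state[:] = [15.5, 310.0, 470.0, 0.85]   # illustrative values

# ...while an operator-terminal process would attach to it by name.
shm2 = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray((N_VARS,), dtype=np.float64, buffer=shm2.buf)
pressure = float(view[0])

del state, view          # release buffer views before closing the block
shm2.close()
shm.close()
shm.unlink()
```

In a real port, each Fortran module would map the same named block, so the existing module interfaces could be preserved while the transport moves to standard POSIX shared memory.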

  20. On-line diagnosis and recovery of adversary attack using logic flowgraph methodology simulation

    International Nuclear Information System (INIS)

    Guarro, S.B.

    1986-01-01

    The Logic Flowgraph Methodology (LFM) allows the construction of special graph models for simulation of complex causal processes, including feedback loops and sequential effects. Among the most notable features of LFM is the formal inclusion in its models of causality conditioning by logic switches embedded in the modeled process, such as faults or modes of operation. The LFM model of a process is a graph structure that captures, in one synthetic representation, the relevant success- and fault-space characterization of that process. LFM is very similar to an artificial intelligence expert system shell. To illustrate the use of LFM, an application to the assessment and on-line monitoring of a material control facility is presented. The LFM models are used to model adversary action and control response, and to generate mini-diagnostic and recovery trees in real time, as well as reliability trees for off-line evaluation. Although the case study presented is for an imaginary facility, most of the conceptual elements that would be present in a real application have been retained in order to highlight the features and capabilities of the methodology.

  1. Methodology for simulation of geomagnetically induced currents in power systems

    Directory of Open Access Journals (Sweden)

    Boteler David

    2014-07-01

    To assess the geomagnetic hazard to power systems it is useful to be able to simulate the geomagnetically induced currents (GIC) that are produced during major geomagnetic disturbances. This paper examines the methodology used in power system analysis and shows how it can be applied to modelling GIC. Electric fields in the area of the power network are used to determine the voltage sources, or equivalent current sources, in the transmission lines. The power network can be described by a mesh impedance matrix, which is combined with the voltage sources to calculate the GIC in each loop. Alternatively, the power network can be described by a nodal admittance matrix, which is combined with the sum of current sources into each node to calculate the nodal voltages; these are then used to calculate the GIC in the transmission lines and the GIC flowing to ground at each substation. Practical calculations can be made by superposition of results calculated separately for northward and eastward electric fields. This can be done using magnetic data from a single observatory to calculate an electric field that is a uniform approximation of the field over the area of the power system. It is also shown how the superposition of results can be extended to use data from two observatories, approximating the electric field by a linear variation between the two observatory locations. These calculations provide an efficient method for simulating the GIC that would be produced by historically significant geomagnetic storm events.
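The nodal-admittance route described in the abstract can be sketched for a toy two-substation network; the field strengths, line resistance, and grounding resistances below are invented for illustration and are not the paper's values.

```python
import numpy as np

# Hedged sketch of a nodal-admittance GIC calculation: a uniform electric
# field induces a voltage source in one transmission line joining two
# grounded substations.
E_north, E_east = 1.0, 0.5          # uniform electric field, V/km (assumed)
line = {"from": 0, "to": 1, "R": 3.0,          # line resistance, ohms
        "dx_north": 100.0, "dx_east": 50.0}    # path lengths, km
Rg = np.array([0.5, 0.5])           # substation grounding resistances, ohms

# Voltage source induced in the line by the uniform field (V = E . dl)
V_line = E_north * line["dx_north"] + E_east * line["dx_east"]
y = 1.0 / line["R"]                  # line admittance
J = np.zeros(2)                      # equivalent current-source injections
J[line["from"]] -= V_line * y
J[line["to"]]  += V_line * y

# Nodal admittance matrix: line admittance plus grounding admittances
Y = np.array([[ y, -y],
              [-y,  y]]) + np.diag(1.0 / Rg)

Vn = np.linalg.solve(Y, J)           # nodal voltages
gic_ground = Vn / Rg                 # GIC to ground at each substation
gic_line = (V_line + Vn[0] - Vn[1]) * y   # GIC flowing along the line
```

Superposition, as the abstract notes, follows directly: solving once with (E_north, E_east) = (1, 0) and once with (0, 1) lets any field direction be obtained as a linear combination.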

  2. Verification and validation methodology of training simulators

    International Nuclear Information System (INIS)

    Hassan, M.W.; Khan, N.M.; Ali, S.; Jafri, M.N.

    1997-01-01

    A full scope training simulator comprising 109 plant systems of a 300 MWe PWR plant contracted by the Pakistan Atomic Energy Commission (PAEC) from China is near completion. The simulator is distinctive in that it will be ready prior to fuel loading. The models for the full scope training simulator have been developed under the APROS (Advanced PROcess Simulator) environment developed by the Technical Research Center (VTT) and Imatran Voima (IVO) of Finland. The replicated control room of the plant is contracted from the Shanghai Nuclear Engineering Research and Design Institute (SNERDI), China. The development of simulation models to represent all the systems of the target plant that contribute to plant dynamics and are essential for operator training has been carried out indigenously at PAEC. This multifunctional simulator is at present under extensive testing and will be interfaced with the control panels in March 1998 so as to realize a full scope training simulator. The validation of the simulator is a joint venture between PAEC and SNERDI. For the individual components and the individual plant systems, the results have been compared against design data and PSAR results to confirm the faithfulness of the simulator to the physical plant systems. The reactor physics parameters have been validated against experimental results and benchmarks generated using design codes. Verification and validation in the integrated state has been performed against benchmark transients conducted using RELAP5/MOD2 for the complete spectrum of anticipated transients, covering the five well-known categories. (author)

  3. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    Science.gov (United States)

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information system (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged documenting that HIS can be implicated in patient harm and death. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to a system's release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  4. NuSTAR reveals the Comptonizing corona of the broad-line radio galaxy 3C 382

    Energy Technology Data Exchange (ETDEWEB)

    Ballantyne, D. R.; Bollenbacher, J. M. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Brenneman, L. W. [Harvard-Smithsonian CfA, 60 Garden Street MS-67, Cambridge, MA 02138 (United States); Madsen, K. K.; Baloković, M.; Harrison, F. A.; Walton, D. J. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Boggs, S. E. [Space Science Laboratory, University of California, Berkeley, CA 94720 (United States); Christensen, F. E.; Craig, W. W. [DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Gandhi, P. [Department of Physics, University of Durham, South Road, Durham DH1 3LE (United Kingdom); Hailey, C. J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Lohfink, A. M. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Marinucci, A. [Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, via della Vasca Navale 84, I-00146 Roma (Italy); Markwardt, C. B.; Zhang, W. W. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Stern, D., E-mail: david.ballantyne@physics.gatech.edu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States)

    2014-10-10

    Broad-line radio galaxies (BLRGs) are active galactic nuclei that produce powerful, large-scale radio jets, but appear as Seyfert 1 galaxies in their optical spectra. In the X-ray band, BLRGs also appear like Seyfert galaxies, but with flatter spectra and weaker reflection features. One explanation for these properties is that the X-ray continuum is diluted by emission from the jet. Here, we present two NuSTAR observations of the BLRG 3C 382 that show clear evidence that the continuum of this source is dominated by thermal Comptonization, as in Seyfert 1 galaxies. The two observations were separated by over a year and found 3C 382 in different states separated by a factor of 1.7 in flux. The lower flux spectrum has a photon index of Γ = 1.68 (+0.03/−0.02), while the photon index of the higher flux spectrum is Γ = 1.78 (+0.02/−0.03). Thermal and anisotropic Comptonization models provide an excellent fit to both spectra and show that the coronal plasma cooled from kT_e = 330 ± 30 keV in the low flux data to 231 (+50/−88) keV in the high flux observation. This cooling behavior is typical of Comptonizing coronae in Seyfert galaxies and is distinct from the variations observed in jet-dominated sources. In the high flux observation, simultaneous Swift data are leveraged to obtain a broadband spectral energy distribution, which indicates that the corona intercepts ∼10% of the optical- and ultraviolet-emitting accretion disk. 3C 382 exhibits very weak reflection features, with no detectable relativistic Fe Kα line, which may be best explained by an outflowing corona combined with an ionized inner accretion disk.

  5. A Radiative Transfer Modeling Methodology in Gas-Liquid Multiphase Flow Simulations

    Directory of Open Access Journals (Sweden)

    Gautham Krishnamoorthy

    2014-01-01

    A methodology for performing radiative transfer calculations in computational fluid dynamic simulations of gas-liquid multiphase flows is presented. By considering an externally irradiated bubble column photoreactor as our model system, the bubble scattering coefficients were determined through add-on functions by employing as inputs the bubble volume fractions, number densities, and the fractional contribution of each bubble size to the bubble volume from four different multiphase modeling options. The scattering coefficient profiles resulting from the models were significantly different from one another and aligned closely with their predicted gas-phase volume fraction distributions. The impacts of the multiphase modeling option, initial bubble diameter, and gas flow rates on the radiation distribution patterns within the reactor were also examined. An increase in air inlet velocities resulted in an increase in the fraction of larger sized bubbles and their contribution to the scattering coefficient. However, the initial bubble sizes were found to have the strongest impact on the radiation field.
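The add-on scattering-coefficient step described in the abstract can be sketched as follows; taking Q_sca ≈ 2 (the geometric large-particle limit) and the bin values below are assumptions for illustration, not the paper's numbers.

```python
import numpy as np

# Hedged sketch: convert per-bin gas volume fractions into bubble number
# densities, then into a total scattering coefficient.
d = np.array([1e-3, 3e-3, 5e-3])        # bubble diameters per bin, m (assumed)
alpha = np.array([0.02, 0.05, 0.03])    # gas volume fraction per bin (assumed)

n = alpha / (np.pi / 6.0 * d**3)        # number density per bin, 1/m^3
Q_sca = 2.0                             # scattering efficiency, assumed limit
beta = np.sum(n * Q_sca * np.pi * d**2 / 4.0)   # scattering coefficient, 1/m
```

Note the analytic shortcut this exposes: with fixed Q_sca = 2 the result reduces to beta = 3 Σ α_i / d_i, which is why the abstract finds the coefficient profiles tracking the predicted gas-phase volume fraction distributions.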

  6. DISCOVERY OF THE BROAD-LINED TYPE Ic SN 2013cq ASSOCIATED WITH THE VERY ENERGETIC GRB 130427A

    Energy Technology Data Exchange (ETDEWEB)

    Xu, D.; Krühler, T.; Hjorth, J.; Malesani, D.; Fynbo, J. P. U.; Watson, D. J.; Geier, S. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 København Ø (Denmark); De Ugarte Postigo, A.; Thöne, C. C.; Sánchez-Ramírez, R. [Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía s/n, E-18008 Granada (Spain); Leloudas, G. [The Oskar Klein Centre, Department of Physics, Stockholm University, AlbaNova, SE-10691 Stockholm (Sweden); Cano, Z.; Jakobsson, P. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, IS-107 Reykjavik (Iceland); Schulze, S. [Departamento de Astronomía y Astrofísica, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22 (Chile); Kaper, L. [Astronomical Institute Anton Pannekoek, University of Amsterdam, Science Park 904, NL-1098 XH Amsterdam (Netherlands); Sollerman, J. [The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm (Sweden); Cabrera-Lavers, A. [Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife (Spain); Cao, C. [Department of Space Science and Physics, Shandong University at Weihai, Weihai, Shandong 264209 (China); Covino, S. [INAF/Brera Astronomical Observatory, via Bianchi 46, I-23807 Merate (Italy); Flores, H., E-mail: dong@dark-cosmology.dk [Laboratoire Galaxies Etoiles Physique et Instrumentation, Observatoire de Paris, 5 place Jules Janssen, F-92195 Meudon (France); and others

    2013-10-20

    Long-duration gamma-ray bursts (GRBs) at z < 1 are found in most cases to be accompanied by bright, broad-lined Type Ic supernovae (SNe Ic-BL). The highest-energy GRBs are mostly located at higher redshifts, where the associated SNe are hard to detect observationally. Here, we present early and late observations of the optical counterpart of the very energetic GRB 130427A. Despite its moderate redshift, z = 0.3399 ± 0.0002, GRB 130427A is at the high end of the GRB energy distribution, with an isotropic-equivalent energy release of E_iso ∼ 9.6 × 10^53 erg, more than an order of magnitude more energetic than other GRBs with spectroscopically confirmed SNe. In our dense photometric monitoring, we detect excess flux in the host-subtracted r-band light curve, consistent with that expected from an emerging SN, ∼0.2 mag fainter than the prototypical SN 1998bw. A spectrum obtained around the time of the SN peak (16.7 days after the GRB) reveals broad undulations typical of SNe Ic-BL, confirming the presence of an SN, designated SN 2013cq. The spectral shape and early peak time are similar to those of the high expansion velocity SN 2010bh associated with GRB 100316D. Our findings demonstrate that high-energy, long-duration GRBs, commonly detected at high redshift, can also be associated with SNe Ic-BL, pointing to a common progenitor mechanism.

  7. Superluminous Transients at AGN Centers from Interaction between Black Hole Disk Winds and Broad-line Region Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Moriya, Takashi J.; Tanaka, Masaomi; Ohsuga, Ken [Division of Theoretical Astronomy, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Morokuma, Tomoki, E-mail: takashi.moriya@nao.ac.jp [Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan)

    2017-07-10

    We propose that superluminous transients that appear at central regions of active galactic nuclei (AGNs), such as CSS100217:102913+404220 (CSS100217) and PS16dtm, which reach near- or super-Eddington luminosities of the central black holes, are powered by the interaction between accretion-disk winds and clouds in the broad-line regions (BLRs) surrounding them. If the disk luminosity temporarily increases by, e.g., limit-cycle oscillations, leading to a powerful radiatively driven wind, strong shock waves propagate in the BLR. Because the dense clouds in AGN BLRs typically have densities similar to those found in SNe IIn, strong radiative shocks emerge and efficiently convert the ejecta kinetic energy to radiation. As a result, transients similar to SNe IIn can be observed at AGN central regions. Since a typical black hole disk-wind velocity is ≃0.1c, where c is the speed of light, the ejecta kinetic energy is expected to be ≃10^52 erg when ≃1 M_⊙ is ejected. This kinetic energy is transformed to radiation energy on the timescale for the wind to sweep up a mass similar to its own in the BLR, which is a few hundred days. Therefore, both the luminosities (∼10^44 erg s^−1) and timescales (∼100 days) of the superluminous transients from AGN central regions match those expected in our interaction model. If CSS100217 and PS16dtm are related to AGN activities triggered by limit-cycle oscillations, they will become bright again in the coming years or decades.
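The quoted energetics can be checked with one line of arithmetic: ejecting about one solar mass at the typical disk-wind velocity of ≃0.1c indeed carries a kinetic energy of order 10^52 erg.

```python
# Quick check of the energetics in the abstract above.
M_sun = 1.989e33      # solar mass, g
c = 2.998e10          # speed of light, cm/s

m_ej = 1.0 * M_sun    # ejected mass, as quoted
v = 0.1 * c           # typical disk-wind velocity, as quoted
E_kin = 0.5 * m_ej * v**2   # kinetic energy, erg  (~9e51 erg, i.e. ~10^52)
```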

  8. Radio/X-ray monitoring of the broad-line radio galaxy 3C 382. High-energy view with XMM-Newton and NuSTAR

    Science.gov (United States)

    Ursini, F.; Petrucci, P.-O.; Matt, G.; Bianchi, S.; Cappi, M.; Dadina, M.; Grandi, P.; Torresi, E.; Ballantyne, D. R.; De Marco, B.; De Rosa, A.; Giroletti, M.; Malzac, J.; Marinucci, A.; Middei, R.; Ponti, G.; Tortosa, A.

    2018-05-01

    We present the analysis of five joint XMM-Newton/NuSTAR observations, 20 ks each and separated by 12 days, of the broad-line radio galaxy 3C 382. The data were obtained as part of a campaign performed in September-October 2016 simultaneously with VLBA. The radio data and their relation with the X-ray ones will be discussed in a following paper. The source exhibits a moderate flux variability in the UV/X-ray bands, and a limited spectral variability, especially in the soft X-ray band. In agreement with past observations, we find the presence of a warm absorber, an iron Kα line with no associated Compton reflection hump, and a variable soft excess well described by a thermal Comptonization component. The data are consistent with a "two-corona" scenario, in which the UV emission and soft excess are produced by a warm (kT ≃ 0.6 keV), optically thick (τ ≃ 20) corona consistent with being a slab fully covering a nearly passive accretion disc, while the hard X-ray emission is due to a hot corona intercepting roughly 10% of the soft emission. These results are remarkably similar to those generally found in radio-quiet Seyferts, thus suggesting a common accretion mechanism.

  9. Validation of response simulation methodology of Albedo dosemeter; Validacao da metodologia de simulacao de resposta de dosimetro de Albedo

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, B.M.; Silva, A.X. da, E-mail: bfreitas@nuclear.ufrj.br [Coordenacao do Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Mauricio, C.L.P. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2016-07-01

    The Instituto de Radioprotecao e Dosimetria developed and runs a neutron TLD albedo individual monitoring service. To optimize the dose calculation algorithm and to infer new calibration factors, the response of this dosemeter was simulated. To validate the methodology employed, it was applied to a problem from the QUADOS (Quality Assurance of Computational Tools for Dosimetry) intercomparison, which was aimed at evaluating dosimetric problems; one task was to calculate the response of a generic albedo dosemeter. The obtained results were compared with those of other models and with the reference solution, showing good agreement. (author)

  10. Numerical simulation methodologies for design and development of Diffuser-Augmented Wind Turbines – analysis and comparison

    Directory of Open Access Journals (Sweden)

    Michał Lipian

    2016-01-01

    Different numerical computation methods used to develop a methodology for fast, efficient, reliable design and comparison of Diffuser-Augmented Wind Turbine (DAWT) geometries are presented. The demand for such methods is evident, given the multitude of geometrical parameters that influence the flow character through ducted turbines. The results of the Actuator Disk Model (ADM) simulations are confronted with a simulation method of a higher order of accuracy, i.e. the 3D Fully-resolved Rotor Model (FRM), at the rotor design point. Both are checked for consistency with the experimental results measured in the wind tunnel at the Institute of Turbomachinery (IMP), Lodz University of Technology (TUL). An attempt to find an efficient method (a compromise between accuracy and design time) for the flow analysis pertinent to the DAWT is the novel approach presented in this paper.

  11. METHODOLOGICAL PROBLEMS OF E-LEARNING DIDACTICS

    Directory of Open Access Journals (Sweden)

    Sergey F. Sergeev

    2015-01-01

    The article is devoted to the discussion of the methodological problems of e-learning: didactic issues in the use of advanced networking and Internet technologies to create training systems and simulators based on the methodological principles of non-classical and post-non-classical psychology and pedagogy.

  12. Development of a numerical methodology for flowforming process simulation of complex geometry tubes

    Science.gov (United States)

    Varela, Sonia; Santos, Maite; Arroyo, Amaia; Pérez, Iñaki; Puigjaner, Joan Francesc; Puigjaner, Blanca

    2017-10-01

    Nowadays, the incremental flowforming process is widely explored because the use of complex tubular products is increasing, driven by the light-weighting trend and the use of expensive materials. The enhanced mechanical properties of finished parts, combined with the process efficiency in terms of raw material and energy consumption, are the key factors for its competitiveness and sustainability, which is consistent with EU industry policy. As a promising technology, additional steps for extending the existing flowforming limits in the production of tubular products are required. The objective of the present research is to further expand the current state of the art regarding limitations on tube thickness and diameter, exploring the feasibility of flowforming complex geometries such as tubes with thicknesses of up to 60 mm. In this study, the analysis of the backward flowforming process of a 7075 aluminum tubular preform is carried out as a demonstration case to define the optimum process parameters, machine requirements and tooling geometry. Numerical simulation studies on flowforming of thin-walled tubular components have been considered to increase the knowledge of the technology. The calculation of the rotational movement of the meshed preform, the high thickness/length ratio and the thermomechanical conditions significantly increase the computation time of the numerical simulation model. This means that efficient and reliable tools able to predict the forming loads and the quality of flowformed thick tubes are not available. This paper aims to overcome this situation by developing a simulation methodology based on an FEM simulation code including new strategies. Material characterization has also been performed through tensile tests to be able to design the process. Finally, to check the reliability of the model, flowforming tests in an industrial environment have been performed.

  13. Response Surface Methodology

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.

    2014-01-01

    Abstract: This chapter first summarizes Response Surface Methodology (RSM), which started with Box and Wilson’s article in 1951 on RSM for real, non-simulated systems. RSM is a stepwise heuristic that uses first-order polynomials to approximate the response surface locally. An estimated polynomial
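A single RSM step as summarized above, fitting a local first-order polynomial and then moving along the steepest-ascent path, can be sketched as follows; the design points and noise-free toy response are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of one Response Surface Methodology step: observe the
# response at a 2^2 factorial design plus a center point, fit a
# first-order polynomial by least squares, take the steepest-ascent
# direction from its gradient.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = 3.0 + 2.0 * X[:, 0] + 1.0 * X[:, 1]      # toy responses (assumed model)

A = np.column_stack([np.ones(len(X)), X])    # design matrix [1, x1, x2]
beta, *_ = np.linalg.lstsq(A, y, rcond=None) # fitted [b0, b1, b2]
ascent = beta[1:] / np.linalg.norm(beta[1:]) # steepest-ascent direction
```

In the full heuristic this step is iterated: experiments are run along the ascent direction until improvement stalls, then a new local first-order (and eventually second-order) fit is made.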

  14. DETERMINING QUASAR BLACK HOLE MASS FUNCTIONS FROM THEIR BROAD EMISSION LINES: APPLICATION TO THE BRIGHT QUASAR SURVEY

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Fan Xiaohui; Vestergaard, Marianne

    2009-01-01

    We describe a Bayesian approach to estimating quasar black hole mass functions (BHMF) using the broad emission lines to estimate black hole mass. We show how using the broad-line mass estimates in combination with statistical techniques developed for luminosity function estimation (e.g., the 1/V_a correction) leads to statistically biased results. We derive the likelihood function for the BHMF based on the broad-line mass estimates, and derive the posterior distribution for the BHMF, given the observed data. We develop our statistical approach for a flexible model where the BHMF is modeled as a mixture of Gaussian functions. Statistical inference is performed using Markov chain Monte Carlo (MCMC) methods, and we describe a Metropolis-Hastings algorithm to perform the MCMC. The MCMC simulates random draws from the probability distribution of the BHMF parameters, given the data, and we use a simulated data set to show how these random draws may be used to estimate the probability distribution for the BHMF. In addition, we show how the MCMC output may be used to estimate the probability distribution of any quantities derived from the BHMF, such as the peak in the space density of quasars. Our method has the advantage that it is able to constrain the BHMF even beyond the survey detection limits at the adopted confidence level, accounts for measurement errors and the intrinsic uncertainty in broad-line mass estimates, and provides a natural way of estimating the probability distribution of any quantities derived from the BHMF. We conclude by using our method to estimate the local active BHMF using the z < 0.5 Bright Quasar Survey sources, constraining the BHMF at M_BH ≳ 10^8 M_⊙. Our analysis implies that at a given M_BH, z < 0.5 broad-line quasars have a typical Eddington ratio of ∼0.4 and a dispersion in Eddington ratio of ≲0.5 dex.
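The Metropolis-Hastings machinery described above can be illustrated in miniature; the one-dimensional Gaussian target below merely stands in for the paper's mixture-of-Gaussians BHMF posterior and is purely an assumption.

```python
import numpy as np

# Bare-bones random-walk Metropolis-Hastings sampler. The target is a
# toy log-posterior over theta = log10(M_BH / M_sun), assumed N(8, 1)
# for illustration only.
def log_post(theta):
    return -0.5 * (theta - 8.0) ** 2

rng = np.random.default_rng(42)
theta, chain = 7.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 1.0)        # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                           # accept; else keep current
    chain.append(theta)

draws = np.array(chain[5000:])                 # discard burn-in
```

As in the paper's usage, `draws` are samples from the posterior, so any derived quantity (here, e.g., the posterior mean and spread of theta) is estimated simply by evaluating it over the draws.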

  15. Contribution to the electrothermal simulation in power electronics. Development of a simulation methodology applied to switching circuits under variable operating conditions; Contribution a la simulation electrothermique en electronique de puissance. Developpement d'une methode de simulation pour circuits de commutation soumis a des commandes variables

    Energy Technology Data Exchange (ETDEWEB)

    Vales, P.

    1997-03-19

    In modern hybrid or monolithic integrated power circuits, electrothermal effects can no longer be ignored. A methodology is proposed in order to simulate electrothermal effects in power circuits, with a significant reduction of the computation time while taking into account electrical and thermal time constants which are usually widely different. A supervising program, written in Fortran, uses system call sequences and manages an interactive dialog between a fast thermal simulator and a general electrical simulator. This explicit coupling process between two specific simulators requires a multi-task operating system. The developed software allows for the prediction of the electrothermal power dissipation drift in the active areas of components, and the prediction of thermally-induced coupling effects between adjacent components. An application to the study of hard switching circuits working under variable operating conditions is presented
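The explicit coupling process described above, a supervisor alternating between an electrical solve and a thermal solve, can be sketched in much simplified form; the device model, single-pole thermal network, and all constants below are invented for illustration and do not come from the thesis.

```python
# Hedged sketch of explicit electro-thermal coupling: the "electrical"
# step computes dissipated power at the current temperature, the
# "thermal" step advances a one-pole RC thermal network with that power.
def electrical_step(T):
    R_on = 0.05 * (1.0 + 0.004 * (T - 25.0))   # temp-dependent on-resistance
    I = 20.0                                    # load current, A (assumed)
    return I * I * R_on                         # dissipated power, W

def thermal_step(T, P, dt=0.1, Rth=1.2, Cth=0.5, T_amb=25.0):
    # one explicit Euler step of the thermal RC network (assumed values)
    return T + dt * (P - (T - T_amb) / Rth) / Cth

T = 25.0
for _ in range(2000):        # supervisor loop: alternate the two solvers
    P = electrical_step(T)
    T = thermal_step(T, P)
```

The widely separated electrical and thermal time constants mentioned in the abstract are exactly why the real methodology keeps two specialized simulators and only exchanges boundary data between them, rather than solving both physics on one time grid.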

  16. A methodology of selection of exercises for operator training on a control room simulator and its application to the data bank of exercises at the Dukovany NPP

    International Nuclear Information System (INIS)

    Holy, J.

    2005-07-01

    The report describes the preparation of methodology for the selection of scenarios to be used during operator training on a full-scope simulator. The scenarios are selected from a data bank of scenarios, which is under preparation based on feedback from the operational history and theoretical analyses. The new methodology takes into account 3 basic attributes defining the priority for use within the training programme: frequency of occurrence, safety-related significance, and difficulty. The attributes are scored and based on a joint score, the importance of inclusion of the scenario in the training programme is also scored. The methodology was applied to the data bank of scenarios for simulation of abnormal states and incidents trained on the up-to-date simulator of the Dukovany NPP, and the results of this pilot application were made available to Dukovany operator training staff as a tool for the preparation of training plans for the years to come. The results of a PSA study are used for a non-trivial selection of the scenarios
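The three-attribute scoring described above might look like the following sketch; the scales, weights, and example scenarios are assumptions, since the report's actual scoring scheme is not reproduced here.

```python
# Hedged sketch: score training scenarios on frequency of occurrence,
# safety-related significance, and difficulty, then rank by joint score.
def joint_score(frequency, safety, difficulty, weights=(1.0, 1.0, 1.0)):
    """Each attribute scored e.g. 1 (low) to 5 (high); a higher joint
    score means higher priority for inclusion in the training programme."""
    scores = (frequency, safety, difficulty)
    return sum(w * s for w, s in zip(weights, scores))

# Illustrative scenarios with assumed attribute scores
scenarios = {
    "loss of feedwater": (3, 5, 4),
    "turbine trip": (4, 3, 2),
    "SG tube rupture": (1, 5, 5),
}
ranked = sorted(scenarios, key=lambda k: joint_score(*scenarios[k]),
                reverse=True)
```

In the report's approach, the safety-related significance score would come from PSA importance results rather than being assigned by hand as in this toy example.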

  17. A Novel Methodology for the Simulation of Athletic Tasks on Cadaveric Knee Joints with Respect to In Vivo Kinematics

    Science.gov (United States)

    Bates, Nathaniel A.; Nesbitt, Rebecca J.; Shearn, Jason T.; Myer, Gregory D.; Hewett, Timothy E.

    2015-01-01

    Six degree of freedom (6-DOF) robotic manipulators have simulated clinical tests and gait on cadaveric knees to examine knee biomechanics. However, these activities do not necessarily emulate the kinematics and kinetics that lead to anterior cruciate ligament (ACL) rupture. The purpose of this study was to determine the techniques needed to derive reproducible, in vitro simulations from in vivo skin-marker kinematics recorded during simulated athletic tasks. Input of raw, in vivo, skin-marker-derived motion capture kinematics consistently resulted in specimen failure. The protocol described in this study developed an in-depth methodology to adapt in vivo kinematic recordings into 6-DOF knee motion simulations for drop vertical jumps and sidestep cutting. Our simulation method repeatably produced kinetics consistent with vertical ground reaction patterns while preserving specimen integrity. Athletic task simulation represents an advancement that allows investigators to examine ACL-intact and graft biomechanics during motions that generate greater kinetics, and the athletic tasks are more representative of documented cases of ligament rupture. Establishment of baseline functional mechanics within the knee joint during athletic tasks will serve to advance the prevention, repair and rehabilitation of ACL injuries. PMID:25869454

  18. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to being able to make more informed management decisions and prioritize resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the network structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
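The transfer-entropy building block can be shown with a plug-in estimator. This is a generic history-length-1 estimator on synthetic binary series, not the authors' pipeline; the coupled series are constructed so that Y copies X with a one-step lag:

```python
# Plug-in transfer entropy T_{X->Y} (history length 1, in bits), estimated
# from empirical joint probabilities of (y_{t+1}, y_t, x_t).
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    triples = list(zip(y[1:], y[:-1], x[:-1]))          # (y_next, y, x)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((yn, yp) for yn, yp, _ in triples)   # (y_next, y)
    p_z = Counter(yp for _, yp, _ in triples)           # y
    p_zx = Counter((yp, xp) for _, yp, xp in triples)   # (y, x)
    te = 0.0
    for (yn, yp, xp), c in p_xyz.items():
        p1 = c / p_zx[(yp, xp)]          # p(y_next | y, x)
        p2 = p_yz[(yn, yp)] / p_z[yp]    # p(y_next | y)
        te += (c / n) * log2(p1 / p2)
    return te

# y copies x with a one-step lag, so information flows from X to Y
x = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1] * 20
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
```

The directional asymmetry is the point: `transfer_entropy(x, y)` comes out large, while the reverse direction is much smaller, which is what lets the method reverse engineer who drives whom.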

  19. Chemical migration and risk assessment methodology

    International Nuclear Information System (INIS)

    Onishi, Y.; Brown, S.M.; Olsen, A.R.; Parkhurst, M.A.

    1981-01-01

    To provide a scientific basis for risk assessment and decision making, the Chemical Migration and Risk Assessment (CMRA) Methodology was developed to simulate overland and instream toxic contaminant migration and fate, and to predict the probability of acute and chronic impacts on aquatic biota. The simulation results indicated that the time between the pesticide application and the subsequent runoff-producing event was the most important factor determining the amount of alachlor transported. The study also revealed that sediment transport has important effects on contaminant migration when sediment concentrations in receiving streams are high or contaminants are highly susceptible to adsorption by sediment. Although the capabilities of the CMRA methodology were only partially tested in this study, the results demonstrate that the methodology can be used as a scientific decision-making tool for toxic chemical regulations, as a research tool to evaluate the relative significance of various transport and degradation phenomena, and as a tool to examine the effectiveness of toxic chemical control practices.

  20. Theory of mind and Verstehen (understanding) methodology.

    Science.gov (United States)

    Kumazaki, Tsutomu

    2016-09-01

    Theory of mind is a prominent, but highly controversial, field in psychology, psychiatry, and philosophy of mind. Simulation theory, theory-theory and other views have been presented in recent decades, none of which are monolithic. In this article, various views on theory of mind are reviewed, and methodological problems within each view are investigated. The relationship between simulation theory and Verstehen (understanding) methodology in traditional human sciences is an intriguing issue, although the latter is not a direct ancestor of the former. From that perspective, lessons for current clinical psychiatry are drawn. © The Author(s) 2016.

  1. Simplified methodology for Angra 1 containment analysis

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1991-08-01

    A simplified methodology of analysis was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The obtained data were compared with the Angra 1 Final Safety Analysis Report, and also with those calculated by a detailed model. The results obtained by this new methodology, such as the small computational time of the simulation, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author)

  2. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking into account diffraction effects. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and soon-to-come zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose Maxwell-equations-based modeling to compute the propagation of light, and software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and both memory and computation time. After having presented the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
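The FDTD engine at the core of such a model can be sketched in one dimension. This is a minimal vacuum sketch in normalized units (c = dx = dt = 1, so the Courant number is 1, the "magic time step" that is exact in 1D), not the authors' 3D pixel-stack solver:

```python
import numpy as np

nx, nt = 200, 110
ez = np.zeros(nx)   # electric field at integer grid points
hy = np.zeros(nx)   # magnetic field at staggered half-points
courant = 1.0       # magic time step: dispersion-free propagation in 1D

for t in range(nt):
    hy[:-1] += courant * (ez[1:] - ez[:-1])   # H update from the curl of E
    ez[1:] += courant * (hy[1:] - hy[:-1])    # E update from the curl of H
    ez[nx // 2] += np.exp(-(((t - 30) / 10.0) ** 2))  # soft Gaussian source
```

The staggered (Yee) update alternates half-step E and H updates; after the run, two counter-propagating pulses have moved well away from the source cell. A real sensor simulation adds material permittivities, 3D geometry, periodic lateral boundaries, and absorbing boundaries.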

  3. Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krause, E.; et al.

    2017-06-28

    We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\\Delta \\chi^2 \\le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and galaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.

  4. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than the pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique in the perspective of solution accuracy and computational cost.
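The replica-averaging idea reduces noise without enlarging the MD region. In the toy model below the "MD measurement" is a true mean velocity buried in Gaussian thermal noise; the numbers are illustrative, and the point is only that the standard error shrinks roughly as 1/sqrt(R) with R replicas:

```python
import random

random.seed(1)
TRUE_V, SIGMA = 0.05, 0.5   # a "moderate" mean flow velocity buried in noise

def measure(n_samples):
    """One replica: time-average of n_samples noisy instantaneous velocities."""
    return sum(random.gauss(TRUE_V, SIGMA) for _ in range(n_samples)) / n_samples

def replica_average(n_replicas, n_samples=400):
    """Average independent replicas; std error ~ SIGMA/sqrt(R * n_samples)."""
    return sum(measure(n_samples) for _ in range(n_replicas)) / n_replicas

single = measure(400)          # std error ~ 0.025, half the signal itself
averaged = replica_average(16) # std error ~ 0.006, well below the signal
```

The replicas are independent simulations, so they parallelize trivially, which is the practical appeal over simply running one longer MD trajectory.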

  5. USGS Methodology for Assessing Continuous Petroleum Resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
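The probabilistic structure (input distributions in, output distribution out, via Monte Carlo) can be sketched as follows. The distributions, units, and parameter values are invented placeholders, not the USGS input forms, and for simplicity each trial uses one EUR draw to represent the per-well average:

```python
import random

# Toy Monte Carlo resource assessment: total gas = (untested cells) x
# (fraction productive) x (average EUR per well), all drawn from input
# distributions; percentiles of the output give the F95/F50/F5 estimates.
random.seed(42)

def simulate_total(n_trials=20000):
    totals = []
    for _ in range(n_trials):
        n_cells = random.randint(400, 600)          # untested cells
        success = random.uniform(0.6, 0.9)          # fraction productive
        eur_bcf = random.lognormvariate(0.0, 0.6)   # avg EUR per well (median 1 BCF)
        totals.append(n_cells * success * eur_bcf)
    return sorted(totals)

totals = simulate_total()
# Resource convention: F95 is the low estimate exceeded with 95% probability
f95, f50, f5 = (totals[int(len(totals) * q)] for q in (0.05, 0.50, 0.95))
```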

  6. Adaptive LES Methodology for Turbulent Flow Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost effective accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost for the DNS of three dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three dimensional DNS scales as Re^{9/4} due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic
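The Re^{9/4} scaling implies a steep cost growth, which a one-line worked example makes concrete:

```python
# Relative DNS cost under the cost ~ Re^(9/4) scaling quoted above.
def dns_cost_ratio(re_high, re_low):
    return (re_high / re_low) ** 2.25

# Raising the Reynolds number tenfold multiplies the cost by 10^2.25 ~ 178x
ratio = dns_cost_ratio(1e5, 1e4)
```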

  7. A methodological study of environmental simulation in architecture and engineering. Integrating daylight and thermal performance across the urban and building scales

    DEFF Research Database (Denmark)

    Sattrup, Peter Andreas; Strømann-Andersen, Jakob Bjørn

    2011-01-01

    This study presents a methodological and conceptual framework that allows for the integration and creation of knowledge across professional borders in the field of environmental simulation. The framework has been developed on the basis of interviews with leading international practitioners, key...... in pointing out the need for improving metrics, software and not least the performance of the built environment itself....

  8. Response surface methodological approach for the decolorization of simulated dye effluent using Aspergillus fumigatus fresenius.

    Science.gov (United States)

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-01-30

    The aim of our research was to study the effects of temperature, pH and initial dye concentration on the decolorization of the diazo dye Acid Red 151 (AR 151) from simulated dye solution using a fungal isolate, Aspergillus fumigatus fresenius. A central composite design matrix and response surface methodology (RSM) were applied to design the experiments and to evaluate the interactive effects of the three most important operating variables: temperature (25-35 degrees C), pH (4.0-7.0), and initial dye concentration (100-200 mg/L) on the biodegradation of AR 151. A total of 20 experiments were conducted in the present study towards the construction of a quadratic model. A very high regression coefficient between the variables and the response (R(2)=0.9934) indicated excellent evaluation of the experimental data by the second-order polynomial regression model. The RSM indicated that an initial dye concentration of 150 mg/L, pH 5.5 and a temperature of 30 degrees C were optimal for maximum decolorization of AR 151 in simulated dye solution, and 84.8% decolorization of AR 151 was observed under optimum growth conditions.
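The RSM step behind such a study is a least-squares fit of a second-order polynomial to the designed runs. The sketch below uses synthetic data generated from a known surface (not the paper's measurements) in coded factor units, and shows the fit recovering the coefficients:

```python
import numpy as np

# Fit a quadratic response surface
#   z = b0 + b1*x + b2*y + b11*x^2 + b22*y^2 + b12*x*y
# to (coded) factor data, as in a central-composite-design RSM analysis.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20)   # coded temperature
y = rng.uniform(-1, 1, 20)   # coded pH
z = 80 + 5 * x - 3 * y - 6 * x**2 - 4 * y**2 + 2 * x * y  # synthetic response (%)

A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
r2 = 1 - np.sum((A @ coef - z) ** 2) / np.sum((z - z.mean()) ** 2)
```

With noise-free synthetic data the fit is exact; with real runs, an R² near the paper's 0.9934 indicates the quadratic model captures the response well, and the stationary point of the fitted surface gives the optimum conditions.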

  9. WE-H-BRC-04: Implement Lean Methodology to Make Our Current Process of CT Simulation to Treatment More Efficient

    Energy Technology Data Exchange (ETDEWEB)

    Boddu, S; Morrow, A; Krishnamurthy, N; McVicker, A; Deb, N; Rangaraj, D [Scott & White Hospital, Temple, TX (United States)

    2016-06-15

    Purpose: Our goal is to implement lean methodology to make our current process of CT simulation to treatment more efficient. Methods: In this study, we implemented lean methodology and tools and employed flowcharts in Excel for process mapping. We formed a group of physicians, physicists, dosimetrists, therapists and a clinical physics assistant and huddled bi-weekly to map current value streams. We performed GEMBA walks and observed current processes from the scheduling of patient CT simulations to treatment plan approval. From this, the entire workflow was categorized into processes, sub-processes, and tasks. For each process we gathered data on touch time, first-time quality, undesirable effects (UDEs), and wait times from relevant members of each task. UDEs were binned by the frequency of their occurrence. We huddled to map the future state and to find solutions to high-frequency UDEs. We implemented visual controls and hard stops, and documented issues found during chart checks prior to treatment plan approval. Results: We identified approximately 64 UDEs in our current workflow that could cause delays or re-work, compromise the quality and safety of patient treatments, or cause wait times between 1 and 6 days. While some UDEs are unavoidable, such as re-planning due to patient weight loss, eliminating avoidable UDEs is our goal. In 2015, we found 399 issues with patient treatment plans, of which 261, 95 and 43 were of low, medium and high severity, respectively. We also mapped patient-specific QA processes for IMRT/RapidArc and SRS/SBRT, involving 10 and 18 steps, respectively. From these, 13 UDEs were found and 5 were addressed, which solved 20% of the issues. Conclusion: We have successfully implemented lean methodology and tools. We are further mapping treatment-site-specific workflows to identify bottlenecks, potential breakdowns and personnel allocation, and we employ tools like failure mode and effects analysis to mitigate risk factors to make this process efficient.

  10. WE-H-BRC-04: Implement Lean Methodology to Make Our Current Process of CT Simulation to Treatment More Efficient

    International Nuclear Information System (INIS)

    Boddu, S; Morrow, A; Krishnamurthy, N; McVicker, A; Deb, N; Rangaraj, D

    2016-01-01

    Purpose: Our goal is to implement lean methodology to make our current process of CT simulation to treatment more efficient. Methods: In this study, we implemented lean methodology and tools and employed flowcharts in Excel for process mapping. We formed a group of physicians, physicists, dosimetrists, therapists and a clinical physics assistant and huddled bi-weekly to map current value streams. We performed GEMBA walks and observed current processes from the scheduling of patient CT simulations to treatment plan approval. From this, the entire workflow was categorized into processes, sub-processes, and tasks. For each process we gathered data on touch time, first-time quality, undesirable effects (UDEs), and wait times from relevant members of each task. UDEs were binned by the frequency of their occurrence. We huddled to map the future state and to find solutions to high-frequency UDEs. We implemented visual controls and hard stops, and documented issues found during chart checks prior to treatment plan approval. Results: We identified approximately 64 UDEs in our current workflow that could cause delays or re-work, compromise the quality and safety of patient treatments, or cause wait times between 1 and 6 days. While some UDEs are unavoidable, such as re-planning due to patient weight loss, eliminating avoidable UDEs is our goal. In 2015, we found 399 issues with patient treatment plans, of which 261, 95 and 43 were of low, medium and high severity, respectively. We also mapped patient-specific QA processes for IMRT/RapidArc and SRS/SBRT, involving 10 and 18 steps, respectively. From these, 13 UDEs were found and 5 were addressed, which solved 20% of the issues. Conclusion: We have successfully implemented lean methodology and tools. We are further mapping treatment-site-specific workflows to identify bottlenecks, potential breakdowns and personnel allocation, and we employ tools like failure mode and effects analysis to mitigate risk factors to make this process efficient.

  11. OPTICAL MONITORING OF THE BROAD-LINE RADIO GALAXY 3C 390.3

    International Nuclear Information System (INIS)

    Dietrich, Matthias; Peterson, Bradley M.; Grier, Catherine J.; Bentz, Misty C.; Eastman, Jason; Frank, Stephan; Gonzalez, Raymond; Marshall, Jennifer L.; DePoy, Darren L.; Prieto, Jose L.

    2012-01-01

    We have undertaken a new ground-based monitoring campaign on the broad-line radio galaxy 3C 390.3 to improve the measurement of the size of the broad emission-line region and to estimate the black hole mass. Optical spectra and g-band images were observed in late 2005 for three months using the 2.4 m telescope at MDM Observatory. Integrated emission-line flux variations were measured for the hydrogen Balmer lines Hα, Hβ, Hγ, and for the helium line He II λ4686, as well as g-band fluxes and the optical active galactic nucleus (AGN) continuum at λ = 5100 Å. The g-band fluxes and the optical AGN continuum vary simultaneously within the uncertainties, τ_cent = (0.2 ± 1.1) days. We find that the emission-line variations are delayed with respect to the variable g-band continuum by τ(Hα) = 56.3 (+2.4/-6.6) days, τ(Hβ) = 44.3 (+3.0/-3.3) days, τ(Hγ) = 58.1 (+4.3/-6.1) days, and τ(He II λ4686) = 22.3 (+6.5/-3.8) days. The blue and red peaks in the double-peaked line profiles, as well as the blue and red outer profile wings, vary simultaneously within ±3 days. This provides strong support for gravitationally bound orbital motion of the dominant part of the line-emitting gas. Combining the time delay of the strong Balmer emission lines Hα and Hβ and the separation of the blue and red peaks in the broad double-peaked profiles of their rms spectra, we determine M_bh^vir = 1.77 (+0.29/-0.31) × 10^8 M_⊙, and using σ_line of the rms spectra, M_bh^vir = 2.60 (+0.23/-0.31) × 10^8 M_⊙ for the central black hole of 3C 390.3, respectively. Using the inclination angle of the line-emitting region, which is measured from superluminal motion detected in the radio range, accretion disk models to fit the optical double-peaked emission-line profiles, and X-ray observations, the mass of the black hole amounts to M_bh = 0.86 (+0.19/-0.18) × 10^9 M_⊙ (peak separation) and M_bh = 1.26 (+0.21/-0.16) × 10^9 M_⊙ (σ_line), respectively. This result is consistent with the black
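The virial mass estimate used here has the form M = f · R · v² / G, with R = c·τ from the reverberation lag. A worked sketch follows; the velocity width and virial factor f are hypothetical placeholders (the excerpt does not quote them), so the result only illustrates the order of magnitude of the method:

```python
# Virial black-hole mass sketch: M = f * (c * tau) * v^2 / G (cgs units).
C = 2.998e10      # speed of light, cm/s
G = 6.674e-8      # gravitational constant, cgs
M_SUN = 1.989e33  # solar mass, g
DAY = 86400.0

def virial_mass(tau_days, v_kms, f=5.5):
    """Black-hole mass in solar masses from lag tau and velocity width v."""
    r = C * tau_days * DAY   # BLR radius implied by the light-travel lag
    v = v_kms * 1e5
    return f * r * v * v / G / M_SUN

# Measured Hbeta lag of 44.3 days with an assumed 3000 km/s velocity width
m = virial_mass(44.3, 3000.0)
```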

  12. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeli...

  13. Methodology for functional MRI of simulated driving.

    Science.gov (United States)

    Kan, Karen; Schweizer, Tom A; Tam, Fred; Graham, Simon J

    2013-01-01

    The developed world faces major socioeconomic and medical challenges associated with motor vehicle accidents caused by risky driving. Functional magnetic resonance imaging (fMRI) of individuals using virtual reality driving simulators may provide an important research tool to assess driving safety, based on brain activity and behavior. An fMRI-compatible driving simulator was developed and evaluated in the context of straight driving, turning, and stopping in 16 young healthy adults. Robust maps of brain activity were obtained, including activation of the primary motor cortex, cerebellum, visual cortex, and parietal lobe, with limited head motion, suggesting that fMRI of simulated driving is a feasible undertaking.

  14. Development of sodium droplet combustion analysis methodology using direct numerical simulation in 3-dimensional coordinate (COMET)

    International Nuclear Information System (INIS)

    Okano, Yasushi; Ohira, Hiroaki

    1998-08-01

    In the early stage of a sodium leak event in a liquid metal fast breeder reactor (LMFBR), liquid sodium flows out from a piping, and ignition and combustion of liquid sodium droplets might occur under certain environmental conditions. Compressible forced air flow, diffusion of chemical species, liquid sodium droplet behavior, chemical reactions and thermodynamic properties should be evaluated, with consideration of the physical dependence and numerical connection among them, for analyzing the combustion of a sodium liquid droplet. A direct numerical simulation code was developed for the numerical analysis of a sodium liquid droplet in forced convection air flow. The code is named COMET, 'Sodium Droplet COmbustion Analysis METhodology using Direct Numerical Simulation in 3-Dimensional Coordinate'. The extended MAC method was used to calculate compressible forced air flow. Counter diffusion among chemical species is also calculated. Transport models of mass and energy between the droplet and the surrounding atmospheric air were developed. Equation-solving methods were used for computing the multiphase equilibrium between sodium and air. Thermodynamic properties of chemical species were evaluated using the kinetic theory of gases. Combustion of a single spherical liquid sodium droplet in a forced-convection, constant-velocity, uniform air flow was numerically simulated using COMET. The change of droplet diameter with time closely agreed with the d²-law of droplet combustion theory. Spatial distributions of combustion rate and heat generation, and the formation, decomposition and movement of chemical species were analyzed. Quantitative calculation of heat generation and chemical species formation in spray combustion is enabled for various kinds of environmental conditions by simulating liquid sodium droplet combustion using COMET. (author)
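The d²-law referenced above states that the droplet's squared diameter shrinks linearly in time, d(t)² = d₀² − K·t, so the lifetime is t_b = d₀²/K. A sketch with an illustrative burn-rate constant (not a sodium-specific value):

```python
# d^2-law of droplet combustion: the squared diameter decreases linearly.
def diameter(t, d0=1.0e-3, K=1.0e-6):
    """Droplet diameter (m) at time t (s); K is the burn-rate constant, m^2/s."""
    d2 = d0 * d0 - K * t
    return d2 ** 0.5 if d2 > 0 else 0.0

# Lifetime t_b = d0^2 / K: a 1 mm droplet with K = 1e-6 m^2/s burns out in 1 s
burnout = (1.0e-3) ** 2 / 1.0e-6
```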

  15. Probabilistic methodology for turbine missile risk analysis

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.; Frank, R.A.

    1984-01-01

    A methodology has been developed for estimation of the probabilities of turbine-generated missile damage to nuclear power plant structures and systems. Mathematical models of the missile generation, transport, and impact events have been developed and sequenced to form an integrated turbine missile simulation methodology. Probabilistic Monte Carlo techniques are used to estimate the plant impact and damage probabilities. The methodology has been coded in the TURMIS computer code to facilitate numerical analysis and plant-specific turbine missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and probabilities have been estimated for a hypothetical nuclear power plant case study. (orig.)
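The sequenced Monte Carlo structure (generation, transport, impact) can be sketched as a chain of sampled events. The event probabilities below are made-up placeholders, not values from TURMIS or any plant assessment:

```python
import random

# Toy Monte Carlo event sequence for a missile-damage probability estimate:
# each trial samples generation, strike (transport) and damage in turn.
random.seed(7)
P_GENERATION = 1e-2   # a missile is generated in the reference period
P_STRIKE     = 5e-2   # its trajectory intersects a critical target
P_DAMAGE     = 2e-1   # the strike actually damages the target

def estimate(n_trials=200000):
    hits = sum(
        1
        for _ in range(n_trials)
        if random.random() < P_GENERATION
        and random.random() < P_STRIKE
        and random.random() < P_DAMAGE
    )
    return hits / n_trials

p = estimate()   # expected value: 1e-2 * 5e-2 * 2e-1 = 1e-4
```

Real analyses replace the constant probabilities with physical models of missile ejection angle, trajectory, and structural response, which is where Monte Carlo sampling earns its keep over the simple product of probabilities.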

  16. Simplified methodology for analysis of Angra-1 containment

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1988-01-01

    A simplified methodology of analysis was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The obtained data were compared with the Angra 1 Final Safety Analysis Report, and also with those calculated by a detailed model. The results obtained by this new methodology, such as the small computational time of the simulation, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author)

  17. Use of the AHP methodology in system dynamics: Modelling and simulation for health technology assessments to determine the correct prosthesis choice for hernia diseases.

    Science.gov (United States)

    Improta, Giovanni; Russo, Mario Alessandro; Triassi, Maria; Converso, Giuseppe; Murino, Teresa; Santillo, Liberatina Carmela

    2018-05-01

    Health technology assessments (HTAs) are often difficult to conduct because the decision procedures of the HTA algorithm are often complex and not easy to apply. Thus, their use is not always convenient or possible for the assessment of technical requests requiring a multidisciplinary approach. This paper aims to address this issue through a multi-criteria analysis focusing on the analytic hierarchy process (AHP). This methodology allows the decision maker to analyse and evaluate different alternatives and monitor their impact on different actors during the decision-making process. However, the multi-criteria analysis is implemented through a simulation model to overcome the limitations of the AHP methodology. Simulations help decision makers to make an appropriate decision and avoid unnecessary and costly attempts. Finally, a decision problem regarding the evaluation of two health technologies, namely two biological prostheses for infected incisional hernias, will be analysed to assess the effectiveness of the model. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
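The core AHP step can be shown concretely: criterion weights are derived from a pairwise comparison matrix via its principal eigenvector, and a consistency ratio checks the judgments. The 3×3 comparison values below are illustrative, not taken from the paper:

```python
import numpy as np

# AHP sketch: pairwise comparisons (Saaty 1-9 scale) -> priority weights.
A = np.array([[1.0,   3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # priority weights, summing to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
cr = ci / 0.58                          # random index RI = 0.58 for n = 3
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments; otherwise the comparisons are revisited before the weights feed the downstream simulation model.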

  18. A 3D Hybrid Integration Methodology for Terabit Transceivers

    DEFF Research Database (Denmark)

    Dong, Yunfeng; Johansen, Tom Keinicke; Zhurbenko, Vitaliy

    2015-01-01

    This paper presents a three-dimensional (3D) hybrid integration methodology for terabit transceivers. The simulation methodology for multi-conductor structures is explained. The effect of ground vias on the RF circuitry and the preferred interposer substrate material for large-bandwidth 3D hybrid integration are described. An equivalent circuit model of the via-throughs connecting the RF circuitry to the modulator is proposed and its lumped element parameters are extracted. Wire bonding transitions between the driving and RF circuitry were designed and simulated. An optimized 3D interposer design...

  19. Methodology for Analysis, Modeling and Simulation of Airport Gate-waiting Delays

    Science.gov (United States)

    Wang, Jianfeng

    This dissertation presents methodologies to estimate gate-waiting delays from historical data, to identify gate-waiting-delay functional causes in major U.S. airports, and to evaluate the impact of gate operation disruptions and mitigation strategies on gate-waiting delay. Airport gates are a source of congestion in the air transportation system. When an arriving flight cannot pull into its gate, the delay it experiences is called gate-waiting delay. Some possible reasons for gate-waiting delay are: the gate is occupied, gate staff or equipment is unavailable, the weather prevents the use of the gate (e.g. lightning), or the airline has a preferred gate assignment. Gate-waiting delays potentially stay with the aircraft throughout the day (unless they are absorbed), adding costs to passengers and the airlines. As the volume of flights increases, ensuring that airport gates do not become a choke point of the system is critical. The first part of the dissertation presents a methodology for estimating gate-waiting delays based on historical, publicly available sources. Analysis of gate-waiting delays at major U.S. airports in the summer of 2007 identifies the following. (i) Gate-waiting delay is not a significant problem on the majority of days; however, the worst delay days (e.g. 4% of the days at LGA) are extreme outliers. (ii) The Atlanta International Airport (ATL), the John F. Kennedy International Airport (JFK), the Dallas/Fort Worth International Airport (DFW) and the Philadelphia International Airport (PHL) experience the highest gate-waiting delays among major U.S. airports. (iii) There is a significant gate-waiting-delay difference between airlines due to a disproportionate gate allocation. (iv) Gate-waiting delay is sensitive to time of day and schedule peaks. 
According to basic principles of queueing theory, gate-waiting delay can be attributed to over-scheduling, higher-than-scheduled arrival rate, longer-than-scheduled gate-occupancy time, and reduced gate
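    The queueing-theory attribution above can be illustrated with a minimal multi-gate queue simulation. This is a sketch with assumed headways and gate-occupancy times, not the dissertation's model: when the scheduled arrival rate approaches gate capacity, mean gate-waiting delay rises sharply.

```python
import heapq
import random

def simulate_gate_waiting(n_flights, n_gates, mean_headway, occupancy, seed=1):
    """Toy single-airport queue: flights arrive with exponential headways and
    hold a gate for a fixed occupancy time; returns per-flight waits (minutes)."""
    rng = random.Random(seed)
    t = 0.0
    gate_free = [0.0] * n_gates   # earliest time each gate becomes free
    heapq.heapify(gate_free)
    delays = []
    for _ in range(n_flights):
        t += rng.expovariate(1.0 / mean_headway)   # next arrival
        free_at = heapq.heappop(gate_free)
        start = max(t, free_at)                    # wait if all gates are busy
        delays.append(start - t)
        heapq.heappush(gate_free, start + occupancy)
    return delays

# Pushing the arrival rate toward gate capacity drives gate-waiting delay up:
light = simulate_gate_waiting(2000, 10, mean_headway=6.0, occupancy=45.0)
heavy = simulate_gate_waiting(2000, 10, mean_headway=4.8, occupancy=45.0)
```

    With ten gates and 45-minute occupancies, shortening the mean headway from 6 to 4.8 minutes raises utilization from 0.75 to about 0.94, and the simulated waits grow accordingly.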

  20. Multilevel Methodology for Simulation of Spatio-Temporal Systems with Heterogeneous Activity; Application to Spread of Valley Fever Fungus

    Science.gov (United States)

    Jammalamadaka, Rajanikanth

    2009-01-01

    This report consists of a dissertation submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate College, The University of Arizona, 2008. Spatio-temporal systems with heterogeneity in their structure and behavior have two major problems associated with them. The first one is that such complex real world systems extend over very large spatial and temporal domains and consume so many computational resources to simulate that they are infeasible to study with current computational platforms. The second one is that the data available for understanding such systems is limited because they are spread over space and time making it hard to obtain micro and macro measurements. This also makes it difficult to get the data for validation of their constituent processes while simultaneously considering their global behavior. For example, the valley fever fungus considered in this dissertation is spread over a large spatial grid in the arid Southwest and typically needs to be simulated over several decades of time to obtain useful information. It is also hard to get the temperature and moisture data (which are two critical factors on which the survival of the valley fever fungus depends) at every grid point of the spatial domain over the region of study. In order to address the first problem, we develop a method based on the discrete event system specification which exploits the heterogeneity in the activity of the spatio-temporal system and which has been shown to be effective in solving relatively simple partial differential equation systems. The benefit of addressing the first problem is that it now makes it feasible to address the second problem. We address the second problem by making use of a multilevel methodology based on modeling and simulation and systems theory. 
This methodology helps us in the construction of models with different resolutions (base and
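    The activity-exploiting idea can be shown in miniature. The sketch below is an illustration of updating only "active" cells of a grid, not the dissertation's DEVS implementation: a cell is recomputed only when it was recently perturbed, and a cell whose value changes re-activates its neighbours, so quiescent regions cost nothing.

```python
def step_active(values, active, alpha=0.25, threshold=1e-6):
    """One update pass over a 1-D diffusion grid: only cells in `active` are
    recomputed; a cell whose change exceeds `threshold` re-activates its
    neighbours (the heterogeneous-activity idea, in miniature)."""
    new_active = set()
    updates = {}
    n = len(values)
    for i in active:
        left = values[i - 1] if i > 0 else values[i]
        right = values[i + 1] if i < n - 1 else values[i]
        new = values[i] + alpha * (left - 2 * values[i] + right)
        if abs(new - values[i]) > threshold:
            updates[i] = new
            for j in (i - 1, i, i + 1):
                if 0 <= j < n:
                    new_active.add(j)
    for i, v in updates.items():
        values[i] = v
    return new_active

# A single pulse: the active set tracks the spreading front, and cells far
# from it are never touched.
grid = [0.0] * 100
grid[50] = 1.0
active = {50}
for _ in range(10):
    active = step_active(grid, active)
```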

  1. Comparing a simple methodology to evaluate hydrodynamic parameters with rainfall simulation experiments

    Science.gov (United States)

    Di Prima, Simone; Bagarello, Vincenzo; Bautista, Inmaculada; Burguet, Maria; Cerdà, Artemi; Iovino, Massimo; Prosdocimi, Massimo

    2016-04-01

    Studying soil hydraulic properties is necessary for interpreting and simulating many hydrological processes of environmental and economic importance, such as the partitioning of rainfall into infiltration and runoff. The saturated hydraulic conductivity, Ks, exerts a dominating influence on the partitioning of rainfall into vertical and lateral flow paths. Therefore, estimates of Ks are essential for describing and modeling hydrological processes (Zimmermann et al., 2013). According to several investigations, Ks data collected by ponded infiltration tests can be expected to be unusable for interpreting field hydrological processes, and particularly infiltration. In fact, infiltration measured by ponding gives us information about the soil's maximum or potential infiltration rate (Cerdà, 1996). Moreover, especially for the hydrodynamic parameters, many replicated measurements have to be carried out to characterize an area of interest, since these parameters are known to vary widely both in space and time (Logsdon and Jaynes, 1996; Prieksat et al., 1994). Therefore, the technique to be applied at the near-point scale should be simple and rapid. Bagarello et al. (2014) and Alagna et al. (2015) suggested that the Ks values determined by an infiltration experiment carried out by applying water at a relatively large distance from the soil surface could be more appropriate than those obtained with a low height of water pouring for explaining surface runoff generation phenomena during intense rainfall events. These authors used the Beerkan Estimation of Soil Transfer parameters (BEST) procedure for complete soil hydraulic characterization (Lassabatère et al., 2006) to analyze the field infiltration experiment. This methodology, combining low and high heights of water pouring, seems appropriate for testing the effect of intense and prolonged rainfall events on the hydraulic characteristics of the surface soil layer. In fact, an intense and prolonged rainfall event has a perturbing effect on the soil surface
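    As a toy illustration of extracting hydrodynamic parameters from a transient infiltration run, the sketch below fits Philip's two-term cumulative infiltration equation, I(t) = S√t + A·t, by linear least squares. This is a simplification of (not a substitute for) the BEST algorithm cited above, and the units are assumed for illustration.

```python
import math

def fit_philip(t, I):
    """Least-squares fit of Philip's two-term cumulative infiltration
    I(t) = S*sqrt(t) + A*t (S: sorptivity; A: steady-state term related to Ks).
    Solves the 2x2 normal equations by hand; t in hours, I in mm (assumed)."""
    s11 = sum(ti for ti in t)            # sum of (sqrt(t))^2 = t
    s12 = sum(ti ** 1.5 for ti in t)     # sum of sqrt(t)*t
    s22 = sum(ti ** 2 for ti in t)
    b1 = sum(math.sqrt(ti) * Ii for ti, Ii in zip(t, I))
    b2 = sum(ti * Ii for ti, Ii in zip(t, I))
    det = s11 * s22 - s12 ** 2
    S = (b1 * s22 - b2 * s12) / det
    A = (s11 * b2 - s12 * b1) / det
    return S, A

# Synthetic check: data generated with S = 20 mm/h^0.5 and A = 5 mm/h
# should be recovered exactly.
t = [0.1 * k for k in range(1, 20)]
I = [20 * math.sqrt(ti) + 5 * ti for ti in t]
S, A = fit_philip(t, I)
```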

  2. Partial dust obscuration in active galactic nuclei as a cause of broad-line profile and lag variability, and apparent accretion disc inhomogeneities

    Science.gov (United States)

    Gaskell, C. Martin; Harrington, Peter Z.

    2018-04-01

    The profiles of the broad emission lines of active galactic nuclei (AGNs) and the time delays in their response to changes in the ionizing continuum ("lags") give information about the structure and kinematics of the inner regions of AGNs. Line profiles are also our main way of estimating the masses of the supermassive black holes (SMBHs). However, the profiles often show ill-understood, asymmetric structure and velocity-dependent lags vary with time. Here we show that partial obscuration of the broad-line region (BLR) by outflowing, compact, dusty clumps produces asymmetries and velocity-dependent lags similar to those observed. Our model explains previously inexplicable changes in the ratios of the hydrogen lines with time and velocity, the lack of correlation of changes in line profiles with variability of the central engine, the velocity dependence of lags, and the change of lags with time. We propose that changes on timescales longer than the light-crossing time do not come from dynamical changes in the BLR, but are a natural result of the effect of outflowing dusty clumps driven by radiation pressure acting on the dust. The motion of these clumps offers an explanation of long-term changes in polarization. The effects of the dust complicate the study of the structure and kinematics of the BLR and the search for sub-parsec SMBH binaries. Partial obscuration of the accretion disc can also provide the local fluctuations in luminosity that can explain sizes deduced from microlensing.

  3. OPTICAL MONITORING OF THE BROAD-LINE RADIO GALAXY 3C 390.3

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, Matthias; Peterson, Bradley M.; Grier, Catherine J.; Bentz, Misty C.; Eastman, Jason; Frank, Stephan; Gonzalez, Raymond; Marshall, Jennifer L.; DePoy, Darren L.; Prieto, Jose L., E-mail: dietrich@astronomy.ohio-state.edu [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)

    2012-09-20

    We have undertaken a new ground-based monitoring campaign on the broad-line radio galaxy 3C 390.3 to improve the measurement of the size of the broad emission-line region and to estimate the black hole mass. Optical spectra and g-band images were observed in late 2005 for three months using the 2.4 m telescope at MDM Observatory. Integrated emission-line flux variations were measured for the hydrogen Balmer lines Hα, Hβ, Hγ, and for the helium line He II λ4686, as well as g-band fluxes and the optical active galactic nucleus (AGN) continuum at λ = 5100 Å. The g-band fluxes and the optical AGN continuum vary simultaneously within the uncertainties, τ_cent = (0.2 ± 1.1) days. We find that the emission-line variations are delayed with respect to the variable g-band continuum by τ(Hα) = 56.3 (+2.4/-6.6) days, τ(Hβ) = 44.3 (+3.0/-3.3) days, τ(Hγ) = 58.1 (+4.3/-6.1) days, and τ(He II 4686) = 22.3 (+6.5/-3.8) days. The blue and red peaks in the double-peaked line profiles, as well as the blue and red outer profile wings, vary simultaneously within ±3 days. This provides strong support for gravitationally bound orbital motion of the dominant part of the line-emitting gas. Combining the time delays of the strong Balmer emission lines Hα and Hβ with the separation of the blue and red peaks in the broad double-peaked profiles of their rms spectra, we determine M_bh^vir = 1.77 (+0.29/-0.31) × 10^8 M_sun, and using σ_line of the rms spectra, M_bh^vir = 2.60 (+0.23/-0.31) × 10^8 M_sun for the central black hole of 3C 390.3, respectively.
Using the inclination angle of the line-emitting region which is measured from superluminal motion detected in the radio range, accretion disk models to fit the optical double-peaked emission-line profiles, and X-ray observations
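    The mass scale quoted above can be checked with the generic reverberation-mapping virial relation M = f·cτ·v²/G. The sketch below uses f = 1 and an illustrative 5000 km/s line width, which are assumptions for demonstration rather than the paper's measured prescription.

```python
# Illustrative virial mass estimate from a reverberation lag and a line width.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg
DAY = 86400.0        # s

def virial_mass(tau_days, v_kms, f=1.0):
    """Black hole mass in solar masses from lag tau (days) and velocity (km/s)."""
    r = C * tau_days * DAY               # emission-region radius in metres
    return f * r * (v_kms * 1e3) ** 2 / G / M_SUN

# With the Hbeta lag of ~44 days and an assumed 5000 km/s width, the mass
# lands on the same 10^8 M_sun scale as reported for 3C 390.3:
m = virial_mass(44.3, 5000.0)
```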

  4. Turbine trip transient analysis in Peach Bottom NPP with TRAC-BF1 code and Simtab-1D methodology

    International Nuclear Information System (INIS)

    Barrachina, T.; Miro, R.; Verdu, G.; Collazo, I.; Gonzalez, P.; Concejal, A.; Ortego, P.; Melara, J.

    2010-01-01

    In TRAC-BF1, nuclear cross-sections are specified in the input deck as a polynomial expansion. Therefore, it is necessary to obtain the coefficients of this polynomial function. One of the methods proposed in the literature is the KINPAR methodology. This methodology uses the results from different perturbations of the original state to obtain the coefficients of the polynomial expansion. The simulations are performed using the SIMULATE3 code. In this work, a new methodology to obtain the cross-section sets in 1D is presented. The first step consists of the application of the SIMTAB methodology, developed at UPV, to obtain the 3D cross-section sets from CASMO4/SIMULATE3. These 3D cross-section sets are collapsed to 1D, using as weighting factors the 3D thermal and fast neutron fluxes obtained from SIMULATE3. The 1D cross-sections obtained are in the same format as the 3D sets; hence, it has been necessary to modify the TRAC-BF1 code in order to be able to read and interpolate between these tabulated 1D cross-sections. With this new methodology it is not necessary to perform simulations of different perturbations of the original state, and the variation range of the moderator density can also be larger than with the former KINPAR methodology. This is important for simulating severe accidents, in which the variables vary over a wide range. This new methodology is applied to the simulation of the turbine trip transient benchmark at Peach Bottom NPP using the TRAC-BF1 code. The results of the transient simulation in TRAC-BF1 using the KINPAR methodology and the new methodology, SIMTAB-1D, are compared. (author)
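    The flux-weighted collapse step can be sketched as follows. The nesting ([z][x][y] nodal arrays) is an assumed layout for illustration, and in practice the collapse is done per energy group: σ_1D(z) = Σ_xy σ(x,y,z)·φ(x,y,z) / Σ_xy φ(x,y,z).

```python
def collapse_to_1d(sigma_3d, flux_3d):
    """Flux-weighted collapse of nodal 3-D cross-sections to a 1-D axial set.
    sigma_3d and flux_3d are nested lists indexed [z][x][y] (layout assumed)."""
    sigma_1d = []
    for sig_layer, phi_layer in zip(sigma_3d, flux_3d):
        num = sum(s * p for srow, prow in zip(sig_layer, phi_layer)
                        for s, p in zip(srow, prow))
        den = sum(p for prow in phi_layer for p in prow)
        sigma_1d.append(num / den)   # flux-weighted mean of the axial layer
    return sigma_1d

# Two axial layers, 2x2 radial nodes each; each collapsed value is the
# flux-weighted mean of its layer:
sigma = [[[1.0, 2.0], [1.0, 2.0]], [[3.0, 3.0], [3.0, 3.0]]]
phi   = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 1.0], [1.0, 1.0]]]
s1d = collapse_to_1d(sigma, phi)
```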

  5. CONSTRAINTS ON BLACK HOLE GROWTH, QUASAR LIFETIMES, AND EDDINGTON RATIO DISTRIBUTIONS FROM THE SDSS BROAD-LINE QUASAR BLACK HOLE MASS FUNCTION

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Hernquist, Lars; Siemiginowska, Aneta; Vestergaard, Marianne; Fan Xiaohui; Hopkins, Philip

    2010-01-01

    We present an estimate of the black hole mass function of broad-line quasars (BLQSOs) that self-consistently corrects for incompleteness and the statistical uncertainty in the mass estimates, based on a sample of 9886 quasars at 1 < z < 4.5 drawn from the Sloan Digital Sky Survey. We find that the sample is highly incomplete at M_BH ∼< 10^9 M_sun and at low L/L_Edd. We estimate a broad-line phase lifetime of t_BL > 150 ± 15 Myr for black holes at z = 1 with a mass of M_BH = 10^9 M_sun, and we constrain the maximum mass of a black hole in a BLQSO to be ∼3 × 10^10 M_sun. Our estimated distribution of BLQSO Eddington ratios peaks at L/L_Edd ∼ 0.05 and has a dispersion of ∼0.4 dex, implying that most BLQSOs are not radiating at or near the Eddington limit; however, the location of the peak is subject to considerable uncertainty. The steep increase in number density of BLQSOs toward lower Eddington ratios is expected if the BLQSO accretion rate monotonically decays with time. Furthermore, our estimated lifetime and Eddington ratio distributions imply that the majority of the most massive black holes spend a significant amount of time growing in an earlier obscured phase, a conclusion which is independent of the unknown obscured fraction. These results are consistent with models for self-regulated black hole growth, at least for massive systems at z > 1, where the BLQSO phase occurs at the end of a fueling event when black hole feedback unbinds the accreting gas, halting the accretion flow.
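    The Eddington-ratio scale used throughout can be reproduced with the standard formula L_Edd ≈ 1.26 × 10^38 (M/M_sun) erg/s (hydrogen opacity); the bolometric luminosity below is chosen purely for illustration.

```python
# Eddington luminosity and Eddington ratio for a given black hole mass.
def eddington_luminosity(m_bh_msun):
    """L_Edd in erg/s for a mass in solar masses (standard hydrogen-opacity value)."""
    return 1.26e38 * m_bh_msun

def eddington_ratio(l_bol, m_bh_msun):
    """L/L_Edd for a bolometric luminosity in erg/s."""
    return l_bol / eddington_luminosity(m_bh_msun)

# A 10^9 M_sun black hole radiating at an illustrative L = 6.3e45 erg/s sits
# at the L/L_Edd ~ 0.05 peak of the estimated BLQSO distribution:
ratio = eddington_ratio(6.3e45, 1e9)
```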

  6. THE DEMOGRAPHICS OF BROAD-LINE QUASARS IN THE MASS-LUMINOSITY PLANE. II. BLACK HOLE MASS AND EDDINGTON RATIO FUNCTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States)

    2013-02-10

    We employ a flexible Bayesian technique to estimate the black hole (BH) mass and Eddington ratio functions for Type 1 (i.e., broad-line) quasars from a uniformly selected data set of ≈58,000 quasars from the Sloan Digital Sky Survey (SDSS) DR7. We find that the SDSS becomes significantly incomplete at M_BH ∼< 3 × 10^8 M_sun or L/L_Edd ∼< 0.07, and that the number densities of Type 1 quasars continue to increase down to these limits. Both the mass and Eddington ratio functions show evidence of downsizing, with the most massive and highest Eddington ratio BHs experiencing Type 1 quasar phases first, although the Eddington ratio number densities are flat at z < 2. We estimate the maximum Eddington ratio of Type 1 quasars in the observable universe to be L/L_Edd ≈ 3. Consistent with our results in Shen and Kelly, we do not find statistical evidence for a so-called sub-Eddington boundary in the mass-luminosity plane of broad-line quasars, and demonstrate that such an apparent boundary in the observed distribution can be caused by selection effects and errors in virial BH mass estimates. Based on the typical Eddington ratio in a given mass bin, we estimate growth times for the BHs in Type 1 quasars and find that they are comparable to or longer than the age of the universe, implying an earlier phase of accelerated (i.e., with higher Eddington ratios) and possibly obscured growth. The large masses probed by our sample imply that most of our BHs reside in what are locally early-type galaxies, and we interpret our results within the context of models of self-regulated BH growth.

  7. A generic semi-implicit coupling methodology for use in RELAP5-3D©

    International Nuclear Information System (INIS)

    Aumiller, D.L.; Tomlinson, E.T.; Weaver, W.L.

    2000-01-01

    A generic semi-implicit coupling methodology has been developed and implemented in the RELAP5-3D© computer program. This methodology allows RELAP5-3D© to be used with other computer programs to perform integrated analyses of nuclear power reactor systems and related experimental facilities. The coupling methodology potentially allows different programs to be used to model different portions of the system. The programs are chosen based on their capability to model the phenomena that are important in the various portions of the system being considered. The methodology was demonstrated using a test case in which the test geometry was divided into two parts, each of which was solved as a RELAP5-3D© simulation. This test problem exercised all of the semi-implicit coupling features installed in RELAP5-3D©. The results of this verification test case show that the semi-implicit coupling methodology produces the same answer as a simulation of the test system as a single process.
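    The semi-implicit coupling idea can be sketched with a toy two-domain heat-exchange problem (this is an illustration of the splitting concept, not RELAP5-3D itself): each "code" advances its own equation implicitly, taking the interface value sent by the other code at the start of the step, and the split solution tracks a monolithic implicit solve.

```python
# Split system: dT1/dt = k*(T2 - T1), dT2/dt = k*(T1 - T2).
def coupled_step(T1, T2, k, dt):
    """Backward Euler within each domain, old neighbour value at the interface."""
    T1_new = (T1 + dt * k * T2) / (1 + dt * k)
    T2_new = (T2 + dt * k * T1) / (1 + dt * k)
    return T1_new, T2_new

def monolithic_step(T1, T2, k, dt):
    """The same system solved as a single implicit 2x2 problem, for comparison."""
    a = 1 + dt * k
    b = -dt * k
    det = a * a - b * b
    return (a * T1 - b * T2) / det, (a * T2 - b * T1) / det

T1c, T2c = 1.0, 0.0   # coupled (split) solution
T1m, T2m = 1.0, 0.0   # monolithic reference
for _ in range(200):
    T1c, T2c = coupled_step(T1c, T2c, k=1.0, dt=0.01)
    T1m, T2m = monolithic_step(T1m, T2m, k=1.0, dt=0.01)
```

    The split scheme conserves the total (T1 + T2) exactly and stays within O(dt) of the monolithic answer, which is the behaviour the verification test case above checks at system scale.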

  8. A trend analysis methodology for enhanced validation of 3-D LWR core simulations

    International Nuclear Information System (INIS)

    Wieselquist, William; Ferroukhi, Hakim; Bernatowicz, Kinga

    2011-01-01

    This paper presents an approach that is being developed and implemented at PSI to enhance the Verification and Validation (V and V) procedure of 3-D static core simulations for the Swiss LWR reactors. The principle is to study in greater details the deviations between calculations and measurements and to assess on that basis if distinct trends of the accuracy can be observed. The presence of such trends could then be a useful indicator of eventual limitations/weaknesses in the applied lattice/core analysis methodology and could thereby serve as guidance for method/model enhancements. Such a trend analysis is illustrated here for a Swiss PWR core model using as basis, the state-of-the-art industrial CASMO/SIMULATE codes. The accuracy of the core-follow models to reproduce the periodic in-core neutron flux measurements is studied for a total of 21 operating cycles. The error is analyzed with respect to different physics parameters with a ranking of the individual assemblies/nodes contribution to the total RMS error and trends are analyzed by performing partial correlation analysis. The highest errors appear at the core axial peripheries (top/bottom nodes) where a mean C/E-1 error of 10% is observed for the top nodes and -5% for the bottom nodes and the maximum C/E-1 error reaches almost 20%. Partial correlation analysis shows significant correlation of error to distance from core mid-plane and only less significant correlations to other variables. Overall, it appears that the primary areas that could benefit from further method/modeling improvements are: axial reflectors, MOX treatment and control rod cusping. (author)
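    The partial-correlation screening described above can be sketched with plain least-squares residual correlation. The synthetic data below are illustrative, not the PSI measurement set: an error driven by one variable (say, distance from core mid-plane) appears strongly correlated with any covariate that also tracks that variable, until the variable is controlled for.

```python
import math
import random

def _residuals(y, x):
    """Residuals of a simple least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def _corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x) * sum((yi - my) ** 2 for yi in y)
    return cov / math.sqrt(var)

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z from both."""
    return _corr(_residuals(x, z), _residuals(y, z))

rng = random.Random(0)
z = [float(i) for i in range(200)]              # controlling variable
x = [2.0 * zi + rng.gauss(0, 1) for zi in z]    # covariate driven by z
e = [0.5 * zi + rng.gauss(0, 1) for zi in z]    # "C/E error" driven by z
plain = _corr(x, e)             # inflated by the shared z trend
partial = partial_corr(x, e, z)  # collapses once z is controlled for
```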

  9. A New Simulation Technique for Study of Collisionless Shocks: Self-Adaptive Simulations

    International Nuclear Information System (INIS)

    Karimabadi, H.; Omelchenko, Y.; Driscoll, J.; Krauss-Varban, D.; Fujimoto, R.; Perumalla, K.

    2005-01-01

    The traditional technique for simulating physical systems modeled by partial differential equations is by means of time-stepping methodology where the state of the system is updated at regular discrete time intervals. This method has inherent inefficiencies. In contrast to this methodology, we have developed a new asynchronous type of simulation based on a discrete-event-driven (as opposed to time-driven) approach, where the simulation state is updated on a 'need-to-be-done-only' basis. Here we report on this new technique, show an example of particle acceleration in a fast magnetosonic shockwave, and briefly discuss additional issues that we are addressing concerning algorithm development and parallel execution
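    A minimal discrete-event kernel illustrates the "need-to-be-done-only" update (this is a generic sketch, not the authors' parallel framework): state is touched only when an event is due, instead of at every fixed time step.

```python
import heapq

class EventSim:
    """Tiny discrete-event scheduler: a priority queue of (time, action)."""
    def __init__(self):
        self.queue = []     # heap of (time, seq, action)
        self.now = 0.0
        self._seq = 0       # tie-breaker so actions are never compared
    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self._seq, action))
        self._seq += 1
    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self.queue)
            action()

# A particle that "wakes up" at self-scheduled times; between events,
# nothing is computed at all:
sim = EventSim()
log = []
def hop():
    log.append(sim.now)
    if sim.now < 10:
        sim.schedule(1.5, hop)   # next update only when needed
sim.schedule(0.0, hop)
sim.run(until=100)
```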

  10. Probabilistic Simulation of Multi-Scale Composite Behavior

    Science.gov (United States)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process, and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite redome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the redome reliability by several orders of magnitude without increasing the laminate thickness, a unique feature of structural composites. The age of the original reference indicates that nothing fundamental has been done in this area since that time.
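    The propagation of constituent scatter to a laminate property can be sketched with a plain Monte Carlo loop over a rule-of-mixtures model. This is an illustration of the comparison baseline mentioned above, not PICAN/IPACS, and the distributions and coefficients of variation are assumed.

```python
import random
import statistics

# Longitudinal ply modulus by rule of mixtures: E1 = Vf*Ef + (1 - Vf)*Em.
def sample_ply_modulus(rng):
    Ef = rng.gauss(230.0, 230.0 * 0.05)   # fiber modulus, GPa (assumed 5% CV)
    Em = rng.gauss(3.5, 3.5 * 0.05)       # matrix modulus, GPa (assumed 5% CV)
    Vf = rng.gauss(0.60, 0.02)            # fiber volume fraction (assumed)
    return Vf * Ef + (1.0 - Vf) * Em

rng = random.Random(42)
samples = [sample_ply_modulus(rng) for _ in range(5000)]
mean_E = statistics.mean(samples)
cv_E = statistics.stdev(samples) / mean_E   # scatter inherited from constituents
```

    The resulting distribution of E1 (mean near 139 GPa here) is the kind of output whose tails a faster probabilistic method must reproduce to be validated against Monte Carlo.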

  11. Advanced methodology to simulate boiling water reactor transient using coupled thermal-hydraulic/neutron-kinetic codes

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Christoph Oliver

    2016-06-13

    -sets) predicted by SCALE6/TRITON and CASMO. Thereby the coupled TRACE/PARCS simulations reproduced the single fuel assembly depletion and stand-alone PARCS results. A turbine trip event, which occurred at a BWR plant of type 72, has been investigated in detail using the cross-section libraries generated with SCALE/TRITON and CASMO. The evolution of the integral BWR parameters predicted by the coupled codes using cross-sections from SCALE/TRITON is very close to the global trends calculated using CASMO cross-sections. Further, to enable uncertainty quantification, the PARCS reactor dynamics code was extended with an uncertainty module to facilitate the consideration of the uncertainty of neutron kinetics parameters in coupled TRACE/PARCS simulations. For a postulated pressure perturbation, an uncertainty and sensitivity study was performed using TRACE/PARCS and SUSA. The obtained results illustrate the capability of such methodologies, which are still under development. Based on this analysis, the uncertainty bands for key parameters, e.g. reactivity, as well as the importance ranking of reactor kinetics parameters, could be predicted and identified for this accident scenario.

  12. Development and new applications of quantum chemical simulation methodology

    International Nuclear Information System (INIS)

    Weiss, A. K. H.

    2012-01-01

    The Division of Theoretical Chemistry at the University of Innsbruck is focused on the study of chemical compounds in aqueous solution, in terms of mainly hybrid quantum mechanical / molecular mechanical molecular dynamics simulations (QM/MM MD). Besides the standard means of data analysis employed for such simulations, this study presents several advanced and capable algorithms for the description of structural and dynamic properties of the simulated species and its hydration. The first part of this thesis further presents selected exemplary simulations, in particular a comparative study of Formamide and N-methylformamide, Guanidinium, and Urea. An included review article further summarizes the major advances of these studies. The computer programs developed in the course of this thesis are by now well established in the research field. The second part of this study presents the theory and a development guide for a quantum chemical program, QuMuLuS, that is by now used as a QM program for recent QM/MM simulations at the division. In its course, this part presents newly developed algorithms for electron integral evaluation and point charge embedding. This program is validated in terms of benchmark computations. The associated theory is presented on a detailed level, to serve as a source for contemporary and future studies in the division. In the third and final part, further investigations of related topics are addressed. This covers additional schemes of molecular simulation analysis, new software, as well as a mathematical investigation of a non-standard two-electron integral. (author)

  13. Development of radiation risk assessment simulator using system dynamics methodology

    International Nuclear Information System (INIS)

    Kang, Kyung Min; Jae, Moosung

    2008-01-01

    The potential magnitudes of radionuclide releases under severe accident loadings and offsite consequences, as well as the overall risk (the product of accident frequencies and consequences), are analyzed and evaluated quantitatively in this study. The system dynamics methodology has been applied to predict time-dependent behaviors such as feedback and dependency, as well as to model the uncertain behavior of a complex physical system. It is used to construct and evaluate the transfer mechanisms of time-dependent radioactivity concentration. Dynamic variations of radioactivity are simulated by considering several effects such as deposition, weathering, washout, re-suspension, root uptake, translocation, leaching, senescence, intake, and excretion in soil. A time-dependent radio-ecological model applicable to the specific Korean environment has been developed in order to assess the radiological consequences following the short-term deposition of radionuclides during severe accidents at a nuclear power plant. An ingestion food chain model can estimate time-dependent radioactivity concentrations in foodstuffs. It is also shown that the system dynamics approach is useful for analyzing the phenomena of a complex system as well as the behavior of its variables with respect to time. The output of this model (Bq ingested per Bq m^-2 deposited) may be multiplied by the deposition and a dose conversion factor (Gy Bq^-1) to yield organ-specific doses. The model may be run deterministically to yield a single estimate, or stochastically by Monte Carlo calculation to yield distributions that reflect parameter and model uncertainties. The results of this study may contribute to identifying the relative importance of the various parameters involved in consequence analysis, as well as to assessing risk reduction effects in accident management. (author)
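    The stock-and-flow structure typical of such system-dynamics models can be sketched with two coupled balance equations integrated by Euler stepping. The rates below are hypothetical placeholders, not the paper's calibrated Korean-environment parameters.

```python
# Deposited activity transfers soil -> plant (root uptake), plant -> soil
# (weathering), and both stocks decay radioactively.
def run_transfer(deposit_bq_m2, days, dt=0.1,
                 k_uptake=0.01, k_weathering=0.05, lam=0.008):
    """Euler integration of a two-compartment soil/plant activity model;
    all rate constants are per day and purely illustrative."""
    soil, plant = deposit_bq_m2, 0.0
    for _ in range(int(round(days / dt))):
        uptake = k_uptake * soil           # root uptake flow
        weathering = k_weathering * plant  # loss from plant back to soil
        d_soil = -uptake + weathering - lam * soil
        d_plant = uptake - weathering - lam * plant
        soil += dt * d_soil
        plant += dt * d_plant
    return soil, plant

soil, plant = run_transfer(1000.0, days=100)
```

    With these rates the total activity decays as exp(-λt) while the soil/plant split relaxes toward the ratio k_weathering : k_uptake, which is the kind of time-dependent concentration the ingestion model feeds on.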

  14. Demonstration of an infiltration evaluation methodology

    International Nuclear Information System (INIS)

    Smyth, J.D.; Gee, G.W.; Kincaid, C.T.; Nichols, W.M.; Bresler, E.

    1990-07-01

    An Infiltration Evaluation Methodology (IEM) was developed for the US Nuclear Regulatory Commission (NRC) by Pacific Northwest Laboratory (PNL) to provide a consistent, well formulated approach for evaluating drainage through engineered covers at low-level radioactive waste (LLW) sites. The methodology is designed to help evaluate the ability of proposed waste site covers to minimize drainage for LLW site license applications and for sites associated with the Uranium Mill Tailings Remedial Action (UMTRA) program. The objective of this methodology is to estimate the drainage through an engineered burial site cover system. The drainage estimate can be used as an input to a broader performance assessment methodology currently under development by the NRC. The methodology is designed to simulate, at the field scale, significant factors and hydrologic conditions which determine or influence estimates of infiltration, long-term moisture content profiles, and drainage from engineered covers and barriers. The IEM developed under this study acknowledges the uncertainty inherent in soil properties and quantifies the influence of such uncertainty on the estimates of drainage in engineered cover systems at waste disposal sites. 6 refs., 1 fig

  15. ESD full chip simulation: HBM and CDM requirements and simulation approach

    Directory of Open Access Journals (Sweden)

    E. Franell

    2008-05-01

    Verification of ESD safety at the full-chip level is a major challenge for IC design. Phenomena originating in the overall product setup, in particular, pose a hurdle on the way to ESD-safe products. For stress according to the Charged Device Model (CDM), a stumbling block for simulation-based analysis is the complex current distribution among a huge number of internal nodes, which leads to hardly predictable voltage drops inside the circuits.

    This paper describes a methodology for Human Body Model (HBM) simulations with improved ESD-failure coverage, and a novel methodology that replaces capacitive nodes within a resistive network by current sources for CDM simulation. This enables a highly efficient DC simulation that clearly marks CDM-relevant design weaknesses, allowing the software to be applied both during product development and for product verification.
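    The capacitor-to-current-source substitution reduces the CDM problem to one DC nodal solve. The sketch below works a hypothetical two-node net by hand (the topology and values are invented for illustration, not taken from the paper).

```python
# Nodal analysis G*v = i for a two-node resistive net; the discharging
# capacitive node is modelled as a DC current source injection.
def solve_two_node(g11, g12, g22, i1, i2):
    """Solve the symmetric 2x2 system [[g11, g12], [g12, g22]] @ [v1, v2] = [i1, i2]."""
    det = g11 * g22 - g12 * g12
    v1 = (i1 * g22 - i2 * g12) / det
    v2 = (g11 * i2 - g12 * i1) / det
    return v1, v2

# Node 1 -- R12 = 10 ohm -- node 2; node 2 grounded through R2g = 1 ohm.
# The charged capacitive node 1 becomes a 1 A source:
G12, G2g = 1 / 10.0, 1 / 1.0
v1, v2 = solve_two_node(G12, -G12, G12 + G2g, 1.0, 0.0)  # v1 = 11 V, v2 = 1 V
```

    Scaling the same idea to a full chip simply means building the conductance matrix for all internal nodes and solving it once per discharge scenario.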

  16. A study on methodologies of software development for HEP

    International Nuclear Information System (INIS)

    Ding Yuzheng; Dai Guiliang

    1999-01-01

    The HEP-related software system is a large one. It comprises mainly detector simulation software, DAQ software, and the offline system. The author discusses the advantages of applying object-oriented (OO) methodologies to such a software system, and gives the basic strategy for the use of OO methodologies, languages, and tools in the development of HEP-related software.

  17. Development of risk assessment methodology against natural external hazards for sodium-cooled fast reactors: project overview and strong Wind PRA methodology - 15031

    International Nuclear Information System (INIS)

    Yamano, H.; Nishino, H.; Kurisaka, K.; Okano, Y.; Sakai, T.; Yamamoto, T.; Ishizuka, Y.; Geshi, N.; Furukawa, R.; Nanayama, F.; Takata, T.; Azuma, E.

    2015-01-01

    This paper mainly describes the development of a strong wind probabilistic risk assessment (PRA) methodology, in addition to giving a project overview. In this project, to date, PRA methodologies against snow, tornado, and strong wind have been developed, as well as the corresponding hazard evaluation methodologies. For the volcanic eruption hazard, an ash fallout simulation was carried out to contribute to the development of the hazard evaluation methodology. For the forest fire hazard, the concept of the hazard evaluation methodology was developed based on fire simulation. An event sequence assessment methodology was also developed, based on plant dynamics analysis coupled with a continuous Markov chain Monte Carlo method, in order to apply it to the event sequence against snow. In developing the strong wind PRA methodology, hazard curves were estimated using Weibull and Gumbel distributions based on weather data recorded in Japan. The obtained hazard curves were divided into five discrete categories for event tree quantification. Next, failure probabilities for decay-heat-removal-related components were calculated as the product of two probabilities: a probability for missiles to enter the intake or out-take of the decay heat removal system, and the fragility caused by the missile impacts. Finally, based on the event tree, the core damage frequency was estimated at about 6×10^-9/year by multiplying the discrete hazard probabilities in the Gumbel distribution by the conditional decay heat removal failure probabilities. A dominant sequence was led by the assumption that the operators could not extinguish a fuel tank fire caused by the missile impacts and that the fire induced loss of the decay heat removal system. (authors)
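    The hazard-times-fragility quantification can be sketched as follows. All numbers below (Gumbel parameters, wind-speed bins, conditional failure probabilities) are invented for illustration and are not the paper's data; the point is the structure: discretize the annual-maximum hazard curve into categories and weight each category's conditional failure probability.

```python
import math

def gumbel_exceedance(x, mu, beta):
    """P(annual-maximum wind speed > x) for a Gumbel(mu, beta) fit."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

bins = [(30, 40), (40, 50), (50, 60), (60, 70), (70, 200)]  # m/s categories
p_fail = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]  # conditional DHR failure per bin (assumed)
mu, beta = 25.0, 4.0                      # assumed Gumbel fit parameters

cdf = 0.0  # core damage frequency, per year
for (lo, hi), pf in zip(bins, p_fail):
    p_bin = gumbel_exceedance(lo, mu, beta) - gumbel_exceedance(hi, mu, beta)
    cdf += p_bin * pf
```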

  18. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  19. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created that can serve as a basis for a variety of experiments conducted within a short period of time, resulting in production process optimization.
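    The kind of experiment such a model enables can be sketched with a toy kanban loop: the number of cards caps work-in-process while the fill rate is measured, so the effect of the card count can be explored in seconds. All probabilities and timings below are invented, and this is a deliberately minimal stand-in for a full DES model.

```python
import random

random.seed(3)

def simulate(n_cards, p_demand=0.3, p_produce=0.4, horizon=50_000):
    """Toy single-loop kanban: the machine may only replenish while a
    free card authorizes it, so the buffer never exceeds n_cards.
    Returns (fill rate, average work-in-process)."""
    buffer, served, lost, wip_sum = n_cards, 0, 0, 0
    for _ in range(horizon):
        if random.random() < p_demand:      # customer demand arrives
            if buffer > 0:
                buffer -= 1                 # serve from the supermarket
                served += 1
            else:
                lost += 1                   # stockout
        if buffer < n_cards and random.random() < p_produce:
            buffer += 1                     # replenish one container
        wip_sum += buffer
    return served / (served + lost), wip_sum / horizon

fill_small, wip_small = simulate(2)
fill_big, wip_big = simulate(8)
```

More cards buy a higher fill rate at the price of higher inventory, which is exactly the trade-off a kanban DES study quantifies.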

  20. THE LICK AGN MONITORING PROJECT: BROAD-LINE REGION RADII AND BLACK HOLE MASSES FROM REVERBERATION MAPPING OF Hβ

    International Nuclear Information System (INIS)

    Bentz, Misty C.; Walsh, Jonelle L.; Barth, Aaron J.; Baliber, Nairn; Bennert, Vardha Nicola; Greene, Jenny E.; Hidas, Marton G.; Canalizo, Gabriela; Hiner, Kyle D.; Filippenko, Alexei V.; Ganeshalingam, Mohan; Lee, Nicholas; Li, Weidong; Serduke, Frank J. D.; Silverman, Jeffrey M.; Steele, Thea N.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Sakata, Yu

    2009-01-01

    We have recently completed a 64-night spectroscopic monitoring campaign at the Lick Observatory 3-m Shane telescope with the aim of measuring the masses of the black holes in 12 nearby (z < 0.05) Seyfert 1 galaxies with expected masses in the range 10^6-10^7 M sun, and also the well-studied nearby active galactic nucleus (AGN) NGC 5548. Nine of the objects in the sample (including NGC 5548) showed optical variability of sufficient strength during the monitoring campaign to allow for a time lag to be measured between the continuum fluctuations and the response to these fluctuations in the broad Hβ emission. We present here the light curves for all the objects in this sample and the subsequent Hβ time lags for the nine objects where these measurements were possible. The Hβ lag time is directly related to the size of the broad-line region (BLR) in AGNs, and by combining the Hβ lag time with the measured width of the Hβ emission line in the variable part of the spectrum, we determine the virial mass of the central supermassive black hole in these nine AGNs. The absolute calibration of the black hole masses is based on the normalization derived by Onken et al., which brings the masses determined by reverberation mapping into agreement with the local M_BH-σ_* relationship for quiescent galaxies. We also examine the time lag response as a function of velocity across the Hβ line profile for six of the AGNs. The analysis of four leads to rather ambiguous results with relatively flat time lags as a function of velocity. However, SBS 1116+583A exhibits a symmetric time lag response around the line center, reminiscent of simple models for circularly orbiting BLR clouds, and Arp 151 shows an asymmetric profile that is most easily explained by a simple gravitational infall model. Further investigation will be necessary to fully understand the constraints placed on the physical models of the BLR by the velocity-resolved response in these objects.
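    The virial mass estimate described above combines the Hβ lag (which sets the BLR radius) with the line width. A minimal sketch of that arithmetic, M_BH = f·c·τ·(ΔV)²/G, with illustrative numbers rather than the campaign's measured lags and widths:

```python
# Virial black hole mass from a reverberation lag and line width.
C = 2.998e10     # speed of light, cm/s
G = 6.674e-8     # gravitational constant, cgs
MSUN = 1.989e33  # solar mass, g
DAY = 86400.0    # seconds per day

def virial_mass(lag_days, sigma_kms, f=5.5):
    """Black hole mass in solar masses. f is the virial factor; 5.5
    follows the Onken et al. normalization cited in the abstract."""
    r = C * lag_days * DAY   # BLR radius = c * tau, cm
    v = sigma_kms * 1.0e5    # line dispersion, cm/s
    return f * r * v * v / G / MSUN

# Example: a 4-day lag and a 1500 km/s dispersion (hypothetical values)
mass = virial_mass(4.0, 1500.0)
```

With these inputs the mass lands near 10^7 M sun, i.e. in the range targeted by the campaign.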

  1. Methodology for the computational simulation of the components in photovoltaic systems; Desarrollo de herramientas para la prediccion del comportamiento de sistemas fotovoltaicos

    Energy Technology Data Exchange (ETDEWEB)

    Galimberti, P.; Arcuri, G.; Manno, R.; Fasulo, A. J.

    2004-07-01

    This work presents a methodology for the computational simulation of the components that comprise photovoltaic systems, in order to study the behavior of each component and its relevance to the operation of the whole system, which makes it possible to make decisions in the selection process of these components and their improvements. As a result of the simulation, files with values of different variables which characterize the behaviour of the components are obtained. Different kinds of plots can be drawn, which show the information in a summarized form. Finally, the results are discussed by making a comparison with actual data for the city of Rio Cuarto in Argentina (33.1 degrees south latitude) and some advantages of the proposed method are mentioned. (Author)

  2. Research in Modeling and Simulation for Airspace Systems Innovation

    Science.gov (United States)

    Ballin, Mark G.; Kimmel, William M.; Welch, Sharon S.

    2007-01-01

    This viewgraph presentation provides an overview of some of the applied research and simulation methodologies at the NASA Langley Research Center that support aerospace systems innovation. Risk assessment methodologies, complex systems design and analysis methodologies, and aerospace operations simulations are described. Potential areas for future research and collaboration using interactive and distributed simulations are also proposed.

  3. Comprehensive MRI simulation methodology using a dedicated MRI scanner in radiation oncology for external beam radiation treatment planning

    International Nuclear Information System (INIS)

    Paulson, Eric S.; Erickson, Beth; Schultz, Chris; Allen Li, X.

    2015-01-01

    Purpose: The use of magnetic resonance imaging (MRI) in radiation oncology is expanding rapidly, and more clinics are integrating MRI into their radiation therapy workflows. However, radiation therapy presents a new set of challenges and places additional constraints on MRI compared to diagnostic radiology that, if not properly addressed, can undermine the advantages MRI offers for radiation treatment planning (RTP). The authors introduce here strategies to manage several challenges of using MRI for virtual simulation in external beam RTP. Methods: A total of 810 clinical MRI simulation exams were performed using a dedicated MRI scanner for external beam RTP of brain, breast, cervix, head and neck, liver, pancreas, prostate, and sarcoma cancers. Patients were imaged in treatment position using MRI-optimal immobilization devices. Radiofrequency (RF) coil configurations and scan protocols were optimized based on RTP constraints. Off-resonance and gradient nonlinearity-induced geometric distortions were minimized or corrected prior to using images for RTP. A multidisciplinary MRI simulation guide, along with window width and level presets, was created to standardize use of MR images during RTP. A quality assurance program was implemented to maintain accuracy and repeatability of MRI simulation exams. Results: The combination of a large bore scanner, high field strength, and circumferentially wrapped, flexible phased array RF receive coils permitted acquisition of thin slice images with high contrast-to-noise ratio (CNR) and image intensity uniformity, while simultaneously accommodating patient setup and immobilization devices. Postprocessing corrections and alternative acquisition methods were required to reduce or correct off-resonance and gradient nonlinearity induced geometric distortions. Conclusions: The methodology described herein contains practical strategies the authors have implemented through lessons learned performing clinical MRI simulation exams. In

  4. Comprehensive MRI simulation methodology using a dedicated MRI scanner in radiation oncology for external beam radiation treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Eric S., E-mail: epaulson@mcw.edu [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 and Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States); Erickson, Beth; Schultz, Chris; Allen Li, X. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States)

    2015-01-15

    Purpose: The use of magnetic resonance imaging (MRI) in radiation oncology is expanding rapidly, and more clinics are integrating MRI into their radiation therapy workflows. However, radiation therapy presents a new set of challenges and places additional constraints on MRI compared to diagnostic radiology that, if not properly addressed, can undermine the advantages MRI offers for radiation treatment planning (RTP). The authors introduce here strategies to manage several challenges of using MRI for virtual simulation in external beam RTP. Methods: A total of 810 clinical MRI simulation exams were performed using a dedicated MRI scanner for external beam RTP of brain, breast, cervix, head and neck, liver, pancreas, prostate, and sarcoma cancers. Patients were imaged in treatment position using MRI-optimal immobilization devices. Radiofrequency (RF) coil configurations and scan protocols were optimized based on RTP constraints. Off-resonance and gradient nonlinearity-induced geometric distortions were minimized or corrected prior to using images for RTP. A multidisciplinary MRI simulation guide, along with window width and level presets, was created to standardize use of MR images during RTP. A quality assurance program was implemented to maintain accuracy and repeatability of MRI simulation exams. Results: The combination of a large bore scanner, high field strength, and circumferentially wrapped, flexible phased array RF receive coils permitted acquisition of thin slice images with high contrast-to-noise ratio (CNR) and image intensity uniformity, while simultaneously accommodating patient setup and immobilization devices. Postprocessing corrections and alternative acquisition methods were required to reduce or correct off-resonance and gradient nonlinearity induced geometric distortions. Conclusions: The methodology described herein contains practical strategies the authors have implemented through lessons learned performing clinical MRI simulation exams. In

  5. Methodology for transient simulation of a small heliothermic central station; Metodologia para simulacao transiente de uma pequena central heliotermica

    Energy Technology Data Exchange (ETDEWEB)

    Wendel, Marcelo

    2010-08-15

    The final steps of generating electricity from concentrated solar power technologies are similar to conventional thermal processes, since steam or gas is also employed for moving turbines or pistons. The fundamental difference lies on the fact that steam or hot gas is generated by solar radiation instead of fossil fuels or nuclear heat. The cheapest electricity generated from solar energy has been achieved with large-scale power stations based on this concept. Computer simulations represent a low-cost option for the design of thermal systems. The present study aims to develop a methodology for the transient simulation of a micro-scale solar-thermal power plant (120 kWe) which should be appropriate in terms of accuracy and computational effort. The facility considered can optionally operate as a cogeneration plant producing electric power as well as chilled water. Solar radiation is collected by parabolic troughs, electricity is generated by an organic Rankine cycle and chilled water is produced by an absorption cooling cycle. The organic Rankine cycle is of interest because it allows for a plant with relatively simple structure and automated operation. The simulation methodology proposed in this study is implemented in TRNSYS with new components (TYPEs) developed for the solar field and thermal cycles. The parabolic trough field component is based on an experimental efficiency curve of the solar collector. In the case of the Rankine and absorption cycles, the components are based on performance polynomials generated with EES from detailed thermodynamic models, which are calibrated with performance data from manufacturers. Distinct plant configurations are considered. An optimization algorithm is used for searching the best operating point in each case. Results are presented for the following Brazilian sites: Fortaleza, Petrolina and Bom Jesus da Lapa. The latter offers the highest global plant performance. 
An analysis about the influence of the thermal storage on
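    A collector component of the kind described reduces, in essence, to an empirical efficiency curve evaluated against the operating conditions. The quadratic form and coefficients below are generic placeholders, not the collector tested in this work:

```python
def trough_efficiency(t_in, t_amb, dni, eta0=0.70, a1=0.2, a2=0.002):
    """Empirical collector efficiency, eta = eta0 - (a1*dT + a2*dT^2)/G,
    clipped to [0, eta0]. t_in/t_amb in degC, dni (G) in W/m^2.
    Coefficients are illustrative placeholders."""
    if dni <= 0.0:
        return 0.0
    dt = t_in - t_amb
    eta = eta0 - (a1 * dt + a2 * dt * dt) / dni
    return max(0.0, min(eta0, eta))

def field_power_kw(aperture_m2, t_in, t_amb, dni):
    """Thermal power delivered by the solar field, kW."""
    return aperture_m2 * dni * trough_efficiency(t_in, t_amb, dni) / 1000.0

# Example: 1500 m^2 field, 300 degC inlet, 25 degC ambient, 850 W/m^2 DNI
p = field_power_kw(1500.0, 300.0, 25.0, 850.0)
```

A transient simulation such as the one described evaluates a curve like this at every time step against the weather data for each site.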

  6. Kinetic Monte Carlo simulations for transient thermal fields: Computational methodology and application to the submicrosecond laser processes in implanted silicon.

    Science.gov (United States)

    Fisicaro, G; Pelaz, L; Lopez, P; La Magna, A

    2012-09-01

    Pulsed laser irradiation of damaged solids promotes ultrafast nonequilibrium kinetics, on the submicrosecond scale, leading to microscopic modifications of the material state. Reliable theoretical predictions of this evolution can be achieved only by simulating particle interactions in the presence of large and transient gradients of the thermal field. We propose a kinetic Monte Carlo (KMC) method for the simulation of damaged systems in the extremely far-from-equilibrium conditions caused by laser irradiation. The reference systems are nonideal crystals containing point defect excesses, an order of magnitude larger than the equilibrium density, due to a preirradiation ion implantation process. The thermal problem, including eventual melting, is solved within the phase-field methodology, and the numerical solutions for the space- and time-dependent thermal field are then dynamically coupled to the KMC code. The formalism, implementation, and related tests of our computational code are discussed in detail. As an application example we analyze the evolution of the defect system caused by P ion implantation in Si under nanosecond pulsed irradiation. The simulation results suggest a significant annihilation of the implantation damage, which can be well controlled by the laser fluence.
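    The flavor of the coupling can be sketched with a deliberately minimal, discrete-time KMC: defect annihilation events occur with Arrhenius rates evaluated at the instantaneous temperature supplied by a transient thermal field (here a toy analytic pulse standing in for the phase-field solution). The activation energy, attempt frequency and pulse shape are invented for illustration.

```python
import math
import random

random.seed(1)
KB = 8.617e-5  # Boltzmann constant, eV/K

def temperature(t):
    """Toy transient thermal field: a nanosecond-scale laser pulse on
    top of room temperature (stand-in for the phase-field solution)."""
    return 300.0 + 1200.0 * math.exp(-((t - 50e-9) / 20e-9) ** 2)

def anneal(n_defects, e_act=1.5, nu=1e13, dt=0.5e-9, t_end=200e-9):
    """Discrete-time KMC sketch: in each interval dt, every surviving
    defect annihilates with probability 1 - exp(-rate*dt), where the
    Arrhenius rate uses the instantaneous temperature."""
    t = 0.0
    while t < t_end and n_defects > 0:
        rate = nu * math.exp(-e_act / (KB * temperature(t)))
        p = 1.0 - math.exp(-rate * dt)
        n_defects = sum(1 for _ in range(n_defects) if random.random() > p)
        t += dt
    return n_defects

remaining = anneal(1000)  # most defects annihilate during the pulse
```

Away from the pulse the rate is negligible, so essentially all annihilation is confined to the hot transient, mirroring the fluence control reported in the abstract.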

  7. Evaluation of methodologies for remunerating wind power's reliability in Colombia

    International Nuclear Information System (INIS)

    Botero B, Sergio; Isaza C, Felipe; Valencia, Adriana

    2010-01-01

    Colombia strives to have enough firm capacity available to meet unexpected power shortages and peak demand; this is clear from mechanisms currently in place that provide monetary incentives (on the order of US$14/MWh) to power producers that can guarantee electricity provision during scarcity periods. Yet wind power in Colombia cannot currently guarantee firm power, because an accepted methodology to calculate its potential firm capacity does not exist. In this paper we argue that developing such a methodology would provide an incentive for potential investors to enter into this low-carbon technology. This paper analyzes three methodologies currently used in energy markets around the world to calculate firm wind energy capacity: PJM, NYISO, and Spain. These methodologies were initially selected due to their ability to accommodate Colombian energy regulations. The objective of this work is to determine which of these methodologies makes the most sense from an investor's perspective, to ultimately shed light on developing a methodology to be used in Colombia. To this end, the authors developed an approach consisting of the elaboration of a wind model using Monte Carlo simulation, based on known wind behaviour statistics of a region with adequate wind potential in Colombia. The simulation returns random generation data representing the resource's inherent variability, simulating the historical data required to evaluate the mentioned methodologies and thus providing the technology's theoretical generation data. The document concludes that the evaluated methodologies are easy to implement and that they do not require historical data (important for Colombia, where there is almost no historical wind power data). It is also found that the Spanish methodology provides a higher Capacity Value (and therefore a higher return to investors). 
The financial assessment results show that it is crucial that these types of incentives exist to make viable
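    The wind-model step can be sketched as follows: sample Weibull-distributed wind speeds, push them through a simplified turbine power curve, and average to obtain a crude capacity value. The Weibull parameters and power curve below are illustrative, not Colombian measurements:

```python
import random

random.seed(0)

def weibull_speed(k=2.0, c=8.0):
    """Sample a wind speed (m/s) from a Weibull distribution with shape
    k and scale c (illustrative parameters, not measured data)."""
    return random.weibullvariate(c, k)

def turbine_power(v, rated_kw=2000.0, v_in=3.5, v_rated=12.0, v_out=25.0):
    """Simplified power curve: cubic ramp between cut-in and rated
    speed, rated output up to cut-out, zero elsewhere."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return rated_kw
    return rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3

# Monte Carlo estimate of expected output as a fraction of rated
# capacity -- a crude stand-in for the capacity value being evaluated.
n = 100_000
mean_kw = sum(turbine_power(weibull_speed()) for _ in range(n)) / n
capacity_value = mean_kw / 2000.0
```

The methodologies compared in the paper differ mainly in which hours this expectation is taken over and how the resulting value is credited.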

  8. A methodology for the parametric modelling of the flow coefficients and flow rate in hydraulic valves

    International Nuclear Information System (INIS)

    Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas

    2014-01-01

    Highlights: • We develop a methodology for the parametric modelling of flow in hydraulic valves. • We characterize the flow coefficients with a generic function with two parameters. • The parameters are derived from CFD simulations of the generic geometry. • We apply the methodology to two cases from the automotive brake industry. • We validate by comparing with CFD results varying the original dimensions. - Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve model facilitates its implementation in dynamic simulation models of complex hydraulic systems.
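    The abstract does not give the exact two-parameter square-root law, so the form below is an assumed example of that family, combined with the standard orifice equation. Because the Reynolds number depends on the flow itself, the flow rate is obtained by fixed-point iteration; all constants are illustrative.

```python
import math

def flow_coefficient(re, cd_inf=0.7, re_t=100.0):
    """Assumed two-parameter square-root law (the paper fits its own
    parameters from CFD): Cd(Re) = Cd_inf * sqrt(Re / (Re + Re_t)).
    Tends to Cd_inf at large Re, to sqrt(Re)-like behaviour at small Re."""
    return cd_inf * math.sqrt(re / (re + re_t))

def flow_rate(area_m2, dp_pa, rho=850.0, mu=0.01):
    """Flow through a restriction, Q = Cd(Re) * A * sqrt(2*dP/rho),
    with Re evaluated by fixed-point iteration."""
    d_h = math.sqrt(4.0 * area_m2 / math.pi)  # hydraulic diameter
    v = math.sqrt(2.0 * dp_pa / rho)          # ideal orifice velocity
    for _ in range(20):
        re = rho * v * d_h / mu
        v = flow_coefficient(re) * math.sqrt(2.0 * dp_pa / rho)
    return v * area_m2

q = flow_rate(1.0e-6, 5.0e5)  # 1 mm^2 restriction, 5 bar pressure drop
```

Once Cd(Re) is tabulated or fitted for a valve, a lumped model like this replaces the full CFD run inside a system simulation.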

  9. Application of the response surface methodology to the determination of the PCT in the simulation of a LOFT

    International Nuclear Information System (INIS)

    Alva N, J.; Ortiz V, J.; Amador G, R.

    2008-01-01

    This article summarizes the main characteristics of the response surface methodology (RSM) and its connections with linear regression analysis. It also gives an example of the application of RSM to the prediction of the peak cladding temperature (PCT) of a fuel assembly of a nuclear reactor, with data taken from the simulation of a LOFT (Loss Of Fluid Test) during an experts' course. The resulting prediction will be used as a first approximation of the behavior of the PCT, in order to reduce the computation time required by executions of best-estimate thermal-hydraulic codes. The present work is part of the theoretical basis of a project intended to outline an uncertainty analysis methodology for best-estimate codes used in the thermal-hydraulic and safety analysis of nuclear plants and reactors. The institutions participating in this project are ININ, CFE, IPN and CNSNS; the project is sponsored by the IAEA. (Author)
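    In its simplest first-order form, the RSM approach reduces to fitting a linear regression surface to a handful of code runs and then using it as a cheap surrogate for the PCT. The design points and temperatures below are invented for illustration, not the LOFT data:

```python
# First-order response surface: PCT ~ b0 + b1*x1 + b2*x2, fitted by
# least squares to a few (invented) code runs.
# x1 = normalized power peaking, x2 = normalized break size.
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
pct = [900.0, 1010.0, 960.0, 1080.0, 985.0]  # K, invented

rows = [[1.0, x1, x2] for (x1, x2) in xs]
# Normal equations (A^T A) b = A^T y, solved by Gauss-Jordan elimination.
ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
atb = [sum(r[i] * y for r, y in zip(rows, pct)) for i in range(3)]

def solve3(a, b):
    a = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))  # partial pivot
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [a[i][3] / a[i][i] for i in range(3)]

b0, b1, b2 = solve3(ata, atb)

def predict(x1, x2):
    """Cheap surrogate for the best-estimate code at a new input point."""
    return b0 + b1 * x1 + b2 * x2
```

Each evaluation of `predict` costs microseconds, versus hours for a best-estimate code run, which is precisely the point of the surrogate.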

  10. Simulation of the operational monitoring of a BWR with Simulate-3

    International Nuclear Information System (INIS)

    Jimenez F, J. O.; Martin del Campo M, C.; Fuentes M, L.; Francois L, J. L.

    2015-09-01

    This work describes a methodology for calculating the fuel burnup of nuclear power reactors throughout the duration of their operating cycles and for each fuel reload; in other words, for simulating and monitoring the main operating parameters sequentially along the operating cycles. For this particular case, the operational monitoring of five consecutive cycles of a reactor was carried out using the information reported by its process computer. The simulation was performed with the Simulate-3 software and the results were compared with those of the process computer. The goal is to obtain the fuel burnup, cycle after cycle, in order to establish the reactor state conditions needed for fuel reload analyses, stability studies and transient analyses, and to develop a methodology that makes it possible to manage and resolve similar cases for future fuel cycles of the nuclear power plant and to explore the various options offered by the simulator. (Author)

  11. Methodology to evaluate the crack growth rate by stress corrosion cracking in dissimilar metals weld in simulated environment of PWR nuclear reactor

    International Nuclear Information System (INIS)

    Paula, Raphael G.; Figueiredo, Celia A.; Rabelo, Emerson G.

    2013-01-01

    Inconel alloy weld metal is widely used to join dissimilar metals in nuclear reactor applications. Failures of welded components have recently been observed in plants, which has triggered an international effort to determine reliable data on the stress corrosion cracking behavior of this material in the reactor environment. The objective of this work is to develop a methodology to determine the crack growth rate caused by stress corrosion in Inconel alloy 182, using compact tensile (CT) specimens in a simulated PWR environment. (author)

  12. A generic semi-implicit coupling methodology for use in RELAP5-3D(c)

    International Nuclear Information System (INIS)

    Weaver, W.L.; Tomlinson, E.T.; Aumiller, D.L.

    2002-01-01

    A generic semi-implicit coupling methodology has been developed and implemented in the RELAP5-3D© computer program. This methodology allows RELAP5-3D© to be used with other computer programs to perform integrated analyses of nuclear power reactor systems and related experimental facilities. The coupling methodology potentially allows different programs to be used to model different portions of the system. The programs are chosen based on their capability to model the phenomena that are important in the simulation in the various portions of the system being considered, and may use different numbers of conservation equations to model fluid flow in their respective solution domains. The methodology was demonstrated using a test case in which the test geometry was divided into two parts, each of which was solved as a RELAP5-3D© simulation. This test problem exercised all of the semi-implicit coupling features that were implemented in RELAP5-3D©. The results of this verification test case show that the semi-implicit coupling methodology produces the same answer as the simulation of the test system as a single process.
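    The verification idea in the abstract (a split simulation reproducing the monolithic one) can be illustrated with a toy problem in place of RELAP5-3D: two subdomains advance an explicit 1-D heat-conduction solve while exchanging interface values every step, and the result is compared against a single-domain run. The solver and numbers are illustrative only, and the exchange here is explicit rather than truly semi-implicit.

```python
N, STEPS, R = 20, 200, 0.25  # cells, time steps, alpha*dt/dx^2 (stable)

def step(u, left, right):
    """One explicit diffusion update with prescribed ghost values."""
    full = [left] + u + [right]
    return [full[i] + R * (full[i - 1] - 2 * full[i] + full[i + 1])
            for i in range(1, len(full) - 1)]

init = [1.0] * (N // 2) + [0.0] * (N // 2)  # hot half, cold half

# Monolithic run (zero-gradient ends via reflected ghost values).
mono = init[:]
for _ in range(STEPS):
    mono = step(mono, mono[0], mono[-1])

# Coupled run: two subdomains exchanging interface values each step.
a, b = init[:N // 2], init[N // 2:]
for _ in range(STEPS):
    a_new = step(a, a[0], b[0])    # right ghost comes from domain b
    b_new = step(b, a[-1], b[-1])  # left ghost comes from domain a
    a, b = a_new, b_new

err = max(abs(x - y) for x, y in zip(mono, a + b))
```

Because both runs use identical interface values at each step, the split solution matches the monolithic one to round-off, which is the property the RELAP5-3D verification test checks.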

  13. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
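    A minimal version of the screening step (step 2) can be sketched by ranking parameters with a simple main-effect measure on a toy response. PSUADE's actual techniques are more sophisticated; the model and sample sizes here are invented.

```python
import random

random.seed(42)

def model(x1, x2, x3):
    """Toy simulation response: strongly driven by x1, weakly by x2,
    and completely insensitive to x3."""
    return 10.0 * x1 + 1.0 * x2 * x2 + 0.0 * x3

# Sample the input space and rank parameters by |correlation| with the
# output -- a crude main-effect screening measure.
n = 5000
samples = [[random.random() for _ in range(3)] for _ in range(n)]
ys = [model(*s) for s in samples]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

scores = [abs(corr([s[i] for s in samples], ys)) for i in range(3)]
ranking = sorted(range(3), key=lambda i: -scores[i])  # most sensitive first
```

The screening correctly flags x1 as dominant and x3 as negligible, so a subsequent quantitative analysis (step 3) could drop x3 entirely.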

  14. Stochastic techno-economic assessment based on Monte Carlo simulation and the Response Surface Methodology: The case of an innovative linear Fresnel CSP (concentrated solar power) system

    International Nuclear Information System (INIS)

    Bendato, Ilaria; Cassettari, Lucia; Mosca, Marco; Mosca, Roberto

    2016-01-01

    Combining technological solutions with investment profitability is a critical aspect in designing both traditional and innovative renewable power plants. Often, the introduction of new advanced-design solutions, although technically interesting, does not generate adequate revenue to justify their utilization. In this study, an innovative methodology is developed that aims to satisfy both targets. On the one hand, considering all of the feasible plant configurations, it allows the analysis of the investment in a stochastic regime using the Monte Carlo method. On the other hand, the impact of every technical solution on the economic performance indicators can be measured by using regression meta-models built according to the theory of Response Surface Methodology. This approach enables the design of a plant configuration that generates the best economic return over the entire life cycle of the plant. This paper illustrates an application of the proposed methodology to the evaluation of design solutions using an innovative linear Fresnel Concentrated Solar Power system. - Highlights: • A stochastic methodology for solar plants investment evaluation. • Study of the impact of new technologies on the investment results. • Application to an innovative linear Fresnel CSP system. • A particular application of Monte Carlo simulation and response surface methodology.
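    The stochastic side of the methodology amounts to propagating uncertain techno-economic inputs through a life-cycle NPV model by Monte Carlo. A sketch with invented triangular distributions, not the paper's Fresnel plant data:

```python
import random

random.seed(7)

def npv(capex, annual_energy_mwh, price, opex, rate=0.07, years=25):
    """Net present value over the plant life cycle (all EUR)."""
    cash = annual_energy_mwh * price - opex
    return -capex + sum(cash / (1.0 + rate) ** t for t in range(1, years + 1))

def sample_npv():
    """One Monte Carlo draw; all triangular bounds are invented."""
    capex = random.triangular(9e6, 12e6, 10e6)
    energy = random.triangular(18e3, 26e3, 22e3)   # MWh/year
    price = random.triangular(70.0, 130.0, 100.0)  # EUR/MWh
    opex = random.triangular(2e5, 4e5, 3e5)        # EUR/year
    return npv(capex, energy, price, opex)

runs = [sample_npv() for _ in range(20_000)]
mean_npv = sum(runs) / len(runs)
p_loss = sum(1 for r in runs if r < 0) / len(runs)  # risk of negative NPV
```

The paper's contribution is to wrap a response-surface meta-model around such runs, so the effect of each design choice on the NPV distribution can be read off without rerunning the whole simulation.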

  15. Validation of multi-body modelling methodology for reconfigurable underwater robots

    DEFF Research Database (Denmark)

    Nielsen, M.C.; Eidsvik, O. A.; Blanke, Mogens

    2016-01-01

    This paper investigates the problem of employing reconfigurable robots in an underwater setting. The main result presented is the experimental validation of a modelling methodology for a system consisting of N dynamically connected robots with heterogeneous dynamics. Two distinct types of experiments are performed: a series of hydrostatic free-decay tests and a series of open-loop trajectory tests. The results are compared to a simulation based on the modelling methodology. The modelling methodology shows promising results for usage with systems composed of reconfigurable underwater modules.

  16. Scenario aggregation and analysis via Mean-Shift Methodology

    International Nuclear Information System (INIS)

    Mandelli, D.; Yilmaz, A.; Metzroth, K.; Aldemir, T.; Denning, R.

    2010-01-01

    A new generation of dynamic methodologies is being developed for nuclear reactor probabilistic risk assessment (PRA) which explicitly account for the time element in modeling the probabilistic system evolution and use numerical simulation tools to account for possible dependencies between failure events. The dynamic event tree (DET) approach is one of these methodologies. One challenge with dynamic PRA methodologies is the large amount of data they produce which may be difficult to analyze without appropriate software tools. The concept of 'data mining' is well known in the computer science community and several methodologies have been developed in order to extract useful information from a dataset with a large number of records. Using the dataset generated by the DET analysis of the reactor vessel auxiliary cooling system (RVACS) of an ABR-1000 for an aircraft crash recovery scenario and the Mean-Shift Methodology for data mining, it is shown how clusters of transients with common characteristics can be identified and classified. (authors)
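    As an illustration of the clustering idea, here is a minimal one-dimensional mean-shift: each point is repeatedly shifted to the mean of its neighbours until it reaches a density mode, and points sharing a mode form a cluster. The data are two invented groups, not DET transients:

```python
# Two invented groups of scalar end-states (e.g. peak temperatures).
data = [1.0, 1.2, 0.8, 1.1, 0.9, 5.0, 5.3, 4.8, 5.1, 5.2]

def shift(x, points, bandwidth=1.0):
    """One mean-shift step: mean of the points inside the window."""
    window = [p for p in points if abs(p - x) <= bandwidth]
    return sum(window) / len(window)

def mode(x, points, tol=1e-6):
    """Iterate the shift until the point stops moving (a density mode)."""
    while True:
        nx = shift(x, points)
        if abs(nx - x) < tol:
            return nx
        x = nx

modes = [round(mode(p, data), 3) for p in data]
clusters = sorted(set(modes))  # one entry per discovered mode
```

Applied to DET output, each cluster would gather transients with a common qualitative evolution, which is exactly the aggregation the abstract describes (in many dimensions rather than one).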

  17. A methodology of neutronic-thermodynamics simulation for fast reactor

    International Nuclear Information System (INIS)

    Waintraub, M.

    1986-01-01

    Aiming at a general optimization of the design, controlled fuel depletion and fuel management, this paper develops a neutronic-thermodynamics simulator, SIRZ, which, besides being sufficiently precise, is also economic: it yields a 75% reduction in CPU time for a startup calculation when compared with the same calculation in the CITATION code. The simulation system, based on perturbation calculations applied to fast reactors, was tested and produces errors smaller than 1% in all components of the reference state given by the CITATION code. (author)

  18. Coupling methodology within the software platform alliances

    Energy Technology Data Exchange (ETDEWEB)

    Montarnal, Ph; Deville, E; Adam, E; Bengaouer, A [CEA Saclay, Dept. de Modelisation des Systemes et Structures 91 - Gif-sur-Yvette (France); Dimier, A; Gaombalet, J; Loth, L [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France); Chavant, C [Electricite de France (EDF), 92 - Clamart (France)

    2005-07-01

    CEA, ANDRA and EDF are jointly developing the software platform ALLIANCES, whose aim is to produce a tool for the simulation of nuclear waste storage and disposal repositories. This type of simulation deals with highly coupled thermo-hydro-mechanical and chemical (T-H-M-C) processes. A key objective of ALLIANCES is to provide the capability to develop coupling algorithms between existing codes. The aim of this paper is to present the coupling methodology used in the context of this software platform. (author)

  19. Coupling methodology within the software platform alliances

    International Nuclear Information System (INIS)

    Montarnal, Ph.; Deville, E.; Adam, E.; Bengaouer, A.; Dimier, A.; Gaombalet, J.; Loth, L.; Chavant, C.

    2005-01-01

    CEA, ANDRA and EDF are jointly developing the software platform ALLIANCES, whose aim is to produce a tool for the simulation of nuclear waste storage and disposal repositories. This type of simulation deals with highly coupled thermo-hydro-mechanical and chemical (T-H-M-C) processes. A key objective of ALLIANCES is to provide the capability to develop coupling algorithms between existing codes. The aim of this paper is to present the coupling methodology used in the context of this software platform. (author)

  20. Discrete simulation and related fields. Proceedings of the IMACS European Simulation Meeting

    Energy Technology Data Exchange (ETDEWEB)

    Javor, A

    1982-01-01

    The following topics were dealt with: philosophy and methodology; flexible microprocessor systems; aggregative simulation system; directed simulation systems; directed simulation experiments; closed loop feedback controlled simulation system; system architecture; complex systems modelling and analysis; SIMKOM; artificial intelligence; forecasting of commodity flows; microprogrammed simulation; education systems; linear transformations; Cyber-72 input output subsystems; hazard detection in MOS LSI circuits; and fault simulation for logic networks. 23 papers were presented, all of which are published in full in the present proceedings. Abstracts of individual papers can be found under the relevant classification in this or other issues.

  1. Pulse superimposition calculational methodology for estimating the subcriticality level of nuclear fuel assemblies

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Rabiti, C.; Aliberti, G.; Kondev, F.; Smith, D.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I.

    2009-01-01

    One of the most reliable experimental methods for measuring the subcriticality level of a nuclear fuel assembly is the Sjoestrand method, applied to the reaction rate generated by a pulsed neutron source. This study developed a new analytical methodology simulating the Sjoestrand method, which allows the experimental and analytical reaction rates, and the obtained subcriticality levels, to be compared. In this methodology, the reaction rate due to a single neutron pulse is calculated using the MCNP/MCNPX computer code, or any other neutron transport code that explicitly simulates the delayed fission neutrons. The calculation follows a single neutron pulse over a time period long enough for the delayed neutron contribution to the reaction rate to vanish. The obtained reaction rate is then superimposed on itself, shifted in time, to simulate repeated pulse operation until the asymptotic level of the reaction rate, set by the delayed neutrons, is reached. The superimposition of the pulse on itself was calculated by a simple C computer program. A parallel version of the program is used due to the large amount of data being processed, e.g. via the Message Passing Interface (MPI). The analytical results of this new calculation methodology show excellent agreement with the experimental data available from the YALINA-Booster facility of Belarus. This methodology can also be used to calculate the Bell and Glasstone spatial correction factor.
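    The superimposition step itself is simple to sketch: fold the single-pulse reaction-rate history onto one pulse period until the delayed-neutron floor has built up. The pulse response shape, time constants and repetition period below are invented, and Python stands in for the paper's C program:

```python
import math

DT = 1.0e-4       # time bin width, s
PERIOD_BINS = 100 # pulse repetition period = 10 ms
N_BINS = 500_000  # follow the single pulse until its tail has died out

def single_pulse(i):
    """Invented single-pulse reaction rate: a fast prompt decay plus a
    small, long-lived delayed-neutron tail (arbitrary units)."""
    t = i * DT
    return math.exp(-t / 2.0e-3) + 1.0e-4 * math.exp(-t / 10.0)

# Superimpose the response on itself, shifted by the pulse period,
# to obtain the asymptotic repeated-pulse shape over one period.
superposed = [0.0] * PERIOD_BINS
for i in range(N_BINS):
    superposed[i % PERIOD_BINS] += single_pulse(i)
```

The resulting per-period profile shows a prompt spike decaying onto a flat delayed-neutron floor; the Sjoestrand method then estimates -ρ/β_eff from the ratio of the prompt area to the delayed area.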

  2. Pulse superimposition calculational methodology for estimating the subcriticality level of nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: atalamo@anl.gov; Gohar, Y. [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Rabiti, C. [Idaho National Laboratory, P.O. Box 2528, Idaho Falls, ID 83403 (United States); Aliberti, G.; Kondev, F.; Smith, D.; Zhong, Z. [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I. [Joint Institute for Power and Nuclear Research-Sosny, National Academy of Sciences (Belarus)

    2009-07-21

    One of the most reliable experimental methods for measuring the subcriticality level of a nuclear fuel assembly is the Sjoestrand method applied to the reaction rate generated from a pulsed neutron source. This study developed a new analytical methodology simulating the Sjoestrand method, which allows comparing the experimental and analytical reaction rates and the obtained subcriticality levels. In this methodology, the reaction rate due to a single neutron pulse is calculated using the MCNP/MCNPX computer code or any other neutron transport code that explicitly simulates the delayed fission neutrons. The calculation simulates a single neutron pulse over a time period long enough for the delayed neutron contribution to the reaction rate to vanish. The obtained reaction rate is then superimposed on itself, shifted in time, to simulate repeated pulse operation until the asymptotic level of the reaction rate, set by the delayed neutrons, is achieved. The superposition of the pulse on itself was calculated by a simple C computer program; because of the large amount of data being processed, a parallel version of the program using the Message Passing Interface (MPI) is used. The analytical results of this new calculation methodology show excellent agreement with the experimental data available from the YALINA-Booster facility of Belarus. The methodology can also be used to calculate the Bell and Glasstone spatial correction factor.

  3. Fully Stochastic Distributed Methodology for Multivariate Flood Frequency Analysis

    Directory of Open Access Journals (Sweden)

    Isabel Flores-Montoya

    2016-05-01

    Full Text Available An adequate estimation of the extreme behavior of basin response is essential both for designing river structures and for evaluating their risk. The aim of this paper is to develop a new methodology to generate extreme hydrograph series spanning thousands of years using an event-based model. To this end, a spatial-temporal synthetic rainfall generator (RainSimV3) is combined with a distributed, physically-based, event-based rainfall–runoff model (RIBS). Using an event-based model allows simulating longer hydrograph series with fewer computational and data requirements, but requires characterizing the initial basin state, which depends on the initial basin moisture distribution. To overcome this problem, this paper proposes a probabilistic calibration–simulation approach, which treats the initial state and the model parameters as random variables characterized by a probability distribution through Monte Carlo simulation. This approach is compared with two other approaches, a deterministic and a semi-deterministic one. Both use a unique initial state. The deterministic approach also uses a unique value of the model parameters, while the semi-deterministic approach draws these values from their probability distribution through Monte Carlo simulation, accounting for the basin variability. The methodology has been applied to the Corbès and Générargues basins, in the southeast of France. The results show that the probabilistic approach offers the best fit, meaning that the proposed methodology can successfully characterize the extreme behavior of the basin while accounting for basin variability and overcoming the initial-state problem.
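
    As a toy illustration of the probabilistic calibration–simulation idea (not the RainSimV3/RIBS chain itself), the following Python sketch contrasts a deterministic run, which fixes the initial basin state and model parameter, with a Monte Carlo run that samples both per event; the distributions and the response function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_flow(rain_mm, soil_moisture, runoff_coeff):
    """Toy event-based basin response (a stand-in for RIBS): peak flow grows
    with rainfall and with how wet the basin already is. Illustrative only."""
    effective_rain = rain_mm * (0.2 + 0.8 * soil_moisture)
    return runoff_coeff * effective_rain

n_events = 10_000
rain = rng.gumbel(loc=60.0, scale=20.0, size=n_events)   # synthetic storms

# Deterministic approach: a unique initial state and parameter value.
q_det = peak_flow(rain, soil_moisture=0.5, runoff_coeff=1.2)

# Probabilistic approach: initial state and parameter sampled per event.
moisture = rng.beta(2.0, 2.0, size=n_events)             # initial basin state
coeff = rng.normal(1.2, 0.15, size=n_events)             # model parameter
q_prob = peak_flow(rain, moisture, coeff)

# Empirical high-return-period quantiles of the simulated peak flows.
q100_det = np.percentile(q_det, 99)
q100_prob = np.percentile(q_prob, 99)
```

    Comparing the two quantiles shows how sampling the initial state and parameters reshapes the extreme tail relative to a single deterministic choice.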

  4. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    Science.gov (United States)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
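
    To make the "multiple methods for the same property" point concrete, here is a minimal Python sketch that evaluates the equilibrium lattice spacing of a 1D Lennard-Jones chain two ways, by a brute-force energy scan and by a parabolic refinement of the scan minimum. The potential and the 1D geometry are illustrative stand-ins for the real interatomic models and crystal prototypes of the framework.

```python
import numpy as np

def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy in reduced units (illustrative potential)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def chain_energy_per_atom(a, n_shells=5):
    """Energy per atom of an infinite 1D chain with spacing a, counting each
    pair once via the neighbours on one side out to n_shells."""
    shells = np.arange(1, n_shells + 1)
    return float(np.sum(lj(shells * a)))

# Method 1: brute-force scan of the lattice parameter.
a_grid = np.linspace(1.0, 1.3, 601)
energies = np.array([chain_energy_per_atom(a) for a in a_grid])
i = int(np.argmin(energies))
a0_scan = a_grid[i]

# Method 2: parabolic refinement around the scan minimum -- two methods for
# one property, mirroring the method-to-method comparisons described above.
coeffs = np.polyfit(a_grid[i - 2:i + 3], energies[i - 2:i + 3], 2)
a0_fit = -coeffs[1] / (2.0 * coeffs[0])
```

    For a well-behaved potential the two estimates agree closely; where they would not, the disagreement is exactly the kind of method sensitivity the high-throughput comparison is designed to expose.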

  5. Computational intelligence as a platform for data collection methodology in management science

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    2006-01-01

    With the increased focus in management science on how to collect data close to the real world of managers, agent-based simulations have interesting prospects that are usable for the design of business applications aimed at the collection of data. As a new generation of data collection...... methodologies, this chapter discusses and presents a behavioral simulation founded in the agent-based simulation life cycle and supported by Web technology. With agent-based modeling the complexity of the method is increased without limiting the research due to the technological support, because this makes...... it possible to exploit the advantages of a questionnaire, an experimental design, a role-play and a scenario, thus gaining the synergy effect of these methodologies. At the end of the chapter an example of a simulation is presented for researchers and practitioners to study....

  6. Methodology for evaluation of industrial CHP production

    International Nuclear Information System (INIS)

    Pavlovic, Nenad V.; Studovic, Milovan

    2000-01-01

    At the end of the century, industry switched from being an exclusive power consumer into a consumer-producer that is one of the players on the deregulated power market. Consequently, the goals of industrial plant optimization have to change, creating new challenges that industrial management has to face. The paper reviews the authors' own methodology for evaluating industrial power production on a deregulated power market. The methodology takes the economic efficiency of industrial CHP facilities as the main evaluation criterion. Energy and ecological efficiency are used as additional criteria, in which social goals are implicit. The methodology also identifies key and limiting factors for CHP production in industry. It can be successfully applied using available commercial software for energy simulation of CHP plants and for economic evaluation. (Authors)

  7. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology is presented for the empirical validation of any detailed simulation model. This kind of validation is always tied to an experimental case. The empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of the failures of the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be made with DSA, differential sensitivity analysis, and with MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed. Residual analysis, made in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model for buildings is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT (Spain). (Author) 17 refs
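
    The MCSA step can be sketched in a few lines of Python: sample the uncertain inputs of a model simultaneously and rank them by the magnitude of their correlation with the output. The model and all uncertainty values below are invented for illustration, not the CIEMAT building model.

```python
import numpy as np

rng = np.random.default_rng(7)

def wall_heat_loss(u_value, area, delta_t):
    """Toy building-component model: heat loss through a wall [W]."""
    return u_value * area * delta_t

# Monte-Carlo sensitivity analysis: sample all inputs at once, then correlate
# each input with the output to rank its influence.
n = 5000
u = rng.normal(0.35, 0.05, n)       # W/(m^2 K), hypothetical uncertainty
area = rng.normal(20.0, 0.5, n)     # m^2
dT = rng.normal(15.0, 3.0, n)       # K
q = wall_heat_loss(u, area, dT)

sens = {name: abs(np.corrcoef(x, q)[0, 1])
        for name, x in [("u_value", u), ("area", area), ("delta_t", dT)]}
ranking = sorted(sens, key=sens.get, reverse=True)
```

    Inputs with small relative spread (here the area) correlate weakly with the output and drop to the bottom of the ranking, which is what directs attention toward the parameters that dominate the residuals.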

  8. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results for three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall, and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using the PCM and CondFD algorithms in EnergyPlus.
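
    A minimal, hedged sketch of the CondFD-plus-PCM idea (not the EnergyPlus algorithm itself): a 1D explicit finite-difference conduction solve where the PCM enters through an effective heat capacity boosted over the melting range. Every material value below is invented for illustration.

```python
import numpy as np

# 1D explicit finite-difference conduction through a wall layer, with the PCM
# modelled as an effective heat capacity boosted over its melting range.
L, nx = 0.05, 51                    # wall thickness [m], number of nodes
dx = L / (nx - 1)
k, rho, cp = 0.2, 900.0, 2000.0     # W/(m K), kg/m^3, J/(kg K)
latent, t_melt, half_range = 180e3, 25.0, 2.0   # J/kg, degC, degC

def c_eff(temp):
    """Sensible capacity plus latent heat spread over the melting range."""
    melting = np.abs(temp - t_melt) < half_range
    return cp + melting * latent / (2.0 * half_range)

T = np.full(nx, 20.0)               # initial temperature [degC]
T_hot, T_cold = 35.0, 20.0          # fixed boundary temperatures
T[0], T[-1] = T_hot, T_cold
dt = 0.4 * dx**2 * rho * cp / k     # keeps the Fourier number <= 0.4 (stable)
for _ in range(20000):              # march for about 20 hours of model time
    alpha = k / (rho * c_eff(T[1:-1]))
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
```

    The latent term slows the temperature rise of nodes crossing the melting range, which is the mechanism behind the peak-load shaving the paper evaluates.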

  9. Sustained qualification process for full scope nuclear power plant simulators

    International Nuclear Information System (INIS)

    Pirson, J.; Stubbe, E.; Vanhoenacker, L.

    1994-01-01

    In the past decade, simulator training for all nuclear power plant operators has evolved into a vital requirement. To ensure correct training, the simulator qualification process is an important issue, not only for the initial validation but also following the major simulator updates that are necessary during the lifetime of the simulator. In order to avoid degradation of the validated simulator software, modifications have to be introduced according to a rigorous methodology, and a practical requalification process has to be applied. Such a methodology has to be enforced at every phase of the simulator construction or updating process, from constitution of the plant data package, through simulator software development, to simulator response qualification. The initial qualification and requalification process is based on the three levels identified by the ANSI/ANS-3.5 standard for steady-state, operational transients and accident conditions. For the initial certification of the full scope simulators in Belgium, a practical qualification methodology has been applied, which has been adapted into a set of non-regression tests for requalification after major simulator updates. (orig.) (4 refs., 3 figs.)

  10. Weapon Simulator Test Methodology Investigation: Comparison of Live Fire and Weapon Simulator Test Methodologies and the Effects of Clothing and Individual Equipment on Marksmanship

    Science.gov (United States)

    2016-09-15

    variables analyzed included shot group tightness, radial error from the center of the target, and multiple time variables. The weapon simulator and...performance could be analyzed to determine if the weapon simulator data aligns with live fire data (i.e., if similar performance decrements appear in the...Analysis and Reporting The Noptel NOS4 software records shot performance data real-time and presents multiple statistical calculations as well as

  11. A top-down design methodology and its implementation for VCSEL-based optical links design

    Science.gov (United States)

    Li, Jiguang; Cao, Mingcui; Cai, Zilong

    2005-01-01

    In order to find the optimal design for a given specification of an optical communication link, an integrated simulation of the electronic, optoelectronic, and optical components of a complete system is required. It is very important to be able to simulate at both the system level and the detailed model level. Such a model is feasible thanks to the high potential of the Verilog-AMS language. In this paper, we propose an effective top-down design methodology and employ it in the development of a complete VCSEL-based optical link simulation. The principle of the top-down methodology is that development proceeds from the system level down to the device level. To design a hierarchical model of VCSEL-based optical links, the design framework is organized in three levels of hierarchy. The models are developed and implemented in Verilog-AMS, and the model parameters are then fitted to measured data. A sample transient simulation demonstrates the functioning of our implementation. Suggestions for future directions in top-down methodology for optoelectronic systems technology are also presented.
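
    At the most abstract level of such a hierarchy, a VCSEL can be reduced to a behavioural light-current (L-I) characteristic. The Python sketch below stands in for a Verilog-AMS system-level view and models threshold, slope efficiency, and a simple quadratic thermal rollover; all parameter values are invented, not fitted measurements.

```python
# Behavioural (system-level) VCSEL model in the spirit of the top-down flow:
# an L-I characteristic with threshold, slope efficiency and a simple
# quadratic thermal rollover. All parameter values are illustrative.

def vcsel_power_mw(i_ma, i_th=1.0, slope=0.3, i_roll=12.0):
    """Optical output power [mW] versus drive current [mA]: zero below the
    threshold i_th, then linear with a rollover peaking at i_roll."""
    if i_ma <= i_th:
        return 0.0
    x = i_ma - i_th
    power = slope * x - slope * x ** 2 / (2.0 * (i_roll - i_th))
    return max(power, 0.0)

# Sweep the L-I curve from 0 to 15 mA, as a simulation testbench might.
li_curve = [(i / 2.0, vcsel_power_mw(i / 2.0)) for i in range(0, 31)]
```

    In the top-down flow, a behavioural curve like this fixes the system-level budget first; the device-level rate-equation model is refined afterwards and fitted so that it reproduces the same characteristic.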

  12. Determining the number of kanbans for dynamic production systems: An integrated methodology

    Directory of Open Access Journals (Sweden)

    Özlem Uzun Araz

    2016-08-01

    Full Text Available Just-in-time (JIT) is a management philosophy that reduces inventory levels and eliminates manufacturing waste by producing only the right quantity at the right time. A kanban system is one of the key elements of the JIT philosophy. Kanbans are used to authorize production and to control the movement of materials in JIT systems. In kanban systems, the efficiency of the manufacturing system depends on several factors, such as the number of kanbans, container size, etc. Hence, determining the number of kanbans is a critical decision in kanban systems. The aim of this study is to develop a methodology for determining the number of kanbans in a dynamic production environment. In this methodology, changes in the system state are monitored in real time, and the number of kanbans is dynamically re-adjusted. The proposed methodology integrates simulation, neural networks and a Mamdani-type fuzzy inference system. The methodology is modelled in a simulation environment and applied to a hypothetical production system. We also performed several comparisons of different control policies to show the effectiveness of the proposed methodology.
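
    A stripped-down illustration of the monitoring-and-re-adjustment loop: a toy single-stage pull system in Python where a simple threshold rule stands in for the neural-network/fuzzy controller of the paper. The demand distribution and thresholds are invented.

```python
import random

random.seed(1)

# Toy single-stage pull system: kanbans authorize production each period, and
# a monitor re-adjusts the kanban count from the observed inventory position
# (a simple threshold rule standing in for the fuzzy/neural controller).
kanbans = 3
net = 0                      # positive = inventory, negative = backlog
history = []
for period in range(200):
    demand = random.randint(2, 9)        # mean demand: 5.5 units/period
    production = kanbans                 # each kanban authorizes one unit
    net += production - demand
    if net < -3 and kanbans < 12:
        kanbans += 1                     # chronic backlog: add a kanban
    elif net > 6 and kanbans > 1:
        kanbans -= 1                     # excess inventory: remove a kanban
    history.append((period, kanbans, net))
```

    Under stationary demand the kanban count settles near the mean demand; the point of the dynamic re-adjustment is that it also tracks demand shifts without re-solving the sizing problem offline.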

  13. Simulation of the operational monitoring of a BWR with Simulate-3; Simulacion del seguimiento operacional de un reactor BWR con Simulate-3

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez F, J. O.; Martin del Campo M, C.; Fuentes M, L.; Francois L, J. L., E-mail: ace.jo.cu@gmail.com [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico)

    2015-09-15

    This work describes a methodology for calculating the fuel burnup of nuclear power reactors throughout the duration of their operating cycle and for each fuel reload; in other words, for simulating and tracking the main operating parameters sequentially along the operating cycles. For this particular case, the operational monitoring of five consecutive cycles of a reactor was performed using the information reported by its process computer. The simulation was performed with the Simulate-3 software and the results were compared with those of the process computer. The goal is to obtain the fuel burnup, cycle after cycle, in order to derive the reactor state conditions needed for fuel reload analyses, stability studies and transient analysis, and to develop a methodology that makes it possible to manage and resolve similar cases for future fuel cycles of the nuclear power plant and to explore the various options offered by the simulator. (Author)

  14. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator

    International Nuclear Information System (INIS)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O.

    2010-09-01

    In Mexico there is a significant number of nuclear medicine centers in operation, so a risk of accidents related to the transport and manipulation of the open sources used in nuclear medicine exists. The National Institute of Nuclear Research (ININ) aims to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, it is intended to use the equipment available in the nuclear medicine centers, together with the neck-thyroid phantoms made by the ININ, to calibrate the gamma cameras. Gamma cameras include among their components elements that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurement; therefore, besides their use for diagnostic imaging, they can be calibrated with anthropomorphic phantoms, and also with point sources, for the quantification of the activity of radionuclides distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII -Harmonization of internal dosimetry procedures-, in which nine countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America for emergency response, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve the appropriate radiological protection of workers, which is essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)
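
    The quantification step behind such a calibration reduces to one relation: activity equals the background-corrected count rate divided by the efficiency measured with the phantom. The Python sketch below uses invented numbers, not values from the ARCAL protocol.

```python
# Quantifying thyroid activity from a gamma-camera count rate, given a
# calibration factor measured with a neck-thyroid phantom. All numbers are
# illustrative, not values from the harmonized protocol.

def thyroid_activity_bq(net_counts, live_time_s, efficiency_cps_per_bq):
    """Activity [Bq] from background-corrected counts and the phantom-derived
    counting efficiency [counts per second per Bq]."""
    count_rate = net_counts / live_time_s
    return count_rate / efficiency_cps_per_bq

# Hypothetical calibration: a phantom with a certified 5000 Bq source gave
# 60000 counts in 600 s, i.e. an efficiency of (60000/600)/5000 cps/Bq.
efficiency = (60000 / 600) / 5000
worker_activity = thyroid_activity_bq(net_counts=2400, live_time_s=600,
                                      efficiency_cps_per_bq=efficiency)
```

    With these invented inputs the measured worker would be assigned 200 Bq; in practice the efficiency depends on geometry, radionuclide and collimator, which is exactly what the phantom calibration fixes.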

  15. Integrated detoxification methodology of hazardous phenolic wastewaters in environmentally based trickle-bed reactors: Experimental investigation and CFD simulation

    International Nuclear Information System (INIS)

    Lopes, Rodrigo J.G.; Almeida, Teresa S.A.; Quinta-Ferreira, Rosa M.

    2011-01-01

    Centralized environmental regulations require the use of efficient detoxification technologies for the secure disposal of hazardous wastewaters. Guided by federal directives, existing plants need reengineering activities and careful analysis to improve their overall effectiveness and to become environmentally friendly. Here, we illustrate the application of an integrated methodology which encompasses the experimental investigation of catalytic wet air oxidation and CFD simulation of trickle-bed reactors. Because the behavior of trickle-bed reactors is determined by the flow environment coupled with chemical kinetics, the CFD model was first validated, after optimization of the prominent numerical solution parameters, against experimental data taken from a trickle-bed pilot plant specifically designed for the catalytic wet oxidation of phenolic wastewaters. Second, several experimental and computational runs were carried out under unsteady-state operation to evaluate the dynamic performance in terms of TOC concentration and temperature profiles. CFD computations of total organic carbon conversion were found to agree better with experimental data at lower temperatures. Finally, the comparison of test data with simulation results demonstrated that this integrated framework is able to describe the mineralization of organic matter in trickle beds, and the validated consequence model can be exploited to promote cleaner remediation technologies for contaminated waters.
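
    The temperature sensitivity at play can be illustrated with a lumped-kinetics sketch: first-order TOC decay with an Arrhenius rate constant in an ideal isothermal plug-flow section. The rate parameters below are invented, not the authors' CFD kinetics.

```python
import math

# First-order mineralization of total organic carbon (TOC) with an Arrhenius
# rate constant in an ideal isothermal plug-flow section. Parameters invented.
A = 2.0e4        # pre-exponential factor [1/s]
Ea = 60.0e3      # activation energy [J/mol]
R = 8.314        # gas constant [J/(mol K)]

def toc_conversion(temp_k, residence_s):
    """Fractional TOC conversion after a given residence time."""
    k = A * math.exp(-Ea / (R * temp_k))
    return 1.0 - math.exp(-k * residence_s)

x_low = toc_conversion(temp_k=393.15, residence_s=600.0)    # 120 degC
x_high = toc_conversion(temp_k=473.15, residence_s=600.0)   # 200 degC
```

    The strong growth of conversion with temperature is why small errors in the coupled flow-kinetics model become more visible at the hot end, consistent with the better low-temperature agreement reported above.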

  16. Integrated detoxification methodology of hazardous phenolic wastewaters in environmentally based trickle-bed reactors: Experimental investigation and CFD simulation.

    Science.gov (United States)

    Lopes, Rodrigo J G; Almeida, Teresa S A; Quinta-Ferreira, Rosa M

    2011-05-15

    Centralized environmental regulations require the use of efficient detoxification technologies for the secure disposal of hazardous wastewaters. Guided by federal directives, existing plants need reengineering activities and careful analysis to improve their overall effectiveness and to become environmentally friendly. Here, we illustrate the application of an integrated methodology which encompasses the experimental investigation of catalytic wet air oxidation and CFD simulation of trickle-bed reactors. Because the behavior of trickle-bed reactors is determined by the flow environment coupled with chemical kinetics, the CFD model was first validated, after optimization of the prominent numerical solution parameters, against experimental data taken from a trickle-bed pilot plant specifically designed for the catalytic wet oxidation of phenolic wastewaters. Second, several experimental and computational runs were carried out under unsteady-state operation to evaluate the dynamic performance in terms of TOC concentration and temperature profiles. CFD computations of total organic carbon conversion were found to agree better with experimental data at lower temperatures. Finally, the comparison of test data with simulation results demonstrated that this integrated framework is able to describe the mineralization of organic matter in trickle beds, and the validated consequence model can be exploited to promote cleaner remediation technologies for contaminated waters. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. The evolution of groundwater flow and mass transport in Canadian shield flow domains: a methodology for numerical simulation

    International Nuclear Information System (INIS)

    Sykes, J.F.; Sudicky, E.A.; Normani, S.D.; Park, Y.J.; Cornaton, F.; McLaren, R.G.

    2007-01-01

    The Deep Geologic Repository Technology Programme (DGRTP) of Ontario Power Generation (OPG) is developing numerous approaches and methodologies for integrated and multidisciplinary site characterisation. A principal element involves the use and further development of state-of-the-art numerical simulators, and immersive visualisation technologies, while fully honouring multi-disciplinary litho-structural, hydrogeologic, paleo-hydrogeologic, geophysical, hydrogeochemical and geomechanical field data. Paleo-climate reconstructions provide surface boundary conditions for numerical models of the subsurface, furthering the understanding of groundwater flow in deep geologic systems and quantifying the effects of glaciation and deglaciation events. The use of geo-statistically plausible fracture networks conditioned on surface lineaments within the numerical models results in more physically representative and realistic characterizations of the repository site. Finally, immersive three-dimensional visualisation technology is used to query, investigate, explore and understand both the raw data, and simulation results in a spatially and temporally consistent framework. This environment allows multi-disciplinary teams of geoscience professionals to explore each other's work and can significantly enhance understanding and knowledge, thereby creating stronger linkages between the geo-scientific disciplines. The use of more physically representative and realistic conceptual models, coupled with immersive visualisation, contributes to an overall integrated approach to site characterisation, instilling further confidence in the understanding of flow system evolution. (authors)

  18. THE BROAD-LINED Type Ic SN 2012ap AND THE NATURE OF RELATIVISTIC SUPERNOVAE LACKING A GAMMA-RAY BURST DETECTION

    Energy Technology Data Exchange (ETDEWEB)

    Milisavljevic, D.; Margutti, R.; Parrent, J. T.; Soderberg, A. M.; Sanders, N. E.; Kamble, A.; Chakraborti, S.; Drout, M. R.; Kirshner, R. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Fesen, R. A. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Mazzali, P. [Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF (United Kingdom); Maeda, K. [Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); Cenko, S. B. [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Silverman, J. M. [University of Texas at Austin, 1 University Station C1400, Austin, TX 78712-0259 (United States); Filippenko, A. V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Pickering, T. E. [Southern African Large Telescope, P.O. Box 9, Observatory 7935, Cape Town (South Africa); Kawabata, K. [Hiroshima Astrophysical Science Center, Hiroshima University, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Hattori, T. [Subaru Telescope, National Astronomical Observatory of Japan, Hilo, HI 96720 (United States); Hsiao, E. Y. [Carnegie Observatories, Las Campanas Observatory, Colina El Pino, Casilla 601 (Chile); Stritzinger, M. D., E-mail: dmilisav@cfa.harvard.edu [Department of Physics and Astronomy, Aarhus University, Ny Munkegade, DK-8000 Aarhus C (Denmark); and others

    2015-01-20

    We present ultraviolet, optical, and near-infrared observations of SN 2012ap, a broad-lined Type Ic supernova in the galaxy NGC 1729 that produced a relativistic and rapidly decelerating outflow without a gamma-ray burst signature. Photometry and spectroscopy follow the flux evolution from –13 to +272 days past the B-band maximum of –17.4 ± 0.5 mag. The spectra are dominated by Fe II, O I, and Ca II absorption lines at ejecta velocities of v ≈ 20,000 km s⁻¹ that change slowly over time. Other spectral absorption lines are consistent with contributions from photospheric He I, and hydrogen may also be present at higher velocities (v ≳ 27,000 km s⁻¹). We use these observations to estimate explosion properties and derive a total ejecta mass of ∼2.7 M☉, a kinetic energy of ∼1.0 × 10⁵² erg, and a ⁵⁶Ni mass of 0.1-0.2 M☉. Nebular spectra (t > 200 days) exhibit an asymmetric double-peaked [O I] λλ6300, 6364 emission profile that we associate with absorption in the supernova interior, although toroidal ejecta geometry is an alternative explanation. SN 2012ap joins SN 2009bb as another exceptional supernova that shows evidence for a central engine (e.g., black hole accretion or magnetar) capable of launching a non-negligible portion of ejecta to relativistic velocities without a coincident gamma-ray burst detection. Defining attributes of their progenitor systems may be related to notable observed properties including environmental metallicities of Z ≳ Z☉, moderate to high levels of host galaxy extinction (E(B – V) > 0.4 mag), detection of high-velocity helium at early epochs, and a high relative flux ratio of [Ca II]/[O I] > 1 at nebular epochs. These events support the notion that jet activity at various energy scales may be present in a wide range of supernovae.

  19. The Broad-Lined Type Ic SN 2012ap and the Nature of Relativistic Supernovae Lacking a Gamma-Ray Burst Detection

    Science.gov (United States)

    Milisavljevic, D.; Margutti, R.; Parrent, J. T.; Soderberg, A. M.; Fesen, R. A.; Mazzali, P.; Maeda, K.; Sanders, N. E.; Cenko, S. B.; Silverman, J. M.

    2014-01-01

    We present ultraviolet, optical, and near-infrared observations of SN 2012ap, a broad-lined Type Ic supernova in the galaxy NGC 1729 that produced a relativistic and rapidly decelerating outflow without a gamma-ray burst signature. Photometry and spectroscopy follow the flux evolution from -13 to +272 days past the B-band maximum of -17.4 ± 0.5 mag. The spectra are dominated by Fe II, O I, and Ca II absorption lines at ejecta velocities of v ≈ 20,000 km s⁻¹ that change slowly over time. Other spectral absorption lines are consistent with contributions from photospheric He I, and hydrogen may also be present at higher velocities (v ≳ 27,000 km s⁻¹). We use these observations to estimate explosion properties and derive a total ejecta mass of ∼2.7 M☉, a kinetic energy of ∼1.0 × 10⁵² erg, and a ⁵⁶Ni mass of 0.1-0.2 M☉. Nebular spectra (t > 200 d) exhibit an asymmetric double-peaked [O I] λλ6300, 6364 emission profile that we associate with absorption in the supernova interior, although toroidal ejecta geometry is an alternative explanation. SN 2012ap joins SN 2009bb as another exceptional supernova that shows evidence for a central engine (e.g., black-hole accretion or magnetar) capable of launching a non-negligible portion of ejecta to relativistic velocities without a coincident gamma-ray burst detection. Defining attributes of their progenitor systems may be related to notable properties including above-average environmental metallicities of Z ≳ Z☉, moderate to high levels of host-galaxy extinction (E(B − V) > 0.4 mag), detection of high-velocity helium at early epochs, and a high relative flux ratio of [Ca II]/[O I] > 1 at nebular epochs. These events support the notion that jet activity at various energy scales may be present in a wide range of supernovae.

  20. OBEST: The Object-Based Event Scenario Tree Methodology

    International Nuclear Information System (INIS)

    WYSS, GREGORY D.; DURAN, FELICIA A.

    2001-01-01

    Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenarios Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result--scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology, the demonstration software that implements it, and provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies
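
    The recursive solution idea is easy to sketch: walk every probabilistic branch of an object model, multiplying branch probabilities along the way, so that each leaf scenario arrives with its likelihood attached, including very unlikely ones. The toy Python model below is hypothetical, not OBEST itself.

```python
# Recursive enumeration of a probabilistic scenario tree: each state offers
# probabilistic branches, and the solver walks all paths, carrying the product
# of branch probabilities. Hypothetical toy model for illustration.

# state -> list of (next_state, probability); None marks an end state.
branches = {
    "pump_start": [("pump_runs", 0.95), ("pump_fails", 0.05)],
    "pump_runs": [("valve_opens", 0.99), ("valve_stuck", 0.01)],
    "pump_fails": [(None, 1.0)],
    "valve_opens": [(None, 1.0)],
    "valve_stuck": [(None, 1.0)],
}

def enumerate_scenarios(state, prob=1.0, path=()):
    """Yield (scenario path, likelihood) for every leaf of the tree."""
    path = path + (state,)
    for nxt, p in branches[state]:
        if nxt is None:
            yield path, prob * p
        else:
            yield from enumerate_scenarios(nxt, prob * p, path)

scenarios = list(enumerate_scenarios("pump_start"))
total = sum(p for _, p in scenarios)   # scenario likelihoods sum to one
```

    Because the enumeration is exhaustive rather than sampled, the rare pump-runs/valve-stuck path appears with its exact probability instead of requiring enough Monte Carlo draws to observe it.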

  1. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.
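
    As a minimal illustration of the level-set idea (on a uniform 1-D grid rather than the adaptive octrees discussed in the talk), an interface can be tracked implicitly as the zero crossing of an advected signed-distance function:

```python
import numpy as np

# phi_t + u * phi_x = 0, first-order upwind scheme, constant u > 0;
# the interface is wherever phi changes sign.
n, L, u = 200, 1.0, 0.5
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
dt = 0.5 * dx / u                    # CFL number 0.5
phi = x - 0.25                       # signed distance; interface at x = 0.25

for _ in range(100):
    phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])   # upwind difference

# after t = 100*dt the zero level set has advected to roughly x = 0.25 + u*t
interface = x[np.argmin(np.abs(phi))]
```

    Topology changes (interfaces merging or splitting) are handled for free by this implicit representation, which is the framework's main appeal.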

  2. CFD simulation of homogenisation time measured by radiotracers

    International Nuclear Information System (INIS)

    Thyn, J.; Novy, M.; Zitny, R.; Mostik, M.; Jahoda, M.

    2004-01-01

    A methodology for CFD (Computational Fluid Dynamics) simulation of radiotracer experiments is proposed. The most important parts of the methodology for validation of CFD results by radiotracers are: a) successful simulation of the tracer experiment by a CFD code (numerical solution of tracer dispersion in a stirred tank), which yields the tracer concentration field at several time intervals; b) post-process data treatment, which uses a description of the detection chain and enables simulation of the detector measurement of homogenisation time from the tracer concentration field evaluated by the CFD code. (author)
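
    Step (b) ends in a simple signal-processing task: given a detector response curve, extract the homogenisation time. A hedged sketch with a synthetic exponential response (standing in for the CFD concentration field folded through the detection chain) and a 95%-of-final-value criterion, which is a common mixing-time convention assumed here rather than taken from the paper:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)            # time, s (0.1 s sampling)
c_final = 1.0
c = c_final * (1.0 - np.exp(-t / 8.0))     # synthetic detector response

# homogenisation time: last instant the signal is still outside +/-5%
band = 0.05 * c_final
outside = np.nonzero(np.abs(c - c_final) > band)[0]
t95 = t[outside.max()] if outside.size else 0.0
# analytically 8*ln(20) ≈ 24.0 s, so t95 lands on the sample just below that
```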

  3. Supply chain simulation tools and techniques: a survey

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    The main contribution of this paper is twofold: it surveys different types of simulation for supply chain management; it discusses several methodological issues. These different types of simulation are spreadsheet simulation, system dynamics, discrete-event simulation and business games. Which

  4. Methodology Development for SiC Sensor Signal Modelling in the Nuclear Reactor Radiation Environments

    International Nuclear Information System (INIS)

    Cetnar, J.; Krolikowski, I.P.

    2013-06-01

    This paper deals with SiC detector simulation methodology for signal formation by neutrons and induced secondary radiation, as well as its inverse interpretation. The primary goal is to achieve the SiC capability of simultaneous spectroscopic measurements of neutrons and gamma-rays, for which an appropriate methodology of detector signal modelling and interpretation must be adopted. The process of detector simulation is divided into two separate but interconnected sections. The first is the forward simulation of detector signal formation in the field of the primary neutron and secondary radiations, whereas the second is the inverse problem of finding a representation of the primary radiation based on the measured detector signals. The methodology under development is based on the Monte Carlo description of radiation transport and analysis of the reactor physics. The methodology of SiC detector signal interpretation will be based on the existing experience in neutron metrology developed in the past for various neutron and gamma-ray detection systems. Since the novel sensors based on SiC are characterised by a new structure, yet to be finally designed, the methodology for spectroscopic particle fluence measurement must be developed while providing productive feedback to the design process of the SiC sensor, in order to arrive at the best possible design. (authors)

  5. An Innovative Fuzzy-Logic-Based Methodology for Trend Identification

    International Nuclear Information System (INIS)

    Wang Xin; Tsoukalas, Lefteri H.; Wei, Thomas Y.C.; Reifman, Jaques

    2001-01-01

    A new fuzzy-logic-based methodology for on-line signal trend identification is introduced. The methodology may be used for detecting the onset of nuclear power plant (NPP) transients at the earliest possible time and could be of great benefit to diagnostic, maintenance, and performance-monitoring programs. Although signal trend identification is complicated by the presence of noise, fuzzy methods can help capture important features of on-line signals, integrate the information included in these features, and classify incoming NPP signals into increasing, decreasing, and steady-state trend categories. A computer program named PROTREN is developed and tested for the purpose of verifying this methodology using NPP and simulation data. The results indicate that the new fuzzy-logic-based methodology is capable of detecting transients accurately and identifying trends reliably, without misinterpreting a steady-state signal as a transient one.
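
    The abstract does not give PROTREN's membership functions, so the following sketch assumes simple ramp-shaped memberships over the estimated signal slope; only the three-way classification into increasing, decreasing, and steady-state trends is taken from the text:

```python
def trend_memberships(slope, eps=0.5):
    """Fuzzy membership degrees for a signal slope (units per second).
    'steady' peaks at zero slope and falls off linearly over +/-eps;
    'increasing'/'decreasing' ramp up to 1 beyond eps. eps is a tuning
    parameter assumed here, not a PROTREN value."""
    steady = max(0.0, 1.0 - abs(slope) / eps)
    increasing = min(1.0, max(0.0, slope / eps))
    decreasing = min(1.0, max(0.0, -slope / eps))
    return {"increasing": increasing, "steady": steady, "decreasing": decreasing}

def classify(slope):
    m = trend_memberships(slope)
    return max(m, key=m.get)
```

    Here classify(0.05) returns 'steady' while classify(1.2) returns 'increasing'; in a real trend identifier the slope itself would be estimated from noisy on-line samples, e.g. by regression over a sliding window.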

  6. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    International Nuclear Information System (INIS)

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-01-01

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socio-economic impacts, combat the disease, and avert a pandemic.

  7. Electrochemical treatment of simulated sugar industrial effluent: Optimization and modeling using a response surface methodology

    Directory of Open Access Journals (Sweden)

    P. Asaithambi

    2016-11-01

    Full Text Available The removal of organic compounds from a simulated sugar industrial effluent was investigated using the electrochemical oxidation technique. The effects of experimental parameters such as current density, electrolyte concentration and flow rate on the percentage COD removal and power consumption were studied in a batch electrochemical reactor. The reactor performance was analyzed with and without recirculation of the effluent at a constant inter-electrode distance. It was found that the percentage COD removal increased with increasing electrolyte concentration and current density. A maximum COD removal of 80.74% was achieved at a current density of 5 A/dm2 and an electrolyte concentration of 5 g/L in the batch electrochemical reactor. The parameters of the recirculation electrochemical reactor system (current density, COD concentration and flow rate) were optimized using response surface methodology, maximizing the percentage COD removal while minimizing power consumption. The predicted values are in good agreement with the experimental data, with a correlation coefficient of 0.9888.
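
    The response-surface step can be illustrated with a least-squares fit of the usual full quadratic model in two factors. The data below are synthetic (the paper's measurements are not reproduced here), with current density j and electrolyte concentration c as factors:

```python
import numpy as np

# y = b0 + b1*j + b2*c + b3*j^2 + b4*c^2 + b5*j*c  (full quadratic RSM model)
rng = np.random.default_rng(0)
j = rng.uniform(1.0, 5.0, 30)                  # current density, A/dm2
c = rng.uniform(1.0, 5.0, 30)                  # electrolyte conc., g/L
true = 20 + 10 * j + 6 * c - 0.8 * j**2 - 0.5 * c**2 + 0.2 * j * c
y = true + rng.normal(0.0, 0.5, 30)            # synthetic %COD removal

X = np.column_stack([np.ones_like(j), j, c, j**2, c**2, j * c])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()               # analogue of the reported 0.9888
```

    Once fitted, the surface can be searched (analytically or numerically) for the factor settings that maximize removal while penalizing power consumption.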

  8. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    Science.gov (United States)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from the National Institutes of Health (R01 EB018302) is greatly appreciated.
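
    Of the solver techniques compared, conjugate gradients is the canonical Krylov method for symmetric positive-definite systems; a minimal unpreconditioned version (a textbook sketch, not the paper's actual solver stack) looks like this:

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned conjugate gradient for SPD A; returns the solution
    and the number of iterations used."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for it in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, it + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(3)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)      # well-conditioned SPD test matrix
b = rng.normal(size=50)
x, iters = cg(A, b)                  # converges in far fewer than n iterations
```

    Preconditioning matters because the iteration count grows with the square root of the condition number, and UQ studies multiply that cost by thousands of samples.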

  9. Digital Controller Development Methodology Based on Real-Time Simulations with LabVIEW FPGA Hardware-Software Toolset

    Directory of Open Access Journals (Sweden)

    Tommaso Caldognetto

    2013-12-01

    Full Text Available In this paper, we exemplify the use of NI LabVIEW FPGA as a rapid prototyping environment for digital controllers. In our power electronics laboratory, it has been successfully employed in the development, debugging, and test of different power converter controllers for microgrid applications. The paper shows how this high-level programming language, together with its target hardware platforms, including CompactRIO and Single Board RIO systems, allows researchers and students to develop even complex applications in reasonable times. The availability of efficient drivers for the considered hardware platforms frees users from the burden of low-level programming. At the same time, the high-level programming approach facilitates software re-utilization, allowing the laboratory know-how to steadily grow over time. Furthermore, it allows hardware-in-the-loop real-time simulation, which proved to be effective, and safe, in debugging even complex hardware and software co-designed controllers. To illustrate the effectiveness of these hardware-software toolsets and of the methodology based upon them, two case studies are

  10. Implementing the cost-optimal methodology in EU countries

    DEFF Research Database (Denmark)

    Atanasiu, Bogdan; Kouloumpi, Ilektra; Thomsen, Kirsten Engelund

    This study presents three cost-optimal calculations. The overall aim is to provide a deeper analysis and to provide additional guidance on how to properly implement the cost-optimality methodology in Member States. Without proper guidance and lessons from exemplary case studies using realistic...... input data (reflecting the likely future development), there is a risk that the cost-optimal methodology may be implemented at sub-optimal levels. This could lead to a misalignment between the defined cost-optimal levels and the long-term goals, leaving a significant energy saving potential unexploited....... Therefore, this study provides more evidence on the implementation of the cost-optimal methodology and highlights the implications of choosing different values for key factors (e.g. discount rates, simulation variants/packages, costs, energy prices) at national levels. The study demonstrates how existing...

  11. Methodology for Evaluating the Simulator Flight Performance of Pilots

    National Research Council Canada - National Science Library

    Smith, Jennifer

    2004-01-01

    The type of research that investigates operational tasks such as flying an aircraft or flight simulator is extremely useful to the Air Force's operational community because the results apply directly...

  12. Methodology to estimate parameters of an excitation system based on experimental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Saavedra-Montes, A.J. [Carrera 80 No 65-223, Bloque M8 oficina 113, Escuela de Mecatronica, Universidad Nacional de Colombia, Medellin (Colombia); Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Ramirez-Scarpetta, J.M. [Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Malik, O.P. [2500 University Drive N.W., Electrical and Computer Engineering Department, University of Calgary, Calgary, Alberta (Canada)

    2011-01-15

    A methodology to estimate the parameters of a potential-source controlled rectifier excitation system model is presented in this paper. The proposed parameter estimation methodology is based on the characteristics of the excitation system. A comparison of two pseudo random binary signals, two sampling periods for each one, and three estimation algorithms is also presented. Simulation results from an excitation control system model and experimental results from an excitation system of a power laboratory setup are obtained. To apply the proposed methodology, the excitation system parameters are identified at two different levels of the generator saturation curve. The results show that it is possible to estimate the parameters of the standard model of an excitation system, recording two signals and the system operating in closed loop with the generator. The normalized sum of squared error obtained with experimental data is below 10%, and with simulation data is below 5%. (author)
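
    The core of such an estimation procedure, stripped of the excitation-system specifics, is least-squares identification from a pseudo-random binary excitation, scored by a normalized sum of squared errors (NSSE). The sketch below uses a synthetic first-order plant; the paper's model structure, signals, and algorithms are richer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
u = np.where(rng.random(n) < 0.5, -1.0, 1.0)   # PRBS-like binary input

a_true, b_true = 0.9, 0.5                      # y[k] = a*y[k-1] + b*u[k-1]
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]
y += rng.normal(0.0, 0.01, n)                  # measurement noise

# ordinary least squares on the regressor [y[k-1], u[k-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

y_hat = Phi @ np.array([a_hat, b_hat])
nsse = np.sum((y[1:] - y_hat) ** 2) / np.sum(y[1:] ** 2)
# nsse ends up well below the few-percent level the paper reports
```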

  13. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  14. Simulation of sea water intrusion in coastal aquifers

    Indian Academy of Sciences (India)

    dependent miscible flow and transport modelling approach for simulation of seawater intrusion in coastal aquifers. A nonlinear optimization-based simulation methodology was used in this study. Various steady state simulations are performed for a ...

  15. Methodology for the hybrid solution of systems of differential equations

    International Nuclear Information System (INIS)

    Larrinaga, E.F.; Lopez, M.A.

    1993-01-01

    This work presents a general methodology for solving systems of differential equations on hybrid computers. Based on this methodology, a mathematical model was developed. It offers broad possibilities for recording and handling results using the hybrid IBM-VIDAC 1224 system available at the ISCTN. Results obtained when simulating a simple model of a nuclear reactor, which was used to validate the computational model, are also presented.

  16. Development of risk assessment methodology of decay heat removal function against external hazards for sodium-cooled fast reactors. (3) Numerical simulations of forest fire spread and smoke transport as an external hazard assessment methodology development

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamano, Hidemasa

    2015-01-01

    As part of the development of risk assessment methodologies against external hazards, a new methodology to assess forest fire hazards is being developed. The frequency and consequence of a forest fire are analyzed to obtain the hazard intensity curve, and a Level 1 probabilistic safety assessment is then performed to obtain the conditional core damage probability due to the challenges posed by the forest fire. 'Heat', 'flame', 'smoke' and 'flying objects' are the challenges to a nuclear power plant. For a sodium-cooled fast reactor, decay heat removal under accident conditions relies on an ultimate heat sink of air, so the challenge posed by 'smoke' will potentially fall on the air filter of that system. In this paper, numerical simulations of forest fire propagation and smoke transport were performed with sensitivity studies on weather conditions, and the effect of the smoke on the air filter was quantitatively evaluated. Forest fire propagation simulations were performed using the FARSITE code. A temporal increase of the forest fire spread area and the position of the frontal fireline are obtained by the simulation, and 'reaction intensity' and 'frontal fireline intensity', the indexes of 'heat', are obtained as well. The boundary of the fire spread area is shaped like an ellipse on the terrain, and the boundary length increases with time and fire spread. Sensitivity analyses on the weather conditions of wind, temperature, and humidity were performed, showing that the 'forest fire spread rate' and 'frontal fireline intensity' depend strongly on wind speed and humidity. Smoke transport simulations were performed with the ALOFT-FT code, in which the three-dimensional spatial distribution of smoke density, especially of the particulate matter fractions PM2.5 and PM10, is evaluated.
    The snapshot outputs, namely 'reaction intensity' and 'position of frontal fireline', from the sensitivity studies of FARSITE were directly utilized as the input data for ALOFT-FT, whereas it is assumed that the

  17. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
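
    The coupling of a response surface with Monte Carlo simulation reduces, computationally, to sampling uncertain inputs and pushing them through an inexpensive polynomial. The coefficients and uncertainty levels below are purely illustrative, not the study's regression results:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

ox_flux = rng.normal(100.0, 5.0, n)     # uncertain operating condition
frac    = rng.normal(0.5, 0.02, n)      # uncertain mixture fraction

# hypothetical quadratic response equation for fuel regression rate (mm/s)
r = 0.8 + 0.004 * ox_flux + 1.2 * frac - 0.9 * frac**2 + 0.001 * ox_flux * frac

mean = r.mean()
p5, p95 = np.percentile(r, [5, 95])     # dispersion of the response
```

    An optimizer can then maximize the mean (or a chosen percentile) of the dispersed regression rate over the feasible region of operating and mixture space.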

  18. Numerical characteristics of quantum computer simulation

    Science.gov (United States)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

    The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, which makes the use of modern high-performance parallel computing essential. As is well known, an arbitrary quantum computation in the circuit model can be composed of only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. The unique properties of quantum mechanics shape the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good base for the research and testing of development methods for data-intensive parallel software, and that the considered analysis methodology can be successfully used for the improvement of algorithms in quantum information science.
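
    The single-qubit-gate kernel at the heart of such a simulator is easy to state, and it makes the exponential cost visible: every gate touches the whole 2^n-element state vector. A minimal NumPy sketch (the AlgoWiki analysis targets far more optimized parallel kernels):

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)               # expose the target axis
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # 2x2 matrix on that axis
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                        # |000>
for q in range(n):
    state = apply_1q(state, H, q, n)  # uniform superposition over 8 basis states
```

    A two-qubit gate works the same way with a 4x4 matrix and two exposed axes; entangling gates are what couple distant parts of the vector and hurt memory locality.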

  19. A coupled groundwater-flow-modelling and vulnerability-mapping methodology for karstic terrain management

    Science.gov (United States)

    Kavouri, Konstantina P.; Karatzas, George P.; Plagnes, Valérie

    2017-08-01

    A coupled groundwater-flow-modelling and vulnerability-mapping methodology for the management of karst aquifers with spatial variability is developed. The methodology takes into consideration the duality of flow and recharge in karst and introduces a simple method to integrate the effect of temporal storage in the unsaturated zone. In order to investigate the applicability of the developed methodology, simulation results are validated against available field measurement data. The criteria maps from the PaPRIKa vulnerability-mapping method are used to document the groundwater flow model. The FEFLOW model is employed for the simulation of the saturated zone of Palaikastro-Chochlakies karst aquifer, in the island of Crete, Greece, for the hydrological years 2010-2012. The simulated water table reproduces typical karst characteristics, such as steep slopes and preferred drain axes, and is in good agreement with field observations. Selected calculated error indicators—Nash-Sutcliffe efficiency (NSE), root mean squared error (RMSE) and model efficiency (E')—are within acceptable value ranges. Results indicate that different storage processes take place in different parts of the aquifer. The north-central part seems to be more sensitive to diffuse recharge, while the southern part is affected primarily by precipitation events. Sensitivity analysis is performed on the parameters of hydraulic conductivity and specific yield. The methodology is used to estimate the feasibility of artificial aquifer recharge (AAR) at the study area. Based on the developed methodology, guidelines were provided for the selection of the appropriate AAR scenario that has positive impact on the water table.
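
    The error indicators quoted above are standard and easy to state explicitly; the water-table values below are synthetic, for illustration only:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# synthetic observed vs simulated hydraulic heads (m)
obs = [10.0, 10.4, 11.0, 10.6, 10.2]
sim = [10.1, 10.3, 10.9, 10.7, 10.2]
# nse(obs, sim) ≈ 0.93, rmse(obs, sim) ≈ 0.089 m
```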

  20. In vitro dissolution methodology, mini-Gastrointestinal Simulator (mGIS), predicts better in vivo dissolution of a weak base drug, dasatinib.

    Science.gov (United States)

    Tsume, Yasuhiro; Takeuchi, Susumu; Matsui, Kazuki; Amidon, Gregory E; Amidon, Gordon L

    2015-08-30

    USP apparatus I and II are gold standard methodologies for determining the in vitro dissolution profiles of test drugs. However, it is difficult to use in vitro dissolution results to predict in vivo dissolution, particularly the pH-dependent solubility of weak acid and base drugs, because the USP apparatus contains one vessel with a fixed pH for the test drug, limiting insight into in vivo drug dissolution of weak acid and weak base drugs. This discrepancy underscores the need to develop new in vitro dissolution methodology that better predicts in vivo response to assure the therapeutic efficacy and safety of oral drug products. Thus, the development of the in vivo predictive dissolution (IPD) methodology is necessitated. The major goals of in vitro dissolution are to ensure the performance of oral drug products and the support of drug formulation design, including bioequivalence (BE). Orally administered anticancer drugs, such as dasatinib and erlotinib (tyrosine kinase inhibitors), are used to treat various types of cancer. These drugs are weak bases that exhibit pH-dependent and high solubility in the acidic stomach and low solubility in the small intestine (>pH 6.0). Therefore, these drugs supersaturate and/or precipitate when they move from the stomach to the small intestine. Also of importance, gastric acidity for cancer patients may be altered with aging (reduction of gastric fluid secretion) and/or co-administration of acid-reducing agents. These may result in changes to the dissolution profiles of weak base and the reduction of drug absorption and efficacy. In vitro dissolution methodologies that assess the impact of these physiological changes in the GI condition are expected to better predict in vivo dissolution of oral medications for patients and, hence, better assess efficacy, toxicity and safety concerns. The objective of this present study is to determine the initial conditions for a mini-Gastrointestinal Simulator (mGIS) to assess in vivo

  1. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    Science.gov (United States)

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
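
    The idea of parsing out independent error sources can be illustrated with a quadrature decomposition: if the total CFD-versus-PIV discrepancy combines a model error with independent experimental and numerical errors, the model error is what remains. This is a simplified stand-in for the paper's methodology, with illustrative numbers:

```python
import numpy as np

def model_error(total_discrepancy, experiment_error, numerical_error):
    """Recover the model error in quadrature, assuming the three error
    sources are independent (all in the same units, e.g. % of velocity)."""
    residual = total_discrepancy**2 - experiment_error**2 - numerical_error**2
    return float(np.sqrt(max(residual, 0.0)))

# e.g. an 8% velocity discrepancy with 4% PIV and 3% discretization error
# leaves sqrt(64 - 16 - 9) ≈ 6.2% attributable to modeling assumptions
err = model_error(8.0, 4.0, 3.0)
```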

  2. Vibration-Driven Microrobot Positioning Methodologies for Nonholonomic Constraint Compensation

    Directory of Open Access Journals (Sweden)

    Kostas Vlachos

    2015-03-01

    Full Text Available This paper presents the formulation and practical implementation of positioning methodologies that compensate for the nonholonomic constraints of a mobile microrobot that is driven by two vibrating direct current (DC) micromotors. The open-loop and closed-loop approaches described here add the capability for net sidewise displacements of the microrobotic platform. A displacement is achieved by the execution of a number of repeating steps that depend on the desired displacement, the speed of the micromotors, and the elapsed time. Simulation and experimental results verified the performance of the proposed methodologies.

  3. An improved methodology for dynamic modelling and simulation of ...

    Indian Academy of Sciences (India)

    The diversity of the processes and the complexity of the drive system .... modelling the specific event, general simulation tools such as Matlab R provide the user with tools for creating ..... using the pulse width modulation (PWM) techniques.

  4. A methodology for identification and control of electro-mechanical actuators.

    Science.gov (United States)

    Tutunji, Tarek A; Saleem, Ashraf

    2015-01-01

    Mechatronic systems are fully-integrated engineering systems that are composed of mechanical, electronic, and computer control sub-systems. These integrated systems use electro-mechanical actuators to cause the required motion. Therefore, the design of appropriate controllers for these actuators is an essential step in mechatronic system design. In this paper, a three-stage methodology for real-time identification and control of electro-mechanical actuator plants is presented, tested, and validated. First, identification models are constructed from experimental data to approximate the plants' response. Second, the identified model is used in a simulation environment to design a suitable controller. Finally, the designed controller is applied and tested on the real plant through a Hardware-in-the-Loop (HIL) environment. The described three-stage methodology provides the following practical contributions: • Establishes an easy-to-follow methodology for controller design of electro-mechanical actuators. • Combines off-line and on-line controller design for practical performance. • Modifies the HIL concept by using physical plants with computer control (rather than virtual plants with physical controllers). Simulated and experimental results for two case studies, an induction motor and a vehicle drive system, are presented in order to validate the proposed methodology. These results showed that electro-mechanical actuators can be identified and controlled using an easy-to-duplicate and flexible procedure.

  5. Mesoscopic Simulations of Crosslinked Polymer Networks

    Science.gov (United States)

    Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.

    2016-08-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from simulations of several microseconds at equilibrium, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.
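
    The Brownian Dynamics half of such an algorithm amounts to an overdamped position update with spring forces and thermal noise. The sketch below keeps only harmonic springs between successive beads (no slip-springs or non-bonded terms) and uses illustrative reduced units:

```python
import numpy as np

rng = np.random.default_rng(7)
n_beads, k_spring, zeta, kT, dt = 20, 1.0, 1.0, 1.0, 1e-3
r = rng.normal(0.0, 1.0, (n_beads, 3))      # initial bead positions

def spring_forces(r):
    f = np.zeros_like(r)
    bond = r[1:] - r[:-1]                   # vectors along the chain
    f[:-1] += k_spring * bond               # each bond pulls its two beads together
    f[1:] -= k_spring * bond
    return f

for _ in range(1000):
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / zeta), r.shape)
    r += dt / zeta * spring_forces(r) + noise   # overdamped (Brownian) step

mean_bond = np.linalg.norm(r[1:] - r[:-1], axis=1).mean()
# bonds fluctuate around the O(sqrt(kT/k_spring)) thermal length scale
```

    In the paper's scheme, kinetic Monte Carlo moves for slip-spring creation and destruction would be interleaved with these BD steps, and stresses accumulated from the virial of the spring forces.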

  6. Determining disease intervention strategies using spatially resolved simulations.

    Directory of Open Access Journals (Sweden)

    Mark Read

    Full Text Available Predicting efficacy and optimal drug delivery strategies for small molecule and biological therapeutics is challenging due to the complex interactions between diverse cell types in different tissues that determine disease outcome. Here we present a new methodology to simulate inflammatory disease manifestation and test potential intervention strategies in silico using agent-based computational models. Simulations created using this methodology have explicit spatial and temporal representations, and capture the heterogeneous and stochastic cellular behaviours that lead to emergence of pathology or disease resolution. To demonstrate this methodology we have simulated the prototypic murine T cell-mediated autoimmune disease experimental autoimmune encephalomyelitis, a mouse model of multiple sclerosis. In the simulation, immune cell dynamics, neuronal damage and tissue-specific pathology emerge, closely resembling behaviour found in the murine model. Using the calibrated simulation we have analysed how changes in the timing and efficacy of T cell receptor signalling inhibition lead to either disease exacerbation or resolution. The technology described is a powerful new method to understand cellular behaviours in complex inflammatory disease, permits rational design of drug intervention strategies, and has provided new insights into the role of TCR signalling in autoimmune disease progression.

  7. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
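
The paper's attacker model is not reproduced here, but the bookkeeping it implies can be sketched: overlay attacker effects on a node's baseline execution-time and power estimates. The attacker names, scaling factors and the 50/50 CPU/radio power split below are all illustrative assumptions, not the paper's actual taxonomy or estimator.

```python
# Hypothetical sketch of attack-aware performance estimation for a WSN node.
BASELINE = {"exec_time_ms": 120.0, "power_mw": 3.0}

# each attacker type scales CPU workload and radio activity (illustrative)
ATTACKER_EFFECTS = {
    "none":              {"cpu_factor": 1.0, "radio_factor": 1.0},
    "jamming":           {"cpu_factor": 1.1, "radio_factor": 2.5},  # retransmissions
    "flooding":          {"cpu_factor": 1.8, "radio_factor": 1.6},  # bogus packets
    "sleep_deprivation": {"cpu_factor": 1.3, "radio_factor": 3.0},
}

def estimate_under_attack(baseline, attack):
    """Scale baseline estimates by the attack's CPU and radio factors,
    splitting the power budget 50/50 between CPU and radio for illustration."""
    eff = ATTACKER_EFFECTS[attack]
    return {
        "exec_time_ms": baseline["exec_time_ms"] * eff["cpu_factor"],
        "power_mw": baseline["power_mw"] * 0.5 * eff["cpu_factor"]
                    + baseline["power_mw"] * 0.5 * eff["radio_factor"],
    }

report = {a: estimate_under_attack(BASELINE, a) for a in ATTACKER_EFFECTS}
```

A real virtual platform would derive the factors from simulated embedded-software traces rather than fixed constants; the table simply shows the kind of output a developer would compare against the attack-free case.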

  9. An ultrasonic methodology for muscle cross section measurement to support space flight

    Science.gov (United States)

    Hatfield, Thomas R.; Klaus, David M.; Simske, Steven J.

    2004-09-01

    The number one priority for any manned space mission is the health and safety of its crew. The study of the short and long term physiological effects on humans is paramount to ensuring crew health and mission success. One of the challenges associated with studying the physiological effects of space flight on humans, such as loss of bone and muscle mass, has been that of readily attaining the data needed to characterize the changes. The small sampling size of astronauts, together with the fact that most physiological data collection tends to be rather tedious, continues to hinder elucidation of the underlying mechanisms responsible for the observed changes that occur in space. Better characterization of the muscle loss experienced by astronauts requires that new technologies be implemented. To this end, we have begun to validate a 360° ultrasonic scanning methodology for muscle measurements and have performed empirical sampling of a limb surrogate for comparison. Ultrasonic wave propagation was simulated using 144 stations of rotated arm and calf MRI images. These simulations were intended to provide a preliminary check of the scanning methodology and data analysis before its implementation with hardware. Pulse-echo waveforms were processed for each rotation station to characterize fat, muscle, bone, and limb boundary interfaces. The percentage error between MRI reference values and calculated muscle areas, as determined from reflection points for calf and arm cross sections, was -2.179% and +2.129%, respectively. These successful simulations suggest that ultrasound pulse scanning can be used to effectively determine limb cross-sectional areas. Cross-sectional images of a limb surrogate were then used to simulate signal measurements at several rotation angles, with ultrasonic pulse-echo sampling performed experimentally at the same stations on the actual limb surrogate to corroborate the results. The objective of the surrogate sampling was to compare the signal
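
The percentage-error comparison reported above reduces to computing an enclosed area from boundary reflection points and comparing it with an MRI reference. A minimal, generic sketch (not the study's signal-processing pipeline) uses the shoelace formula on boundary points, one per rotation station:

```python
import math

def polygon_area(points):
    """Shoelace formula for the area of a closed polygon given (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def percent_error(measured, reference):
    return (measured - reference) / reference * 100.0

# approximate a circular calf cross-section (radius 5 cm) with 144 boundary
# points, mirroring the 144 rotation stations mentioned in the abstract
pts = [(5.0 * math.cos(2 * math.pi * k / 144),
        5.0 * math.sin(2 * math.pi * k / 144)) for k in range(144)]
area = polygon_area(pts)                    # close to pi * 5**2
err = percent_error(area, math.pi * 25.0)   # small and negative (inscribed polygon)
```

The inscribed-polygon bias is well under the ±2.2% errors quoted in the record, so station count is not the limiting factor in such a measurement.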

  10. Metamodel-based robust simulation-optimization : An overview

    NARCIS (Netherlands)

    Dellino, G.; Meloni, C.; Kleijnen, J.P.C.; Dellino, Gabriella; Meloni, Carlo

    2015-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by

  11. Best practice strategies for validation of micro moulding process simulation

    DEFF Research Database (Denmark)

    Costa, Franco; Tosello, Guido; Whiteside, Ben

    2009-01-01

    The use of simulation for injection moulding design is a powerful tool which can be used up-front to avoid costly tooling modifications and reduce the number of mould trials. However, the accuracy of the simulation results depends on many component technologies and information, some of which can...... be easily controlled or known by the simulation analyst and others which are not easily known. For this reason, experimental validation studies are an important tool for establishing best practice methodologies for use during analysis set up on all future design projects. During the validation studies......, detailed information about the moulding process is gathered and used to establish these methodologies. Whereas in routine design projects, these methodologies are then relied on to provide efficient but reliable working practices. Data analysis and simulations on preliminary micro-moulding experiments have...

  12. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  13. Performance evaluation by simulation and analysis with applications to computer networks

    CERN Document Server

    Chen, Ken

    2015-01-01

    This book is devoted to the most used methodologies for performance evaluation: simulation using specialized software and mathematical modeling. An important part is dedicated to simulation, particularly its theoretical framework and the precautions to be taken in the implementation of the experimental procedure. These principles are illustrated by concrete examples achieved through operational simulation languages (OMNeT++, OPNET). Presented as a complementary approach, the mathematical method is essential to the simulation. Both methodologies based largely on the theory of
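
The complementarity of the two methodologies is easy to demonstrate on the classic M/M/1 queue: a discrete-event simulation of the mean time in system should agree with the analytic result 1/(μ − λ). This is a plain-Python sketch, not an OMNeT++ or OPNET model; arrival and service rates are illustrative.

```python
import random

def mm1_mean_time_in_system(lam, mu, n_customers=200_000, seed=1):
    """Discrete-event simulation of an M/M/1 queue via the Lindley-style
    recursion; returns the mean time in system (waiting + service)."""
    rng = random.Random(seed)
    t_arrival = 0.0
    t_free = 0.0        # time at which the server next becomes free
    total = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)       # Poisson arrivals
        start = max(t_arrival, t_free)          # wait if server busy
        t_free = start + rng.expovariate(mu)    # exponential service
        total += t_free - t_arrival
    return total / n_customers

lam, mu = 0.5, 1.0
simulated = mm1_mean_time_in_system(lam, mu)
analytic = 1.0 / (mu - lam)     # = 2.0 for these rates
```

Agreement between the two values is exactly the kind of cross-validation the book advocates before trusting either method alone.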

  14. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    Science.gov (United States)

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected is beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363

  15. Multi-agent systems simulation and applications

    CERN Document Server

    Uhrmacher, Adelinde M

    2009-01-01

    Methodological Guidelines for Modeling and Developing MAS-Based SimulationsThe intersection of agents, modeling, simulation, and application domains has been the subject of active research for over two decades. Although agents and simulation have been used effectively in a variety of application domains, much of the supporting research remains scattered in the literature, too often leaving scientists to develop multi-agent system (MAS) models and simulations from scratch. Multi-Agent Systems: Simulation and Applications provides an overdue review of the wide ranging facets of MAS simulation, i

  16. Response Surface Methodology's Steepest Ascent and Step Size Revisited

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; den Hertog, D.; Angun, M.E.

    2002-01-01

    Response Surface Methodology (RSM) searches for the input combination maximizing the output of a real system or its simulation.RSM is a heuristic that locally fits first-order polynomials, and estimates the corresponding steepest ascent (SA) paths.However, SA is scale-dependent; and its step size is
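
The two RSM ingredients named in the abstract, a locally fitted first-order polynomial and the steepest-ascent path it implies, can be sketched directly. This is a minimal illustration on a noise-free toy response over an orthogonal 2^2 design with a center point, not the paper's scale-independent variant; the coefficient formulas below are valid only because the coded design columns are orthogonal.

```python
import math

def fit_first_order(xs, ys):
    """OLS for y = b0 + b1*x1 + b2*x2 on an orthogonal coded design
    (factorial points plus center), where the normal equations decouple."""
    n = len(xs)
    b0 = sum(ys) / n
    b1 = sum(x[0] * y for x, y in zip(xs, ys)) / sum(x[0] ** 2 for x in xs)
    b2 = sum(x[1] * y for x, y in zip(xs, ys)) / sum(x[1] ** 2 for x in xs)
    return b0, b1, b2

def steepest_ascent(center, grad, step):
    """Move `step` along the estimated gradient (the SA direction)."""
    norm = math.hypot(*grad)
    return (center[0] + step * grad[0] / norm,
            center[1] + step * grad[1] / norm)

design = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0)]
f = lambda x1, x2: 3.0 + 2.0 * x1 + 1.0 * x2      # stand-in for the simulation
b0, b1, b2 = fit_first_order(design, [f(*p) for p in design])
next_point = steepest_ascent((0.0, 0.0), (b1, b2), step=1.0)
```

The scale-dependence criticized in the paper is visible here: rescaling x1 (say, measuring it in different units) changes b1 and therefore rotates the SA direction, even though the underlying response is unchanged.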

  17. Terminological Ambiguity: Game and Simulation

    Science.gov (United States)

    Klabbers, Jan H. G.

    2009-01-01

    Since its introduction in academia and professional practice during the 1950s, gaming has been linked to simulation. Although both fields have a few important characteristics in common, they are distinct in their form and underlying theories of knowledge and methodology. Nevertheless, in the literature, hybrid terms such as "gaming/simulation" and…

  18. Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Current methods for process simulation cannot simulate the cutting operation flexibly. As-is, to simulate a cutting operation, the user needs to prepare the result models of the cutting operation in advance, based on a pre-defined cutting path, depth and thickness with respect to a dismantling scenario, and those preparations must be rebuilt whenever the scenario changes. To-be, the user can change parameters and scenarios dynamically within the simulation configuration process, saving the time and effort needed to simulate cutting operations. This study presents a methodology for the cutting operation that can be applied throughout the simulation of the dismantling of nuclear facilities. We developed a cutting simulation module for cutting operations in the dismantling of nuclear facilities based on the proposed cutting methodology. We defined the requirements of the model cutting methodology based on the requirements of the dismantling of nuclear facilities, and implemented the cutting simulation module based on the API of a commercial CAD system.

  19. Using Modern Methodologies with Maintenance Software

    Science.gov (United States)

    Streiffert, Barbara A.; Francis, Laurie K.; Smith, Benjamin D.

    2014-01-01

    Jet Propulsion Laboratory uses multi-mission software produced by the Mission Planning and Sequencing (MPS) team to process, simulate, translate, and package the commands that are sent to a spacecraft. MPS works under the auspices of the Multi-Mission Ground Systems and Services (MGSS). This software consists of nineteen applications that are in maintenance. The MPS software is classified as either class B (mission critical) or class C (mission important). The scheduling of tasks is difficult because mission needs must be addressed prior to performing any other tasks and those needs often spring up unexpectedly. Keeping track of the tasks that everyone is working on is also difficult because each person is working on a different software component. Recently the group adopted the Scrum methodology for planning and scheduling tasks. Scrum is one of the newer methodologies typically used in agile development. In the Scrum development environment, teams pick the tasks that are to be completed within a sprint based on priority. The team specifies the sprint length, usually a month or less. Scrum is typically used for new development of one application. In the Scrum methodology there is a scrum master, a facilitator who tries to make sure that everything moves smoothly; a product owner, who represents the user(s) of the software; and the team. MPS is not the traditional environment for the Scrum methodology. MPS has many software applications in maintenance, team members who are working on disparate applications, many users, and is interruptible based on mission needs, issues and requirements. In order to use Scrum, the methodology needed adaptation to MPS. Scrum was chosen because it is adaptable. This paper is about the development of the process for using Scrum, a new development methodology, with a team that works on disparate interruptible tasks on multiple software applications.

  20. An Efficient Power Estimation Methodology for Complex RISC Processor-based Platforms

    OpenAIRE

    Rethinagiri , Santhosh Kumar; Ben Atitallah , Rabie; Dekeyser , Jean-Luc; Niar , Smail; Senn , Eric

    2012-01-01

    International audience; In this contribution, we propose an efficient power estimation methodology for complex RISC processor-based platforms. In this methodology, the Functional Level Power Analysis (FLPA) is used to set up generic power models for the different parts of the system. Then, a simulation framework based on a virtual platform is developed to evaluate accurately the activities used in the related power models. The combination of the two parts above leads to a heterogeneou...

  1. Mesoscopic simulations of crosslinked polymer networks

    NARCIS (Netherlands)

    Megariotis, G.; Vogiatzis, G.G.; Schneider, L.; Müller, M.; Theodorou, D.N.

    2016-01-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn

  2. Progressive design methodology for complex engineering systems based on multiobjective genetic algorithms and linguistic decision making

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    This work focuses on a design methodology that aids in design and development of complex engineering systems. This design methodology consists of simulation, optimization and decision making. Within this work a framework is presented in which modelling, multi-objective optimization and multi

  3. Assessment of radiotherapy photon beams: A practical and low cost methodology

    International Nuclear Information System (INIS)

    Reis, C.Q.M.; Nicolucci, P.

    2017-01-01

    Dosimetric properties of radiation beams used in radiotherapy are directly related to the energy spectrum produced by the treatment unit. Therefore, the development of methodologies to evaluate in a simple and accurate way the spectra of clinical beams can help establishing the quality control of the treatment. The purpose of this study is to present a practical and low cost methodology for determining primary spectra of radiotherapy photon beams from transmission measurements in attenuators of aluminum and using the method of the inverse Laplace transform. Monte Carlo simulation with PENELOPE code was used in order to evaluate and validate the reconstructed spectra by the calculation of dosimetric parameters that characterize the beam. Percentage depth dose values simulated with a 6 MV reconstructed spectrum shows maximum difference of 4.4% when compared to values measured at the corresponding clinical beam. For a 10 MV beam that difference was around 4.2%. Results obtained in this study confirm the adequacy of the proposed methodology for assessing primary photon beams produced by clinical accelerators. - Highlights: • Primary spectra of radiotherapy photon beams are determined from transmission measurements. • Monte Carlo calculations are used to evaluate the method of the inverse Laplace transform. • The proposed methodology is practical and of low cost for clinical purposes. • Results are in fair agreement with literature and clinical data.
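
The core inverse problem can be illustrated in miniature: transmission through an attenuator of thickness x is T(x) = Σᵢ wᵢ·exp(−μᵢ·x), a Laplace-transform-like sum, so measuring T at several thicknesses and solving for the weights wᵢ recovers a coarse spectrum. The μ values, bin count and thicknesses below are illustrative, not tabulated aluminium data, and real reconstructions must regularize against measurement noise.

```python
import math

MU = [0.20, 0.10, 0.05]      # attenuation coefficients per energy bin, cm^-1 (illustrative)
TRUE_W = [0.5, 0.3, 0.2]     # true spectral weights, summing to 1

def transmission(x, w):
    return sum(wi * math.exp(-mi * x) for wi, mi in zip(w, MU))

# "measure" transmission at as many thicknesses as unknowns,
# then solve the 3x3 linear system A w = T
xs = [0.0, 5.0, 15.0]
A = [[math.exp(-mi * x) for mi in MU] for x in xs]
T = [transmission(x, TRUE_W) for x in xs]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    n = 3
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

recovered = solve3(A, T)
```

With noise-free synthetic data the weights come back essentially exactly; the practical difficulty addressed by the paper's inverse-Laplace method is that real transmission data make this system ill-conditioned.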

  4. Stochastic simulation of PWR vessel integrity for pressurized thermal shock conditions

    International Nuclear Information System (INIS)

    Jackson, P.S.; Moelling, D.S.

    1984-01-01

    A stochastic simulation methodology is presented for performing probabilistic analyses of Pressurized Water Reactor vessel integrity. Application of the methodology to vessel-specific integrity analyses is described in the context of Pressurized Thermal Shock (PTS) conditions. A Bayesian method is described for developing vessel-specific models of the density of undetected volumetric flaws from ultrasonic inservice inspection results. Uncertainty limits on the probabilistic results due to sampling errors are determined from the results of the stochastic simulation. An example is provided to illustrate the methodology
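
The shape of such a stochastic simulation, and the sampling-error limits it quotes, can be sketched generically. Everything below is a toy stand-in: the flaw-depth and toughness distributions and the failure criterion are illustrative assumptions, not the vessel-specific PTS models; only the Monte Carlo bookkeeping (failure count, binomial standard error) mirrors the methodology.

```python
import math, random

def simulate_failure_prob(n_trials=100_000, seed=7):
    """Toy stochastic simulation: a trial 'fails' when a sampled flaw
    depth exceeds a critical depth derived from sampled toughness.
    Returns (p_hat, standard_error); the binomial standard error is the
    sampling-uncertainty limit on the probabilistic result."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        flaw_depth = rng.expovariate(1.0 / 3.0)    # mean 3 mm (illustrative)
        toughness = rng.gauss(200.0, 20.0)         # illustrative units
        critical_depth = toughness / 10.0          # toy failure criterion, mm
        if flaw_depth > critical_depth:
            failures += 1
    p = failures / n_trials
    se = math.sqrt(p * (1.0 - p) / n_trials)
    return p, se

p_hat, se = simulate_failure_prob()
```

Reporting p_hat ± a multiple of se is precisely the "uncertainty limits due to sampling errors" step described in the abstract; a Bayesian update of the flaw-density model would change the sampling distribution, not this bookkeeping.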

  5. C++ Toolbox for Object-Oriented Modeling and Dynamic Simulation of Physical Systems

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1999-01-01

    This paper presents the efforts made in an ongoing project that exploits the advantages of using object-oriented methodologies for describing and simulating dynamical systems. The background for this work is a search for new and better ways to simulate physical systems.

  6. Systematic uncertainties on Monte Carlo simulation of lead based ADS

    International Nuclear Information System (INIS)

    Embid, M.; Fernandez, R.; Garcia-Sanz, J.M.; Gonzalez, E.

    1999-01-01

    Computer simulations of the neutronic behaviour of ADS systems foreseen for actinide and fission product transmutation are affected by many sources of systematic uncertainties, both from the nuclear data and by the methodology selected when applying the codes. Several actual ADS Monte Carlo simulations are presented, comparing different options both for the data and for the methodology, evaluating the relevance of the different uncertainties. (author)

  7. Simulation of the noise transmission through automotive door seals

    CERN Document Server

    Hazir, Andreas

    2016-01-01

    Andreas Hazir is investigating the door seal contribution to the interior noise level of production vehicles. These investigations contain experimental contribution analyses of real production vehicles and of academic test cases as well as the development of a simulation methodology for noise transmission through sealing systems and side windows. The simulations are realized by coupling transient computational aeroacoustics of the exterior flow to nonlinear finite element simulations of the structural transmission. By introducing a linear transmission model, the setup and computational costs of the seal noise transmission are significantly reduced, resulting in the feasibility of numerical contribution analyses of real production vehicles. Contents Contribution Analyses of Production Vehicles Acoustic Excitation versus Aeroacoustic Excitation Development of a Simulation Methodology Sensitivity Analysis of Noise Transmission Simulations Target Groups Researchers and students in the field of automotive engineer...

  8. Controlling simulations of human-artifact interaction with scenario bundles

    NARCIS (Netherlands)

    Van der Vegte, W.F.; Rusák, Z.

    2008-01-01

    We introduce a methodology for modeling and simulating fully virtual human-artifact systems, aiming to resolve two issues in virtual prototyping: (i) integration of distinct modeling and simulation approaches, and (ii) extending the deployability of simulations towards conceptual design. We are

  9. Methodology for the digital calibration of analog circuits and systems with case studies

    CERN Document Server

    Pastre, Marc

    2006-01-01

    Methodology for the Digital Calibration of Analog Circuits and Systems shows how to relax the extreme design constraints in analog circuits, allowing the realization of high-precision systems even with low-performance components. A complete methodology is proposed, and three applications are detailed. To start with, an in-depth analysis of existing compensation techniques for analog circuit imperfections is carried out. The M/2+M sub-binary digital-to-analog converter is thoroughly studied, and the use of this very low-area circuit in conjunction with a successive approximations algorithm for digital compensation is described. A complete methodology based on this compensation circuit and algorithm is then proposed. The detection and correction of analog circuit imperfections is studied, and a simulation tool allowing the transparent simulation of analog circuits with automatic compensation blocks is introduced. The first application shows how the sub-binary M/2+M structure can be employed as a conventional di...
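
The key property that makes the M/2+M sub-binary structure suitable for successive-approximation calibration is redundancy: with a radix below 2, every weight is at most the sum of the smaller weights plus one LSB, so a greedy bit-by-bit search can approach any target to within the LSB even when individual weights are somewhat off. A hedged numeric sketch (radix and resolution illustrative, not the book's circuit values):

```python
# Successive approximation with a sub-binary (radix < 2) weight set.
RADIX = 1.8
N_BITS = 16
WEIGHTS = [RADIX ** i for i in range(N_BITS - 1, -1, -1)]   # MSB first
FULL_SCALE = sum(WEIGHTS)

def sar_search(target):
    """Greedy successive approximation: keep a bit if switching its
    weight in does not overshoot the target. Because the weights are
    sub-binary (redundant), the final undershoot is below the LSB (1.0)
    for any target up to full scale. Returns (code, dac_output)."""
    code, acc = [], 0.0
    for w in WEIGHTS:
        if acc + w <= target:
            code.append(1)
            acc += w
        else:
            code.append(0)
    return code, acc

target = 0.4217 * FULL_SCALE
code, out = sar_search(target)
residual = target - out     # guaranteed in [0, 1), i.e. below one LSB
```

A binary (radix-2) weight set gives no such guarantee once weights deviate from nominal, which is why the sub-binary structure tolerates low-performance components.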

  10. Robust Optimization in Simulation : Taguchi and Krige Combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2009-01-01

    Optimization of simulated systems is the goal of many methods, but most methods as- sume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  11. Robust optimization in simulation : Taguchi and Krige combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2012-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a “robust” methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by
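
Taguchi's view of the uncertain world, as adopted in these two records, amounts to crossing controllable decision factors with uncontrollable environmental (noise) factors and judging each decision by both its mean performance and its variability. A toy crossed-design sketch follows; the cost model and the mean-plus-one-standard-deviation robustness score are illustrative, and the papers replace Taguchi's statistical analysis with Kriging metamodels, which is not shown here.

```python
import itertools, statistics

def cost(order_qty, price, demand):
    """Toy newsvendor-style cost: purchase cost plus shortage/overage penalties."""
    overage = max(0, order_qty - demand)
    underage = max(0, demand - order_qty)
    return price * order_qty + 4.0 * underage + 1.0 * overage

decisions = [80, 100, 120]                            # controllable factor
environments = list(itertools.product([1.0, 1.2],     # price scenarios
                                      [70, 100, 130]))  # demand scenarios

# crossed design: every decision evaluated under every environment
summary = {}
for d in decisions:
    costs = [cost(d, p, dem) for p, dem in environments]
    summary[d] = (statistics.mean(costs), statistics.stdev(costs))

# robust choice: trade mean cost against variability across environments
robust = min(decisions, key=lambda d: summary[d][0] + summary[d][1])
```

Note the characteristic outcome: the decision with the best mean (100 here) is not the robust choice, because its cost swings more across environments.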

  12. REVIEW OF FLEXIBLE MANUFACTURING SYSTEM BASED ON MODELING AND SIMULATION

    Directory of Open Access Journals (Sweden)

    SAREN Sanjib Kumar

    2016-05-01

    Full Text Available This paper presents a literature survey on the use of simulation tools and methodologies, which have been widely used for manufacturing system design and analysis, in flexible manufacturing system design and operation problems. Over this period, simulation has proven to be an extremely useful analysis and optimization tool, and many articles, papers, and conferences have focused directly on the topic. The paper surveys the use of simulation tools and their methodologies in flexible manufacturing systems over the period 1982 to 2015.

  13. New quickest transient detection methodology. Nuclear engineering applications

    International Nuclear Information System (INIS)

    Wang, Xin; Jevremovic, Tatjana; Tsoukalas, Lefteri H.

    2003-01-01

    A new intelligent systems methodology for quickest online transient detection is presented. Based on information that includes, but is not limited to, statistical features, energy of frequency components and wavelet coefficients, the new methodology decides whether a transient has emerged. A fuzzy system makes the final decision, the membership functions of which are obtained by artificial neural networks and adjusted in an online manner. Comparisons are performed with conventional methods for transient detection using simulated and plant data. The proposed methodology could be useful in power plant operations, diagnostic and maintenance activities. It is also considered as a design tool for quick design modifications in a virtual design environment aimed at next generation University Research and Training Reactors (URTRs). (The virtual design environment is pursued as part of the Big-10 Consortium sponsored by the new Innovations in Nuclear Infrastructure and Education (INIE) program sponsored by the US Department of Energy.) (author)
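
The paper fuses statistical, spectral and wavelet features through a neural-network-tuned fuzzy system; only the simplest statistical ingredient is sketched here, as a hedged illustration. A window's mean is compared against a baseline, with the decision threshold expressed in units of the baseline's own variability; the window size, threshold and test signal are all illustrative.

```python
import math

def detect_transient(signal, window=10, k=6.0):
    """Flag the first index whose windowed mean departs from the initial
    window's mean by more than k times the initial window's mean absolute
    deviation: a crude statistical-feature detector, not the paper's
    fuzzy/wavelet fusion."""
    base = signal[:window]
    base_mean = sum(base) / window
    base_mad = sum(abs(v - base_mean) for v in base) / window or 1e-12
    for i in range(window, len(signal) - window + 1):
        chunk = signal[i:i + window]
        if abs(sum(chunk) / window - base_mean) > k * base_mad:
            return i
    return None

# steady process noise, then a step transient starting at t = 60
steady = [math.sin(0.7 * t) * 0.1 for t in range(60)]
transient = [5.0 + math.sin(0.7 * t) * 0.1 for t in range(60, 120)]
onset = detect_transient(steady + transient)   # fires as the window reaches the step
```

The "quickest detection" problem is exactly the trade-off visible here: a smaller k detects the step with fewer post-onset samples but raises the false-alarm rate on the steady portion.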

  14. Advantages of Westinghouse BWR control rod drop accidents methodology utilizing integrated POLCA-T code

    International Nuclear Information System (INIS)

    Panayotov, Dobromir

    2008-01-01

    The paper focuses on the activities pursued by Westinghouse in the development and licensing of the POLCA-T code Control Rod Drop Accident (CRDA) Methodology. The comprehensive CRDA methodology, which utilizes the PHOENIX4/POLCA7/POLCA-T calculation chain, foresees complete cycle-specific analysis. The methodology consists of determination of control rod (CR) candidates that, if dropped, could cause a significant reactivity excursion at any point in the fuel cycle; selection of limiting initial conditions for CRDA transient simulation; and the transient simulation itself. The Westinghouse methodology utilizes state-of-the-art methods. Unnecessary conservatisms in the methodology have been avoided to allow the accurate prediction of margin to design bases. This is mainly achieved by using the POLCA-T code for dynamic CRDA evaluations. The code belongs to the same calculation chain that is used for core design; thus the very same reactor, core, cycle and fuel database is used. This also allows reducing the uncertainties of input data and parameters that determine the energy deposition in the fuel. Uncertainty treatment, very selective use of conservatisms, selection of the initial conditions for limiting-case analyses, and incorporation of the licensed fuel performance code models into the POLCA-T code are also among the means of performing realistic CRDA transient analyses. (author)

  15. Simulation as a surgical teaching model.

    Science.gov (United States)

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors over the last years, such as the reduction of working hours, the optimization of the use of the operating room or patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  16. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited... system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and... Agent-Based Modeling Methodology for Analyzing Weapons Systems. Thesis, Casey D. Connors, Major, USA.

  17. Impact limiters for radioactive materials transport packagings: a methodology for assessment

    International Nuclear Information System (INIS)

    Mourao, Rogerio Pimenta

    2002-01-01

    This work aims at establishing a methodology for design assessment of a cellular material-filled impact limiter to be used as part of a radioactive material transport packaging. This methodology comprises the selection of the cellular material, its structural characterization by mechanical tests, the development of a case study in the nuclear field, preliminary determination of the best cellular material density for the case study, performance of the case and its numerical simulation using the finite element method. Among the several materials used as shock absorbers in packagings, the polyurethane foam was chosen, particularly the foam obtained from the castor oil plant (Ricinus communis), a non-polluting and renewable source. The case study carried out was the 9 m drop test of a package prototype containing radioactive wastes incorporated in a cement matrix, considered one of the most severe tests prescribed by the Brazilian and international transport standards. Prototypes with foam density pre-determined as ideal as well as prototypes using lighter and heavier foams were tested for comparison. The results obtained validate the methodology in that expectations regarding the ideal foam density were confirmed by the drop tests and the numerical simulation. (author)
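
The sizing logic behind such an impact limiter can be shown with a back-of-envelope calculation: the kinetic energy at the end of the regulatory 9 m drop must be absorbed by crushing foam at its plateau stress over the contact area. The mass, plateau stress and contact area below are illustrative assumptions, not the prototype's actual parameters.

```python
import math

g = 9.81
m = 500.0      # package mass, kg (illustrative)
h = 9.0        # regulatory drop height, m

v = math.sqrt(2 * g * h)        # impact speed, about 13.3 m/s
E_k = 0.5 * m * v ** 2          # kinetic energy = m*g*h, about 44 kJ

sigma_plateau = 5.0e6   # foam plateau crush stress, Pa (density-dependent)
area = 0.25             # crushed contact area, m^2 (illustrative)

crush_depth = E_k / (sigma_plateau * area)       # foam thickness consumed, m
peak_decel_g = sigma_plateau * area / (m * g)    # quasi-static deceleration, in g
```

This is why foam density matters in the drop tests described: a denser foam (higher plateau stress) needs less crush depth but transmits a harsher deceleration to the contents, and the tested "ideal" density balances the two.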

  18. Study of methodology diversification in diagnostics

    International Nuclear Information System (INIS)

    Suda, Kazunori; Yonekawa, Tsuyoshi; Yoshikawa, Shinji; Hasegawa, Makoto

    1999-03-01

    There are several research activities to enhance the safety and reliability of nuclear power plant operation and maintenance. We are developing a concept of an autonomous operation system in which the role of operators is replaced with artificial intelligence. The purpose of the study described in this report is to develop an operator support system for abnormal plant situations. Conventionally, diagnostic modules based on individual methodologies such as expert systems have been developed and verified. In this report, methodology diversification is considered in order to integrate diagnostic modules whose performance has been confirmed using information processing techniques. Technical issues to be considered in diagnostic methodology diversification are: 1) reliability of input data; 2) diversification of knowledge models, algorithms and reasoning schemes; 3) mutual complement and robustness. The diagnostic modules utilizing the different approaches defined along with the diversification strategy were evaluated using a fast breeder reactor plant simulator. As a result, we confirmed that no single diagnostic module can meet the accuracy criteria for the entire set of anomaly events. In contrast, we confirmed that every abnormality could be precisely diagnosed by a mutual combination of modules. In other words, the legitimacy of the approach selected by the diversification strategy was shown, and methodology diversification attained clear efficiency for abnormality diagnosis. It has also been confirmed that the diversified diagnostic system implemented in this study is able to maintain its accuracy even in cases where the encountered scale of abnormality differs from the reference cases embedded in the knowledge base. (author)

  19. Collection and analysis of training simulator data

    International Nuclear Information System (INIS)

    Krois, P.A.; Haas, P.M.

    1985-01-01

    The purposes of this paper are: (1) to review the objectives, approach, and results of a series of research experiments performed on nuclear power plant training simulators in support of regulatory and research programs of the US Nuclear Regulatory Commission (NRC), and (2) to identify general research issues that may lead to an improved research methodology using the training simulator as a field setting. Research products consist of a refined field research methodology, a data store on operator performance, and specific results pertinent to NRC regulatory positions. Issues and potential advances in operator performance measurement are discussed.

  20. Safety-related operator actions: methodology for developing criteria

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Gray, L.H.; Beare, A.N.; Barks, D.B.; Gomer, F.E.

    1984-03-01

    This report presents a methodology for developing criteria for the design evaluation of safety-related actions by nuclear power plant reactor operators, and identifies a supporting database. It is the eleventh and final NUREG/CR report on the Safety-Related Operator Actions Program, conducted by Oak Ridge National Laboratory for the US Nuclear Regulatory Commission. The operator performance data were developed from training simulator experiments involving operator responses to simulated scenarios of plant disturbances; from field data on events with similar scenarios; and from task-analytic data. A conceptual model to integrate the data was developed, and a computer simulation of the model was run using the SAINT modeling language. A quantitative predictive model of operator performance, the Operator Personnel Performance Simulation (OPPS) Model, driven by task requirements, information presentation, and system dynamics, is proposed. The model output, a probability distribution of predicted time to correctly complete safety-related operator actions, provides data for the objective evaluation of quantitative design criteria.
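The OPPS output described here, a probability distribution of time to complete an action, is the natural product of a Monte Carlo task-network simulation. A minimal sketch under assumed lognormal subtask times follows; the three-step task list and its parameters are illustrative, not data from the SAINT model.

```python
import math
import random

def simulate_response_time(tasks, n_runs=10000, seed=1):
    """tasks: list of (median_s, sigma) lognormal parameters per subtask.
    Returns the sorted total-time samples (an empirical distribution)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        # each subtask time is lognormal: median * exp(sigma * Z)
        t = sum(median * math.exp(sigma * rng.gauss(0, 1))
                for median, sigma in tasks)
        totals.append(t)
    return sorted(totals)

# illustrative three-step action: detect, diagnose, act (assumed values)
tasks = [(15, 0.4), (30, 0.5), (20, 0.3)]
times = simulate_response_time(tasks)
median = times[len(times) // 2]
p95 = times[int(0.95 * len(times))]
print(f"median ~ {median:.0f} s, 95th percentile ~ {p95:.0f} s")
```

Design criteria of the kind the report proposes would then be checked against a chosen percentile of this distribution rather than a single point estimate.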

  1. Methodology for neural networks prototyping. Application to traffic control

    Energy Technology Data Exchange (ETDEWEB)

    Belegan, I.C.

    1998-07-01

    The work described in this report was carried out in the context of the European project ASTORIA (Advanced Simulation Toolbox for Real-World Industrial Application in Passenger Management and Adaptive Control), and concerns the development of an advanced toolbox for complex transportation systems. Our work was focused on the methodology for prototyping a set of neural networks corresponding to specific strategies for traffic control and congestion management. The tool used for prototyping is SNNS (Stuttgart Neural Network Simulator), developed at the University of Stuttgart, Institute for Parallel and Distributed High Performance Systems, and the real data from the field were provided by ZELT. This report is structured into six parts. The introduction gives some insights into traffic control and its approaches. The second chapter discusses the various existing control strategies. The third chapter is an introduction to the field of neural networks. Data analysis and pre-processing are described in the fourth chapter. In the fifth chapter, the methodology for prototyping the neural networks is presented. Finally, conclusions and further work are presented. (author) 14 refs.

  2. Virtual reality and live simulation: a comparison between two simulation tools for assessing mass casualty triage skills.

    Science.gov (United States)

    Luigi Ingrassia, Pier; Ragazzoni, Luca; Carenzo, Luca; Colombo, Davide; Ripoll Gallardo, Alba; Della Corte, Francesco

    2015-04-01

    This study tested the hypothesis that virtual reality simulation is equivalent to live simulation for testing naive medical students' abilities to perform mass casualty triage using the Simple Triage and Rapid Treatment (START) algorithm in a simulated disaster scenario, and for detecting the improvement in these skills after a teaching session. Fifty-six students in their last year of medical school were randomized into two groups (A and B). The same scenario, a car accident, was developed identically in the two simulation methodologies: virtual reality and live simulation. On day 1, group A was exposed to the live scenario and group B was exposed to the virtual reality scenario, aiming to triage 10 victims. On day 2, all students attended a 2-h lecture on mass casualty triage, specifically the START triage method. On day 3, groups A and B were crossed over. The groups' abilities to perform mass casualty triage in terms of triage accuracy, intervention correctness, and speed in the scenarios were assessed. Triage and lifesaving treatment scores were assessed equally by virtual reality and live simulation on day 1 and on day 3. Both simulation methodologies detected an improvement in triage accuracy and treatment correctness from day 1 to day 3. Virtual reality simulation proved to be a valuable tool, equivalent to live simulation, for testing medical students' abilities to perform mass casualty triage and for detecting improvement in such skills.
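The START algorithm taught on day 2 is a small decision tree and is straightforward to express in code. The sketch below follows the commonly published adult START flow (ambulation, breathing, respiratory rate, perfusion, mentation); the boolean/numeric inputs are a simplification for illustration, not the study's assessment instrument.

```python
def start_triage(walking, breathing, breathing_after_airway,
                 resp_rate, cap_refill_s, obeys_commands):
    """Return a START category for one casualty."""
    if walking:
        return "MINOR"          # green: ambulatory
    if not breathing:
        if not breathing_after_airway:
            return "EXPECTANT"  # black: no breathing after airway opened
        return "IMMEDIATE"      # red: breathing resumed after repositioning
    if resp_rate > 30:
        return "IMMEDIATE"      # red: respiratory distress
    if cap_refill_s > 2:
        return "IMMEDIATE"      # red: perfusion failure
    if not obeys_commands:
        return "IMMEDIATE"      # red: altered mentation
    return "DELAYED"            # yellow

print(start_triage(False, True, True, 24, 1.5, True))  # -> DELAYED
```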

  3. Methodological framework for economical and controllable design of heat exchanger networks: Steady-state analysis, dynamic simulation, and optimization

    International Nuclear Information System (INIS)

    Masoud, Ibrahim T.; Abdel-Jabbar, Nabil; Qasim, Muhammad; Chebbi, Rachid

    2016-01-01

    Highlights: • HEN total annualized cost, heat recovery, and controllability are considered in the framework. • Steady-state and dynamic simulations are performed. • The effect of bypasses on total annualized cost and controllability is reported. • Optimum bypass fractions are found from closed- and open-loop efforts. - Abstract: The problem of interaction between the economic design and the control system design of heat exchanger networks (HENs) is addressed in this work. Controllability issues are incorporated into the classical design of HENs. A new methodological framework is proposed to account for both the economics and the controllability of HENs. Two classical design methods are employed, namely Pinch and superstructure designs. Controllability measures such as the relative gain array (RGA) and singular value decomposition (SVD) are used. The proposed framework also presents a bypass placement strategy for optimal control of the designed network. A case study is used to test the applicability of the framework and to assess both economics and controllability. The results indicate that the superstructure design is more economical and controllable than the Pinch design. The controllability of the designed HEN is evaluated using the Aspen-HYSYS closed-loop dynamic simulator. In addition, a sensitivity analysis is performed to study the effect of bypass fractions on the total annualized cost and controllability of the designed HEN. The analysis shows that increasing any bypass fraction increases the total annualized cost. However, no corresponding trend was observed for the control effort, quantified by minimizing the integral of the squared errors (ISE) between the controlled stream temperatures and their targets (set-points). An optimal ISE point is found at a certain bypass fraction, which does not correspond to the minimal total annualized cost. The bypass fractions are validated via open-loop simulation and the additional cooling and
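Of the controllability measures mentioned, the relative gain array is compact enough to illustrate directly: RGA = G element-wise-times (G inverse) transposed. A minimal 2x2 sketch with an invented steady-state gain matrix (not the paper's HEN) follows.

```python
def rga_2x2(g):
    """Relative gain array for a 2x2 gain matrix: RGA_ij = g_ij * inv(G)_ji."""
    (a, b), (c, d) = g
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular gain matrix: loop pairing is undecidable")
    lam = a * d / det           # RGA_11 = g11 * inv(G)_11
    off = -b * c / det          # RGA_12 = g12 * inv(G)_21
    return [[lam, off], [off, lam]]   # each row and column sums to 1

G = [[2.0, 0.5],
     [0.4, 1.0]]                # invented steady-state gains
rga = rga_2x2(G)
print(rga)
```

A diagonal element close to 1 favours the diagonal pairing (u1-y1, u2-y2); values far from 1 or negative flag strong loop interaction, which is the kind of verdict the framework uses to compare the Pinch and superstructure designs.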

  4. Air pollution simulation and geographical information systems (GIS) applied to Athens International Airport.

    Science.gov (United States)

    Theophanides, Mike; Anastassopoulou, Jane

    2009-07-01

    This study presents an improved methodology for analysing atmospheric pollution around airports using Gaussian-plume numerical simulation integrated with Geographical Information Systems (GIS). The new methodology focuses on streamlining the lengthy analysis process for Airport Environmental Impact Assessments by integrating the definition of emission sources, simulating, and displaying the results in a GIS environment. One objective of the research is to validate the methodology as applied to Athens International Airport, "Eleftherios Venizelos", producing a realistic estimate of emission inventories, dispersion simulations, and comparisons to measured data. The methodology used a combination of the Emission Dispersion and Modelling System (EDMS) and the Atmospheric Dispersion and Modelling System (ADMS) to improve the analysis process. The second objective is to conduct numerical simulations under various adverse conditions (scenarios) and assess the dispersion in the surrounding areas. The study concludes that the use of GIS in environmental assessments provides a valuable advantage for organizing data and entering accurate geographical/topological information for the simulation engine. Emissions simulation produced estimates within 10% of published values. Dispersion simulations indicate that airport pollution will affect neighbouring cities such as Rafina and Loutsa; presently, there are no measured controls in these areas. In some cases, airport pollution can contribute as much as 40% of permissible EU levels of VOCs.
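The kernel underlying Gaussian-plume dispersion tools of this kind is the textbook ground-reflection formula. The sketch below is that generic formula, not EDMS's or ADMS's actual implementation, and the source strength, wind speed, stack height, and dispersion widths are invented.

```python
import math

def plume_concentration(q_g_s, u_m_s, y, z, h_stack, sigma_y, sigma_z):
    """Steady Gaussian plume with ground reflection; concentration in g/m^3
    at crosswind offset y and height z, for dispersion widths sigma_y/sigma_z
    (which grow with downwind distance and atmospheric stability)."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h_stack) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h_stack) ** 2 / (2 * sigma_z ** 2)))
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# invented case: 10 g/s source, 3 m/s wind, 20 m stack, ground-level
# centreline receptor where sigma_y = 60 m and sigma_z = 30 m
c = plume_concentration(10.0, 3.0, y=0.0, z=0.0, h_stack=20.0,
                        sigma_y=60.0, sigma_z=30.0)
print(f"{c * 1e6:.0f} ug/m^3")
```

The GIS layer's role in the methodology is then essentially bookkeeping: placing many such sources and receptors on accurate coordinates and mapping the resulting concentration field.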

  5. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (a biased source distribution and an importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the (memory-intensive) deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully capture low-energy particle transport and secondary gamma emission. Compared with
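The core of CADIS can be shown in a few lines: given an adjoint-flux (importance) estimate per source cell, bias the source toward important cells and assign starting particle weights so that weight times importance is constant. The three-cell numbers below are invented for illustration; a real run obtains the adjoint from the discrete-ordinates solve.

```python
def cadis_parameters(source, adjoint):
    """source: per-cell emission probabilities; adjoint: per-cell importance.
    Returns (biased source pdf, starting weights, response estimate R)."""
    r = sum(q * a for q, a in zip(source, adjoint))   # R = sum q * phi-dagger
    biased = [q * a / r for q, a in zip(source, adjoint)]
    # a particle born from the biased source in cell i carries weight
    # q_i / q-hat_i = R / adjoint_i, so weight x importance = R everywhere
    weights = [r / a for a in adjoint]
    return biased, weights, r

source = [0.5, 0.3, 0.2]            # invented source distribution
adjoint = [0.01, 0.1, 1.0]          # cells nearer the detector matter more
biased, weights, r = cadis_parameters(source, adjoint)
print(biased, weights, r)
```

Note how the weakest source cell dominates the biased distribution because of its high importance; that is exactly the mechanism that pushes histories toward the tally region.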

  6. Neutronics and thermal-hydraulics coupling: some contributions toward an improved methodology to simulate the initiating phase of a severe accident in a sodium fast reactor

    International Nuclear Information System (INIS)

    Guyot, Maxime

    2014-01-01

    This project is dedicated to the analysis and quantification of the bias associated with the computational methodology for simulating the initiating phase of severe accidents in Sodium Fast Reactors. A deterministic approach is carried out to assess the consequences of a severe accident by adopting best-estimate design evaluations. An objective of this deterministic approach is to provide guidance for mitigating severe accident developments and re-criticalities through the implementation of adequate design measures. These studies are generally based on modern simulation techniques to test and verify a given design. The new approach developed in this project aims to improve the safety assessment of Sodium Fast Reactors by decreasing the bias related to the deterministic analysis of severe accident scenarios. During the initiating phase, the subassembly wrapper tubes keep their mechanical integrity, and material disruption and dispersal are primarily one-dimensional. For this reason, the evaluation methodology for the initiating phase relies on a multiple-channel approach, where a channel typically represents an average pin in a subassembly or a group of similar subassemblies. In the multiple-channel approach, the core thermal-hydraulics model is composed of 1-D or 2-D channels and is coupled to a neutronics module to provide an estimate of the reactor power level. In this project, a new computational model, based on a multi-physics coupling, has been developed to extend the initiating-phase modeling. This model has been applied to obtain information, unavailable until now, regarding the neutronics and thermal-hydraulics models and their coupling. (author) [fr]

  7. A methodology to enlarge narrow stability windows

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Ewerton M.P.; Pastor, Jorge A.S.C.; Fontoura, Sergio A.B. [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Civil. Grupo de Tecnologia e Engenharia de Petroleo

    2004-07-01

    The stability window in a wellbore design is defined by the difference between the fracture pressure and the collapse pressure. Deep-water environments typically present narrow stability windows, because the rocks have low strength due to the under-compaction process. Horizontal wells are also often drilled to obtain better development of reservoirs placed in thin layers of sandstone. In this scenario, several challenges are faced when drilling in deep water. The traditional approach for predicting instabilities is to determine collapses and fractures at the borehole wall. However, the initiation of rupture does not mean that the borehole fails to perform its function as a wellbore. Thus, a methodology by which the stability window may be enlarged is desirable. This paper presents a practical analytical methodology that allows wellbore pressures smaller than the conventional collapse pressure, i.e., the pressure based upon failure at the borehole wall. This means that a collapse region (shear failure) will develop around the borehole wall. This collapse region is pre-defined, and a failure criterion is used to estimate its size. The methodology is implemented in user-friendly software that can perform analyses of stress, pore pressure, formation failure, mud weight, and mud salinity design for drilling in shale formations. Simulations of wellbore drilling in a narrow stability window environment are performed to demonstrate the improvements obtained by using the methodology. (author)
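The conventional window the authors start from can be sketched with the Kirsch solution for a vertical well under isotropic horizontal stress: a Mohr-Coulomb collapse bound below and a tensile fracture bound above. The sketch below uses illustrative stresses in MPa; the paper's enlargement then consists of tolerating mud pressures below this conventional collapse bound.

```python
import math

def stability_window(sigma_h, pore_p, ucs, friction_deg, tensile=0.0):
    """Conventional bounds for a vertical well with isotropic total horizontal
    stress sigma_h. Kirsch hoop stress at the wall: sigma_theta = 2*sigma_h - pw."""
    q = (1 + math.sin(math.radians(friction_deg))) / \
        (1 - math.sin(math.radians(friction_deg)))
    # Mohr-Coulomb shear failure at the wall (effective stresses):
    #   2*sigma_h - pw - pore_p = ucs + q * (pw - pore_p)
    collapse = (2 * sigma_h - ucs + (q - 1) * pore_p) / (1 + q)
    # tensile (fracture) failure at the wall: sigma_theta' = -tensile
    fracture = 2 * sigma_h - pore_p + tensile
    return collapse, fracture

col, frac = stability_window(sigma_h=30.0, pore_p=20.0, ucs=10.0,
                             friction_deg=30.0)
print(f"mud-pressure window: {col:.1f} to {frac:.1f} MPa")
```

Lowering the UCS (the under-compacted deep-water case) pushes the collapse bound up and narrows the window, which is the motivation for the paper's relaxed, breakout-tolerant criterion.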

  8. A European test of pesticide-leaching models: methodology and major recommendations

    NARCIS (Netherlands)

    Vanclooster, M.; Boesten, J.J.T.I.; Trevisan, M.; Brown, C.D.; Capri, E.; Eklo, O.M.; Gottesbüren, B.; Gouy, V.; Linden, van der A.M.A.

    2000-01-01

    Testing of pesticide-leaching models is important in view of their increasing use in pesticide registration procedures in the European Union. This paper presents the methodology and major conclusions of a test of pesticide-leaching models. Twelve models simulating the vertical one-dimensional

  9. Safety and reliability in nuclear power plants operation using total range simulators for operators training

    International Nuclear Information System (INIS)

    Gleason, E.; Espinosa, G.; Rodriguez, S.

    1993-01-01

    This paper presents a methodology developed for configuration management of the Unit 1 simulator of the Laguna Verde nuclear power station. The purpose of this methodology is to complete the simulator modernization and to provide interaction with the power station's administration. The validation and application of this methodology are also presented, as well as the results to date. (B.C.A.). 12 refs, 01 fig

  10. Methodology to classify accident sequences of an Individual Plant Examination according to the severe releases for BWR type reactors

    International Nuclear Information System (INIS)

    Sandoval V, S.

    2001-01-01

    The Light Water Reactor (LWR) operation regulations require every operating plant to perform an Individual Plant Examination (IPE) study. One of the main purposes of an IPE is to gain a more quantitative understanding of the overall probabilities of core damage and fission product releases. Probabilistic Safety Analysis (PSA) methodologies and Severe Accident Analysis are used to perform IPE studies. PSA methodologies are used to identify and analyse the set of event sequences that might lead to a fission product release from a nuclear power plant; these methodologies are combinatorial in nature and generate thousands of sequences. Among other uses within an IPE, severe accident simulations are used to determine the characteristics of the fission product release for the identified sequences, so that the releases can be understood and characterized. A vast amount of resources is required to simulate and analyse every IPE sequence. This effort is unnecessary if similar sequences are grouped. The grouping scheme must achieve an efficient trade-off between problem reduction and accuracy. The methodology presented in this work enables an accurate characterization and analysis of the IPE fission product releases by using a reduced problem. The methodology encourages the use of plant-specific simulations. (Author)

  11. Multi-Level Simulated Fault Injection for Data Dependent Reliability Analysis of RTL Circuit Descriptions

    Directory of Open Access Journals (Sweden)

    NIMARA, S.

    2016-02-01

    This paper proposes a data-dependent reliability evaluation methodology for digital systems described at Register Transfer Level (RTL). It uses a hybrid hierarchical approach, combining the accuracy provided by Gate Level (GL) Simulated Fault Injection (SFI) with the low simulation overhead required by RTL fault injection. The methodology comprises the following steps: correct simulation of the RTL system according to a set of input vectors, hierarchical decomposition of the system into basic RTL blocks, logic synthesis of the basic RTL blocks, data-dependent SFI for the GL netlists, and RTL SFI. The proposed methodology has been validated in terms of accuracy on a medium-sized circuit, the parallel comparator used in the Check Node Unit (CNU) of Low-Density Parity-Check (LDPC) decoders. The methodology has also been applied to the reliability analysis of a 128-bit Advanced Encryption Standard (AES) crypto-core, for which GL simulation was prohibitive in terms of required computational resources.
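The essence of simulated fault injection, corrupt an internal signal, re-simulate, and compare against the golden run, can be illustrated at a purely behavioral level. This toy campaign (a single bit-flip on one operand of a comparator, echoing the CNU comparator mentioned above) is far simpler than the paper's RTL/GL flow, but it shows the mechanic and why the failure rate is data-dependent.

```python
import random

def golden_comparator(a, b):
    return int(a > b)

def faulty_comparator(a, b, flip_bit):
    # inject a single-bit upset into operand 'a' before the compare
    return int((a ^ (1 << flip_bit)) > b)

def fault_injection_campaign(n_vectors=1000, width=8, seed=7):
    """Randomized campaign: for each input vector, flip one random bit of
    one operand and count runs whose output diverges from the golden run."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_vectors):
        a, b = rng.randrange(2 ** width), rng.randrange(2 ** width)
        bit = rng.randrange(width)
        if faulty_comparator(a, b, bit) != golden_comparator(a, b):
            failures += 1
    return failures / n_vectors

rate = fault_injection_campaign()
print(f"observed failure rate: {rate:.2%}")
```

Low-order bit flips rarely change the comparison while high-order flips usually do, so the observed rate depends on both the fault site and the input data, which is precisely the data dependence the methodology targets.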

  12. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce the cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models underlying predictive simulations. The cost-saving goals of these programs require minimizing the number of validation experiments. The use of more multi-scale, multi-physics models introduces complexities into the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow extension of results to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  13. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce the cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models underlying predictive simulations. The cost-saving goals of these programs require minimizing the number of validation experiments. The use of more multi-scale, multi-physics models introduces complexities into the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow extension of results to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  14. Digraph-fault tree methodology for the assessment of material control systems

    International Nuclear Information System (INIS)

    Lambert, H.E.; Lim, J.J.; Gilman, F.M.

    1979-01-01

    The Lawrence Livermore Laboratory, under contract to the United States Nuclear Regulatory Commission, is developing a procedure to assess the effectiveness of material control and accounting systems at nuclear fuel cycle facilities. The purpose of a material control and accounting system is to prevent the theft of special nuclear material such as plutonium or highly enriched uranium. This report presents the use of a directed-graph and fault-tree analysis methodology in the assessment procedure. The methodology is demonstrated by assessing a simulated material control system design, the Test Bed.

  15. Development of Constraint Force Equation Methodology for Application to Multi-Body Dynamics Including Launch Vehicle Stage Separation

    Science.gov (United States)

    Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.

    2016-01-01

    The objective of this report is to develop and implement a physics-based method for the analysis and simulation of multi-body dynamics, including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling the constraint forces and moments acting at joints while the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS (Registered Trademark) and Autolev, two industry-standard benchmark codes for multi-body dynamic analysis and simulation; however, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in the Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and the Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation of its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, the POST2/CFE software can be used for performing conceptual-level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.
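The constraint-force idea behind CFE, solving for the joint reaction forces while the bodies are still connected, reduces in the simplest case to an augmented linear system in the accelerations and a Lagrange multiplier. A minimal 1-D sketch (two masses joined by a rigid link; an illustration of the principle, not NASA's implementation) can be solved in closed form:

```python
def solve_rigid_link_1d(m1, m2, f1, f2):
    """Two bodies on a line joined by a rigid link (constraint x2 - x1 = L).
    With constraint Jacobian J = [-1, 1] and multiplier lam, the equations are
        m1*a1 = f1 - lam,   m2*a2 = f2 + lam,   a2 - a1 = 0,
    i.e. the augmented system [M J^T; J 0][a; lam] = [F; 0]."""
    a = (f1 + f2) / (m1 + m2)   # connected bodies share one acceleration
    lam = m2 * a - f2           # back-substitute into the body-2 equation
    return a, a, lam            # J^T * lam is the joint (constraint) force

a1, a2, lam = solve_rigid_link_1d(2.0, 1.0, 9.0, 0.0)
print(a1, a2, lam)  # both accelerate at 3.0 m/s^2; the link transmits 3.0 N
```

At separation, the multiplier terms are simply dropped and each body integrates its own free dynamics, which is why the same formulation can carry a trajectory through the connected and separated phases.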

  16. Simulations Of Neutron Beam Optic For Neutron Radiography Collimator Using Ray Tracing Methodology

    International Nuclear Information System (INIS)

    Norfarizan Mohd Said; Muhammad Rawi Mohamed Zin

    2014-01-01

    Ray-tracing is a technique for simulating the performance of neutron instruments. McStas, an open-source software package based on a meta-language, is a tool for carrying out ray-tracing simulations. The program has been successfully applied to investigating neutron guide design, flux optimization, and other related areas with high complexity and precision. The aim of this paper is to discuss the implementation of the ray-tracing technique with McStas for simulating the performance of the neutron collimation system developed for the imaging system of the TRIGA RTP reactor. The code for the simulation was developed and the results are presented. The analysis of the performance is reported and discussed. (author)

  17. Vertical Footbridge Vibrations: The Response Spectrum Methodology

    DEFF Research Database (Denmark)

    Georgakis, Christos; Ingólfsson, Einar Thór

    2008-01-01

    In this paper, a novel, accurate and readily codifiable methodology for the prediction of vertical footbridge response is presented. The methodology is based on the well-established response spectrum approach used in the majority of the world’s current seismic design codes of practice. The concept of a universally applicable reference response spectrum is introduced, from which the pedestrian-induced vertical response of any footbridge may be determined, based on a defined “event” and the probability of occurrence of that event. A series of Monte Carlo simulations are undertaken for the development … period is introduced and its implication on the calculation of footbridge response is discussed. Finally, a brief comparison is made between the theoretically predicted pedestrian-induced vertical response of an 80 m long RC footbridge (as an example) and actual field measurements. The comparison shows …

  18. International Meeting on Simulation in Healthcare

    Science.gov (United States)

    2010-02-01

    Lean principles originate from Japanese manufacturing, particularly the Toyota production system. “Lean thinking” is essentially a series of principles aimed at creating value for customers through the elimination of wasteful activity, and the meeting examined its translation into healthcare. The meeting’s “Simulation Cinema” video sessions provide an opportunity for video presentations.

  19. Convergence studies of deterministic methods for LWR explicit reflector methodology

    International Nuclear Information System (INIS)

    Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.

    2013-01-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error for core analyses of the Swiss operating LWRs, all of which are of GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  20. Analytical model and behavioral simulation approach for a ΣΔ fractional-N synthesizer employing a sample-hold element

    DEFF Research Database (Denmark)

    Cassia, Marco; Shah, Peter Jivan; Bruun, Erik

    2003-01-01

    is discussed. Also, a new methodology for behavioral simulation is presented: the proposed methodology is based on an object-oriented event-driven approach and offers the possibility to perform very fast and accurate simulations, and the theoretical models developed validate the simulation results. We show...

  1. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms

    International Nuclear Information System (INIS)

    Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.

    2015-01-01

    The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself) and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process, by establishing a method for optimizing the characterization parameters of the detector, through a computational procedure which could be reproduced at a standard research lab. This method consists of determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • Detector characterized using as reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and composition. • Good agreement
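The characterization scheme described above pairs a forward simulation with an evolutionary search over detector parameters. The following is a minimal, hypothetical sketch of that loop: `forward_model` is a toy closed-form stand-in for a real Monte Carlo photon-transport run, and the two parameters, energies, and population settings are illustrative assumptions, not values from the record.

```python
import random

def forward_model(params, energies):
    """Toy stand-in for a Monte Carlo photon-transport run: maps detector
    parameters (e.g. crystal radius, dead-layer thickness) to full-energy
    peak efficiencies at the given energies (keV)."""
    r, d = params
    return [r * (e ** -0.5) * (1.0 - d / (d + e)) for e in energies]

def fitness(params, energies, reference):
    """Sum of squared relative deviations from the reference FEPEs."""
    model = forward_model(params, energies)
    return sum(((m - f) / f) ** 2 for m, f in zip(model, reference))

def evolve(energies, reference, pop_size=30, generations=200, seed=1):
    """Elitist (mu + lambda)-style evolutionary search: keep the best half
    of the population and refill it with Gaussian-mutated children."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 5.0), rng.uniform(0.01, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, energies, reference))
        parents = pop[: pop_size // 2]
        children = [(max(1e-6, r + rng.gauss(0.0, 0.05)),
                     max(1e-6, d + rng.gauss(0.0, 0.01)))
                    for r, d in parents]
        pop = parents + children
    return min(pop, key=lambda p: fitness(p, energies, reference))
```

Starting from reference efficiencies generated with known parameters, the search drives the relative misfit toward zero, mirroring the paper's idea of recovering characterization parameters from a set of reference FEPEs.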

  2. Numerical simulations of viscoelastic flows with free surfaces

    DEFF Research Database (Denmark)

    Comminal, Raphaël; Spangenberg, Jon; Hattel, Jesper Henri

    2013-01-01

    We present a new methodology to simulate viscoelastic flows with free-surfaces. These simulations are motivated by the modelling of polymers manufacturing techniques, such as extrusion and injection moulding. One of the consequences of viscoelasticity is that polymeric materials have a “memory...

  3. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order both to reduce the over-conservatism of deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and to simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of the maximum value of the mean RIP times the RIP sensitivity factor for each variable. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
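The variance propagation described above, a square sum of per-variable contributions, is the first-order system moment method. A minimal sketch, with all names and the exact form of each contribution (mean RIP × sensitivity factor × relative uncertainty) assumed purely for illustration:

```python
import math

def max_rip_std(mean_rip, sensitivities, rel_uncertainties):
    """First-order (system moment) estimate of the maximum RIP standard
    deviation: the square root of the sum of squares of the individual
    contributions, each being the mean RIP times that variable's
    sensitivity factor times its relative uncertainty."""
    var = sum((mean_rip * s * u) ** 2
              for s, u in zip(sensitivities, rel_uncertainties))
    return math.sqrt(var)
```

For example, with a mean RIP of 10 (arbitrary units) and two inputs whose combined contributions are 6 and 8, the bounding standard deviation is 10, i.e. the root of the square sum, with no Monte Carlo sampling or response surface required.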

  4. New scoring methodology improves the sensitivity of the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) in clinical trials.

    Science.gov (United States)

    Verma, Nishant; Beretvas, S Natasha; Pascual, Belen; Masdeu, Joseph C; Markey, Mia K

    2015-11-12

    As currently used, the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) has low sensitivity for measuring Alzheimer's disease progression in clinical trials. A major reason behind the low sensitivity is its sub-optimal scoring methodology, which can be improved to obtain better sensitivity. Using item response theory, we developed a new scoring methodology (ADAS-CogIRT) for the ADAS-Cog, which addresses several major limitations of the current scoring methodology. The sensitivity of the ADAS-CogIRT methodology was evaluated using clinical trial simulations as well as a negative clinical trial, which had shown evidence of a treatment effect. The ADAS-Cog was found to measure impairment in three cognitive domains of memory, language, and praxis. The ADAS-CogIRT methodology required significantly fewer patients and shorter trial durations as compared to the current scoring methodology when both were evaluated in simulated clinical trials. When validated on data from a real clinical trial, the ADAS-CogIRT methodology had higher sensitivity than the current scoring methodology in detecting the treatment effect. The proposed scoring methodology significantly improves the sensitivity of the ADAS-Cog in measuring progression of cognitive impairment in clinical trials focused on the mild-to-moderate Alzheimer's disease stage. This boosts the efficiency of clinical trials, requiring fewer patients and shorter durations for investigating disease-modifying treatments.
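The record relies on item response theory (IRT) scoring. As a hedged illustration of the general idea, not of the actual ADAS-CogIRT model, the sketch below uses a two-parameter logistic item response function and estimates the latent impairment level by maximum likelihood over a grid; all item parameters here are hypothetical.

```python
import math

def p_endorse(theta, a, b):
    """Two-parameter logistic item response function: probability of
    endorsing an item given latent trait theta, discrimination a,
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items, grid=None):
    """Maximum-likelihood estimate of theta by grid search, where
    `responses` are 0/1 item outcomes and `items` are (a, b) pairs."""
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_endorse(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)
```

The key property exploited by IRT scoring is that items contribute to the estimate in proportion to their discrimination, rather than being summed with equal weight as in the classical total score.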

  5. Education Strategies Through Simulation For Training In Cardiopulmonary Resuscitation Treatment

    Directory of Open Access Journals (Sweden)

    Regimar Carla Machado

    2017-01-01

    Full Text Available Theoretical and reflective study based on scientific literature and critical analysis by the authors of teaching strategies through simulation for training in cardiopulmonary resuscitation (CPR). Current CPR teaching methodologies involve realistic simulation strategies and simulations in virtual environments, but the first method provides the best results, allowing individuals to be proactive in their teaching-learning process and giving them the experience of a life-threatening situation. It is noteworthy that health professionals need to be able to assist a victim in cardiac arrest, but even though effective teaching methodologies exist to train them in this subject, they are not fully applicable in the Brazilian context of health education.

  6. Nonlinear Observer Design of the Generalized Rössler Hyperchaotic Systems via DIL Methodology

    Directory of Open Access Journals (Sweden)

    Yeong-Jeu Sun

    2012-01-01

    Full Text Available The generalized Rössler hyperchaotic systems are presented, and the state observation problem of such systems is investigated. Based on the differential inequality with Lyapunov methodology (DIL methodology), a nonlinear observer design for the generalized Rössler hyperchaotic systems is developed to guarantee the global exponential stability of the resulting error system. Meanwhile, the guaranteed exponential decay rate can be accurately estimated. Finally, numerical simulations are provided to illustrate the feasibility and effectiveness of the proposed approach.

  7. Comparative assessment of CFD Tools and the Eurocode Methodology in describing Externally Venting Flames

    Directory of Open Access Journals (Sweden)

    Asimakopoulou Eleni K.

    2013-11-01

    Full Text Available The ability of currently available Computational Fluid Dynamics (CFD tools to adequately describe Externally Venting Flames (EVF is assessed, aiming to demonstrate compliance with performance-based fire safety regulations. The Fire Dynamics Simulator (FDS CFD tool is used to simulate the EVF characteristics in a corridor-compartment-façade configuration exposed to natural fire conditions. Numerical results of the temporal evolution of gas velocity, gas temperatures and flame shape are obtained for both the interior and the exterior of the compartment. Predictions are compared to respective experimental data, as well as to correlations suggested by the Eurocode methodology. The effects of ventilation conditions are investigated by simulating both Forced Draught (FD and No Forced Draught (NoFD test cases. The obtained results suggest that currently available CFD tools are capable of achieving good qualitative agreement with experimental data and, in certain cases (e.g. FD conditions), adequate quantitative agreement, that generally outperforms the Eurocode prescriptive methodology.

  8. Introducing Simulation via the Theory of Records

    Science.gov (United States)

    Johnson, Arvid C.

    2011-01-01

    While spreadsheet simulation can be a useful method by which to help students to understand some of the more advanced concepts in an introductory statistics course, introducing the simulation methodology at the same time as these concepts can result in student cognitive overload. This article describes a spreadsheet model that has been…
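The theory of records mentioned above is easy to verify by simulation: for i.i.d. continuous observations, the probability that the k-th observation sets a new record is 1/k, so the expected number of records among n observations is the harmonic number H_n. A small sketch (plain Python rather than a spreadsheet, as an illustrative substitute):

```python
import random

def count_records(seq):
    """Number of (upper) records: observations strictly greater than
    every earlier observation. The first observation is always a record."""
    best = float("-inf")
    records = 0
    for x in seq:
        if x > best:
            best = x
            records += 1
    return records

def expected_records(n):
    """Theory: P(k-th observation is a record) = 1/k for i.i.d. continuous
    data, so the expected count is the harmonic number H_n."""
    return sum(1.0 / k for k in range(1, n + 1))

def simulate_mean_records(n, trials=20000, seed=0):
    """Monte Carlo estimate of the mean record count over many sequences."""
    rng = random.Random(seed)
    total = sum(count_records([rng.random() for _ in range(n)])
                for _ in range(trials))
    return total / trials
```

For n = 50 the theory gives H_50 ≈ 4.5, and the simulated mean converges to it, which is the kind of check a spreadsheet version of this exercise would let students perform interactively.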

  9. Development of a comprehensive management site evaluation methodology

    International Nuclear Information System (INIS)

    Rodgers, J.C.; Onishi, Y.

    1981-01-01

    The Nuclear Regulatory Commission is in the process of preparing regulations that will define the necessary conditions for adequate disposal of low-level waste (LLW) by confinement in an LLW disposal facility. These proposed regulations form the context in which the motivation for the joint Los Alamos National Laboratory / Battelle Pacific Northwest Laboratory program to develop a site-specific, LLW site evaluation methodology is discussed. The overall effort is divided into three development areas: land-use evaluation, environmental transport modelling, and long-term scenario development including long-range climatology projections. At the present time four steps are envisioned in the application of the methodology to a site: site land use suitability assessment, land use-ecosystem interaction, contaminant transport simulation, and sensitivity analysis. Each of these steps is discussed in the paper. 12 refs

  10. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  11. Developing Agent-Oriented Video Surveillance System through Agent-Oriented Methodology (AOM

    Directory of Open Access Journals (Sweden)

    Cheah Wai Shiang

    2016-12-01

    Full Text Available Agent-oriented methodology (AOM is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, it has not yet been determined to what extent this is true. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS. An intruder handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.

  12. Simulation and case-based learning

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Guralnick, David

    2008-01-01

    Abstract- This paper has its origin in the authors' reflection on years of practical experiences combined with literature readings in our preparation for a workshop on learn-by-doing simulation and case-based learning to be held at the ICELW 2008 conference (the International Conference on E-Learning...... in the Workplace). The purpose of this paper is to describe the two online learning methodologies and to raise questions for future discussion. In the workshop, the organizers and participants work with and discuss differences and similarities within the two pedagogical methodologies, focusing on how...... they are applied in workplace related and e-learning contexts. In addition to the organizers, a small number of invited presenters will attend, giving demonstrations of their work within learn-by-doing simulation and case-based learning, but still leaving ample time for discussion among all participants....

  13. UPC Scaling-up methodology for Deterministic Safety Assessment and Support to Plant Operation

    Energy Technology Data Exchange (ETDEWEB)

    Martínez-Quiroga, V.; Reventós, F.; Batet, Il.

    2015-07-01

    Best Estimate codes along with necessary nodalizations are widely used tools in nuclear engineering for both Deterministic Safety Assessment (DSA) and Support to Plant Operation and Control. In this framework, the application of quality assurance procedures to both codes and nodalizations becomes an essential step prior to any significant study. Along these lines the present paper introduces the UPC SCUP, a systematic methodology based on the extrapolation of Integral Test Facility (ITF) post-test simulations by means of scaling analyses. In that sense, SCUP fills a gap in current nodalization qualification procedures, namely the validation of NPP nodalizations for Design Basis Accident conditions. Three pillars support SCUP: judicious selection of the experimental transients, full confidence in the quality of the ITF simulations, and simplicity in justifying the discrepancies that appear between ITF and NPP counterpart transients. The techniques that are presented include the so-called Kv scaled calculations as well as the use of two new approaches, "Hybrid nodalizations" and "Scaled-up nodalizations". These last two methods have proven very helpful in producing the required qualification and in promoting further improvements in nodalization. The study of both LSTF and PKL counterpart tests has made it possible to qualify the methodology through comparison with experimental data. Post-test simulations at different sizes made it possible to define which phenomena could be well reproduced by system codes and which could not, thereby also establishing the basis for the extrapolation to an NPP scaled calculation. Furthermore, the application of the UPC SCUP methodology demonstrated that selected phenomena can be scaled up and explained between counterpart simulations by carefully considering the differences in scale and design. (Author)

  14. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    Science.gov (United States)

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. ?? 1988 International Association for Mathematical Geology.
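One reason an analytic methodology can replace Monte Carlo simulation for aggregation, requirement (3) above, is that means and variances of independent play estimates add exactly. A minimal sketch contrasting the two approaches, with hypothetical play numbers and normal distributions assumed purely for illustration:

```python
import random

def aggregate_analytic(plays):
    """Analytic aggregation of independent play assessments, each given
    as (mean, variance): means and variances of independent totals add."""
    mean = sum(m for m, v in plays)
    var = sum(v for m, v in plays)
    return mean, var

def aggregate_monte_carlo(plays, trials=50000, seed=7):
    """Monte Carlo cross-check, drawing each play as a normal variate."""
    rng = random.Random(seed)
    totals = [sum(rng.gauss(m, v ** 0.5) for m, v in plays)
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / (trials - 1)
    return mean, var
```

The analytic route returns the same first two moments instantly, which is what makes a microcomputer implementation feasible where repeated sampling would be slow.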

  15. Methodology for the selection of alternatives for the waterfall division of a hydrographical basin, considering environmental impacts

    International Nuclear Information System (INIS)

    Cunha, S.H.F. da; Pires, S.H.; Rovere, E.L. La; Pereira, M.V.F.

    1993-01-01

    This paper presents the stages of a new methodology proposed for the selection of alternatives for the waterfall division of a hydrographical basin, considering environmental impacts. The methodology uses hierarchical analysis techniques in the evaluation of environmental impacts, simulation of the individualized power plants in the energy evaluation, and multi-objective analysis in the selection of the best alternative for the division of the basin waterfall. The methodology also foresees moments and mechanisms to take into account the opinions of different social sectors. (C.M.)

  16. The development of a 4D treatment planning methodology to simulate the tracking of central lung tumors in an MRI-linac.

    Science.gov (United States)

    Al-Ward, Shahad M; Kim, Anthony; McCann, Claire; Ruschin, Mark; Cheung, Patrick; Sahgal, Arjun; Keller, Brian M

    2018-01-01

    Targeting and tracking of central lung tumors may be feasible on the Elekta MRI-linac (MRL) due to the soft-tissue visualization capabilities of MRI. The purpose of this work is to develop a novel treatment planning methodology to simulate tracking of central lung tumors with the MRL and to quantify the benefits in OAR sparing compared with the ITV approach. Full 4D-CT datasets for five central lung cancer patients were selected to simulate the condition of having 4D-pseudo-CTs derived from 4D-MRI data available on the MRL with real-time tracking capabilities. We used the MRL treatment planning system to generate two plans: (a) with a set of MLC-defined apertures around the target at each phase of the breathing ("4D-MRL" method); (b) with a fixed set of fields encompassing the maximum inhale and exhale of the breathing cycle ("ITV" method). For both plans, dose accumulation was performed onto a reference phase. To further study the potential benefits of a 4D-MRL method, the results were stratified by tumor motion amplitude, OAR-to-tumor proximity, and the relative OAR motion (ROM). With the 4D-MRL method, the reduction in mean doses was up to 3.0 Gy and 1.9 Gy for the heart and the lung. Moreover, the lung's V12.5 Gy was spared by a maximum of 300 cc. Maximum doses to serial organs were reduced by up to 6.1 Gy, 1.5 Gy, and 9.0 Gy for the esophagus, spinal cord, and the trachea, respectively. OAR dose reduction with our method depended on the tumor motion amplitude and the ROM. Some OARs with large ROMs and in close proximity to the tumor benefited from tracking despite small tumor amplitudes. We developed a novel 4D tracking methodology for the MRL for central lung tumors and quantified the potential dosimetric benefits compared with our current ITV approach. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  17. Methodology for Analysing the NOx-NH3 Trade-off for the Heavy-duty Automotive SCR Catalyst

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2017-01-01

    This paper presents a methodology where Pareto fronts were used to analyse how changes in the control structure for the urea dosing to the automotive SCR catalyst can improve the trade-off between NOx slip and NH3 slip. A previously developed simulation model was used to simulate the European...

  18. Methodological studies for deriving release criteria for liquid effluents from medical installations

    International Nuclear Information System (INIS)

    Shu, Jane; Comissao Nacional de Energia Nuclear; Rochedo, Elaine R.R.; Heilbron Filho, Paulo F.L.; Crispim, Verginia R.

    2009-01-01

    This work aims to develop a methodology for the assessment of clearance limits for the release of liquid waste arising from medical installations using radionuclides for medical diagnostic purposes in the town of Rio de Janeiro. The results will be used to assess the need to justify or to revise the current clearance values as specified in regulation CNEN-NE-6.05 - Radioactive Waste Management in Radioactive Facilities. The proposed methodology is based on the mathematical model recommended by the International Atomic Energy Agency, adapted to the observed release conditions in the study area. In order to make the assessment as realistic as possible, two scenarios are simulated. The first scenario simulates the release to the sewage system with access to a sewage treatment station. The second scenario simulates releases without passing through a treatment station, with direct outflow to surface water. Probabilistic assessments were performed using the Crystal Ball software. Distributions were then compared to current IAEA clearance criteria that include specific values for average and low probability scenarios. The results will be used to derive adequate clearance levels for radionuclides used in nuclear diagnostic medicine in Brazil according to the specific relevant release scenario. (author)

  19. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  20. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    Science.gov (United States)

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering. To carry out these two tasks, we can apply traditional methods or intelligent techniques. The difference between them lies in the way they process information. In the first case, to simulate an object in a particular state of action, we need to perform an entire process to read the values of parameters. This is not very convenient for objects for which simulation takes a long time, i.e. when the mathematical calculations are complicated. In the second case, an intelligent solution can efficiently support a dedicated mode of simulation, which enables us to simulate the object only in a situation that is necessary for the development process. We present research results on a developed intelligent simulation and control model of an electric drive engine vehicle. For a dedicated simulation method based on intelligent computation, where an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a devoted neural network is introduced to control co-working modules while motion is within a time interval. The presented experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller therefore prevents the load from destruction by positioning characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Exergoeconomic improvement of a complex cogeneration system integrated with a professional process simulator

    International Nuclear Information System (INIS)

    Vieira, Leonardo S.; Donatelli, Joao L.; Cruz, Manuel E.

    2009-01-01

    In this paper, the application of an iterative exergoeconomic methodology for improvement of thermal systems to a complex combined-cycle cogeneration plant is presented. The methodology integrates exergoeconomics with a professional process simulator, and represents an alternative to conventional mathematical optimization techniques, because it reduces substantially the number of variables to be considered in the improvement process. By exploiting the computational power of a simulator, the integrated approach permits the optimization routine to ignore the variables associated with the thermodynamic equations, and thus to deal only with the economic equations and objective function. In addition, the methodology combines recent available exergoeconomic techniques with qualitative and quantitative criteria to identify only those decision variables which matter for the improvement of the system. To demonstrate the strengths of the methodology, it is here applied to a 24-component cogeneration plant, which requires O(10^3) variables for its simulation. The results obtained are compared to those reached using a conventional mathematical optimization procedure, also coupled to the process simulator. It is shown that, for engineering purposes, improvement of the system is often more cost effective and less time consuming than optimization of the system.

  2. An integrative approach to the design methodology for 3-phase power conditioners in Photovoltaic Grid-Connected systems

    International Nuclear Information System (INIS)

    Rey-Boué, Alexis B.; García-Valverde, Rafael; Ruz-Vila, Francisco de A.; Torrelo-Ponce, José M.

    2012-01-01

    Highlights: ► A design methodology for Photovoltaic grid-connected systems is presented. ► Models of the Photovoltaic Generator and the 3-phase Inverter are described. ► The power factor and the power quality are regulated with vector control. ► Simulation and experimental results validate the design methodology. ► The proposed methodology can be extended to any Renewable or Industrial System. - Abstract: A novel methodology is presented in this paper, for the design of the Power and Control Subsystems of a 3-phase Photovoltaic Grid-Connected system in an easy and comprehensive way, as an integrative approach. At the DC side of the Power Subsystem, the Photovoltaic Generator modeling is revised and a simple model is proposed, whereas at the AC side, a vector analysis is done to deal with the instantaneous 3-phase variables of the grid-connected Voltage Source Inverter. A d–q control approach is established in the Control Subsystem, along with its specific tuned parameters, as a vector control alternative which will allow the decoupled control of the instantaneous active and reactive powers. A particular Case of Study is presented to illustrate the behavior of the design methodology regarding the fulfillment of the Photovoltaic plant specifications. Some simulations are run to study the performance of the Photovoltaic Generator together with the d–q control exerted on the grid-connected 3-phase inverter, and some experimental results, obtained from a built flexible platform, are also shown. The simulations and the experimental results validate the overall performance of the 3-phase Photovoltaic Grid-Connected system due to the attained unitary power factor operation together with good power quality. The final validation of the proposed design methodology is also achieved.
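The d–q vector control mentioned above rests on projecting instantaneous 3-phase quantities onto a rotating reference frame, where active and reactive components appear as two decoupled DC-like signals. A sketch of the standard amplitude-invariant Park transformation (a textbook form, not code from the paper):

```python
import math

TWO_PI_3 = 2.0 * math.pi / 3.0

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transformation: projects instantaneous
    3-phase quantities onto a d-q frame rotating at angle theta, which
    allows decoupled control of active (d) and reactive (q) components."""
    k = 2.0 / 3.0
    d = k * (ia * math.cos(theta)
             + ib * math.cos(theta - TWO_PI_3)
             + ic * math.cos(theta + TWO_PI_3))
    q = -k * (ia * math.sin(theta)
              + ib * math.sin(theta - TWO_PI_3)
              + ic * math.sin(theta + TWO_PI_3))
    return d, q
```

For a balanced 3-phase set aligned with the frame angle, the transform yields d equal to the phase amplitude and q equal to zero, which is why regulating q to zero corresponds to unity power factor operation.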

  3. The Research and Evaluation of Serious Games: Toward a Comprehensive Methodology

    Science.gov (United States)

    Mayer, Igor; Bekebrede, Geertje; Harteveld, Casper; Warmelink, Harald; Zhou, Qiqi; van Ruijven, Theo; Lo, Julia; Kortmann, Rens; Wenzler, Ivo

    2014-01-01

    The authors present the methodological background to and underlying research design of an ongoing research project on the scientific evaluation of serious games and/or computer-based simulation games (SGs) for advanced learning. The main research questions are: (1) what are the requirements and design principles for a comprehensive social…

  4. Bio-inspired algorithms applied to molecular docking simulations.

    Science.gov (United States)

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  5. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
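The model structure described above (a seasonal wave plus a long-term trend plus serially correlated errors on the log-concentration scale) can be sketched as follows. This is an illustrative AR(1) toy, not the report's R implementation, and all parameter values are hypothetical.

```python
import math
import random

def simulate_daily_log_conc(days, mean=0.0, amp=1.0, phase=0.0,
                            trend=0.0, rho=0.9, sigma=0.3, seed=42):
    """Sketch of the model structure: log-concentration as a seasonal
    sine wave plus a linear trend plus AR(1) serially correlated errors."""
    rng = random.Random(seed)
    eps = 0.0
    out = []
    for t in range(days):
        wave = amp * math.sin(2.0 * math.pi * t / 365.25 + phase)
        eps = rho * eps + rng.gauss(0.0, sigma)   # serially correlated error
        out.append(mean + wave + trend * t + eps)
    return out

def annual_max_daily_conc(log_conc):
    """Annual maximum daily concentration, back-transformed to the
    original (untransformed) concentration scale."""
    return math.exp(max(log_conc))
```

In the actual methodology these daily values would be simulated conditionally on the sparse, possibly censored observations, and the distribution of `annual_max_daily_conc` over many conditional simulations would feed the acute exposure assessment.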

  6. Evaluating Simulation Methodologies to Determine Best Strategies to Maximize Student Learning.

    Science.gov (United States)

    Scherer, Yvonne K; Foltz-Ramos, Kelly; Fabry, Donna; Chao, Ying-Yu

    2016-01-01

    Limited evidence exists as to the most effective ways to provide simulation experiences to maximize student learning. This quasi-experimental study investigated 2 different strategies, repeated versus single exposure and participation versus observation, on student outcomes following exposure to a high-fidelity acute exacerbation of asthma scenario. Immediate repeated exposure resulted in significantly higher scores on knowledge, student satisfaction and self-confidence, and clinical performance measures than a single exposure. Significant intergroup differences were found on participants' satisfaction and self-confidence as compared with observers. Implications for nurse educators include expanding the observer role when designing repeated exposure to simulations and integrating technical, cognitive, and behavioral outcomes as a way for faculty to evaluate students' clinical performance. Published by Elsevier Inc.

  7. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed, was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full custom VLSI chip, designed at NPS, to be tested with the NPS digital analysis system, Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipments, is included as an appendix.

  8. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    Directory of Open Access Journals (Sweden)

    Marco Aldinucci

    2014-01-01

    Full Text Available The paper arguments are on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support the bioinformatics scientists .In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistic and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.

  9. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    Science.gov (United States)

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    The paper arguments are on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support the bioinformatics scientists .In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistic and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.

  10. Finding Biomarker Signatures in Pooled Sample Designs: A Simulation Framework for Methodological Comparisons

    Directory of Open Access Journals (Sweden)

    Anna Telaar

    2010-01-01

    Full Text Available Detection of discriminating patterns in gene expression data can be accomplished by using various methods of statistical learning. It has been proposed that sample pooling in this context would have negative effects; however, pooling cannot always be avoided. We propose a simulation framework to explicitly investigate the parameters of patterns, experimental design, noise, and choice of method in order to find out which effects on classification performance are to be expected. We use a two-group classification task and simulated gene expression data with independent differentially expressed genes as well as bivariate linear patterns and the combination of both. Our results show a clear increase of prediction error with pool size. For pooled training sets powered partial least squares discriminant analysis outperforms discriminance analysis, random forests, and support vector machines with linear or radial kernel for two of three simulated scenarios. The proposed simulation approach can be implemented to systematically investigate a number of additional scenarios of practical interest.

  11. Airfoil selection methodology for Small Wind Turbines

    DEFF Research Database (Denmark)

    Salgado Fuentes, Valentin; Troya, Cesar; Moreno, Gustavo

    2016-01-01

    On wind turbine technology, the aerodynamic performance is fundamental to increase efficiency. Nowadays there are several databases with airfoils designed and simulated for different applications; that is why it is necessary to select those suitable for a specific application. This work presents...... a new methodology for airfoil selection used in feasibility and optimization of small wind turbines with low cut-in speed. On the first stage, airfoils data is tested on XFOIL software to check its compatibility with the simulator; then, arithmetic mean criteria is recursively used to discard...... underperformed airfoils; the best airfoil data was exported to Matlab for a deeper analysis. In the second part, data points were interpolated using "splines" to calculate glide ratio and stability across multiple angles of attack, those who present a bigger steadiness were conserved. As a result, 3 airfoils...

  12. Optimization Versus Robustness in Simulation : A Practical Methodology, With a Production-Management Case-Study

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Gaury, E.G.A.

    2001-01-01

    Whereas Operations Research has always paid much attention to optimization, practitioners judge the robustness of the 'optimum' solution to be of greater importance.Therefore this paper proposes a practical methodology that is a stagewise combination of the following four proven techniques: (1)

  13. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    Science.gov (United States)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10(exp 6) and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. 
For several

  14. Simulation for IT Service Desk Improvement

    Directory of Open Access Journals (Sweden)

    Peter Bober

    2014-07-01

    Full Text Available IT service desk is a complex service that IT service company provides to its customers. This article provides a methodology which uses discrete-event simulation to help IT service management to make decision and to plan service strategy. Simulation model considers learning ability of service desk agents and growth of knowledge database of the company. Model shows how the learning curve influences the time development of service desk quality and efficiency. This article promotes using simulation to define quantitative goals for the service desk improvement.

  15. Validation of three-dimensional micro injection molding simulation accuracy

    DEFF Research Database (Denmark)

    Tosello, Guido; Costa, F.S.; Hansen, Hans Nørgaard

    2011-01-01

    length, injection pressure profile, molding mass and flow pattern. The importance of calibrated micro molding process monitoring for an accurate implementation strategy of the simulation and its validation has been demonstrated. In fact, inconsistencies and uncertainties in the experimental data must...... be minimized to avoid introducing uncertainties in the simulation calculations. Simulations of bulky sub-100 milligrams micro molded parts have been validated and a methodology for accurate micro molding simulations was established....

  16. Power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hacking, D [Marconi Simulation (United Kingdom)

    1992-09-01

    Over many years in the field of simulation Marconi has developed and adopted a number of procedures and methodologies for the management, design and development of an extensive range of training equipment. This equipment encompasses desktop computer-based training systems, generic training devices. The procurement of a training simulator is clearly dictated by the perceived training requirement or problem. Also, it should preferably involve or follow a detailed training needs analysis. Although the cost benefits of training are often difficult to quantify, a simulator is frequently easier to justify if plant familiarisation and training can be provided in advance of on-the-job experience. This is particularly true if the target operators have little hands-on experience of similar plant either in terms of processes or the operator interface. (author).

  17. A methodology for the extraction of quantitative information from electron microscopy images at the atomic level

    International Nuclear Information System (INIS)

    Galindo, P L; Pizarro, J; Guerrero, E; Guerrero-Lebrero, M P; Scavello, G; Yáñez, A; Sales, D L; Herrera, M; Molina, S I; Núñez-Moraleda, B M; Maestre, J M

    2014-01-01

    In this paper we describe a methodology developed at the University of Cadiz (Spain) in the past few years for the extraction of quantitative information from electron microscopy images at the atomic level. This work is based on a coordinated and synergic activity of several research groups that have been working together over the last decade in two different and complementary fields: Materials Science and Computer Science. The aim of our joint research has been to develop innovative high-performance computing techniques and simulation methods in order to address computationally challenging problems in the analysis, modelling and simulation of materials at the atomic scale, providing significant advances with respect to existing techniques. The methodology involves several fundamental areas of research including the analysis of high resolution electron microscopy images, materials modelling, image simulation and 3D reconstruction using quantitative information from experimental images. These techniques for the analysis, modelling and simulation allow optimizing the control and functionality of devices developed using materials under study, and have been tested using data obtained from experimental samples

  18. Formulation of Generic Simulation Models for Analyzing Construction Claims

    Directory of Open Access Journals (Sweden)

    Rifat Rustom

    2012-11-01

    Full Text Available While there are several techniques for analyzing the impact of claims on time schedule and productivity,very few are considered adequate and comprehensive to consider risks and uncertainties.A generic approach for claims analysis using simulation is proposed. The formulation of the generic methodology presented in this paper depends on three simulation models;As-Planned Model (APM,As-Built Model (ABM, and What-Would-HaveBeenModel(WWHBM. The proposed generic methodology as presented in this paper provides a good basis as a more elaborate approach to better analyze claims and their impacts on project time and productivity utilizing discrete event simulation.The approach proposed allows for scenario analysis to account for the disputed events and workflow disruptions. The proposed models will assist claimants in presenting their cases effectively and professionally.

  19. Application of Response Surface Methodology in Optimizing a Three Echelon Inventory System

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Razavi Hajiagha

    2014-01-01

    Full Text Available Inventory control is an important subject in supply chain management. In this paper, a three echelon production, distribution, inventory system composed of one producer, two wholesalers and a set of retailers has been considered. Costumers' demands follow a compound Poisson process and the inventory policy is a kind of continuous review (R, Q. In this paper, regarding the standard cost structure in an inventory model, the cost function of system has been approximated using Response Surface Methodology as a combination of designed experiments, simulation, regression analysis and optimization. The proposed methodology in this paper can be applied as a novel method in optimization of inventory policy of supply chains. Also, the joint optimization of inventory parameters, including reorder point and batch order size, is another advantage of the proposed methodology.

  20. Methodology based on genetic heuristics for in-vivo characterizing the patient-specific biomechanical behavior of the breast tissues.

    Science.gov (United States)

    Lago, M A; Rúperez, M J; Martínez-Martínez, F; Martínez-Sanchis, S; Bakic, P R; Monserrat, C

    2015-11-30

    This paper presents a novel methodology to in-vivo estimate the elastic constants of a constitutive model proposed to characterize the mechanical behavior of the breast tissues. An iterative search algorithm based on genetic heuristics was constructed to in-vivo estimate these parameters using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients were used for the evaluation of the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was of around 95% showing the good performance of the proposed methodology. This methodology can be easily extended to characterize the real biomechanical behavior of the breast tissues, which means a great novelty in the field of the simulation of the breast behavior for applications such as surgical planing, surgical guidance or cancer diagnosis. This reveals the impact and relevance of the presented work.

  1. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  2. Methodology applied to develop the DHIE: applied methodology

    CSIR Research Space (South Africa)

    Herselman, Marlien

    2016-12-01

    Full Text Available This section will address the methodology that was applied to develop the South African Digital Health Innovation Ecosystem (DHIE). Each chapter under Section B represents a specific phase in the methodology....

  3. A Novel Dynamic Co-Simulation Analysis for Overall Closed Loop Operation Control of a Large Wind Turbine

    Directory of Open Access Journals (Sweden)

    Ching-Sung Wang

    2016-08-01

    Full Text Available A novel dynamic co-simulation methodology of overall wind turbine systems is presented. This methodology combines aerodynamics, mechanism dynamics, control system dynamics, and subsystems dynamics. Aerodynamics and turbine properties were modeled in FAST (Fatigue, Aerodynamic, Structures, and Turbulence, and ADAMS (Automatic Dynamic Analysis of Mechanical Systems performed the mechanism dynamics; control system dynamics and subsystem dynamics such as generator, pitch control system, and yaw control system were modeled and built in MATLAB/SIMULINK. Thus, this comprehensive integration of methodology expands both the flexibility and controllability of wind turbines. The dynamic variations of blades, rotor dynamic response, and tower vibration can be performed under different inputs of wind profile, and the control strategies can be verified in the different closed loop simulation. Besides, the dynamic simulation results are compared with the measuring results of SCADA (Supervisory Control and Data Acquisition of a 2 MW wind turbine for ensuring the novel dynamic co-simulation methodology.

  4. AREVA main steam line break fully coupled methodology based on CATHARE-ARTEMIS - 15496

    International Nuclear Information System (INIS)

    Denis, L.; Jasserand, L.; Tomatis, D.; Segond, M.; Royere, C.; Sauvage, J.Y.

    2015-01-01

    The CATHARE code developed since 1979 by AREVA, CEA, EDF and IRSN is one of the major thermal-hydraulic system codes worldwide. In order to have at disposal realistic methodologies based on CATHARE for the whole transient and accident analysis in Chapter 15 of Safety Reports, a coupling with the code ARTEMIS was developed. ARTEMIS is the core code in AREVA's new reactor simulator system ARCADIA, using COBRA-FLX to model the thermal-hydraulics in the core. The Fully Coupled Methodology was adapted to the CATHARE-ARTEMIS coupling to perform Main Steam Line Break studies. This methodology, originally applied to the MANTA-SMART-FLICA coupling, is dedicated to Main Steam Line Break transients at zero power. The aim of this paper is to present the coupling between CATHARE and ARTEMIS and the application of the Fully Coupled Methodology in a different code environment. (authors)

  5. Study of a methodology of identifying important research problems by the PIRT process

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Takagi, Toshiyuki; Urayama, Ryoichi; Komura, Ichiro; Furukawa, Takashi; Yusa, Noritaka

    2014-01-01

    In this paper, we propose a new methodology of identifying important research problems to be solved to improve the performance of some specific scientific technologies by the phenomena identification and ranking table (PIRT) process which has been used as a methodology for demonstrating the validity of the best estimate simulation codes in US Nuclear Regulatory Commission (USNRC) licensing of nuclear power plants. The new methodology makes it possible to identify important factors affecting the performance of the technologies from the viewpoint of the figure of merit and problems associated with them while it keeps the fundamental concepts of the original PIRT process. Also in this paper, we demonstrate the effectiveness of the new methodology by applying it to a task of extracting research problems for improving an inspection accuracy of ultrasonic testing or eddy current testing in the inspection of objects having cracks due to fatigue or stress corrosion cracking. (author)

  6. Estimating small area health-related characteristics of populations: a methodological review

    Directory of Open Access Journals (Sweden)

    Azizur Rahman

    2017-05-01

    Full Text Available Estimation of health-related characteristics at a fine local geographic level is vital for effective health promotion programmes, provision of better health services and population-specific health planning and management. Lack of a micro-dataset readily available for attributes of individuals at small areas negatively impacts the ability of local and national agencies to manage serious health issues and related risks in the community. A solution to this challenge would be to develop a method that simulates reliable small-area statistics. This paper provides a significant appraisal of the methodologies for estimating health-related characteristics of populations at geographical limited areas. Findings reveal that a range of methodologies are in use, which can be classified as three distinct set of approaches: i indirect standardisation and individual level modelling; ii multilevel statistical modelling; and iii micro-simulation modelling. Although each approach has its own strengths and weaknesses, it appears that microsimulation- based spatial models have significant robustness over the other methods and also represent a more precise means of estimating health-related population characteristics over small areas.

  7. Numerical simulation of real-world flows

    Energy Technology Data Exchange (ETDEWEB)

    Hayase, Toshiyuki, E-mail: hayase@ifs.tohoku.ac.jp [Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 (Japan)

    2015-10-15

    Obtaining real flow information is important in various fields, but is a difficult issue because measurement data are usually limited in time and space, and computational results usually do not represent the exact state of real flows. Problems inherent in the realization of numerical simulation of real-world flows include the difficulty in representing exact initial and boundary conditions and the difficulty in representing unstable flow characteristics. This article reviews studies dealing with these problems. First, an overview of basic flow measurement methodologies and measurement data interpolation/approximation techniques is presented. Then, studies on methods of integrating numerical simulation and measurement, namely, four-dimensional variational data assimilation (4D-Var), Kalman filters (KFs), state observers, etc are discussed. The first problem is properly solved by these integration methodologies. The second problem can be partially solved with 4D-Var in which only initial and boundary conditions are control parameters. If an appropriate control parameter capable of modifying the dynamical structure of the model is included in the formulation of 4D-Var, unstable modes are properly suppressed and the second problem is solved. The state observer and KFs also solve the second problem by modifying mathematical models to stabilize the unstable modes of the original dynamical system by applying feedback signals. These integration methodologies are now applied in simulation of real-world flows in a wide variety of research fields. Examples are presented for basic fluid dynamics and applications in meteorology, aerospace, medicine, etc. (topical review)

  8. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    Science.gov (United States)

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components stoichiometry and formation enthalpies, the proposed modelling methodology has integrated successfully the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, as illustrative examples of the capability of the methodology, two case studies have been described. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A Comparison Study of a Generic Coupling Methodology for Modeling Wake Effects of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Tim Verbrugghe

    2017-10-01

    Full Text Available Wave Energy Converters (WECs need to be deployed in large numbers in an array layout in order to have a significant power production. Each WEC has an impact on the incoming wave field, by diffracting, reflecting and radiating waves. Simulating the wave transformations within and around a WEC array is complex; it is difficult, or in some cases impossible, to simulate both these near-field and far-field wake effects using a single numerical model, in a time- and cost-efficient way in terms of computational time and effort. Within this research, a generic coupling methodology is developed to model both near-field and far-field wake effects caused by floating (e.g., WECs, platforms or fixed offshore structures. The methodology is based on the coupling of a wave-structure interaction solver (Nemoh and a wave propagation model. In this paper, this methodology is applied to two wave propagation models (OceanWave3D and MILDwave, which are compared to each other in a wide spectrum of tests. Additionally, the Nemoh-OceanWave3D model is validated by comparing it to experimental wave basin data. The methodology proves to be a reliable instrument to model wake effects of WEC arrays; results demonstrate a high degree of agreement between the numerical simulations with relative errors lower than 5 % and to a lesser extent for the experimental data, where errors range from 4 % to 17 % .

  10. Design Methodology of a Sensor Network Architecture Supporting Urgent Information and Its Evaluation

    Science.gov (United States)

    Kawai, Tetsuya; Wakamiya, Naoki; Murata, Masayuki

    Wireless sensor networks are expected to become an important social infrastructure which helps our life to be safe, secure, and comfortable. In this paper, we propose design methodology of an architecture for fast and reliable transmission of urgent information in wireless sensor networks. In this methodology, instead of establishing single complicated monolithic mechanism, several simple and fully-distributed control mechanisms which function in different spatial and temporal levels are incorporated on each node. These mechanisms work autonomously and independently responding to the surrounding situation. We also show an example of a network architecture designed following the methodology. We evaluated the performance of the architecture by extensive simulation and practical experiments and our claim was supported by the results of these experiments.

  11. Methodology for Distributed Electric Propulsion Aircraft Control Development with Simulation and Flight Demonstration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In the proposed STTR study, Empirical Systems Aerospace, Inc. (ESAero) and the University of Illinois at Urbana-Champaign (UIUC) will create a methodology for the...

  12. Quantitative assessments of distributed systems methodologies and techniques

    CERN Document Server

    Bruneo, Dario

    2015-01-01

    Distributed systems employed in critical infrastructures must fulfill dependability, timeliness, and performance specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. This need gave birth to an abundant literature devoted to formal modeling languages combined with analytical and simulative solution techniques The aim of the book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed

  13. Efficient Substrate Noise Coupling Verification and Failure Analysis Methodology for Smart Power ICs in Automotive Applications

    OpenAIRE

    Moursy , Yasser; Zou , Hao; Khalil , Raouf; Iskander , Ramy; Tisserand , Pierre; Ton , Dieu-My; Pasetti , Giuseppe; Louërat , Marie-Minerve

    2016-01-01

    International audience; This paper presents a methodology to analyze the substrate noise coupling and reduce their effects in smart power integrated circuits. This methodology considers the propagation of minority carriers in the substrate. Hence, it models the lateral bipolar junction transistors that are layout dependent and are not modeled in conventional substrate extraction tools. It allows the designer to simulate substrate currents and check their effects on circuits functionality. The...

  14. STAR FORMATION IN SELF-GRAVITATING DISKS IN ACTIVE GALACTIC NUCLEI. II. EPISODIC FORMATION OF BROAD-LINE REGIONS

    International Nuclear Information System (INIS)

    WangJianmin; Du Pu; Ge Junqiang; Hu Chen; Baldwin, Jack A.; Ferland, Gary J.

    2012-01-01

    This is the second in a series of papers discussing the process and effects of star formation in the self-gravitating disk around the supermassive black holes in active galactic nuclei (AGNs). We have previously suggested that warm skins are formed above the star-forming (SF) disk through the diffusion of warm gas driven by supernova explosions. Here we study the evolution of the warm skins when they are exposed to the powerful radiation from the inner part of the accretion disk. The skins initially are heated to the Compton temperature, forming a Compton atmosphere (CAS) whose subsequent evolution is divided into four phases. Phase I is the duration of pure accumulation supplied by the SF disk. During phase II clouds begin to form due to line cooling and sink to the SF disk. During phase III, sinking of clouds to the SF disk is prevented by dynamic interaction between the clouds and the CAS, whose overdensity is driven by continuous injection of warm gas from the SF disk. Finally, phase IV is an inevitable collapse of the entire CAS through line cooling. This CAS evolution drives the episodic appearance of broad-line regions (BLRs). We follow the formation of cold clouds through the thermal instability of the CAS during phases II and III, using linear analysis. Since the clouds are produced inside the CAS, the initial spatial distribution of newly formed clouds and angular momentum naturally follow the CAS dynamics, producing a flattened disk of clouds. The number of clouds in phases II and III can be estimated, as well as the filling factor of clouds in the BLR. Since the cooling function depends on the metallicity, the metallicity gradients that originate in the SF disk give rise to different properties of clouds in different radial regions. We find from the instability analysis that clouds have column density N_H ≲ 10^22 cm^-2 in the metal-rich regions whereas they have N_H ≳ 10^22 cm^-2 in the metal-poor regions. The metal-rich clouds

  15. Mixed-realism simulation of adverse event disclosure: an educational methodology and assessment instrument.

    Science.gov (United States)

    Matos, Francisco M; Raemer, Daniel B

    2013-04-01

    Physicians have an ethical duty to disclose adverse events to patients or families. Various strategies have been reported for teaching disclosure, but no instruments have been shown to be reliable for assessing them. The aims of this study were to report a structured method for teaching adverse event disclosure using mixed-realism simulation, to develop and begin to validate an instrument for assessing performance, and to describe the disclosure practice of anesthesiology trainees. Forty-two anesthesiology trainees participated in a 2-part exercise with mixed-realism simulation. The first part took place using a mannequin patient in a simulated operating room, where trainees became enmeshed in a clinical episode that led to an adverse event; the second part took place in a simulated postoperative care unit, where the learner is asked to disclose to a standardized patient who systematically moves through epochs of grief response. Two raters scored subjects using an assessment instrument we developed that combines a 4-element behaviorally anchored rating scale (BARS) and a 5-stage objective rating scale. The performance scores for elements within the BARS and the 5-stage instrument showed excellent interrater reliability (Cohen's κ = 0.7), appropriate range (mean range for BARS, 4.20-4.47; mean range for 5-stage instrument, 3.73-4.46), and high internal consistency. The mixed-realism simulation engages learners in an adverse event and allows them to practice disclosure to a structured range of patient responses. We have developed a reliable 2-part instrument with strong psychometric properties for assessing disclosure performance.
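
The interrater reliability statistic reported above, Cohen's κ, can be computed directly from two raters' label sequences. The sketch below is a generic unweighted-kappa implementation with made-up example scores, not the study's data.

```python
# Unweighted Cohen's kappa for two raters; example ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement under independent marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Example: two raters scoring 10 trainees on a 1-5 behaviorally anchored scale.
a = [4, 5, 3, 4, 4, 5, 2, 3, 4, 5]
b = [4, 5, 3, 4, 3, 5, 2, 3, 4, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.72
```

With these invented scores the two raters agree on 8 of 10 trainees, giving κ ≈ 0.72, in the "excellent agreement" range the abstract describes.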

  16. Advances in social simulation 2015

    CERN Document Server

    Verbrugge, Rineke; Flache, Andreas; Roo, Gert; Hoogduin, Lex; Hemelrijk, Charlotte

    2017-01-01

    This book highlights recent developments in the field, presented at the Social Simulation 2015 conference in Groningen, The Netherlands. It covers advances both in applications and methods of social simulation. Societal issues addressed range across complexities in economic systems, opinion dynamics and civil violence, changing mobility patterns, different land-use, transition in the energy system, food production and consumption, ecosystem management and historical processes. Methodological developments cover how to use empirical data in validating models in general, formalization of behavioral theory in agent behavior, construction of artificial populations for experimentation, replication of models, and agent-based models that can be run in a web browser. Social simulation is a rapidly evolving field. Social scientists are increasingly interested in social simulation as a tool to tackle the complex non-linear dynamics of society. Furthermore, the software and hardware tools available for social simulation ...

  17. Modeling ground-based timber harvesting systems using computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2001-01-01

    Modeling ground-based timber harvesting systems with an object-oriented methodology was investigated. Object-oriented modeling and design promote a better understanding of requirements, cleaner designs, and better maintainability of the harvesting simulation system. The model developed simulates chainsaw felling, drive-to-tree feller-buncher, swing-to-tree single-grip...

  18. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    Energy Technology Data Exchange (ETDEWEB)

    Vismari, Lucio Flavio, E-mail: lucio.vismari@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil); Batista Camargo Junior, Joao, E-mail: joaocamargo@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil)

    2011-07-15

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from simulation of both the proposed (in analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the Automatic Dependent Surveillance-Broadcast (ADS-B) based air traffic control system. In conclusion, the proposed methodology made it possible to assess the safety properties of the CNS/ATM system; the FSPN formalism provided important modeling capabilities, and discrete-event simulation allowed estimation of the desired safety metrics.
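
The comparative idea — estimate the same safety metric by discrete-event simulation for a legacy and a proposed system model — can be sketched far more simply than the paper's FSPN models. The toy below races an exponentially distributed hazard-development time against an exponentially distributed detection time; all rates are hypothetical placeholders, not values from the study.

```python
# Toy Monte Carlo discrete-event comparison of two surveillance systems.
# Rates and the model itself are illustrative, not the paper's FSPN model.
import random

def unsafe_probability(detect_rate, hazard_rate=1.0, trials=100_000, seed=1):
    """Fraction of runs in which the hazard matures before detection.

    Both delays are exponential; detect_rate is the surveillance update
    rate (higher = faster detection).
    """
    rng = random.Random(seed)
    unsafe = 0
    for _ in range(trials):
        t_hazard = rng.expovariate(hazard_rate)
        t_detect = rng.expovariate(detect_rate)
        if t_hazard < t_detect:
            unsafe += 1
    return unsafe / trials

legacy = unsafe_probability(detect_rate=5.0)     # e.g. slow radar sweep
proposed = unsafe_probability(detect_rate=20.0)  # e.g. frequent broadcast
print(legacy, proposed)  # the same metric for both models, as in the paper
```

For this memoryless race the analytic answer is hazard/(hazard + detect), so the simulation can be checked against 1/6 and 1/21 for the two configurations.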

  19. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    International Nuclear Information System (INIS)

    Vismari, Lucio Flavio; Batista Camargo Junior, Joao

    2011-01-01

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from simulation of both the proposed (in analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the Automatic Dependent Surveillance-Broadcast (ADS-B) based air traffic control system. In conclusion, the proposed methodology made it possible to assess the safety properties of the CNS/ATM system; the FSPN formalism provided important modeling capabilities, and discrete-event simulation allowed estimation of the desired safety metrics.

  20. Development of a General Modelling Methodology for Vacuum Residue Hydroconversion

    Directory of Open Access Journals (Sweden)

    Pereira de Oliveira L.

    2013-11-01

    Full Text Available This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach overcomes the lack of molecular detail of the petroleum fractions and simulates the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte-Carlo (kMC) method is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte-Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that the two approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides a high amount of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
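
Gillespie's Stochastic Simulation Algorithm, which the abstract names as the validation tool, can be sketched for a hypothetical lumped two-step conversion scheme, VR → distillate → gas. The rate constants and species below are made-up placeholders, not the paper's kinetic network.

```python
# Minimal Gillespie SSA for a lumped two-step scheme VR -> distillate -> gas.
# Rate constants k1, k2 and the scheme itself are illustrative only.
import math
import random

def gillespie(n_vr=1000, k1=1.0, k2=0.5, t_end=10.0, seed=42):
    rng = random.Random(seed)
    t, vr, dist, gas = 0.0, n_vr, 0, 0
    while t < t_end:
        a1, a2 = k1 * vr, k2 * dist         # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break                           # all molecules fully converted
        t += rng.expovariate(a0)            # exponential waiting time
        if rng.random() * a0 < a1:
            vr, dist = vr - 1, dist + 1     # VR -> distillate fires
        else:
            dist, gas = dist - 1, gas + 1   # distillate -> gas fires
    return vr, dist, gas

print(gillespie())
```

With 1000 starting molecules and these rates, essentially all VR is converted by t = 10 and most of it has reached the terminal species, matching the deterministic A→B→C solution to within stochastic noise.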

  1. Methodologies and applications for critical infrastructure protection: State-of-the-art

    International Nuclear Information System (INIS)

    Yusta, Jose M.; Correa, Gabriel J.; Lacal-Arantegui, Roberto

    2011-01-01

    This work provides an update of the state-of-the-art on energy security relating to critical infrastructure protection. For this purpose, this survey is based upon the conceptual view of OECD countries, and specifically in accordance with EU Directive 114/08/EC on the identification and designation of European critical infrastructures, and on the 2009 US National Infrastructure Protection Plan. The review discusses the different definitions of energy security, critical infrastructure and key resources, and shows some of the experiences in countries considered as international references on the subject, including some information-sharing issues. In addition, the paper carries out a complete review of current methodologies, software applications and modelling techniques around critical infrastructure protection in accordance with their functionality in a risk management framework. The study of threats and vulnerabilities in critical infrastructure systems shows two important trends in methodologies and modelling. A first trend relates to the identification of methods, techniques, tools and diagrams to describe the current state of infrastructure. The other trend captures the dynamic behaviour of the infrastructure systems by means of simulation techniques including systems dynamics, Monte Carlo simulation, multi-agent systems, etc. - Highlights: → We examine critical infrastructure protection experiences, systems and applications. → Some international experiences are reviewed, including the EU EPCIP Plan and the US NIPP programme. → We discuss current methodologies and applications on critical infrastructure protection, with emphasis on electric networks.

  2. Simulation Optimization by Genetic Search: A Comprehensive Study with Applications to Production Management

    National Research Council Canada - National Science Library

    Yunker, James

    2003-01-01

    In this report, a relatively new simulation optimization technique, the genetic search, is compared to two more-established techniques: the pattern search and the response surface methodology search...
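
A genetic search of the kind compared in the report can be sketched in a few lines: keep a population of candidate parameter values, select the fittest, recombine and mutate. The cost surface, population size, and mutation rate below are illustrative, not the report's production-management models.

```python
# Toy genetic search minimising a one-dimensional cost surface.
# All parameters (population, generations, mutation rate) are illustrative.
import random

def genetic_search(cost, bounds, pop=30, gens=60, mut=0.2, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=cost)
        parents = xs[: pop // 2]              # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2               # arithmetic crossover
            if rng.random() < mut:            # occasional Gaussian mutation
                child += rng.gauss(0, (hi - lo) * 0.05)
            children.append(min(hi, max(lo, child)))
        xs = parents + children
    return min(xs, key=cost)

best = genetic_search(lambda x: (x - 3.7) ** 2 + 1, bounds=(0, 10))
print(round(best, 2))
```

Because the parents survive each generation, the best-so-far candidate never worsens, and the population converges near the minimum at x = 3.7.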

  3. Scenario development methodologies

    International Nuclear Information System (INIS)

    Eng, T.; Hudson, J.; Stephansson, O.

    1994-11-01

    In the period 1981-1994, SKB has studied several methodologies to systematize and visualize all the features, events and processes (FEPs) that can influence a repository for radioactive waste in the future. All the work performed is based on the terminology and basic findings in the joint SKI/SKB work on scenario development presented in the SKB Technical Report 89-35. The methodologies studied are a) Event tree analysis, b) Influence diagrams and c) Rock Engineering Systems (RES) matrices. Each one of the methodologies is explained in this report as well as examples of applications. One chapter is devoted to a comparison between the two most promising methodologies, namely: Influence diagrams and the RES methodology. In conclusion a combination of parts of the Influence diagram and the RES methodology is likely to be a promising approach. 26 refs

  4. HRP's Healthcare Spin-Offs Through Computational Modeling and Simulation Practice Methodologies

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Peng, Grace; Morrison, Tina; Erdemir, Ahmet; Myers, Jerry

    2014-01-01

    Spaceflight missions expose astronauts to novel operational and environmental conditions that pose health risks that are currently not well understood, and perhaps unanticipated. Furthermore, given the limited number of humans that have flown long-duration missions beyond low Earth orbit, the research and clinical data necessary to predict and mitigate these health and performance risks are limited. Consequently, NASA's Human Research Program (HRP) conducts research and develops advanced methods and tools to predict, assess, and mitigate potential hazards to the health of astronauts. In this light, NASA has explored the possibility of leveraging computational modeling since the 1970s as a means to elucidate the physiologic risks of spaceflight and develop countermeasures. Since that time, substantial progress has been realized in this arena through a number of HRP-funded activities such as the Digital Astronaut Project (DAP) and the Integrated Medical Model (IMM). Much of this success can be attributed to HRP's endeavor to establish rigorous verification, validation, and credibility (VV&C) processes that ensure computational models and simulations (M&S) are sufficiently credible to address issues within their intended scope. This presentation summarizes HRP's activities in credibility of modeling and simulation, in particular through its outreach to the community of modeling and simulation practitioners. METHODS: The HRP requires that all M&S that can have a moderate to high impact on crew health or mission success be vetted in accordance with the NASA Standard for Models and Simulations, NASA-STD-7009 (7009) [5]. As this standard mostly focuses on engineering systems, the IMM and DAP have invested substantial effort to adapt the processes established in this standard to biological M&S, which are more prevalent in human health and performance (HHP) and space biomedical research and operations [6,7]. These methods have also generated

  5. Methodology to Assess No Touch Audit Software Using Simulated Building Utility Data

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States); Langner, M. Rois [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-10-01

    This report describes a methodology developed for assessing the performance of no touch building audit tools and presents results for an available tool. Building audits are conducted in many commercial buildings to reduce building energy costs and improve building operation. Because the audits typically require significant input obtained by building engineers, they are usually only affordable for larger commercial building owners. In an effort to help small building and business owners gain the benefits of an audit at a lower cost, no touch building audit tools have been developed to remotely analyze a building's energy consumption.

  6. Systematic methodology for diagnosis of water hammer in LWR power plants

    International Nuclear Information System (INIS)

    Safwat, H.H.; Arastu, A.H.; Husaini, S.M.

    1990-01-01

    The paper gives the dimensions of the knowledge base that is necessary to carry out a diagnosis of water hammer susceptibility/root cause analyses for Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) nuclear power plant systems. After introducing some fundamentals, water hammer phenomena are described. Situations where each phenomenon is encountered are given and analytical models capable of simulating the phenomena are referenced. Water hammer events in operating plants and their inclusion in the knowledge base is discussed. The diagnostic methodology is presented through an application on a system in a typical light water reactor plant. The methodology presented serves as a possible foundation for the creation of an expert water hammer diagnosis system. (orig.)

  7. Methodology for reliability based condition assessment

    International Nuclear Information System (INIS)

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period
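
The core idea of the report — Monte Carlo estimation of time-dependent failure probability for a structure whose strength degrades while it faces random extreme loads — can be sketched with a plain (non-adaptive) simulation. The degradation law and load/strength distributions below are hypothetical placeholders, not the report's models.

```python
# Plain Monte Carlo sketch of time-dependent structural reliability.
# Linear degradation and Gaussian load/strength models are placeholders.
import random

def failure_probability(years=40, trials=50_000, seed=7):
    """Probability the component fails at least once within `years`."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        r0 = rng.gauss(100.0, 10.0)          # initial random strength
        for t in range(1, years + 1):
            r_t = r0 * (1 - 0.005 * t)       # 0.5%/yr linear degradation
            load = rng.gauss(50.0, 15.0)     # annual extreme load event
            if load > r_t:
                failures += 1
                break                        # first failure ends this life
        # surviving all years contributes nothing to the failure count
    return failures / trials

print(failure_probability())
```

Re-running with a longer service period shows the expected behaviour: the cumulative failure probability grows as degradation narrows the load-strength margin, which is exactly the sensitivity to time-varying load and degradation models that the report highlights.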

  8. Methodological challenges to bridge the gap between regional climate and hydrology models

    Science.gov (United States)

    Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph; Felder, Guido

    2017-04-01

    The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of the climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that allows the phenomenon to be assessed in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so there are a number of methodological challenges that need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, most prominently the assessment of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes and disregards absolute values, which are more affected by biases. However, this approach is not suitable in this scenario, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. Therefore, the bias has to be removed beforehand, a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions, and Quantile Mapping (QM). The target region is Switzerland, and therefore the observational dataset is provided by MeteoSwiss. The RCM is the Weather Research and Forecasting model (WRF), driven at the
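
Of the two bias-correction methods named above, quantile mapping is the easier to sketch: each simulated value is replaced by the observed value at the same empirical quantile of the reference period. The sketch below uses synthetic placeholder data, not the MeteoSwiss/WRF series.

```python
# Minimal empirical quantile mapping; reference samples are synthetic.
import bisect

def quantile_map(simulated, sim_ref, obs_ref):
    """Correct `simulated` using reference samples from model and observations."""
    sim_sorted = sorted(sim_ref)
    obs_sorted = sorted(obs_ref)
    n = len(sim_sorted)
    corrected = []
    for x in simulated:
        # empirical quantile of x within the simulated reference climate
        rank = min(bisect.bisect_left(sim_sorted, x), n - 1)
        # map to the observed value at the matching quantile
        idx = round(rank / (n - 1) * (len(obs_sorted) - 1))
        corrected.append(obs_sorted[idx])
    return corrected

# A model that is biased wet by a factor of two is corrected back:
obs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
sim = [2 * v for v in obs]
print(quantile_map([4, 10, 18], sim, obs))  # → [2, 5, 9]
```

Unlike the delta approach, this produces corrected absolute values, which is what a hydrological model driven by precipitation needs.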

  9. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    Science.gov (United States)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-06-01

    The failure of tractor components and their replacement have become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed on the component, including the reduction and removal of rib material to shift the center of gravity and the centroid, using System C for mixed-level simulation and faster topological changes. The design process in System C can be compiled and executed with the software TURBO C7. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.

  10. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    Science.gov (United States)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-02-01

    The failure of tractor components and their replacement have become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed on the component, including the reduction and removal of rib material to shift the center of gravity and the centroid, using System C for mixed-level simulation and faster topological changes. The design process in System C can be compiled and executed with the software TURBO C7. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.

  11. Computational optimization of biodiesel combustion using response surface methodology

    Directory of Open Access Journals (Sweden)

    Ganji Prabhakara Rao

    2017-01-01

    Full Text Available The present work focuses on optimization of biodiesel combustion phenomena through a parametric approach using response surface methodology. The physical properties of biodiesel play a vital role in accurate simulation of the fuel spray, atomization, combustion, and emission formation processes. Typically, methyl-based biodiesel consists of five main types of esters: methyl palmitate, methyl oleate, methyl stearate, methyl linoleate, and methyl linolenate. Based on the amounts of methyl esters present, the properties of pongamia biodiesel and its blends were estimated. The CONVERGE computational fluid dynamics software was used to simulate the fuel spray, turbulence and combustion phenomena. The simulation responses, such as indicated specific fuel consumption, NOx, and soot, were analyzed using design of experiments. Regression equations were developed for each of these responses. The optimum parameters were found to be: compression ratio, 16.75; start of injection, 21.9° before top dead center; and exhaust gas re-circulation, 10.94%. Results have been compared with the baseline case.
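
The response-surface step — fit a regression surface to design-of-experiments results and read off its stationary point — can be sketched in one dimension. The (compression ratio, ISFC) pairs below are made-up placeholders, not the paper's measured responses.

```python
# Least-squares quadratic response surface and its stationary point.
# The design points and responses are hypothetical illustration data.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    S = lambda p: sum(x ** p for x in xs)
    T = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(4), S(3), S(2)], [S(3), S(2), S(1)], [S(2), S(1), len(xs)]]
    return solve3(A, [T(2), T(1), T(0)])

cr = [14.0, 15.0, 16.0, 17.0, 18.0]          # design points: compression ratio
isfc = [232.0, 226.5, 224.0, 224.5, 228.0]   # hypothetical ISFC responses
a, b, c = fit_quadratic(cr, isfc)
cr_opt = -b / (2 * a)      # stationary point of the fitted surface
print(round(cr_opt, 2))    # → 16.33
```

In the paper the same procedure runs over three factors at once (compression ratio, injection timing, EGR), with the regression equations built from the DoE matrix rather than a single sweep.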

  12. Verification of a hybrid adjoint methodology in Titan for single photon emission computed tomography - 316

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single voxel detector placed in front of the heart with and without collimation. For the case without collimation, the TITAN response for a single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. The number of adjoint source directions within an acceptance angle was increased and showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)

  13. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  14. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented, using pressure measurements and simulation models. The leakage localisation methodology is based on a pressure sensitivity matrix. The sensitivity is normalised and binarised using a common threshold for all nodes, so that a signature matrix is obtained. A methodology for optimal distribution of pressure sensors is also developed, but it is not used in the real test. To validate this...
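
The signature-matching step described above can be sketched directly: binarise the normalised sensitivity matrix with one common threshold, binarise the observed pressure residuals the same way, and pick the candidate node whose signature matches best. The matrix, threshold, and residuals below are invented illustration data, not the paper's network.

```python
# Sketch of leak localisation via a binarised sensitivity signature matrix.
# Sensitivities, threshold, and residuals are hypothetical placeholders.

def binarise(vector, threshold):
    return [1 if abs(v) >= threshold else 0 for v in vector]

def localise(sensitivity, residuals, threshold=0.5):
    """sensitivity[i][j]: normalised effect of a leak at node j on sensor i."""
    n_nodes = len(sensitivity[0])
    observed = binarise(residuals, threshold)
    best, best_score = None, -1
    for j in range(n_nodes):
        signature = binarise([row[j] for row in sensitivity], threshold)
        score = sum(o == s for o, s in zip(observed, signature))
        if score > best_score:          # most sensors agreeing wins
            best, best_score = j, score
    return best

# 3 pressure sensors x 4 candidate leak nodes (normalised, made up)
S = [[0.9, 0.1, 0.7, 0.2],
     [0.8, 0.9, 0.1, 0.3],
     [0.1, 0.8, 0.9, 0.6]]
print(localise(S, residuals=[0.7, 0.9, 0.2]))  # → 0 (leak at node 0)
```

In practice the common threshold trades sensitivity against false localisations, which is why the paper devotes attention to choosing it and to where the sensors are placed.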

  15. Simulation of investment returns of toll projects.

    Science.gov (United States)

    2013-08-01

    This research develops a methodological framework to illustrate key stages in applying the simulation of investment returns of toll projects, serving as an example process to help agencies conduct numerical risk analysis by taking certain uncertain...

  16. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model, accounting for the multi-component, multi-scale nature of concrete, to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phases. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)

  17. Validation process of simulation model; Proceso de validacion de modelos de simulacion

    Energy Technology Data Exchange (ETDEWEB)

    San Isidro Pindado, M J

    1998-12-31

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case, and it is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model and can also serve as a guide for the design of subsequent experiments. Three steps are clearly differentiated: - Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) or with MCSA (Monte-Carlo sensitivity analysis). - Search for the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find them. - Residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings, ESP, is presented, studying the behaviour of building components in a Test Cell of LECE at CIEMAT. (Author)
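
The MCSA (Monte-Carlo sensitivity analysis) step mentioned above can be sketched generically: sample the uncertain inputs, run the model, and rank the inputs by their correlation with the output. The stand-in "thermal model" and its coefficients below are hypothetical, not the ESP building model.

```python
# Sketch of Monte-Carlo sensitivity analysis with a stand-in linear model.
# The model and parameter ranges are hypothetical placeholders.
import random

def model(u_wall, g_window, infiltration):
    # stand-in for a detailed thermal model: heating demand, arbitrary units
    return 3.0 * u_wall + 1.5 * g_window + 0.2 * infiltration

def mcsa(runs=2000, seed=3):
    """Pearson correlation of each sampled input with the model output."""
    rng = random.Random(seed)
    samples, outputs = [], []
    for _ in range(runs):
        p = [rng.uniform(0.5, 1.5) for _ in range(3)]  # sample all inputs
        samples.append(p)
        outputs.append(model(*p))
    def corr(k):
        xs = [s[k] for s in samples]
        mx, my = sum(xs) / runs, sum(outputs) / runs
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, outputs))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in outputs)
        return cov / (vx * vy) ** 0.5
    return [corr(k) for k in range(3)]

print([round(c, 2) for c in mcsa()])
```

For this linear stand-in the correlations simply mirror the coefficient magnitudes; with a real simulation model the same sampling loop reveals which inputs dominate the output uncertainty.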

  18. Validation process of simulation model; Proceso de validacion de modelos de simulacion

    Energy Technology Data Exchange (ETDEWEB)

    San Isidro Pindado, M.J.

    1997-12-31

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case, and it is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model and can also serve as a guide for the design of subsequent experiments. Three steps are clearly differentiated: - Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) or with MCSA (Monte-Carlo sensitivity analysis). - Search for the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find them. - Residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings, ESP, is presented, studying the behaviour of building components in a Test Cell of LECE at CIEMAT. (Author)

  19. Improvement of Cycle Dependent Core Model for NPP Simulator

    International Nuclear Information System (INIS)

    Song, J. S.; Koo, B. S.; Kim, H. Y. and others

    2003-11-01

    The purpose of this study is to establish an automatic core model generation system and to develop a four-cycle real-time core analysis methodology, with 5% power distribution and 500 pcm reactivity difference criteria, for a nuclear power plant simulator. A standardized procedure to generate the database from ROCS and ANC, the codes used for domestic PWR core design, was established for cycle-specific simulator core model generation. An automatic data interface system to generate the core model was also established; it includes ARCADIS, which edits group constants, and DHCGEN, which generates the interface coupling coefficient correction database. The interface coupling coefficient correction method developed in this study has four-cycle real-time capability, with maximum differences from core design results within 103 pcm in reactivity, 1% in relative power distribution, and 6% in control rod worth. A nuclear power plant core simulation program, R-MASTER, was developed using the methodology and applied under the distributed client system concept in the simulator. Its performance was verified by site acceptance tests on Simulator No. 2 in the Kori Training Center, covering 30 initial condition generations and 27 steady-state, transient, and postulated accident situations.

  20. Improvement of Cycle Dependent Core Model for NPP Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Song, J. S.; Koo, B. S.; Kim, H. Y. and others

    2003-11-15

    The purpose of this study is to establish an automatic core model generation system and to develop a four-cycle real-time core analysis methodology, with 5% power distribution and 500 pcm reactivity difference criteria, for a nuclear power plant simulator. A standardized procedure to generate the database from ROCS and ANC, the codes used for domestic PWR core design, was established for cycle-specific simulator core model generation. An automatic data interface system to generate the core model was also established; it includes ARCADIS, which edits group constants, and DHCGEN, which generates the interface coupling coefficient correction database. The interface coupling coefficient correction method developed in this study has four-cycle real-time capability, with maximum differences from core design results within 103 pcm in reactivity, 1% in relative power distribution, and 6% in control rod worth. A nuclear power plant core simulation program, R-MASTER, was developed using the methodology and applied under the distributed client system concept in the simulator. Its performance was verified by site acceptance tests on Simulator No. 2 in the Kori Training Center, covering 30 initial condition generations and 27 steady-state, transient, and postulated accident situations.

  1. Simulation and the emergency department overcrowding problem

    OpenAIRE

    Nahhas, A.; Awaldi, A.; Reggelin, T.

    2017-01-01

    In this paper, a brief review on the emergency department overcrowding problem and its associated solution methodologies is presented. In addition, a case study of an urgent care center is investigated that demonstrates different simulation-based solution strategies to deal with the Emergency Department overcrowding problem. More precisely, a simulation study is conducted to identify critical aspects and propose possible scenarios to configure an urgent care center. Based on statistical data ...

  2. High-fidelity large eddy simulation for supersonic jet noise prediction

    Science.gov (United States)

    Aikens, Kurt M.

    The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused on evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield, and integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments is undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero-pressure-gradient flat plate boundary layers, and for subsonic and supersonic jets.

  3. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    Science.gov (United States)

    Schuster, J.

    2018-02-01

    Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally intensive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
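The decomposition of the detector MTF into constituent contributions can be illustrated with closed-form factors: an ideal pixel-aperture sinc term multiplied by a diffusion roll-off. Both the Lorentzian crosstalk form and all numeric values (15 um pitch, 5 um diffusion length) are assumptions for illustration, not the paper's simulated results.

```python
import math

def pixel_mtf(f, pitch):
    """Ideal pixel-aperture MTF: |sinc(f * pitch)|, with f in cycles/um
    and pitch in um."""
    x = math.pi * f * pitch
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def diffusion_mtf(f, diffusion_length):
    """Illustrative Lorentzian roll-off standing in for lateral-diffusion
    crosstalk (a hypothetical form, not the paper's model)."""
    return 1.0 / (1.0 + (2 * math.pi * f * diffusion_length) ** 2)

pitch = 15.0                 # um, assumed pixel pitch
nyquist = 1.0 / (2 * pitch)  # cycles/um
# Total detector MTF decomposed as a product of its contributions:
mtf_total = pixel_mtf(nyquist, pitch) * diffusion_mtf(nyquist, 5.0)
```

Comparing the two factors at Nyquist shows which mechanism dominates: a long diffusion length makes the crosstalk term, not the aperture term, the limiting contribution.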

  4. Numerical simulation and analysis of fuzzy PID and PSD control methodologies as dynamic energy efficiency measures

    International Nuclear Information System (INIS)

    Ardehali, M.M.; Saboori, M.; Teshnelab, M.

    2004-01-01

    Energy efficiency enhancement is achieved by utilizing control algorithms that reduce overshoots and undershoots, as well as unnecessary fluctuations, in the amount of energy input to energy consuming systems during transient operation periods. It is hypothesized that the application of control methodologies with characteristics that change with time and according to the system dynamics, identified as dynamic energy efficiency measures (DEEM), achieves the desired enhancement. The objective of this study is to simulate and analyze the effects of fuzzy logic based tuning of proportional integral derivative (F-PID) and proportional sum derivative (F-PSD) controllers for a heating and cooling energy system, while accounting for the dynamics of the major system components. The procedure to achieve the objective includes the utilization of fuzzy logic rules to determine the PID and PSD controllers' gain coefficients, so that the control laws for regulating the heat exchanger's heating or cooling energy inputs are determined in each time step of the operation period. The performances of the F-PID and F-PSD controllers are measured by means of two cost functions based on quadratic forms of the energy input and the deviation from a set point temperature. It is found that application of the F-PID control algorithm, as a DEEM, results in lower costs for energy input and deviation from a set point temperature by 24% and 17% as compared to a PID controller, and by 13% and 8% as compared to a PSD controller, respectively. It is also shown that the F-PSD performance is better than that of the F-PID controller.
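As a rough sketch of the F-PID idea, the fragment below schedules PID gains from two invented fuzzy-style membership degrees and accumulates the two quadratic costs on a first-order plant. The plant dynamics, membership shapes, and gain ranges are all assumptions, not the authors' system.

```python
def fuzzy_gains(error, d_error):
    """Crude fuzzy-style gain scheduling: a large error strengthens
    proportional action, a fast-changing error adds damping.
    Membership shapes are invented for illustration."""
    big = min(abs(error) / 5.0, 1.0)     # degree of "error is big"
    fast = min(abs(d_error) / 2.0, 1.0)  # degree of "error changes fast"
    kp = 1.0 + 2.0 * big
    ki = 0.2 * (1.0 - 0.5 * fast)
    kd = 0.5 + 1.0 * fast
    return kp, ki, kd

def simulate(setpoint=22.0, t0=15.0, steps=200, dt=0.1):
    """First-order room-temperature plant driven by a fuzzy-tuned PID;
    returns the final temperature and the two quadratic costs
    (energy input and setpoint deviation)."""
    temp, integ, prev_err = t0, 0.0, setpoint - t0
    cost_u, cost_e = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - temp
        derr = (err - prev_err) / dt
        kp, ki, kd = fuzzy_gains(err, derr)
        integ += err * dt
        u = max(0.0, kp * err + ki * integ + kd * derr)  # heating power
        temp += dt * (-0.1 * (temp - 15.0) + 0.05 * u)   # invented plant
        cost_u += u * u * dt
        cost_e += err * err * dt
        prev_err = err
    return temp, cost_u, cost_e

temp, cost_u, cost_e = simulate()
```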

  5. Development of a methodology of analysis of instabilities in BWR reactors; Desarrollo de una metodologia de analisis de inestabilidades en reactores PWR

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Fenoll, M.; Abarca, A.; Barrachina, T.; Miro, R.; Verdu, G.

    2012-07-01

    This paper presents a methodology for the analysis of instabilities in BWR-type reactors. The methodology covers modal analysis of the operating point, signal analysis techniques, and the simulation of transients with the 3D coupled RELAP5/PARCSv2.7 code.

  6. Archetype modeling methodology.

    Science.gov (United States)

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although there exist many experiences about using archetypes in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can be also a reference for CIMs development using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Simulation of the lateral pole-impact on a crash simulation system; Simulation des seitlichen Pfahlaufpralls auf einer Katapultanlage

    Energy Technology Data Exchange (ETDEWEB)

    Hoegner, C.; Gajewski, M.; Zippel, I. [ACTS GmbH und Co. KG, Sailauf (Germany)

    2001-07-01

    The test set-up for the simulation of a lateral pole impact has the following characteristics: - Simulation of pole impact procedures Euro-NCAP and others - Very realistic reproduction of the deformation performance of side panels (with simulated or original part assemblies) - Consideration of the dynamic displacement of the cant rail - Seat displacement and deformation - Realistic depiction of the vehicle displacement (i.e. car-to-pole) - Test velocity: up to 50 km/h - Intrusion: up to 550 mm (more also possible on demand) - Pole diameter 254 mm (variable). This test system design results in variable set-up possibilities: - Test-set up possible from very simple (on the basis of simulation data without real parts) to very complex (use of side panels and interior door trim) - Support of different protection systems (window bag and side bag) - A maximum of two occupants feasible (set up weight max. 300 kg) - Optimal camera perspectives, stationary or onboard - Front as well as rear side impact simulation possible - Front-rear interaction of protection systems feasible (two dummies). A test system has been realised with which the complex process in a lateral pole impact can be simulated with excellent approximation and with relatively simple means. Due to the avoidance of point validation this methodology can be ideally implemented for development tests. (orig.)

  8. MHSS: a material handling system simulator

    Energy Technology Data Exchange (ETDEWEB)

    Pomernacki, L.; Hollstien, R.B.

    1976-04-07

    A Material Handling System Simulator (MHSS) program is described that provides specialized functional blocks for modeling and simulation of nuclear material handling systems. Models of nuclear fuel fabrication plants may be built using functional blocks that simulate material receiving, storage, transport, inventory, processing, and shipping operations as well as the control and reporting tasks of operators or on-line computers. Blocks are also provided that allow the user to observe and gather statistical information on the dynamic behavior of simulated plants over single or replicated runs. Although it is currently being developed for the nuclear materials handling application, MHSS can be adapted to other industries in which material accountability is important. In this paper, emphasis is on the simulation methodology of the MHSS program with application to the nuclear material safeguards problem. (auth)

  9. Application of the integrated safety assessment methodology to the protection of electric systems

    International Nuclear Information System (INIS)

    Hortal, Javier; Izquierdo, Jose M.

    1996-01-01

    The generalization of classical techniques for risk assessment to incorporate dynamic effects is the main objective of the Integrated Safety Assessment Methodology, a practical implementation of Protection Theory. Transient stability, contingency analysis and protection setpoint verification in electric power systems are particularly appropriate domains of application, since the coupling of reliability and dynamic analysis in the protection assessment process is increasingly demanded. Suitable techniques for the dynamic simulation of sequences of switching events in power systems are derived from the use of quasi-linear equation solution algorithms. The application of the methodology is illustrated step by step in a simple but representative example

  10. Simulation of liquids and solids

    International Nuclear Information System (INIS)

    Ciccotti, G.; Frenkel, D.; McDonald, I.R.

    1987-01-01

    This book is a collection of key reprints of papers on the computer simulation of statistical-mechanical systems, introduced and commented upon by the editors. The papers provide the reader with a complete and comprehensive source book. It enables both the new and the experienced practitioner of computer simulation to learn and use the various methods and to understand their possibilities and restrictions. The balanced view of the history and methodology of the subject and the presence of notes and comments greatly enhance the value of the book as reference material

  11. Development of an Improved Methodology to Assess Potential Unconventional Gas Resources

    International Nuclear Information System (INIS)

    Salazar, Jesus; McVay, Duane A.; Lee, W. John

    2010-01-01

    Considering the important role played today by unconventional gas resources in North America and their enormous potential for the future around the world, it is vital to both policy makers and industry that the volumes of these resources and the impact of technology on these resources be assessed. To provide for optimal decision making regarding energy policy, research funding, and resource development, it is necessary to reliably quantify the uncertainty in these resource assessments. Since the 1970s, studies to assess potential unconventional gas resources have been conducted by various private and governmental agencies, the most rigorous of which was by the United States Geological Survey (USGS). The USGS employed a cell-based, probabilistic methodology which used analytical equations to calculate distributions of the resources assessed. USGS assessments have generally produced distributions for potential unconventional gas resources that, in our judgment, are unrealistically narrow for what are essentially undiscovered, untested resources. In this article, we present an improved methodology to assess potential unconventional gas resources. Our methodology is a stochastic approach that includes Monte Carlo simulation and correlation between input variables. Application of the improved methodology to the Uinta-Piceance province of Utah and Colorado with USGS data validates the means and standard deviations of resource distributions produced by the USGS methodology, but reveals that these distributions are not right skewed, as expected for a natural resource. Our investigation indicates that the unrealistic shape and width of the gas resource distributions are caused by the use of narrow triangular input parameter distributions. The stochastic methodology proposed here is more versatile and robust than the USGS analytic methodology. Adoption of the methodology, along with a careful examination and revision of input distributions, should allow a more realistic
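The stochastic element the authors add, Monte Carlo simulation with correlation between input variables, can be sketched as below: two lognormal inputs share a common normal factor, and the resulting resource distribution comes out right-skewed (mean above median), as expected for a natural resource. All distributions and parameter values are invented for illustration, not the Uinta-Piceance inputs.

```python
import math
import random
import statistics

def sample_resource(rng, rho=0.6):
    """One Monte-Carlo draw of potential gas resources (Bcf).
    Cell count and per-cell recovery share a common normal factor,
    which induces positive correlation rho between the two inputs."""
    z_common = rng.gauss(0, 1)
    z1 = rho * z_common + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    z2 = rho * z_common + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    n_cells = math.exp(7.0 + 0.3 * z1)        # lognormal cell count
    eur_per_cell = math.exp(-1.0 + 0.8 * z2)  # lognormal EUR per cell, Bcf
    frac_productive = rng.uniform(0.1, 0.5)   # untested-cell success fraction
    return n_cells * frac_productive * eur_per_cell

rng = random.Random(1)
draws = [sample_resource(rng) for _ in range(20000)]
mean = statistics.fmean(draws)
median = statistics.median(draws)
right_skewed = mean > median   # the skew the narrow triangular inputs miss
```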

  12. Design and development of driving simulator scenarios for road validation studies

    Energy Technology Data Exchange (ETDEWEB)

    Dols Ruiz, J.F.

    2016-07-01

    In recent years, the number of road-based studies using driving simulators has grown significantly, since simulators allow the evaluation of controlled situations that would otherwise require observations disproportionate in time and/or cost. The Institute of Design and Manufacturing (IDF) of the Polytechnic University of Valencia (UPV) has developed, in collaboration with the Highway Engineering Research Group (GIIC) of the UPV, a low-cost simulator that allows the rapid and effective implementation of a new methodology for validation studies of different roads, based on reproducing existing roads as simulator scenarios. This methodology allows the development of new scenarios based on the analysis of a layer-based file system, where each layer includes different information about the road, such as mapping, geometry, signaling, aerial photos, etc. The creation of the simulated scenario is very fast, being based on the geometric design software, which makes it easier for consulting firms using the system to evaluate and audit a particular route and obtain reliable conclusions at minimal cost, even if the road is not actually built. This paper describes the basic structure of the layers generated for developing scenarios and guidelines for their implementation. Finally, the application of this methodology to a successful case is described. (Author)

  13. CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Radomir Radiša

    2017-12-01

    Full Text Available This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part’s feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and the feeding system optimization in the casting system of the Pelton turbine bucket.
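Of the four metaheuristics listed, simulated annealing is the simplest to sketch. The fragment below anneals a cylindrical feeder's diameter and height to minimize volume under a Chvorinov-style modulus constraint; the geometry, the 1.2 safety factor, and the cooling schedule are illustrative assumptions, not the paper's setup.

```python
import math
import random

def feeder_volume(d, h):
    """Volume of a cylindrical feeder of diameter d and height h."""
    return math.pi * d * d / 4 * h

def feeder_modulus(d, h):
    """Cylinder modulus V/A (side and bottom surfaces; the top face is
    ignored for simplicity)."""
    area = math.pi * d * h + math.pi * d * d / 4
    return feeder_volume(d, h) / area

def cost(d, h, casting_modulus=1.0):
    """Feeder volume, penalized when the modulus falls below the
    Chvorinov-style requirement (the 1.2 factor is an assumption)."""
    deficit = max(0.0, 1.2 * casting_modulus - feeder_modulus(d, h))
    return feeder_volume(d, h) + 1e4 * deficit

def anneal(seed=0, steps=5000, t0=100.0):
    """Plain simulated annealing over (d, h) with a linear cooling
    schedule; returns the best (cost, d, h) found."""
    rng = random.Random(seed)
    d, h = 10.0, 10.0
    c = cost(d, h)
    best = (c, d, h)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-3
        nd = max(0.5, d + rng.gauss(0, 0.3))
        nh = max(0.5, h + rng.gauss(0, 0.3))
        nc = cost(nd, nh)
        # Accept improvements always; accept worse moves with
        # the Metropolis probability exp(-delta / temp).
        if nc < c or rng.random() < math.exp((c - nc) / temp):
            d, h, c = nd, nh, nc
            if c < best[0]:
                best = (c, d, h)
    return best

c, d, h = anneal()
```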

  14. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    contraceptive innovations in the Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were...Analysis of the Coevolution of Adolescents ’ Friendship Networks, Taste in Music, and Alcohol Consumption. Methodology, 2: 48-56. Tichy, N.M., Tushman

  15. Bolometer Simulation Using SPICE

    Science.gov (United States)

    Jones, Hollis H.; Aslam, Shahid; Lakew, Brook

    2004-01-01

    A general model is presented that assimilates the thermal and electrical properties of the bolometer; this block model demonstrates the effect of Electro-Thermal Feedback (ETF) on the bolometer's performance. This methodology is used to construct a SPICE model that, by way of analogy, combines the thermal and electrical phenomena into one simulation session. The resulting circuit diagram is presented and discussed.

  16. Monitoring and diagnosis for sensor fault detection using GMDH methodology

    International Nuclear Information System (INIS)

    Goncalves, Iraci Martinez Pereira

    2006-01-01

    The fault detection and diagnosis system is an Operator Support System dedicated to specific functions that alerts operators to sensor and actuator fault problems and guides them in the diagnosis before the normal alarm limits are reached. Operator Support Systems appeared as a way to reduce the panel complexity caused by the increase of the information available in nuclear power plant control rooms. In this work a Monitoring and Diagnosis System was developed based on the GMDH (Group Method of Data Handling) methodology and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing GMDH-model-calculated values with measured values. The methodology was first applied to theoretical models: a heat exchanger model and a theoretical model of the IPEN reactor. The results obtained with the theoretical models provided a basis for applying the methodology to actual reactor operation data. Three GMDH models were developed for actual operation data monitoring: the first using just the thermal process variables, the second also considering some nuclear variables, and the third considering all the reactor variables. The three models presented excellent results, showing the viability of using the methodology to monitor the operation data. The comparison between the three developed models also shows the capacity of the methodology to choose by itself the best set of input variables for model optimization. For the system diagnosis implementation, faults were simulated in the actual temperature variable values by adding a step change. The fault values correspond to a typical temperature descalibration, and the result of monitoring the faulty data was then used to build a simple diagnosis system based on fuzzy logic. (author)
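The monitoring principle, predicting one sensor from correlated ones and flagging measurements whose residual exceeds a threshold, can be sketched with a plain least-squares model standing in for the GMDH polynomial network. The sensor relationship, noise levels, and the +2 degC step fault below are invented for illustration.

```python
import random
import statistics

def fit_linear(xs, ys):
    """Least-squares line fit, a stand-in for the GMDH model:
    predict one sensor from a correlated one."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

rng = random.Random(0)
# Training data: two correlated process variables, fault-free.
flow = [50 + rng.gauss(0, 1) for _ in range(200)]
temp = [0.4 * f + 10 + rng.gauss(0, 0.2) for f in flow]
a, b = fit_linear(flow, temp)

# Residual threshold from the fault-free monitoring residuals.
resid = [t - (a + b * f) for f, t in zip(flow, temp)]
threshold = 4 * statistics.stdev(resid)

# Monitoring: inject a +2 degC step descalibration in the sensor.
f_new = 50 + rng.gauss(0, 1)
t_faulty = 0.4 * f_new + 10 + 2.0
fault_detected = abs(t_faulty - (a + b * f_new)) > threshold
```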

  17. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.; Fuentes-Moreno, J. A.; Muljadi, Eduard; Gomez-Lazaro, E.

    2015-09-14

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.

  18. FLARE-LIKE VARIABILITY OF THE Mg II {lambda}2800 EMISSION LINE IN THE {gamma}-RAY BLAZAR 3C 454.3

    Energy Technology Data Exchange (ETDEWEB)

    Leon-Tavares, J. [Finnish Centre for Astronomy with ESO (FINCA), University of Turku, Vaeisaelaentie 20, FI-21500 Piikkioe (Finland); Chavushyan, V.; Patino-Alvarez, V.; Carraminana, A.; Carrasco, L. [Instituto Nacional de Astrofisica Optica y Electronica (INAOE), Apartado Postal 51 y 216, 72000 Puebla (Mexico); Valtaoja, E. [Tuorla Observatory, Department of Physics and Astronomy, University of Turku, FI-20100 Turku (Finland); Arshakian, T. G. [I. Physikalisches Institut, Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Popovic, L. C. [Astronomical Observatory, Volgina 7, 11160 Belgrade 74 (Serbia); Tornikoski, M.; Laehteenmaeki, A. [Aalto University Metsaehovi Radio Observatory, Metsaehovintie 114, FI-02540 Kylmaelae (Finland); Lobanov, A. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany)

    2013-02-01

    We report the detection of a statistically significant flare-like event in the Mg II {lambda}2800 emission line of 3C 454.3 during the outburst of autumn 2010. The highest levels of emission line flux recorded over the monitoring period (2008-2011) coincide with a superluminal jet component traversing through the radio core. This finding crucially links the broad emission line fluctuations to the non-thermal continuum emission produced by relativistically moving material in the jet and hence to the presence of broad-line region clouds surrounding the radio core. If the radio core were located at several parsecs from the central black hole, then our results would suggest the presence of broad-line region material outside the inner parsec where the canonical broad-line region is envisaged to be located. We briefly discuss the implications of broad emission line material ionized by non-thermal continuum in the context of virial black hole mass estimates and gamma-ray production mechanisms.

  19. Optical emission line spectra of Seyfert galaxies and radio galaxies

    International Nuclear Information System (INIS)

    Osterbrock, D.E.

    1978-01-01

    Many radio galaxies have strong emission lines in their optical spectra, similar to the emission lines in the spectra of Seyfert galaxies. The range of ionization extends from [O I] and [N I] through [Ne V] and [Fe VII] to [Fe X]. The emission-line spectra of radio galaxies divide into two types, narrow-line radio galaxies whose spectra are indistinguishable from Seyfert 2 galaxies, and broad-line radio galaxies whose spectra are similar to Seyfert 1 galaxies. However on the average the broad-line radio galaxies have steeper Balmer decrements, stronger [O III] and weaker Fe II emission than the Seyfert 1 galaxies, though at least one Seyfert 1 galaxy not known to be a radio source has a spectrum very similar to typical broad-line radio galaxies. Intermediate-type Seyfert galaxies exist that show various mixtures of the Seyfert 1 and Seyfert 2 properties, and the narrow-line or Seyfert 2 property seems to be strongly correlated with radio emission. (Auth.)

  20. Line profile variations in selected Seyfert galaxies

    International Nuclear Information System (INIS)

    Kollatschny, W; Zetzl, M; Ulbrich, K

    2010-01-01

    Continua as well as the broad emission lines in Seyfert 1 galaxies vary in different galaxies with different amplitudes on typical timescales of days to years. We present the results of two independent variability campaigns taken with the Hobby-Eberly Telescope. We studied in detail the integrated line and continuum variations in the optical spectra of the narrow-line Seyfert galaxy Mrk 110 and the very broad-line Seyfert galaxy Mrk 926. The broad-line emitting region in Mrk 110 has radii of four to 33 light-days as a function of the ionization degree of the emission lines. The line-profile variations are matched by Keplerian disk models with some accretion disk wind. The broad-line region in Mrk 926 is very small showing an extension of two to three light-days only. We could detect a structure in the rms line-profiles as well as in the response of the line profile segments of Mrk 926 indicating the BLR is structured.

  1. Methodologies for Wind Turbine and STATCOM Integration in Wind Power Plant Models for Harmonic Resonances Assessment

    DEFF Research Database (Denmark)

    Freijedo Fernandez, Francisco Daniel; Chaudhary, Sanjay Kumar; Guerrero, Josep M.

    2015-01-01

    This paper approaches modelling methodologies for the integration of wind turbines and STATCOM in harmonic resonance studies. Firstly, an admittance equivalent model representing the harmonic signature of grid-connected voltage source converters is provided. A simplified type IV wind turbine modelling is then straightforward. This linear modelling is suitable to represent the wind turbine in the range of frequencies at which harmonic interactions are likely. Even though the admittance method is suitable for both frequency and time domain studies, some limitations arise in practice when implementing it in the time-domain. As an alternative, a power-based averaged modelling is also proposed. Type IV wind turbine harmonic signature and STATCOM active harmonic mitigation are considered for the simulation case studies. Simulation results provide a good insight into the features and limitations of the proposed methodologies.

  2. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    Science.gov (United States)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of the current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on a recent high-quality acoustic database is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need to develop further for this method to become useful. Nonetheless, the authors propose that the method based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  3. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Full Text Available Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
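The plug-in method referred to throughout is simple enough to state in full: estimate the empirical distribution of length-k words and divide their entropy by k. The word length and the two test processes below are arbitrary choices for illustration.

```python
import math
import random
from collections import Counter

def plugin_entropy_rate(bits, word_len=4):
    """Plug-in estimator: empirical entropy of overlapping
    word_len-blocks, divided by word_len (bits per symbol)."""
    words = [tuple(bits[i:i + word_len])
             for i in range(len(bits) - word_len + 1)]
    n = len(words)
    counts = Counter(words)
    h_block = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h_block / word_len

rng = random.Random(7)
fair = [rng.randint(0, 1) for _ in range(20000)]               # H = 1 bit
biased = [1 if rng.random() < 0.9 else 0 for _ in range(20000)]  # H ~ 0.47
h_fair = plugin_entropy_rate(fair)
h_biased = plugin_entropy_rate(biased)
```

As the survey notes, the fixed word length is the weak point: any dependence at ranges longer than word_len symbols is invisible to this estimator.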

  4. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.

  5. Understanding Contamination; Twenty Years of Simulating Radiological Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Emily Snyder; John Drake; Ryan James

    2012-02-01

    A wide variety of simulated contamination methods have been developed by researchers to reproducibly test radiological decontamination methods. Some twenty years ago, a method of non-radioactive contamination simulation was proposed at the Idaho National Laboratory (INL) that mimicked the character of radioactive cesium and zirconium contamination on stainless steel. It involved baking the contamination into the surface of the stainless steel in order to 'fix' it into a tenacious, tightly bound oxide layer. This type of contamination was particularly applicable to nuclear processing facilities (and nuclear reactors), where oxide growth and exchange of radioactive materials within the oxide layer became the predominant model for material/contaminant interaction. Additional simulation methods and their empirically derived basis (from a nuclear fuel reprocessing facility) are discussed. In the last ten years the INL, working with the Defense Advanced Research Projects Agency (DARPA) and the National Homeland Security Research Center (NHSRC), has continued to develop contamination simulation methodologies. The most notable of these newer methodologies was developed to compare the efficacy of different decontamination technologies against radiological dispersal device (RDD, 'dirty bomb') contamination. There are many different scenarios for how RDD contamination may be spread, but the one most commonly used at the INL involves the dispersal of an aqueous solution containing radioactive Cs-137. This method was chosen during the DARPA projects, has continued through the NHSRC series of decontamination trials, and also gives a tenacious 'fixed' contamination. Much has been learned throughout these tests about the interaction of cesium contamination with building materials, particularly concrete. The effects of porosity, the cation-exchange capacity of the material, and the amount of dirt and debris on the surface are very important factors.

  6. Nuclear power plant training simulator fidelity assessment

    International Nuclear Information System (INIS)

    Carter, R.J.; Laughery, K.R.

    1985-01-01

    The fidelity assessment portion of a methodology for evaluating nuclear power plant simulation facilities, in regard to their appropriateness for conducting the Nuclear Regulatory Commission's operating test, is described. The need for fidelity assessment, data sources, and the fidelity data to be collected are addressed. Fidelity data recording, collection, and analysis are discussed. The processes for drawing conclusions from the fidelity assessment and for evaluating the adequacy of the simulator control-room layout are presented. 3 refs

  7. Simulation software: engineer processes before reengineering.

    Science.gov (United States)

    Lepley, C J

    2001-01-01

    People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted effort. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer-aided simulation to illustrate their use in making decisions to improve workflow design.

  8. Multilevel Simulation of Heterogeneous Reconfigurable Platforms

    Directory of Open Access Journals (Sweden)

    Damien Picard

    2009-01-01

    This paper presents a general system-level simulation and testing methodology for reconfigurable Systems-on-Chip, starting from behavioral specifications of system activities and proceeding to multilevel simulations of accelerated tasks running on the reconfigurable circuit. The system is based on a common object-oriented environment that offers valuable debugging and probing facilities as well as integrated testing features. Our system brings these benefits to hardware simulation, while enforcing validation through characterization tests and interoperability through on-demand connections to mainstream tools. This framework has been partially developed in the scope of the EU Morpheus project and is used to validate our contribution to the spatial design task.

  9. Simulation and Optimization Methodologies for Military Transportation Network Routing and Scheduling and for Military Medical Services

    National Research Council Canada - National Science Library

    Rodin, Ervin Y

    2005-01-01

    The purpose of the present research was to develop a generic model and methodology for analyzing and optimizing large-scale air transportation networks, including both their routing and their scheduling...

  10. Designing simulation experiments with controllable and uncontrollable factors for applications in healthcare

    DEFF Research Database (Denmark)

    Dehlendorff, Christian; Kulahci, Murat; Andersen, Klaus Kaae

    2011-01-01

    We propose a new methodology for designing computer experiments that was inspired by the split-plot designs that are often used in physical experimentation. The methodology has been developed for a simulation model of a surgical unit in a Danish hospital. We classify the factors as controllable and...

  11. Integrated building (and) airflow simulation: an overview

    NARCIS (Netherlands)

    Hensen, J.L.M.

    2002-01-01

    This paper aims to give a broad overview of building airflow simulation, and advocates that the essential ingredients for quality assurance are: domain knowledge; selection of appropriate level of resolution; calibration and validation; and a correct performance assessment methodology. Directions

  12. Assessing the Impact of Clothing and Individual Equipment (CIE) on Soldier Physical, Biomechanical, and Cognitive Performance Part 1: Test Methodology

    Science.gov (United States)

    2018-02-01


  13. Utilization of El Dabaa basic simulator for manpower development

    International Nuclear Information System (INIS)

    Wahab, W.A.; Abdel Hamid, S.B.

    1998-01-01

    Training with basic simulators is considered an essential tool for the training and retraining of nuclear power plant staff. To achieve that objective, the Nuclear Power Plants Authority (NPPA) has installed a basic training simulator for PWR and PHWR simulation at the El Dabaa site. The basic simulator simulates a 3-loop PWR-900 MW(e) and a 2-loop PHWR-600 MW(e). The simulator passed in-plant acceptance tests at CEA/CENG, Grenoble, France and successfully passed on-site completion tests at El Dabaa in November 1991. This paper presents the main features of the simulator: training capabilities, hardware configuration and software architecture. The training methodology adopted by NPPA and the training experience gained by using the simulator are also presented. (author)

  14. Simulating chemical reactions in ionic liquids using QM/MM methodology.

    Science.gov (United States)

    Acevedo, Orlando

    2014-12-18

    The use of ionic liquids as a reaction medium for chemical reactions has dramatically increased in recent years due in large part to the numerous reported advances in catalysis and organic synthesis. In some extreme cases, ionic liquids have been shown to induce mechanistic changes relative to conventional solvents. Despite the large interest in the solvents, a clear understanding of the molecular factors behind their chemical impact is largely unknown. This feature article reviews our efforts developing and applying mixed quantum and molecular mechanical (QM/MM) methodology to elucidate the microscopic details of how these solvents operate to enhance rates and alter mechanisms for industrially and academically important reactions, e.g., Diels-Alder, Kemp eliminations, nucleophilic aromatic substitutions, and β-eliminations. Explicit solvent representation provided the medium dependence of the activation barriers and atomic-level characterization of the solute-solvent interactions responsible for the experimentally observed "ionic liquid effects". Technical advances are also discussed, including a linear-scaling pairwise electrostatic interaction alternative to Ewald sums, an efficient polynomial fitting method for modeling proton transfers, and the development of a custom ionic liquid OPLS-AA force field.

  15. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    Science.gov (United States)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
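    The moisture-maximization step described above amounts to scaling an observed snowstorm by the ratio of the monthly maximum precipitable water to the storm's own precipitable water. A minimal sketch under that reading (the cap on the ratio is a common engineering practice assumed here, not a detail taken from this record):

```python
def moisture_maximized_snowfall(storm_snowfall_mm, storm_pw_mm, max_pw_mm, cap=2.0):
    """Scale an observed snowfall amount by the moisture-maximization
    ratio: monthly maximum precipitable water over the storm's
    precipitable water, capped to avoid unphysical amplification."""
    ratio = min(max_pw_mm / storm_pw_mm, cap)
    return storm_snowfall_mm * ratio

# A 40 mm snowstorm that occurred with 10 mm of precipitable water,
# maximized against a monthly maximum of 15 mm, yields 60 mm.
pmsa_candidate = moisture_maximized_snowfall(40.0, 10.0, 15.0)
```

    The non-stationary approach in the record would make `max_pw_mm` a function of time under the climate projection rather than a fixed climatological value.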

  16. A system for automatic evaluation of simulation software

    Science.gov (United States)

    Ryan, J. P.; Hodges, B. C.

    1976-01-01

    Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From the standpoint of software requirement methodology, each component of the verification system has some element of simulation to it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential of being effectively used in a simulation environment.

  17. Proceedings of the 17th IASTED international conference on modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wamkeue, R. (comp.) [Quebec Univ., Abitibi-Temiscaminque, PQ (Canada)

    2006-07-01

    The International Association of Science and Technology for Development (IASTED) hosted this conference to provide a forum for international researchers and practitioners interested in all areas of modelling and simulation. The conference featured 12 sessions entitled: (1) automation, control and robotics, (2) hydraulic and hydrologic modelling, (3) applications in processes and design optimization, (4) environmental systems, (5) biomedicine and biomechanics, (6) communications, computers and informatics 1, (7) economics, management and operations research 1, (8) modelling and simulation methodologies 1, (9) economics, management and operations research 2, (10) modelling, optimization, identification and simulation, (11) communications, computers and informatics 2, and, (12) modelling and simulation methodologies 2. Participants took the opportunity to present the latest research, results, and ideas in mathematical modelling; physically-based modelling; agent-based modelling; dynamic modelling; 3-dimensional modelling; computational geometry; time series analysis; finite element methods; discrete event simulation; web-based simulation; Monte Carlo simulation; simulation optimization; simulation uncertainty; fuzzy systems; data modelling; computer aided design; and, visualization. Case studies in engineering design were also presented along with simulation tools and languages. The conference also highlighted topical issues in environmental systems modelling such as air modelling and simulation, atmospheric modelling, hazardous materials, mobile source emissions, ecosystem modelling, hydrological modelling, aquatic ecosystems, terrestrial ecosystems, biological systems, agricultural modelling, terrain analysis, meteorological modelling, earth system modelling, climatic modelling, and natural resource management. The conference featured 110 presentations, of which 3 have been catalogued separately for inclusion in this database. refs., tabs., figs.

  18. Spent fuel reprocessing system availability definition by process simulation

    International Nuclear Information System (INIS)

    Holder, N.; Haldy, B.B.; Jonzen, M.

    1978-05-01

    To examine nuclear fuel reprocessing plant operating parameters such as maintainability, reliability, availability, equipment redundancy, and surge storage requirements, and their effect on plant throughput, a computer simulation model of integrated HTGR fuel reprocessing plant operations is being developed at General Atomic Company (GA). The simulation methodology and the status of the computer programming completed on reprocessing head-end systems are reported.

  19. Development of 3D pseudo pin-by-pin calculation methodology in ANC

    International Nuclear Information System (INIS)

    Zhang, B.; Mayhue, L.; Huria, H.; Ivanov, B.

    2012-01-01

    Advanced core and fuel assembly designs have been developed to improve the operational flexibility and economic performance of nuclear power plants and to further enhance their safety features. The simulation of these new designs, along with strongly heterogeneous fuel loadings, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analysis. Control rod insertion during normal operation is one operational feature of the AP1000® plant, the Westinghouse next-generation Pressurized Water Reactor (PWR) design. This feature improves operational flexibility and efficiency but significantly challenges conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assembly types with significantly different neutron spectra causes a strong interaction between assemblies that is not fully captured by current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pseudo Pin-by-Pin Calculation (P3C) methodology and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper. (authors)

  20. A generalized methodology for identification of threshold for HRU delineation in SWAT model

    Science.gov (United States)

    M, J.; Sudheer, K.; Chaubey, I.; Raj, C.

    2016-12-01

    The Soil and Water Assessment Tool (SWAT) is a comprehensive distributed hydrological model widely used to support a variety of decisions. The simulation accuracy of a distributed hydrological model depends on the mechanism used to subdivide the watershed. SWAT divides the watershed into sub-basins, and the sub-basins into small computing units known as hydrologic response units (HRUs). HRUs are delineated from unique combinations of land use, soil type, and slope within each sub-watershed, and are not spatially defined. SWAT performs its computations at the HRU level and aggregates them to the sub-basin outlet, which is then routed through the stream system. To reduce computation time, the modeler generally specifies threshold percentages of land use, soil, and slope for delineating HRUs; the thresholds constrain the minimum area that can form an HRU. In the current HRU delineation practice in SWAT, any land use, soil, or slope class within a sub-basin that covers less than the predefined threshold is absorbed into the dominant class, which introduces some ambiguity into the process simulations through an inappropriate representation of the area. The information lost as the threshold values vary depends strongly on the purpose of the study. This research therefore studies the effect of HRU-delineation threshold values on SWAT sediment simulations and suggests guidelines for selecting appropriate threshold values with sediment simulation accuracy in mind. A preliminary study was conducted on the Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considers both computational time and accuracy of the simulation.
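    The threshold behavior described in this record can be illustrated with a small sketch: classes below the threshold are dropped and their area is reapportioned among the remaining dominant classes. The proportional redistribution rule shown here is an assumption for illustration, not a statement of SWAT's exact internal algorithm:

```python
def apply_hru_threshold(area_fractions, threshold):
    """Drop land-use/soil/slope classes whose area fraction is below
    the threshold and renormalize so the remaining classes absorb
    the eliminated area proportionally."""
    kept = {k: v for k, v in area_fractions.items() if v >= threshold}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

# Hypothetical sub-basin: a 10% threshold eliminates the wetland class.
fractions = {"forest": 0.55, "pasture": 0.30, "urban": 0.10, "wetland": 0.05}
hrus = apply_hru_threshold(fractions, threshold=0.10)
```

    The "loss of information" the record discusses is visible here: the wetland class disappears entirely from the HRU set, even though it may matter for sediment simulation.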

  1. Radiological safety methodology in radioactive tracer applications for hydrodynamics and environmental studies

    International Nuclear Information System (INIS)

    Suarez, R.; Badano, A.; Dellepere, A.; Artucio, G.; Bertolotti, A.

    1995-01-01

    The use of radioactive tracer techniques to monitor sewage disposal contamination in the Montevideo estuary and at Carrasco beach has been studied for the National Direction of Nuclear Technology. Hydrodynamic model simulation was introduced as the working methodology. Radiological safety and the application of radioactive materials in environmental studies are evaluated, mainly in the conclusions and recommendations of this report. maps

  2. On the origin of gamma rays in Fermi blazars: beyond the broad line region.

    Science.gov (United States)

    Costamante, L.; Cutini, S.; Tosti, G.; Antolini, E.; Tramacere, A.

    2018-05-01

    The gamma-ray emission of broad-line blazars is generally explained as inverse Compton (IC) radiation from relativistic electrons in the jet scattering optical-UV photons from the Broad Line Region (BLR), the so-called BLR External Compton scenario. We test this scenario on the Fermi gamma-ray spectra of 106 broad-line blazars, selected as those detected with the highest significance or having the largest BLRs, by looking for the cut-off signatures at high energies expected from γ-γ interactions with BLR photons. We do not find evidence for the expected BLR absorption. For 2/3 of the sources, we can exclude any significant absorption (τmax 5). We conclude that for 9 out of 10 objects the jet does not interact with BLR photons. Gamma rays seem either to be produced outside the BLR most of the time, or the BLR is ˜100× larger than given by reverberation mapping. This means that: i) External Compton on BLR photons is disfavoured as the main gamma-ray mechanism, versus IC on IR photons from the torus or synchrotron self-Compton; ii) the Fermi gamma-ray spectrum is mostly intrinsic, determined by the interaction of the particle distribution with the seed-photon spectrum; iii) without suppression by the BLR, broad-line blazars can become copious emitters above 100 GeV, as demonstrated by 3C 454.3. We expect the CTA sky to be much richer in broad-line blazars than previously thought.

  3. The Use of Computer Simulation Gaming in Teaching Broadcast Economics.

    Science.gov (United States)

    Mancuso, Louis C.

    The purpose of this study was to develop a broadcast economic computer simulation and to ascertain how a lecture-computer simulation game compared as a teaching method with a more traditional lecture and case study instructional methods. In each of three sections of a broadcast economics course, a different teaching methodology was employed: (1)…

  4. An Alternative Analog Circuit Design Methodology Employing Integrated Artificial Intelligence Techniques

    Science.gov (United States)

    Tuttle, Jeffery L.

    In consideration of the computer processing power now available to the designer, an alternative analog circuit design methodology is proposed. Computer memory capacities no longer require the reduction of the transistor operational characteristics to an imprecise formulation. Therefore, it is proposed that transistor modelling be abandoned in favor of fully characterized transistor data libraries. Secondly, availability of the transistor libraries would facilitate an automated selection of the most appropriate device(s) for the circuit being designed. More specifically, a preprocessor computer program to a more sophisticated circuit simulator (e.g., SPICE) is developed to assist the designer in developing the basic circuit topology and selecting the most appropriate transistor. Once this is achieved, the circuit topology and selected transistor data library would be downloaded to the simulator for full circuit operational characterization and subsequent design modifications. It is recognized that the design process is enhanced by the use of heuristics as applied to iterative design results. Accordingly, an artificial intelligence (AI) interface is developed to assist the designer in applying the preprocessor results. To demonstrate the retrofittability of the AI interface to established programs, the interface is specifically designed to be as non-intrusive to the host code as possible. Implementation of the proposed methodology offers the potential to speed up the design process, since the preprocessor both minimizes the required number of simulator runs and provides a higher acceptance potential for the initial and subsequent simulator runs. Secondly, part count reductions may be realizable since the circuit topologies are not as strongly driven by transistor limitations. Thirdly, the predicted results should more closely match actual circuit operation since the inadequacies of the transistor models have been virtually eliminated. Finally, the AI interface

  5. ARIANNE. Analytical uncertainties. Simulation of influential factors in the final burnup inventory

    International Nuclear Information System (INIS)

    Morales Prieto, M.; Ortega Saiz, P.

    2011-01-01

    This work analyzes the analytical uncertainties of the process-simulation methodology used to obtain the final isotopic inventory of spent fuel; within the ARIANE experiment, it explores the burnup simulation part.

  6. On the Verification of a WiMax Design Using Symbolic Simulation

    Directory of Open Access Journals (Sweden)

    Gabriela Nicolescu

    2013-07-01

    In top-down multi-level design methodologies, design descriptions at higher levels of abstraction are incrementally refined into the final realizations. Simulation-based techniques have traditionally been used to verify that such model refinements do not change the design functionality. Unfortunately, with computer simulations it is not possible to completely check that a design transformation is correct in a reasonable amount of time, as the number of test patterns required to do so increases exponentially with the number of system state variables. In this paper, we propose a methodology for verifying that models generated at higher levels of abstraction in the design process conform to the design specifications. We model the system behavior using sequences of recurrence equations. We then use symbolic simulation together with equivalence checking and property checking techniques for design verification. Using our proposed method, we have verified the equivalence of three WiMax system models at different levels of design abstraction, and the correctness of various system properties on those models. Our symbolic modeling and verification experiments show that the proposed verification methodology provides a performance advantage over its numerical counterpart.

  7. Integrating Simulated Physics and Device Virtualization in Control System Testbeds

    OpenAIRE

    Redwood , Owen; Reynolds , Jason; Burmester , Mike

    2016-01-01

    Part 3: INFRASTRUCTURE MODELING AND SIMULATION; International audience; Malware and forensic analyses of embedded cyber-physical systems are tedious, manual processes that testbeds are commonly not designed to support. Additionally, attesting the physics impact of embedded cyber-physical system malware has no formal methodologies and is currently an art. This chapter describes a novel testbed design methodology that integrates virtualized embedded industrial control systems and physics simula...

  8. AEGIS methodology and a perspective from AEGIS methodology demonstrations

    International Nuclear Information System (INIS)

    Dove, F.H.

    1981-03-01

    The objectives of AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) are to develop the capabilities needed to assess the post-closure safety of waste isolation in geologic formations; demonstrate these capabilities on reference sites; apply the assessment methodology to assist the NWTS program in site selection and in waste package and repository design; and perform repository site analyses for the licensing needs of NWTS. This paper summarizes the AEGIS methodology and the experience gained from the methodology demonstrations, and provides an overview of the following areas: estimation of the response of a repository to perturbing geologic and hydrologic events; estimation of the transport of radionuclides from a repository to man; and assessment of uncertainties

  9. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Science.gov (United States)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  10. Experimentally-based optimization of contact parameters in dynamics simulation of humanoid robots

    NARCIS (Netherlands)

    Vivian, Michele; Reggiani, Monica; Sartori, Massimo

    2013-01-01

    With this work we introduce a novel methodology for the simulation of walking of a humanoid robot. Motion capture technology is used to calibrate the dynamics engine internal parameters and validate the simulated motor task. Results showed the calibrated contact model allows predicting dynamically

  11. Gas Turbine Blade Damper Optimization Methodology

    Directory of Open Access Journals (Sweden)

    R. K. Giridhar

    2012-01-01

    The friction damping concept is widely used to reduce resonance stresses in gas turbines. A friction damper has been designed for the high-pressure turbine stage of a turbojet engine. The objective of this work is to determine the effectiveness of the damper in minimizing resonant stresses for sixth and ninth engine-order excitation of the first flexural mode. This paper presents a methodology that combines three essential phases of friction damping optimization in turbomachinery. The first phase is to develop an analytical model of the blade-damper system. The second phase is the experimentation and model tuning necessary for response studies, while the third phase is evaluating damper performance. A reduced model of the blade, corresponding to the mode under investigation and incorporating the friction damper, is developed, and simulations were carried out to arrive at an optimum design point of the damper. Bench tests were carried out in two phases: Phase 1 deals with dynamic characterization of the blade, and Phase 2 with finding the optimal normal load at which the blade's resonant response is minimal for a given excitation. The test results are discussed and corroborated with simulated results, with which they are in good agreement.

  12. Clinical trial methodology

    National Research Council Canada - National Science Library

    Peace, Karl E; Chen, Ding-Geng

    2011-01-01

    ... in the pharmaceutical industry, Clinical trial methodology emphasizes the importance of statistical thinking in clinical research and presents the methodology as a key component of clinical research...

  13. Methodology for interpretation of fissile mass flow measurements

    International Nuclear Information System (INIS)

    March-Leuba, J.; Mattingly, J.K.; Mullens, J.A.

    1997-01-01

    This paper describes a non-intrusive measurement technique to monitor the mass flow rate of fissile material in gaseous or liquid streams. This fissile mass flow monitoring system determines the fissile mass flow rate by relying on two independent measurements: (1) a time delay along a given length of pipe, which is inversely proportional to the fissile material flow velocity, and (2) an amplitude measurement, which is proportional to the fissile concentration (e.g., grams of 235U per length of pipe). The development of this flow monitor was first funded by DOE/NE in September 1995, and an initial experimental demonstration by ORNL was described at the 37th INMM meeting held in July 1996. This methodology was chosen by DOE/NE for implementation in November 1996; it has been implemented in hardware/software and is ready for installation. This paper describes the methodology used to interpret the data measured by the fissile mass flow monitoring system and the models used to simulate the transport of fission fragments from the source location to the detectors
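    The two independent measurements described above combine into a mass flow rate in a straightforward way: the time delay gives the flow velocity, and the amplitude gives the linear fissile concentration. A minimal sketch under that reading (names and units are illustrative, not from the paper):

```python
def fissile_mass_flow(pipe_length_m, time_delay_s, conc_g_per_m):
    """Mass flow rate (g/s) from the two measurements: velocity is
    the pipe length divided by the measured time delay; multiplying
    by the linear fissile concentration (g of 235U per metre of
    pipe) gives the mass flow."""
    velocity_m_per_s = pipe_length_m / time_delay_s
    return velocity_m_per_s * conc_g_per_m

# 2 m of pipe traversed in 4 s, at 3 g of 235U per metre -> 1.5 g/s.
flow = fissile_mass_flow(2.0, 4.0, 3.0)
```

    Note how the two error sources separate: uncertainty in the time delay affects only the velocity factor, while uncertainty in the amplitude calibration affects only the concentration factor.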

  14. An atomistic methodology of energy release rate for graphene at nanoscale

    International Nuclear Information System (INIS)

    Zhang, Zhen; Lee, James D.; Wang, Xianqiao

    2014-01-01

    Graphene is a single layer of carbon atoms packed into a honeycomb architecture, serving as a fundamental building block for electronic devices. Understanding the fracture mechanism of graphene under various conditions is crucial for tailoring the electrical and mechanical properties of graphene-based devices at the atomic scale. Although most fracture mechanics concepts, such as stress intensity factors, are not applicable in molecular dynamics simulation, the energy release rate remains a feasible and crucial physical quantity for characterizing the fracture mechanical properties of materials at the nanoscale. This work introduces an atomistic simulation methodology, based on the energy release rate, as a tool to unveil the fracture mechanism of graphene at the nanoscale. This methodology can be easily extended to any atomistic material system. We have investigated both the opening mode and the mixed mode at different temperatures. Simulation results show that the critical energy release rate of graphene is independent of the initial crack length at low temperature. Graphene with an inclined pre-crack possesses higher fracture strength and fracture deformation but a smaller critical energy release rate compared with graphene with a vertical pre-crack. Owing to its anisotropy, graphene with armchair chirality always has a greater critical energy release rate than graphene with zigzag chirality. The increase of temperature leads to a reduction of the fracture strength, fracture deformation, and critical energy release rate of graphene. Higher temperature also brings greater scatter in the energy release rate of graphene across a variety of predefined crack lengths. The energy release rate is independent of the strain rate as long as the strain rate is sufficiently small.
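The central quantity can be illustrated with a finite-difference sketch: run two simulations with slightly different crack areas and take G = -ΔU/ΔA. The energy values below are synthetic stand-ins for the potential energies an MD code would report, not results from the paper.

```python
# Minimal sketch of the energy-release-rate idea: estimate G = -dU/dA by
# finite difference between two simulations with slightly different crack
# areas. Energies here are synthetic, purely for illustration.

def energy_release_rate(U_short, U_long, dA):
    """G = -(U_long - U_short) / dA, energy released per unit new crack area."""
    return -(U_long - U_short) / dA

# Synthetic example: extending the crack by 0.2 nm^2 lowers the strain
# energy from -1000.0 eV to -1001.5 eV.
G = energy_release_rate(-1000.0, -1001.5, 0.2)
print(f"G = {G:.2f} eV/nm^2")  # -(-1.5) / 0.2 = 7.50
```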

  15. Methodology for analysis and simulation of large multidisciplinary problems

    Science.gov (United States)

    Russell, William C.; Ikeda, Paul J.; Vos, Robert G.

    1989-01-01

    The Integrated Structural Modeling (ISM) program is being developed for the Air Force Weapons Laboratory and will be available for Air Force work. Its goal is to provide a design, analysis, and simulation tool intended primarily for directed energy weapons (DEW), kinetic energy weapons (KEW), and surveillance applications. The code is designed to run on DEC (VMS and UNIX), IRIS, Alliant, and Cray hosts. Several technical disciplines are included in ISM, namely structures, controls, optics, thermal, and dynamics. Four topics from the broad ISM goal are discussed. The first is project configuration management and includes two major areas: the software and database arrangement and the system model control. The second is interdisciplinary data transfer and refers to exchange of data between various disciplines such as structures and thermal. Third is a discussion of the integration of component models into one system model, i.e., multiple discipline model synthesis. Last is a presentation of work on a distributed processing computing environment.

  16. Case-Crossover Analysis of Air Pollution Health Effects: A Systematic Review of Methodology and Application

    Science.gov (United States)

    Carracedo-Martínez, Eduardo; Taracido, Margarita; Tobias, Aurelio; Saez, Marc; Figueiras, Adolfo

    2010-01-01

    Background Case-crossover is one of the most used designs for analyzing the health-related effects of air pollution. Nevertheless, no one has reviewed its application and methodology in this context. Objective We conducted a systematic review of case-crossover (CCO) designs used to study the relationship between air pollution and morbidity and mortality, from the standpoint of methodology and application. Data sources and extraction A search was made of the MEDLINE and EMBASE databases. Reports were classified as methodologic or applied. From the latter, the following information was extracted: author, study location, year, type of population (general or patients), dependent variable(s), independent variable(s), type of CCO design, and whether effect modification was analyzed for variables at the individual level. Data synthesis The review covered 105 reports that fulfilled the inclusion criteria. Of these, 24 addressed methodological aspects, and the remainder involved the design’s application. In the methodological reports, the designs that yielded the best results in simulation were symmetric bidirectional CCO and time-stratified CCO. Furthermore, we observed an increase across time in the use of certain CCO designs, mainly symmetric bidirectional and time-stratified CCO. The dependent variables most frequently analyzed were those relating to hospital morbidity; the pollutants most often studied were those linked to particulate matter. Among the CCO-application reports, 13.6% studied effect modification for variables at the individual level. Conclusions The use of CCO designs has undergone considerable growth; the most widely used designs were those that yielded better results in simulation studies: symmetric bidirectional and time-stratified CCO. However, the advantages of CCO as a method of analysis of variables at the individual level are put to little use. PMID:20356818
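As a concrete illustration of the time-stratified design the review highlights, the control days for an event day can be chosen as the other days in the same calendar month that fall on the same weekday. A minimal sketch:

```python
# Time-stratified referent selection for a case-crossover analysis:
# controls are the same-weekday days in the same year-month as the event.
import datetime

def time_stratified_controls(event_day):
    """Return control dates: same month, same weekday, excluding the event day."""
    controls = []
    day = event_day.replace(day=1)
    while day.month == event_day.month:
        if day.weekday() == event_day.weekday() and day != event_day:
            controls.append(day)
        day += datetime.timedelta(days=1)
    return controls

event = datetime.date(2010, 3, 17)           # a Wednesday
print([d.isoformat() for d in time_stratified_controls(event)])
```

Because referents are drawn symmetrically from a fixed stratum, this design avoids the overlap bias of unidirectional referent selection, which is one reason it performed well in the simulation studies cited above.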

  17. Modelling and Simulation Methodology for Dynamic Resources Assignment System in Container Terminal

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available As competition among international container terminals has become increasingly fierce, every port is striving to maintain its competitive edge and provide satisfactory services to port users. Thanks to advances in information technology, many efforts are being made to raise port competitiveness through advanced operation systems, and judging from the viewpoint of investment effect, these efforts are preferable to infrastructure expansion and additional equipment acquisition. Based on simulation, this study has tried to prove that RFID-based real-time location system (RTLS) data collection and dynamic operation of transfer equipment have a positive effect on productivity improvement and resource utilization enhancement. Moreover, this study examined the demand for real-time data in container terminal operation; operation processes were redesigned along with the collection of related data, and simulations were conducted on that basis. As a result, substantially higher productivity improvement can be expected.

  18. Synchronization of autonomous objects in discrete event simulation

    Science.gov (United States)

    Rogers, Ralph V.

    1990-01-01

    Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.
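A toy sketch of the spatial-blackboard idea, assuming piecewise-constant velocities (an assumption made for this example): each autonomous object posts its trajectory parameters to a shared blackboard when an event changes its motion, and the event-driven scheduler can then query an exact position at any continuous time without time-stepping.

```python
# Toy event-driven simulation with a "spatial blackboard": objects post
# (start_time, start_position, velocity) segments; positions at arbitrary
# times are reconstructed analytically instead of by time-stepping.
import heapq

blackboard = {}   # object id -> (start_time, start_pos, velocity)

def position(obj, t):
    """Query the blackboard for an object's position at time t >= last post."""
    t0, x0, v = blackboard[obj]
    return x0 + v * (t - t0)

events = []       # (time, obj, new_velocity): velocity-change events
heapq.heappush(events, (0.0, "a", 2.0))
heapq.heappush(events, (5.0, "a", 0.5))

while events:
    t, obj, v = heapq.heappop(events)
    x = position(obj, t) if obj in blackboard else 0.0
    blackboard[obj] = (t, x, v)   # post the new trajectory segment

print(position("a", 8.0))   # 2.0*5 + 0.5*3 = 11.5
```

Synchronization issues (e.g., detecting when two objects' trajectory segments will intersect and scheduling that as a future event) are exactly the challenges the paper outlines; this sketch shows only the query side.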

  19. Learning Empathy Through Simulation: A Systematic Literature Review.

    Science.gov (United States)

    Bearman, Margaret; Palermo, Claire; Allen, Louise M; Williams, Brett

    2015-10-01

    Simulation is increasingly used as an educational methodology for teaching empathy to preservice health professional students. This systematic review aimed to determine if and how simulation, including games, simulated patients, and role-play, might develop empathy and empathetic behaviors in learners. Eleven databases or clearing houses including MEDLINE, EMBASE, CINAHL, PsychInfo, and ERIC were searched for all articles published from any date until May 2014, using terms relating to (i) preservice health professional students, (ii) simulation, and (iii) empathy. Twenty-seven studies met the inclusion criteria, including 9 randomized controlled trials. A narrative synthesis suggests that simulation may be an appropriate method to teach empathy to preservice health professional students and identifies the value of the learner taking the role of the patient.

  20. Distributed Mission Operations Within-Simulator Training Effectiveness Baseline Study. Volume 5. Using the Pathfinder Methodology to Assess Pilot Knowledge Structure Changes

    National Research Council Canada - National Science Library

    Schreiber, Brian T; DiSalvo, Pam; Stock, William A; Bennett, Jr., Winston

    2006-01-01

    ... collection methodology both before and after five days of DMO training. The Pathfinder methodology is a qualitative/quantitative method that can be used to assess if the pilots' underlying knowledge structures (i.e...

  1. An approach to evaluate the cutting time for the nuclear dismantling simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jonghwan; Hyun, Dongjun; Kang, Sinyoung; Kim, Ikjune; Jeong, Kwan-Seong; Choi, Byung-Seon; Moon, Jeikwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Nuclear power plant (NPP) decommissioning involves various processes and technologies. Decommissioning should be performed after a comprehensive review of the information related to these processes and technologies. There are various means of prior examination and evaluation to ensure the feasibility and safety of the decommissioning process plan. Our dismantling simulation system aims to simulate and evaluate the whole range of processes related to the dismantlement of the core equipment of an NPP, such as device preparation, cutting operations, waste transfer, and so on. This paper introduces a methodology for estimating the time required for cutting processes based on real cutting conditions, in order to provide the system with effective economic evaluation functionality. A methodology to estimate the time required for the remote cutting process in the nuclear dismantling simulation system was proposed. Among the factors that mainly determine this time, the cutting trace was calculated directly from the simulation system, and the continuous cutting speed was obtained by spline fitting of appropriate order with constraint conditions.
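At its core, the estimation reduces to dividing the length of the cutting trace by the cutting speed. A minimal sketch, assuming a polyline trace and a constant speed (the actual system fits a spline to the trace and uses condition-dependent speeds):

```python
# Hedged sketch: cutting time ~ length of the cutting trace divided by the
# cutting speed, here with a polyline trace and constant speed for clarity.
import math

def cutting_time(trace, speed_mm_per_s):
    """trace: list of (x, y) points in mm along the cutting path."""
    length = sum(math.dist(p, q) for p, q in zip(trace, trace[1:]))
    return length / speed_mm_per_s

trace = [(0, 0), (100, 0), (100, 50)]        # an L-shaped cut, 150 mm total
print(f"{cutting_time(trace, 5.0):.1f} s")   # 150 mm at 5 mm/s -> 30.0 s
```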

  2. A new approach to incorporate operator actions in the simulation of accident sequences

    International Nuclear Information System (INIS)

    Antonio Exposito; Juan Antonio Quiroga; Javier Hortal; John-Einar Hulsund

    2006-01-01

    Full text of publication follows: Nowadays, simulation-based human reliability analysis (HRA) methods seem to provide a new direction for the development of advanced methodologies to study the effect of operator actions during accident sequences. For this reason, the Spanish Nuclear Safety Council (CSN) started a working group whose objectives include developing such a simulation-based HRA methodology. As a result of its activities, a new methodology, named Integrated Safety Assessment (ISA), has been developed and is currently being incorporated into licensing activities at CSN. One of the key aspects of this approach is the capability to simulate operator actions, expanding the scope of the ISA methodology to HRA studies. For this reason, CSN is involved in several activities oriented toward developing a new tool able to incorporate operator actions into conventional thermohydraulic (TH) simulations. One of them is the collaboration project between CSN, the Halden Reactor Project (HRP), and the Department of Energy Systems (DSE) of the Polytechnic University of Madrid, which started in 2003. The basic aim of the project is to develop a software tool consisting of a closed-loop plant/operator simulator: a thermal-hydraulic (TH) code for simulating the plant transient and a procedures processor that supplies information on operator actions to the TH code, coupled by a data communication system that allows information exchange. For the plant simulation we have a plant transient simulator code (TRETA/TIZONA for PWR/BWR NPPs, respectively), developed by the CSN, with PWR/BWR full-scope models. The functionality of these thermalhydraulic codes has been expanded, allowing control of the overall information flow between the coupled codes, simulating the TH transient, and determining when operator actions must be considered. On the other hand, we have the COPMA-III code, a computerized procedure system able to manage XML operational

  3. Determination of representative CANDU feeder dimensions for engineering simulator

    International Nuclear Information System (INIS)

    Cho, S.; Muzumdar, A.

    1996-01-01

    This paper describes a logic for the selection of representative channel groups and a methodology for determining representative CANDU feeder dimensions and the pressure drops between the inlet/outlet headers and the fuel channel in the primary loop. A code, MEDOC, was developed based on this logic and methodology; it calculates representative feeder dimensions for a selected channel group on the basis of feeder geometry data (fluid volume, mass flow rate, loss factor) and given property data (pressure, quality, density) at the inlet/outlet headers. The representative feeder dimensions calculated with this methodology will be useful for the engineering simulator for the CANDU-type reactor. (author)

  4. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are covered. Detailed sections deal with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, and the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
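A minimal example of the elementary movesets the review discusses is the Metropolis algorithm in the canonical ensemble, shown here for a toy one-dimensional harmonic potential rather than a biomacromolecule:

```python
# Metropolis Monte Carlo in the canonical ensemble: propose a random
# displacement, accept with probability min(1, exp(-beta * dE)).
# Toy 1-D harmonic potential; units chosen so kT = 1 (beta = 1).
import math
import random

random.seed(0)
beta, step = 1.0, 0.5

def energy(x):
    return 0.5 * x * x        # harmonic toy potential

x, samples = 0.0, []
for _ in range(20000):
    trial = x + random.uniform(-step, step)
    dE = energy(trial) - energy(x)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        x = trial             # accept the move; otherwise keep the old state
    samples.append(x)

mean_E = sum(energy(s) for s in samples) / len(samples)
print(f"<E> ~ {mean_E:.2f} (equipartition predicts 0.5 kT = 0.50)")
```

For real biomacromolecules the moveset acts on torsional or rigid-body degrees of freedom and the energy comes from a force field with implicit solvent, but the accept/reject core is exactly this.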

  5. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods, their underlying models, and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology, and outlook of the studies are described.

  6. PERCEPTION OF SATISFACTION OF STUDENTS IN THE USE OF CLINICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Lubia del Carmen Castillo-Arcos

    2017-07-01

    Full Text Available This study determines the perception of student satisfaction in the use of clinical simulation as a teaching and learning technique for developing the "nursing care" competency. An assumption was put forward: the greater the students' perceived satisfaction with the use of clinical simulation, the greater the development of the competency. The methodology consisted of a qualitative design with a non-random sample: a focus group composed of eight fourth-semester students of the Bachelor of Nursing at the Autonomous University of Carmen. A semi-structured interview based on Miller's model for the evaluation of skills was developed. In the results, the students reported that clinical simulation is an excellent learning strategy that allows them to integrate theory and practice without harming others, and stated that prior contact with clinical simulation improves critical thinking and strengthens knowledge, skills, decision-making, and professional ethics. The integration of this methodology develops the clinical competency of "nursing care". In conclusion, clinical simulation is an innovative teaching method of great interest, to be applied in the curricula of the health sciences, owing to its effectiveness as a learning strategy in the training of nursing students.

  7. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  8. Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming

    OpenAIRE

    Hulme, Kevin; Kasprzak, Edward; English, Ken; Moore-Russo, Deborah; Lewis, Kemper

    2009-01-01

    Creating active, student-centered learning situations in postsecondary education is an ongoing challenge for engineering educators. Contemporary students familiar with visually engaging and fast-paced games can find traditional classroom methods of lecture and guided laboratory experiments limiting. This paper presents a methodology that incorporates driving simulation, motion simulation, and educational practices into an engaging, gaming-inspired simulation framework for a vehicle dynamics c...

  9. Robust modelling and simulation integration of SIMIO with coloured petri nets

    CERN Document Server

    De La Mota, Idalia Flores; Mujica Mota, Miguel; Angel Piera, Miquel

    2017-01-01

    This book presents for the first time a methodology that combines the power of a modelling formalism such as coloured Petri nets with the flexibility of a discrete-event program such as SIMIO. Industrial practitioners have seen the growth of simulation as a methodology for tackling problems in which variability is the common denominator. Practically all industrial systems, from manufacturing to aviation, are considered stochastic systems. Different modelling techniques have been developed, as well as mathematical techniques for formalizing the cause-effect relationships in industrial and complex systems. The methodology in this book illustrates how complexity in modelling can be tackled by the use of coloured Petri nets, while at the same time the variability present in systems is integrated in a robust fashion. The book can be used as a concise guide for developing robust models, which are able to efficiently simulate the cause-effect relationships present in complex industrial systems without losing the simulat...
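The Petri-net side of such a methodology can be sketched in a few lines: a marking assigns a multiset of coloured tokens to each place, and a transition fires only when its input places hold enough tokens of the required colours. This is a generic illustration, not SIMIO's or the book's API.

```python
# Minimal coloured-Petri-net firing rule: markings are multisets of
# coloured tokens per place; a transition consumes from its input places
# and produces into its output places, but only if it is enabled.
from collections import Counter

marking = {"buffer": Counter({"red": 2, "blue": 1}), "machine": Counter()}

def fire(marking, inputs, outputs):
    """inputs/outputs: {place: Counter of required/produced coloured tokens}."""
    for place, need in inputs.items():
        if any(marking[place][c] < n for c, n in need.items()):
            return False                       # transition not enabled
    for place, need in inputs.items():
        marking[place] -= need                 # consume input tokens
    for place, out in outputs.items():
        marking[place] += out                  # produce output tokens
    return True

ok = fire(marking, {"buffer": Counter({"red": 1})},
          {"machine": Counter({"red": 1})})
print(ok, dict(marking["buffer"]), dict(marking["machine"]))
```

In a combined workflow along the lines the book describes, a net like this would carry the formal cause-effect structure, while the discrete-event engine supplies stochastic timing for each firing.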

  10. Using Software Zelio Soft in Educational Process to Simulation Control Programs for Intelligent Relays

    Science.gov (United States)

    Michalik, Peter; Mital, Dusan; Zajac, Jozef; Brezikova, Katarina; Duplak, Jan; Hatala, Michal; Radchenko, Svetlana

    2016-10-01

    The article deals with the use of intelligent relays and PLC systems in practice, their architecture, and the principles of programming and simulation for the educational process in all types of schools, from secondary schools to universities. The aim of the article is to propose simple application examples that demonstrate the programming methodology on simple, realistic practical examples and show the use of selected instructions. The practical part describes the process of creating schematics and function blocks, along with methodologies for creating programs and simulating output reactions to changing inputs for intelligent relays.

  11. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    Science.gov (United States)

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…
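In the spirit of the article, a stochastic simulation can be written directly in a few transparent lines rather than through turn-key software; for example, first-order decay in which each particle decays with probability p per time step:

```python
# A transparent stochastic simulation of first-order decay, N -> products:
# each surviving particle decays with probability p_decay per time step,
# so the expected survivor count after k steps is n0 * (1 - p_decay)**k.
import random

random.seed(42)

def simulate_decay(n0, p_decay, steps):
    counts, n = [n0], n0
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if random.random() < p_decay)
        counts.append(n)
    return counts

counts = simulate_decay(1000, 0.1, 10)
print(counts[0], counts[-1])   # roughly 1000 * 0.9**10 ~ 349 survivors
```

Writing the loop explicitly, instead of hiding it behind a software package, keeps the stochastic mechanism (one Bernoulli trial per particle per step) visible to the student.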

  12. Building Performance Simulation tools for planning of energy efficiency retrofits

    DEFF Research Database (Denmark)

    Mondrup, Thomas Fænø; Karlshøj, Jan; Vestergaard, Flemming

    2014-01-01

    Designing energy efficiency retrofits for existing buildings will bring environmental, economic, social, and health benefits. However, selecting specific retrofit strategies is complex and requires careful planning. In this study, we describe a methodology for adopting Building Performance...... to energy efficiency retrofits in social housing. To generate energy savings, we focus on optimizing the building envelope. We evaluate alternative building envelope actions using procedural solar radiation and daylight simulations. In addition, we identify the digital information flow and the information...... Simulation (BPS) tools as energy and environmentally conscious decision-making aids. The methodology has been developed to screen buildings for potential improvements and to support the development of retrofit strategies. We present a case study of a Danish renovation project, implementing BPS approaches...

  13. Multi-Exciter Vibroacoustic Simulation of Hypersonic Flight Vibration

    International Nuclear Information System (INIS)

    GREGORY, DANNY LYNN; CAP, JEROME S.; TOGAMI, THOMAS C.; NUSSER, MICHAEL A.; HOLLINGSHEAD, JAMES RONALD

    1999-01-01

    Many aerospace structures must survive severe high-frequency, hypersonic, random vibration during their flights. The random vibrations are generated by the turbulent boundary layer that develops along the exterior of the structures during flight. These environments have not been simulated very well in the past using a fixed-base, single-exciter input with an upper frequency range of 2 kHz. This study investigates the possibility of using acoustic and/or independently controlled multiple exciters to more accurately simulate hypersonic flight vibration. The test configuration, equipment, and methodology are described. Comparisons with actual flight measurements and previous single-exciter simulations are also presented.

  14. Development of an aeroelastic methodology for surface morphing rotors

    Science.gov (United States)

    Cook, James R.

    Helicopter performance capabilities are limited by maximum lift characteristics and vibratory loading. In high-speed forward flight, dynamic stall and transonic flow greatly increase the amplitude of vibratory loads. Experiments and computational simulations alike have indicated that a variety of active rotor control devices are capable of reducing vibratory loads. For example, periodic blade twist and flap excitation have been optimized to reduce vibratory loads in various rotors. Airfoil geometry can also be modified in order to increase the lift coefficient, delay stall, or weaken transonic effects. To explore the potential benefits of active controls, computational methods are being developed for aeroelastic rotor evaluation, including coupling between computational fluid dynamics (CFD) and computational structural dynamics (CSD) solvers. In many contemporary CFD/CSD coupling methods it is assumed that the airfoil is rigid in order to reduce the interface by a single dimension. Some methods retain the conventional one-dimensional beam model while prescribing an airfoil shape to simulate active chord deformation. However, to simulate the actual response of a compliant airfoil it is necessary to include deformations that originate not only from control devices (such as piezoelectric actuators), but also from inertial forces, elastic stresses, and aerodynamic pressures. An accurate representation of the physics requires interaction with a more complete representation of loads and geometry. A CFD/CSD coupling methodology capable of communicating three-dimensional structural deformations and a distribution of aerodynamic forces over the wetted blade surface has not yet been developed. In this research an interface is created within the Fully Unstructured Navier-Stokes (FUN3D) solver that communicates aerodynamic forces on the blade surface to the University of Michigan's Nonlinear Active Beam Solver (UM/NLABS -- referred to as NLABS in this thesis). Interface routines are developed for

  15. Simulation-based assessments in health professional education: a systematic review.

    Science.gov (United States)

    Ryall, Tayne; Judd, Belinda K; Gordon, Christopher J

    2016-01-01

    The use of simulation in health professional education has increased rapidly over the past 2 decades. While simulation has predominantly been used to train health professionals and students for a variety of clinically related situations, there is an increasing trend to use simulation as an assessment tool, especially for the development of technical-based skills required during clinical practice. However, there is a lack of evidence about the effectiveness of using simulation for the assessment of competency. Therefore, the aim of this systematic review was to examine simulation as an assessment tool of technical skills across health professional education. A systematic review of Cumulative Index to Nursing and Allied Health Literature (CINAHL), Education Resources Information Center (ERIC), Medical Literature Analysis and Retrieval System Online (Medline), and Web of Science databases was used to identify research studies published in English between 2000 and 2015 reporting on measures of validity, reliability, or feasibility of simulation as an assessment tool. The McMasters Critical Review for quantitative studies was used to determine methodological value on all full-text reviewed articles. Simulation techniques using human patient simulators, standardized patients, task trainers, and virtual reality were included. A total of 1,064 articles were identified using search criteria, and 67 full-text articles were screened for eligibility. Twenty-one articles were included in the final review. The findings indicated that simulation was more robust when used as an assessment in combination with other assessment tools and when more than one simulation scenario was used. Limitations of the research papers included small participant numbers, poor methodological quality, and predominance of studies from medicine, which preclude any definite conclusions. Simulation has now been embedded across a range of health professional education and it appears that simulation

  16. Hybrid High-Fidelity Modeling of Radar Scenarios Using Atemporal, Discrete-Event, and Time-Step Simulation

    Science.gov (United States)

    2016-12-01

    [The record excerpt consists of list-of-figures residue from the report rather than an abstract. Recoverable captions include: "High-efficiency and high-fidelity radar system simulation flowchart" (Figure 1.8); a methodology roadmap described as an "experimental-design flowchart showing hybrid sensor models integrated from three simulation categories" (Figure 1.9); and a "simulation display and output produced by Java Simkit program".]

  17. On the performance simulation of inter-stage turbine reheat

    International Nuclear Information System (INIS)

    Pellegrini, Alvise; Nikolaidis, Theoklis; Pachidis, Vassilios; Köhler, Stephan

    2017-01-01

    Highlights: • An innovative gas turbine performance simulation methodology is proposed. • It allows to perform DP and OD performance calculations for complex engines layouts. • It is essential for inter-turbine reheat (ITR) engine performance calculation. • A detailed description is provided for fast and flexible implementation. • The methodology is successfully verified against a commercial closed-source software. - Abstract: Several authors have suggested the implementation of reheat in high By-Pass Ratio (BPR) aero engines, to improve engine performance. In contrast to military afterburning, civil aero engines would aim at reducing Specific Fuel Consumption (SFC) by introducing ‘Inter-stage Turbine Reheat’ (ITR). To maximise benefits, the second combustor should be placed at an early stage of the expansion process, e.g. between the first and second High-Pressure Turbine (HPT) stages. The aforementioned cycle design requires the accurate simulation of two or more turbine stages on the same shaft. The Design Point (DP) performance can be easily evaluated by defining a Turbine Work Split (TWS) ratio between the turbine stages. However, the performance simulation of Off-Design (OD) operating points requires the calculation of the TWS parameter for every OD step, by taking into account the thermodynamic behaviour of each turbine stage, represented by their respective maps. No analytical solution of the aforementioned problem is currently available in the public domain. This paper presents an analytical methodology by which ITR can be simulated at DP and OD. Results show excellent agreement with a commercial, closed-source performance code; discrepancies range from 0% to 3.48%, and are ascribed to the different gas models implemented in the codes.
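The design-point bookkeeping behind the Turbine Work Split (TWS) parameter can be sketched as follows. The numbers are illustrative, and the ratio definition used here (stage-1 work over stage-2 work) is an assumption for the example, not necessarily the paper's convention:

```python
# Sketch of design-point Turbine Work Split (TWS) bookkeeping for an
# inter-stage turbine reheat (ITR) layout: the total HPT work demanded by
# the compressor is divided between the two stages that bracket the
# second combustor. All values are illustrative.

def split_turbine_work(total_work_MW, tws_ratio):
    """tws_ratio = stage-1 work / stage-2 work at the design point."""
    stage2 = total_work_MW / (1.0 + tws_ratio)
    stage1 = total_work_MW - stage2
    return stage1, stage2

w1, w2 = split_turbine_work(30.0, 1.5)   # stage 1 does 1.5x stage 2's work
print(f"stage 1: {w1:.1f} MW, stage 2: {w2:.1f} MW")
```

At off-design conditions the split is no longer a free parameter; as the abstract notes, it must be recomputed at every operating point from the two stage maps, which is exactly the problem the paper's methodology solves.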

  18. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    International Nuclear Information System (INIS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-01-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations
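The on-the-fly resolution change at the heart of AdResS is governed by a smooth weighting function that switches each solvent molecule between atomistic (w = 1) and coarse-grained (w = 0) resolution as it crosses a hybrid region. A cos² switching form is one common choice in the AdResS literature; the region sizes below are illustrative:

```python
# Sketch of an AdResS-style resolution weighting function: 1 in the
# atomistic zone, 0 in the coarse-grained reservoir, and a smooth cos^2
# interpolation across the hybrid region. Region sizes are illustrative.
import math

def resolution_weight(r, r_atomistic, d_hybrid):
    """Weight as a function of distance r from the atomistic-region centre."""
    if r <= r_atomistic:
        return 1.0                             # fully atomistic
    if r >= r_atomistic + d_hybrid:
        return 0.0                             # fully coarse-grained
    return math.cos(math.pi * (r - r_atomistic) / (2.0 * d_hybrid)) ** 2

for r in (0.5, 1.25, 2.0):    # atomistic zone radius 1.0 nm, hybrid width 1.0 nm
    print(r, round(resolution_weight(r, 1.0, 1.0), 3))
```

Forces on a molecule in the hybrid region are blended with this weight, so a molecule gains or loses atomistic degrees of freedom continuously rather than abruptly at a hard boundary.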

  19. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    Energy Technology Data Exchange (ETDEWEB)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de [Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany)

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  20. Simulation Framework for Rebalancing of Autonomous Mobility on Demand Systems

    Directory of Open Access Journals (Sweden)

    Marczuk Katarzyna A.

    2016-01-01

    This study builds upon our previous work on Autonomous Mobility on Demand (AMOD) systems. Our methodology is simulation-based and we make use of SimMobility, an agent-based microscopic simulation platform. In the current work we focus on a framework for testing different rebalancing policies for AMOD systems. We compare three rebalancing methods: (i) no rebalancing, (ii) offline rebalancing, and (iii) online rebalancing. Simulation results indicate that rebalancing reduces the required fleet size and shortens the customers’ wait time.

  1. Development of Thermal-hydraulic Analysis Methodology for Multi-module Breeding Blankets in K-DEMO

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Geon-Woo; Lee, Jeong-Hun; Park, Goon-Cherl; Cho, Hyoung-Kyu [Seoul National University, Seoul (Korea, Republic of); Im, Kihak [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    In this paper, the purpose of the analyses is to extend the capability of MARS-KS to the entire blanket system, which includes a few hundred single blanket modules. Afterwards, the plan for the whole blanket system analysis using MARS-KS is introduced and the result of the multiple blanket module analysis is summarized. MARS-KS, a thermal-hydraulic analysis code for nuclear reactor safety, was applied to the thermal analysis of the K-DEMO breeding blanket conceptual design. Then, a methodology to simulate multiple blanket modules was proposed, which uses a supervisor program to handle each blanket module individually at first and then distribute the flow rate considering the pressure drops arising in each module. For a feasibility test of the proposed methodology, 10 outboard blankets in a toroidal field sector were simulated, connected with each other through the inlet and outlet common headers. The calculated flow rates, pressure drops, and temperatures showed the validity of the calculation, and thanks to parallelization using MPI, an almost linear speed-up could be obtained.
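
    The supervisor's flow-distribution step can be illustrated with a minimal sketch. The quadratic loss model dp_i = k_i · m_i², its closed-form split, and all names below are assumptions for illustration, not the actual MARS-KS coupling:

```python
import math

def distribute_flow(k_coeffs, m_total):
    """Split a total mass flow among parallel blanket modules sharing common
    inlet/outlet headers, so the module pressure drops dp_i = k_i * m_i**2
    come out equal. For this loss model, m_i is proportional to 1/sqrt(k_i)."""
    weights = [1.0 / math.sqrt(k) for k in k_coeffs]
    total_w = sum(weights)
    return [m_total * w / total_w for w in weights]

# 10 modules with slightly different (assumed) loss coefficients
k = [1.0 + 0.05 * i for i in range(10)]
flows = distribute_flow(k, m_total=100.0)
drops = [ki * mi ** 2 for ki, mi in zip(k, flows)]  # all (nearly) equal
```

    A real supervisor would iterate, since each module's pressure drop also depends on temperature-dependent properties computed by the per-module MARS-KS runs; the closed form above is the zeroth iteration of such a scheme.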

  2. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.

    Science.gov (United States)

    Somogyi, Endre; Hagar, Amit; Glazier, James A

    2016-12-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: (1) dynamic sets of objects participate simultaneously in multiple processes, (2) processes may be either continuous or discrete, and their activity may be conditional, (3) objects and processes form complex, heterogeneous relationships and structures, (4) objects and processes may be hierarchically composed, (5) processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.

  3. Bibliography for Verification and Validation in Computational Simulation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1998-01-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering

  4. Bibliography for Verification and Validation in Computational Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1998-10-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering.

  5. Application of digital image processing for the generation of voxels phantoms for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Boia, L.S.; Menezes, A.F.; Cardoso, M.A.C. [Programa de Engenharia Nuclear/COPPE (Brazil); Rosa, L.A.R. da [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Batista, D.V.S. [Instituto de Radioprotecao e Dosimetria-IRD, Av. Salvador Allende, s/no Recreio dos Bandeirantes, CP 37760, CEP 22780-160 Rio de Janeiro, RJ (Brazil); Instituto Nacional de Cancer-Secao de Fisica Medica, Praca Cruz Vermelha, 23-Centro, 20230-130 Rio de Janeiro, RJ (Brazil); Cardoso, S.C. [Departamento de Fisica Nuclear, Instituto de Fisica, Universidade Federal do Rio de Janeiro, Bloco A-Sala 307, CP 68528, CEP 21941-972 Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear/COPPE (Brazil); Departamento de Engenharia Nuclear/Escola Politecnica, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970 Rio de Janeiro, RJ (Brazil); Facure, A. [Comissao Nacional de Energia Nuclear, R. Gal. Severiano 90, sala 409, 22294-900 Rio de Janeiro, RJ (Brazil)

    2012-01-15

    This paper presents the application of a computational methodology for optimizing the conversion of medical tomographic images into voxel anthropomorphic models for the simulation of radiation transport using the MCNP code. A computational system was developed for digital image processing that compresses the information from the DICOM medical image before it is converted to the Scan2MCNP software input file for optimization of the image data. In order to validate the computational methodology, a radiosurgery treatment simulation was performed using the Alderson Rando phantom, for which DICOM images were acquired. The simulation results were compared with data obtained with the BrainLab planning system. The comparison showed good agreement for three orthogonal treatment beams of 60Co gamma radiation. The percentage differences were 3.07%, 0.77% and 6.15% for the axial, coronal and sagittal projections, respectively. - Highlights: • We use a method to optimize the CT image conversion into a voxel model for MCNP simulation. • We present a methodology to compress a DICOM image before conversion to an input file. • To validate this study an idealized radiosurgery applied to the Alderson phantom was used.

  6. Proposal of a method for evaluating tsunami risk using response-surface methodology

    Science.gov (United States)

    Fukutani, Y.

    2017-12-01

    Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are inefficient and their calculation cost is high, since they require multiple tsunami numerical simulations; they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using a response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the object variable. Subsequently, tsunami risk could be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from a response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a·x1 + b·x2 + c (a = 0.2615, b = 3.1763, c = −1.1802). We assumed proper probabilistic distributions for earthquake generation, inundation height, and vulnerability. Based on these probabilistic distributions, we conducted Monte Carlo simulations of 1,000,000 years. We clarified that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface.
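
    The Monte Carlo chain described above can be sketched in a few lines. Only the response-surface coefficients come from the abstract; the fragility-curve parameters, annual earthquake rate, and fault-parameter ranges below are illustrative assumptions:

```python
import math
import random

A, B, C = 0.2615, 3.1763, -1.1802   # response-surface coefficients (from the abstract)

def inundation_depth(x1, x2):
    """Response surface y = a*x1 + b*x2 + c for fault depth x1 and slip x2."""
    return A * x1 + B * x2 + C

def damage_probability(depth, median=2.0, beta=0.5):
    """Log-normal fragility curve; the median depth and log-std are assumed."""
    if depth <= 0.0:
        return 0.0
    z = (math.log(depth) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_poisson(lam, rng):
    """Knuth's method, adequate for small annual rates."""
    n, p, limit = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= limit:
            return n
        n += 1

def expected_damage(years=1000000, annual_rate=0.01, seed=1):
    """Monte Carlo over a synthetic catalogue; fault-parameter ranges are assumed."""
    rng = random.Random(seed)
    events = damaged = 0
    for _ in range(years):
        for _ in range(sample_poisson(annual_rate, rng)):
            y = inundation_depth(rng.uniform(5.0, 15.0), rng.uniform(0.5, 2.0))
            events += 1
            damaged += rng.random() < damage_probability(y)
    return damaged / events if events else 0.0
```

    The ratio returned by `expected_damage` corresponds to the paper's "expected damage probability assuming an earthquake occurs"; its 22.5% figure would of course require the authors' actual distributions rather than these placeholders.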

  7. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements

    Science.gov (United States)

    do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-01-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions as the acrylic support used for positioning the film, but at a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertical, that is, perpendicular to the phantom. To validate this procedure, first a Monte Carlo simulation using the PENELOPE code was performed to evaluate the differences between the dose distributions measured by the film at SDDs of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired at the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85% ± 0.26% and the mean percentage difference in the normalization point doses was −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS number: 87.55.Qr, 87.55.km, 87.55.N‐ PMID
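
    The gamma-function comparison used to score agreement between the two dose distributions can be illustrated in one dimension. The 3%/3 mm criteria and the function names are illustrative assumptions, not the authors' implementation:

```python
import math

def gamma_1d(ref, meas, spacing=1.0, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma index: for every evaluated point, take the minimum combined
    dose-difference / distance-to-agreement metric over all reference points.
    The dose tolerance is taken as a fraction of the maximum reference dose."""
    d_max = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dd = (dm - dr) / (dose_tol * d_max)          # normalised dose diff
            dta = (i - j) * spacing / dist_tol            # normalised distance
            best = min(best, math.hypot(dd, dta))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (the 'approved' fraction)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

    A clinical gamma analysis operates on 2-D film scans with interpolation between points, but the pass-rate figure reported in the abstract (e.g. 99.85% of points approved) is exactly this kind of quantity.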

  8. Development of a simplified methodology for the isotopic determination of fuel spent in Light Water Reactors

    International Nuclear Information System (INIS)

    Hernandez N, H.; Francois L, J.L.

    2005-01-01

    The present work presents a simplified methodology to quantify the isotopic content of the spent fuel of light water reactors; its application is specific to the Laguna Verde nuclear power plant, by means of an 18-month equilibrium cycle. The methodology is divided into two parts: the first consists of the development of a simplified cell model for the isotopic quantification of the irradiated fuel. With this model, a burnup of 48,000 MWd/tU of the fuel in the reactor core is simulated, taking as a basis a 10x10 fuel assembly and using a two-dimensional simulator for a fuel cell of a light water reactor (CPM-3). The second part of the methodology is based on the creation of an isotopic decay model, through an algorithm in C++ (decay), to evaluate the amount, by decay of the radionuclides, after the fuel has been irradiated, until the time at which reprocessing is performed. Finally, the method used for the quantification of the kilograms of uranium and plutonium obtained from a normalized quantity (1000 kg) of fuel irradiated in a reactor is presented. These results will later allow analyses of the final disposition of the irradiated fuel. (Author)
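
    The decay step of such a methodology amounts to applying the exponential decay law to each nuclide between discharge and reprocessing. A minimal sketch (not the authors' C++ code); the inventory values are invented, while the half-lives are standard:

```python
import math

def decay_inventory(inventory_kg, half_lives_y, t_years):
    """Apply N(t) = N0 * exp(-ln2 * t / T_half) to each nuclide.
    Decay-chain in-growth (full Bateman terms) is neglected in this sketch."""
    out = {}
    for nuclide, n0 in inventory_kg.items():
        lam = math.log(2.0) / half_lives_y[nuclide]   # decay constant [1/y]
        out[nuclide] = n0 * math.exp(-lam * t_years)
    return out

# Illustrative inventory at discharge (kg per 1000 kg of irradiated fuel)
inventory = {"Pu-241": 1.2, "Pu-239": 5.3}
half_lives = {"Pu-241": 14.3, "Pu-239": 24100.0}
after_10y = decay_inventory(inventory, half_lives, 10.0)
```

    Over a decade of cooling, short-lived Pu-241 decays appreciably while Pu-239 is essentially unchanged, which is why the cooling time before reprocessing matters for the uranium/plutonium mass balance the abstract describes.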

  9. Development of the evaluation methodology for the material relocation behavior in the core disruptive accident of sodium cooled fast reactors

    International Nuclear Information System (INIS)

    Tobita, Yoshiharu; Kamiyama, Kenji; Tagami, Hirotaka; Matsuba, Ken-ichi; Suzuki, Tohru; Isozaki, Mikio; Yamano, Hidemasa; Morita, Koji; Guo, Liancheng; Zhang, Bin

    2014-01-01

    The in-vessel retention (IVR) of core disruptive accident (CDA) is of prime importance in enhancing safety characteristics of sodium-cooled fast reactors (SFRs). In the CDA of SFRs, molten core material relocates to the lower plenum of reactor vessel and may impose significant thermal load on the structures, resulting in the melt through of the reactor vessel. In order to enable the assessment of this relocation process and prove that IVR of core material is the most probable consequence of the CDA in SFRs, a research program to develop the evaluation methodology for the material relocation behavior in the CDA of SFRs has been conducted. This program consists of three developmental studies, namely the development of the analysis method of molten material discharge from the core region, the development of evaluation methodology of molten material penetration into sodium pool, and the development of the simulation tool of debris bed behavior. The analysis method of molten material discharge was developed based on the computer code SIMMER-III since this code is designed to simulate the multi-phase, multi-component fluid dynamics with phase changes involved in the discharge process. Several experiments simulating the molten material discharge through duct using simulant materials were utilized as the basis of validation study of the physical models in this code. It was shown that SIMMER-III with improved physical models could simulate the molten material discharge behavior including the momentum exchange with duct wall and thermal interaction with coolant. In order to develop evaluation methodology of molten material penetration into sodium pool, a series of experiments simulating jet penetration behavior into sodium pool in SFR thermal condition were performed. These experiments revealed that the molten jet was fragmented in significantly shorter penetration length than the prediction by existing correlation for light water reactor conditions, due to the direct

  10. Development of the evaluation methodology for the material relocation behavior in the core disruptive accident of sodium-cooled fast reactors

    International Nuclear Information System (INIS)

    Tobita, Yoshiharu; Kamiyama, Kenji; Tagami, Hirotaka; Matsuba, Ken-ichi; Suzuki, Tohru; Isozaki, Mikio; Yamano, Hidemasa; Morita, Koji; Guo, LianCheng; Zhang, Bin

    2016-01-01

    The in-vessel retention (IVR) of core disruptive accident (CDA) is of prime importance in enhancing safety characteristics of sodium-cooled fast reactors (SFRs). In the CDA of SFRs, molten core material relocates to the lower plenum of reactor vessel and may impose significant thermal load on the structures, resulting in the melt-through of the reactor vessel. In order to enable the assessment of this relocation process and prove that IVR of core material is the most probable consequence of the CDA in SFRs, a research program to develop the evaluation methodology for the material relocation behavior in the CDA of SFRs has been conducted. This program consists of three developmental studies, namely the development of the analysis method of molten material discharge from the core region, the development of evaluation methodology of molten material penetration into sodium pool, and the development of the simulation tool of debris bed behavior. The analysis method of molten material discharge was developed based on the computer code SIMMER-III since this code is designed to simulate the multi-phase, multi-component fluid dynamics with phase changes involved in the discharge process. Several experiments simulating the molten material discharge through duct using simulant materials were utilized as the basis of validation study of the physical models in this code. It was shown that SIMMER-III with improved physical models could simulate the molten material discharge behavior, including the momentum exchange with duct wall and thermal interaction with coolant. In order to develop an evaluation methodology of molten material penetration into sodium pool, a series of experiments simulating jet penetration behavior into sodium pool in SFR thermal condition were performed. These experiments revealed that the molten jet was fragmented in significantly shorter penetration length than the prediction by existing correlation for light water reactor conditions, due to the direct

  11. Development of Fast-Running Simulation Methodology Using Neural Networks for Load Follow Operation

    International Nuclear Information System (INIS)

    Seong, Seung-Hwan; Park, Heui-Youn; Kim, Dong-Hoon; Suh, Yong-Suk; Hur, Seop; Koo, In-Soo; Lee, Un-Chul; Jang, Jin-Wook; Shin, Yong-Chul

    2002-01-01

    A new fast-running analytic model has been developed for analyzing load follow operation. The new model is based on neural network theory, which has the capability of modeling the input/output relationships of a nonlinear system. The new model is made up of two error back-propagation neural networks and procedures to calculate core parameters, such as the distribution and density of xenon, in a quasi-steady-state core as in load follow operation. One neural network is designed to retrieve the axial offset of the power distribution, and the other the reactivity corresponding to a given core condition. The training data sets for the neural networks in the new model are generated with a three-dimensional nodal code together with the measured data of the first-day test of load follow operation. Using the new model, the simulation results of a 5-day load follow test in a pressurized water reactor show good agreement between the simulation data and the actual measured data. The computing time required to simulate a load follow operation is comparable to that of a fast-running lumped model. Moreover, the new model does not require additional engineering factors to compensate for the difference between the actual measurements and the analysis results, because of the inherent capability of neural networks to learn new situations.
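
    The error back-propagation building block can be sketched with a one-hidden-layer network in pure Python. The toy target curve, layer size, and learning rate are illustrative, not the paper's actual axial-offset or reactivity networks:

```python
import math
import random

random.seed(0)
H = 8                                           # hidden units (illustrative)
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def train_step(x, t, lr=0.05):
    """One error back-propagation update for the 1-input/1-output network."""
    global b2
    h, y = forward(x)
    err = y - t                                   # output error
    for i in range(H):
        grad_h = err * w2[i] * (1.0 - h[i] ** 2)  # error propagated to hidden i
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * grad_h * x
        b1[i] -= lr * grad_h
    b2 -= lr * err
    return 0.5 * err * err

# Fit a toy nonlinear core-parameter curve t = x**2 on [-1, 1]
losses = []
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    losses.append(train_step(x, x * x))
```

    The paper's model pairs two such networks (axial offset and reactivity) with physics procedures for xenon; the sketch shows only the training mechanism that lets the model absorb measured plant data without hand-tuned engineering factors.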

  12. Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations

    Science.gov (United States)

    Makwana, K. D.; Keppens, R.; Lapenta, G.

    2017-12-01

    We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.
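
    The two-way update cycle (global update → boundary hand-off to the embedded domain → embedded update → feedback overwrite) can be illustrated with a toy scalar-advection analogue, where a refined sub-grid stands in for the PIC region. All numbers and names are illustrative assumptions, not the actual MHD-PIC implementation:

```python
import math

# Toy analogue of the two-way coupling cycle: a coarse "global" grid with an
# embedded, refined sub-domain standing in for the kinetic (PIC) region.
N, DX, C, DT = 64, 1.0, 1.0, 0.5            # coarse grid; CFL = C*DT/DX = 0.5
I0, I1, R = 24, 40, 2                       # embedded region [I0, I1), refinement R

coarse = [math.exp(-((i - 16) ** 2) / 12.0) for i in range(N)]   # advected pulse
fine = [coarse[I0 + j // R] for j in range((I1 - I0) * R)]       # prolongation

def upwind(u, dt, dx, left_ghost):
    """One explicit upwind step for du/dt + C*du/dx = 0."""
    prev, out = left_ghost, []
    for v in u:
        out.append(v - C * dt / dx * (v - prev))
        prev = v
    return out

for _ in range(40):
    coarse = upwind(coarse, DT, DX, 0.0)        # 1) global ("MHD") update
    ghost = coarse[I0 - 1]                      # 2) global -> embedded boundary
    for _ in range(R):                          # 3) embedded domain sub-cycles
        fine = upwind(fine, DT / R, DX / R, ghost)
    for i in range(I0, I1):                     # 4) embedded -> global feedback
        coarse[i] = sum(fine[(i - I0) * R:(i - I0 + 1) * R]) / R
```

    The real scheme exchanges full MHD state (initial and boundary conditions to PIC, moments back to MHD) and must handle the wave-fidelity issues the abstract reports; the sketch only shows the hand-off/feedback structure of one coupled time step.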

  13. Application of the Integrated Safety Assessment methodology to safety margins. Dynamic Event Trees, Damage Domains and Risk Assessment

    International Nuclear Information System (INIS)

    Ibánez, L.; Hortal, J.; Queral, C.; Gómez-Magán, J.; Sánchez-Perea, M.; Fernández, I.; Meléndez, E.; Expósito, A.; Izquierdo, J.M.; Gil, J.; Marrao, H.; Villalba-Jabonero, E.

    2016-01-01

    The Integrated Safety Assessment (ISA) methodology, developed by the Consejo de Seguridad Nuclear, has been applied to an analysis of the Zion NPP for sequences with Loss of the Component Cooling Water System (CCWS). The ISA methodology starts from the unfolding of the Dynamic Event Tree (DET). Results from this first step allow assessing the sequence delineation of standard Probabilistic Safety Analysis results. For some sequences of interest in the outlined DET, ISA then identifies the Damage Domain (DD). This is the region of uncertain times and/or parameters where a safety limit is exceeded, which indicates the occurrence of a certain damage situation. This paper illustrates the application of this concept, obtained by simulating sequences with MAAP and with TRACE. From the simulation results of sequence transients belonging to the DD and the time-density probability distributions of the manual actions and of the occurrence of stochastic phenomena, ISA integrates the proposed dynamic reliability equations to obtain the sequence contribution to the global Damage Exceedance Frequency (DEF). Reported results show a slight increase in the DEF for the sequences investigated following a power uprate from 100% to 110%. This demonstrates the potential use of the method to help in the assessment of design modifications. - Highlights: • This paper illustrates an application of the ISA methodology to safety margins. • Dynamic Event Trees are a useful tool for verifying the standard PSA Event Trees. • The ISA methodology takes into account the uncertainties in human action times. • The ISA methodology shows the Damage Exceedance Frequency increase in power uprates.

  14. GVTRAN-PC-a steam generator transient simulator

    International Nuclear Information System (INIS)

    Nakata, H.

    1991-02-01

    Since an accurate and inexpensive analysis capability is a desirable requirement in the reactor licensing procedure, and also in instances when a severe accident sequence and its degree of severity must be estimated, the present work tries to partially fulfill that need by developing a fast and acceptably accurate simulator. The present report presents the methodology used to develop the GVTRAN-PC program, a steam generator simulation program for the microcomputer environment, which possesses the capability to reproduce experimental data with accuracies comparable to those of mainframe simulators. The methodology is based on mass and energy conservation in the control volumes which represent both the primary and the secondary fluid in the U-tube steam generator. Quasi-static momentum conservation in the secondary fluid control volumes determines, in a semi-iterative scheme, the liquid level in the feedwater chamber. The implementation of a moving boundary technique has allowed the tracking of the boundary of the bulk boiling region with a reduced number of control volumes in the tube region. The GVTRAN-PC program has been tested against typical PWR pump trip transient experimental data, and the calculated results showed good agreement in the most representative parameters, viz. the feedchamber water level and the steam dome pressure. (author)

  15. Discrete event simulation modelling of patient service management with Arena

    Science.gov (United States)

    Guseva, Elena; Varfolomeyeva, Tatyana; Efimova, Irina; Movchan, Irina

    2018-05-01

    This paper describes a simulation modeling methodology intended to aid in solving practical problems in the research and analysis of complex systems. The paper gives a review of simulation platforms and an example of simulation model development with Arena 15.0 (Rockwell Automation). The provided example of a simulation model for patient service management helps to evaluate the workload of the clinic doctors, determine the number of general practitioners, surgeons, traumatologists and other specialized doctors required for patient service, and develop recommendations to ensure timely delivery of medical care and improve the efficiency of the clinic operation.
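
    The event-driven core of such a patient-service model can be sketched without Arena, using a priority queue of timestamped events. The single-doctor clinic and the arrival/service rates are illustrative assumptions:

```python
import heapq
import random

def simulate_clinic(n_patients=1000, arrival_rate=0.5, service_rate=0.6, seed=7):
    """Discrete event simulation of a single-doctor clinic (an M/M/1 queue):
    timestamped events are popped in chronological order from a heap."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n_patients):                # pre-schedule all arrivals
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival"))
    waiting, busy = [], False
    served, total_wait = 0, 0.0
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            waiting.append(now)
        else:                                  # "departure": doctor frees up
            busy, served = False, served + 1
        if not busy and waiting:               # start the next consultation
            total_wait += now - waiting.pop(0)
            busy = True
            heapq.heappush(events, (now + rng.expovariate(service_rate), "departure"))
    return served, total_wait / served

served, mean_wait = simulate_clinic()
```

    Scaling the number of doctors (servers) and comparing mean waits is exactly the staffing question the Arena model answers; a multi-server variant would track a count of free doctors instead of a single `busy` flag.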

  16. Calibration of modified parallel-plate rheometer through calibrated oil and lattice Boltzmann simulation

    DEFF Research Database (Denmark)

    Ferraris, Chiara F; Geiker, Mette Rica; Martys, Nicos S

    2007-01-01

    inapplicable here. This paper presents the analysis of a modified parallel-plate rheometer for measuring cement mortar and proposes a methodology for calibration using standard oils and numerical simulation of the flow. A lattice Boltzmann method was used to simulate the flow in the modified rheometer, thus...

  17. Situating methodology within qualitative research.

    Science.gov (United States)

    Kramer-Kile, Marnie L

    2012-01-01

    Qualitative nurse researchers are required to make deliberate and sometimes complex methodological decisions about their work. Methodology in qualitative research is a comprehensive approach in which theory (ideas) and method (doing) are brought into close alignment. It can be difficult, at times, to understand the concept of methodology. The purpose of this research column is to: (1) define qualitative methodology; (2) illuminate the relationship between epistemology, ontology and methodology; (3) explicate the connection between theory and method in qualitative research design; and (4) highlight relevant examples of methodological decisions made within cardiovascular nursing research. Although there is no "one set way" to do qualitative research, all qualitative researchers should account for the choices they make throughout the research process and articulate their methodological decision-making along the way.

  18. JASMINE Simulator - construction of framework

    Science.gov (United States)

    Yamada, Yoshiyuki; Ueda, Seiji; Kuwabara, Takashi; Yano, Taihei; Gouda, Naoteru

    2004-10-01

    JASMINE is an abbreviation of Japan Astrometry Satellite Mission for INfrared Exploration, currently planned at the National Astronomical Observatory of Japan. JASMINE stands at a stage where its basic design will be determined in a few years. It is therefore very important for JASMINE to simulate the data stream generated by the astrometric fields in order to support investigations of accuracy, sampling strategy, data compression, data analysis, scientific performance, etc. It is found that the new software technologies of Object Oriented methodologies with the Unified Modeling Language are ideal for the simulation system of JASMINE (the JASMINE Simulator). In this paper, we briefly introduce some concepts of these technologies and explain the framework of the JASMINE Simulator, which is constructed with these new technologies. We believe that these technologies are also useful for other future big projects in astronomical research.

  19. Ab initio simulation of dislocation cores in metals

    International Nuclear Information System (INIS)

    Ventelon, L.

    2008-01-01

    In the framework of the multiscale simulation of the plasticity of metals and alloys, the aim of this study is to develop a methodology for the ab initio study of dislocations and to apply it to the [111] screw dislocation in bcc iron. (A.L.B.)

  20. Methodology of sustainability accounting

    Directory of Open Access Journals (Sweden)

    O.H. Sokil

    2017-03-01

    Full Text Available Modern challenges of the theory and methodology of accounting are realized through the formation and implementation of new concepts, the purpose of which is to meet the needs of users in standard and unique information. The development of a methodology for sustainability accounting is a key aspect of the management of an economic entity. The purpose of the article is to form the methodological bases of accounting for sustainable development and determine its goals, objectives, object, subject, methods, functions and key aspects. The author analyzes the theoretical bases of the definition and considers the components of the traditional accounting methodology. A generalized structural diagram of the methodology for sustainability accounting is offered in the article. The complex of methods and principles of sustainability accounting, covering both standardized and non-standard provisions, has been systematized. The new system of theoretical and methodological provisions of accounting for sustainable development is justified in the context of determining its purpose, objective, subject, object, methods, functions and key aspects.

  1. Comparison Of Irms Delhi Methodology With Who Methodology On Immunization Coverage

    Directory of Open Access Journals (Sweden)

    Singh Padam

    1996-01-01

    Full Text Available Research question: What are the merits of the IRMS Model over the WHO Model for coverage evaluation surveys? Which method is superior and appropriate for a coverage evaluation survey of immunization in our setting? Objective: To compare the IRMS Delhi methodology with the WHO methodology on immunization coverage. Study Design: Cross-sectional. Setting: Both urban and rural. Participants: Mothers & children. Sample Size: 300 children between 1-2 years and 300 mothers in rural areas, and 75 children and 75 mothers in urban areas. Study Variables: Rural, urban, caste group, size of the stratum, literacy, sex and cost effectiveness. Outcome Variables: Coverage level of immunization. Analysis: Routine statistical analysis. Results: The IRMS-developed methodology scores a better rating than the WHO methodology, especially when coverage evaluation is attempted in medium-size villages with socio-economic segregation, which remains the main characteristic of Indian villages.

  2. Discrete event simulation as an ergonomic tool to predict workload exposures during systems design

    NARCIS (Netherlands)

    Perez, J.; Looze, M.P. de; Bosch, T.; Neumann, W.P.

    2014-01-01

    This methodological paper presents a novel approach to predicting an operator's mechanical exposure and fatigue accumulation in discrete event simulations. A biomechanical model of work-cycle loading is combined with a discrete event simulation model which provides work cycle patterns over the shift.
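
    The coupling described above can be sketched as a small discrete event simulation that alternates work and rest events and feeds each interval into a load-accumulation rule. The event times, the load magnitude, and the linear accumulation/recovery rule are all illustrative assumptions, not the biomechanical model of the paper.

```python
import heapq

# Toy coupling of a discrete event simulation (work/rest events on a
# priority queue) to a fatigue-accumulation model: load builds linearly
# while working and decays linearly at rest. All parameters are invented.

def simulate_shift(cycle_time_s=60.0, work_fraction=0.5, shift_s=3600.0,
                   load_nm=40.0, recovery_rate=0.2):
    """Alternate work/rest events; accumulate load during work, decay at rest."""
    events = []  # (time, kind) priority queue
    t = 0.0
    while t < shift_s:
        heapq.heappush(events, (t, "work_start"))
        heapq.heappush(events, (t + cycle_time_s * work_fraction, "work_end"))
        t += cycle_time_s

    fatigue, prev_t, working = 0.0, 0.0, False
    while events:
        now, kind = heapq.heappop(events)
        dt = now - prev_t
        if working:
            fatigue += load_nm * dt / shift_s                 # build under load
        else:
            fatigue = max(0.0, fatigue - recovery_rate * dt)  # recover at rest
        working = (kind == "work_start")
        prev_t = now
    return fatigue

end_of_shift_fatigue = simulate_shift()
```

    The point of the pattern is that the DES layer owns the timing of exposures while the exposure model only sees durations, so either side can be replaced independently.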

  3. Simulations of an Offshore Wind Farm Using Large-Eddy Simulation and a Torque-Controlled Actuator Disc Model

    Science.gov (United States)

    Creech, Angus; Früh, Wolf-Gerrit; Maguire, A. Eoghan

    2015-05-01

    We present here a computational fluid dynamics (CFD) simulation of Lillgrund offshore wind farm, which is located in the Øresund Strait between Sweden and Denmark. The simulation combines a dynamic representation of wind turbines embedded within a large-eddy simulation CFD solver and uses hr-adaptive meshing to increase or decrease mesh resolution where required. This allows the resolution of both large-scale flow structures around the wind farm, and the local flow conditions at individual turbines; consequently, the response of each turbine to local conditions can be modelled, as well as the resulting evolution of the turbine wakes. This paper provides a detailed description of the turbine model which simulates the interaction between the wind, the turbine rotors, and the turbine generators by calculating the forces on the rotor, the body forces on the air, and instantaneous power output. This model was used to investigate a selection of key wind speeds and directions, investigating cases where a row of turbines would be fully aligned with the wind or at specific angles to the wind. Results shown here include presentations of the spin-up of turbines, the observation of eddies moving through the turbine array, meandering turbine wakes, and an extensive wind farm wake several kilometres in length. The key measurement available for cross-validation with operational wind farm data is the power output from the individual turbines, where the effect of unsteady turbine wakes on the performance of downstream turbines was a main point of interest. The results from the simulations were compared to the performance measurements from the real wind farm to provide a firm quantitative validation of this methodology. Having achieved good agreement between the model results and actual wind farm measurements, the potential of the methodology to provide a tool for further investigations of engineering and atmospheric science problems is outlined.
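
    As background to the turbine model described above, the basic actuator-disc estimates of rotor thrust and power can be written down directly. The sketch below uses the standard momentum-theory relations rather than the paper's torque-controlled model, and the coefficients and rotor size (roughly that of the Lillgrund turbines) are illustrative assumptions.

```python
import math

# Minimal actuator-disc estimate of rotor thrust and power from the
# standard relations F = 0.5*rho*A*Ct*U^2 and P = 0.5*rho*A*Cp*U^3.
# Ct, Cp and the rotor diameter are illustrative, not calibrated values.

def disc_forces(u_inf, rotor_diameter=93.0, rho=1.225, ct=0.8, cp=0.45):
    """Return (thrust in N, power in W) for hub-height wind speed u_inf (m/s)."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    thrust = 0.5 * rho * area * ct * u_inf ** 2
    power = 0.5 * rho * area * cp * u_inf ** 3
    return thrust, power

thrust, power = disc_forces(8.0)  # a typical below-rated wind speed
```

    In an embedded turbine model such as the one described, the thrust is applied back onto the flow as a body force over the disc volume, which is what produces the wakes whose downstream effect the study quantifies.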

  4. Urban metabolism: A review of research methodologies

    International Nuclear Information System (INIS)

    Zhang, Yan

    2013-01-01

    Urban metabolism analysis has become an important tool for the study of urban ecosystems. The problems of large metabolic throughput, low metabolic efficiency, and disordered metabolic processes are a major cause of unhealthy urban systems. In this paper, I summarize the international research on urban metabolism, and describe the progress that has been made in terms of research methodologies. I also review the methods used in accounting for and evaluating material and energy flows in urban metabolic processes, simulation of these flows using a network model, and practical applications of these methods. Based on this review of the literature, I propose directions for future research, and particularly the need to study the urban carbon metabolism because of the modern context of global climate change. Moreover, I recommend more research on the optimal regulation of urban metabolic systems. Highlights: •Urban metabolic processes can be analyzed by regarding cities as superorganisms. •Urban metabolism methods include accounting, assessment, modeling, and regulation. •Research methodologies have improved greatly since this field began in 1965. •Future research should focus on carbon metabolism and optimal regulation. -- The author reviews research progress in the field of urban metabolism, and based on her literature review, proposes directions for future research

  5. Selection methodology for LWR safety programs and proposals. Volume 2. Methodology application

    International Nuclear Information System (INIS)

    Ritzman, R.L.; Husseiny, A.A.

    1980-08-01

    The results of work done to update and apply a methodology for selecting (prioritizing) LWR safety technology R and D programs are described. The methodology is based on multiattribute utility (MAU) theory. Application of the methodology to rank-order a group of specific R and D programs included development of a complete set of attribute utility functions, specification of individual attribute scaling constants, and refinement and use of an interactive computer program (MAUP) to process decision-maker inputs and generate overall (multiattribute) program utility values. The output results from several decision-makers are examined for consistency and conclusions and recommendations regarding general use of the methodology are presented. 3 figures, 18 tables
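
    The core MAU computation described above can be sketched in a few lines: each program receives single-attribute utilities in [0, 1], which are combined with scaling constants into an overall utility used for rank-ordering. The additive form and all numbers below are illustrative; the report's actual attribute utility functions and scaling constants differ.

```python
# Sketch of additive multiattribute utility (MAU) scoring and rank-ordering
# of R and D programs. Weights and utility values are invented examples.

def overall_utility(utilities, weights):
    """Additive MAU: U = sum_i k_i * u_i, with scaling constants summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(k * u for k, u in zip(weights, utilities))

weights = [0.5, 0.3, 0.2]  # e.g. safety benefit, cost, schedule risk
programs = {
    "program_A": [0.9, 0.4, 0.6],
    "program_B": [0.6, 0.8, 0.7],
    "program_C": [0.3, 0.9, 0.9],
}
ranked = sorted(programs, key=lambda p: overall_utility(programs[p], weights),
                reverse=True)
```

    Consistency checks across decision-makers, as discussed in the abstract, amount to comparing the rank orders produced by different weight vectors.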

  6. Methodology for attainment of density and effective atomic number through dual energy technique using microtomographic images

    International Nuclear Information System (INIS)

    Alves, H.; Lima, I.; Lopes, R.T.

    2014-01-01

    Dual energy technique for computerized microtomography shows itself as a promising method for identification of mineralogy on geological samples of heterogeneous composition. It can also assist with differentiating very similar objects regarding the attenuation coefficient, which are usually not separable during image processing and analysis of microtomographic data. Therefore, the development of a feasible and applicable methodology of dual energy in the analysis of microtomographic images was sought. - Highlights: • Dual energy technique is promising for identification of distribution of minerals. • A feasible methodology of dual energy in analysis of tomographic images was sought. • The dual energy technique is efficient for density and atomic number identification. • Simulation showed that the proposed methodology agrees with theoretical data. • Nondestructive characterization of distribution of density and chemical composition
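
    The dual-energy idea can be made concrete with a simple two-term attenuation model, mu(E) = rho * (alpha + beta * Z**m / E**3), combining Compton and photoelectric contributions; two measurements at different energies then determine both the density rho and the effective atomic number Z. The model form and the calibration constants below are textbook-style assumptions, not the calibration used in the paper.

```python
# Illustrative dual-energy inversion for density and effective atomic
# number. ALPHA, BETA and the exponent M are hypothetical calibration
# constants; real systems calibrate these against reference materials.

ALPHA, BETA, M = 0.2, 9.8e-6, 3.8

def mu(rho, z, energy_kev):
    """Linear attenuation under the assumed Compton + photoelectric model."""
    return rho * (ALPHA + BETA * z ** M / energy_kev ** 3)

def invert(mu_lo, mu_hi, e_lo, e_hi):
    """Recover (rho, Z) from attenuation measured at two energies."""
    r = mu_lo / mu_hi
    x = ALPHA * (r - 1.0) / (1.0 / e_lo ** 3 - r / e_hi ** 3)  # x = beta * Z^m
    z = (x / BETA) ** (1.0 / M)
    rho = mu_lo / (ALPHA + x / e_lo ** 3)
    return rho, z

# Round-trip check with a synthetic aluminium-like sample (rho = 2.7, Z = 13).
m_lo, m_hi = mu(2.7, 13.0, 50.0), mu(2.7, 13.0, 100.0)
rho_est, z_est = invert(m_lo, m_hi, 50.0, 100.0)
```

    The round-trip with synthetic data mirrors the simulation check mentioned in the highlights: if the forward model is right, the inversion separates materials that a single-energy scan cannot.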

  7. Field programmable gate array reliability analysis using the dynamic flow graph methodology

    Energy Technology Data Exchange (ETDEWEB)

    McNelles, Phillip; Lu, Lixuan [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario (Canada)

    2016-10-15

    Field programmable gate array (FPGA)-based systems are thought to be a practical option to replace certain obsolete instrumentation and control systems in nuclear power plants. An FPGA is a type of integrated circuit, which is programmed after being manufactured. FPGAs have some advantages over other electronic technologies, such as analog circuits, microprocessors, and Programmable Logic Controllers (PLCs), for nuclear instrumentation and control, and safety system applications. However, safety-related issues for FPGA-based systems remain to be verified. Owing to this, modeling FPGA-based systems for safety assessment has now become an important point of research. One potential methodology is the dynamic flowgraph methodology (DFM). It has been used for modeling software/hardware interactions in modern control systems. In this paper, FPGA logic was analyzed using DFM. Four aspects of FPGAs are investigated: the 'IEEE 1164 standard', registers (D flip-flops), configurable logic blocks, and an FPGA-based signal compensator. The ModelSim simulations confirmed that DFM was able to accurately model those four FPGA properties, proving that DFM has the potential to be used in the modeling of FPGA-based systems. Furthermore, advantages of DFM over traditional reliability analysis methods and FPGA simulators are presented, along with a discussion of potential issues with using DFM for FPGA-based system modeling.

  8. Multiscale simulation of molecular processes in cellular environments.

    Science.gov (United States)

    Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone

    2016-11-13

    We describe the recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology allowing biosystems of unprecedented size to be simulated. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  9. A methodology to analyze the safety of a coastal nuclear power plant against typhoon external flooding risks

    International Nuclear Information System (INIS)

    Chen Tian; He Mi; Chen Guofei; Joly, Antoine; Pan Rong; Ji Ping

    2015-01-01

    For the protection of coastal Nuclear Power Plants (NPPs) against the external flooding hazard, the risks caused by natural events have to be taken into account. In this article, a methodology is proposed to analyze the risk of the typical natural event in China, the typhoon. It includes the simulation of the storm surge and the strong waves generated by a typhoon's passage over Chinese coastal zones, and the quantification of the consequent overtopping flow rate. The simulation is carried out by coupling two modules of the hydraulic modeling system TELEMAC-MASCARET from EDF: TELEMAC2D (shallow water module) and TOMAWAC (spectral wave module). Since the modeling system is open source, the methodology can be enriched with other phenomena in the near future to improve its performance in the safety analysis of coastal NPPs in China. (author)

  10. MARA - methodology to analyze environmental risks; MARA - elaboracao de metodologia para analise dos riscos ambientais

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Renato F; Yogui, Regiane [PETROBRAS, Rio de Janeiro, RJ (Brazil); Minniti, Vivienne [Companhia de Tecnologia de Saneamento Ambiental (CETESB), Sao Paulo, SP (Brazil). Setor de Analise de Riscos; Lopes, Carlos F [Companhia de Tecnologia de Saneamento Ambiental (CETESB), Sao Paulo, SP (Brazil). Setor de Operacoes de Emergencia; Milaneli, Joao [Companhia de Tecnologia de Saneamento Ambiental (CETESB), Ubatuba, SP (Brazil). Agencia Ambiental de Ubatuba; Torres, Carlos [TRANSPETRO - PETROBRAS Transportes, Sao Paulo, SP (Brazil). Seguranca, Meio Ambiente e Saude; Rodrigues, Gabriela; Mariz, Eduardo [Consultoria Paulista, SP (Brazil)

    2005-07-01

    In the oil industry, the environmental impact assessment of an accident is both multi and interdisciplinary and goes through several approaches and depths. Owing to the enormous complexity of environmental analysis, a science still in development and without full technological consensus, a macro methodology is presented to identify areas that can be impacted by pipeline leakages and to recommend applicable improvements, working as a table-top response plan. The Environmental Risk Mapping (MARA) methodology for pipeline rows is presented, its concept described, and the adoption of environmental mapping during risk analysis studies justified for new and existing PETROBRAS/TRANSPETRO pipelines. The development of this methodology is justified by the fact that it is a practical tool for the identification, analysis and categorization of the more vulnerable environmental elements along a pipeline row and its vicinity, during simulated occurrences of accidental spills of hydrocarbons in the environment. This methodology is a tool that allows the Environmental Agencies and PETROBRAS to better manage the Company's emergencies in advance. (author)

  11. Monte Carlo Simulation of a Solvated Ionic Polymer with Cluster Morphology

    National Research Council Canada - National Science Library

    Matthews, Jessica L; Lada, Emily K; Weiland, Lisa M; Smith, Ralph C; Leo, Donald J

    2005-01-01

    .... Traditional rotational isomeric state theory is applied in combination with a Monte Carlo methodology to develop a simulation model of the conformation of Nafion polymer chains on a nanoscopic level...
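
    The combination of rotational isomeric state ideas with Monte Carlo sampling can be caricatured with a toy lattice model: each backbone step either continues straight ("trans") or turns to a perpendicular direction ("gauche"), with the gauche state penalized by an energy. This cubic-lattice sketch only illustrates the sampling idea; the actual study uses real RIS geometry and energetics for Nafion chains.

```python
import math
import random

# Toy RIS-flavoured Monte Carlo chain growth on a cubic lattice: trans
# (continue straight) vs gauche (turn) chosen with Boltzmann weights.
# The gauche energy, kT, and lattice geometry are illustrative.

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_chain(n_bonds, e_gauche=0.5, kT=1.0, rng=random):
    p_trans = 1.0 / (1.0 + 4.0 * math.exp(-e_gauche / kT))  # 4 gauche choices
    pos, d = (0, 0, 0), (1, 0, 0)
    for _ in range(n_bonds):
        if rng.random() >= p_trans:  # gauche: turn to a perpendicular direction
            d = rng.choice([v for v in DIRS
                            if v != d and v != tuple(-c for c in d)])
        pos = tuple(p + c for p, c in zip(pos, d))
    return pos

def mean_sq_end_to_end(n_chains=2000, n_bonds=100, seed=1):
    """Average squared end-to-end distance over many sampled chains."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_chains):
        x, y, z = grow_chain(n_bonds, rng=rng)
        total += x * x + y * y + z * z
    return total / n_chains

r2 = mean_sq_end_to_end()
```

    Because trans steps preserve direction, the chain is stiffer than an ideal random walk, so the mean squared end-to-end distance exceeds the number of bonds; tuning the gauche energy tunes that stiffness, which is the RIS idea in miniature.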

  12. Development of new assessment methodology for locally corroded pipe

    International Nuclear Information System (INIS)

    Lim, Hwan; Shim, Do Jun; Kim, Yun Jae; Kim, Young Jin

    2002-01-01

    In this paper, a unified methodology based on the local stress concept for estimating the residual strength of locally thinned pipes is proposed. The underlying idea of the proposed methodology is that the local stress in the minimum section of a locally thinned pipe is related to the reference stress popularly used in creep problems. The problem then becomes how to define the reference stress, that is, the reference load. Extensive three-dimensional Finite Element (FE) analyses were performed to simulate full-scale pipe tests conducted for various shapes of wall thinned area under internal pressure and bending moment. Based on these FE results, the reference load is proposed, which is independent of material. A natural outcome of this method is the maximum load capacity. By comparison with existing test results, it is shown that the reference stress is related to the fracture stress, which in turn can be posed as the fracture criterion of locally thinned pipes. The proposed method is powerful as it can be easily generalised to more complex problems, such as pipe bends and tee-joints.

  13. Kaupapa Maori Methodology: Trusting the Methodology through Thick and Thin

    Science.gov (United States)

    Hiha, Anne Aroha

    2016-01-01

    Kaupapa Maori is thoroughly theorised in academia in Aotearoa and those wishing to use it as their research methodology can find support through the writing of a number of Maori academics. What is not so well articulated, is the experiential voice of those who have used Kaupapa Maori as research methodology. My identity as a Maori woman…

  14. Calculation of the electrostatic potential of lipid bilayers from molecular dynamics simulations: methodological issues

    DEFF Research Database (Denmark)

    Gurtovenko, Andrey A; Vattulainen, Ilpo

    2009-01-01

    of the electrostatic potential from atomic-scale molecular dynamics simulations of lipid bilayers. We discuss two slightly different forms of Poisson equation that are normally used to calculate the membrane potential: (i) a classical form when the potential and the electric field are chosen to be zero on one...... systems). For symmetric bilayers we demonstrate that both approaches give essentially the same potential profiles, provided that simulations are long enough (a production run of at least 100 ns is required) and that fluctuations of the center of mass of a bilayer are properly accounted for. In contrast...
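
    The "classical" form of the calculation discussed above amounts to a double integration of the charge density across the box, psi(z) = -(1/eps0) * integral from 0 to z of E(z') dz' with E(z') = (1/eps0) * integral of rho, taking the potential and field as zero at one box edge. The sketch below applies that recipe to a synthetic dipolar charge profile rather than MD data.

```python
import math

# Double trapezoid integration of a charge density profile rho(z) to get
# the electrostatic potential across a membrane-like box, with psi and E
# set to zero at z = 0. The charge profile is a synthetic sine, not MD data.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def potential_profile(rho, dz):
    """Return psi(z) from rho(z) (C/m^3) sampled on a uniform grid."""
    n = len(rho)
    efield = [0.0] * n          # E(z) = (1/eps0) * cumulative integral of rho
    for i in range(1, n):
        efield[i] = efield[i - 1] + 0.5 * (rho[i - 1] + rho[i]) * dz / EPS0
    psi = [0.0] * n             # psi(z) = -cumulative integral of E
    for i in range(1, n):
        psi[i] = psi[i - 1] - 0.5 * (efield[i - 1] + efield[i]) * dz
    return psi

# Synthetic antisymmetric charge density across a 6 nm box (one full sine
# period), so the system carries a net dipole but no net charge.
n, box = 601, 6e-9
dz = box / (n - 1)
rho = [1e6 * math.sin(2 * math.pi * i / (n - 1)) for i in range(n)]
psi = potential_profile(rho, dz)
```

    In an MD workflow the same recipe is applied to the binned charge density from the trajectory, and the abstract's warnings apply: the trajectory must be long enough and bilayer center-of-mass drift must be removed before binning.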

  15. Applying a behavioural simulation for the collection of data

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    2005-01-01

    To collect real-time data as opposed to retrospective data requires new methodological traits. One possibility is the use of behavioral simulations that synthesize the self-administered questionnaire, experimental designs, role-playing and scenarios. Supported by Web technology this new data...

  16. Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations

    Energy Technology Data Exchange (ETDEWEB)

    Caixas, Joan, E-mail: joan.caixas@f4e.europa.eu [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Guirao, Julio [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Bayon, Angel; Jones, Lawrence; Arbogast, Jean François [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Barbensi, Andrea [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Dans, Andres [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Facca, Aldo [Mangiarotti, Pannellia di Sedegliano, I-33039 Sedegliano (UD) (Italy); Fernandez, Elena; Fernández, José [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Iglesias, Silvia [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Ordieres, Javier [Numerical Analysis Technologies, S. L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); Pacheco, Jose Miguel [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Paoletti, Roberto [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy); Sanguinetti, Gian Paolo [Ansaldo Nucleare, Corso F.M. Perrone, 25, I-16152 Genoa (Italy); Stamos, Vassilis [F4E, c/ Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Tacconelli, Massimiliano [Walter Tosto, Via Erasmo Piaggio, 72, I-66100 Chieti Scalo (Italy)

    2013-10-15

    Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacture phase allows to generate modified procedures and pre-compensate distortions. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computation resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes which can rapidly assess different sequences. The accurate shape prediction, after each manufacturing phase, enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application to the construction of the Vacuum Vessel sector of the enhanced simulation methodology with condensed Finite Element computation techniques and results of the calibration on several test pieces for different types of welds.

  17. Weld distortion prediction of the ITER Vacuum Vessel using Finite Element simulations

    International Nuclear Information System (INIS)

    Caixas, Joan; Guirao, Julio; Bayon, Angel; Jones, Lawrence; Arbogast, Jean François; Barbensi, Andrea; Dans, Andres; Facca, Aldo; Fernandez, Elena; Fernández, José; Iglesias, Silvia; Jimenez, Marc; Jucker, Philippe; Micó, Gonzalo; Ordieres, Javier; Pacheco, Jose Miguel; Paoletti, Roberto; Sanguinetti, Gian Paolo; Stamos, Vassilis; Tacconelli, Massimiliano

    2013-01-01

    Highlights: ► Computational simulations of the weld processes can rapidly assess different sequences. ► Prediction of welding distortion to optimize the manufacturing sequence. ► Accurate shape prediction after each manufacture phase allows to generate modified procedures and pre-compensate distortions. ► The simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computation resources. ► For each welding process, the models are calibrated with the results of coupons and mock-ups. -- Abstract: The as-welded surfaces of the ITER Vacuum Vessel sectors need to be within a very tight tolerance, without a full-scale prototype. In order to predict welding distortion and optimize the manufacturing sequence, the industrial contract includes extensive computational simulations of the weld processes which can rapidly assess different sequences. The accurate shape prediction, after each manufacturing phase, enables actual distortions to be compared with the welding simulations to generate modified procedures and pre-compensate distortions. While previous mock-ups used heavy welded-on jigs to try to restrain the distortions, this method allows the use of lightweight jigs and yields important cost and rework savings. In order to enable the optimization of different alternative welding sequences, the simulation methodology is improved using condensed computation techniques with ANSYS in order to reduce computational resources. For each welding process, the models are calibrated with the results of coupons and mock-ups. The calibration is used to construct representative models of each segment and sector. This paper describes the application to the construction of the Vacuum Vessel sector of the enhanced simulation methodology with condensed Finite Element computation techniques and results of the calibration on several test pieces for different types of welds

  18. Methodology for dimensional variation analysis of ITER integrated systems

    International Nuclear Information System (INIS)

    Fuentes, F. Javier; Trouvé, Vincent; Cordier, Jean-Jacques; Reich, Jens

    2016-01-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of complex systems highly integrated, with critical functional requirements and reduced design clearances to minimize the impact in cost and performances. Tolerances and assembly accuracies in critical areas could have a serious impact in the final performances, compromising the machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity on the project life cycle. A 3D tolerance simulation analysis of ITER Tokamak machine has been developed based on 3DCS dedicated software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation on critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  19. Methodology for dimensional variation analysis of ITER integrated systems

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, F. Javier, E-mail: FranciscoJavier.Fuentes@iter.org [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France); Trouvé, Vincent [Assystem Engineering & Operation Services, rue J-M Jacquard CS 60117, 84120 Pertuis (France); Cordier, Jean-Jacques; Reich, Jens [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France)

    2016-11-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of complex systems highly integrated, with critical functional requirements and reduced design clearances to minimize the impact in cost and performances. Tolerances and assembly accuracies in critical areas could have a serious impact in the final performances, compromising the machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity on the project life cycle. A 3D tolerance simulation analysis of ITER Tokamak machine has been developed based on 3DCS dedicated software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation on critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  20. A performance assessment methodology for high-level radioactive waste disposal in unsaturated, fractured tuff

    International Nuclear Information System (INIS)

    Gallegos, D.P.

    1991-07-01

    Sandia National Laboratories has developed a methodology for performance assessment of deep geologic disposal of high-level nuclear waste. The applicability of this performance assessment methodology has been demonstrated for disposal in bedded salt and basalt; it has since been modified for assessment of repositories in unsaturated, fractured tuff. Changes to the methodology are primarily in the form of new or modified ground water flow and radionuclide transport codes. A new computer code, DCM3D, has been developed to model three-dimensional ground-water flow in unsaturated, fractured rock using a dual-continuum approach. The NEFTRAN 2 code has been developed to efficiently model radionuclide transport in time-dependent velocity fields, has the ability to use externally calculated pore velocities and saturations, and includes the effect of saturation-dependent retardation factors. In order to use these codes together in performance-assessment-type analyses, code-coupler programs were developed to translate DCM3D output into NEFTRAN 2 input. Other portions of the performance assessment methodology were evaluated as part of modifying the methodology for tuff. The scenario methodology developed under the bedded salt program has been applied to tuff. An investigation of the applicability of uncertainty and sensitivity analysis techniques to non-linear models indicates that Monte Carlo simulation remains the most robust technique for these analyses. No changes have been recommended for the dose and health effects models, nor the biosphere transport models. 52 refs., 1 fig
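
    The Monte Carlo uncertainty analysis the abstract endorses can be sketched with a toy stand-in for the transport model: sample uncertain inputs (here pore velocity and retardation factor), push each sample through the model, and summarize the output distribution. The decay-during-transport "model" and all parameter distributions below are invented for illustration and are not an actual DCM3D/NEFTRAN 2 calculation.

```python
import math
import random
import statistics

# Toy Monte Carlo uncertainty propagation: release fraction decays with
# travel time t = L * R / v (path length, retardation, pore velocity).
# Distributions, path length, and the nominal half-life are hypothetical.

def release_fraction(v, R, L=500.0, half_life_y=1000.0):
    travel_time = L * R / v                    # years
    return 0.5 ** (travel_time / half_life_y)  # decay during transport

def monte_carlo(n=5000, seed=7):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.lognormvariate(math.log(5.0), 0.5)  # pore velocity, m/y
        R = rng.uniform(10.0, 100.0)                # retardation factor
        out.append(release_fraction(v, R))
    return statistics.mean(out), max(out)

mean_f, max_f = monte_carlo()
```

    Monte Carlo's robustness for non-linear models, noted in the abstract, comes from exactly this structure: no linearization of the model is needed, only repeated evaluation.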