WorldWideScience

Sample records for broadline simulation methodology

  1. GPS system simulation methodology

    Science.gov (United States)

    Ewing, Thomas F.

    1993-01-01

    The following topics are presented: background; Global Positioning System (GPS) methodology overview; the graphical user interface (GUI); current models; application to space nuclear power/propulsion; and interfacing requirements. The discussion is presented in vugraph form.

  2. Simulation enabled safeguards assessment methodology

    International Nuclear Information System (INIS)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-01-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering, applied to the facility design as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment Methodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wire-frame construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  3. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objectives of this report were to develop a validation methodology for building energy analysis simulations, to collect high-quality, unambiguous empirical data for validation, and to apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, a literature survey, the validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of the codes, and conclusions.

  4. Nuclear power plant simulation facility evaluation methodology

    International Nuclear Information System (INIS)

    Haas, P.M.; Carter, R.J.; Laughery, K.R. Jr.

    1985-01-01

    A methodology for evaluation of nuclear power plant simulation facilities with regard to their acceptability for use in the US Nuclear Regulatory Commission (NRC) operator licensing exam is described. The evaluation is based primarily on simulator fidelity, but incorporates some aspects of direct operator/trainee performance measurement. The panel presentation and paper discuss data requirements, data collection, data analysis and criteria for conclusions regarding the fidelity evaluation, and summarize the proposed use of direct performance measurement. While field testing and refinement of the methodology are recommended, this initial effort provides a firm basis for NRC to fully develop the necessary methodology.

  5. Methodological issues in lipid bilayer simulations

    NARCIS (Netherlands)

    Anezo, C; de Vries, AH; Holtje, HD; Tieleman, DP; Marrink, SJ

    2003-01-01

    Methodological issues in molecular dynamics (MD) simulations, such as the treatment of long-range electrostatic interactions or the type of pressure coupling, have important consequences for the equilibrium properties observed. We report a series of long (up to 150 ns) MD simulations of ...

  6. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  7. Methodology for the interactive graphic simulator construction

    International Nuclear Information System (INIS)

    Milian S, Idalmis; Rodriguez M, Lazaro; Lopez V, Miguel A.

    1997-01-01

    The PC-supported Interactive Graphic Simulators (IGS) have successfully been used for industrial training programs in many countries. This paper is intended to illustrate the general methodology applied by our research team for the construction of this kind of conceptual or small-scale simulator. The information and tools available to achieve this goal are also described. The applicability of the present methodology was confirmed with the construction of a set of IGS for nuclear power plant operator training programs in Cuba. One of them, relating to reactor kinetics, is shown and briefly described in this paper. (author). 11 refs., 3 figs

  8. CAGE IIIA Distributed Simulation Design Methodology

    Science.gov (United States)

    2014-05-01

    ... the Guide for Understanding and Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology lie in understanding how to design the distributed simulation so that each nation's capabilities are available in the other nations' simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this ...

  9. Robust Optimization in Simulation: Taguchi and Response Surface Methodology

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, J.P.C.; Meloni, C.

    2008-01-01

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by response surface methodology (RSM) ...
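
    The flavor of this robust approach can be sketched in a few lines: cross a decision factor with a sampled environmental factor, run the simulation over the cross, and pick the decision by a mean-variance criterion. The test function, noise distribution, and weight below are invented, and plain sampling stands in for the authors' response-surface machinery; this is a sketch, not their implementation.

        import numpy as np

        # Robust-optimization toy: choose decision x accounting for an
        # uncertain environment e. Everything numeric here is illustrative.
        rng = np.random.default_rng(0)

        def simulate(x, e):
            return (x - 2.0) ** 2 + 0.5 * x * e + e ** 2   # invented response

        xs = np.linspace(0.0, 4.0, 41)        # candidate decisions
        es = rng.normal(0.0, 1.0, size=200)   # sampled environments (Taguchi's "noise")

        mean = np.array([simulate(x, es).mean() for x in xs])
        std = np.array([simulate(x, es).std() for x in xs])
        robust = mean + 1.0 * std             # weighted mean-variance criterion

        print(f"robust choice x ~ {xs[np.argmin(robust)]:.2f}")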

  10. Methodology for Developing a Diesel Exhaust After Treatment Simulation Tool

    DEFF Research Database (Denmark)

    Christiansen, Tine; Jensen, Johanne; Åberg, Andreas

    2018-01-01

    A methodology for the development of catalyst models is presented, together with a methodology for implementing such models in a modular simulation tool that simulates the units in succession. A case study illustrates how suitable models can be found and used for s...

  11. Simulation Methodology in Nursing Education and Adult Learning Theory

    Science.gov (United States)

    Rutherford-Hemming, Tonya

    2012-01-01

    Simulation is often used in nursing education as a teaching methodology. Simulation is rooted in adult learning theory. Three learning theories, cognitive, social, and constructivist, explain how learners gain knowledge with simulation experiences. This article takes an in-depth look at each of these three theories as each relates to simulation.…

  12. Verification and validation methodology of training simulators

    International Nuclear Information System (INIS)

    Hassan, M.W.; Khan, N.M.; Ali, S.; Jafri, M.N.

    1997-01-01

    A full-scope training simulator comprising 109 plant systems of a 300 MWe PWR plant, contracted by the Pakistan Atomic Energy Commission (PAEC) from China, is near completion. The simulator is distinctive in that it will be ready prior to fuel loading. The models for the full-scope training simulator have been developed under the APROS (Advanced PROcess Simulator) environment developed by the Technical Research Center (VTT) and Imatran Voima (IVO) of Finland. The replicated control room of the plant is contracted from the Shanghai Nuclear Engineering Research and Design Institute (SNERDI), China. The development of simulation models to represent all the systems of the target plant that contribute to plant dynamics and are essential for operator training has been indigenously carried out at PAEC. This multifunctional simulator is at present under extensive testing and will be interfaced with the control panels in March 1998 so as to realize a full-scope training simulator. The validation of the simulator is a joint venture between PAEC and SNERDI. For the individual components and the individual plant systems, the results have been compared against design data and PSAR results to confirm the faithfulness of the simulator to the physical plant systems. The reactor physics parameters have been validated against experimental results and benchmarks generated using design codes. Verification and validation in the integrated state has been performed against benchmark transients conducted using RELAP5/MOD2 for the complete spectrum of anticipated transients, covering the five well-known categories. (author)

  13. Methodology for the LABIHS PWR simulator modernization

    Energy Technology Data Exchange (ETDEWEB)

    Jaime, Guilherme D.G.; Oliveira, Mauro V., E-mail: gdjaime@ien.gov.b, E-mail: mvitor@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a set of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for graphical user interface (GUI) design; evaluation of control rooms and interfaces considering both ergonomics and human factors aspects; interaction analysis between operators and the various systems operated by them; and human reliability analysis in scenarios considering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HP C3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in the HP Fortran-77 programming language, with a shared memory used to exchange data among all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and lack of the option of running multiple simulation instances simultaneously. Thus, there is a need for a proposal and implementation of solutions so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e. personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to ...

  14. Methodology for the LABIHS PWR simulator modernization

    International Nuclear Information System (INIS)

    Jaime, Guilherme D.G.; Oliveira, Mauro V.

    2011-01-01

    The Human-System Interface Laboratory (LABIHS) simulator is composed of a set of advanced hardware and software components whose goal is to simulate the main characteristics of a Pressurized Water Reactor (PWR). This simulator serves a set of purposes, such as: control room modernization projects; design of operator aiding systems; providing technological expertise for graphical user interface (GUI) design; evaluation of control rooms and interfaces considering both ergonomics and human factors aspects; interaction analysis between operators and the various systems operated by them; and human reliability analysis in scenarios considering simulated accidents and normal operation. The simulator runs on a PA-RISC architecture server (HP C3700), developed around 2000, using the HP-UX operating system. All mathematical modeling components were written in the HP Fortran-77 programming language, with a shared memory used to exchange data among all simulator modules. Although this hardware/software framework was discontinued in 2008, with customer support ceasing in 2013, it is still used to run and operate the simulator. Because the simulator is based on an obsolete and proprietary appliance, the laboratory is subject to efficiency and availability issues, such as: downtime caused by hardware failures; inability to run experiments on modern and well-known architectures; and lack of the option of running multiple simulation instances simultaneously. Thus, there is a need for a proposal and implementation of solutions so that: the simulator can be ported to the Linux operating system, running on the x86 instruction set architecture (i.e. personal computers); multiple instances of the simulator can run simultaneously; and the operator terminals can run remotely. This paper deals with the design stage of the simulator modernization, in which a thorough inspection of the hardware and software currently in operation is performed. Our goal is to ...

  15. SHIPBUILDING PRODUCTION PROCESS DESIGN METHODOLOGY USING COMPUTER SIMULATION

    OpenAIRE

    Marko Hadjina; Nikša Fafandjel; Tin Matulja

    2015-01-01

    In this research, a shipbuilding production process design methodology using computer simulation is suggested. The suggested methodology is expected to provide a better and more efficient tool for the design of complex shipbuilding production processes. In the first part of this research, the existing practice for production process design in shipbuilding is discussed, and its shortcomings and problems are emphasized. In continuation, the discrete event simulation modelling method, as the basis of the suggested ...

  16. Methodology for functional MRI of simulated driving.

    Science.gov (United States)

    Kan, Karen; Schweizer, Tom A; Tam, Fred; Graham, Simon J

    2013-01-01

    The developed world faces major socioeconomic and medical challenges associated with motor vehicle accidents caused by risky driving. Functional magnetic resonance imaging (fMRI) of individuals using virtual reality driving simulators may provide an important research tool to assess driving safety, based on brain activity and behavior. An fMRI-compatible driving simulator was developed and evaluated in the context of straight driving, turning, and stopping in 16 young healthy adults. Robust maps of brain activity were obtained, including activation of the primary motor cortex, cerebellum, visual cortex, and parietal lobe, with limited head motion (...); fMRI of simulated driving is a feasible undertaking.

  17. Adaptive LES Methodology for Turbulent Flow Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of the DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic ...
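
    The Re^(9/4) scaling above lends itself to a one-line cost estimate. A minimal sketch (the proportionality constant is arbitrary and the numbers purely illustrative; this is not the project's code):

        # Grid-point growth for 3-D DNS under the N ~ Re^(9/4) scaling
        # quoted in the abstract. The constant c is a placeholder.
        def dns_grid_points(re, c=1.0):
            """Estimated number of 3-D grid points for DNS at Reynolds number re."""
            return c * re ** (9.0 / 4.0)

        for re in (1e3, 1e4, 1e5, 1e6):
            print(f"Re = {re:.0e}  ->  ~{dns_grid_points(re):.2e} grid points")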

  18. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and to the difficulties on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  1. Methodology for eliciting, encoding and simulating human decision making behaviour

    OpenAIRE

    Rider, Conrad Edgar Scott

    2012-01-01

    Agent-based models (ABM) are an increasingly important research tool for describing and predicting interactions among humans and their environment. A key challenge for such models is the ability to faithfully represent human decision making with respect to observed behaviour. This thesis aims to address this challenge by developing a methodology for the empirical measurement and simulation of decision making in human-environment systems. The methodology employs the Beliefs-Desires-Intentions ...

  2. An EPRI methodology for determining and monitoring simulator operating limits

    International Nuclear Information System (INIS)

    Eichelberg, R.; Pellechi, M.; Wolf, B.; Colley, R.

    1989-01-01

    Of paramount concern to nuclear utilities today is whether their plant-referenced simulator(s) comply with ANSI/ANS 3.5-1985. Of special interest is Section 4.3 of the Standard, which requires, in part, that a means be provided to alert the instructor when certain parameters approach values indicative of events beyond the implemented model or known plant behavior. EPRI established Research Project 2054-2 to develop a comprehensive plan for determining, monitoring, and implementing simulator operating limits. As part of the project, a survey was conducted to identify the current/anticipated approach each of the sampled utilities was using to meet the requirements of Section 4.3. A preliminary methodology was drafted and host utilities interviewed. The interview process led to redefining the methodology. This paper covers the objectives of the EPRI project, survey responses, an overview of the methodology, resource requirements and conclusions.

  3. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  4. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
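
    The solution-verification step named here can be illustrated with the standard Richardson-extrapolation recipe on three systematically refined grids. A generic textbook sketch with made-up values, not the GBS implementation:

        import math

        # Estimate the observed order of accuracy p from a scalar result f
        # computed on three grids with constant refinement ratio r.
        def observed_order(f_coarse, f_medium, f_fine, r):
            return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

        def richardson_extrapolate(f_medium, f_fine, r, p):
            # Extrapolated estimate from the two finest grids.
            return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

        f1, f2, f3, r = 1.1200, 1.0300, 1.0075, 2.0   # hypothetical grid results
        p = observed_order(f1, f2, f3, r)
        print(f"observed order ~ {p:.2f}")            # ~2.00 for these values
        print(f"extrapolated value ~ {richardson_extrapolate(f2, f3, r, p):.4f}")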

  5. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data-based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
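
    The integrated time-history idea (sample an event chain, apply conditional probabilities, tally damage) can be caricatured in a few lines of Monte Carlo. Every frequency and probability below is an invented placeholder, not a TORMIS model or datum:

        import random

        random.seed(1)

        def simulate_year():
            """One simulated year: tornado occurrence, missile transport, impact."""
            if random.random() > 1e-3:            # annual strike frequency (invented)
                return False
            n_missiles = random.randint(0, 50)    # debris injected into the wind field
            for _ in range(n_missiles):
                hit = random.random() < 0.01      # transport ends on the target
                damaging = random.random() < 0.2  # impact exceeds damage threshold
                if hit and damaging:
                    return True
            return False

        trials = 1_000_000
        hits = sum(simulate_year() for _ in range(trials))
        print(f"estimated annual damage probability ~ {hits / trials:.1e}")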

  6. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    Science.gov (United States)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent-oriented software development. The AOM methodology was proposed to aid developers with the introduction of techniques, terminology, notation and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are not many case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. They demonstrate how AOM is useful for epidemiology and ecology studies, further validating AOM in a qualitative manner.

  7. An automated methodology development. [software design for combat simulation

    Science.gov (United States)

    Hawley, L. R.

    1985-01-01

    The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development effort. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real time, and one-to-one correspondence between the object states and real-world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem database directly to enhance code efficiency by, e.g., eliminating unused subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.

  8. Data-driven simulation methodology using DES 4-layer architecture

    Directory of Open Access Journals (Sweden)

    Aida Saez

    2016-05-01

    In this study, we present a methodology to build data-driven simulation models of manufacturing plants. We go further than other research proposals and suggest focusing simulation model development on a 4-layer architecture (network, logic, database and visual reality). The Network layer includes the system infrastructure. The Logic layer covers the operations planning and control system and the material handling equipment system. The Database holds all the information needed to perform the simulation, the results used for analysis, and the values that the Logic layer uses to manage the plant. Finally, the Visual Reality layer displays an augmented reality system including not only the machinery and the movement but also blackboards and other Andon elements. This architecture provides numerous advantages, as it helps to build a simulation model that consistently considers the internal logistics in a very flexible way.
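
    A skeleton of that 4-layer decomposition in code form may make the separation concrete; the layer names follow the abstract, while every field and method is an illustrative placeholder rather than the authors' implementation:

        from dataclasses import dataclass, field

        @dataclass
        class DatabaseLayer:            # inputs, results, and live plant values
            plant_data: dict = field(default_factory=dict)

        @dataclass
        class NetworkLayer:             # system infrastructure
            nodes: list = field(default_factory=list)

        @dataclass
        class LogicLayer:               # planning/control + material handling
            db: DatabaseLayer
            def plan_operations(self):
                return self.db.plant_data.get("orders", [])

        @dataclass
        class VisualRealityLayer:       # machinery, movement, Andon boards
            def render(self, state):
                print("rendering plant state:", state)

        db = DatabaseLayer({"orders": ["job-1", "job-2"]})
        VisualRealityLayer().render(LogicLayer(db).plan_operations())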

  9. Validation of response simulation methodology of Albedo dosemeter

    International Nuclear Information System (INIS)

    Freitas, B.M.; Silva, A.X. da

    2016-01-01

    The Instituto de Radioprotecao e Dosimetria developed and runs a neutron TLD albedo individual monitoring service. To optimize the dose calculation algorithm and to infer new calibration factors, the response of this dosemeter was simulated. In order to validate the employed methodology, it was applied to the simulation of a problem from the QUADOS (Quality Assurance of Computational Tools for Dosimetry) intercomparison, which aimed to evaluate dosimetric problems, one being to calculate the response of a generic albedo dosemeter. The obtained results were compared with those of other modelings and with the reference one, with good agreement. (author)

  10. The SIMRAND methodology - Simulation of Research and Development Projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1984-01-01

    In research and development projects, a commonly occurring management decision is concerned with the optimum allocation of resources to achieve the project goals. Because of resource constraints, management has to make a decision regarding the set of proposed systems or tasks which should be undertaken. SIMRAND (Simulation of Research and Development Projects) is a methodology which was developed for aiding management in this decision. Attention is given to a problem description, aspects of model formulation, the reduction phase of the model solution, the simulation phase, and the evaluation phase. The implementation of the considered approach is illustrated with the aid of an example which involves a simplified network of the type used to determine the price of silicon solar cells.
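
    The simulation and evaluation phases described above boil down to ranking alternative task sets by a simulated, uncertain figure of merit. A minimal sketch (the reduction phase is omitted and all task data are hypothetical):

        import random

        random.seed(7)

        # (min, mode, max) triangular outcome for each candidate task set.
        task_sets = {
            "set A": (0.6, 1.0, 1.8),
            "set B": (0.4, 1.2, 1.5),
        }

        def expected_outcome(lo, mode, hi, n=100_000):
            return sum(random.triangular(lo, hi, mode) for _ in range(n)) / n

        for name, (lo, mode, hi) in task_sets.items():
            print(f"{name}: expected figure of merit ~ {expected_outcome(lo, mode, hi):.3f}")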

  11. Montecarlo simulation for a new high resolution elemental analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion]

    1996-12-31

    Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated with each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were generated with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  12. Montecarlo simulation for a new high resolution elemental analysis methodology

    International Nuclear Information System (INIS)

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated with each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were generated with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to its high intensity and the ability to control the energy of the incident beam. The energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  13. Simulation Environment Based on the Universal Verification Methodology

    CERN Document Server

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan that sets the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). Progress is measured by coverage monitors added to the simulation environment; in this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip.

  14. Methodological Aspects of Modelling and Simulation of Robotized Workstations

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-05-01

    From the point of view of the development of application and program products, the key directions that need to be respected in computer support for project activities are quite clearly specified. User interfaces with a high degree of graphical interactive convenience and two-dimensional and three-dimensional computer graphics contribute greatly to streamlining project methodologies and procedures in particular. This is mainly because a large number of the tasks solved in the modern design of robotic systems are clearly graphical. Automation of graphical tasks is therefore a significant development direction for the subject area. The authors present the results of their research in the area of automation and computer-aided design of robotized systems. A new methodical approach to modelling robotic workstations, consisting of ten steps incorporated into the four phases of the logistics process of creating and implementing a robotic workplace, is presented. The emphasis is placed on the modelling and simulation phase, with verification of the elaborated methodologies on specific projects or elements of a robotized welding plant in automotive production.

  15. Methodology for simulation of geomagnetically induced currents in power systems

    Directory of Open Access Journals (Sweden)

    Boteler, David

    2014-07-01

    To assess the geomagnetic hazard to power systems, it is useful to be able to simulate the geomagnetically induced currents (GIC) that are produced during major geomagnetic disturbances. This paper examines the methodology used in power system analysis and shows how it can be applied to modelling GIC. Electric fields in the area of the power network are used to determine the voltage sources or equivalent current sources in the transmission lines. The power network can be described by a mesh impedance matrix, which is combined with the voltage sources to calculate the GIC in each loop. Alternatively, the power network can be described by a nodal admittance matrix, which is combined with the sum of current sources into each node to calculate the nodal voltages, which are then used to calculate the GIC in the transmission lines and the GIC flowing to ground at each substation. Practical calculations can be made by superposition of results calculated separately for northward and eastward electric fields. This can be done using magnetic data from a single observatory to calculate an electric field that is a uniform approximation of the field over the area of the power system. It is also shown how the superposition of results can be extended to use data from two observatories, approximating the electric field by a linear variation between the two observatory locations. These calculations provide an efficient method for simulating the GIC that would be produced by historically significant geomagnetic storm events.
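
    For a toy two-substation network, the nodal-admittance route described in the abstract reduces to one linear solve. A sketch with made-up impedances and a uniform northward field (numpy assumed available; the paper's method, not its data):

        import numpy as np

        E_field = 1.0                  # V/km, uniform northward field (illustrative)
        line_km = 100.0                # transmission line length along the field
        V_src = E_field * line_km      # induced series voltage in the line

        R_line = 5.0                   # line resistance, ohms (invented)
        R_gnd = np.array([0.5, 0.5])   # substation grounding resistances, ohms

        # Norton equivalent of the line voltage source, then nodal equations Y v = j.
        Y = np.array([[1/R_line + 1/R_gnd[0], -1/R_line],
                      [-1/R_line,             1/R_line + 1/R_gnd[1]]])
        j = np.array([-V_src / R_line, V_src / R_line])

        v = np.linalg.solve(Y, j)                     # nodal voltages
        print("GIC to ground [A]:", v / R_gnd)        # per-substation earth current
        print("GIC in line   [A]:", (V_src + v[0] - v[1]) / R_line)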

  16. Broad-line high-excitation gas in the elliptical galaxy NGC5128

    International Nuclear Information System (INIS)

    Phillips, M.M.; Taylor, K.; Axon, D.J.; Atherton, P.D.; Hook, R.N.

    1984-01-01

    A faint but extensive component of broad-line ionized gas has been discovered in the peculiar giant elliptical galaxy NGC5128. This component has a radically different spatial distribution from the well-studied rotating photoionized gas associated with the dust lane, although the velocity fields of the two components are similar. The origin of the broad-line gas is considered, and its possible relation to the active nucleus and the X-ray jet is discussed. (author)

  17. Wind Farm LES Simulations Using an Overset Methodology

    Science.gov (United States)

    Ananthan, Shreyas; Yellapantula, Shashank

    2017-11-01

    Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10^-6) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option to address this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of the varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit the relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging node) unstructured meshes. Turbine structures are modeled using both actuator line approaches and fully resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations, the Office of Science and the National Nuclear Security Administration.

  18. Simulation environment based on the Universal Verification Methodology

    International Nuclear Information System (INIS)

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan that sets the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). Progress is measured by coverage monitors added to the simulation environment; in this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.

  19. Broad-lined Supernova 2016coi with a Helium Envelope

    Energy Technology Data Exchange (ETDEWEB)

    Yamanaka, Masayuki [Department of Physics, Faculty of Science and Engineering, Konan University, Okamoto, Kobe, Hyogo 658-8501 (Japan); Nakaoka, Tatsuya; Kawabata, Miho [Department of Physical Science, Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima 739-8526 (Japan); Tanaka, Masaomi [National Astronomical Observatory of Japan, National Institutes of Natural Sciences, Osawa, Mitaka, Tokyo 181-8588 (Japan); Maeda, Keiichi [Department of Astronomy, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Honda, Satoshi; Hosoya, Kensuke; Karita, Mayu; Morihana, Kumiko [Nishi-Harima Astronomical Observatory, Center for Astronomy, University of Hyogo, 407-2 Nishigaichi, Sayo-cho, Sayo, Hyogo 679-5313 (Japan); Hanayama, Hidekazu [Ishigakijima Astronomical Observatory, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 1024-1 Arakawa, Ishigaki, Okinawa 907-0024 (Japan); Morokuma, Tomoki [Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Imai, Masataka [Department of Cosmosciences, Graduate School of Science, Hokkaido University, Kita 10 Nishi8, Kita-ku, Sapporo 060-0810 (Japan); Kinugasa, Kenzo [Nobeyama Radio Observatory, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 462-2 Nobeyama, Minamimaki, Minamisaku, Nagano 384-1305 (Japan); Murata, Katsuhiro L. [Department of Astrophysics, Nagoya University, Chikusa-ku, Nagoya 464-8602 (Japan); Nishimori, Takefumi; Gima, Hirotaka; Ito, Ayano; Morikawa, Yuto; Murakami, Kotone [Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065 (Japan); Hashimoto, Osamu, E-mail: yamanaka@center.konan-u.ac.jp [Gunma Astronomical Observatory, Takayama, Gunma 377-0702 (Japan); and others

    2017-03-01

    We present the early-phase spectra and the light curves of the broad-lined (BL) supernova (SN) 2016coi from t = 7 to 67 days after the estimated explosion date. This SN was initially reported as a BL Type SN Ic (SN Ic-BL). However, we found that spectra up to t = 12 days exhibited the He i λ 5876, λ 6678, and λ 7065 absorption lines. We show that the smoothed and blueshifted spectra of normal SNe Ib are remarkably similar to the observed spectrum of SN 2016coi. The line velocities of SN 2016coi were similar to those of SNe Ic-BL and significantly faster than those of SNe Ib. Analyses of the line velocity and light curve suggest that the kinetic energy and the total ejecta mass of SN 2016coi are similar to those of SNe Ic-BL. Together with BL SNe 2009bb and 2012ap, for which the detection of He i was also reported, these SNe could be transitional objects between SNe Ic-BL and SNe Ib, and be classified as BL Type “Ib” SNe (SNe “Ib”-BL). Our work demonstrates the diversity of the outermost layer in BL SNe, which should be related to the variety of the evolutionary paths.

  20. Development of radiation risk assessment simulator using system dynamics methodology

    International Nuclear Information System (INIS)

    Kang, Kyung Min; Jae, Moosung

    2008-01-01

    The potential magnitudes of radionuclide releases under severe accident loadings and offsite consequences, as well as the overall risk (the product of accident frequencies and consequences), are analyzed and evaluated quantitatively in this study. The system dynamics methodology has been applied to predict time-dependent behaviors such as feedback and dependency, as well as to model the uncertain behavior of complex physical systems. It is used to construct the transfer mechanisms of time-dependent radioactivity concentration and to evaluate them. Dynamic variations of radioactivity are simulated by considering several effects such as deposition, weathering, washout, re-suspension, root uptake, translocation, leaching, senescence, intake, and excretion of soil. A time-dependent radio-ecological model applicable to the Korean environment has been developed in order to assess the radiological consequences following the short-term deposition of radionuclides during severe accidents at a nuclear power plant. An ingestion food chain model can estimate time-dependent radioactivity concentrations in foodstuffs. It is also shown that the system dynamics approach is useful for analyzing the phenomena of the complex system as well as the behavior of structure values with respect to time. The output of this model (Bq ingested per Bq m^-2 deposited) may be multiplied by the deposition and a dose conversion factor (Gy Bq^-1) to yield organ-specific doses. The model may be run deterministically to yield a single estimate, or stochastically via Monte Carlo calculation to yield distributions that reflect parameter and model uncertainties. The results of this study may contribute to identifying the relative importance of various parameters in consequence analysis, as well as to assessing risk reduction effects in accident management. (author)
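
    The closing dose arithmetic is a three-factor product. A minimal sketch with placeholder numbers (none of these are the paper's Korean-specific parameters):

        # (Bq ingested per Bq/m^2 deposited) x deposition x dose coefficient -> dose.
        intake_per_deposit = 2.0e-3   # Bq ingested per Bq m^-2 deposited (model output)
        deposition = 1.0e4            # Bq m^-2 deposited during the release (invented)
        dose_coefficient = 1.3e-8     # Gy per Bq ingested (placeholder)

        dose = intake_per_deposit * deposition * dose_coefficient
        print(f"estimated organ dose ~ {dose:.2e} Gy")   # 2.60e-07 Gy for these values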

  1. A framework for using simulation methodology in ergonomics interventions in design projects

    DEFF Research Database (Denmark)

    Broberg, Ole; Duarte, Francisco; Andersen, Simone Nyholm

    2014-01-01

    The aim of this paper is to outline a framework of simulation methodology in design processes from an ergonomics perspective.

  2. Methodology for Evaluating the Simulator Flight Performance of Pilots

    National Research Council Canada - National Science Library

    Smith, Jennifer

    2004-01-01

    The type of research that investigates operational tasks such as flying an aircraft or flight simulator is extremely useful to the Air Force's operational community because the results apply directly...

  3. An improved methodology for dynamic modelling and simulation of ...

    Indian Academy of Sciences (India)

    The diversity of the processes and the complexity of the drive system ... modelling the specific event; general simulation tools such as MATLAB provide the user with tools for creating ... using pulse width modulation (PWM) techniques.

  4. A methodology of neutronic-thermodynamics simulation for fast reactor

    International Nuclear Information System (INIS)

    Waintraub, M.

    1986-01-01

    Aiming at a general optimization of the project, with controlled fuel depletion and management, this paper develops a neutronic-thermodynamics simulator, SIRZ, which, besides being sufficiently precise, is also economical. This results in a 75% reduction in CPU time for a startup calculation when compared with the same calculation in the CITATION code. The simulation system by perturbation calculations, applied to fast reactors, was tested and produces errors smaller than 1% in all components of the reference state given by the CITATION code. (author)

  5. Development and new applications of quantum chemical simulation methodology

    International Nuclear Information System (INIS)

    Weiss, A. K. H.

    2012-01-01

    The Division of Theoretical Chemistry at the University of Innsbruck is focused on the study of chemical compounds in aqueous solution, mainly by means of hybrid quantum mechanical/molecular mechanical molecular dynamics (QM/MM MD) simulations. Besides the standard means of data analysis employed for such simulations, this study presents several advanced and capable algorithms for the description of structural and dynamic properties of the simulated species and its hydration. The first part of this thesis further presents selected exemplary simulations, in particular a comparative study of formamide and N-methylformamide, guanidinium, and urea. An included review article further summarizes the major advances of these studies. The computer programs developed in the course of this thesis are by now well established in the research field. The second part of this study presents the theory and a development guide for a quantum chemical program, QuMuLuS, which is now used as the QM program for recent QM/MM simulations at the division. This part also presents newly developed algorithms for electron integral evaluation and point-charge embedding. The program is validated by benchmark computations. The associated theory is presented at a detailed level, to serve as a source for contemporary and future studies in the division. In the third and final part, further investigations of related topics are addressed. These cover additional schemes of molecular simulation analysis, new software, as well as a mathematical investigation of a non-standard two-electron integral. (author)

  6. Evolutionary-Simulative Methodology in the Management of Social and Economic Systems

    Directory of Open Access Journals (Sweden)

    Konyavskiy V.A.

    2017-01-01

    The article outlines the main provisions of the evolutionary-simulative methodology (ESM), which is a methodology for the mathematical modeling of equilibrium random processes (CPR), widely used in the economy. It discusses the basic directions of the use of ESM solutions for the management of social and economic systems.

  7. A Dynamic Defense Modeling and Simulation Methodology using Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Kangsun Lee

    2010-04-01

    Defense modeling and simulation requires interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to dynamically adapt to various war-game events, commands and controls. In this paper, we propose a semantic web service based methodology for developing war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator is constructed to demonstrate the methodology; it successfully shows that the level of interoperability and autonomy can be greatly improved.

  8. A methodology to simulate the cutting process for a nuclear dismantling simulation based on a digital manufacturing platform

    International Nuclear Information System (INIS)

    Hyun, Dongjun; Kim, Ikjune; Lee, Jonghwan; Kim, Geun-Ho; Jeong, Kwan-Seong; Choi, Byung Seon; Moon, Jeikwon

    2017-01-01

    Highlights: • The goal is to provide existing technology with a cutting function handling the dismantling process. • The proposed technology can handle various cutting situations in dismantlement activities. • The proposed technology can be implemented in existing graphical process simulation software. • Simulation results have demonstrated that the proposed technology achieves its goal. • The proposed technology enlarges the application of graphic simulation to dismantlement activities. - Abstract: This study proposes a methodology to simulate the cutting process on a digital manufacturing platform for the flexible planning of nuclear facility decommissioning. During the planning phase of decommissioning, visualization and verification using process simulation can be powerful tools for the flexible planning of the dismantling process of highly radioactive, large and complex nuclear facilities. However, existing research and commercial solutions are not sufficient for such a situation because complete segmented digital models for the dismantling objects, such as the reactor vessel, internal assembly, and closure head, must be prepared before the process simulation. This preparation work has significantly impeded the broad application of process simulation due to its complexity and workload. The process simulation methodology proposed in this paper can flexibly handle various dismantling processes, including repetitive object cuttings over heavy and complex structures, using a digital manufacturing platform. The proposed methodology, which is applied to dismantling scenarios of a Korean nuclear power plant in this paper, is expected to reduce the complexity and workload of nuclear dismantling simulations.

  9. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis, a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and the validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers, although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, then the simulation is deemed "not invalid". If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine whether they indicate a modeling error or a modeling inadequacy, and whether a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model and defined the limits of application. The tested simulation model is found to be acceptable, but valid only in a certain dynamical range. Several insights into the deficiencies of the model are reported in the analysis, but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time- and cost-efficient simulation projects with ...
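
    The maneuver-wise accept/reject logic can be pictured with a simple normalized-RMS metric against a hypothetical 10% criterion; the thesis' actual metrics and thresholds are not given here, so everything numeric below is an assumption:

        import numpy as np

        def nrms_error(sim, meas):
            """RMS of the residual, normalized by the measured signal's range."""
            return np.sqrt(np.mean((sim - meas) ** 2)) / (np.max(meas) - np.min(meas))

        t = np.linspace(0.0, 5.0, 500)
        measured = np.sin(t)                    # stand-in test-track signal
        simulated = 1.05 * np.sin(t + 0.02)     # stand-in model output

        err = nrms_error(simulated, measured)
        print(f"NRMS error = {err:.3f} ->", "not invalid" if err < 0.10 else "invalid")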

  10. Using soft systems methodology to develop a simulation of out-patient services.

    Science.gov (United States)

    Lehaney, B; Paul, R J

    1994-10-01

    Discrete event simulation is an approach to modelling a system as a set of mathematical equations and logical relationships, usually used for complex problems that are difficult to address using analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the outpatients' department at a local hospital. The long-term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
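
    As a minimal illustration of the discrete event simulation side of such a study, the sketch below models a single-doctor out-patient queue with exponential arrival and service times; all parameters are hypothetical, and in practice the system boundaries and activities would come from the soft systems analysis:

    ```python
    import random

    random.seed(1)

    def simulate_clinic(n_patients=100, mean_interarrival=10.0, mean_service=8.0):
        """Minimal discrete event simulation of a single-doctor out-patient clinic."""
        t, arrivals = 0.0, []
        for _ in range(n_patients):
            t += random.expovariate(1.0 / mean_interarrival)   # Poisson arrivals
            arrivals.append(t)
        doctor_free_at, waits = 0.0, []
        for arrival in arrivals:
            start = max(arrival, doctor_free_at)   # queue if the doctor is busy
            waits.append(start - arrival)
            doctor_free_at = start + random.expovariate(1.0 / mean_service)
        return sum(waits) / len(waits)

    print(f"mean waiting time: {simulate_clinic():.1f} minutes")
    ```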

  11. Novel Methodology for Functional Modeling and Simulation of Wireless Embedded Systems

    Directory of Open Access Journals (Sweden)

    Sosa Morales Emma

    2008-01-01

    A novel methodology is presented for the modeling and the simulation of wireless embedded systems. Tight interaction between the analog and the digital functionality makes the design and verification of such systems a real challenge. The applied methodology brings together the functional models of the baseband algorithms written in C language with the circuit descriptions at behavioral level in Verilog or Verilog-AMS for the system simulations in a single kernel environment. The physical layer of an ultrawideband system has been successfully modeled and simulated. The results confirm that this methodology provides a standardized framework in order to efficiently and accurately simulate complex mixed signal applications for embedded systems.

  12. Methodology for analysis and simulation of large multidisciplinary problems

    Science.gov (United States)

    Russell, William C.; Ikeda, Paul J.; Vos, Robert G.

    1989-01-01

    The Integrated Structural Modeling (ISM) program is being developed for the Air Force Weapons Laboratory and will be available for Air Force work. Its goal is to provide a design, analysis, and simulation tool intended primarily for directed energy weapons (DEW), kinetic energy weapons (KEW), and surveillance applications. The code is designed to run on DEC (VMS and UNIX), IRIS, Alliant, and Cray hosts. Several technical disciplines are included in ISM, namely structures, controls, optics, thermal, and dynamics. Four topics from the broad ISM goal are discussed. The first is project configuration management and includes two major areas: the software and database arrangement and the system model control. The second is interdisciplinary data transfer and refers to exchange of data between various disciplines such as structures and thermal. Third is a discussion of the integration of component models into one system model, i.e., multiple discipline model synthesis. Last is a presentation of work on a distributed processing computing environment.

  13. Non-plant referenced simulator methodology to meet new 10 CFR 55.45 rule

    International Nuclear Information System (INIS)

    Ibarra, J.G.

    1988-01-01

    The new 10 CFR 55.45 rule on operating tests necessitates that simulators be upgraded to meet the new requirements. This paper presents the human factors work done on an NRC-approved guidance document, sponsored by four utilities, to develop a non-plant-referenced simulator facility. Human factors developed the simulator process flow and criteria, and integrated all the development work into the simulation facility plan. The human factors work provided the mechanism to solidify ideas and the foundation for the simulator development methodology.

  14. Methodological advances: using greenhouses to simulate climate change scenarios.

    Science.gov (United States)

    Morales, F; Pascual, I; Sánchez-Díaz, M; Aguirreolea, J; Irigoyen, J J; Goicoechea, N; Antolín, M C; Oyarzun, M; Urdiain, A

    2014-09-01

    Human activities are increasing atmospheric CO2 concentration and temperature. Related to this global warming, periods of low water availability are also expected to increase. Thus, CO2 concentration, temperature and water availability are three of the main factors related to climate change that may potentially influence crops and ecosystems. In this report, we describe the use of growth chamber–greenhouses (GCG) and temperature gradient greenhouses (TGG) to simulate climate change scenarios and to investigate possible plant responses. In the GCG, CO2 concentration, temperature and water availability are set to act simultaneously, enabling comparison of a current situation with a future one. Other characteristics of the GCG are a relatively large working space, fine control of the relative humidity, plant fertirrigation and the possibility of light supplementation, within the photosynthetically active radiation (PAR) region and/or with ultraviolet-B (UV-B) light. In the TGG, the three above-mentioned factors can act independently or in interaction, enabling more mechanistic studies aimed at elucidating the limiting factor(s) responsible for a given plant response. Examples of experiments using the GCG and TGG are reported, including some aimed at studying photosynthetic acclimation, a phenomenon that leads to decreased photosynthetic capacity under long-term exposure to elevated CO2.

  15. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Simply accumulating the soft error rates (SERs) of all memories in a computer system results in a pessimistic soft error estimate, because memory cells are used only at certain locations and times, and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of the memory hierarchy and estimates the soft error rate of the computer system as a whole.
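
    The core idea — weighting each memory's raw SER by how often an upset actually matters — can be sketched in a few lines. The FIT rates and vulnerability factors below are hypothetical placeholders, not values from the paper:

    ```python
    # Each memory level: raw soft error rate (FIT) and the fraction of
    # cell-cycles in which an upset actually propagates to a system failure
    # (a vulnerability factor capturing location and timing of use).
    memories = {
        "L1 cache":    {"raw_ser_fit": 500.0,  "vulnerability": 0.15},
        "L2 cache":    {"raw_ser_fit": 2000.0, "vulnerability": 0.05},
        "main memory": {"raw_ser_fit": 8000.0, "vulnerability": 0.02},
    }

    pessimistic = sum(m["raw_ser_fit"] for m in memories.values())
    weighted = sum(m["raw_ser_fit"] * m["vulnerability"] for m in memories.values())

    print(f"naive accumulation : {pessimistic:.0f} FIT")
    print(f"weighted estimate  : {weighted:.0f} FIT")
    ```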

  16. A simulation methodology of spacer grid residual spring deflection for predictive and interpretative purposes

    International Nuclear Information System (INIS)

    Kim, K. T.; Kim, H. K.; Yoon, K. H.

    1994-01-01

    In-reactor fuel rod support conditions against fretting-wear-induced damage can be evaluated from the spacer grid residual spring deflection. In order to predict the spacer grid residual spring deflection as a function of burnup for various spring designs, a simulation methodology for spacer grid residual spring deflection has been developed and implemented in the GRIDFORCE program. The simulation methodology takes into account cladding creep rate, initial spring deflection, initial spring force, and spring force relaxation rate as the key parameters affecting the residual spring deflection. The simulation methodology developed in this study can be utilized as an effective tool in evaluating the capability of a newly designed spacer grid spring to prevent fretting-wear-induced damage.
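
    A toy version of such a prediction — not the GRIDFORCE models themselves — might combine an assumed exponential spring force relaxation with a linear cladding creep-down, as in this sketch with hypothetical parameters:

    ```python
    import math

    def residual_deflection(burnup_series, delta0=0.5, creep_rate=0.004,
                            relax_const=30.0):
        """Toy model: residual spring deflection [mm] vs burnup [MWd/kgU].

        delta0      : initial spring deflection (mm)
        creep_rate  : deflection consumed by cladding creep-down (mm per MWd/kgU)
        relax_const : burnup constant of spring force relaxation (MWd/kgU)
        """
        out = []
        for bu in burnup_series:
            relaxation = math.exp(-bu / relax_const)   # spring force relaxation
            creep_loss = creep_rate * bu               # gap opened by cladding creep
            out.append(max(0.0, delta0 * relaxation - creep_loss))
        return out

    burnups = range(0, 61, 15)
    for bu, d in zip(burnups, residual_deflection(burnups)):
        print(f"burnup {bu:3d} MWd/kgU -> residual deflection {d:.3f} mm")
    ```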

  17. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments
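
    The event-sequencing idea behind such a Monte Carlo missile risk code can be illustrated with a deliberately simplified sketch; all probabilities below are hypothetical placeholders, not TORMIS data:

    ```python
    import random

    random.seed(42)

    def damage_frequency(n_years=1_000_000, p_tornado=1e-3,
                         missiles_per_tornado=5, p_impact=0.02,
                         p_damage_given_impact=0.1):
        """Toy Monte Carlo over the sequence: tornado -> missile -> impact -> damage."""
        damage_years = 0
        for _ in range(n_years):
            if random.random() >= p_tornado:           # no tornado this year
                continue
            for _ in range(missiles_per_tornado):      # each injected missile
                if (random.random() < p_impact and
                        random.random() < p_damage_given_impact):
                    damage_years += 1
                    break
        return damage_years / n_years

    print(f"estimated damage frequency: {damage_frequency():.1e} per year")
    ```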

  18. A methodological proposal for ancient kiln simulation analyzed by Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Flores, E.; Fernandes, B.; Flores, E.; Fernandez, B.

    1988-01-01

    In previous papers, scientists from different countries have reported incomplete methodologies for simulating ancient kiln firing conditions. Results from clays fired under a first working hypothesis are presented here, leading to a methodological proposal for such simulations. The study was carried out using Moessbauer spectroscopy. The fired clay presented an Fe²⁺ and Fe³⁺ spectrum in octahedral sites. The Moessbauer parameters, complemented by other studies, indicate that illite is the predominant clay mineral. The behaviour of the parameters as a function of firing temperature is reported. (author)

  19. Temperature dependence of broadline NMR spectra of water-soaked, epoxy-graphite composites

    Science.gov (United States)

    Lawing, David; Fornes, R. E.; Gilbert, R. D.; Memory, J. D.

    1981-10-01

    Water-soaked, epoxy resin-graphite fiber composites show a waterline in their broadline proton NMR spectrum which indicates a state of intermediate mobility between the solid and free water liquid states. The line is still present at -42 °C, but shows a reversible decrease in amplitude with decreasing temperature. The line is isotropic upon rotation of the fiber axis with respect to the external magnetic field.

  20. 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Koziel, Slawomir; Kacprzyk, Janusz; Leifsson, Leifur; Ören, Tuncer

    2015-01-01

    This book includes extended and revised versions of a set of selected papers from the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2013), which was co-organized by Reykjavik University (RU) and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). SIMULTECH 2013 was held in cooperation with the ACM SIGSIM - Special Interest Group (SIG) on SImulation and Modeling (SIM), Movimento Italiano Modellazione e Simulazione (MIMOS) and the AIS Special Interest Group on Modeling and Simulation (AIS SIGMAS), and technically co-sponsored by the Society for Modeling & Simulation International (SCS), Liophant Simulation, Simulation Team and the International Federation for Information Processing (IFIP). These proceedings bring together researchers, engineers, applied mathematicians and practitioners working on advances and applications in the field of system simulation.

  1. Methodological approach to simulation and choice of ecologically efficient and energetically economic wind turbines (WT)

    Science.gov (United States)

    Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya

    2017-10-01

    The use of wind energy is one of the most promising directions among renewable energy sources. This article reviews a methodological approach to the simulation and selection of ecologically efficient and energetically economical wind turbines at the design stage, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load in the territory where the WT is located.

  2. Methodology for digital radiography simulation using the Monte Carlo code MCNPX for industrial applications

    International Nuclear Information System (INIS)

    Souza, E.M.; Correa, S.C.A.; Silva, A.X.; Lopes, R.T.; Oliveira, D.F.

    2008-01-01

    This work presents a methodology for digital radiography simulation for industrial applications using the MCNPX radiography tally. In order to perform the simulation, the energy-dependent response of a BaFBr imaging plate detector was modeled and introduced in the MCNPX radiography tally input. In addition, a post-processing program was used to convert the MCNPX radiography tally output into 16-bit digital images. Simulated and experimental images of a steel pipe containing corrosion alveoli and stress corrosion cracking were compared, and the results showed good agreement between both images
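
    The final post-processing step described above — mapping tally values onto a 16-bit grey scale — can be sketched as follows. The array here is a synthetic stand-in for the MCNPX radiography tally output, and the optional log scaling is an assumption about the imaging plate response, not the paper's exact calibration:

    ```python
    import numpy as np

    def tally_to_uint16(tally, log_scale=False):
        """Map a 2-D array of radiography tally values onto the 16-bit range."""
        img = np.asarray(tally, dtype=np.float64)
        if log_scale:                         # assumed ~logarithmic plate response
            img = np.log10(img + 1e-12)
        lo, hi = img.min(), img.max()
        norm = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
        return np.round(norm * 65535).astype(np.uint16)

    # Synthetic stand-in for a tally map: attenuated beam with a small "defect"
    x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
    tally = np.exp(-3.0 * np.abs(y)) * (1 + 0.2 * (x**2 + y**2 < 0.01))
    image = tally_to_uint16(tally)
    print(image.dtype, image.min(), image.max())   # uint16 0 65535
    ```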

  3. Collecting real-time data with a behavioral simulation: A new methodological trait

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    … interactive methods of collecting data [1, 2]. To collect real-time data as opposed to retrospective data, new methodological traits are needed. The paper proposes that a behavioral simulation supported by Web technology is a valid new research strategy to handle the collection of real-time data. Adapting the knowledge on agent-based modeling [3, 4], a behavioral simulation synergizes the benefits of self-administered questionnaires and the experimental design, and furthermore introduces role-playing [5, 6] and scenario [7-11] strategies as very effective methods to ensure high interaction with the respondents. The Web technology is the key to making a simulation for data collection objectives 'light'. Additionally, Web technology can be a solution to some of the challenges facing the traditional research methodologies, such as time, ease, flexibility and cost, but, perhaps more interestingly, a possible solution …

  4. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States); Denning, Richard [The Ohio State Univ., Columbus, OH (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States); Unwin, Stephen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-23

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  5. Integration of an iterative methodology for exergoeconomic improvement of thermal systems with a process simulator

    International Nuclear Information System (INIS)

    Vieira, Leonardo S.; Donatelli, Joao L.; Cruz, Manuel E.

    2004-01-01

    In this paper, we present the development and automated implementation of an iterative methodology for exergoeconomic improvement of thermal systems integrated with a process simulator, so as to be applicable to real, complex plants. The methodology combines recent available exergoeconomic techniques with new qualitative and quantitative criteria for the following tasks: (i) identification of decision variables that affect system total cost and exergetic efficiency; (ii) hierarchical classification of components; (iii) identification of predominant terms in the component total cost; and (iv) choice of main decision variables in the iterative process. To show the strengths and potential advantages of the proposed methodology, it is here applied to the benchmark CGAM cogeneration system. The results obtained are presented and discussed in detail and are compared to those reached using a mathematical optimization procedure

  6. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    International Nuclear Information System (INIS)

    Aldemir, Tunc; Denning, Richard; Catalyurek, Umit; Unwin, Stephen

    2015-01-01

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  7. SN 2009bb: A PECULIAR BROAD-LINED TYPE Ic SUPERNOVA

    International Nuclear Information System (INIS)

    Pignata, Giuliano; Stritzinger, Maximilian; Phillips, M. M.; Morrell, Nidia; Boldt, Luis; Campillay, Abdo; Contreras, Carlos; Gonzalez, Sergio; Krzeminski, Wojtek; Roth, Miguel; Salgado, Francisco; Soderberg, Alicia; Mazzali, Paolo; Anderson, J. P.; Folatelli, Gaston; Foerster, Francisco; Hamuy, Mario; Maza, Jose; Levesque, Emily M.; Rest, Armin

    2011-01-01

    Ultraviolet, optical, and near-infrared photometry and optical spectroscopy of the broad-lined Type Ic supernova (SN) 2009bb are presented, following the flux evolution from -10 to +285 days past B-band maximum. Thanks to the very early discovery, it is possible to place tight constraints on the SN explosion epoch. The expansion velocities measured from near-maximum spectra are found to be only slightly smaller than those measured from spectra of the prototype broad-lined SN 1998bw associated with GRB 980425. Fitting an analytical model to the pseudobolometric light curve of SN 2009bb suggests that 4.1 ± 1.9 M☉ of material was ejected, with 0.22 ± 0.06 M☉ of it being ⁵⁶Ni. The resulting kinetic energy is 1.8 ± 0.7 × 10⁵² erg. This, together with an absolute peak magnitude of M_B = -18.36 ± 0.44, places SN 2009bb on the energetic and luminous end of the broad-lined Type Ic (SN Ic) sequence. Detection of helium in the early-time optical spectra, accompanied by strong radio emission and the high metallicity of its environment, makes SN 2009bb a peculiar object. Similar to the case for gamma-ray bursts (GRBs), we find that the bulk explosion parameters of SN 2009bb cannot account for the copious energy coupled to relativistic ejecta, and conclude that another energy reservoir (a central engine) is required to power the radio emission. Nevertheless, the analysis of the SN 2009bb nebular spectrum suggests that the failed GRB detection is not imputable to a large angle between the line of sight and the GRB beamed radiation. Therefore, if a GRB was produced during the SN 2009bb explosion, it was below the threshold of the current generation of γ-ray instruments.

  8. 2014 International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Ören, Tuncer; Kacprzyk, Janusz; Filipe, Joaquim

    2015-01-01

    The present book includes a set of selected extended papers from the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014), held in Vienna, Austria, from 28 to 30 August 2014. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2014 received 167 submissions, from 45 countries, on all continents. After a double-blind paper review performed by the Program Committee, 23% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2014. Commitment to high quality standards is a major concern of SIMULTECH …

  9. Discontinuous Galerkin methodology for Large-Eddy Simulations of wind turbine airfoils

    DEFF Research Database (Denmark)

    Frére, A.; Sørensen, Niels N.; Hillewaert, K.

    2016-01-01

    This paper aims at evaluating the potential of the Discontinuous Galerkin (DG) methodology for Large-Eddy Simulation (LES) of wind turbine airfoils. The DG method has shown high accuracy, excellent scalability and the capacity to handle unstructured meshes. It is, however, not used in the wind energy sector yet. The present study aims at evaluating this methodology on an application which is relevant for that sector and focuses on blade section aerodynamics characterization. To be pertinent for large wind turbines, the simulations would need to be at low Mach numbers (M ≤ 0.3), where compressible … at low and high Reynolds numbers, and compares the results to state-of-the-art models used in industry, namely the panel method (XFOIL with boundary layer modeling) and Reynolds-Averaged Navier-Stokes (RANS). At low Reynolds number (Re = 6 × 10⁴), involving laminar boundary layer separation and transition …

  10. 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Kacprzyk, Janusz; Ören, Tuncer; Filipe, Joaquim

    2016-01-01

    The present book includes a set of selected extended papers from the 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2015), held in Colmar, France, from 21 to 23 July 2015. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2015 received 102 submissions, from 36 countries, on all continents. After a double-blind paper review performed by the Program Committee, 19% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2015. Commitment to high quality standards is a major concern of SIMULTECH …

  11. Concepts and methodologies for modeling and simulation a tribute to Tuncer Oren

    CERN Document Server

    Yilmaz, Levent

    2015-01-01

    This comprehensive text/reference presents cutting-edge advances in the theory and methodology of modeling and simulation (M&S), and reveals how this work has been influenced by the fundamental contributions of Professor Tuncer Ören to this field. Exploring the synergies among the domains of M&S and systems engineering (SE), the book describes how M&S and SE can help to address the complex problems identified as "Grand Challenges" more effectively under a model-driven and simulation-directed systems engineering framework. Topics and features: examines frameworks for the development of advan

  12. The Different Nature in Seyfert 2 Galaxies With and Without Hidden Broad-Line Regions

    OpenAIRE

    Wu, Yu-Zhong; Zhang, En-Peng; Liang, Yan-Chun; Zhang, Cheng-Min; Zhao, Yong-Heng

    2011-01-01

    We compile a large sample of 120 Seyfert 2 galaxies (Sy2s), which contains 49 hidden broad-line region (HBLR) Sy2s and 71 non-HBLR Sy2s. From the difference in the power sources between the two groups, we test whether HBLR Sy2s are dominated by active galactic nuclei (AGNs), and whether non-HBLR Sy2s are dominated by starbursts. We show that: (1) HBLR Sy2s have larger accretion rates than non-HBLR Sy2s; (2) HBLR Sy2s have larger [Ne V] λ14.32/[Ne II] λ12.81 and [O IV] λ25.89/[Ne II] λ12.81 …

  13. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Energy Technology Data Exchange (ETDEWEB)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes [Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia)

    2015-05-15

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self propulsion tests that tracks the fuel saving in real time. In this context, this paper presents a real-time marine diesel engine simulator system that tracks the real performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, the engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate the fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users analyzing different vessel speed conditions to obtain better characteristics and hence optimize the fuel saving rate.
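
    The propeller side of such a model is typically closed with open-water coefficients. The sketch below evaluates thrust and torque from assumed quadratic K_T(J) and K_Q(J) fits; the coefficients, wake fraction and dimensions are hypothetical, not values from the paper:

    ```python
    def propeller_performance(n, Vs, D=4.0, rho=1025.0, wake=0.25):
        """Open-water propeller model: thrust and torque from K_T(J), K_Q(J).

        n: shaft speed [rev/s], Vs: ship speed [m/s], D: diameter [m].
        K_T and K_Q are hypothetical quadratic fits to an open-water diagram.
        """
        Va = Vs * (1.0 - wake)           # advance speed seen by the propeller
        J = Va / (n * D)                 # advance ratio
        KT = 0.45 - 0.30 * J - 0.10 * J**2
        KQ = 0.055 - 0.030 * J - 0.012 * J**2
        thrust = KT * rho * n**2 * D**4  # [N]
        torque = KQ * rho * n**2 * D**5  # [N m]
        return J, thrust, torque

    J, T, Q = propeller_performance(n=2.0, Vs=7.0)
    print(f"J = {J:.2f}, thrust = {T/1e3:.0f} kN, torque = {Q/1e3:.0f} kN m")
    ```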

  14. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian movement is based on the Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavior characteristics of customers and clerks in normal and emergency evacuation situations can be reflected. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
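
    The static part of a floor field can be computed as a shortest-distance map to the exits; pedestrians then step toward the neighboring cell with the lowest value. This minimal sketch (the grid, exit location and neighborhood are illustrative assumptions) shows the idea:

    ```python
    from collections import deque

    def static_floor_field(grid, exits):
        """BFS distance-to-exit map on a 0/1 grid (1 = obstacle)."""
        rows, cols = len(grid), len(grid[0])
        field = [[None] * cols for _ in range(rows)]
        queue = deque()
        for r, c in exits:
            field[r][c] = 0
            queue.append((r, c))
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and field[nr][nc] is None):
                    field[nr][nc] = field[r][c] + 1
                    queue.append((nr, nc))
        return field

    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    for row in static_floor_field(grid, exits=[(0, 0)]):
        print(row)
    ```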

  15. A methodology for determining the dynamic exchange of resources in nuclear fuel cycle simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gidden, Matthew J., E-mail: gidden@iiasa.ac.at [International Institute for Applied Systems Analysis, Schlossplatz 1, A-2361 Laxenburg (Austria); University of Wisconsin – Madison, Department of Nuclear Engineering and Engineering Physics, Madison, WI 53706 (United States); Wilson, Paul P.H. [University of Wisconsin – Madison, Department of Nuclear Engineering and Engineering Physics, Madison, WI 53706 (United States)

    2016-12-15

    Highlights: • A novel fuel cycle simulation entity interaction mechanism is proposed. • A framework and implementation of the mechanism is described. • New facility outage and regional interaction scenario studies are described and analyzed. - Abstract: Simulation of the nuclear fuel cycle can be performed using a wide range of techniques and methodologies. Past efforts have focused on specific fuel cycles or reactor technologies. The CYCLUS fuel cycle simulator seeks to separate the design of the simulation from the fuel cycle or technologies of interest. In order to support this separation, a robust supply–demand communication and solution framework is required. Accordingly, an agent-based supply-chain framework, the Dynamic Resource Exchange (DRE), has been designed and implemented in CYCLUS. It supports the communication of complex resources, namely isotopic compositions of nuclear fuel, between fuel cycle facilities and their managers (e.g., institutions and regions). Instances of supply and demand are defined as an optimization problem and solved for each timestep. Importantly, the DRE allows each agent in the simulation to independently indicate preferences for specific trading options in order to meet both physics requirements and constraints imposed by potential socio-political models. To display the variety of simulations that the DRE enables, example scenarios are formulated and described. Important features include key fuel-cycle facility outages, the introduction of external recycled fuel sources (similar to the current mixed oxide (MOX) fuel fabrication facility in the United States), and nontrivial interactions between fuel cycles existing in different regions.
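
    The DRE itself solves an optimization problem each timestep; as a rough intuition for preference-driven matching (not the actual CYCLUS algorithm), a greedy sketch might look like this, with all facility names and preference values hypothetical:

    ```python
    def match_exchange(requests, bids, preference):
        """Greedily match fuel requests to supplier bids by agent preference.

        requests: {consumer: quantity}; bids: {supplier: capacity}
        preference(consumer, supplier) -> larger is preferred (0 forbids the trade).
        """
        trades = []
        capacity = dict(bids)
        for consumer, need in requests.items():
            options = sorted(capacity, key=lambda s: preference(consumer, s),
                             reverse=True)
            for supplier in options:
                if need <= 0 or preference(consumer, supplier) <= 0:
                    break
                qty = min(need, capacity[supplier])
                if qty > 0:
                    trades.append((supplier, consumer, qty))
                    capacity[supplier] -= qty
                    need -= qty
        return trades

    prefs = {("reactor_A", "mox_fab"): 2.0, ("reactor_A", "uox_fab"): 1.0,
             ("reactor_B", "uox_fab"): 1.0, ("reactor_B", "mox_fab"): 0.0}
    trades = match_exchange({"reactor_A": 30.0, "reactor_B": 25.0},
                            {"mox_fab": 20.0, "uox_fab": 40.0},
                            lambda c, s: prefs.get((c, s), 0.0))
    print(trades)
    ```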

  16. A methodology for determining the dynamic exchange of resources in nuclear fuel cycle simulation

    International Nuclear Information System (INIS)

    Gidden, Matthew J.; Wilson, Paul P.H.

    2016-01-01

    Highlights: • A novel fuel cycle simulation entity interaction mechanism is proposed. • A framework and implementation of the mechanism is described. • New facility outage and regional interaction scenario studies are described and analyzed. - Abstract: Simulation of the nuclear fuel cycle can be performed using a wide range of techniques and methodologies. Past efforts have focused on specific fuel cycles or reactor technologies. The CYCLUS fuel cycle simulator seeks to separate the design of the simulation from the fuel cycle or technologies of interest. In order to support this separation, a robust supply–demand communication and solution framework is required. Accordingly, an agent-based supply-chain framework, the Dynamic Resource Exchange (DRE), has been designed and implemented in CYCLUS. It supports the communication of complex resources, namely isotopic compositions of nuclear fuel, between fuel cycle facilities and their managers (e.g., institutions and regions). Instances of supply and demand are defined as an optimization problem and solved for each timestep. Importantly, the DRE allows each agent in the simulation to independently indicate preferences for specific trading options in order to meet both physics requirements and constraints imposed by potential socio-political models. To display the variety of simulations that the DRE enables, example scenarios are formulated and described. Important features include key fuel-cycle facility outages, the introduction of external recycled fuel sources (similar to the current mixed oxide (MOX) fuel fabrication facility in the United States), and nontrivial interactions between fuel cycles existing in different regions.

  17. A methodology for thermodynamic simulation of high temperature, internal reforming fuel cell systems

    Science.gov (United States)

    Matelli, José Alexandre; Bazzo, Edson

    This work presents a methodology for the simulation of fuel cells to be used for power production in small on-site power/cogeneration plants that use natural gas as fuel. The methodology contemplates thermodynamic and electrochemical aspects of molten carbonate and solid oxide fuel cells (MCFC and SOFC, respectively). Internal steam reforming of the natural gas hydrocarbons is considered for hydrogen production. From inputs such as cell potential, cell power, number of cells in the stack, ancillary systems power consumption, reformed natural gas composition and hydrogen utilization factor, the simulation gives the natural gas consumption, the anode and cathode gas stream temperatures and compositions, and the thermodynamic, electrochemical and practical efficiencies. Both energetic and exergetic methods are considered for the performance analysis. The results obtained from the natural gas reforming thermodynamics simulation show that hydrogen production is maximum around 700 °C for a steam/carbon ratio equal to 3. Consistent with the literature, the results indicate that the SOFC is more efficient than the MCFC.
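
    The relationship between the thermodynamic, electrochemical and practical efficiencies mentioned above can be sketched for the simplest case of hydrogen oxidation at 25 °C. The cells in the paper operate at much higher temperatures, so the property values and the 0.85 utilization factor here are purely illustrative:

    ```python
    F = 96485.0          # Faraday constant [C/mol]

    def fuel_cell_efficiencies(V_cell, fuel_util=0.85,
                               dG=-237.1e3, dH=-285.8e3, n=2):
        """Thermodynamic, voltage and practical efficiencies of an H2 fuel cell.

        dG, dH: Gibbs energy and enthalpy of H2 oxidation at 25 C [J/mol] (HHV).
        """
        eta_thermo = dG / dH                    # maximum (reversible) efficiency
        E_rev = -dG / (n * F)                   # reversible cell potential ~1.23 V
        eta_voltage = V_cell / E_rev            # electrochemical (voltage) losses
        eta_practical = eta_thermo * eta_voltage * fuel_util
        return eta_thermo, eta_voltage, eta_practical

    th, v, p = fuel_cell_efficiencies(V_cell=0.75)
    print(f"thermodynamic {th:.2f}, voltage {v:.2f}, practical {p:.2f}")
    ```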

  18. DISCOVERY OF ULTRA-FAST OUTFLOWS IN A SAMPLE OF BROAD-LINE RADIO GALAXIES OBSERVED WITH SUZAKU

    International Nuclear Information System (INIS)

    Tombesi, F.; Sambruna, R. M.; Mushotzky, R. F.; Reeves, J. N.; Gofford, J.; Braito, V.; Ballo, L.; Cappi, M.

    2010-01-01

    We present the results of a uniform and systematic search for blueshifted Fe K absorption lines in the X-ray spectra of five bright broad-line radio galaxies observed with Suzaku. We detect, for the first time in radio-loud active galactic nuclei (AGNs) at X-rays, several absorption lines at energies greater than 7 keV in three out of five sources, namely, 3C 111, 3C 120, and 3C 390.3. The lines are detected with high significance according to both the F-test and extensive Monte Carlo simulations. Their likely interpretation as blueshifted Fe XXV and Fe XXVI K-shell resonance lines implies an origin from highly ionized gas outflowing with mildly relativistic velocities, in the range v ≅ 0.04-0.15c. A fit with specific photoionization models gives ionization parameters in the range log ξ ≅ 4-5.6 erg s⁻¹ cm and column densities of N_H ≅ 10²²-10²³ cm⁻². These characteristics are very similar to those of the ultra-fast outflows (UFOs) previously observed in radio-quiet AGNs. Their estimated location within ∼0.01-0.3 pc of the central super-massive black hole suggests a likely origin related to accretion disk winds/outflows. Depending on the absorber covering fraction, the mass outflow rate of these UFOs can be comparable to the accretion rate, and their kinetic power can correspond to a significant fraction of the bolometric luminosity and is comparable to their typical jet power. Therefore, these UFOs can play a significant role in the expected feedback from the AGN to the surrounding environment and can give us further clues on the relation between the accretion disk and the formation of winds/jets in both radio-quiet and radio-loud AGNs.

  19. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    Science.gov (United States)

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME interventions in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated with the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. Of the 1394 RCTs screened, 68 trials assessed an SBME intervention; they represent one quarter of our sample. Cardiopulmonary resuscitation (CPR) is the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials. Blinding of participants and assessors was performed correctly in 19% and 68%. The risk of attrition bias was low in three-quarters of the studies (n = 51). The risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18, and only 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  20. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. 'Large-scale' means that the simulations involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of the uncertainty included in simulations is needed to reveal the sensitivity to uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to derive a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  1. 3CE Methodology for Conducting a Modeling, Simulation, and Instrumentation Tool Capability Analysis

    Science.gov (United States)

    2010-05-01

    … a modeling, simulation, and instrumentation (MS&I) environment. This methodology uses the DoDAF product set to document operational and systems … engineering process were identified and resolved, such as duplication of data elements derived from DoDAF operational and system views used to …

  2. Further developments of multiphysics and multiscale methodologies for coupled nuclear reactor simulations

    International Nuclear Information System (INIS)

    Gomez Torres, Armando Miguel

    2011-01-01

    This doctoral thesis describes the methodological development of coupled neutron-kinetics/thermal-hydraulics codes for the design and safety analysis of reactor systems, taking into account the feedback mechanisms at the fuel rod level according to different approaches. A central part of this thesis is the development and validation of a high-fidelity simulation tool, DYNSUB, which results from the "two-way" coupling of DYN3D-SP3 and SUBCHANFLOW. It allows the determination of local safety parameters through a detailed description of the core behavior under stationary and transient conditions at the fuel rod level.

  3. Monte Carlo simulation methodology for the reliabilty of aircraft structures under damage tolerance considerations

    Science.gov (United States)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to parallel systems comprised of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the …
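
    A minimal version of the first building block — a Monte Carlo estimate of the failure probability of a parallel system with equal load sharing among the surviving elements — might look like this; the normal strength distribution and load level are illustrative assumptions:

    ```python
    import random

    random.seed(7)

    def bundle_fails(strengths, total_load):
        """Equal load sharing: elements fail when their share exceeds their strength."""
        survivors = sorted(strengths)
        while survivors:
            share = total_load / len(survivors)
            if survivors[0] >= share:       # weakest survivor can carry its share
                return False                # system holds
            survivors.pop(0)                # weakest fails, load redistributes
        return True                         # all elements failed -> system failure

    def prob_failure(n_elem=6, total_load=5.0, trials=100_000):
        fails = 0
        for _ in range(trials):
            strengths = [random.gauss(1.0, 0.2) for _ in range(n_elem)]
            fails += bundle_fails(strengths, total_load)
        return fails / trials

    print(f"P(failure) ~ {prob_failure():.4f}")
    ```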

  4. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to being able to make more informed management decisions and to prioritize resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the network structure's dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
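
    Transfer entropy between a pair of observed time series can be estimated directly from joint frequency counts. The sketch below does this for binary series with a history length of one (a simplification of the estimators used in such studies) and recovers the directed coupling in a toy x → y system:

    ```python
    import math
    import random
    from collections import Counter

    def transfer_entropy(x, y):
        """Transfer entropy from x to y (bits), binary series, history length 1."""
        n = len(y) - 1
        triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_prev, x_prev)
        pairs_yx = Counter(zip(y[:-1], x[:-1]))
        pairs_yy = Counter(zip(y[1:], y[:-1]))
        singles_y = Counter(y[:-1])
        te = 0.0
        for (yn, yp, xp), c in triples.items():
            p_joint = c / n
            p_full = c / pairs_yx[(yp, xp)]              # p(y_next | y_prev, x_prev)
            p_hist = pairs_yy[(yn, yp)] / singles_y[yp]  # p(y_next | y_prev)
            te += p_joint * math.log2(p_full / p_hist)
        return te

    random.seed(0)
    x = [random.randint(0, 1) for _ in range(5000)]
    y = [0] + x[:-1]                 # y copies x with a one-step delay
    print(f"TE(x->y) = {transfer_entropy(x, y):.2f} bits")   # close to 1
    print(f"TE(y->x) = {transfer_entropy(y, x):.2f} bits")   # close to 0
    ```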

  5. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions about which subset of systems or tasks is to be undertaken to achieve the goal of the R&D project. To support this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for selecting the optimal subset of systems or tasks to be undertaken in an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
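
    The core loop — sample the task variables of each alternative path, evaluate the outcome, and compare expected utilities — can be sketched as follows; the triangular cost distributions and the exponential (risk-averse) utility function are illustrative assumptions, not the forms assessed in an actual SIMRAND study:

    ```python
    import math
    import random

    random.seed(3)

    def expected_utility(tasks, trials=20_000):
        """Monte Carlo expected utility of one alternative network path.

        tasks: list of (low, mode, high) triangular cost distributions.
        u(c) = -exp(c / 100): an assumed risk-averse cardinal utility.
        """
        total = 0.0
        for _ in range(trials):
            cost = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
            total += -math.exp(cost / 100.0)
        return total / trials

    paths = {
        "path A (proven tech)": [(40, 50, 70), (30, 35, 50)],
        "path B (novel tech)":  [(20, 30, 90), (25, 30, 80)],
    }
    results = {name: expected_utility(tasks) for name, tasks in paths.items()}
    for name, u in results.items():
        print(f"{name}: expected utility {u:.3f}")
    print("preferred:", max(results, key=results.get))
    ```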

  6. Development of sodium droplet combustion analysis methodology using direct numerical simulation in 3-dimensional coordinate (COMET)

    International Nuclear Information System (INIS)

    Okano, Yasushi; Ohira, Hiroaki

    1998-08-01

    In the early stage of a sodium leak event in a liquid metal fast breeder reactor (LMFBR), liquid sodium flows out of the piping, and ignition and combustion of liquid sodium droplets may occur under certain environmental conditions. Compressible forced air flow, the diffusion of chemical species, liquid sodium droplet behavior, chemical reactions and thermodynamic properties should all be evaluated, considering the physical dependences and numerical coupling among them, when analyzing the combustion of a liquid sodium droplet. A direct numerical simulation code was developed for the numerical analysis of a liquid sodium droplet in forced-convection air flow. The code is named COMET, for 'Sodium Droplet COmbustion Analysis METhodology using Direct Numerical Simulation in 3-Dimensional Coordinate'. The extended MAC method was used to calculate the compressible forced air flow. Counter-diffusion among chemical species is also calculated. Transport models for mass and energy between the droplet and the surrounding atmospheric air were developed. Equation-solving methods were used for computing the multiphase equilibrium between sodium and air. Thermodynamic properties of the chemical species were evaluated using the kinetic theory of gases. The combustion of a single spherical liquid sodium droplet in a forced-convection, constant-velocity, uniform air flow was numerically simulated using COMET. The change of droplet diameter with time agreed closely with the d²-law of droplet combustion theory. Spatial distributions of the combustion rate and heat generation, and the formation, decomposition and movement of chemical species, were analyzed. Simulating liquid sodium droplet combustion with COMET enables quantitative calculations of heat generation and chemical species formation in spray combustion for various kinds of environmental conditions. (author)
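
    The d²-law that the simulations reproduce states that the droplet surface area (d²) shrinks linearly in time. A minimal sketch, with a hypothetical burning rate constant:

    ```python
    def droplet_diameter(d0, K, t):
        """d2-law: d(t)^2 = d0^2 - K*t until the droplet is consumed.

        d0: initial diameter [m], K: burning rate constant [m^2/s], t: time [s].
        """
        d_sq = d0**2 - K * t
        return d_sq**0.5 if d_sq > 0 else 0.0

    d0, K = 1.0e-3, 1.0e-6          # 1 mm droplet, hypothetical K
    burnout = d0**2 / K             # time at which the droplet is consumed
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        t = frac * burnout
        print(f"t = {t:.2f} s, d = {droplet_diameter(d0, K, t) * 1e3:.2f} mm")
    ```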

  7. Development of a methodology for simulation of gas cooled reactors with purpose of transmutation

    International Nuclear Information System (INIS)

    Silva, Clarysson Alberto da

    2009-01-01

    This work proposes a methodology for MHR (Modular Helium Reactor) simulation using the WIMSD-5B (Winfrith Improved Multigroup Scheme) nuclear code, validated against the MCNPX 2.6.0 (Monte Carlo N-Particle eXtended) nuclear code. The goal is to verify the capability of WIMSD-5B to simulate a reactor of the GT-MHR (Gas Turbine Modular Helium Reactor) type, considering all the fuel recharge possibilities. The ability of WIMSD-5B to adequately represent the fuel evolution during recharges is also evaluated. Initially, the capability of WIMSD-5B to simulate the recharge specificities of this model was verified through the analysis of neutronic parameters and isotopic composition during burnup. The model was then simulated using both the WIMSD-5B and MCNPX 2.6.0 codes, and the results for k_eff, neutron flux and isotopic composition were compared. The results show that the deterministic WIMSD-5B code can be applied for qualitative evaluation, adequately representing the core behavior during fuel recharges and making it possible to assess the burned core in a short period of time; once optimized, the core can then be quantitatively evaluated by a code such as MCNPX 2.6.0. (author)

  8. General methodology for exergy balance in ProSimPlus® process simulator

    International Nuclear Information System (INIS)

    Ghannadzadeh, Ali; Thery-Hetreux, Raphaële; Baudouin, Olivier; Baudet, Philippe; Floquet, Pascal; Joulia, Xavier

    2012-01-01

    This paper presents a general methodology for exergy balance in chemical and thermal processes, integrated in ProSimPlus®, a process simulator well adapted to energy efficiency analysis. In this work, as well as using the general expressions for heat and work streams, the whole exergy balance is presented within a single software environment in order to fully automate exergy analysis. In addition to the exergy balance, the essential elements for exergy analysis, such as the sources of irreversibility, are presented to help the user make modifications to either the process or the utility system. The applicability of the proposed methodology in ProSimPlus® is shown through a simple scheme of a Natural Gas Liquids (NGL) recovery process and its steam utility system. The methodology does not only provide the user with the necessary exergetic criteria to pinpoint the sources of exergy losses, it also helps the user to find ways to reduce them. These features of the proposed exergy calculator make it preferable for implementation in ProSimPlus® to define the most realistic and profitable retrofit projects on existing chemical and thermal plants. -- Highlights: ► A set of new expressions for the calculation of the exergy of material streams is developed. ► A general methodology for exergy balance in ProSimPlus® is presented. ► A panel of solutions based on exergy analysis is provided to help the user modify process flowsheets. ► The exergy efficiency is chosen as a variable in a bi-criteria optimization.
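
    The stream-level building block of any such balance is the physical flow exergy. A minimal sketch, with steam-table values typed in by hand for one illustrative stream (a simulator like the one described would supply h and s automatically):

    ```python
    def flow_exergy(h, s, h0, s0, T0=298.15):
        """Physical flow exergy per unit mass: ex = (h - h0) - T0 * (s - s0).

        h, s  : specific enthalpy [kJ/kg] and entropy [kJ/(kg K)] of the stream
        h0, s0: the same properties at the dead state (T0, p0)
        """
        return (h - h0) - T0 * (s - s0)

    # Superheated steam at 40 bar / 400 C against a 25 C, 1 atm dead state
    # (property values taken from standard steam tables)
    ex = flow_exergy(h=3213.6, s=6.7690, h0=104.9, s0=0.3674)
    print(f"specific flow exergy ~ {ex:.0f} kJ/kg")
    ```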

  9. Determination of phase diagrams via computer simulation: methodology and applications to water, electrolytes and proteins

    International Nuclear Information System (INIS)

    Vega, C; Sanz, E; Abascal, J L F; Noya, E G

    2008-01-01

    In this review we focus on the determination of phase diagrams by computer simulation, with particular attention to the fluid-solid and solid-solid equilibria. The methodology to compute the free energy of solid phases will be discussed. In particular, the Einstein crystal and Einstein molecule methodologies are described in a comprehensive way. It is shown that both methodologies yield the same free energies and that free energies of solid phases present noticeable finite size effects. In fact, this is the case for hard spheres in the solid phase. Finite size corrections can be introduced, although in an approximate way, to correct for the dependence of the free energy on the size of the system. The computation of free energies of solid phases can be extended to molecular fluids. The procedure to compute free energies of solid phases of water (ices) will be described in detail. The free energies of ices Ih, II, III, IV, V, VI, VII, VIII, IX, XI and XII will be presented for the SPC/E and TIP4P models of water. Initial coexistence points leading to the determination of the phase diagram of water for these two models will be provided. Other methods to estimate the melting point of a solid, such as the direct fluid-solid coexistence or simulations of the free surface of the solid, will be discussed. It will be shown that the melting points of ice Ih for several water models, obtained from free energy calculations, direct coexistence simulations and free surface simulations agree within their statistical uncertainty. Phase diagram calculations can indeed help to improve potential models of molecular fluids. For instance, for water, the potential model TIP4P/2005 can be regarded as an improved version of TIP4P. Here we will review some recent work on the phase diagram of the simplest ionic model, the restricted primitive model. Although originally devised to describe ionic liquids, the model is becoming quite popular to describe the behavior of charged colloids

  10. Optical Variability of Narrow-line and Broad-line Seyfert 1 Galaxies

    Science.gov (United States)

    Rakshit, Suvendu; Stalin, C. S.

    2017-06-01

    We studied the optical variability (OV) of a large sample of narrow-line Seyfert 1 (NLSy1) and broad-line Seyfert 1 (BLSy1) galaxies with z < 0.8 to investigate any differences in their OV properties. The amplitude of variability is found to be anti-correlated with Fe II strength but correlated with the width of the Hβ line. The well-known anti-correlation of variability-luminosity and the variability-Eddington ratio is present in our data. Among the radio-loud sample, variability amplitude is found to be correlated with radio-loudness and radio-power, suggesting that jets also play an important role in the OV in radio-loud objects, in addition to the Eddington ratio, which is the main driving factor of OV in radio-quiet sources.

  11. Unusual broad-line Mg II emitters among luminous galaxies in the baryon oscillation spectroscopic survey

    International Nuclear Information System (INIS)

    Roig, Benjamin; Blanton, Michael R.; Ross, Nicholas P.

    2014-01-01

    Many classes of active galactic nuclei (AGNs) have been observed and recorded since the discovery of Seyfert galaxies. In this paper, we examine the sample of luminous galaxies in the Baryon Oscillation Spectroscopic Survey. We find a potentially new observational class of AGNs, one with strong and broad Mg II λ2799 line emission, but very weak emission in other normal indicators of AGN activity, such as the broad-line Hα, Hβ, and the near-ultraviolet AGN continuum, leading to an extreme ratio of broad Hα/Mg II flux relative to normal quasars. Meanwhile, these objects' narrow-line flux ratios reveal AGN narrow-line regions with levels of activity consistent with the Mg II fluxes and in agreement with that of normal quasars. These AGN may represent an extreme case of the Baldwin effect, with very low continuum and high equivalent width relative to typical quasars, but their ratio of broad Mg II to broad Balmer emission remains very unusual. They may also be representative of a class of AGN where the central engine is observed indirectly with scattered light. These galaxies represent a small fraction of the total population of luminous galaxies (≅ 0.1%), but are more likely (about 3.5 times) to have AGN-like nuclear line emission properties than other luminous galaxies. Because Mg II is usually inaccessible for the population of nearby galaxies, there may exist a related population of broad-line Mg II emitters in the local universe which is currently classified as narrow-line emitters (Seyfert 2 galaxies) or low ionization nuclear emission-line regions.

  12. Optical Variability of Narrow-line and Broad-line Seyfert 1 Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Rakshit, Suvendu; Stalin, C. S., E-mail: suvenduat@gmail.com [Indian Institute of Astrophysics, Block II, Koramangala, Bangalore-560034 (India)

    2017-06-20

    We studied the optical variability (OV) of a large sample of narrow-line Seyfert 1 (NLSy1) and broad-line Seyfert 1 (BLSy1) galaxies with z < 0.8 to investigate any differences in their OV properties. Using archival optical V-band light curves from the Catalina Real Time Transient Survey that span 5–9 years and modeling them using a damped random walk, we estimated the amplitude of variability. We found that NLSy1 galaxies as a class show a lower amplitude of variability than their broad-line counterparts. In the sample of both NLSy1 and BLSy1 galaxies, radio-loud sources are found to have higher variability amplitude than radio-quiet sources. Considering only sources that are detected in the X-ray band, NLSy1 galaxies are less optically variable than BLSy1 galaxies. The amplitude of variability in the sample of both NLSy1 and BLSy1 galaxies is found to be anti-correlated with Fe II strength but correlated with the width of the Hβ line. The well-known anti-correlation of variability–luminosity and the variability–Eddington ratio is present in our data. Among the radio-loud sample, variability amplitude is found to be correlated with radio-loudness and radio-power, suggesting that jets also play an important role in the OV in radio-loud objects, in addition to the Eddington ratio, which is the main driving factor of OV in radio-quiet sources.

  13. Development and validation of a new turbocharger simulation methodology for marine two stroke diesel engine modelling and diagnostic applications

    International Nuclear Information System (INIS)

    Sakellaridis, Nikolaos F.; Raptotasios, Spyridon I.; Antonopoulos, Antonis K.; Mavropoulos, Georgios C.; Hountalas, Dimitrios T.

    2015-01-01

    Engine cycle simulation models are increasingly used in diesel engine simulation and diagnostic applications, reducing experimental effort. Turbocharger simulation plays an important role in the model's ability to accurately predict engine performance and emissions. The present work describes the development of a complete engine simulation model for marine Diesel engines based on a new methodology for turbocharger modelling utilizing physically based meanline models for the compressor and turbine. Simulation accuracy is evaluated against engine bench measurements. The methodology was developed to overcome the limited availability of experimental maps for the compressor and turbine, often encountered in large marine diesel engine simulation and diagnostic studies. Data from the engine bench are used to calibrate the models, as well as to estimate turbocharger shaft mechanical efficiency. The closed cycle and gas exchange are modelled using an existing multizone thermodynamic model. The proposed methodology is applied to a 2-stroke marine diesel engine and its evaluation is based on the comparison of predictions against measured engine data. The model's ability to predict engine response with load variation is demonstrated, regarding both turbocharger performance and closed cycle parameters, as well as NOx emission trends, making it an effective tool for both engine diagnostic and optimization studies. - Highlights: • Marine two stroke diesel engine simulation model. • Turbine and compressor simulation using physical meanline models. • Methodology to derive T/C component efficiency and T/C shaft mechanical efficiency. • Extensive validation of predictions against experimental data.
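
    The Euler work relation at the heart of a compressor meanline model can be illustrated numerically. The sketch below is not the authors' model; it computes a single-point pressure ratio from tip speed, an assumed slip factor, and an assumed isentropic efficiency, whereas the paper's meanline models add loss correlations and full map generation.

```python
# Hedged single-point centrifugal-compressor meanline estimate (illustrative).
import math

def compressor_pressure_ratio(N_rpm, D2, slip=0.9, eta=0.78, T01=298.0,
                              cp=1005.0, gamma=1.4):
    """Euler-work based total pressure ratio at one operating point."""
    U2 = math.pi * D2 * N_rpm / 60.0   # impeller tip speed [m/s]
    dh0 = slip * U2**2                 # Euler work with slip factor [J/kg]
    return (1.0 + eta * dh0 / (cp * T01)) ** (gamma / (gamma - 1.0))

# Example: hypothetical 120 mm impeller at 80 krpm
print(f"PR ~ {compressor_pressure_ratio(80000, 0.12):.2f}")
```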

  14. Baccalaureate nursing students' perspectives of peer tutoring in simulation laboratory, a Q methodology study.

    Science.gov (United States)

    Li, Ting; Petrini, Marcia A; Stone, Teresa E

    2018-02-01

    The study aim was to identify the perspectives of baccalaureate nursing students toward peer tutoring in the simulation laboratory. Insight into the nursing students' experiences, and baseline data related to their perception of peer tutoring, will help improve nursing education. Q methodology was applied to explore the students' perspectives of peer tutoring in the simulation laboratory. A convenience P-sample of 40 baccalaureate nursing students was used. Fifty-eight selected Q statements from each participant were classified into the shape of a normal distribution using an 11-point bipolar scale form with a range from -5 to +5. PQ Method software analyzed the collected data. Three discrete factors emerged: Factor I ("Facilitate or empower" knowledge acquisition), Factor II ("Safety net" supportive environment), and Factor III ("Mentoring" learn how to learn). The findings of this study indicate that peer tutoring is an effective supplementary strategy to promote baccalaureate students' knowledge acquisition, establish a supportive safety net, and facilitate their ability to learn in the simulation laboratory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A systematic methodology to extend the applicability of a bioconversion model for the simulation of various co-digestion scenarios

    DEFF Research Database (Denmark)

    Kovalovszki, Adam; Alvarado-Morales, Merlin; Fotidis, Ioannis

    2017-01-01

    Detailed simulation of anaerobic digestion (AD) requires complex mathematical models and the optimization of numerous model parameters. By applying a systematic methodology and identifying the parameters with the highest impact on process variables in a well-established AD model, its applicability...... was extended to various co-digestion scenarios. More specifically, the application of the step-by-step methodology led to the estimation of a general and reduced set of parameters, for the simulation of scenarios where either manure or wastewater were co-digested with different organic substrates. Validation...... experimental data quite well, indicating that it offers a reliable reference point for future simulations of anaerobic co-digestion scenarios....

  16. Simulating fission product transients via the history-based local-parameter methodology

    International Nuclear Information System (INIS)

    Jenkins, D.A.; Rouben, B.; Salvatore, M.

    1993-01-01

    This paper describes the fission-product-calculation capability of the history-based local-parameter methodology for evaluating lattice properties for use in core-tracking calculations in CANDU reactors. In addition to taking into account the individual past history of each bundle (the flux/power level, fuel temperature, and coolant density and temperature that the bundle has seen during its stay in the core), the latest refinement of the history-based method provides a fission-product-driver capability. It allows the bundle-specific concentrations of the three basic groups of saturating fission products to be calculated in steady state or following a power transient, including long shutdowns. The new capability is illustrated by simulating the startup period following a typical long shutdown, starting from a snapshot in the Point Lepreau operating history. 9 refs., 7 tabs
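
    The dominant saturating fission product in such transients is Xe-135, fed by I-135 decay. As a hedged illustration (not the paper's code, and with an assumed macroscopic fission cross-section), the following sketch integrates the standard iodine-xenon balance equations through a shutdown to reproduce the familiar xenon peak:

```python
# Illustrative I-135 -> Xe-135 transient after a reactor shutdown.
import numpy as np
from scipy.integrate import solve_ivp

LAM_I, LAM_XE = 2.87e-5, 2.09e-5   # decay constants [1/s]
GAM_I, GAM_XE = 0.0639, 0.00237    # U-235 fission yields
SIG_XE = 2.6e6 * 1e-24             # Xe-135 absorption cross-section [cm^2]
SF = 0.05                          # assumed macroscopic fission xs [1/cm]

def xenon_ode(t, y, phi):
    I, Xe = y
    dI = GAM_I * SF * phi - LAM_I * I
    dXe = GAM_XE * SF * phi + LAM_I * I - (LAM_XE + SIG_XE * phi) * Xe
    return [dI, dXe]

phi0 = 3e13                        # steady flux before shutdown [n/cm^2/s]
I0 = GAM_I * SF * phi0 / LAM_I     # equilibrium iodine
Xe0 = (GAM_XE * SF * phi0 + LAM_I * I0) / (LAM_XE + SIG_XE * phi0)

# shutdown: flux drops to zero; follow the xenon peak over three days
sol = solve_ivp(xenon_ode, (0, 3 * 86400), [I0, Xe0], args=(0.0,))
print(f"Xe peaks at ~{sol.y[1].max() / Xe0:.2f}x its equilibrium value")
```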

  17. On-line diagnosis and recovery of adversary attack using logic flowgraph methodology simulation

    International Nuclear Information System (INIS)

    Guarro, S.B.

    1986-01-01

    The Logic Flowgraph Methodology (LFM) allows the construction of special graph models for the simulation of complex causal processes, including feedback loops and sequential effects. Among the most notable features of LFM is the formal inclusion in its models of causality conditioning by logic switches embedded in the modeled process, such as faults or modes of operation. The LFM model of a process is a graph structure that captures, in one synthetic representation, the relevant success- and fault-space characterization of that process. LFM is very similar to an artificial intelligence expert system shell. To illustrate the utilization of LFM, an application to the assessment and on-line monitoring of a material control facility is presented. The LFM models are used to model adversary action and control response, and to generate mini-diagnostic and recovery trees in real time, as well as reliability trees for off-line evaluation. Although the case study presented is for an imaginary facility, most of the conceptual elements that would be present in a real application have been retained in order to highlight the features and capabilities of the methodology
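
    The diagnostic use of such a causality graph can be suggested with a toy backward trace. The sketch below is a deliberate simplification (a plain digraph with invented node names); it omits the logic switches, loops, and timing that distinguish full LFM models:

```python
# Toy LFM-style backward trace over a causal digraph (illustrative only).
from collections import deque

# edges map an effect to its possible upstream causes
CAUSES = {
    "low_flow": ["pump_fault", "valve_closed"],
    "high_temp": ["low_flow", "cooler_fault"],
    "alarm": ["high_temp"],
}

def candidate_root_causes(symptom):
    """Breadth-first backward trace from an observed deviation."""
    seen, queue, roots = set(), deque([symptom]), []
    while queue:
        node = queue.popleft()
        parents = CAUSES.get(node, [])
        if not parents:
            roots.append(node)      # no upstream cause: candidate root
        for p in parents:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return roots

print(candidate_root_causes("alarm"))
# -> ['cooler_fault', 'pump_fault', 'valve_closed']
```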

  18. Development of a numerical methodology for flowforming process simulation of complex geometry tubes

    Science.gov (United States)

    Varela, Sonia; Santos, Maite; Arroyo, Amaia; Pérez, Iñaki; Puigjaner, Joan Francesc; Puigjaner, Blanca

    2017-10-01

    Nowadays, the incremental flowforming process is widely explored because the usage of complex tubular products is increasing, driven by the light-weighting trend and the use of expensive materials. The enhanced mechanical properties of finished parts, combined with the process efficiency in terms of raw material and energy consumption, are the key factors for its competitiveness and sustainability, which is consistent with EU industry policy. As a promising technology, additional steps for extending the existing flowforming limits in the production of tubular products are required. The objective of the present research is to further expand the current state of the art regarding limitations on tube thickness and diameter, exploring the feasibility of flowforming complex geometries such as tubes with thicknesses of up to 60 mm. In this study, the analysis of the backward flowforming process of a 7075 aluminum tubular preform is carried out as a demonstration case to define the optimum process parameters, machine requirements and tooling geometry. Numerical simulation studies on the flowforming of thin-walled tubular components have been considered to increase the knowledge of the technology. The calculation of the rotational movement of the preform mesh, the high thickness/length ratio and the thermomechanical conditions significantly increase the computation time of the numerical simulation model. This means that efficient and reliable tools able to predict the forming loads and the quality of flowformed thick tubes are not available. This paper aims to overcome this situation by developing a simulation methodology based on an FEM simulation code and including new strategies. Material characterization has also been performed through tensile tests to support the design of the process. Finally, to check the reliability of the model, flowforming tests have been carried out in an industrial environment.

  19. Response surface methodological approach for the decolorization of simulated dye effluent using Aspergillus fumigatus fresenius.

    Science.gov (United States)

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-01-30

    The aim of our research was to study the effects of temperature, pH and initial dye concentration on the decolorization of the diazo dye Acid Red 151 (AR 151) in simulated dye solution using a fungal isolate, Aspergillus fumigatus fresenius. A central composite design matrix and response surface methodology (RSM) were applied to design the experiments and to evaluate the interactive effects of the three most important operating variables: temperature (25-35 degrees C), pH (4.0-7.0), and initial dye concentration (100-200 mg/L) on the biodegradation of AR 151. A total of 20 experiments were conducted in the present study towards the construction of a quadratic model. A very high regression coefficient between the variables and the response (R^2 = 0.9934) indicated excellent evaluation of the experimental data by the second-order polynomial regression model. The RSM indicated that an initial dye concentration of 150 mg/L, pH 5.5 and a temperature of 30 degrees C were optimal for maximum percentage decolorization of AR 151 in simulated dye solution, and 84.8% decolorization of AR 151 was observed at optimum growth conditions.
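
    The quadratic RSM model described here is an ordinary least-squares fit of a second-order polynomial in the coded factors. The sketch below shows the mechanics on synthetic data; the design points and response values are invented, not the paper's measurements:

```python
# Second-order (quadratic) RSM fit on a three-factor design (illustrative).
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x1..x3, x1^2..x3^2, pairwise interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))      # 20 runs, coded factor levels
y = 80 - 5*X[:, 0]**2 - 3*X[:, 1]**2 + rng.normal(0, 1, 20)  # synthetic response

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(f"R^2 = {r2:.4f}")                  # analogous to the abstract's R^2
```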

  20. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account. Following the market trend and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, soon, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to use modeling based on Maxwell's equations to compute the propagation of light, and to use software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and both memory and computation time. After having presented the correlation of the model with measurements, we illustrate its use in the case of the optimization of a 1.75 μm pixel.
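
    The core of any FDTD engine is the leapfrog update of staggered electric and magnetic fields. A minimal one-dimensional sketch is given below; it is purely illustrative (the paper uses a 3-D engine with periodic boundaries and incoherent plane-wave sources), and the grid, source, and material values are invented:

```python
# Minimal 1-D FDTD on a Yee grid with a Gaussian soft source (illustrative).
import numpy as np

nx, nt = 400, 1000
c0, dx = 3e8, 25e-9                      # grid step ~ lambda/20 for visible light
dt = 0.5 * dx / c0                       # Courant-stable time step
ez = np.zeros(nx)                        # electric field
hy = np.zeros(nx - 1)                    # magnetic field (staggered grid)
eps_r = np.ones(nx)
eps_r[250:] = 4.0                        # hypothetical silicon-like region

MU0, EPS0 = 4e-7 * np.pi, 8.854e-12
for n in range(nt):
    hy += dt / (MU0 * dx) * (ez[1:] - ez[:-1])                        # update H
    ez[1:-1] += dt / (EPS0 * eps_r[1:-1] * dx) * (hy[1:] - hy[:-1])   # update E
    ez[50] += np.exp(-((n - 100) / 25.0) ** 2)                        # source

print(f"peak field in substrate: {np.abs(ez[250:]).max():.3f}")
```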

  1. An optical and near-infrared polarization survey of Seyfert and broad-line radio galaxies. Pt. 2

    International Nuclear Information System (INIS)

    Brindle, C.; Hough, J.H.; Bailey, J.A.; Axon, D.J.; Ward, M.J.; McLean, I.S.

    1990-01-01

    We discuss the wavelength dependence (0.44-2.2 μm) of the polarization of the sample of 71 Seyfert galaxies and three broad-line radio galaxies presented in a previous paper. For four galaxies, 3A 0557-383, Fairall 51, IC 4392A and NGC 3783, we also present spectropolarimetry covering the wavelength range 0.4-0.6 μm. (author)

  2. Advanced methodology to simulate boiling water reactor transient using coupled thermal-hydraulic/neutron-kinetic codes

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Christoph Oliver

    2016-06-13

    -sets) predicted by SCALE6/TRITON and CASMO. Thereby the coupled TRACE/PARCS simulations reproduced the single fuel assembly depletion and stand-alone PARCS results. A turbine trip event that occurred at a BWR plant of type 72 has been investigated in detail using the cross-section libraries generated with SCALE/TRITON and CASMO. The evolution of the integral BWR parameters predicted by the coupled codes using cross-sections from SCALE/TRITON is very close to the global trends calculated using CASMO cross-sections. Further, to implement uncertainty quantification, the PARCS reactor dynamics code was extended (uncertainty module) to facilitate the consideration of the uncertainty of neutron kinetics parameters in coupled TRACE/PARCS simulations. For a postulated pressure perturbation, an uncertainty and sensitivity study was performed using TRACE/PARCS and SUSA. The obtained results illustrated the capability of such methodologies, which are still under development. Based on this analysis, the uncertainty bands for key parameters, e.g. reactivity, as well as the importance ranking of reactor kinetics parameters, could be predicted and identified for this accident scenario.

  3. Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krause, E.; et al.

    2017-06-28

    We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 $h^{-1}$ Mpc) and galaxy-galaxy lensing (12 $h^{-1}$ Mpc) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.

  4. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet fidelity requirements applicable to a cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented into the Kozloduy 6 full-scope replica control room simulator.

  5. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator

    International Nuclear Information System (INIS)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O.

    2010-09-01

    In Mexico there is a significant number of nuclear medicine centers in operation, so a risk of accidents related to the transport and handling of the open sources used in nuclear medicine exists. The objective of the National Institute of Nuclear Research (ININ) is to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, it is proposed to use the equipment available in the nuclear medicine centers, together with the neck-thyroid simulators developed by the ININ, to calibrate the gamma cameras. Gamma cameras have among their components elements that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurement; therefore, besides their use for diagnostic imaging, they can be calibrated with anthropomorphic simulators, and also with point sources, for the quantification of the activity of radionuclides distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII, 'Harmonization of internal dosimetry procedures', in which nine countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America for emergency response, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve the appropriate radiological protection of workers, which is essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)

  6. A trend analysis methodology for enhanced validation of 3-D LWR core simulations

    International Nuclear Information System (INIS)

    Wieselquist, William; Ferroukhi, Hakim; Bernatowicz, Kinga

    2011-01-01

    This paper presents an approach that is being developed and implemented at PSI to enhance the Verification and Validation (V and V) procedure of 3-D static core simulations for the Swiss LWR reactors. The principle is to study in greater detail the deviations between calculations and measurements and to assess on that basis whether distinct trends in the accuracy can be observed. The presence of such trends could then be a useful indicator of possible limitations/weaknesses in the applied lattice/core analysis methodology and could thereby serve as guidance for method/model enhancements. Such a trend analysis is illustrated here for a Swiss PWR core model using as basis the state-of-the-art industrial CASMO/SIMULATE codes. The accuracy of the core-follow models in reproducing the periodic in-core neutron flux measurements is studied for a total of 21 operating cycles. The error is analyzed with respect to different physics parameters, with a ranking of the individual assemblies'/nodes' contribution to the total RMS error, and trends are analyzed by performing partial correlation analysis. The highest errors appear at the core axial peripheries (top/bottom nodes), where a mean C/E-1 error of 10% is observed for the top nodes and -5% for the bottom nodes, and the maximum C/E-1 error reaches almost 20%. Partial correlation analysis shows a significant correlation of the error with distance from the core mid-plane and only less significant correlations with other variables. Overall, it appears that the primary areas that could benefit from further method/modeling improvements are: axial reflectors, MOX treatment and control rod cusping. (author)
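
    A partial correlation of the kind used here regresses the confounder out of both variables and correlates the residuals. The sketch below illustrates this on synthetic C/E-1 data; all variable names and numbers are invented for the example:

```python
# Partial correlation via residuals of linear regressions (illustrative).
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out confounder z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(rx, ry)

rng = np.random.default_rng(1)
dist_mid = rng.uniform(0, 180, 500)    # distance from core mid-plane [cm]
burnup = rng.uniform(0, 50, 500)       # exposure [GWd/t]
error = 0.05 * dist_mid / 180 + 0.01 * rng.standard_normal(500)  # C/E-1 proxy

r, p = partial_corr(error, dist_mid, burnup)
print(f"partial corr (error vs. axial distance | burnup): r={r:.2f}, p={p:.1e}")
```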

  7. Neutrino-heated stars and broad-line emission from active galactic nuclei

    Science.gov (United States)

    Macdonald, James; Stanev, Todor; Biermann, Peter L.

    1991-01-01

    Nonthermal radiation from active galactic nuclei indicates the presence of highly relativistic particles. The interaction of these high-energy particles with matter and photons gives rise to a flux of high-energy neutrinos. In this paper, the influence of the expected high neutrino fluxes on the structure and evolution of single, main-sequence stars is investigated. Sequences of models of neutrino-heated stars in thermal equilibrium are presented for masses 0.25, 0.5, 0.8, and 1.0 solar mass. In addition, a set of evolutionary sequences for mass 0.5 solar mass have been computed for different assumed values for the incident neutrino energy flux. It is found that winds driven by the heating due to high-energy particles and hard electromagnetic radiation of the outer layers of neutrino-bloated stars may satisfy the requirements of the model of Kazanas (1989) for the broad-line emission clouds in active galactic nuclei.

  8. Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective

    International Nuclear Information System (INIS)

    Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang

    2017-01-01

    The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.
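
    For concreteness, the virial factor quoted above follows from rearranging the virial mass relation M_BH = f R_BLR ΔV² / G. A worked one-object illustration with invented inputs (not values from the compiled sample):

```python
# Virial factor f = G * M_BH / (R_BLR * dV^2) for one hypothetical object.
G = 6.674e-11                    # m^3 kg^-1 s^-2
M_SUN = 1.989e30                 # kg
LT_DAY = 2.59e13                 # metres in one light-day

m_bh = 1e7 * M_SUN               # dynamical black-hole mass (assumed)
r_blr = 10 * LT_DAY              # BLR radius (assumed)
dv = 3000e3                      # broad-line velocity width [m/s] (assumed)

f = G * m_bh / (r_blr * dv**2)
print(f"virial factor f = {f:.2f}")   # ~0.6 for these inputs
```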

  9. THE SIZE, STRUCTURE, AND IONIZATION OF THE BROAD-LINE REGION IN NGC 3227

    International Nuclear Information System (INIS)

    Devereux, Nick

    2013-01-01

    Hubble Space Telescope spectroscopy of the Seyfert 1.5 galaxy, NGC 3227, confirms previous reports that the broad Hα emission line flux is time variable, decreasing by a modest ∼11% between 1999 and 2000 in response to a corresponding ∼37% decrease in the underlying continuum. Modeling the gas distribution responsible for the broad Hα, Hβ, and Hγ emission lines favors a spherically symmetric inflow as opposed to a thin disk. Adopting a central black hole mass of 7.6 × 10^6 M_☉, determined from prior reverberation mapping, leads to the following dimensions for the size of the region emitting the broad Hα line: an outer radius ∼90 lt-days and an inner radius ∼3 lt-days. Thus, the previously determined reverberation size for the broad-line region (BLR) consistently coincides with the inner radius of a much larger volume of ionized gas. However, the perceived size of the BLR is an illusion, a consequence of the fact that the emitting region is ionization bounded at the outer radius and diminished by Doppler broadening at the inner radius. The actual dimensions of the inflow remain to be determined. Nevertheless, the steady-state mass inflow rate is estimated to be ∼10^-2 M_☉ yr^-1, which is sufficient to explain the X-ray luminosity of the active galactic nucleus (AGN) in terms of radiatively inefficient accretion. Collectively, the results challenge many preconceived notions concerning the nature of BLRs in AGNs.

  10. Stability of the Broad-line Region Geometry and Dynamics in Arp 151 Over Seven Years

    Science.gov (United States)

    Pancoast, A.; Barth, A. J.; Horne, K.; Treu, T.; Brewer, B. J.; Bennert, V. N.; Canalizo, G.; Gates, E. L.; Li, W.; Malkan, M. A.; Sand, D.; Schmidt, T.; Valenti, S.; Woo, J.-H.; Clubb, K. I.; Cooper, M. C.; Crawford, S. M.; Hönig, S. F.; Joner, M. D.; Kandrashoff, M. T.; Lazarova, M.; Nierenberg, A. M.; Romero-Colmenero, E.; Son, D.; Tollerud, E.; Walsh, J. L.; Winkler, H.

    2018-04-01

    The Seyfert 1 galaxy Arp 151 was monitored as part of three reverberation mapping campaigns spanning 2008–2015. We present modeling of these velocity-resolved reverberation mapping data sets using a geometric and dynamical model for the broad-line region (BLR). By modeling each of the three data sets independently, we infer the evolution of the BLR structure in Arp 151 over a total of 7 yr and constrain the systematic uncertainties in nonvarying parameters such as the black hole mass. We find that the BLR geometry of a thick disk viewed close to face-on is stable over this time, although the size of the BLR grows by a factor of ∼2. The dynamics of the BLR are dominated by inflow, and the inferred black hole mass is consistent for the three data sets, despite the increase in BLR size. Combining the inference for the three data sets yields a black hole mass and statistical uncertainty of log10(M_BH/M_☉) = 6.82 (+0.09/-0.09), with a standard deviation in individual measurements of 0.13 dex.

  11. Weapon Simulator Test Methodology Investigation: Comparison of Live Fire and Weapon Simulator Test Methodologies and the Effects of Clothing and Individual Equipment on Marksmanship

    Science.gov (United States)

    2016-09-15

    variables analyzed included shot group tightness, radial error from the center of the target, and multiple time variables. The weapon simulator and ... performance could be analyzed to determine if the weapon simulator data aligns with live fire data (i.e., if similar performance decrements appear in the ... Analysis and Reporting: The Noptel NOS4 software records shot performance data in real time and presents multiple statistical calculations as well as ...

  12. Training simulators in nuclear power plants: Experience, programme design and assessment methodology. Proceedings of a specialists' meeting

    International Nuclear Information System (INIS)

    1997-11-01

    Simulators have become an indispensable part of training world-wide. Therefore, international exchange of information is important to share the experience gained in different countries in order to assure high international standards. A second aspect is the tremendous evolution in the computing capacities of the simulator hardware and the increasing functionality of the simulator software. This background has led the IAEA to invite the simulator experts for an experience exchange. The German Simulator Centre in Essen, which is operated by the companies KSG and GfS, was asked to host this Specialists' Meeting. The Specialists' Meeting on "Training Simulators in Nuclear Power Plants: Experience, Programme Design and Assessment Methodology" was organized by the IAEA in cooperation with the German Simulator Centre operated by KSG Kraftwerks-Simulator-Gesellschaft mbH and GfS Gesellschaft fuer Simulatorschulung mbH, and was held from 17-19 November 1997 in Essen, Germany. The meeting focused on developments in simulation technology, experiences with simulator upgrades, utilization of computerized tools as support and complement of simulator training, and the use of simulators for other purposes. The meeting was attended by 50 participants from 16 countries. In the course of four sessions, 21 technical presentations were made. The present volume contains the papers presented by national delegates at the Specialists' Meeting

  13. Training simulators in nuclear power plants: Experience, programme design and assessment methodology. Proceedings of a specialists` meeting

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    Simulators have become an indispensable part of training world-wide. Therefore, international exchange of information is important to share the experience gained in different countries in order to assure high international standards. A second aspect is the tremendous evolution in the computing capacities of the simulator hardware and the increasing functionality of the simulator software. This background has led the IAEA to invite the simulator experts for an experience exchange. The German Simulator Centre in Essen, which is operated by the companies KSG and GfS, was asked to host this Specialists' Meeting. The Specialists' Meeting on "Training Simulators in Nuclear Power Plants: Experience, Programme Design and Assessment Methodology" was organized by the IAEA in cooperation with the German Simulator Centre operated by KSG Kraftwerks-Simulator-Gesellschaft mbH and GfS Gesellschaft fuer Simulatorschulung mbH, and was held from 17-19 November 1997 in Essen, Germany. The meeting focused on developments in simulation technology, experiences with simulator upgrades, utilization of computerized tools as support and complement of simulator training, and the use of simulators for other purposes. The meeting was attended by 50 participants from 16 countries. In the course of four sessions, 21 technical presentations were made. The present volume contains the papers presented by national delegates at the Specialists' Meeting. Refs, figs, tabs

  14. Electrochemical treatment of simulated sugar industrial effluent: Optimization and modeling using a response surface methodology

    Directory of Open Access Journals (Sweden)

    P. Asaithambi

    2016-11-01

    The removal of organic compounds from a simulated sugar industrial effluent was investigated using the electrochemical oxidation technique. The effects of various experimental parameters, such as current density, electrolyte concentration and flow rate, on the percentage of COD removal and the power consumption were studied in a batch electrochemical reactor. The electrochemical reactor performance was analyzed both with and without recirculation of the effluent, at constant inter-electrode distance. It was found that the percentage removal of COD increased with increasing electrolyte concentration and current density. A maximum COD removal of 80.74% was achieved at a current density of 5 A/dm2 and an electrolyte concentration of 5 g/L in the batch electrochemical reactor. The parameters of the recirculation electrochemical reactor system, namely current density, COD concentration and flow rate, were optimized using response surface methodology, maximizing the percentage of COD removal while minimizing the power consumption. It has been observed from the present analysis that the predicted values are in good agreement with the experimental data, with a correlation coefficient of 0.9888.

  15. A Radiative Transfer Modeling Methodology in Gas-Liquid Multiphase Flow Simulations

    Directory of Open Access Journals (Sweden)

    Gautham Krishnamoorthy

    2014-01-01

    A methodology for performing radiative transfer calculations in computational fluid dynamic simulations of gas-liquid multiphase flows is presented. By considering an externally irradiated bubble column photoreactor as our model system, the bubble scattering coefficients were determined through add-on functions by employing as inputs the bubble volume fractions, number densities, and the fractional contribution of each bubble size to the bubble volume from four different multiphase modeling options. The scattering coefficient profiles resulting from the models were significantly different from one another and aligned closely with their predicted gas-phase volume fraction distributions. The impacts of the multiphase modeling option, initial bubble diameter, and gas flow rates on the radiation distribution patterns within the reactor were also examined. An increase in air inlet velocities resulted in an increase in the fraction of larger sized bubbles and their contribution to the scattering coefficient. However, the initial bubble sizes were found to have the strongest impact on the radiation field.

  16. Multiphysics Simulation of Welding-Arc and Nozzle-Arc System: Mathematical-Model, Solution-Methodology and Validation

    Science.gov (United States)

    Pawar, Sumedh; Sharma, Atul

    2018-01-01

    This work presents a mathematical model and solution methodology for a multiphysics engineering problem involving arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by solving the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated by an excellent agreement with the published results. The developed mathematical model and the user defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation, in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.

  17. A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators.

    Science.gov (United States)

    Sánchez, Borja Bordel; Alcarria, Ramón; Sánchez-Picot, Álvaro; Sánchez-de-Rivera, Diego

    2017-09-22

    Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of the lack of co-simulation instruments to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.

  18. A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators

    Directory of Open Access Journals (Sweden)

    Borja Bordel Sánchez

    2017-09-01

    Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of the lack of co-simulation instruments to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users’ needs and requirements and various additional factors such as the development team’s experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.

  19. Development and Application of a Clinical Microsystem Simulation Methodology for Human Factors-Based Research of Alarm Fatigue.

    Science.gov (United States)

    Kobayashi, Leo; Gosbee, John W; Merck, Derek L

    2017-07-01

    (1) To develop a clinical microsystem simulation methodology for alarm fatigue research with a human factors engineering (HFE) assessment framework and (2) to explore its application to the comparative examination of different approaches to patient monitoring and provider notification. Problems with the design, implementation, and real-world use of patient monitoring systems result in alarm fatigue. A multidisciplinary team is developing an open-source tool kit to promote bedside informatics research and mitigate alarm fatigue. Simulation, HFE, and computer science experts created a novel simulation methodology to study alarm fatigue. Featuring multiple interconnected simulated patient scenarios with scripted timeline, "distractor" patient care tasks, and triggered true and false alarms, the methodology incorporated objective metrics to assess provider and system performance. Developed materials were implemented during institutional review board-approved study sessions that assessed and compared an experimental multiparametric alerting system with a standard monitor telemetry system for subject response, use characteristics, and end-user feedback. A four-patient simulation setup featuring objective metrics for participant task-related performance and response to alarms was developed along with accompanying structured HFE assessment (questionnaire and interview) for monitor systems use testing. Two pilot and four study sessions with individual nurse subjects elicited true alarm and false alarm responses (including diversion from assigned tasks) as well as nonresponses to true alarms. In-simulation observation and subject questionnaires were used to test the experimental system's approach to suppressing false alarms and alerting providers. A novel investigative methodology applied simulation and HFE techniques to replicate and study alarm fatigue in controlled settings for systems assessment and experimental research purposes.

  20. Numerical simulation and analysis of fuzzy PID and PSD control methodologies as dynamic energy efficiency measures

    International Nuclear Information System (INIS)

    Ardehali, M.M.; Saboori, M.; Teshnelab, M.

    2004-01-01

    Energy efficiency enhancement is achieved by utilizing control algorithms that reduce overshoots and undershoots, as well as unnecessary fluctuations, in the amount of energy input to energy consuming systems during transient operation periods. It is hypothesized that the application of control methodologies with characteristics that change with time and according to the system dynamics, identified as dynamic energy efficiency measures (DEEM), achieves the desired enhancement. The objective of this study is to simulate and analyze the effects of fuzzy logic based tuning of proportional integral derivative (F-PID) and proportional sum derivative (F-PSD) controllers for a heating and cooling energy system, while accounting for the dynamics of the major system components. The procedure to achieve the objective includes the utilization of fuzzy logic rules to determine the PID and PSD controllers' gain coefficients, so that the control laws for regulating the heat exchangers' heating or cooling energy inputs are determined in each time step of the operation period. The performances of the F-PID and F-PSD controllers are measured by means of two cost functions that are based on quadratic forms of the energy input and of the deviation from a set point temperature. It is found that application of the F-PID control algorithm, as a DEEM, results in lower costs for energy input and deviation from a set point temperature by 24% and 17% as compared to a PID and 13% and 8% as compared to a PSD, respectively. It is also shown that the F-PSD performance is better than that of the F-PID controller
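
    As a rough illustration of fuzzy gain scheduling (the membership rule and plant below are invented, not the paper's controller or building model), a PID loop whose gains are scaled by the fuzzified error magnitude can be sketched as follows:

```python
# Toy fuzzy-gain-scheduled PID loop on a crude first-order thermal plant.
def fuzzy_gain(error, base, low=0.5, high=2.0):
    """Scale a base gain: large |error| -> aggressive, small -> gentle."""
    e = min(abs(error) / 5.0, 1.0)            # membership of "error is large"
    return base * (low + (high - low) * e)    # blend of the two fuzzy rules

def fpid_step(setpoint, temp, state, dt=1.0):
    err = setpoint - temp
    kp, ki, kd = (fuzzy_gain(err, g) for g in (2.0, 0.1, 0.5))
    state["i"] += err * dt
    d = (err - state["e"]) / dt
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d   # energy input command

state, temp = {"i": 0.0, "e": 0.0}, 15.0
for _ in range(60):
    u = fpid_step(22.0, temp, state)
    temp += 0.05 * (u - (temp - 10.0))           # crude heat balance
print(f"temperature after 60 steps: {temp:.2f} C")
```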

  1. Comparing a simple methodology to evaluate hydrodynamic parameters with rainfall simulation experiments

    Science.gov (United States)

    Di Prima, Simone; Bagarello, Vincenzo; Bautista, Inmaculada; Burguet, Maria; Cerdà, Artemi; Iovino, Massimo; Prosdocimi, Massimo

    2016-04-01

    Studying soil hydraulic properties is necessary for interpreting and simulating many hydrological processes of environmental and economic importance, such as the partition of rainfall into infiltration and runoff. The saturated hydraulic conductivity, Ks, exerts a dominating influence on the partitioning of rainfall in vertical and lateral flow paths. Therefore, estimates of Ks are essential for describing and modeling hydrological processes (Zimmermann et al., 2013). According to several investigations, Ks data collected by ponded infiltration tests can be expected to be unusable for interpreting field hydrological processes, and particularly infiltration. In fact, infiltration measured by ponding gives us information about the soil's maximum or potential infiltration rate (Cerdà, 1996). Moreover, especially for the hydrodynamic parameters, many replicated measurements have to be carried out to characterize an area of interest, since they are known to vary widely both in space and time (Logsdon and Jaynes, 1996; Prieksat et al., 1994). Therefore, the technique to be applied at the near-point scale should be simple and rapid. Bagarello et al. (2014) and Alagna et al. (2015) suggested that the Ks values determined by an infiltration experiment carried out by applying water at a relatively large distance from the soil surface could be more appropriate than those obtained with a low height of water pouring for explaining surface runoff generation phenomena during intense rainfall events. These authors used the Beerkan Estimation of Soil Transfer parameters (BEST) procedure for complete soil hydraulic characterization (Lassabatère et al., 2006) to analyze the field infiltration experiment. This methodology, combining low and high heights of water pouring, seems appropriate to test the effect of intense and prolonged rainfall events on the hydraulic characteristics of the surface soil layer. In fact, an intense and prolonged rainfall event has a perturbing effect on the soil surface

  2. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    Energy Technology Data Exchange (ETDEWEB)

    Negrete, C. Alenka [Instituto Nacional de Astrofisica, Optica y Electronica (Mexico); Dultzin, Deborah [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico (Mexico); Marziani, Paola [INAF, Astronomical Observatory of Padova, I-35122 Padova (Italy); Sulentic, Jack W., E-mail: cnegrete@inaoep.mx, E-mail: deborah@astro.unam.mx, E-mail: paola.marziani@oapd.inaf.it, E-mail: sulentic@iaa.es [Instituto de Astrofisica de Andalucia, E-18008 Granada (Spain)

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our 'photoionization' method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
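
    The last step of this 'photoionization' method, recovering r_BLR from the definition of the ionization parameter U = Q(H)/(4π r² c n_H), is easy to illustrate numerically. All input values in the sketch below are invented placeholders, not measurements from the paper:

```python
# r_BLR from the ionization-parameter definition (worked illustration).
import math

C = 2.998e10                       # speed of light [cm/s]
LT_DAY_CM = 2.59e15                # centimetres in one light-day

def r_blr_cm(Q_H, n_H, U):
    """Invert U = Q(H) / (4*pi*r^2*c*n_H) for the BLR radius r."""
    return math.sqrt(Q_H / (4.0 * math.pi * C * n_H * U))

Q_H = 1e55                         # ionizing photons/s (assumed)
n_H = 1e10                         # hydrogen density [cm^-3] (assumed)
U = 10**-1.5                       # ionization parameter (assumed)

r_cm = r_blr_cm(Q_H, n_H, U)
print(f"r_BLR ~ {r_cm:.2e} cm = {r_cm / LT_DAY_CM:.0f} light-days")
```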

  3. The case for inflow of the broad-line region of active galactic nuclei

    Science.gov (United States)

    Gaskell, C. Martin; Goosmann, René W.

    2016-02-01

    The high-ionization lines of the broad-line region (BLR) of thermal active galactic nuclei (AGNs) show blueshifts of a few hundred km/s to several thousand km/s with respect to the low-ionization lines. This has long been thought to be due to the high-ionization lines of the BLR arising in a wind, the far side of which is blocked from our view by the accretion disc. Evidence for and against the disc-wind model is discussed. The biggest problem for the model is that velocity-resolved reverberation mapping repeatedly fails to show the expected kinematic signature of outflow of the BLR. The disc-wind model also cannot readily reproduce the red side of the line profiles of high-ionization lines. The rapidly falling density in an outflow makes it difficult to obtain high equivalent widths. We point out a number of major problems with associating the BLR with the outflows producing broad absorption lines. An explanation which avoids all these problems and satisfies the constraints of both the line profiles and velocity-resolved reverberation mapping is a model in which the blueshifting is due to scattering off material spiraling inwards with an inflow velocity of half the velocity of the blueshifting. We discuss how recent reverberation mapping results are consistent with the scattering-plus-inflow model but do not support a disc-wind model. We propose that the anti-correlation of the apparent redshifting of Hβ with the blueshifting of C IV is a consequence of contamination of the red wings of Hβ by the broad wings of [O III].

  4. The End of the Lines for OX 169: No Binary Broad-Line Region

    Science.gov (United States)

    Halpern, J. P.; Eracleous, M.

    2000-03-01

    We show that the unusual Balmer emission-line profiles of the quasar OX 169, frequently described as either self-absorbed or double peaked, are actually neither. The effect is an illusion resulting from two coincidences. First, the forbidden lines are quite strong and broad. Consequently, the [N II] λ6583 line and the associated narrow-line component of Hα present the appearance of twin Hα peaks. Second, the redshift of 0.2110 brings Hβ into coincidence with Na I D at zero redshift, and ISM absorption in Na I D divides the Hβ emission line. In spectra obtained over the past decade, we see no substantial change in the character of the line profiles and no indication of intrinsic double-peaked structure. The Hγ, Mg II, and Lyα emission lines are single peaked, and all of the emission-line redshifts are consistent once they are correctly attributed to their permitted and forbidden-line identifications. A systematic shift of up to 700 km s^-1 between broad and narrow lines is seen, but such differences are common and could be due to gravitational and transverse redshift in a low-inclination disk. Stockton & Farnham had called attention to an apparent tidal tail in the host galaxy of OX 169 and speculated that a recent merger had supplied the nucleus with a coalescing pair of black holes that was now revealing its existence in the form of two physically distinct broad-line regions. Although there is no longer any evidence for two broad emission-line regions in OX 169, binary black holes should form frequently in galaxy mergers, and it is still worthwhile to monitor the radial velocities of emission lines that could supply evidence of their existence in certain objects.

  5. Methodology for Analysis, Modeling and Simulation of Airport Gate-waiting Delays

    Science.gov (United States)

    Wang, Jianfeng

    This dissertation presents methodologies to estimate gate-waiting delays from historical data, to identify gate-waiting-delay functional causes in major U.S. airports, and to evaluate the impact of gate operation disruptions and mitigation strategies on gate-waiting delay. Airport gates are a source of congestion in the air transportation system. When an arriving flight cannot pull into its gate, the delay it experiences is called gate-waiting delay. Some possible reasons for gate-waiting delay are: the gate is occupied, gate staff or equipment is unavailable, the weather prevents the use of the gate (e.g. lightning), or the airline has a preferred gate assignment. Gate-waiting delays potentially stay with the aircraft throughout the day (unless they are absorbed), adding costs to passengers and the airlines. As the volume of flights increases, ensuring that airport gates do not become a choke point of the system is critical. The first part of the dissertation presents a methodology for estimating gate-waiting delays based on historical, publicly available sources. Analysis of gate-waiting delays at major U.S. airports in the summer of 2007 identifies the following. (i) Gate-waiting delay is not a significant problem on the majority of days; however, the worst delay days (e.g. 4% of the days at LGA) are extreme outliers. (ii) The Atlanta International Airport (ATL), the John F. Kennedy International Airport (JFK), the Dallas/Fort Worth International Airport (DFW) and the Philadelphia International Airport (PHL) experience the highest gate-waiting delays among major U.S. airports. (iii) There is a significant gate-waiting-delay difference between airlines due to a disproportional gate allocation. (iv) Gate-waiting delay is sensitive to the time of day and schedule peaks. According to basic principles of queueing theory, gate-waiting delay can be attributed to over-scheduling, higher-than-scheduled arrival rate, longer-than-scheduled gate-occupancy time, and reduced gate
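
    The queueing-theory attribution in the last sentence can be made concrete with a toy discrete-event simulation, which is not the dissertation's model: flights arrive at random, wait for one of a fixed pool of gates, and the gate-waiting delay is the time until a gate frees up. All rates and counts below are invented:

```python
# Toy gate-queue simulation: exponential inter-arrivals and occupancy times.
import heapq, random

random.seed(42)
N_GATES, N_FLIGHTS = 3, 200
MEAN_IAT, MEAN_OCC = 10.0, 25.0            # minutes (assumed)

free_at = [0.0] * N_GATES                  # min-heap of gate release times
heapq.heapify(free_at)
t, total_wait = 0.0, 0.0
for _ in range(N_FLIGHTS):
    t += random.expovariate(1.0 / MEAN_IAT)        # next arrival time
    gate_free = heapq.heappop(free_at)             # earliest-free gate
    start = max(t, gate_free)                      # wait if all gates busy
    total_wait += start - t
    heapq.heappush(free_at, start + random.expovariate(1.0 / MEAN_OCC))

print(f"mean gate-waiting delay: {total_wait / N_FLIGHTS:.1f} min")
```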

  6. Groundwater flow simulations in support of the Local Scale Hydrogeological Description developed within the Laxemar Methodology Test Project

    International Nuclear Information System (INIS)

    Follin, Sven; Svensson, Urban

    2002-05-01

    The deduced Site Descriptive Model of the Laxemar area has been parameterised from a hydraulic point of view and subsequently put into practice in terms of a numerical flow model. The intention of the subproject has been to explore the adaptation of a numerical flow model to site-specific surface and borehole data, and to identify potential needs for development and improvement in the planned modelling methodology and tools. The experience gained during this process and the outcome of the simulations have been presented to the methodology test project group in the course of the project. The discussion and conclusions made in this particular report concern mainly two issues: (i) the use of numerical simulations as a means of gaining credibility, e.g. discrimination between alternative geological models, and (ii) calibration and conditioning of probabilistic (Monte Carlo) realisations

  7. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
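
    Of the estimators compared above, the plug-in method is the simplest: estimate empirical block probabilities and normalize the block entropy by the block length. A minimal sketch (with an arbitrary block length k = 4) on synthetic fair-coin data:

```python
# Plug-in (maximum-likelihood) entropy-rate estimate for a binary series.
import math
import random
from collections import Counter

def plugin_entropy_rate(bits, k=4):
    """H_k / k, where H_k is the empirical entropy of length-k blocks."""
    blocks = ["".join(map(str, bits[i:i + k])) for i in range(len(bits) - k + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    h_k = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h_k / k                       # bits per symbol

random.seed(0)
fair = [random.randint(0, 1) for _ in range(10000)]
print(f"H ~ {plugin_entropy_rate(fair):.3f} bits/symbol (true value 1.0)")
```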

  8. Validation of response simulation methodology of Albedo dosemeter; Validacao da metodologia de simulacao de resposta de dosimetro de Albedo

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, B.M.; Silva, A.X. da, E-mail: bfreitas@nuclear.ufrj.br [Coordenacao do Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Mauricio, C.L.P. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2016-07-01

    The Instituto de Radioprotecao e Dosimetria developed and runs a neutron TLD albedo individual monitoring service. To optimize the dose calculation algorithm and to infer new calibration factors, the response of this dosemeter was simulated. In order to validate the employed methodology, it was applied to a problem from the QUADOS (Quality Assurance of Computational Tools for Dosimetry) intercomparison, which aimed to evaluate dosimetric problems, one of them being to calculate the response of a generic albedo dosemeter. The obtained results were compared with those of other models and with the reference one, with good agreement. (author)

  9. Methodology for Distributed Electric Propulsion Aircraft Control Development with Simulation and Flight Demonstration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In the proposed STTR study, Empirical Systems Aerospace, Inc. (ESAero) and the University of Illinois at Urbana-Champaign (UIUC) will create a methodology for the...

  10. Simulation and Optimization Methodologies for Military Transportation Network Routing and Scheduling and for Military Medical Services

    National Research Council Canada - National Science Library

    Rodin, Ervin Y

    2005-01-01

    The purpose of this research was to develop a generic model and methodology for analyzing and optimizing large-scale air transportation networks, including both their routing and their scheduling...

  11. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Science.gov (United States)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  12. Towards an in-plane methodology to track breast lesions using mammograms and patient-specific finite-element simulations

    Science.gov (United States)

    Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya

    2017-11-01

    In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using a single imaging technique or several of them. As x-ray-based mammography is widely used, a breast lesion is located in the plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient’s breast was generated from magnetic resonance imaging to simulate mammographic compressions in cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each compression being simulated, a virtual mammogram was obtained and subsequently superimposed onto the corresponding real mammogram, by sharing the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for the CC and MLO compressions, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
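    As a toy illustration of the in-plane step described above, the sketch below applies a 2D rigid-body transformation (here a pure translation that matches the shared nipple landmark) and reports the centroid-to-centroid error distance; all coordinates are invented for demonstration and do not come from the paper.

```python
import numpy as np

def rigid_2d(points, theta, t):
    """Apply a 2D rigid-body transform: rotation by theta (radians), then translation t."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ r.T + t

# Hypothetical landmark (nipple) in virtual and real mammogram coordinates (mm).
nipple_virtual = np.array([52.0, 40.0])
nipple_real = np.array([50.0, 43.5])

# Align the virtual mammogram to the real one by matching the shared landmark.
t = nipple_real - nipple_virtual
tumor_virtual = np.array([[31.0, 22.0]])
tumor_mapped = rigid_2d(tumor_virtual, theta=0.0, t=t)

# Error distance between the mapped and the actually observed tumor centroids.
tumor_real = np.array([33.5, 24.0])
print(np.linalg.norm(tumor_mapped[0] - tumor_real))  # distance in mm
```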

  13. Methodology of the On-line Follow Simulation of Pebble-bed High-temperature Reactors

    International Nuclear Information System (INIS)

    Xia Bing; Li Fu; Wei Chunlin; Zheng Yanhua; Chen Fubing; Zhang Jian; Guo Jiong

    2014-01-01

    The on-line fuel management is an essential feature of the pebble-bed high-temperature reactors (PB-HTRs), which is strongly coupled with the normal operation of the reactor. For the purpose of on-line analysis of the continuous shuffling scheme of numerous fuel pebbles, follow simulation of the real operation is necessary for the PB-HTRs. In this work, the on-line follow simulation methodology of PB-HTR operation is described, featuring parallel treatment of both neutronics analysis and fuel cycling simulation. During the simulation, the operation history of the reactor is divided into a series of burn-up cycles according to the behavior of the operation data, in which the steady-state neutron transport equations are solved and diffusion theory is utilized to determine the physical features of the reactor core. The burn-up equations of heavy metals, fission products and neutron poisons including B-10, decoupled from the pebble flow term, are solved to analyze the burn-up process within a single burn-up cycle. The effect of pebble flow is simulated separately through a discrete fuel shuffling pattern confined by curved pebble flow channels, and the effect of multiple passes of the fuel is represented by logical batches within each spatial region of the core. The on-line thermal-hydraulics feedback is implemented for each burn-up cycle by using the real thermal-hydraulics data of the core operation. The treatment of control rods and absorber balls is carried out by utilizing a coupled neutron transport-diffusion calculation along with discontinuity factors. The physical models mentioned above are established mainly by using a revised version of the V.S.O.P program system. The real operation data of HTR-10 are utilized to verify the methodology presented in this work, which gives good agreement between simulation results and operation data. (author)
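    The decoupled burn-up step described above can be illustrated with the standard iodine-xenon balance equations, the kind of neutron-poison equations such a solver integrates within one burn-up cycle. The yields, decay constants, and flux below are textbook-order-of-magnitude values chosen for illustration, not HTR-10 data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (textbook orders of magnitude, not HTR-10 data).
LAMBDA_I, LAMBDA_XE = 2.9e-5, 2.1e-5   # decay constants (1/s)
GAMMA_I, GAMMA_XE = 0.064, 0.003       # effective fission yields
SIGMA_XE = 2.6e-18                     # Xe-135 absorption cross section (cm^2)
SIGMA_F_PHI = 1e13 * 0.05              # fission rate density proxy (fissions/cm^3/s)

def iodine_xenon(t, y, phi):
    """Balance equations for I-135 and Xe-135 within one burn-up cycle."""
    i135, xe135 = y
    di = GAMMA_I * SIGMA_F_PHI - LAMBDA_I * i135
    dxe = (GAMMA_XE * SIGMA_F_PHI + LAMBDA_I * i135
           - (LAMBDA_XE + SIGMA_XE * phi) * xe135)
    return [di, dxe]

# Integrate from a clean core over three days at constant flux.
sol = solve_ivp(iodine_xenon, (0.0, 3 * 86400.0), [0.0, 0.0], args=(1e13,))
print(sol.y[:, -1])  # near-equilibrium I-135 and Xe-135 concentrations
```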

  14. Optimization Versus Robustness in Simulation : A Practical Methodology, With a Production-Management Case-Study

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Gaury, E.G.A.

    2001-01-01

    Whereas Operations Research has always paid much attention to optimization, practitioners judge the robustness of the 'optimum' solution to be of greater importance. Therefore this paper proposes a practical methodology that is a stagewise combination of the following four proven techniques: (1)

  15. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    Science.gov (United States)

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering. To carry out these two, we can apply traditional methods or intelligent techniques. The difference between them is in the way they process information. In the first case, to simulate an object in a particular state of action, we need to perform the entire process to read the values of the parameters. This is not very convenient for objects for which simulation takes a long time, i.e. when the mathematical calculations are complicated. In the second case, an intelligent solution can support a dedicated style of simulation, in which the object is simulated only in the situations required by the development process. We present research results on an intelligent simulation and control model of an electric-drive vehicle. In the proposed simulation method, an evolutionary strategy simulates the states of the dynamic model, while an intelligent system based on a dedicated neural network controls the co-working modules over the motion time interval. The presented experimental results show the implemented solution in a situation when a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller therefore prevents the load from destruction by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A Novel Methodology for the Simulation of Athletic Tasks on Cadaveric Knee Joints with Respect to In Vivo Kinematics

    Science.gov (United States)

    Bates, Nathaniel A.; Nesbitt, Rebecca J.; Shearn, Jason T.; Myer, Gregory D.; Hewett, Timothy E.

    2015-01-01

    Six degree of freedom (6-DOF) robotic manipulators have simulated clinical tests and gait on cadaveric knees to examine knee biomechanics. However, these activities do not necessarily emulate the kinematics and kinetics that lead to anterior cruciate ligament (ACL) rupture. The purpose of this study was to determine the techniques needed to derive reproducible, in vitro simulations from in vivo skin-marker kinematics recorded during simulated athletic tasks. Input of raw, in vivo, skin-marker-derived motion capture kinematics consistently resulted in specimen failure. The protocol described in this study developed an in-depth methodology to adapt in vivo kinematic recordings into 6-DOF knee motion simulations for drop vertical jumps and sidestep cutting. Our simulation method repeatably produced kinetics consistent with vertical ground reaction patterns while preserving specimen integrity. Athletic task simulation represents an advancement that allows investigators to examine ACL-intact and graft biomechanics during motions that generate greater kinetics, and the athletic tasks are more representative of documented cases of ligament rupture. Establishment of baseline functional mechanics within the knee joint during athletic tasks will serve to advance the prevention, repair and rehabilitation of ACL injuries. PMID:25869454

  17. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
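    To make the renewal-model idea concrete, the following sketch computes a time-dependent conditional rupture probability, P(rupture in the next ΔT | no event for T years), under a Brownian passage-time (inverse Gaussian) recurrence model, one common choice for elastic-rebound probabilities. The mean recurrence, aperiodicity, and elapsed time are illustrative values, not parameters from the paper.

```python
from scipy.stats import invgauss

def conditional_rupture_prob(elapsed, horizon, mean_ri, aperiodicity):
    """P(event in (elapsed, elapsed + horizon] | no event by `elapsed`)
    under a Brownian passage-time (inverse Gaussian) renewal model."""
    # scipy's invgauss has mean mu*scale and coefficient of variation sqrt(mu);
    # choose mu and scale so the mean and aperiodicity (CV) match.
    mu = aperiodicity ** 2
    scale = mean_ri / mu
    dist = invgauss(mu, scale=scale)
    survive_now = dist.sf(elapsed)
    survive_later = dist.sf(elapsed + horizon)
    return (survive_now - survive_later) / survive_now

# Example: mean recurrence 200 yr, aperiodicity 0.5, 150 yr elapsed, 30 yr horizon.
print(conditional_rupture_prob(150.0, 30.0, 200.0, 0.5))
```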

  18. Simulations Of Neutron Beam Optic For Neutron Radiography Collimator Using Ray Tracing Methodology

    International Nuclear Information System (INIS)

    Norfarizan Mohd Said; Muhammad Rawi Mohamed Zin

    2014-01-01

    Ray-tracing is a technique for simulating the performance of neutron instruments. McStas, an open-source software package based on a meta-language, is a tool for carrying out ray-tracing simulations. The program has been successfully applied to investigating neutron guide design, flux optimization and other related areas with high complexity and precision. The aim of this paper is to discuss the implementation of the ray-tracing technique with McStas for simulating the performance of the neutron collimation system developed for the imaging system of the TRIGA RTP reactor. The code for the simulation was developed and the results are presented. The analysis of the performance is reported and discussed. (author)
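    In the same ray-tracing spirit, though far simpler than a McStas model, the sketch below traces straight neutron rays through an idealized cylindrical collimator by Monte Carlo and estimates the transmitted fraction and beam divergence. The aperture diameter and length are arbitrary illustrative values, not the TRIGA collimator geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized collimator: a straight cylindrical bore of diameter D and length L (cm).
D, L = 2.0, 100.0          # illustrative values; divergence scales like arctan(D/L)
n_rays = 100_000

# Sample ray start points uniformly over the entrance aperture and
# directions uniformly within a generous forward cone.
r = (D / 2) * np.sqrt(rng.random(n_rays))
phi = 2 * np.pi * rng.random(n_rays)
x0, y0 = r * np.cos(phi), r * np.sin(phi)
theta = np.arctan(2 * D / L) * np.sqrt(rng.random(n_rays))
psi = 2 * np.pi * rng.random(n_rays)

# Propagate to the exit plane; because the bore is convex, a straight ray
# is transmitted iff both its entrance and exit points lie inside the bore.
x1 = x0 + L * np.tan(theta) * np.cos(psi)
y1 = y0 + L * np.tan(theta) * np.sin(psi)
transmitted = x1**2 + y1**2 <= (D / 2) ** 2

print("transmitted fraction:", transmitted.mean())
print("max divergence (deg):", np.degrees(theta[transmitted]).max())
```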

  19. Optimization of Intelligent Munition Warfare Using Agent-Based Simulation Software and Design of Experiments Methodology

    National Research Council Canada - National Science Library

    Floersheim, Bruce; Hou, Gene

    2006-01-01

    ... mechanism for a number of vehicles caught in the killzone. Thus, it is useful to study and attempt to model through equations and simulation the interaction between enemy agents and these new munitions...

  20. Finding Biomarker Signatures in Pooled Sample Designs: A Simulation Framework for Methodological Comparisons

    Directory of Open Access Journals (Sweden)

    Anna Telaar

    2010-01-01

    Detection of discriminating patterns in gene expression data can be accomplished by using various methods of statistical learning. It has been proposed that sample pooling in this context would have negative effects; however, pooling cannot always be avoided. We propose a simulation framework to explicitly investigate the parameters of patterns, experimental design, noise, and choice of method in order to find out which effects on classification performance are to be expected. We use a two-group classification task and simulated gene expression data with independent differentially expressed genes as well as bivariate linear patterns and the combination of both. Our results show a clear increase of prediction error with pool size. For pooled training sets, powered partial least squares discriminant analysis outperforms discriminant analysis, random forests, and support vector machines with linear or radial kernel for two of three simulated scenarios. The proposed simulation approach can be implemented to systematically investigate a number of additional scenarios of practical interest.
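    A stripped-down version of such a pooling experiment is sketched below: simulate two-group expression data with a few differentially expressed genes, form pools by averaging training samples within groups, and track test error as pool size grows. The dimensions, effect size, and classifier (plain logistic regression rather than the powered PLS variant used in the study) are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_genes, n_diff, n_train, n_test = 200, 10, 60, 200

def simulate(n, shift):
    """Two balanced groups; the first n_diff genes are shifted in group 1."""
    x = rng.normal(size=(n, n_genes))
    y = np.repeat([0, 1], n // 2)
    x[y == 1, :n_diff] += shift
    return x, y

def pool(x, y, pool_size):
    """Average non-overlapping blocks of pool_size samples within each class."""
    xs, ys = [], []
    for label in (0, 1):
        xc = x[y == label]
        for i in range(0, len(xc) - pool_size + 1, pool_size):
            xs.append(xc[i:i + pool_size].mean(axis=0))
            ys.append(label)
    return np.array(xs), np.array(ys)

x_test, y_test = simulate(n_test, shift=1.0)
for pool_size in (1, 2, 5, 10):
    x_tr, y_tr = simulate(n_train, shift=1.0)
    xp, yp = pool(x_tr, y_tr, pool_size)
    clf = LogisticRegression(max_iter=1000).fit(xp, yp)
    print(pool_size, 1 - clf.score(x_test, y_test))  # prediction error vs pool size
```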

  1. Analysis and development of numerical methodologies for simulation of flow control with dielectric barrier discharge actuators

    OpenAIRE

    Abdollahzadehsangroudi, Mohammadmahdi

    2014-01-01

    The aim of this thesis is to investigate and develop different numerical methodologies for modeling Dielectric Barrier Discharge (DBD) plasma actuators for flow control purposes. Two different modeling approaches were considered: one based on a plasma-fluid model and the other based on a phenomenological model. A three-component plasma-fluid model based on the transport equations of charged particles was implemented in this thesis in OpenFOAM, using several techniques to redu...

  2. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    Science.gov (United States)

    Schuster, J.

    2018-02-01

    Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
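    The purely geometric part of such a detector MTF can be illustrated with the classic result that a uniform square pixel aperture of width w contributes MTF(f) = |sinc(f·w)|. The sketch below computes this both analytically and via the FFT of the pixel response; the 15 μm pitch is an arbitrary assumption, and diffusion crosstalk is deliberately omitted.

```python
import numpy as np

pitch = 15e-6                  # assumed pixel pitch (m), illustrative only
n, dx = 4096, 0.5e-6           # number of samples and spatial step (m)

# Pixel spatial response: uniform sensitivity across one pixel aperture.
x = (np.arange(n) - n // 2) * dx
psf = (np.abs(x) <= pitch / 2).astype(float)
psf /= psf.sum()

# The MTF is the magnitude of the Fourier transform of the normalized response.
mtf = np.abs(np.fft.rfft(psf))
freq = np.fft.rfftfreq(n, dx)  # spatial frequency (cycles/m)

# Compare with the analytic |sinc| at the detector Nyquist frequency.
f_nyq = 1 / (2 * pitch)
idx = np.argmin(np.abs(freq - f_nyq))
print("FFT MTF at Nyquist:", mtf[idx])
print("analytic:", abs(np.sinc(f_nyq * pitch)))  # np.sinc(x) = sin(pi x)/(pi x)
```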

  3. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    Science.gov (United States)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from the National Institutes of Health (R01 EB018302) is greatly appreciated.
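    A minimal sketch of the kind of solver comparison described above: solve a sparse symmetric positive-definite system with conjugate gradient, unpreconditioned versus preconditioned with an incomplete LU factorization, counting iterations. The 2D Poisson matrix stands in for a pressure-type system from the flow solver; all sizes and tolerances are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix (5-point stencil) as a stand-in for a pressure system.
n = 64
main = 4 * np.ones(n * n)
side = -np.ones(n * n - 1)
side[np.arange(1, n * n) % n == 0] = 0   # no coupling across grid-row boundaries
updown = -np.ones(n * n - n)
A = sp.diags([main, side, side, updown, updown],
             [0, 1, -1, n, -n], format="csc")
b = np.ones(n * n)

def run(M=None):
    """Run CG with optional preconditioner M; return the iteration count."""
    it = 0
    def cb(xk):
        nonlocal it
        it += 1
    _, info = spla.cg(A, b, M=M, callback=cb)
    return it, info

# Incomplete LU factorization wrapped as a preconditioning linear operator.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

print("plain CG (iterations, info):", run())
print("ILU-preconditioned CG (iterations, info):", run(M))
```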

  4. OPTICAL MONITORING OF THE BROAD-LINE RADIO GALAXY 3C 390.3

    International Nuclear Information System (INIS)

    Dietrich, Matthias; Peterson, Bradley M.; Grier, Catherine J.; Bentz, Misty C.; Eastman, Jason; Frank, Stephan; Gonzalez, Raymond; Marshall, Jennifer L.; DePoy, Darren L.; Prieto, Jose L.

    2012-01-01

    We have undertaken a new ground-based monitoring campaign on the broad-line radio galaxy 3C 390.3 to improve the measurement of the size of the broad emission-line region and to estimate the black hole mass. Optical spectra and g-band images were observed in late 2005 for three months using the 2.4 m telescope at MDM Observatory. Integrated emission-line flux variations were measured for the hydrogen Balmer lines Hα, Hβ, Hγ, and for the helium line He II λ4686, as well as g-band fluxes and the optical active galactic nucleus (AGN) continuum at λ = 5100 Å. The g-band fluxes and the optical AGN continuum vary simultaneously within the uncertainties, τ_cent = (0.2 ± 1.1) days. We find that the emission-line variations are delayed with respect to the variable g-band continuum by τ(Hα) = 56.3 (+2.4/−6.6) days, τ(Hβ) = 44.3 (+3.0/−3.3) days, τ(Hγ) = 58.1 (+4.3/−6.1) days, and τ(He II 4686) = 22.3 (+6.5/−3.8) days. The blue and red peaks in the double-peaked line profiles, as well as the blue and red outer profile wings, vary simultaneously within ±3 days. This provides strong support for gravitationally bound orbital motion of the dominant part of the line-emitting gas. Combining the time delay of the strong Balmer emission lines Hα and Hβ and the separation of the blue and red peaks in the broad double-peaked profiles in their rms spectra, we determine M_bh^vir = 1.77 (+0.29/−0.31) × 10^8 M_☉, and using σ_line of the rms spectra, M_bh^vir = 2.60 (+0.23/−0.31) × 10^8 M_☉ for the central black hole of 3C 390.3, respectively. Using the inclination angle of the line-emitting region, which is measured from superluminal motion detected in the radio range, accretion disk models to fit the optical double-peaked emission-line profiles, and X-ray observations, the mass of the black hole amounts to M_bh = 0.86 (+0.19/−0.18) × 10^9 M_☉ (peak separation) and M_bh = 1.26 (+0.21/−0.16) × 10^9 M_☉ (σ_line), respectively. This result is consistent with the black

  5. OPTICAL MONITORING OF THE BROAD-LINE RADIO GALAXY 3C 390.3

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, Matthias; Peterson, Bradley M.; Grier, Catherine J.; Bentz, Misty C.; Eastman, Jason; Frank, Stephan; Gonzalez, Raymond; Marshall, Jennifer L.; DePoy, Darren L.; Prieto, Jose L., E-mail: dietrich@astronomy.ohio-state.edu [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)

    2012-09-20

    We have undertaken a new ground-based monitoring campaign on the broad-line radio galaxy 3C 390.3 to improve the measurement of the size of the broad emission-line region and to estimate the black hole mass. Optical spectra and g-band images were observed in late 2005 for three months using the 2.4 m telescope at MDM Observatory. Integrated emission-line flux variations were measured for the hydrogen Balmer lines Hα, Hβ, Hγ, and for the helium line He II λ4686, as well as g-band fluxes and the optical active galactic nucleus (AGN) continuum at λ = 5100 Å. The g-band fluxes and the optical AGN continuum vary simultaneously within the uncertainties, τ_cent = (0.2 ± 1.1) days. We find that the emission-line variations are delayed with respect to the variable g-band continuum by τ(Hα) = 56.3 (+2.4/−6.6) days, τ(Hβ) = 44.3 (+3.0/−3.3) days, τ(Hγ) = 58.1 (+4.3/−6.1) days, and τ(He II 4686) = 22.3 (+6.5/−3.8) days. The blue and red peaks in the double-peaked line profiles, as well as the blue and red outer profile wings, vary simultaneously within ±3 days. This provides strong support for gravitationally bound orbital motion of the dominant part of the line-emitting gas. Combining the time delay of the strong Balmer emission lines Hα and Hβ and the separation of the blue and red peaks in the broad double-peaked profiles in their rms spectra, we determine M_bh^vir = 1.77 (+0.29/−0.31) × 10^8 M_☉, and using σ_line of the rms spectra, M_bh^vir = 2.60 (+0.23/−0.31) × 10^8 M_☉ for the central black hole of 3C 390.3, respectively. Using the inclination angle of the line-emitting region which is measured from superluminal motion detected in the radio range, accretion disk models to fit the optical double-peaked emission-line profiles, and X-ray observations

  6. Numerical simulation methodologies for design and development of Diffuser-Augmented Wind Turbines – analysis and comparison

    Directory of Open Access Journals (Sweden)

    Michał Lipian

    2016-01-01

    Different numerical computation methods used to develop a methodology for fast, efficient, reliable design and comparison of Diffuser-Augmented Wind Turbine (DAWT) geometries are presented. The demand for such methods is evident, given the multitude of geometrical parameters that influence the flow character through ducted turbines. The results of the Actuator Disk Model (ADM) simulations are confronted with a simulation method of higher order of accuracy, i.e. the 3D Fully-resolved Rotor Model (FRM), at the rotor design point. Both are checked for consistency with the experimental results measured in the wind tunnel at the Institute of Turbomachinery (IMP), Lodz University of Technology (TUL). An attempt to find an efficient method (with a compromise between accuracy and design time) for the flow analysis pertinent to the DAWT is a novel approach presented in this paper.
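    For orientation, the Actuator Disk Model mentioned above reduces, in its simplest one-dimensional momentum-theory form, to closed-form relations between the axial induction factor and the thrust and power coefficients; the sketch below evaluates them and recovers the Betz optimum. This is bare-turbine theory, not the diffuser-augmented variant studied in the paper.

```python
import numpy as np

# 1D momentum theory for a bare actuator disk:
#   CT = 4a(1 - a),  CP = 4a(1 - a)^2, with axial induction factor a.
a = np.linspace(0.0, 0.5, 501)
ct = 4 * a * (1 - a)
cp = 4 * a * (1 - a) ** 2

i = np.argmax(cp)
print(f"optimal induction a = {a[i]:.3f}")        # Betz optimum: a = 1/3
print(f"max power coefficient CP = {cp[i]:.4f}")  # 16/27 ~ 0.5926
print(f"thrust coefficient there CT = {ct[i]:.4f}")
```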

  7. Methodology to Assess No Touch Audit Software Using Simulated Building Utility Data

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States); Langner, M. Rois [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-10-01

    This report describes a methodology developed for assessing the performance of no touch building audit tools and presents results for an available tool. Building audits are conducted in many commercial buildings to reduce building energy costs and improve building operation. Because the audits typically require significant input obtained by building engineers, they are usually only affordable for larger commercial building owners. In an effort to help small building and business owners gain the benefits of an audit at a lower cost, no touch building audit tools have been developed to remotely analyze a building's energy consumption.

  8. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.; Fuentes-Moreno, J. A.; Muljadi, Eduard; Gomez-Lazaro, E.

    2015-09-14

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.

  9. Calculation of the electrostatic potential of lipid bilayers from molecular dynamics simulations: methodological issues

    DEFF Research Database (Denmark)

    Gurtovenko, Andrey A; Vattulainen, Ilpo

    2009-01-01

    of the electrostatic potential from atomic-scale molecular dynamics simulations of lipid bilayers. We discuss two slightly different forms of the Poisson equation that are normally used to calculate the membrane potential: (i) a classical form when the potential and the electric field are chosen to be zero on one ... systems). For symmetric bilayers we demonstrate that both approaches give essentially the same potential profiles, provided that simulations are long enough (a production run of at least 100 ns is required) and that fluctuations of the center of mass of a bilayer are properly accounted for. In contrast ...
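    The classical route mentioned in the abstract integrates the Poisson equation twice: given the charge density profile ρ(z) across the box, the field and the potential follow from cumulative integration with both set to zero at one edge. A sketch under those assumptions, applied to a synthetic charge profile rather than MD output, is shown below.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

# Synthetic charge density profile across the box (C/m^3) with zero net charge:
# two opposite Gaussian peaks standing in for headgroup/counterion layers.
z = np.linspace(0.0, 6e-9, 1200)  # box coordinate (m)
rho = 1e8 * (np.exp(-((z - 2e-9) / 3e-10) ** 2)
             - np.exp(-((z - 4e-9) / 3e-10) ** 2))

# Poisson: dE/dz = rho/eps0 and E = -d(psi)/dz. First integration gives the
# field E(z), zero at z = 0; the second gives the potential, also zero at z = 0.
efield = cumulative_trapezoid(rho, z, initial=0.0) / EPS0
psi = -cumulative_trapezoid(efield, z, initial=0.0)

print("potential drop across the box (V):", psi[-1])
```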

  10. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    Science.gov (United States)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determination of the crop and cover management factor (C-factor) for the Universal Soil Loss Equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the Central European region, while also taking into account the different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and fortuity of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams, with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Due to the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations. We therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, as well as several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally. The remaining values will be determined by interpolation or by a model analogy. There are several methods used for C-factor calculation from measured experimental data. Some of these are not suitable to be used considering the type of data gathered. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used. The problems concerning the selection of a relevant measurement method as well as the final
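    In USLE terms, the C-factor for a crop stage is commonly obtained as the soil-loss ratio between the cropped plot and a continuous-fallow reference under the same rainfall, and the seasonal value weights those ratios by the fraction of annual rainfall erosivity in each stage. The sketch below shows that bookkeeping with invented plot numbers; it is not data from the project.

```python
# Soil loss measured under the rainfall simulator (t/ha) per phenophase,
# for the cropped plot and the bare-fallow reference plot (invented numbers).
phenophases = ["seedbed", "establishment", "development", "maturity"]
loss_crop = [1.8, 1.2, 0.5, 0.3]
loss_fallow = [2.0, 2.1, 1.9, 1.8]

# Fraction of annual rainfall erosivity (R) falling in each phenophase.
r_fraction = [0.15, 0.25, 0.40, 0.20]

# Stage soil-loss ratios and the erosivity-weighted seasonal C-factor.
ratios = [c / f for c, f in zip(loss_crop, loss_fallow)]
c_factor = sum(r * s for r, s in zip(r_fraction, ratios))

for name, s in zip(phenophases, ratios):
    print(f"{name}: soil-loss ratio = {s:.2f}")
print(f"seasonal C-factor = {c_factor:.3f}")
```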

  11. An optical and near-infrared polarization survey of Seyfert and broad-line radio galaxies. Pt. 1

    International Nuclear Information System (INIS)

    Brindle, C.; Hough, J.H.; Bailey, J.A.; Axon, D.J.; Ward, M.J.; McLean, I.S.

    1990-01-01

    We present new broad-band optical and near-infrared (0.44-2.2 μm) flux density and polarization measurements of a sample of 71 Seyfert galaxies and three broad-line radio galaxies. We confirm the results of earlier studies which show that the polarization of Seyferts is generally low in the V-band and at longer wavelengths, but in the B-band somewhat higher polarizations are commonly found. After correction has been made for the effects of stellar dilution, we find that Seyfert 2 nuclei are probably more highly polarized than Seyfert 1's. The small sample of Seyfert 2's selected using the 'warm' IRAS colour criterion tends to be more highly polarized than those selected by optical techniques. (author)

  12. A FEM based methodology to simulate multiple crack propagation in friction stir welds

    DEFF Research Database (Denmark)

    Lepore, Marcello; Carlone, Pierpaolo; Berto, Filippo

    2017-01-01

    The residual stress field was inferred by a thermo-mechanical FEM simulation of the process, considering temperature-dependent elastic-plastic material properties, material softening and isotropic hardening. Afterwards, cracks introduced in the selected location of the FEM computational domain allow stress

  13. Design of a distributed simulation environment for building control applications based on systems engineering methodology

    NARCIS (Netherlands)

    Yahiaoui, Azzedine

    2018-01-01

    The analysis of innovative designs that distribute control to buildings over a network is currently a challenging task, as existing building performance simulation tools do not offer sufficient capabilities and flexibility to respond to the full complexity of Automated Buildings (ABs). For

  14. Mixed-realism simulation of adverse event disclosure: an educational methodology and assessment instrument.

    Science.gov (United States)

    Matos, Francisco M; Raemer, Daniel B

    2013-04-01

    Physicians have an ethical duty to disclose adverse events to patients or families. Various strategies have been reported for teaching disclosure, but no instruments have been shown to be reliable for assessing them. The aims of this study were to report a structured method for teaching adverse event disclosure using mixed-realism simulation, to develop and begin to validate an instrument for assessing performance, and to describe the disclosure practice of anesthesiology trainees. Forty-two anesthesiology trainees participated in a 2-part exercise with mixed-realism simulation. The first part took place using a mannequin patient in a simulated operating room, where trainees became enmeshed in a clinical episode that led to an adverse event, and the second part in a simulated postoperative care unit, where the learner is asked to disclose to a standardized patient who systematically moves through epochs of grief response. Two raters scored subjects using an assessment instrument we developed that combines a 4-element behaviorally anchored rating scale (BARS) and a 5-stage objective rating scale. The performance scores for elements within the BARS and the 5-stage instrument showed excellent interrater reliability (Cohen's κ = 0.7), appropriate range (mean range for BARS, 4.20-4.47; mean range for 5-stage instrument, 3.73-4.46), and high internal consistency. This method of mixed-realism simulation engages learners in an adverse event and allows them to practice disclosure to a structured range of patient responses. We have developed a reliable 2-part instrument with strong psychometric properties for assessing disclosure performance.

  15. EDDINGTON RATIO DISTRIBUTION OF X-RAY-SELECTED BROAD-LINE AGNs AT 1.0 < z < 2.2

    International Nuclear Information System (INIS)

    Suh, Hyewon; Hasinger, Günther; Steinhardt, Charles; Silverman, John D.; Schramm, Malte

    2015-01-01

    We investigate the Eddington ratio distribution of X-ray-selected broad-line active galactic nuclei (AGNs) in the redshift range 1.0 < z < 2.2, where the number density of AGNs peaks. Combining the optical and Subaru/Fiber Multi Object Spectrograph near-infrared spectroscopy, we estimate black hole masses for broad-line AGNs in the Chandra Deep Field South (CDF-S), Extended Chandra Deep Field South (E-CDF-S), and the XMM-Newton Lockman Hole (XMM-LH) surveys. AGNs with similar black hole masses show a broad range of AGN bolometric luminosities, which are calculated from X-ray luminosities, indicating that the accretion rate of black holes is widely distributed. We find a substantial fraction of massive black holes accreting significantly below the Eddington limit at z ≲ 2, in contrast to what is generally found for luminous AGNs at high redshift. Our analysis of observational selection biases indicates that the “AGN cosmic downsizing” phenomenon can be simply explained by the strong evolution of the comoving number density at the bright end of the AGN luminosity function, together with the corresponding selection effects. However, one might need to consider a correlation between the AGN luminosity and the accretion rate of black holes, in which luminous AGNs have higher Eddington ratios than low-luminosity AGNs, in order to understand the relatively small fraction of low-luminosity AGNs with high accretion rates in this epoch. Therefore, the observed downsizing trend could be interpreted as massive black holes with low accretion rates, which are relatively fainter than less-massive black holes with efficient accretion.

  16. THE COMPLEX CIRCUMNUCLEAR ENVIRONMENT OF THE BROAD-LINE RADIO GALAXY 3C 390.3 REVEALED BY CHANDRA HETG

    Energy Technology Data Exchange (ETDEWEB)

    Tombesi, F.; Kallman, T.; Leutenegger, M. A. [X-ray Astrophysics Laboratory, NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Reeves, J. N. [Center for Space Science and Technology, University of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250 (United States); Reynolds, C. S.; Mushotzky, R. F.; Behar, E. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Braito, V. [INAF—Osservatorio Astronomico di Brera, Via Bianchi 46, I-23807 Merate (Italy); Cappi, M., E-mail: francesco.tombesi@nasa.gov, E-mail: ftombesi@astro.umd.edu [Department of Physics, Technion 32000, Haifa 32000 (Israel)

    2016-10-20

    We present the first high spectral resolution X-ray observation of the broad-line radio galaxy 3C 390.3, obtained with the high-energy transmission grating spectrometer on board the Chandra X-ray Observatory. The spectrum shows complex emission and absorption features in both the soft X-rays and the Fe K band. We detect emission and absorption lines in the energy range E = 700–1000 eV associated with ionized Fe L transitions (Fe XVII–XX). An emission line at the energy of E ≃ 6.4 keV consistent with Fe Kα is also observed. Our best-fit model requires at least three different components: (i) a hot emission component, likely associated with the hot interstellar medium in this elliptical galaxy, with temperature kT = 0.5 ± 0.1 keV; (ii) a warm absorber with ionization parameter log ξ = 2.3 ± 0.5 erg s⁻¹ cm, column density log N_H = 20.7 ± 0.1 cm⁻², and outflow velocity v_out < 150 km s⁻¹; and (iii) a lowly ionized reflection component in the Fe K band, likely associated with the optical broad-line region or the outer accretion disk. This evidence suggests the possibility that we are looking directly down the ionization cone of this active galaxy and that the central X-ray source only photoionizes along the unobscured cone. This is overall consistent with the angle-dependent unified picture of active galactic nuclei.

  17. De-individualized psychophysiological strain assessment during a flight simulation test—Validation of a space methodology

    Science.gov (United States)

    Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen

    For the evaluation of an operator's skill reliability, indicators of work quality as well as of psychophysiological states during the work have to be considered. The methodology and measurement equipment presented herein were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. In this study, however, the method was applied to a comparable terrestrial task: the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV), which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases are interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT), and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method will possibly promote a wide range of other future applications in aviation and space psychology.
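    The abstract leaves the construction of the arousal vector unspecified; one plausible reading, sketched below purely for illustration, z-scores each autonomic channel against the individual calibration baseline and treats the magnitude of the standardized vector as a scalar strain index. The channel names and numbers are invented.

```python
import numpy as np

# Invented calibration (rest) statistics per autonomic channel for one subject.
channels = ["heart_rate", "skin_conductance", "respiration_rate"]
baseline_mean = np.array([68.0, 2.1, 14.0])
baseline_std = np.array([4.0, 0.4, 1.5])

def arousal_vector(sample):
    """Standardize each channel against the subject's own calibration baseline."""
    return (np.asarray(sample) - baseline_mean) / baseline_std

def strain_index(sample):
    """Scalar strain summary: magnitude of the standardized arousal vector."""
    return float(np.linalg.norm(arousal_vector(sample)))

# Measurements during one FST task segment (invented values).
during_task = [86.0, 3.4, 19.0]
print(arousal_vector(during_task))  # per-channel standardized deviations
print(strain_index(during_task))    # larger values read as higher strain
```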

  18. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    Science.gov (United States)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
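    One of the property evaluations described, the equilibrium lattice constant, can be sketched as an energy scan: evaluate the potential energy over a range of lattice parameters and refine the minimum with a local quadratic fit. The Morse-shaped energy function below is a stand-in for a real interatomic-potential evaluation, with its minimum placed arbitrarily near 3.52 Å.

```python
import numpy as np

def energy_per_atom(a):
    """Stand-in for an interatomic-potential evaluation: a Morse-shaped
    cohesive-energy curve with its minimum placed at a0 = 3.52 angstrom."""
    a0, e0, k = 3.52, -4.45, 2.0
    d = a - a0
    return e0 * (2.0 * np.exp(-k * d) - np.exp(-2.0 * k * d))

def fit_lattice_constant(a_grid, energies):
    """Refine the minimum with a quadratic fit around the lowest sample."""
    i = int(np.argmin(energies))
    sel = slice(max(i - 2, 0), min(i + 3, len(a_grid)))
    c2, c1, _ = np.polyfit(a_grid[sel], energies[sel], 2)
    return -c1 / (2.0 * c2)

a_grid = np.linspace(3.2, 3.9, 36)
energies = energy_per_atom(a_grid)
a_fit = fit_lattice_constant(a_grid, energies)
print("fitted lattice constant (angstrom):", a_fit)
print("cohesive energy there (eV/atom):", energy_per_atom(a_fit))
```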

  19. WE-H-BRC-04: Implement Lean Methodology to Make Our Current Process of CT Simulation to Treatment More Efficient

    Energy Technology Data Exchange (ETDEWEB)

    Boddu, S; Morrow, A; Krishnamurthy, N; McVicker, A; Deb, N; Rangaraj, D [Scott & White Hospital, Temple, TX (United States)

    2016-06-15

    Purpose: Our goal is to implement lean methodology to make our current process from CT simulation to treatment more efficient. Methods: In this study, we implemented lean methodology and tools and employed flowcharts in Excel for process mapping. We formed a group of physicians, physicists, dosimetrists, therapists and a clinical physics assistant and huddled bi-weekly to map current value streams. We performed gemba walks and observed current processes from scheduling patient CT simulations to treatment plan approval. From this, the entire workflow was categorized into processes, sub-processes, and tasks. For each process we gathered data on touch time, first-time quality, undesirable effects (UDEs), and wait times from relevant members of each task. UDEs were binned by frequency of occurrence. We huddled to map the future state and to find solutions to high-frequency UDEs. We implemented visual controls and hard stops, and documented issues found during chart checks prior to treatment plan approval. Results: We identified approximately 64 UDEs in our current workflow that could cause delays, re-work, compromise the quality and safety of patient treatments, or cause wait times between 1 and 6 days. While some UDEs are unavoidable, such as re-planning due to patient weight loss, eliminating avoidable UDEs is our goal. In 2015, we found 399 issues with patient treatment plans, of which 261, 95 and 43 were of low, medium and high severity, respectively. We also mapped patient-specific QA processes for IMRT/RapidArc and SRS/SBRT, involving 10 and 18 steps, respectively. From these, 13 UDEs were found and 5 were addressed, which solved 20% of the issues. Conclusion: We have successfully implemented lean methodology and tools. We are further mapping treatment-site-specific workflows to identify bottlenecks, potential breakdowns and personnel allocation, and we employ tools like failure mode and effects analysis to mitigate risk factors and make this process efficient.

  20. WE-H-BRC-04: Implement Lean Methodology to Make Our Current Process of CT Simulation to Treatment More Efficient

    International Nuclear Information System (INIS)

    Boddu, S; Morrow, A; Krishnamurthy, N; McVicker, A; Deb, N; Rangaraj, D

    2016-01-01

    Purpose: Our goal is to implement lean methodology to make our current process from CT simulation to treatment more efficient. Methods: In this study, we implemented lean methodology and tools and employed flowcharts in Excel for process mapping. We formed a group of physicians, physicists, dosimetrists, therapists and a clinical physics assistant and huddled bi-weekly to map current value streams. We performed gemba walks and observed current processes from scheduling patient CT simulations to treatment plan approval. From this, the entire workflow was categorized into processes, sub-processes, and tasks. For each process we gathered data on touch time, first-time quality, undesirable effects (UDEs), and wait times from relevant members of each task. UDEs were binned by frequency of occurrence. We huddled to map the future state and to find solutions to high-frequency UDEs. We implemented visual controls and hard stops, and documented issues found during chart checks prior to treatment plan approval. Results: We identified approximately 64 UDEs in our current workflow that could cause delays, re-work, compromise the quality and safety of patient treatments, or cause wait times between 1 and 6 days. While some UDEs are unavoidable, such as re-planning due to patient weight loss, eliminating avoidable UDEs is our goal. In 2015, we found 399 issues with patient treatment plans, of which 261, 95 and 43 were of low, medium and high severity, respectively. We also mapped patient-specific QA processes for IMRT/RapidArc and SRS/SBRT, involving 10 and 18 steps, respectively. From these, 13 UDEs were found and 5 were addressed, which solved 20% of the issues. Conclusion: We have successfully implemented lean methodology and tools. We are further mapping treatment-site-specific workflows to identify bottlenecks, potential breakdowns and personnel allocation, and we employ tools like failure mode and effects analysis to mitigate risk factors and make this process efficient.

  1. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    Science.gov (United States)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several

  2. Modelling and Simulation Methodology for Dynamic Resources Assignment System in Container Terminal

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    As competition among international container terminals has become increasingly fierce, every port is striving to maintain its competitive edge and provide satisfactory services to port users. By virtue of advances in information technology, many efforts are being made to raise port competitiveness through advanced operation systems, and judging from the viewpoint of investment effect, these efforts are preferable to infrastructure expansion and additional equipment acquisition. Based on simulation, this study has tried to prove that RFID-based real-time location system (RTLS) data collection and dynamic operation of transfer equipment bring a positive effect on productivity improvement and resource utilization enhancement. Moreover, the demand for real-time data in container terminal operation was studied, operation processes were redesigned along with the collection of related data, and simulations were conducted on this basis. As a result, a much higher productivity improvement can be expected.

  3. Evaluating Simulation Methodologies to Determine Best Strategies to Maximize Student Learning.

    Science.gov (United States)

    Scherer, Yvonne K; Foltz-Ramos, Kelly; Fabry, Donna; Chao, Ying-Yu

    2016-01-01

    Limited evidence exists as to the most effective ways to provide simulation experiences to maximize student learning. This quasi-experimental study investigated 2 different strategies, repeated versus single exposure and participation versus observation, on student outcomes following exposure to a high-fidelity acute asthma exacerbation scenario. Immediate repeated exposure resulted in significantly higher scores on knowledge, student satisfaction and self-confidence, and clinical performance measures than a single exposure. Significant intergroup differences were found in participants' satisfaction and self-confidence as compared with observers. Implications for nurse educators include expanding the observer role when designing repeated exposure to simulations and integrating technical, cognitive, and behavioral outcomes as a way for faculty to evaluate students' clinical performance. Published by Elsevier Inc.

  4. How Does Environmental Regulation Affect Industrial Transformation? A Study Based on the Methodology of Policy Simulation

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2016-01-01

    The difference in factor input structure determines different responses to environmental regulation. This paper constructs a theoretical model including environmental regulation, factor input structure, and industrial transformation, and conducts a policy simulation based on the different influencing mechanisms of environmental regulation, taking industrial heterogeneity into account. The findings show that the impact of environmental regulation on industrial transformation reflects the balance between the resource-allocation distortion effect and the technology effect. Environmental regulation will promote industrial transformation when the technology effect of environmental regulation is stronger than the distortion effect on resource allocation. In particular, command-control environmental regulation has a significant incentive effect and a spillover effect on technological innovation in cleaning industries, but these effects do not exist in pollution-intensive industries. Command-control environmental regulation promotes industrial transformation. The simulation results show that market-incentive environmental regulation behaves similarly to command-control regulation.

  5. Methodology Development of Computationally-Efficient Full Vehicle Simulations for the Entire Blast Event

    Science.gov (United States)

    2015-08-06

    ... soldiers, it is imperative to analyze the impact of each sub-event on soldier injuries. Using traditional finite element analysis techniques [1-6] to ... (CONSTRAINED_LAGRANGE_IN_SOLID) and the results from another commonly used non-linear explicit solver for impact simulations (RADIOSS [4]) using a coupling

  6. Development of Fast-Running Simulation Methodology Using Neural Networks for Load Follow Operation

    International Nuclear Information System (INIS)

    Seong, Seung-Hwan; Park, Heui-Youn; Kim, Dong-Hoon; Suh, Yong-Suk; Hur, Seop; Koo, In-Soo; Lee, Un-Chul; Jang, Jin-Wook; Shin, Yong-Chul

    2002-01-01

    A new fast-running analytic model has been developed for analyzing load follow operation. The new model is based on neural network theory, which has the capability of modeling the input/output relationships of a nonlinear system. The new model is made up of two error back-propagation neural networks and procedures to calculate core parameters, such as the distributions and density of xenon, in a quasi-steady-state core, as in load follow operation. One neural network is designed to retrieve the axial offset of the power distribution, and the other the reactivity corresponding to a given core condition. The training data sets for the neural networks in the new model are generated with a three-dimensional nodal code together with the measured data from the first-day test of load follow operation. Using the new model, the simulation results of the 5-day load follow test in a pressurized water reactor show good agreement between the simulation data and the actual measured data. The required computing time for simulating a load follow operation is comparable to that of a fast-running lumped model. Moreover, the new model does not require additional engineering factors to compensate for the difference between the actual measurements and analysis results, because neural networks can inherently learn and adapt to new situations.
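    A minimal stand-in for one of the two error back-propagation networks described: a small regression network mapping core-state parameters to axial offset. The feature set, the synthetic relationship, and the network size below are illustrative assumptions, not the study's design; a second network of the same shape would serve the reactivity channel.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: core state -> axial offset (invented relationship).
n = 2000
power = rng.uniform(0.5, 1.0, n)       # relative power level
rod_pos = rng.uniform(0.0, 1.0, n)     # normalized control rod insertion
power_hist = rng.uniform(0.5, 1.0, n)  # average power over preceding hours
x = np.column_stack([power, rod_pos, power_hist])
axial_offset = (0.3 * rod_pos - 0.2 * (power - power_hist)
                + 0.01 * rng.normal(size=n))

# One small back-propagation network for the axial-offset channel.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(x, axial_offset)

# Fast-running query for a hypothetical load-follow state.
print(net.predict([[0.7, 0.6, 0.95]]))
```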

  7. Unsteady aerodynamics simulation of a full-scale horizontal axis wind turbine using CFD methodology

    International Nuclear Information System (INIS)

    Cai, Xin; Gu, Rongrong; Pan, Pan; Zhu, Jie

    2016-01-01

    Highlights: • A full-scale HAWT is simulated under operational conditions of wind shear and yaw. • The CFD method and sliding mesh are adopted to complete the calculation. • Thrust and torque of the blades reach their peak and valley at the same time under wind shear. • The wind turbine produces a yaw moment during the whole revolution in the yaw case. • The torques and thrusts of the three blades present cyclical changes. - Abstract: The aerodynamic performance of wind turbines is significantly influenced by the unsteady flow around the rotor blades. The unsteady aerodynamics of Horizontal Axis Wind Turbines (HAWTs) is still poorly understood because of the complex flow physics. In this study, the unsteady aerodynamic configuration of a full-scale HAWT is simulated with consideration of wind shear, tower shadow and yaw motion. The modeled wind turbine, which contains a tapered tower, rotor overhang and tilted rotor shaft, is constructed with reference to successfully commercially operated wind turbines designed by NEG Micon and Vestas. A validated CFD method is utilized to analyze the unsteady aerodynamic characteristics that affect the performance of such a full-scale HAWT. The approach of sliding mesh is used to carefully deal with the interface between static and moving parts in the flow field. The annual average wind velocity and the wind profile in the atmospheric boundary layer are applied as boundary conditions. Considering the effects of wind shear and tower shadow, the simulation results show that each blade reaches its maximum and minimum aerodynamic loads almost at the same time during the rotation cycle. The blade-tower interaction has a great impact on the power output performance. The wind turbine produces a yaw moment during the whole revolution, and the maximum aerodynamic loads appear at the upwind azimuth in the yaw computation case.

  8. Comprehensive MRI simulation methodology using a dedicated MRI scanner in radiation oncology for external beam radiation treatment planning

    International Nuclear Information System (INIS)

    Paulson, Eric S.; Erickson, Beth; Schultz, Chris; Allen Li, X.

    2015-01-01

    Purpose: The use of magnetic resonance imaging (MRI) in radiation oncology is expanding rapidly, and more clinics are integrating MRI into their radiation therapy workflows. However, radiation therapy presents a new set of challenges and places additional constraints on MRI compared to diagnostic radiology that, if not properly addressed, can undermine the advantages MRI offers for radiation treatment planning (RTP). The authors introduce here strategies to manage several challenges of using MRI for virtual simulation in external beam RTP. Methods: A total of 810 clinical MRI simulation exams were performed using a dedicated MRI scanner for external beam RTP of brain, breast, cervix, head and neck, liver, pancreas, prostate, and sarcoma cancers. Patients were imaged in treatment position using MRI-optimal immobilization devices. Radiofrequency (RF) coil configurations and scan protocols were optimized based on RTP constraints. Off-resonance and gradient nonlinearity-induced geometric distortions were minimized or corrected prior to using images for RTP. A multidisciplinary MRI simulation guide, along with window width and level presets, was created to standardize use of MR images during RTP. A quality assurance program was implemented to maintain accuracy and repeatability of MRI simulation exams. Results: The combination of a large bore scanner, high field strength, and circumferentially wrapped, flexible phased array RF receive coils permitted acquisition of thin slice images with high contrast-to-noise ratio (CNR) and image intensity uniformity, while simultaneously accommodating patient setup and immobilization devices. Postprocessing corrections and alternative acquisition methods were required to reduce or correct off-resonance and gradient nonlinearity induced geometric distortions. Conclusions: The methodology described herein contains practical strategies the authors have implemented through lessons learned performing clinical MRI simulation exams. In

  9. Comprehensive MRI simulation methodology using a dedicated MRI scanner in radiation oncology for external beam radiation treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Eric S., E-mail: epaulson@mcw.edu [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 and Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States); Erickson, Beth; Schultz, Chris; Allen Li, X. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States)

    2015-01-15

    Purpose: The use of magnetic resonance imaging (MRI) in radiation oncology is expanding rapidly, and more clinics are integrating MRI into their radiation therapy workflows. However, radiation therapy presents a new set of challenges and places additional constraints on MRI compared to diagnostic radiology that, if not properly addressed, can undermine the advantages MRI offers for radiation treatment planning (RTP). The authors introduce here strategies to manage several challenges of using MRI for virtual simulation in external beam RTP. Methods: A total of 810 clinical MRI simulation exams were performed using a dedicated MRI scanner for external beam RTP of brain, breast, cervix, head and neck, liver, pancreas, prostate, and sarcoma cancers. Patients were imaged in treatment position using MRI-optimal immobilization devices. Radiofrequency (RF) coil configurations and scan protocols were optimized based on RTP constraints. Off-resonance and gradient nonlinearity-induced geometric distortions were minimized or corrected prior to using images for RTP. A multidisciplinary MRI simulation guide, along with window width and level presets, was created to standardize use of MR images during RTP. A quality assurance program was implemented to maintain accuracy and repeatability of MRI simulation exams. Results: The combination of a large bore scanner, high field strength, and circumferentially wrapped, flexible phased array RF receive coils permitted acquisition of thin slice images with high contrast-to-noise ratio (CNR) and image intensity uniformity, while simultaneously accommodating patient setup and immobilization devices. Postprocessing corrections and alternative acquisition methods were required to reduce or correct off-resonance and gradient nonlinearity induced geometric distortions. Conclusions: The methodology described herein contains practical strategies the authors have implemented through lessons learned performing clinical MRI simulation exams. In

  10. Simulating chemical reactions in ionic liquids using QM/MM methodology.

    Science.gov (United States)

    Acevedo, Orlando

    2014-12-18

    The use of ionic liquids as a reaction medium for chemical reactions has dramatically increased in recent years due in large part to the numerous reported advances in catalysis and organic synthesis. In some extreme cases, ionic liquids have been shown to induce mechanistic changes relative to conventional solvents. Despite the large interest in these solvents, the molecular factors behind their chemical impact remain largely unknown. This feature article reviews our efforts developing and applying mixed quantum and molecular mechanical (QM/MM) methodology to elucidate the microscopic details of how these solvents operate to enhance rates and alter mechanisms for industrially and academically important reactions, e.g., Diels-Alder, Kemp eliminations, nucleophilic aromatic substitutions, and β-eliminations. Explicit solvent representation provided the medium dependence of the activation barriers and atomic-level characterization of the solute-solvent interactions responsible for the experimentally observed "ionic liquid effects". Technical advances are also discussed, including a linear-scaling pairwise electrostatic interaction alternative to Ewald sums, an efficient polynomial fitting method for modeling proton transfers, and the development of a custom ionic liquid OPLS-AA force field.
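
    As a minimal illustration of the pairwise-electrostatics idea mentioned above (a cutoff-based, linear-scaling alternative to Ewald sums), the sketch below sums truncated Coulomb interactions between QM solute charges and MM solvent point charges. All charges, positions, and the cutoff are made-up toy values, not the article's actual force field.

        import numpy as np

        def qmmm_electrostatics(q_qm, r_qm, q_mm, r_mm, r_cut=12.0):
            """Pairwise Coulomb coupling (atomic units) between QM solute
            charges and MM point charges, truncated at r_cut [bohr] instead
            of using Ewald summation -- a linear-scaling approximation."""
            e = 0.0
            for qi, ri in zip(q_qm, r_qm):
                d = np.linalg.norm(r_mm - ri, axis=1)
                mask = d < r_cut
                e += qi * np.sum(q_mm[mask] / d[mask])
            return e

        # Hypothetical two-site solute in a small box of MM point charges:
        rng = np.random.default_rng(0)
        r_mm = rng.uniform(0.0, 30.0, size=(500, 3))
        q_mm = rng.choice([-1.0, 1.0], size=500)
        r_qm = np.array([[15.0, 15.0, 15.0], [16.0, 16.0, 16.0]])
        e_int = qmmm_electrostatics([0.3, -0.3], r_qm, q_mm, r_mm)
        print(f"QM/MM electrostatic energy: {e_int:.4f} hartree")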

  11. A combination of streamtube and geostatical simulation methodologies for the study of large oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Chakravarty, A.; Emanuel, A.S.; Bernath, J.A. [Chevron Petroleum Technology Company, LaHabra, CA (United States)

    1997-08-01

    The application of streamtube models for reservoir simulation has an extensive history in the oil industry. Although these models are strictly applicable only to fields under voidage balance, they have proved useful in a large number of fields provided that there is no solution gas evolution and production. These models combine the benefit of very fast computational time with the practical ability to model a large reservoir over the course of its history. These models do not, however, directly incorporate the detailed geological information that recent experience has taught is important. This paper presents a technique for mapping the saturation information contained in a history-matched streamtube model onto a detailed geostatistically derived finite difference grid. With this technique, the saturation information in a streamtube model, data that is actually statistical in nature, can be identified with actual physical locations in a field and a picture of the remaining oil saturation can be determined. Alternatively, the streamtube model can be used to simulate the early development history of a field and the saturation data then used to initialize detailed late-time finite difference models. The proposed method is presented through an example application to the Ninian reservoir. This reservoir, located in the North Sea (UK), is a heterogeneous sandstone characterized by a line-drive waterflood, with about 160 wells, and a 16 year history. The reservoir was satisfactorily history-matched and mapped for remaining oil saturation. A comparison to a 3-D seismic survey and recently drilled wells has provided preliminary verification.

  12. HRP's Healthcare Spin-Offs Through Computational Modeling and Simulation Practice Methodologies

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Peng, Grace; Morrison, Tina; Erdemir, Ahmet; Myers, Jerry

    2014-01-01

    Spaceflight missions expose astronauts to novel operational and environmental conditions that pose health risks that are currently not well understood, and perhaps unanticipated. Furthermore, given the limited number of humans that have flown on long-duration missions and beyond low Earth orbit, the amount of research and clinical data necessary to predict and mitigate these health and performance risks is limited. Consequently, NASA's Human Research Program (HRP) conducts research and develops advanced methods and tools to predict, assess, and mitigate potential hazards to the health of astronauts. In this light, NASA has explored the possibility of leveraging computational modeling since the 1970s as a means to elucidate the physiologic risks of spaceflight and develop countermeasures. Since that time, substantial progress has been realized in this arena through a number of HRP-funded activities such as the Digital Astronaut Project (DAP) and the Integrated Medical Model (IMM). Much of this success can be attributed to HRP's endeavor to establish rigorous verification, validation, and credibility (VV&C) processes that ensure computational models and simulations (M&S) are sufficiently credible to address issues within their intended scope. This presentation summarizes HRP's activities in credibility of modeling and simulation, in particular through its outreach to the community of modeling and simulation practitioners. METHODS: The HRP requires that all M&S that can have a moderate to high impact on crew health or mission success be vetted in accordance with the NASA Standard for Models and Simulations, NASA-STD-7009 (7009) [5]. As this standard mostly focuses on engineering systems, the IMM and DAP have invested substantial efforts to adapt the processes established in this standard for application to biological M&S, which are more prevalent in human health and performance (HHP) and space biomedical research and operations [6,7]. These methods have also generated

  13. Kinetic Monte Carlo simulations for transient thermal fields: Computational methodology and application to the submicrosecond laser processes in implanted silicon.

    Science.gov (United States)

    Fisicaro, G; Pelaz, L; Lopez, P; La Magna, A

    2012-09-01

    Pulsed laser irradiation of damaged solids promotes ultrafast nonequilibrium kinetics, on the submicrosecond scale, leading to microscopic modifications of the material state. Reliable theoretical predictions of this evolution can be achieved only by simulating particle interactions in the presence of large and transient gradients of the thermal field. We propose a kinetic Monte Carlo (KMC) method for the simulation of damaged systems in the extremely far-from-equilibrium conditions caused by laser irradiation. The reference systems are nonideal crystals containing point defect excesses, an order of magnitude larger than the equilibrium density, due to a preirradiation ion implantation process. The thermal problem and, where it occurs, the melting problem are solved within the phase-field methodology, and the numerical solutions for the space- and time-dependent thermal field are then dynamically coupled to the KMC code. The formalism, implementation, and related tests of our computational code are discussed in detail. As an application example we analyze the evolution of the defect system caused by P ion implantation in Si under nanosecond pulsed irradiation. The simulation results suggest a significant annihilation of the implantation damage which can be well controlled by the laser fluence.
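
    A minimal sketch of the coupling described above: one rejection-free KMC step in which hop rates are evaluated at the local, time-dependent temperature supplied by an external thermal solver. The attempt frequency, barrier, and toy temperature field are assumptions for illustration, not the paper's calibrated values.

        import numpy as np

        KB = 8.617e-5  # Boltzmann constant [eV/K]

        def kmc_step(defects, barrier, nu0, T_field, t):
            """One rejection-free KMC step: hop rates follow the local,
            transient temperature T(x, t) from an external thermal solver."""
            rates = np.array([nu0 * np.exp(-barrier / (KB * T_field(x, t)))
                              for x in defects])
            total = rates.sum()
            i = np.searchsorted(np.cumsum(rates), np.random.rand() * total)
            dt = -np.log(np.random.rand()) / total   # residence time [s]
            defects[i] += np.random.choice([-1, 1])  # hop left/right (1-D toy)
            return defects, t + dt

        # Toy laser-heated field: a hot spot decaying on a ~100 ns scale.
        T_field = lambda x, t: 300.0 + 1200.0 * np.exp(-x**2 / 50.0 - t / 1e-7)
        defects, t = [0, 2, 5], 0.0
        for _ in range(100):
            defects, t = kmc_step(defects, 1.0, 1e13, T_field, t)
        print(defects, t)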

  14. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly to flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
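
    The two noise-reduction ideas are easy to show on synthetic data: average the bin velocities over independent replicas, then regress over the spatial samples. The profile and noise level below are made up; in practice the samples would come from the MD bins.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in for MD bin velocities: a linear Couette profile
        # buried in thermal noise, sampled by several independent replicas.
        z = np.linspace(0.0, 1.0, 20)                  # bin centers
        true_u = 0.05 * z                              # moderate-velocity profile
        replicas = true_u + rng.normal(0.0, 0.02, size=(8, z.size))

        u_mean = replicas.mean(axis=0)                 # replica averaging
        # Linear regression over spatial samples suppresses remaining noise:
        slope, intercept = np.polyfit(z, u_mean, 1)

        print(f"recovered shear rate: {slope:.4f} (true 0.05)")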

  15. Multilevel Methodology for Simulation of Spatio-Temporal Systems with Heterogeneous Activity; Application to Spread of Valley Fever Fungus

    Science.gov (United States)

    Jammalamadaka, Rajanikanth

    2009-01-01

    This report consists of a dissertation submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate College, The University of Arizona, 2008. Spatio-temporal systems with heterogeneity in their structure and behavior have two major problems associated with them. The first one is that such complex real world systems extend over very large spatial and temporal domains and consume so many computational resources to simulate that they are infeasible to study with current computational platforms. The second one is that the data available for understanding such systems is limited because they are spread over space and time making it hard to obtain micro and macro measurements. This also makes it difficult to get the data for validation of their constituent processes while simultaneously considering their global behavior. For example, the valley fever fungus considered in this dissertation is spread over a large spatial grid in the arid Southwest and typically needs to be simulated over several decades of time to obtain useful information. It is also hard to get the temperature and moisture data (which are two critical factors on which the survival of the valley fever fungus depends) at every grid point of the spatial domain over the region of study. In order to address the first problem, we develop a method based on the discrete event system specification which exploits the heterogeneity in the activity of the spatio-temporal system and which has been shown to be effective in solving relatively simple partial differential equation systems. The benefit of addressing the first problem is that it now makes it feasible to address the second problem. We address the second problem by making use of a multilevel methodology based on modeling and simulation and systems theory. This methodology helps us in the construction of models with different resolutions (base and

  16. Analysis of simulation methodology for calculation of the heat of transport for vacancy thermodiffusion

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, William C.; Schelling, Patrick K., E-mail: patrick.schelling@ucf.edu [Advanced Material Processing and Analysis Center and Department of Physics, University of Central Florida, 4000 Central Florida Blvd., Orlando, Florida 32816 (United States)

    2014-07-14

    Computation of the heat of transport Q_a^* in monatomic crystalline solids is investigated using the methodology first developed by Gillan [J. Phys. C: Solid State Phys. 11, 4469 (1978)] and further developed by Grout and coworkers [Philos. Mag. Lett. 74, 217 (1996)], referred to as the Grout-Gillan method. In the case of pair potentials, the hopping of a vacancy results in a heat wave that persists for up to 10 ps, consistent with previous studies. This leads to generally positive values for Q_a^* which can be quite large and are strongly dependent on the specific details of the pair potential. By contrast, when the interactions are described using the embedded atom model, there is no evidence of a heat wave, and Q_a^* is found to be negative. This demonstrates that the dynamics of vacancy hopping depends strongly on the details of the empirical potential. However, the results obtained here are in strong disagreement with experiment. Arguments are presented which demonstrate that there is a fundamental error made in the Grout-Gillan method due to the fact that the ensemble of states only includes successful atom hops and hence does not represent an equilibrium ensemble. This places the interpretation of the quantity computed in the Grout-Gillan method as the heat of transport in doubt. It is demonstrated that trajectories which do not yield hopping events are nevertheless relevant to computation of the heat of transport Q_a^*.

  17. An Eulerian two-phase model for steady sheet flow using large-eddy simulation methodology

    Science.gov (United States)

    Cheng, Zhen; Hsu, Tian-Jian; Chauchat, Julien

    2018-01-01

    A three-dimensional Eulerian two-phase flow model for sediment transport in sheet flow conditions is presented. To resolve turbulence and turbulence-sediment interactions, the large-eddy simulation approach is adopted. Specifically, a dynamic Smagorinsky closure is used for the subgrid fluid and sediment stresses, while the subgrid contribution to the drag force is included using a drift velocity model with a similar dynamic procedure. The contribution of sediment stresses due to intergranular interactions is modeled by the kinetic theory of granular flow at low to intermediate sediment concentration, while at high sediment concentration of enduring contact, a phenomenological closure for particle pressure and frictional viscosity is used. The model is validated with a comprehensive high-resolution dataset of unidirectional steady sheet flow (Revil-Baudard et al., 2015, Journal of Fluid Mechanics, 767, 1-30). At a particle Stokes number of about 10, simulation results indicate a reduced von Kármán coefficient of κ ≈ 0.215 obtained from the fluid velocity profile. A fluid turbulence kinetic energy budget analysis further indicates that the drag-induced turbulence dissipation rate is significant in the sheet flow layer, while in the dilute transport layer, the pressure work plays a similar role as the buoyancy dissipation, which is typically used in the single-phase stratified flow formulation. The present model also reproduces the sheet layer thickness and mobile bed roughness similar to measured data. However, the resulting mobile bed roughness is more than two times larger than that predicted by the empirical formulae. Further analysis suggests that through intermittent turbulent motions near the bed, the resolved sediment Reynolds stress plays a major role in the enhancement of mobile bed roughness. Our analysis on near-bed intermittency also suggests that the turbulent ejection motions are highly correlated with the upward sediment suspension flux, while
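
    The reduced von Kármán coefficient quoted above comes from fitting the log law u+ = (1/κ) ln z+ + B to the fluid velocity profile; κ is the inverse of the fitted slope. A minimal sketch on synthetic log-layer data (values assumed for illustration):

        import numpy as np

        # Synthetic velocity profile obeying the log law with a reduced kappa,
        # mimicking the sediment-laden sheet flow discussed above.
        kappa_true, B = 0.215, 5.0
        z_plus = np.logspace(1.5, 3.0, 30)             # log-layer points
        u_plus = np.log(z_plus) / kappa_true + B

        # Fit u+ = (1/kappa) ln(z+) + B; kappa is the inverse of the slope.
        slope, intercept = np.polyfit(np.log(z_plus), u_plus, 1)
        print(f"kappa = {1.0 / slope:.3f}, B = {intercept:.2f}")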

  18. Investigation of Radiation Protection Methodologies for Radiation Therapy Shielding Using Monte Carlo Simulation and Measurement

    Science.gov (United States)

    Tanny, Sean

    The advent of high-energy linear accelerators for dedicated medical use in the 1950s by Henry Kaplan and the Stanford University physics department began a revolution in radiation oncology. Today, linear accelerators are the standard of care for modern radiation therapy and can generate high-energy beams that can produce tens of Gy per minute at isocenter. This creates a need for a large amount of shielding material to properly protect members of the public and hospital staff. Standardized vault designs and guidance on shielding properties of various materials are provided by the National Council on Radiation Protection (NCRP) Report 151. However, physicists are seeking ways to minimize the footprint and volume of shielding material needed, which leads to the use of non-standard vault configurations and less-studied materials, such as high-density concrete. The University of Toledo Dana Cancer Center has utilized both of these methods to minimize the cost and spatial footprint of the requisite radiation shielding. To ensure a safe work environment, computer simulations were performed to verify the attenuation properties and shielding workloads produced by a variety of situations where standard recommendations and guidance documents were insufficient. This project studies two areas of concern that are not addressed by NCRP 151: the radiation shielding workload for a vault door with a non-standard design, and the attenuation properties of high-density concrete for both photon and neutron radiation. Simulations have been performed using a Monte Carlo code produced by the Los Alamos National Laboratory (LANL), Monte Carlo N-Particle 5 (MCNP5). Measurements have been performed using a shielding test port designed into the maze of the Varian Edge treatment vault.
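
    For context, the standard NCRP Report 151 primary-barrier calculation reduces to a required transmission B = P d^2 / (W U T) and a thickness built from tenth-value layers. The sketch below uses illustrative values (public dose limit, 6 MV beam, ordinary-concrete TVLs), not numbers from the thesis.

        import math

        def barrier_thickness(P, d, W, U, T, tvl1, tvle):
            """NCRP Report 151 primary-barrier sizing: required transmission
            B = P d^2 / (W U T), then thickness from tenth-value layers."""
            B = P * d**2 / (W * U * T)
            n = -math.log10(B)                  # number of TVLs needed
            return tvl1 + (n - 1.0) * tvle

        # Illustrative values (assumed, not from the thesis): shielding design
        # goal P = 0.02 mSv/wk, distance 6 m, workload W = 450 Gy/wk at 1 m
        # (450e3 mSv/wk), use factor 0.25, occupancy 1, 6 MV concrete TVLs.
        t = barrier_thickness(P=0.02, d=6.0, W=450e3, U=0.25, T=1.0,
                              tvl1=37.0, tvle=33.0)
        print(f"required concrete thickness: {t:.0f} cm")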

  19. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.
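
    A common way to represent a PCM in a conduction finite-difference scheme is an effective heat capacity that spreads the latent heat over the melting range. The sketch below is a toy 1-D explicit version of that idea with illustrative material properties; it is not the EnergyPlus CondFD/PCM implementation itself.

        import numpy as np

        def cp_eff(T, cp=2000.0, latent=180e3, T_melt=25.0, width=1.5):
            """Effective heat capacity [J/kg.K]: sensible part plus a Gaussian
            spike spreading the latent heat over the melting range."""
            return cp + latent / (width * np.sqrt(2.0 * np.pi)) * \
                np.exp(-0.5 * ((T - T_melt) / width) ** 2)

        # Explicit 1-D conduction through a PCM layer (assumed properties).
        k, rho, dx, dt = 0.2, 900.0, 0.002, 0.5
        T = np.full(20, 20.0)
        for _ in range(7200):                     # one hour of simulated time
            T[0], T[-1] = 35.0, 22.0              # hot outdoor / indoor faces
            alpha = k / (rho * cp_eff(T[1:-1]) * dx**2)
            assert np.all(alpha * dt <= 0.5), "explicit stability limit"
            T[1:-1] += alpha * dt * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        print(T.round(1))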

  20. Application of response surface methodology to the determination of the PCT in the simulation of a LOFT

    International Nuclear Information System (INIS)

    Alva N, J.; Ortiz V, J.; Amador G, R.

    2008-01-01

    This article summarizes the main features of response surface methodology (RSM) and its connection to linear regression analysis. It also presents an example of the application of RSM to the prediction of the peak cladding temperature (PCT) of a fuel assembly of a nuclear reactor, using data taken from the simulation of a LOFT (Loss Of Fluid Test) experiment during an expert training course. The resulting prediction is used as a first approximation of the PCT behavior, in order to reduce computation time when executing best-estimate thermal-hydraulic codes. This work forms part of the theoretical basis of a project charged with outlining an uncertainty analysis methodology for best-estimate codes used in thermal-hydraulic and safety analysis of nuclear power plants and reactors. The participating institutions are ININ, CFE, IPN and CNSNS; the project is sponsored by the IAEA. (Author)
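
    The RSM step itself is ordinary least squares on a second-order polynomial: fit the surface to a handful of code runs, then use it as a fast surrogate for the PCT. The two input factors and PCT values below are hypothetical placeholders, not the LOFT simulation data.

        import numpy as np

        def quad_design(x1, x2):
            """Design matrix for a two-factor quadratic response surface."""
            return np.column_stack([np.ones_like(x1), x1, x2,
                                    x1**2, x2**2, x1 * x2])

        # Hypothetical best-estimate code runs: two normalized input factors
        # (e.g., power peaking, break size) and the resulting PCT [K].
        x1 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.5, -0.5, 0.0])
        x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.5, 0.5, -0.5])
        pct = np.array([1010.0, 1035.0, 1060.0, 1100.0,
                        1040.0, 1068.0, 1030.0, 1032.0])

        beta, *_ = np.linalg.lstsq(quad_design(x1, x2), pct, rcond=None)
        # The fitted surface replaces expensive code executions:
        print("predicted PCT at (0.2, 0.8):",
              quad_design(np.array([0.2]), np.array([0.8])) @ beta)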

  1. Methodology for the transient simulation of a small solar thermal power plant; Metodologia para simulacao transiente de uma pequena central heliotermica

    Energy Technology Data Exchange (ETDEWEB)

    Wendel, Marcelo

    2010-08-15

    The final steps of generating electricity from concentrated solar power technologies are similar to conventional thermal processes, since steam or gas is also employed for moving turbines or pistons. The fundamental difference lies on the fact that steam or hot gas is generated by solar radiation instead of fossil fuels or nuclear heat. The cheapest electricity generated from solar energy has been achieved with large-scale power stations based on this concept. Computer simulations represent a low-cost option for the design of thermal systems. The present study aims to develop a methodology for the transient simulation of a micro-scale solar-thermal power plant (120 kWe) which should be appropriate in terms of accuracy and computational effort. The facility considered can optionally operate as a cogeneration plant producing electric power as well as chilled water. Solar radiation is collected by parabolic troughs, electricity is generated by an organic Rankine cycle and chilled water is produced by an absorption cooling cycle. The organic Rankine cycle is of interest because it allows for a plant with relatively simple structure and automated operation. The simulation methodology proposed in this study is implemented in TRNSYS with new components (TYPEs) developed for the solar field and thermal cycles. The parabolic trough field component is based on an experimental efficiency curve of the solar collector. In the case of the Rankine and absorption cycles, the components are based on performance polynomials generated with EES from detailed thermodynamic models, which are calibrated with performance data from manufacturers. Distinct plant configurations are considered. An optimization algorithm is used for searching the best operating point in each case. Results are presented for the following Brazilian sites: Fortaleza, Petrolina and Bom Jesus da Lapa. The latter offers the highest global plant performance. An analysis about the influence of the thermal storage on
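
    The parabolic-trough component described above reduces to an experimental efficiency curve. A minimal sketch of that kind of collector model, with made-up curve coefficients rather than the ones fitted in the thesis:

        def trough_output(G_b, T_mean, T_amb, area,
                          eta0=0.70, a1=0.30, a2=0.0012):
            """Useful heat [W] from a trough field, from an experimental
            efficiency curve eta = eta0 - (a1*dT + a2*dT^2)/G_b. The
            coefficients here are illustrative assumptions."""
            dT = T_mean - T_amb
            eta = eta0 - (a1 * dT + a2 * dT**2) / G_b
            return max(eta, 0.0) * G_b * area

        # Noon conditions at a good site: 850 W/m2 beam irradiance, 1000 m2
        # aperture, field running 180 C above ambient.
        print(f"{trough_output(850.0, 210.0, 30.0, 1000.0) / 1e3:.0f} kW_th")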

  2. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

    Full Text Available Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics capable scientific software is as important to these fields as instruments and facilities are to experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment for enhancing capabilities. However, the revolution occurring in hardware architecture has made it necessary to tackle parallelism and performance management in these codes at multiple levels. The requirements of the various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders, and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers, and emerging trends in the utilization of programming abstractions that we found through these workshops.

  3. Efficient methodology for multibody simulations with discontinuous changes in system definition

    International Nuclear Information System (INIS)

    Mukherjee, Rudranarayan M.; Anderson, Kurt S.

    2007-01-01

    A new method is presented for accurately and efficiently simulating multi-scale multibody systems with discontinuous changes in system definitions, as encountered in adaptive switching between models with different resolutions as well as models with different system topologies. An example of a model resolution change is the transition of a system from a discrete particle model to a reduced-order articulated multi-rigid body model. The discontinuous changes in system definition may be viewed as an instantaneous change (release or impulsive application) of the system constraints. The method uses a spatial impulse-momentum formulation in a divide and conquer scheme. The approach utilizes a hierarchic assembly-disassembly process by traversing the system topology in a binary tree map to solve for the jumps in the system generalized speeds and the constraint impulsive loads in linear and logarithmic cost in serial and parallel implementations, respectively. The method is applicable to systems in serial chain as well as kinematical loop topologies. The coupling between the unilateral and bilateral constraints is handled efficiently through the use of kinematic joint definitions. The equations of motion for the system are produced in a hierarchic sub-structured form. This has the advantage that changes in sub-structure definitions/models result in a change to the system equations only within the associated sub-structure. This allows for significant changes in model types and definitions without having to reformulate the equations for the whole system

  4. Experimental control versus realism: methodological solutions for simulator studies in complex operating environments

    Energy Technology Data Exchange (ETDEWEB)

    Skraaning, Gyrd Jr.

    2004-03-15

    This report is a reprint of a dr.philos. thesis written by Gyrd Skraaning Jr. The text was submitted and accepted by the Norwegian University of Science and Technology in 2003 (ISBN 82-471-5237-1). The thesis suggests a nonlinear model of the theoretical relationship between experimental control and realism, claiming that high degrees of realism and experimental control can be obtained simultaneously if the experimental methods are utilized strategically and developed further. This is in opposition to the conventional opinion that realism and experimental control are mutually exclusive objectives. The thesis debates the impact of the operating task on human performance during simulator studies in HAMMLAB, and suggests how task variation can be experimentally controlled. In a within-subject design, every subject is tested under all experimental conditions, and the presentation order of the conditions is counterbalanced to compensate for order effects. In realistic settings, it is essential that the experimental design imposes few artificial constraints on the research environment. At the same time, the design should be able to uncover experimental effects in situations where the number of participants is low. Within-subject design represents a reasonable compromise between these aspirations. In this respect, an alternative counterbalancing method is proposed (dis-ORDER). A theoretical analysis of the human performance concept and a discussion about performance measurement in complex operating environments are followed by a debate on the shortcomings of traditional performance indicators. Two specialized operator performance assessment techniques are then presented and evaluated (OPAS and ORT). (Author)
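
    The thesis's dis-ORDER method is not reproduced here, but the conventional baseline it modifies, a balanced Latin square that counterbalances presentation order across subjects, is easy to sketch (this construction is valid for an even number of conditions):

        def balanced_latin_square(n):
            """Presentation orders for n conditions (n even): each condition
            appears in every position once and precedes every other condition
            equally often across subjects."""
            seq, k = [0, 1], 2
            while len(seq) < n:
                seq.append(n - k // 2 if k % 2 == 0 else k // 2 + 1)
                k += 1
            return [[(s + r) % n for s in seq] for r in range(n)]

        # Orders for four experimental conditions, one row per subject group:
        for order in balanced_latin_square(4):
            print(order)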

  5. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  6. Experimental control versus realism: methodological solutions for simulator studies in complex operating environments

    International Nuclear Information System (INIS)

    Skraaning, Gyrd Jr.

    2004-03-01

    This report is a reprint of a dr.philos. thesis written by Gyrd Skraaning Jr. The text was submitted and accepted by the Norwegian University of Science and Technology in 2003 (ISBN 82-471-5237-1). The thesis suggests a nonlinear model of the theoretical relationship between experimental control and realism, claiming that high degrees of realism and experimental control can be obtained simultaneously if the experimental methods are utilized strategically and developed further. This is in opposition to the conventional opinion that realism and experimental control are mutually exclusive objectives. The thesis debates the impact of the operating task on human performance during simulator studies in HAMMLAB, and suggests how task variation can be experimentally controlled. In a within-subject design, every subject is tested under all experimental conditions, and the presentation order of the conditions is counterbalanced to compensate for order effects. In realistic settings, it is essential that the experimental design imposes few artificial constraints on the research environment. At the same time, the design should be able to uncover experimental effects in situations where the number of participants is low. Within-subject design represents a reasonable compromise between these aspirations. In this respect, an alternative counterbalancing method is proposed (dis-ORDER). A theoretical analysis of the human performance concept and a discussion about performance measurement in complex operating environments are followed by a debate on the shortcomings of traditional performance indicators. Two specialized operator performance assessment techniques are then presented and evaluated (OPAS and ORT). (Author)

  7. SN 2010ay IS A LUMINOUS AND BROAD-LINED TYPE Ic SUPERNOVA WITHIN A LOW-METALLICITY HOST GALAXY

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M.; Foley, R. J.; Chornock, R.; Chomiuk, L.; Berger, E. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Valenti, S.; Smartt, S.; Botticella, M. T. [Astrophysics Research Centre, School of Maths and Physics, Queen's University, Belfast BT7 1NN (United Kingdom); Hurley, K. [Space Sciences Laboratory, University of California Berkeley, 7 Gauss Way, Berkeley, CA 94720 (United States); Barthelmy, S. D.; Gehrels, N.; Cline, T. [NASA Goddard Space Flight Center, Code 661, Greenbelt, MD 20771 (United States); Levesque, E. M. [CASA, Department of Astrophysical and Planetary Sciences, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Narayan, G. [Department of Physics, Harvard University, Cambridge, MA 02138 (United States); Briggs, M. S.; Connaughton, V. [CSPAR, University of Alabama in Huntsville, Huntsville, AL (United States); Terada, Y. [Department of Physics, Saitama University, Shimo-Okubo, Sakura-ku, Saitama-shi, Saitama 338-8570 (Japan); Golenetskii, S.; Mazets, E., E-mail: nsanders@cfa.harvard.edu [Ioffe Physico-Technical Institute, Laboratory for Experimental Astrophysics, 26 Polytekhnicheskaya, St. Petersburg 194021 (Russian Federation); and others

    2012-09-10

    We report on our serendipitous pre-discovery detection and follow-up observations of the broad-lined Type Ic supernova (SN Ic) 2010ay at z = 0.067 imaged by the Pan-STARRS1 3π survey just ~4 days after explosion. The supernova (SN) had a peak luminosity, M_R ≈ -20.2 mag, significantly more luminous than known GRB-SNe and one of the most luminous SNe Ib/c ever discovered. The absorption velocity of SN 2010ay is v_Si ≈ 19 × 10^3 km s^-1 at ~40 days after explosion, 2-5 times higher than other broad-lined SNe and similar to the GRB-SN 2010bh at comparable epochs. Moreover, the velocity declines ~2 times slower than other SNe Ic-BL and GRB-SNe. Assuming that the optical emission is powered by radioactive decay, the peak magnitude implies the synthesis of an unusually large mass of ^56Ni, M_Ni = 0.9 M_⊙. Applying scaling relations to the light curve, we estimate a total ejecta mass, M_ej ≈ 4.7 M_⊙, and total kinetic energy, E_K ≈ 11 × 10^51 erg. The ratio of M_Ni to M_ej is ~2 times as large for SN 2010ay as typical GRB-SNe and may suggest an additional energy reservoir. The metallicity (log(O/H)_PP04 + 12 = 8.19) of the explosion site within the host galaxy places SN 2010ay in the low-metallicity regime populated by GRB-SNe, and ~0.5 (0.2) dex lower than that typically measured for the host environments of normal (broad-lined) SNe Ic. We constrain any gamma-ray emission to E_γ ≲ 6 × 10^48 erg (25-150 keV), and our deep radio follow-up observations with the Expanded Very Large Array rule out relativistic ejecta with energy E ≳ 10^48 erg. We therefore rule out the association of a relativistic outflow like those that accompanied SN 1998bw and traditional long-duration gamma-ray bursts (GRBs), but we place less

  8. Integrated detoxification methodology of hazardous phenolic wastewaters in environmentally based trickle-bed reactors: Experimental investigation and CFD simulation

    International Nuclear Information System (INIS)

    Lopes, Rodrigo J.G.; Almeida, Teresa S.A.; Quinta-Ferreira, Rosa M.

    2011-01-01

    Centralized environmental regulations require the use of efficient detoxification technologies for the secure disposal of hazardous wastewaters. Guided by federal directives, existing plants need reengineering activities and careful analysis to improve their overall effectiveness and to become environmentally friendly. Here, we illustrate the application of an integrated methodology which encompasses the experimental investigation of catalytic wet air oxidation and CFD simulation of trickle-bed reactors. Because trickle-bed reactor behavior is determined by the flow environment coupled with chemical kinetics, the CFD model was first validated, after optimization of the most influential numerical solution parameters, against experimental data taken from a trickle-bed pilot plant specifically designed for the catalytic wet oxidation of phenolic wastewaters. Second, several experimental and computational runs were carried out under unsteady-state operation to evaluate the dynamic performance, addressing the TOC concentration and temperature profiles. CFD computations of total organic carbon conversion were found to agree better with experimental data at lower temperatures. Finally, the comparison of test data with simulation results demonstrated that this integrated framework was able to describe the mineralization of organic matter in trickle beds, and the validated consequence model can be exploited to promote cleaner remediation technologies for contaminated waters.

  9. Integrated detoxification methodology of hazardous phenolic wastewaters in environmentally based trickle-bed reactors: Experimental investigation and CFD simulation.

    Science.gov (United States)

    Lopes, Rodrigo J G; Almeida, Teresa S A; Quinta-Ferreira, Rosa M

    2011-05-15

    Centralized environmental regulations require the use of efficient detoxification technologies for the secure disposal of hazardous wastewaters. Guided by federal directives, existing plants need reengineering activities and careful analysis to improve their overall effectiveness and to become environmentally friendly. Here, we illustrate the application of an integrated methodology which encompasses the experimental investigation of catalytic wet air oxidation and CFD simulation of trickle-bed reactors. Because trickle-bed reactor behavior is determined by the flow environment coupled with chemical kinetics, the CFD model was first validated, after optimization of the most influential numerical solution parameters, against experimental data taken from a trickle-bed pilot plant specifically designed for the catalytic wet oxidation of phenolic wastewaters. Second, several experimental and computational runs were carried out under unsteady-state operation to evaluate the dynamic performance, addressing the TOC concentration and temperature profiles. CFD computations of total organic carbon conversion were found to agree better with experimental data at lower temperatures. Finally, the comparison of test data with simulation results demonstrated that this integrated framework was able to describe the mineralization of organic matter in trickle beds, and the validated consequence model can be exploited to promote cleaner remediation technologies for contaminated waters. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. The evolution of groundwater flow and mass transport in Canadian shield flow domains: a methodology for numerical simulation

    International Nuclear Information System (INIS)

    Sykes, J.F.; Sudicky, E.A.; Normani, S.D.; Park, Y.J.; Cornaton, F.; McLaren, R.G.

    2007-01-01

    The Deep Geologic Repository Technology Programme (DGRTP) of Ontario Power Generation (OPG) is developing numerous approaches and methodologies for integrated and multidisciplinary site characterisation. A principal element involves the use and further development of state-of-the-art numerical simulators, and immersive visualisation technologies, while fully honouring multi-disciplinary litho-structural, hydrogeologic, paleo-hydrogeologic, geophysical, hydrogeochemical and geomechanical field data. Paleo-climate reconstructions provide surface boundary conditions for numerical models of the subsurface, furthering the understanding of groundwater flow in deep geologic systems and quantifying the effects of glaciation and deglaciation events. The use of geo-statistically plausible fracture networks conditioned on surface lineaments within the numerical models results in more physically representative and realistic characterizations of the repository site. Finally, immersive three-dimensional visualisation technology is used to query, investigate, explore and understand both the raw data, and simulation results in a spatially and temporally consistent framework. This environment allows multi-disciplinary teams of geoscience professionals to explore each other's work and can significantly enhance understanding and knowledge, thereby creating stronger linkages between the geo-scientific disciplines. The use of more physically representative and realistic conceptual models, coupled with immersive visualisation, contributes to an overall integrated approach to site characterisation, instilling further confidence in the understanding of flow system evolution. (authors)

  11. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
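
    The core OUU loop is straightforward to sketch: sample the uncertain inputs, push them through the fitted response surface plus a lack-of-fit term, and compare candidate operating points on a dispersed statistic. The surface coefficients and uncertainty levels below are hypothetical, not the study's fitted values.

        import numpy as np

        rng = np.random.default_rng(7)

        def regression_rate(ox_flux, frac_a):
            """Hypothetical quadratic response surface for hybrid fuel
            regression rate [mm/s] vs. oxidizer flux and a mixture fraction."""
            return (0.8 + 0.015 * ox_flux + 1.2 * frac_a
                    - 0.9 * frac_a**2 + 0.004 * ox_flux * frac_a)

        def dispersed_rate(ox_flux, frac_a, n=20000):
            """Monte Carlo dispersion: inputs and surface-fit error sampled."""
            flux = rng.normal(ox_flux, 0.05 * ox_flux, n)      # +/-5% flux
            frac = np.clip(rng.normal(frac_a, 0.02, n), 0, 1)  # mix tolerance
            fit_err = rng.normal(0.0, 0.03, n)                 # lack-of-fit
            return regression_rate(flux, frac) + fit_err

        # Compare two candidate design points by mean and 5th percentile:
        for pt in [(100.0, 0.4), (120.0, 0.6)]:
            r = dispersed_rate(*pt)
            print(pt, f"mean={r.mean():.3f}", f"p05={np.percentile(r, 5):.3f}")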

  12. INTERACTION BETWEEN THE BROAD-LINED TYPE Ic SUPERNOVA 2012ap AND CARRIERS OF DIFFUSE INTERSTELLAR BANDS

    International Nuclear Information System (INIS)

    Milisavljevic, Dan; Margutti, Raffaella; Crabtree, Kyle N.; Soderberg, Alicia M.; Sanders, Nathan E.; Drout, Maria R.; Kamble, Atish; Chakraborti, Sayan; Kirshner, Robert P.; Foster, Jonathan B.; Fesen, Robert A.; Parrent, Jerod T.; Pickering, Timothy E.; Cenko, S. Bradley; Silverman, Jeffrey M.; Marion, G. H. Howie; Vinko, Jozsef; Filippenko, Alexei V.; Mazzali, Paolo; Maeda, Keiichi

    2014-01-01

    Diffuse interstellar bands (DIBs) are absorption features observed in optical and near-infrared spectra that are thought to be associated with carbon-rich polyatomic molecules in interstellar gas. However, because the central wavelengths of these bands do not correspond to electronic transitions of any known atomic or molecular species, their nature has remained uncertain since their discovery almost a century ago. Here we report on unusually strong DIBs in optical spectra of the broad-lined Type Ic supernova SN 2012ap that exhibit changes in equivalent width over short (≲ 30 days) timescales. The 4428 Å and 6283 Å DIB features get weaker with time, whereas the 5780 Å feature shows a marginal increase. These nonuniform changes suggest that the supernova is interacting with a nearby source of DIBs and that the DIB carriers possess high ionization potentials, such as small cations or charged fullerenes. We conclude that moderate-resolution spectra of supernovae with DIB absorptions obtained within weeks of outburst could reveal unique information about the mass-loss environment of their progenitor systems and provide new constraints on the properties of DIB carriers.
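
    The quantity being tracked is the equivalent width, the integrated fractional depth of the feature below the continuum. A minimal sketch on a synthetic, continuum-normalized spectrum with a Gaussian stand-in for the 5780 Å DIB:

        import numpy as np

        def equivalent_width(wave, flux, cont, lo, hi):
            """EW [same units as wave] of an absorption feature between lo
            and hi: integral of the fractional depth below the continuum."""
            m = (wave >= lo) & (wave <= hi)
            return np.trapz(1.0 - flux[m] / cont[m], wave[m])

        # Synthetic spectrum with a Gaussian DIB near 5780 A:
        wave = np.linspace(5740.0, 5820.0, 400)
        cont = np.ones_like(wave)
        flux = cont - 0.12 * np.exp(-0.5 * ((wave - 5780.0) / 2.0) ** 2)
        print(f"EW(5780) = {equivalent_width(wave, flux, cont, 5760, 5800):.2f} A")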

  13. Long-Term Monitoring of the Broad-Line Region Properties in a Selected Sample of AGN

    Energy Technology Data Exchange (ETDEWEB)

    Ilić, Dragana [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Shapovalova, Alla I. [Special Astrophysical Observatory, Russian Academy of Sciences, Nizhnii Arkhyz (Russian Federation); Popović, Luka Č. [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Astronomical Observatory, Belgrade (Serbia); Chavushyan, Vahram [Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla (Mexico); Burenkov, Alexander N. [Special Astrophysical Observatory, Russian Academy of Sciences, Nizhnii Arkhyz (Russian Federation); Kollatschny, Wolfram [Institut fuer Astrophysik, Universitaet Goettingen, Göttingen (Germany); Kovačević, Andjelka [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Marčeta-Mandić, Sladjana [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Astronomical Observatory, Belgrade (Serbia); Rakić, Nemanja [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Belgrade (Serbia); Faculty of Science, University of Banjaluka, Banjaluka, Republic of Srpska (Bosnia and Herzegovina); La Mura, Giovanni; Rafanelli, Piero, E-mail: dilic@math.rs [Department of Physics and Astronomy, University of Padova, Padova (Italy)

    2017-09-14

    We present the results of the long-term optical monitoring campaign of active galactic nuclei (AGN) coordinated by the Special Astrophysical Observatory of the Russian Academy of Science. This campaign has produced a remarkable set of optical spectra, since we have monitored for several decades different types of broad-line (type 1) AGN, from a Seyfert 1, double-peaked line, radio-loud and radio-quiet AGN, to a supermassive binary black hole candidate. Our analysis of the properties of the broad-line region (BLR) of these objects is based on the variability of the broad emission lines. We give a comparative review of the variability properties of the broad emission lines and the BLR of seven different type 1 AGNs, emphasizing some important results, such as the variability rate, the BLR geometry, and the presence of the intrinsic Baldwin effect. We discuss the differences and similarities in the continuum and emission-line variability, focusing on the impact of our results on the determination of supermassive black hole masses from BLR properties.

  14. A REVISED BROAD-LINE REGION RADIUS AND BLACK HOLE MASS FOR THE NARROW-LINE SEYFERT 1 NGC 4051

    International Nuclear Information System (INIS)

    Denney, K. D.; Watson, L. C.; Peterson, B. M.

    2009-01-01

    We present the first results from a high sampling rate, multimonth reverberation mapping campaign undertaken primarily at MDM Observatory with supporting observations from telescopes around the world. The primary goal of this campaign was to obtain either new or improved Hβ reverberation lag measurements for several relatively low luminosity active galactic nuclei (AGNs). We feature results for NGC 4051 here because, until now, this object has been a significant outlier from AGN scaling relationships, e.g., it was previously a ∼2-3σ outlier on the relationship between the broad-line region (BLR) radius and the optical continuum luminosity, the R_BLR-L relationship. Our new measurements of the lag time between variations in the continuum and Hβ emission line made from spectroscopic monitoring of NGC 4051 lead to a measured BLR radius of R_BLR = 1.87 (+0.54/-0.50) light days and black hole mass of M_BH = (1.73 +0.55/-0.52) × 10^6 M_⊙. This radius is consistent with that expected from the R_BLR-L relationship, based on the present luminosity of NGC 4051 and the most current calibration of the relation by Bentz et al. We also present a preliminary look at velocity-resolved Hβ light curves and time delay measurements, although we are unable to reconstruct an unambiguous velocity-resolved reverberation signal.
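
    The lag measurement underlying these results is, at its core, a cross-correlation of the continuum and Hβ light curves over a grid of trial lags (an ICCF-style estimate). A minimal sketch on synthetic light curves with a built-in 2-day echo:

        import numpy as np

        def ccf_lag(t, cont, line, lags):
            """Peak lag of the interpolation cross-correlation between a
            continuum and an emission-line light curve."""
            r = [np.corrcoef(np.interp(t - lag, t, cont), line)[0, 1]
                 for lag in lags]
            return lags[int(np.argmax(r))]

        # Synthetic curves: the line echoes the continuum ~2 days later.
        t = np.arange(0.0, 120.0, 1.0)
        cont = np.sin(t / 9.0) + 0.1 * np.random.default_rng(3).normal(size=t.size)
        line = np.interp(t - 2.0, t, cont)
        lags = np.arange(-10.0, 10.5, 0.25)
        print(f"recovered lag: {ccf_lag(t, cont, line, lags):.2f} days")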

  15. Long-Term Monitoring of the Broad-Line Region Properties in a Selected Sample of AGN

    Directory of Open Access Journals (Sweden)

    Dragana Ilić

    2017-09-01

    Full Text Available We present the results of the long-term optical monitoring campaign of active galactic nuclei (AGN) coordinated by the Special Astrophysical Observatory of the Russian Academy of Science. This campaign has produced a remarkable set of optical spectra, since we have monitored for several decades different types of broad-line (type 1) AGN, from a Seyfert 1, double-peaked line, radio-loud and radio-quiet AGN, to a supermassive binary black hole candidate. Our analysis of the properties of the broad-line region (BLR) of these objects is based on the variability of the broad emission lines. We give a comparative review of the variability properties of the broad emission lines and the BLR of seven different type 1 AGNs, emphasizing some important results, such as the variability rate, the BLR geometry, and the presence of the intrinsic Baldwin effect. We discuss the differences and similarities in the continuum and emission-line variability, focusing on the impact of our results on the determination of supermassive black hole masses from BLR properties.

  16. INTERACTION BETWEEN THE BROAD-LINED TYPE Ic SUPERNOVA 2012ap AND CARRIERS OF DIFFUSE INTERSTELLAR BANDS

    Energy Technology Data Exchange (ETDEWEB)

    Milisavljevic, Dan; Margutti, Raffaella; Crabtree, Kyle N.; Soderberg, Alicia M.; Sanders, Nathan E.; Drout, Maria R.; Kamble, Atish; Chakraborti, Sayan; Kirshner, Robert P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Foster, Jonathan B. [Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06520 (United States); Fesen, Robert A.; Parrent, Jerod T. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Lab, Hanover, NH 03755 (United States); Pickering, Timothy E. [Southern African Large Telescope, P.O. Box 9, Observatory 7935, Cape Town (South Africa); Cenko, S. Bradley [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Silverman, Jeffrey M.; Marion, G. H. Howie; Vinko, Jozsef [University of Texas at Austin, 1 University Station C1400, Austin, TX 78712-0259 (United States); Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Mazzali, Paolo [Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF (United Kingdom); Maeda, Keiichi, E-mail: dmilisav@cfa.harvard.edu [Department of Astronomy, Kyoto University Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); and others

    2014-02-10

    Diffuse interstellar bands (DIBs) are absorption features observed in optical and near-infrared spectra that are thought to be associated with carbon-rich polyatomic molecules in interstellar gas. However, because the central wavelengths of these bands do not correspond to electronic transitions of any known atomic or molecular species, their nature has remained uncertain since their discovery almost a century ago. Here we report on unusually strong DIBs in optical spectra of the broad-lined Type Ic supernova SN 2012ap that exhibit changes in equivalent width over short (≲ 30 days) timescales. The 4428 Å and 6283 Å DIB features get weaker with time, whereas the 5780 Å feature shows a marginal increase. These nonuniform changes suggest that the supernova is interacting with a nearby source of DIBs and that the DIB carriers possess high ionization potentials, such as small cations or charged fullerenes. We conclude that moderate-resolution spectra of supernovae with DIB absorptions obtained within weeks of outburst could reveal unique information about the mass-loss environment of their progenitor systems and provide new constraints on the properties of DIB carriers.

  17. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms

    International Nuclear Information System (INIS)

    Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.

    2015-01-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector, through a computational procedure that can be reproduced in a standard research laboratory. This method consists of determining the detector geometric parameters by Monte Carlo simulation coupled with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • The detector is characterized using reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and compositions. • Good agreement
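
    A minimal sketch of the proposed loop: an evolution strategy adjusts detector geometry parameters until the simulated efficiencies match the reference FEPEs. Here simulate_fepe is a hypothetical toy stand-in for the actual Monte Carlo transport run, and the parameter ranges, energies, and reference values are all invented.

        import numpy as np

        rng = np.random.default_rng(5)

        def simulate_fepe(params, energies):
            """Stand-in for the Monte Carlo transport run (hypothetical toy
            model): FEPE falls with energy, scaled by crystal radius, length,
            and a dead-layer attenuation factor."""
            radius, length, dead_layer = params
            return (radius * length / 200.0) * energies**-0.9 \
                * np.exp(-dead_layer * 5.0)

        def characterize(reference, energies, pop=30, gens=60):
            """(mu+lambda)-style evolution strategy minimizing the squared
            mismatch between simulated and reference efficiencies."""
            parents = rng.uniform([2.0, 3.0, 0.0], [4.0, 8.0, 0.2], (pop, 3))
            for _ in range(gens):
                children = parents + rng.normal(0.0, 0.05, parents.shape)
                both = np.vstack([parents, children])
                cost = [np.sum((simulate_fepe(p, energies) - reference) ** 2)
                        for p in both]
                parents = both[np.argsort(cost)[:pop]]
            return parents[0]

        energies = np.array([0.06, 0.122, 0.662, 1.332])       # MeV
        reference = simulate_fepe([3.1, 6.2, 0.08], energies)  # "measured" FEPEs
        print(characterize(reference, energies).round(3))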

  18. Digital Controller Development Methodology Based on Real-Time Simulations with LabVIEW FPGA Hardware-Software Toolset

    Directory of Open Access Journals (Sweden)

    Tommaso Caldognetto

    2013-12-01

    Full Text Available In this paper, we exemplify the use of NI LabVIEW FPGA as a rapid prototyping environment for digital controllers. In our power electronics laboratory, it has been successfully employed in the development, debugging, and testing of different power converter controllers for microgrid applications. The paper shows how this high-level programming language, together with its target hardware platforms, including CompactRIO and Single-Board RIO systems, allows researchers and students to develop even complex applications in reasonable time. The availability of efficient drivers for the considered hardware platforms frees users from the burden of low-level programming. At the same time, the high-level programming approach facilitates software re-use, allowing the laboratory know-how to grow steadily over time. Furthermore, it allows hardware-in-the-loop real-time simulation, which proved to be effective and safe in debugging even complex hardware and software co-designed controllers. To illustrate the effectiveness of these hardware-software toolsets and of the methodology based upon them, two case studies are
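
    The hardware-in-the-loop idea mentioned above can be sketched in a few lines: a digital controller (the part that would be deployed on the CompactRIO/Single-Board RIO target) is closed each control period around a simulated plant. The following sketch uses an averaged buck-converter model and a PI voltage loop; all component values and gains are invented for illustration and have no connection to the paper's case studies.

```python
# Minimal fixed-step hardware-in-the-loop style loop: a PI voltage controller
# (the part that would run on the FPGA target) closed around a simulated
# averaged buck-converter plant. Values are illustrative, not from the paper.
DT = 1e-5                      # control/simulation step (s)
L, C, R = 100e-6, 470e-6, 5.0  # inductor (H), capacitor (F), load (ohm)
VIN, VREF = 24.0, 12.0         # input and reference voltages (V)
KP, KI = 0.05, 40.0            # PI gains (illustrative)

iL = vout = integ = 0.0
for k in range(20000):         # 0.2 s of simulated time
    # --- controller side (would execute on the RIO target) ---
    err = VREF - vout
    integ += err * DT
    duty = min(max(KP * err + KI * integ, 0.0), 1.0)   # saturated duty cycle
    # --- plant side (real-time simulation of the converter) ---
    diL = (duty * VIN - vout) / L
    dv = (iL - vout / R) / C
    iL += diL * DT
    vout += dv * DT
print(f"steady-state output ~ {vout:.2f} V (target {VREF} V)")
```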

  19. A methodological study of environmental simulation in architecture and engineering. Integrating daylight and thermal performance across the urban and building scales

    DEFF Research Database (Denmark)

    Sattrup, Peter Andreas; Strømann-Andersen, Jakob Bjørn

    2011-01-01

    This study presents a methodological and conceptual framework that allows for the integration and creation of knowledge across professional borders in the field of environmental simulation. The framework has been developed on the basis of interviews with leading international practitioners, key...... in pointing out the need for improving metrics, software and not least the performance of the built environment itself....

  20. Methodological framework for economical and controllable design of heat exchanger networks: Steady-state analysis, dynamic simulation, and optimization

    International Nuclear Information System (INIS)

    Masoud, Ibrahim T.; Abdel-Jabbar, Nabil; Qasim, Muhammad; Chebbi, Rachid

    2016-01-01

    Highlights: • HEN total annualized cost, heat recovery, and controllability are considered in the framework. • Steady-state and dynamic simulations are performed. • The effect of bypasses on total annualized cost and controllability is reported. • Optimum bypass fractions are found from closed-loop and open-loop control efforts. - Abstract: The problem of interaction between the economic design and the control system design of heat exchanger networks (HENs) is addressed in this work. Controllability issues are incorporated into the classical design of HENs. A new methodological framework is proposed to account for both the economics and the controllability of HENs. Two classical design methods are employed, namely Pinch and superstructure designs. Controllability measures such as the relative gain array (RGA) and singular value decomposition (SVD) are used. The proposed framework also presents a bypass placement strategy for optimal control of the designed network. A case study is used to test the applicability of the framework and to assess both economics and controllability. The results indicate that the superstructure design is more economical and controllable than the Pinch design. The controllability of the designed HEN is evaluated using the Aspen-HYSYS closed-loop dynamic simulator. In addition, a sensitivity analysis is performed to study the effect of bypass fractions on the total annualized cost and controllability of the designed HEN. The analysis shows that increasing any bypass fraction increases the total annualized cost. However, no such monotonic trend was observed for the control effort, quantified by the integral of the squared errors (ISE) between the controlled stream temperatures and their targets (set-points). An optimal ISE point is found at a certain bypass fraction, which does not correspond to the minimal total annualized cost. The bypass fractions are validated via open-loop simulation and the additional cooling and
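
    As a hedged sketch of the two controllability measures named above, the snippet below computes the relative gain array, Lambda = G .* (G^-1)^T (element-wise product), and the SVD condition number for an invented 2x2 steady-state gain matrix relating bypass fractions to controlled outlet temperatures; the numbers are illustrative only.

```python
import numpy as np

def rga(G):
    """Relative gain array: Lambda = G .* (inv(G)).T (element-wise product)."""
    return G * np.linalg.inv(G).T

# Illustrative 2x2 steady-state gain matrix (bypass fractions -> outlet temps)
G = np.array([[ 2.0, -0.5],
              [-0.3,  1.5]])

L = rga(G)
print("RGA:\n", L)                         # diagonal near 1 => weak loop interaction

U, s, Vt = np.linalg.svd(G)
print("singular values:", s)
print("condition number:", s[0] / s[-1])   # large value => hard to control
```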

  1. DISCOVERY OF THE BROAD-LINED TYPE Ic SN 2013cq ASSOCIATED WITH THE VERY ENERGETIC GRB 130427A

    Energy Technology Data Exchange (ETDEWEB)

    Xu, D.; Krühler, T.; Hjorth, J.; Malesani, D.; Fynbo, J. P. U.; Watson, D. J.; Geier, S. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 København Ø (Denmark); De Ugarte Postigo, A.; Thöne, C. C.; Sánchez-Ramírez, R. [Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía s/n, E-18008 Granada (Spain); Leloudas, G. [The Oskar Klein Centre, Department of Physics, Stockholm University, AlbaNova, SE-10691 Stockholm (Sweden); Cano, Z.; Jakobsson, P. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, IS-107 Reykjavik (Iceland); Schulze, S. [Departamento de Astronomía y Astrofísica, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22 (Chile); Kaper, L. [Astronomical Institute Anton Pannekoek, University of Amsterdam, Science Park 904, NL-1098 XH Amsterdam (Netherlands); Sollerman, J. [The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm (Sweden); Cabrera-Lavers, A. [Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife (Spain); Cao, C. [Department of Space Science and Physics, Shandong University at Weihai, Weihai, Shandong 264209 (China); Covino, S. [INAF/Brera Astronomical Observatory, via Bianchi 46, I-23807 Merate (Italy); Flores, H., E-mail: dong@dark-cosmology.dk [Laboratoire Galaxies Etoiles Physique et Instrumentation, Observatoire de Paris, 5 place Jules Janssen, F-92195 Meudon (France); and others

    2013-10-20

    Long-duration gamma-ray bursts (GRBs) at z < 1 are found in most cases to be accompanied by bright, broad-lined Type Ic supernovae (SNe Ic-BL). The highest-energy GRBs are mostly located at higher redshifts, where the associated SNe are hard to detect observationally. Here, we present early and late observations of the optical counterpart of the very energetic GRB 130427A. Despite its moderate redshift, z = 0.3399 ± 0.0002, GRB 130427A is at the high end of the GRB energy distribution, with an isotropic-equivalent energy release of E_iso ∼ 9.6 × 10^53 erg, more than an order of magnitude more energetic than other GRBs with spectroscopically confirmed SNe. In our dense photometric monitoring, we detect excess flux in the host-subtracted r-band light curve, consistent with that expected from an emerging SN, ∼0.2 mag fainter than the prototypical SN 1998bw. A spectrum obtained around the time of the SN peak (16.7 days after the GRB) reveals broad undulations typical of SNe Ic-BL, confirming the presence of an SN, designated SN 2013cq. The spectral shape and early peak time are similar to those of the high expansion velocity SN 2010bh associated with GRB 100316D. Our findings demonstrate that high-energy, long-duration GRBs, commonly detected at high redshift, can also be associated with SNe Ic-BL, pointing to a common progenitor mechanism.

  2. Superluminous Transients at AGN Centers from Interaction between Black Hole Disk Winds and Broad-line Region Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Moriya, Takashi J.; Tanaka, Masaomi; Ohsuga, Ken [Division of Theoretical Astronomy, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Morokuma, Tomoki, E-mail: takashi.moriya@nao.ac.jp [Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan)

    2017-07-10

    We propose that superluminous transients that appear at the central regions of active galactic nuclei (AGNs), such as CSS100217:102913+404220 (CSS100217) and PS16dtm, which reach near- or super-Eddington luminosities of the central black holes, are powered by the interaction between accretion-disk winds and clouds in the broad-line regions (BLRs) surrounding them. If the disk luminosity temporarily increases by, e.g., limit-cycle oscillations, leading to a powerful radiatively driven wind, strong shock waves propagate in the BLR. Because the dense clouds in AGN BLRs typically have densities similar to those found in SNe IIn, strong radiative shocks emerge and efficiently convert the ejecta kinetic energy to radiation. As a result, transients similar to SNe IIn can be observed at AGN central regions. Since a typical black hole disk-wind velocity is ≃0.1c, where c is the speed of light, the ejecta kinetic energy is expected to be ≃10^52 erg when ≃1 M_⊙ is ejected. This kinetic energy is transformed into radiation energy on the timescale for the wind to sweep up a mass similar to its own in the BLR, which is a few hundred days. Therefore, both the luminosities (∼10^44 erg s^-1) and the timescales (∼100 days) of the superluminous transients from AGN central regions match those expected in our interaction model. If CSS100217 and PS16dtm are related to AGN activities triggered by limit-cycle oscillations, they will become bright again in the coming years or decades.
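
    The energy and luminosity scales quoted above follow from simple kinematics; the back-of-envelope check below (not from the paper) evaluates E_kin = M v^2 / 2 for v ≃ 0.1c and M ≃ 1 M_⊙, and the mean luminosity if that energy is radiated over a few hundred days.

```python
# Back-of-envelope check of the numbers quoted in the abstract.
C = 2.998e10          # speed of light (cm/s)
MSUN = 1.989e33       # solar mass (g)

v = 0.1 * C           # disk-wind velocity
M = 1.0 * MSUN        # ejected mass
E_kin = 0.5 * M * v**2
print(f"E_kin ~ {E_kin:.1e} erg")          # ~9e51 erg, i.e. ~1e52 erg

# If that energy is radiated over a few hundred days, the mean luminosity is:
t = 300 * 86400.0     # ~300 days in seconds
print(f"L ~ {E_kin / t:.1e} erg/s")        # ~3e44 erg/s, matching ~1e44 scale
```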

  3. NuSTAR reveals the Comptonizing corona of the broad-line radio galaxy 3C 382

    Energy Technology Data Exchange (ETDEWEB)

    Ballantyne, D. R.; Bollenbacher, J. M. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Brenneman, L. W. [Harvard-Smithsonian CfA, 60 Garden Street MS-67, Cambridge, MA 02138 (United States); Madsen, K. K.; Baloković, M.; Harrison, F. A.; Walton, D. J. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Boggs, S. E. [Space Science Laboratory, University of California, Berkeley, CA 94720 (United States); Christensen, F. E.; Craig, W. W. [DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Gandhi, P. [Department of Physics, University of Durham, South Road, Durham DH1 3LE (United Kingdom); Hailey, C. J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Lohfink, A. M. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Marinucci, A. [Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, via della Vasca Navale 84, I-00146 Roma (Italy); Markwardt, C. B.; Zhang, W. W. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Stern, D., E-mail: david.ballantyne@physics.gatech.edu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States)

    2014-10-10

    Broad-line radio galaxies (BLRGs) are active galactic nuclei that produce powerful, large-scale radio jets but appear as Seyfert 1 galaxies in their optical spectra. In the X-ray band, BLRGs also appear like Seyfert galaxies, but with flatter spectra and weaker reflection features. One explanation for these properties is that the X-ray continuum is diluted by emission from the jet. Here, we present two NuSTAR observations of the BLRG 3C 382 that show clear evidence that the continuum of this source is dominated by thermal Comptonization, as in Seyfert 1 galaxies. The two observations were separated by over a year and found 3C 382 in different states separated by a factor of 1.7 in flux. The lower-flux spectrum has a photon index of Γ = 1.68 (+0.03/−0.02), while the photon index of the higher-flux spectrum is Γ = 1.78 (+0.02/−0.03). Thermal and anisotropic Comptonization models provide an excellent fit to both spectra and show that the coronal plasma cooled from kT_e = 330 ± 30 keV in the low-flux data to 231 (+50/−88) keV in the high-flux observation. This cooling behavior is typical of the Comptonizing coronae in Seyfert galaxies and is distinct from the variations observed in jet-dominated sources. In the high-flux observation, simultaneous Swift data are leveraged to obtain a broadband spectral energy distribution, which indicates that the corona intercepts ∼10% of the optical- and ultraviolet-emitting accretion disk. 3C 382 exhibits very weak reflection features, with no detectable relativistic Fe Kα line, which may be best explained by an outflowing corona combined with an ionized inner accretion disk.

  4. THE LICK AGN MONITORING PROJECT: BROAD-LINE REGION RADII AND BLACK HOLE MASSES FROM REVERBERATION MAPPING OF Hβ

    International Nuclear Information System (INIS)

    Bentz, Misty C.; Walsh, Jonelle L.; Barth, Aaron J.; Baliber, Nairn; Bennert, Vardha Nicola; Greene, Jenny E.; Hidas, Marton G.; Canalizo, Gabriela; Hiner, Kyle D.; Filippenko, Alexei V.; Ganeshalingam, Mohan; Lee, Nicholas; Li, Weidong; Serduke, Frank J. D.; Silverman, Jeffrey M.; Steele, Thea N.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Sakata, Yu

    2009-01-01

    We have recently completed a 64-night spectroscopic monitoring campaign at the Lick Observatory 3-m Shane telescope with the aim of measuring the masses of the black holes in 12 nearby (z < 0.05) Seyfert 1 galaxies with expected masses in the range ∼10^6-10^7 M_sun and also the well-studied nearby active galactic nucleus (AGN) NGC 5548. Nine of the objects in the sample (including NGC 5548) showed optical variability of sufficient strength during the monitoring campaign to allow a time lag to be measured between the continuum fluctuations and the response to these fluctuations in the broad Hβ emission. We present here the light curves for all the objects in this sample and the subsequent Hβ time lags for the nine objects where these measurements were possible. The Hβ lag time is directly related to the size of the broad-line region (BLR) in AGNs, and by combining the Hβ lag time with the measured width of the Hβ emission line in the variable part of the spectrum, we determine the virial mass of the central supermassive black hole in these nine AGNs. The absolute calibration of the black hole masses is based on the normalization derived by Onken et al., which brings the masses determined by reverberation mapping into agreement with the local M_BH-σ* relationship for quiescent galaxies. We also examine the time lag response as a function of velocity across the Hβ line profile for six of the AGNs. The analysis of four leads to rather ambiguous results with relatively flat time lags as a function of velocity. However, SBS 1116+583A exhibits a symmetric time lag response around the line center reminiscent of simple models for circularly orbiting BLR clouds, and Arp 151 shows an asymmetric profile that is most easily explained by a simple gravitational infall model. Further investigation will be necessary to fully understand the constraints placed on the physical models of the BLR by the velocity-resolved response in these objects.
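
    The virial mass estimate described above combines the Hβ lag (which sets the BLR radius, R = cτ) with the line width through M_BH = f cτ (ΔV)^2 / G. The sketch below evaluates this with the f ≈ 5.5 normalization of Onken et al. and an invented lag and line width; the numbers are illustrative, not values from the campaign.

```python
# Virial black hole mass from an Hbeta lag and line width:
#   M_BH = f * c * tau * dV**2 / G
# f ~ 5.5 is the Onken et al. normalization mentioned in the abstract;
# tau and dV below are illustrative, not values from the paper.
G = 6.674e-8          # cm^3 g^-1 s^-2
C = 2.998e10          # cm/s
MSUN = 1.989e33       # g
DAY = 86400.0         # s

f = 5.5               # virial scale factor (Onken et al. calibration)
tau = 5.0 * DAY       # Hbeta lag -> BLR radius R = c * tau
dV = 1.5e8            # line dispersion, 1500 km/s in cm/s

M_bh = f * C * tau * dV**2 / G
print(f"M_BH ~ {M_bh / MSUN:.2e} M_sun")   # ~1.2e7 M_sun for these numbers
```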

  5. Response Surface Methodology and Aspen Plus Integration for the Simulation of the Catalytic Steam Reforming of Ethanol

    Directory of Open Access Journals (Sweden)

    Bernay Cifuentes

    2017-01-01

    Full Text Available The steam reforming of ethanol (SRE) on a bimetallic RhPt/CeO2 catalyst was evaluated by the integration of Response Surface Methodology (RSM) and Aspen Plus (version 9.0, Aspen Tech, Burlington, MA, USA, 2016). First, the effect of the Rh-Pt weight ratio (1:0, 3:1, 1:1, 1:3, and 0:1) on the performance of SRE over RhPt/CeO2 was assessed between 400 and 700 °C at a stoichiometric steam/ethanol molar ratio of 3. RSM enabled modeling of the system and identification of a maximum yield of 4.2 mol H2/mol EtOH (at 700 °C with the Rh0.4Pt0.4/CeO2 catalyst). The mathematical models were integrated into Aspen Plus through Excel in order to simulate a process involving SRE, H2 purification, and electricity production in a fuel cell (FC). An energy sensitivity analysis of the process was performed in Aspen Plus, and the information obtained was used to generate new response surfaces. The response surfaces demonstrated that an increase in H2 production requires more energy consumption in the steam reforming of ethanol. However, increased H2 production also translates into more energy production in the fuel cell, which increases the overall efficiency of the system. The minimum H2 yield needed to make the system energetically sustainable was identified as 1.2 mol H2/mol EtOH. According to the results of the integration of the RSM models into Aspen Plus, the system using Rh0.4Pt0.4/CeO2 can produce a maximum net energy of 742 kJ/mol H2, of which 40% could be converted into electricity in the FC (297 kJ/mol H2 produced). The remaining energy can be recovered as heat.
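
    The RSM step described above amounts to fitting a second-order polynomial response surface to designed experiments and locating its optimum. The sketch below (with invented design points, not the paper's data) fits such a surface for H2 yield as a function of temperature and Rh fraction by least squares and reports the grid maximum.

```python
import numpy as np

# Illustrative (invented) design points: temperature (C), Rh fraction of the
# Rh-Pt loading, and measured H2 yield (mol H2 / mol EtOH).
T = np.array([400, 400, 550, 550, 700, 700, 550, 700, 400])
x = np.array([0.0, 1.0, 0.5, 0.25, 0.5, 1.0, 0.75, 0.0, 0.5])
y = np.array([1.1, 1.3, 2.9, 2.6, 4.1, 3.6, 2.8, 3.2, 1.4])

# Second-order response surface: y ~ b0 + b1*T + b2*x + b3*T^2 + b4*x^2 + b5*T*x
A = np.column_stack([np.ones_like(T), T, x, T**2, x**2, T * x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted surface on a grid and report its maximum
Tg, xg = np.meshgrid(np.linspace(400, 700, 61), np.linspace(0, 1, 41))
yg = (beta[0] + beta[1]*Tg + beta[2]*xg + beta[3]*Tg**2
      + beta[4]*xg**2 + beta[5]*Tg*xg)
i = np.unravel_index(np.argmax(yg), yg.shape)
print(f"max yield ~ {yg[i]:.2f} at T = {Tg[i]:.0f} C, Rh fraction = {xg[i]:.2f}")
```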

  6. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    Science.gov (United States)

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information system (HIS) safety has emerged as a significant concern for governments. Recently, research has documented that HIS can be implicated in the harm and death of patients, and researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to system release, including the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  7. Methodology to evaluate the crack growth rate by stress corrosion cracking in dissimilar metals weld in simulated environment of PWR nuclear reactor

    International Nuclear Information System (INIS)

    Paula, Raphael G.; Figueiredo, Celia A.; Rabelo, Emerson G.

    2013-01-01

    Inconel alloy weld metal is widely used to join dissimilar metals in nuclear reactor applications. Failures of welded components have recently been observed in plants, which has triggered an international effort to determine reliable data on the stress corrosion cracking behavior of this material in the reactor environment. The objective of this work is to develop a methodology to determine the crack growth rate caused by stress corrosion in Inconel alloy 182, using compact tensile (CT) specimens in a simulated PWR environment. (author)

  8. Application of the spine-layer jet radiation model to outbursts in the broad-line radio galaxy 3C 120

    Science.gov (United States)

    Janiak, M.; Sikora, M.; Moderski, R.

    2016-05-01

    We present a detailed Fermi/LAT data analysis for the broad-line radio galaxy 3C 120. This source has recently entered a state of increased γ-ray activity, which manifested itself in two major flares detected by Fermi/LAT in 2014 September and 2015 April with no significant flux changes reported at other wavelengths. We analyse the available data focusing our attention on the aforementioned outbursts. We find very fast variability time-scales during flares (of the order of hours) together with a significant γ-ray flux increase. We show that the ∼6.8 yr averaged γ-ray emission of 3C 120 is likely a sum of the external-radiation Compton and the synchrotron self-Compton radiative components. To address the problem of violent γ-ray flares and fast variability, we model the jet radiation by dividing the jet structure into two components: a wide and relatively slow outer layer and a fast, narrow spine. We show that, with the addition of the fast spine occasionally bent towards the observer, we are able to explain the observed spectral energy distribution of 3C 120 during flares, with Compton upscattering of broad-line region and dusty torus photons as the main γ-ray emission mechanism.

  9. A methodology of selection of exercises for operator training on a control room simulator and its application to the data bank of exercises at the Dukovany NPP

    International Nuclear Information System (INIS)

    Holy, J.

    2005-07-01

    The report describes the preparation of a methodology for the selection of scenarios to be used during operator training on a full-scope simulator. The scenarios are selected from a data bank of scenarios, which is under preparation based on feedback from the operational history and theoretical analyses. The new methodology takes into account three basic attributes defining the priority for use within the training programme: frequency of occurrence, safety-related significance, and difficulty. Each attribute is scored, and a joint score then determines the priority of including the scenario in the training programme. The methodology was applied to the data bank of scenarios for the simulation of abnormal states and incidents trained on the up-to-date simulator of the Dukovany NPP, and the results of this pilot application were made available to the Dukovany operator training staff as a tool for the preparation of training plans for the years to come. The results of a PSA study are used for a non-trivial selection of the scenarios.
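
    A minimal sketch of the joint-scoring idea follows; the 1-5 scales, the attribute weights, and the example scenarios are assumptions for illustration, not the report's actual scoring scheme.

```python
# Illustrative joint scoring of simulator training scenarios on the three
# attributes named in the report; scales (1-5) and weights are assumptions.
scenarios = {
    "loss of feedwater":         {"frequency": 4, "safety": 5, "difficulty": 3},
    "turbine trip":              {"frequency": 5, "safety": 3, "difficulty": 2},
    "steam generator tube leak": {"frequency": 2, "safety": 5, "difficulty": 5},
    "loss of offsite power":     {"frequency": 3, "safety": 4, "difficulty": 4},
}
weights = {"frequency": 0.3, "safety": 0.5, "difficulty": 0.2}

def joint_score(attrs):
    # Weighted sum of the three attribute scores
    return sum(weights[k] * v for k, v in attrs.items())

ranked = sorted(scenarios.items(), key=lambda kv: joint_score(kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{joint_score(attrs):.1f}  {name}")
```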

  10. Stochastic techno-economic assessment based on Monte Carlo simulation and the Response Surface Methodology: The case of an innovative linear Fresnel CSP (concentrated solar power) system

    International Nuclear Information System (INIS)

    Bendato, Ilaria; Cassettari, Lucia; Mosca, Marco; Mosca, Roberto

    2016-01-01

    Combining technological solutions with investment profitability is a critical aspect in designing both traditional and innovative renewable power plants. Often, the introduction of new advanced-design solutions, although technically interesting, does not generate adequate revenue to justify their utilization. In this study, an innovative methodology is developed that aims to satisfy both targets. On the one hand, considering all of the feasible plant configurations, it allows the analysis of the investment in a stochastic regime using the Monte Carlo method. On the other hand, the impact of every technical solution on the economic performance indicators can be measured by using regression meta-models built according to the theory of Response Surface Methodology. This approach enables the design of a plant configuration that generates the best economic return over the entire life cycle of the plant. This paper illustrates an application of the proposed methodology to the evaluation of design solutions using an innovative linear Fresnel Concentrated Solar Power system. - Highlights: • A stochastic methodology for solar plants investment evaluation. • Study of the impact of new technologies on the investment results. • Application to an innovative linear Fresnel CSP system. • A particular application of Monte Carlo simulation and response surface methodology.
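
    As a hedged sketch of the stochastic investment analysis (not the authors' model), the snippet below Monte Carlo samples a few uncertain inputs of a generic solar plant investment and reports the resulting NPV distribution; all distributions and figures are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                   # Monte Carlo trials
years, rate = 25, 0.06                        # plant life, discount rate

# Uncertain inputs (all values illustrative, not from the paper)
capex = rng.normal(100e6, 10e6, N)            # EUR
energy = rng.normal(90e3, 8e3, N)             # MWh/year
tariff = rng.uniform(90.0, 130.0, N)          # EUR/MWh
opex = 0.02 * capex                           # EUR/year

annuity = (1 - (1 + rate) ** -years) / rate   # present-value factor
npv = -capex + (energy * tariff - opex) * annuity

print(f"mean NPV: {npv.mean() / 1e6:.1f} MEUR")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")
print("5th-95th percentile (MEUR):", np.percentile(npv, [5, 95]) / 1e6)
```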

  11. INVESTIGATING THE COMPLEX X-RAY SPECTRUM OF A BROAD-LINE 2MASS RED QUASAR: XMM-NEWTON OBSERVATION OF FTM 0830+3759

    International Nuclear Information System (INIS)

    Piconcelli, Enrico; Nicastro, Fabrizio; Fiore, Fabrizio; Vignali, Cristian; Bianchi, Stefano; Miniutti, Giovanni

    2010-01-01

    We report results from a 50 ks XMM-Newton observation of the dust-reddened broad-line quasar FTM 0830+3759 (z = 0.413), selected from the Faint Images of the Radio Sky at Twenty cm/Two Micron All Sky Survey red quasar survey. For this active galactic nucleus (AGN), a very short 9 ks Chandra exposure had suggested a feature-rich X-ray spectrum, and Hubble Space Telescope images revealed a very disturbed host galaxy morphology. Contrary to classical, optically selected quasars, the X-ray properties of red (i.e., with J − K_s > 1.7 and R − K_s > 4.0) broad-line quasars are still quite unexplored, although there is a growing consensus that, due to moderate obscuration, these objects can offer a unique view of spectral components typically swamped by the AGN light in normal, blue quasars. The XMM-Newton observation discussed here has definitely confirmed the complexity of the X-ray spectrum, revealing the presence of a cold (or mildly ionized) absorber with N_H ∼ 10^22 cm^-2 along the line of sight to the nucleus and a Compton reflection component accompanied by an intense Fe Kα emission line in this quasar with L_2-10 keV ∼ 5 × 10^44 erg s^-1. A soft-excess component is also required by the data. The match between the column density derived from our spectral analysis and that expected on the basis of the reddening due to dust suggests the possibility that both absorptions occur in the same medium. FTM 0830+3759 is characterized by an extinction/absorption-corrected X-ray-to-optical flux ratio α_ox = −2.3, which is steeper than expected on the basis of its UV luminosity. These findings indicate that the X-ray properties of FTM 0830+3759 differ from those typically observed for optically selected broad-line quasars with comparable hard X-ray luminosity.

  12. STAR FORMATION IN SELF-GRAVITATING DISKS IN ACTIVE GALACTIC NUCLEI. II. EPISODIC FORMATION OF BROAD-LINE REGIONS

    International Nuclear Information System (INIS)

    Wang, Jian-Min; Du, Pu; Ge, Jun-Qiang; Hu, Chen; Baldwin, Jack A.; Ferland, Gary J.

    2012-01-01

    This is the second in a series of papers discussing the process and effects of star formation in the self-gravitating disks around the supermassive black holes in active galactic nuclei (AGNs). We have previously suggested that warm skins form above the star-forming (SF) disk through the diffusion of warm gas driven by supernova explosions. Here we study the evolution of the warm skins when they are exposed to the powerful radiation from the inner part of the accretion disk. The skins are initially heated to the Compton temperature, forming a Compton atmosphere (CAS) whose subsequent evolution is divided into four phases. Phase I is a period of pure accumulation supplied by the SF disk. During phase II, clouds begin to form due to line cooling and sink to the SF disk. Phase III is a period in which clouds are prevented from sinking to the SF disk through dynamic interaction between the clouds and the CAS, because of the CAS overdensity driven by the continuous injection of warm gas from the SF disk. Finally, phase IV is an inevitable collapse of the entire CAS through line cooling. This CAS evolution drives the episodic appearance of broad-line regions (BLRs). We follow the formation of cold clouds through the thermal instability of the CAS during phases II and III using linear analysis. Since the clouds are produced inside the CAS, the initial spatial distribution and angular momentum of newly formed clouds naturally follow the CAS dynamics, producing a flattened disk of clouds. The number of clouds in phases II and III can be estimated, as well as the filling factor of clouds in the BLR. Since the cooling function depends on the metallicity, the metallicity gradients that originate in the SF disk give rise to different cloud properties in different radial regions. We find from the instability analysis that clouds have column densities N_H ≲ 10^22 cm^-2 in the metal-rich regions whereas they have N_H ≳ 10^22 cm^-2 in the metal-poor regions. The metal-rich clouds

  13. Development of risk assessment methodology of decay heat removal function against external hazards for sodium-cooled fast reactors. (3) Numerical simulations of forest fire spread and smoke transport as an external hazard assessment methodology development

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamano, Hidemasa

    2015-01-01

    As part of the development of risk assessment methodologies against external hazards, a new methodology to assess forest fire hazards is being developed. The frequency and consequences of a forest fire are analyzed to obtain the hazard intensity curve, and then a Level 1 probabilistic safety assessment is performed to obtain the conditional core damage probability due to the challenges posed by the forest fire. 'Heat', 'flame', 'smoke' and 'flying objects' are the challenges to a nuclear power plant. For a sodium-cooled fast reactor, decay heat removal under accident conditions relies on air as the ultimate heat sink, so the challenge from 'smoke' will potentially fall on the air filter of the system. In this paper, numerical simulations of forest fire propagation and smoke transport were performed with sensitivity studies on weather conditions, and the effect of the smoke on the air filter was quantitatively evaluated. Forest fire propagation simulations were performed using the FARSITE code. The temporal increase of the forest fire spread area and the position of the frontal fireline are obtained by the simulation, and 'reaction intensity' and 'frontal fireline intensity', as indexes of 'heat', are obtained as well. The boundary of the fire spread area is shaped like an ellipse on the terrain, and the boundary length increases with time and fire spread. Sensitivity analyses on the weather conditions of wind, temperature, and humidity were performed, and it was found that the 'forest fire spread rate' and 'frontal fireline intensity' depend strongly on wind speed and humidity. Smoke transport simulations were performed with the ALOFT-FT code, in which the three-dimensional spatial distribution of smoke density, especially of the particulate matter fractions PM2.5 and PM10, is evaluated. The snapshot outputs, namely 'reaction intensity' and 'position of the frontal fireline', from the sensitivity studies with FARSITE were directly utilized as input data for ALOFT-FT, whereas it is assumed that the

  14. CHEMICAL EVOLUTION OF THE UNIVERSE AT 0.7 < z < 1.6 DERIVED FROM ABUNDANCE DIAGNOSTICS OF THE BROAD-LINE REGION OF QUASARS

    Energy Technology Data Exchange (ETDEWEB)

    Sameshima, H. [Laboratory of Infrared High-resolution Spectroscopy, Koyama Astronomical Observatory, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-ku, Kyoto 603-8555 (Japan); Yoshii, Y.; Kawara, K., E-mail: sameshima@cc.kyoto-su.ac.jp [Institute of Astronomy, School of Science, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan)

    2017-01-10

    We present an analysis of Mg II λ2798 and Fe II UV emission lines for archival Sloan Digital Sky Survey (SDSS) quasars to explore the diagnostics of the magnesium-to-iron abundance ratio in a broad-line region cloud. Our sample consists of 17,432 quasars selected from the SDSS Data Release 7 with a redshift range of 0.72 < z < 1.63. A strong anticorrelation between the Mg II equivalent width (EW) and the Eddington ratio is found, while only a weak positive correlation is found between the Fe II EW and the Eddington ratio. To investigate the origin of these differing behaviors of the Mg II and Fe II emission lines, we perform photoionization calculations using the Cloudy code, where constraints from recent reverberation mapping studies are considered. We find from the calculations that (1) the Mg II and Fe II emission lines are created in different regions of a photoionized cloud, and (2) their EW correlations with the Eddington ratio can be explained by just changing the cloud gas density. These results indicate that the Mg II/Fe II flux ratio, which has been used as a first-order proxy for the Mg/Fe abundance ratio in chemical evolution studies with quasar emission lines, depends largely on the cloud gas density. By correcting for this density dependence, we propose new diagnostics of the Mg/Fe abundance ratio for a broad-line region cloud. In comparing the derived Mg/Fe abundance ratios with chemical evolution models, we suggest that α-enrichment by mass loss from metal-poor intermediate-mass stars occurred at z ∼ 2 or earlier.

  15. The case for cases B and C: intrinsic hydrogen line ratios of the broad-line region of active galactic nuclei, reddenings, and accretion disc sizes

    Science.gov (United States)

    Gaskell, C. Martin

    2017-05-01

    Low-redshift active galactic nuclei (AGNs) with extremely blue optical spectral indices are shown to have a mean, velocity-averaged, broad-line Hα/Hβ ratio of ≈2.72 ± 0.04, consistent with a Baker-Menzel Case B value. Comparison of a wide range of properties of the very bluest AGNs with those of a luminosity-matched subset of the Dong et al. blue AGN sample indicates that the only difference is the internal reddening. Ultraviolet fluxes are brighter for the bluest AGNs by an amount consistent with the flat AGN reddening curve of Gaskell et al. The lack of a significant difference in the GALEX (far-ultraviolet minus near-ultraviolet) colour index strongly rules out a steep Small Magellanic Cloud-like reddening curve and also argues against an intrinsically harder spectrum for the bluest AGNs. For very blue AGNs, the Lyα/Hβ ratio is also consistent with the Case B value. The Case B ratios provide strong support for the self-shielded broad-line model of Gaskell, Klimek & Nazarova. It is proposed that the greatly enhanced Lyα/Hβ ratio at very high velocities is a consequence of continuum fluorescence in the Lyman lines (Case C). Reddening of AGNs means that the far-UV luminosity is often underestimated by up to an order of magnitude. This is a major factor causing the discrepancies between measured accretion disc sizes and the predictions of simple accretion disc theory. Dust covering fractions for most AGNs are lower than has been estimated. The total mass in lower-mass supermassive black holes must be greater than hitherto estimated.
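
    Using the Case B ratio as the intrinsic baseline, an observed broad-line Hα/Hβ ratio translates into a reddening estimate via E(B−V) = 2.5 log10(R_obs/2.72) / (k(Hβ) − k(Hα)). The sketch below uses standard CCM-like extinction-curve values as placeholders (not the flat AGN reddening curve discussed in the abstract), so the numbers are illustrative only.

```python
import math

# Intrinsic (Case B) broad-line ratio from the abstract, and assumed
# extinction-curve values at Halpha/Hbeta (CCM-like placeholders, NOT the
# flat AGN curve of Gaskell et al.):
R_INT = 2.72
K_HA, K_HB = 2.53, 3.61     # k(lambda) = A(lambda) / E(B-V)

def ebv_from_balmer(r_obs, r_int=R_INT):
    """E(B-V) from an observed broad-line Halpha/Hbeta ratio."""
    return 2.5 * math.log10(r_obs / r_int) / (K_HB - K_HA)

for r_obs in (2.72, 3.1, 4.0, 5.0):
    ebv = ebv_from_balmer(r_obs)
    print(f"Ha/Hb = {r_obs:.2f} -> E(B-V) = {ebv:.2f} mag, "
          f"A(V) ~ {3.1 * ebv:.2f} mag")
```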

  16. Methodology for the computational simulation of the components in photovoltaic systems; Desarrollo de herramientas para la prediccion del comportamiento de sistemas fotovoltaicos

    Energy Technology Data Exchange (ETDEWEB)

    Galimberti, P.; Arcuri, G.; Manno, R.; Fasulo, A. J.

    2004-07-01

    This work presents a methodology for the computational simulation of the components that comprise photovoltaic systems, in order to study the behavior of each component and its relevance to the operation of the whole system, thereby supporting decisions in the selection and improvement of these components. As a result of the simulation, files with the values of the different variables that characterize the behaviour of the components are obtained. Different kinds of plots can be drawn to present this information in summarized form. Finally, the results are discussed in comparison with actual data for the city of Rio Cuarto in Argentina (33.1° S latitude), and some advantages of the proposed method are mentioned. (Author)

  17. The generic methodology for verification and validation applied to medium range anti-tank simulation training devices

    NARCIS (Netherlands)

    Voogd, J.M.; Roza, M.

    2015-01-01

    The Dutch Ministry of Defense (NL-MoD) has recently acquired an update of its medium-range anti-tank (MRAT) missile system, called the GILL. The update to the SPIKE Long Range (LR) weapon system is accompanied by the acquisition of new simulation training devices (STDs). These devices are bought

  18. Perceptions About the Present and Future of Surgical Simulation: A National Study of Mixed Qualitative and Quantitative Methodology.

    Science.gov (United States)

    Yiasemidou, Marina; Glassman, Daniel; Tomlinson, James; Song, David; Gough, Michael J

    Objective: Assess expert opinion on the current and future role of simulation in surgical education. Design: Expert opinion was sought through an externally validated questionnaire that was disseminated electronically. Participants: Heads of Schools of Surgery (HoS) (and deputies) and Training Program Directors (TPDs) (and deputies). Results: Simulation was considered a good training tool (HoS: 15/15, TPD: 21/21). The concept that simulation is useful mostly to novices and for basic skills acquisition was rejected (HoS: 15/15, TPDs: 21/21; HoS: 13/15, TPDs: 18/21). Further, simulation is considered suitable for teaching nontechnical skills (HoS: 13/15, TPDs: 20/21) and re-enacting stressful situations (HoS: 14/15, TPDs: 15/21). Most respondents also felt that education centers should be formally accredited (HoS: 12/15, TPDs: 16/21) and that consultant mentors should be appointed by every trust (HoS: 12/15, TPDs: 19/21). In contrast, there were mixed views on its use for trainee assessment (HoS: 6/15, TPDs: 14/21) and on whether it should be compulsory (HoS: 8/15, TPDs: 11/21). Conclusions: The use of simulation for the acquisition of both technical and nontechnical skills is strongly supported, while views on other applications (e.g., assessment) are conflicting. Further, the need for center accreditation and supervised, consultant-led teaching is highlighted. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  19. Use of the AHP methodology in system dynamics: Modelling and simulation for health technology assessments to determine the correct prosthesis choice for hernia diseases.

    Science.gov (United States)

    Improta, Giovanni; Russo, Mario Alessandro; Triassi, Maria; Converso, Giuseppe; Murino, Teresa; Santillo, Liberatina Carmela

    2018-05-01

    Health technology assessments (HTAs) are often difficult to conduct because the decision procedures of the HTA algorithm are often complex and not easy to apply, so their use is not always convenient or possible for the assessment of requests requiring a multidisciplinary approach. This paper aims to address this issue through a multi-criteria analysis based on the analytic hierarchy process (AHP). This methodology allows the decision maker to analyse and evaluate different alternatives and monitor their impact on the different actors during the decision-making process. The multi-criteria analysis is then implemented through a simulation model to overcome the limitations of the AHP methodology; simulations help decision-makers to make an appropriate decision and avoid unnecessary and costly attempts. Finally, a decision problem regarding the evaluation of two health technologies, namely two biological prostheses for infected incisional hernias, is analysed to assess the effectiveness of the model. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
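
    The AHP core referred to above reduces to computing the principal eigenvector of a pairwise-comparison matrix (the priority weights) and checking Saaty's consistency ratio. The sketch below does this for an invented 3x3 judgment matrix; it illustrates the technique, not the paper's actual comparisons.

```python
import numpy as np

# Illustrative pairwise-comparison matrix for three criteria (Saaty 1-9 scale);
# the judgments are invented, not taken from the paper.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalized principal right eigenvector
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()
print("weights:", np.round(w, 3))

# Consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
lam_max = vals.real[k]
CI = (lam_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print(f"lambda_max = {lam_max:.3f}, CR = {CI / RI:.3f} (acceptable if < 0.1)")
```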

  20. Advancements in reactor physics modelling methodology of Monte Carlo Burnup Code MCB dedicated to higher simulation fidelity of HTR cores

    International Nuclear Information System (INIS)

    Cetnar, Jerzy

    2014-01-01

    The recent development of MCB, the Monte Carlo Continuous Energy Burnup code, is directed towards the advanced description of modern reactors, including the double-heterogeneity structures that exist in HTRs. In this, we exploit the advantages of the MCB methodology in an integrated approach, where physics, neutronics, burnup, reprocessing, non-stationary process modeling (control rod operation) and refined spatial modeling are carried out in a single flow. This approach allows for the implementation of advanced statistical options such as analysis of error propagation, perturbation in the time domain, and sensitivity and source convergence analyses. It includes statistical analysis of the burnup process, emitted particle collection, thermal-hydraulic coupling, automatic power profile calculations, advanced procedures of burnup step normalization and enhanced post-processing capabilities. (author)

  1. Convection methodology for fission track annealing: direct and inverse numerical simulations in the multi-exponential case

    International Nuclear Information System (INIS)

    Miellou, J.C.; Igli, H.; Grivet, M.; Rebetez, M.; Chambaudet, A.

    1994-01-01

    In minerals, uranium fission tracks are sensitive to temperature and time; the consequence is that the etchable lengths are reduced. To simulate this phenomenon, at the last International Conference on Nuclear Tracks in Solids in Beijing in 1992, we proposed a convection model for fission track annealing based on a reaction scheme associated with a single activation energy. Moreover, a simple inverse method based on the resolution of an ordinary differential equation was described, making it possible to retrace the thermal history in this mono-exponential situation. The aim of this paper is to consider a more involved class of models involving multiple exponentials associated with several activation energies. We describe in this framework the modelling of the direct phenomenon and the resolution of the inverse problem. Results of numerical simulations and a comparison with the mono-exponential case are presented. 5 refs. (author)
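
    In the mono-exponential case, the model reduces to a single Arrhenius-type rate: the reduced track length obeys dr/dt = −A exp(−Ea/kT(t)) r along a prescribed thermal history T(t). The sketch below integrates this forward in time; the pre-exponential factor, activation energy, and cooling path are invented, not the paper's calibration.

```python
import math

# Mono-exponential (single activation energy) annealing sketch: the etchable
# length fraction r(t) decays at an Arrhenius rate set by the thermal history
# T(t). A and EA are invented constants, not the paper's calibration.
KB = 8.617e-5          # Boltzmann constant (eV/K)
A, EA = 1.0e13, 1.6    # pre-exponential (1/yr) and activation energy (eV)

def temperature(t_yr):
    """Prescribed thermal history: linear cooling 150 C -> 20 C over 10 Myr."""
    return 423.0 - (423.0 - 293.0) * min(t_yr / 1.0e7, 1.0)

r, dt, t = 1.0, 1.0e4, 0.0          # initial reduced length, step (yr), time
while t < 1.0e7:
    rate = A * math.exp(-EA / (KB * temperature(t)))
    r *= math.exp(-rate * dt)       # exact step for the linear ODE dr/dt = -k r
    t += dt
print(f"final reduced track length r = {r:.3f}")   # ~0.5 for these constants
```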

  2. Contribution to the electrothermal simulation in power electronics. Development of a simulation methodology applied to switching circuits under variable operating conditions; Contribution a la simulation electrothermique en electronique de puissance. Developpement d'une methode de simulation pour circuits de commutation soumis a des commandes variables

    Energy Technology Data Exchange (ETDEWEB)

    Vales, P.

    1997-03-19

    In modern hybrid or monolithic integrated power circuits, electrothermal effects can no longer be ignored. A methodology is proposed in order to simulate electrothermal effects in power circuits with a significant reduction of the computation time, while taking into account electrical and thermal time constants that are usually widely different. A supervising program, written in Fortran, uses system call sequences and manages an interactive dialog between a fast thermal simulator and a general electrical simulator. This explicit coupling process between two specific simulators requires a multi-task operating system. The developed software allows for the prediction of the electrothermal power-dissipation drift in the active areas of components and of thermally induced coupling effects between adjacent components. An application to the study of hard-switching circuits working under variable operating conditions is presented

  3. A methodology for developing high-integrity knowledge base using document analysis and ECPN matrix analysis with backward simulation

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1999-02-01

    When transitions occur in large systems such as nuclear power plants (NPPs) or industrial process plants, it is often difficult to diagnose them. Various computer-based operator-aiding systems have been developed in order to help operators diagnose the transitions of the plants. In the procedures for developing knowledge base systems such as operator-aiding systems, knowledge acquisition and knowledge base verification are the core activities. This dissertation describes a knowledge acquisition method and a knowledge base verification method for developing high-integrity knowledge base systems for NPP expert systems. Knowledge acquisition is one of the most difficult and time-consuming activities in developing knowledge base systems. There are two kinds of knowledge acquisition methods in view of knowledge sources. One is acquisition from human experts. This method, however, is not adequate for acquiring the knowledge of NPP expert systems because the number of experts is not sufficient. In this work, we propose a novel knowledge acquisition method based on document analysis. The knowledge base can be built correctly, rapidly, and partially automatically through this method, which is especially useful when it is difficult to find domain experts. The reliability of knowledge base systems depends on the quality of their knowledge base. Petri Nets have been used to verify knowledge bases due to their formal outputs. The methods using Petri Nets, however, are difficult to apply to large and complex knowledge bases because the net becomes very large and complex. Also, with Petri Nets, it is difficult to find the proper input patterns that make anomalies occur. In order to overcome this difficulty, in this work, anomaly candidate detection methods are developed based on Extended CPN (ECPN) matrix analysis. This work also defines the backward simulation of CPN to find compact input patterns for anomaly detection, which starts simulation from the anomaly candidates

  4. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  5. Nonperturbative quantum simulation of time-resolved nonlinear spectra: Methodology and application to electron transfer reactions in the condensed phase

    International Nuclear Information System (INIS)

    Wang Haobin; Thoss, Michael

    2008-01-01

    A quantum dynamical method is presented to accurately simulate time-resolved nonlinear spectra for complex molecular systems. The method combines the nonperturbative approach to describing nonlinear optical signals with the multilayer multiconfiguration time-dependent Hartree theory to calculate the laser-induced polarization for the overall field-matter system. A specific nonlinear optical signal is obtained by Fourier decomposition of the overall polarization. The performance of the method is demonstrated by applications to photoinduced ultrafast electron transfer reactions in mixed-valence compounds and at dye-semiconductor interfaces

  6. Evaluation of a new methodology to simulate damage and wear of polyethylene hip replacements subjected to edge loading in hip simulator testing.

    Science.gov (United States)

    Partridge, Susan; Tipper, Joanne L; Al-Hajjar, Mazen; Isaac, Graham H; Fisher, John; Williams, Sophie

    2018-05-01

    Wear and fatigue of polyethylene acetabular cups have been reported to play a role in the failure of total hip replacements. Hip simulator testing under a wide range of clinically relevant loading conditions is important. Edge loading of hip replacements can occur following impingement under extreme activities and can also occur during normal gait, where there is an offset deficiency and/or joint laxity. This study evaluated a hip simulator method that assessed wear and damage in polyethylene acetabular liners that were subjected to edge loading. The liners tested to evaluate the method were a currently manufactured crosslinked polyethylene acetabular liner and an aged conventional polyethylene acetabular liner. The acetabular liners were tested for 5 million standard walking cycles and following this 5 million walking cycles with edge loading. Edge loading conditions represented a separation of the centers of rotation of the femoral head and the acetabular liner during the swing phase, leading to loading of the liner rim on heel strike. Rim damage and cracking was observed in the aged conventional polyethylene liner. Steady-state wear rates assessed gravimetrically were lower under edge loading compared to standard loading. This study supports previous clinical findings that edge loading may cause rim cracking in liners, where component positioning is suboptimal or where material degradation is present. The simulation method developed has the potential to be used in the future to test the effect of aging and different levels of severity of edge loading on a range of cross-linked polyethylene materials. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 1456-1462, 2018. © 2017 Wiley Periodicals, Inc.

  7. Analysis and design of the SI-simulator software system for the VHTR-SI process by using the object-oriented analysis and object-oriented design methodology

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2008-01-01

    The SI-simulator is an application software system that simulates the dynamic behavior of the VHTR-SI process by the use of mathematical models. Object-oriented analysis (OOA) and object-oriented design (OOD) methodologies were employed for the SI-simulator system development. OOA is concerned with developing software engineering requirements and specifications that are expressed as a system's object model (which is composed of a population of interacting objects), as opposed to the traditional data or functional views of systems. OOD techniques are useful for the development of large, complex systems, and OOA/OOD methodology is usually employed to maximize the reusability and extensibility of a software system. In this paper, we present the design features of the SI-simulator software system obtained by using the OOA and OOD methodologies

  8. The acidic pH-induced structural changes in apo-CP43 by spectral methodologies and molecular dynamics simulations

    Science.gov (United States)

    Wang, Wang; Li, Xue; Wang, Qiuying; Zhu, Xixi; Zhang, Qingyan; Du, Linfang

    2018-01-01

    CP43 is closely associated with photosystem II and resides in plant thylakoid membranes. The acidic pH-induced structural changes of apo-CP43 were investigated by fluorescence spectroscopy, ANS binding, RLS spectroscopy, energy transfer experiments, acrylamide fluorescence quenching assays, and MD simulation. The fluorescence spectra indicated that the acid-induced structural changes follow a four-state model comprising the native state (N), a partially unfolded state (PU), a refolded state (R), and a molten-globule state (M). Analysis of the ANS spectra showed that the inner hydrophobic core is partially exposed to the surface below pH 2.0, also implying the existence of the molten-globule state. The RLS spectra showed aggregation of apo-CP43 around the pI (pH 4.5-4.0). The alterations of apo-CP43 secondary structure under the different acidic treatments were confirmed by FTIR spectra. The energy transfer experiments and quenching studies demonstrated that the structure at pH 4.0 was the loosest. The RMSF results suggested that the two termini play an important role in the acidic denaturation process. The distance between the two termini showed slight differences during the acid-induced unfolding process, in which both the N-terminus and the C-terminus played dominant roles, whereas the N-terminus accounted for the main part of the refolding process. All of the SASA values corresponded to the spectral results. The tertiary and secondary structures from the MD simulations indicated that part of the transmembrane α-helix was destroyed at low pH.

  9. Neutronics and thermal-hydraulics coupling: some contributions toward an improved methodology to simulate the initiating phase of a severe accident in a sodium fast reactor

    International Nuclear Information System (INIS)

    Guyot, Maxime

    2014-01-01

    This project is dedicated to the analysis and quantification of the biases associated with the computational methodology for simulating the initiating phase of severe accidents in sodium-cooled fast reactors. A deterministic approach is carried out to assess the consequences of a severe accident by adopting best-estimate design evaluations. An objective of this deterministic approach is to provide guidance for mitigating severe accident developments and re-criticalities through the implementation of adequate design measures. These studies are generally based on modern simulation techniques to test and verify a given design. The new approach developed in this project aims to improve the safety assessment of sodium-cooled fast reactors by decreasing the bias related to the deterministic analysis of severe accident scenarios. During the initiating phase, the subassembly wrapper tubes keep their mechanical integrity, and material disruption and dispersal is primarily one-dimensional. For this reason, the evaluation methodology for the initiating phase relies on a multiple-channel approach: typically, a channel represents an average pin in a subassembly or a group of similar subassemblies. In the multiple-channel approach, the core thermal-hydraulics model is composed of 1D or 2D channels, and the thermal-hydraulics model is coupled to a neutronics module to provide an estimate of the reactor power level. In this project, a new computational model based on a multi-physics coupling has been developed to extend the initiating-phase modeling. This model has been applied to obtain information unavailable up to now with regard to the neutronics and thermal-hydraulics models and their coupling. (author) [fr
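
    A minimal sketch of the kind of neutronics/thermal-hydraulics coupling discussed above (not the project's code): one-delayed-group point kinetics exchanges power and a Doppler-type temperature feedback with a one-node channel energy balance at every time step. All constants are invented.

```python
# Minimal neutronics / thermal-hydraulics coupling loop: point kinetics
# (one delayed group) exchanges power and temperature feedback with a
# one-node fuel/coolant channel each step. Constants are invented.
BETA, LAMB, GEN = 0.0035, 0.08, 1.0e-6   # delayed fraction, decay (1/s), gen. time (s)
ALPHA_D = -1.0e-5                        # fuel-temperature feedback (dk/k per K)
H, MC = 5.0e4, 2.0e5                     # heat-removal coeff (W/K), heat capacity (J/K)
P0, T0 = 1.0e7, 700.0                    # nominal power (W) and fuel temperature (K)
T_COOL = T0 - P0 / H                     # coolant temperature consistent with steady state

P, C, T = P0, BETA * P0 / (LAMB * GEN), T0   # start at equilibrium
rho_ext, dt = 0.0005, 1.0e-4                 # external reactivity step; time step (s)
for _ in range(200_000):                     # 20 s of transient
    rho = rho_ext + ALPHA_D * (T - T0)           # thermal feedback -> neutronics
    dP = ((rho - BETA) / GEN) * P + LAMB * C     # point kinetics, one delayed group
    dC = (BETA / GEN) * P - LAMB * C
    dT = (P - H * (T - T_COOL)) / MC             # neutronics power -> thermal model
    P, C, T = P + dP * dt, C + dC * dt, T + dT * dt

print(f"P/P0 = {P / P0:.3f}, fuel T = {T:.1f} K")   # settles toward ~1.25, ~750 K
```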

  10. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding of how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the most important items to keep in mind before opting for a simulation tool or before performing a simulation.

  11. A comprehensive review on the methodologies to simulate the nuclear fuel bundle for the thermal hydraulic experiments

    International Nuclear Information System (INIS)

    Vishnoi, A.K.; Chandraker, D.K.; Pal, A.K.; Vijayan, P.K.; Saha, D.

    2011-01-01

    The designer of a nuclear reactor system has to ensure its safety during normal operation as well as under accident conditions. This requires, among other things, a proper understanding of the various thermal hydraulic phenomena occurring in the reactor core. In a nuclear reactor core, the fuel elements are the heat source and highly loaded components of the reactor system; therefore their behaviour under normal and accident conditions must be extensively investigated. Data generation for critical heat flux (CHF) in full-scale bundles and parallel-channel instability studies with at least two full-size channels are required in order to evaluate the thermal margin and stability margin of the reactor. The complex nature of these phenomena calls for exhaustive experimental investigations. The Fuel Rod Cluster Simulator (FRCS) is a very important component required for the experimental investigation of the thermal hydraulic behaviour of reactor fuel elements under normal and accident conditions. This paper presents a comprehensive review of the FRCS, elaborating on the challenges and the important design aspects. Some of the main features and analysis results on the performance of the developed FRCS with respect to the actual nuclear fuel bundle are also presented. (author)

  12. A Combined Methodology for Landslide Risk Mitigation in Basilicata Region by Using LIDAR Technique and Rockfall Simulation

    Directory of Open Access Journals (Sweden)

    G. Colangelo

    2011-01-01

    Full Text Available Rockfalls represent a significant geohazard along the SS18 road of the Basilicata Region, Italy. The management of these rockfall hazards and the mitigation of the risk require innovative approaches and technologies. This paper discusses a hazard assessment strategy and risk mitigation for rockfalls in a section of the SS18, along the coast of Maratea, using the LIDAR technique and spatial modelling. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results of the simulations were used to define the intervention actions and the engineering strategy for the mitigation of the phenomena. Within two months, 260 linear meters of high-energy rockfall barriers for impact energies up to 3000 kJ were installed. After that, in agreement with the road authority, the SS18 road was reopened in a safe condition. The results represent valid decision support for choosing the most appropriate technical solution for slope strengthening and an example of good practice for cooperation between innovative technologies and field emergency management.

  13. In vitro dissolution methodology, mini-Gastrointestinal Simulator (mGIS), predicts better in vivo dissolution of a weak base drug, dasatinib.

    Science.gov (United States)

    Tsume, Yasuhiro; Takeuchi, Susumu; Matsui, Kazuki; Amidon, Gregory E; Amidon, Gordon L

    2015-08-30

    USP apparatus I and II are the gold standard methodologies for determining the in vitro dissolution profiles of test drugs. However, it is difficult to use in vitro dissolution results to predict in vivo dissolution, particularly the pH-dependent solubility of weak acid and weak base drugs, because the USP apparatus contains one vessel with a fixed pH for the test drug, limiting insight into the in vivo dissolution of weak acid and weak base drugs. This discrepancy underscores the need to develop new in vitro dissolution methodology that better predicts in vivo response, to assure the therapeutic efficacy and safety of oral drug products; hence the development of in vivo predictive dissolution (IPD) methodology. The major goals of in vitro dissolution are to ensure the performance of oral drug products and to support drug formulation design, including bioequivalence (BE). Orally administered anticancer drugs, such as dasatinib and erlotinib (tyrosine kinase inhibitors), are used to treat various types of cancer. These drugs are weak bases that exhibit pH-dependent solubility: high solubility in the acidic stomach and low solubility in the small intestine (pH > 6.0). Therefore, these drugs supersaturate and/or precipitate when they move from the stomach to the small intestine. Also of importance, gastric acidity in cancer patients may be altered with aging (reduction of gastric fluid secretion) and/or co-administration of acid-reducing agents. These changes may alter the dissolution profiles of weak bases and reduce drug absorption and efficacy. In vitro dissolution methodologies that assess the impact of these physiological changes in the GI condition are expected to better predict the in vivo dissolution of oral medications for patients and, hence, better assess efficacy, toxicity and safety concerns. The objective of this present study is to determine the initial conditions for a mini-Gastrointestinal Simulator (mGIS) to assess in vivo
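
    The pH-dependent solubility described above follows, for a monoprotic weak base, the standard Henderson-Hasselbalch relation S(pH) = S0 (1 + 10^(pKa - pH)), where S0 is the intrinsic solubility of the free base. The sketch below evaluates it across the gastric-to-intestinal pH range; the pKa and S0 values are illustrative placeholders, not measured dasatinib parameters.

    ```python
    # Henderson-Hasselbalch solubility of a monoprotic weak base:
    #   S(pH) = S0 * (1 + 10**(pKa - pH))
    # pKa and S0 below are illustrative assumptions, not measured values.

    PKA = 6.8        # hypothetical basic pKa
    S0_UG_ML = 1.0   # hypothetical intrinsic solubility of the free base [ug/mL]

    for ph in (1.2, 2.0, 4.5, 6.0, 6.8, 7.4):   # stomach -> small intestine
        s = S0_UG_ML * (1.0 + 10.0 ** (PKA - ph))
        print(f"pH {ph:3.1f}: S = {s:10.1f} ug/mL")
    ```

    The steep drop in solubility above pH 6 is what drives the supersaturation and precipitation behaviour that a single fixed-pH USP vessel cannot reproduce.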

  14. Radio/X-ray monitoring of the broad-line radio galaxy 3C 382. High-energy view with XMM-Newton and NuSTAR

    Science.gov (United States)

    Ursini, F.; Petrucci, P.-O.; Matt, G.; Bianchi, S.; Cappi, M.; Dadina, M.; Grandi, P.; Torresi, E.; Ballantyne, D. R.; De Marco, B.; De Rosa, A.; Giroletti, M.; Malzac, J.; Marinucci, A.; Middei, R.; Ponti, G.; Tortosa, A.

    2018-05-01

    We present the analysis of five joint XMM-Newton/NuSTAR observations, 20 ks each and separated by 12 days, of the broad-line radio galaxy 3C 382. The data were obtained as part of a campaign performed in September-October 2016 simultaneously with VLBA. The radio data and their relation with the X-ray ones will be discussed in a following paper. The source exhibits a moderate flux variability in the UV/X-ray bands, and a limited spectral variability especially in the soft X-ray band. In agreement with past observations, we find the presence of a warm absorber, an iron Kα line with no associated Compton reflection hump, and a variable soft excess well described by a thermal Comptonization component. The data are consistent with a "two-corona" scenario, in which the UV emission and soft excess are produced by a warm (kT ≃ 0.6 keV), optically thick (τ ≃ 20) corona consistent with being a slab fully covering a nearly passive accretion disc, while the hard X-ray emission is due to a hot corona intercepting roughly 10% of the soft emission. These results are remarkably similar to those generally found in radio-quiet Seyferts, thus suggesting a common accretion mechanism.

  15. Probing the accretion flow and emission-line regions of M81, the nearest broad-lined low-luminosity AGN

    Science.gov (United States)

    Barth, Aaron

    2017-08-01

    The nucleus of M81 is an object of singular importance as a template for low-luminosity accretion flows onto supermassive black holes. We propose to obtain a complete, small-aperture, high S/N STIS UV/optical spectrum of the M81 nucleus and multi-filter WFC3 imaging covering the UV through near-IR. Such data have never previously been obtained with HST; the only prior archival UV/optical spectra of M81 have low S/N, incomplete wavelength coverage, and are strongly contaminated by starlight. Combined with new Chandra X-ray data, our proposed observations will comprise the definitive reference dataset on the spectral energy distribution of this benchmark low-luminosity AGN. These data will provide unique new constraints on the possible contribution of a truncated thin accretion disk to the AGN emission spectrum, clarifying a fundamental property of low-luminosity accretion flows. The data will additionally provide new insights into broad-line region structure and black hole mass scaling relationships at the lowest AGN luminosities, and spatially resolved diagnostics of narrow-line region excitation conditions at unprecedented spatial resolution to assess the impact of the AGN on the ionization state of the gas in the host galaxy bulge.

  16. Partial dust obscuration in active galactic nuclei as a cause of broad-line profile and lag variability, and apparent accretion disc inhomogeneities

    Science.gov (United States)

    Gaskell, C. Martin; Harrington, Peter Z.

    2018-04-01

    The profiles of the broad emission lines of active galactic nuclei (AGNs) and the time delays in their response to changes in the ionizing continuum ("lags") give information about the structure and kinematics of the inner regions of AGNs. Line profiles are also our main way of estimating the masses of the supermassive black holes (SMBHs). However, the profiles often show ill-understood, asymmetric structure and velocity-dependent lags vary with time. Here we show that partial obscuration of the broad-line region (BLR) by outflowing, compact, dusty clumps produces asymmetries and velocity-dependent lags similar to those observed. Our model explains previously inexplicable changes in the ratios of the hydrogen lines with time and velocity, the lack of correlation of changes in line profiles with variability of the central engine, the velocity dependence of lags, and the change of lags with time. We propose that changes on timescales longer than the light-crossing time do not come from dynamical changes in the BLR, but are a natural result of the effect of outflowing dusty clumps driven by radiation pressure acting on the dust. The motion of these clumps offers an explanation of long-term changes in polarization. The effects of the dust complicate the study of the structure and kinematics of the BLR and the search for sub-parsec SMBH binaries. Partial obscuration of the accretion disc can also provide the local fluctuations in luminosity that can explain sizes deduced from microlensing.

  17. The development of a 4D treatment planning methodology to simulate the tracking of central lung tumors in an MRI-linac.

    Science.gov (United States)

    Al-Ward, Shahad M; Kim, Anthony; McCann, Claire; Ruschin, Mark; Cheung, Patrick; Sahgal, Arjun; Keller, Brian M

    2018-01-01

    Targeting and tracking of central lung tumors may be feasible on the Elekta MRI-linac (MRL) due to the soft-tissue visualization capabilities of MRI. The purpose of this work is to develop a novel treatment planning methodology to simulate tracking of central lung tumors with the MRL and to quantify the benefits in OAR sparing compared with the ITV approach. Full 4D-CT datasets for five central lung cancer patients were selected to simulate the condition of having 4D-pseudo-CTs derived from 4D-MRI data available on the MRL with real-time tracking capabilities. We used the MRL treatment planning system to generate two plans: (a) with a set of MLC-defined apertures around the target at each phase of the breathing ("4D-MRL" method); (b) with a fixed set of fields encompassing the maximum inhale and exhale of the breathing cycle ("ITV" method). For both plans, dose accumulation was performed onto a reference phase. To further study the potential benefits of a 4D-MRL method, the results were stratified by tumor motion amplitude, OAR-to-tumor proximity, and the relative OAR motion (ROM). With the 4D-MRL method, the reduction in mean doses was up to 3.0 Gy and 1.9 Gy for the heart and the lung. Moreover, the lung's V12.5 Gy was spared by a maximum of 300 cc. Maximum doses to serial organs were reduced by up to 6.1 Gy, 1.5 Gy, and 9.0 Gy for the esophagus, spinal cord, and the trachea, respectively. OAR dose reduction with our method depended on the tumor motion amplitude and the ROM. Some OARs with large ROMs and in close proximity to the tumor benefited from tracking despite small tumor amplitudes. We developed a novel 4D tracking methodology for the MRL for central lung tumors and quantified the potential dosimetric benefits compared with our current ITV approach. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
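
    A minimal sketch of the phase-wise dose accumulation step described above: dose grids computed on each breathing phase are mapped to a reference phase and summed with weights proportional to the time spent in each phase. Real 4D planning uses deformable image registration for the mapping; here an identity mapping and synthetic dose grids stand in, so the array shapes, names and weights are purely illustrative.

    ```python
    # Sketch of 4D dose accumulation onto a reference breathing phase.
    # In practice each phase dose is warped by deformable registration before
    # summing; here the mapping is the identity and the doses are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_phases = 10
    phase_doses = [rng.random((32, 32, 32)) for _ in range(n_phases)]  # Gy per phase
    weights = np.full(n_phases, 1.0 / n_phases)  # fraction of cycle in each phase

    def map_to_reference(dose):
        """Placeholder for deformable registration (identity mapping here)."""
        return dose

    accumulated = sum(w * map_to_reference(d) for w, d in zip(weights, phase_doses))
    print("mean accumulated dose [Gy]:", float(accumulated.mean()))
    ```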

  18. Distributed Mission Operations Within-Simulator Training Effectiveness Baseline Study. Volume 5. Using the Pathfinder Methodology to Assess Pilot Knowledge Structure Changes

    National Research Council Canada - National Science Library

    Schreiber, Brian T; DiSalvo, Pam; Stock, William A; Bennett, Jr., Winston

    2006-01-01

    ... collection methodology both before and after five days of DMO training. The Pathfinder methodology is a qualitative/quantitative method that can be used to assess if the pilots' underlying knowledge structures (i.e...

  19. Simulation statistical foundations and methodology

    CERN Document Server

    Mihram, G Arthur

    1972-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  20. CONSTRAINTS ON BLACK HOLE GROWTH, QUASAR LIFETIMES, AND EDDINGTON RATIO DISTRIBUTIONS FROM THE SDSS BROAD-LINE QUASAR BLACK HOLE MASS FUNCTION

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Hernquist, Lars; Siemiginowska, Aneta; Vestergaard, Marianne; Fan Xiaohui; Hopkins, Philip

    2010-01-01

    We present an estimate of the black hole mass function of broad-line quasars (BLQSOs) that self-consistently corrects for incompleteness and the statistical uncertainty in the mass estimates, based on a sample of 9886 quasars at 1 < z < 4.5 drawn from the Sloan Digital Sky Survey. The sample is highly incomplete at M_BH ≲ 10^9 M_☉ and at low Eddington ratios. We estimate a broad-line quasar phase lifetime of t_BL > 150 ± 15 Myr for black holes at z = 1 with a mass of M_BH = 10^9 M_☉, and we constrain the maximum mass of a black hole in a BLQSO to be ~3 × 10^10 M_☉. Our estimated distribution of BLQSO Eddington ratios peaks at L/L_Edd ~ 0.05 and has a dispersion of ~0.4 dex, implying that most BLQSOs are not radiating at or near the Eddington limit; however, the location of the peak is subject to considerable uncertainty. The steep increase in the number density of BLQSOs toward lower Eddington ratios is expected if the BLQSO accretion rate monotonically decays with time. Furthermore, our estimated lifetime and Eddington ratio distributions imply that the majority of the most massive black holes spend a significant amount of time growing in an earlier obscured phase, a conclusion which is independent of the unknown obscured fraction. These results are consistent with models for self-regulated black hole growth, at least for massive systems at z > 1, where the BLQSO phase occurs at the end of a fueling event when black hole feedback unbinds the accreting gas, halting the accretion flow.

  1. THE DEMOGRAPHICS OF BROAD-LINE QUASARS IN THE MASS-LUMINOSITY PLANE. II. BLACK HOLE MASS AND EDDINGTON RATIO FUNCTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States)

    2013-02-10

    We employ a flexible Bayesian technique to estimate the black hole (BH) mass and Eddington ratio functions for Type 1 (i.e., broad-line) quasars from a uniformly selected data set of ~58,000 quasars from the Sloan Digital Sky Survey (SDSS) DR7. We find that the SDSS becomes significantly incomplete at M_BH ≲ 3 × 10^8 M_☉ or L/L_Edd ≲ 0.07, and that the number densities of Type 1 quasars continue to increase down to these limits. Both the mass and Eddington ratio functions show evidence of downsizing, with the most massive and highest Eddington ratio BHs experiencing Type 1 quasar phases first, although the Eddington ratio number densities are flat at z < 2. We estimate the maximum Eddington ratio of Type 1 quasars in the observable universe to be L/L_Edd ≈ 3. Consistent with our results in Shen and Kelly, we do not find statistical evidence for a so-called sub-Eddington boundary in the mass-luminosity plane of broad-line quasars, and demonstrate that such an apparent boundary in the observed distribution can be caused by selection effects and errors in virial BH mass estimates. Based on the typical Eddington ratio in a given mass bin, we estimate growth times for the BHs in Type 1 quasars and find that they are comparable to or longer than the age of the universe, implying an earlier phase of accelerated (i.e., with higher Eddington ratios) and possibly obscured growth. The large masses probed by our sample imply that most of our BHs reside in what are locally early-type galaxies, and we interpret our results within the context of models of self-regulated BH growth.

  2. Global Monitoring of Terrestrial Chlorophyll Fluorescence from Moderate-spectral-resolution Near-infrared Satellite Measurements: Methodology, Simulations, and Application to GOME-2

    Science.gov (United States)

    Joiner, J.; Gaunter, L.; Lindstrot, R.; Voigt, M.; Vasilkov, A. P.; Middleton, E. M.; Huemmrich, K. F.; Yoshida, Y.; Frankenberg, C.

    2013-01-01

    Globally mapped terrestrial chlorophyll fluorescence retrievals are of high interest because they can provide information on the functional status of vegetation including light-use efficiency and global primary productivity that can be used for global carbon cycle modeling and agricultural applications. Previous satellite retrievals of fluorescence have relied solely upon the filling-in of solar Fraunhofer lines that are not significantly affected by atmospheric absorption. Although these measurements provide near-global coverage on a monthly basis, they suffer from relatively low precision and sparse spatial sampling. Here, we describe a new methodology to retrieve global far-red fluorescence information; we use hyperspectral data with a simplified radiative transfer model to disentangle the spectral signatures of three basic components: atmospheric absorption, surface reflectance, and fluorescence radiance. An empirically based principal component analysis approach is employed, primarily using cloudy data over ocean, to model and solve for the atmospheric absorption. Through detailed simulations, we demonstrate the feasibility of the approach and show that moderate-spectral-resolution measurements with a relatively high signal-to-noise ratio can be used to retrieve far-red fluorescence information with good precision and accuracy. The method is then applied to data from the Global Ozone Monitoring Instrument 2 (GOME-2). The GOME-2 fluorescence retrievals display similar spatial structure as compared with those from a simpler technique applied to the Greenhouse gases Observing SATellite (GOSAT). GOME-2 enables global mapping of far-red fluorescence with higher precision over smaller spatial and temporal scales than is possible with GOSAT. Near-global coverage is provided within a few days. We are able to show clearly for the first time physically plausible variations in fluorescence over the course of a single month at a spatial resolution of 0.5 deg × 0.5 deg
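
    A stylized version of the retrieval idea described above: principal components learned from fluorescence-free spectra (e.g., cloudy scenes over ocean) model the atmospheric and instrumental structure, and an observed spectrum is then fit as a linear combination of those components plus a fluorescence emission shape. All spectra and basis shapes below are synthetic inventions for illustration; this is not the operational GOME-2 algorithm.

    ```python
    # Toy PCA-based fluorescence retrieval: learn atmospheric structure from
    # fluorescence-free spectra, then solve a linear least-squares problem that
    # includes a fluorescence basis function. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    wl = np.linspace(755.0, 770.0, 200)            # wavelength grid [nm]

    # Synthetic fluorescence-free training spectra (stand-in for cloudy ocean)
    absorption = np.exp(-2.0 * np.exp(-0.5 * ((wl - 760.5) / 0.8) ** 2))
    training = np.array([a * absorption + 0.01 * rng.standard_normal(wl.size)
                         for a in rng.uniform(0.5, 1.5, 50)])

    # Principal components of the centered training set
    mean = training.mean(axis=0)
    _, _, vt = np.linalg.svd(training - mean, full_matrices=False)
    pcs = vt[:3]                                   # keep the leading components

    # Fluorescence basis: a broad, smooth far-red emission shape (assumed)
    fluor = np.exp(-0.5 * ((wl - 758.0) / 10.0) ** 2)

    # Simulated observation = atmosphere + 0.02 * fluorescence + noise
    obs = 1.1 * absorption + 0.02 * fluor + 0.005 * rng.standard_normal(wl.size)

    # Solve (obs - mean) ~ [pcs.T, fluor] @ coeffs by linear least squares
    design = np.column_stack([pcs.T, fluor])
    coeffs, *_ = np.linalg.lstsq(design, obs - mean, rcond=None)
    print(f"retrieved fluorescence amplitude: {coeffs[-1]:.4f} (true value: 0.02)")
    ```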

  3. Global monitoring of terrestrial chlorophyll fluorescence from moderate-spectral-resolution near-infrared satellite measurements: methodology, simulations, and application to GOME-2

    Directory of Open Access Journals (Sweden)

    J. Joiner

    2013-10-01

    Full Text Available Globally mapped terrestrial chlorophyll fluorescence retrievals are of high interest because they can provide information on the functional status of vegetation including light-use efficiency and global primary productivity that can be used for global carbon cycle modeling and agricultural applications. Previous satellite retrievals of fluorescence have relied solely upon the filling-in of solar Fraunhofer lines that are not significantly affected by atmospheric absorption. Although these measurements provide near-global coverage on a monthly basis, they suffer from relatively low precision and sparse spatial sampling. Here, we describe a new methodology to retrieve global far-red fluorescence information; we use hyperspectral data with a simplified radiative transfer model to disentangle the spectral signatures of three basic components: atmospheric absorption, surface reflectance, and fluorescence radiance. An empirically based principal component analysis approach is employed, primarily using cloudy data over ocean, to model and solve for the atmospheric absorption. Through detailed simulations, we demonstrate the feasibility of the approach and show that moderate-spectral-resolution measurements with a relatively high signal-to-noise ratio can be used to retrieve far-red fluorescence information with good precision and accuracy. The method is then applied to data from the Global Ozone Monitoring Instrument 2 (GOME-2). The GOME-2 fluorescence retrievals display similar spatial structure as compared with those from a simpler technique applied to the Greenhouse gases Observing SATellite (GOSAT). GOME-2 enables global mapping of far-red fluorescence with higher precision over smaller spatial and temporal scales than is possible with GOSAT. Near-global coverage is provided within a few days. We are able to show clearly for the first time physically plausible variations in fluorescence over the course of a single month at a spatial resolution of 0.5 deg × 0.5 deg.

  4. Response Surface Methodology

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.

    2014-01-01

    Abstract: This chapter first summarizes Response Surface Methodology (RSM), which started with Box and Wilson’s article in 1951 on RSM for real, non-simulated systems. RSM is a stepwise heuristic that uses first-order polynomials to approximate the response surface locally. An estimated polynomial
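
    A minimal numeric sketch of the local first-order step in RSM: fit y ≈ β0 + β1 x1 + β2 x2 by ordinary least squares over a small factorial design, then move in the steepest-ascent direction (β1, β2). The response function, design and step size below are invented for illustration of the mechanics only.

    ```python
    # RSM first-order step: fit a local plane by OLS, then steepest ascent.
    # The "true" response and all design settings are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    def response(x1, x2):
        """Hypothetical noisy simulation response to be maximized."""
        return -(x1 - 3.0) ** 2 - (x2 - 2.0) ** 2 + rng.normal(0.0, 0.1)

    # 2^2 factorial design with a center point, coded around the current point
    design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
    y = np.array([response(x1, x2) for x1, x2 in design])

    X = np.column_stack([np.ones(len(design)), design])   # columns [1, x1, x2]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS estimates

    grad = beta[1:]                                       # local gradient estimate
    step = grad / np.linalg.norm(grad)                    # unit steepest-ascent step
    print("fitted plane coefficients:", np.round(beta, 3))
    print("steepest-ascent direction:", np.round(step, 3))
    ```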

  5. Simulation

    DEFF Research Database (Denmark)

    Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J

    2012-01-01

    Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.

  6. The Broad-Lined Type Ic SN 2012ap and the Nature of Relativistic Supernovae Lacking a Gamma-Ray Burst Detection

    Science.gov (United States)

    Milisavljevic, D.; Margutti, R.; Parrent, J. T.; Soderberg, A. M.; Fesen, R. A.; Mazzali, P.; Maeda, K.; Sanders, N. E.; Cenko, S. B.; Silverman, J. M.

    2014-01-01

    We present ultraviolet, optical, and near-infrared observations of SN 2012ap, a broad-lined Type Ic supernova in the galaxy NGC 1729 that produced a relativistic and rapidly decelerating outflow without a gamma-ray burst signature. Photometry and spectroscopy follow the flux evolution from -13 to +272 days past the B-band maximum of -17.4 ± 0.5 mag. The spectra are dominated by Fe II, O I, and Ca II absorption lines at ejecta velocities of v ≈ 20,000 km s^-1 that change slowly over time. Other spectral absorption lines are consistent with contributions from photospheric He I, and hydrogen may also be present at higher velocities (v ≳ 27,000 km s^-1). We use these observations to estimate explosion properties and derive a total ejecta mass of 2.7 M_☉, a kinetic energy of 1.0 × 10^52 erg, and a ^56Ni mass of 0.1-0.2 M_☉. Nebular spectra (t > 200 d) exhibit an asymmetric double-peaked [O I] λλ6300, 6364 emission profile that we associate with absorption in the supernova interior, although toroidal ejecta geometry is an alternative explanation. SN 2012ap joins SN 2009bb as another exceptional supernova that shows evidence for a central engine (e.g., black-hole accretion or magnetar) capable of launching a non-negligible portion of ejecta to relativistic velocities without a coincident gamma-ray burst detection. Defining attributes of their progenitor systems may be related to notable properties including above-average environmental metallicities of Z ≳ Z_☉, moderate to high levels of host-galaxy extinction (E(B-V) > 0.4 mag), detection of high-velocity helium at early epochs, and a high relative flux ratio of [Ca II]/[O I] > 1 at nebular epochs. These events support the notion that jet activity at various energy scales may be present in a wide range of supernovae.

  7. THE BROAD-LINED Type Ic SN 2012ap AND THE NATURE OF RELATIVISTIC SUPERNOVAE LACKING A GAMMA-RAY BURST DETECTION

    Energy Technology Data Exchange (ETDEWEB)

    Milisavljevic, D.; Margutti, R.; Parrent, J. T.; Soderberg, A. M.; Sanders, N. E.; Kamble, A.; Chakraborti, S.; Drout, M. R.; Kirshner, R. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Fesen, R. A. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Mazzali, P. [Astrophysics Research Institute, Liverpool John Moores University, Liverpool L3 5RF (United Kingdom); Maeda, K. [Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); Cenko, S. B. [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Silverman, J. M. [University of Texas at Austin, 1 University Station C1400, Austin, TX 78712-0259 (United States); Filippenko, A. V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Pickering, T. E. [Southern African Large Telescope, P.O. Box 9, Observatory 7935, Cape Town (South Africa); Kawabata, K. [Hiroshima Astrophysical Science Center, Hiroshima University, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Hattori, T. [Subaru Telescope, National Astronomical Observatory of Japan, Hilo, HI 96720 (United States); Hsiao, E. Y. [Carnegie Observatories, Las Campanas Observatory, Colina El Pino, Casilla 601 (Chile); Stritzinger, M. D., E-mail: dmilisav@cfa.harvard.edu [Department of Physics and Astronomy, Aarhus University, Ny Munkegade, DK-8000 Aarhus C (Denmark); and others

    2015-01-20

    We present ultraviolet, optical, and near-infrared observations of SN 2012ap, a broad-lined Type Ic supernova in the galaxy NGC 1729 that produced a relativistic and rapidly decelerating outflow without a gamma-ray burst signature. Photometry and spectroscopy follow the flux evolution from -13 to +272 days past the B-band maximum of -17.4 ± 0.5 mag. The spectra are dominated by Fe II, O I, and Ca II absorption lines at ejecta velocities of v ≈ 20,000 km s^-1 that change slowly over time. Other spectral absorption lines are consistent with contributions from photospheric He I, and hydrogen may also be present at higher velocities (v ≳ 27,000 km s^-1). We use these observations to estimate explosion properties and derive a total ejecta mass of ~2.7 M_☉, a kinetic energy of ~1.0 × 10^52 erg, and a ^56Ni mass of 0.1-0.2 M_☉. Nebular spectra (t > 200 days) exhibit an asymmetric double-peaked [O I] λλ6300, 6364 emission profile that we associate with absorption in the supernova interior, although toroidal ejecta geometry is an alternative explanation. SN 2012ap joins SN 2009bb as another exceptional supernova that shows evidence for a central engine (e.g., black hole accretion or magnetar) capable of launching a non-negligible portion of ejecta to relativistic velocities without a coincident gamma-ray burst detection. Defining attributes of their progenitor systems may be related to notable observed properties including environmental metallicities of Z ≳ Z_☉, moderate to high levels of host galaxy extinction (E(B-V) > 0.4 mag), detection of high-velocity helium at early epochs, and a high relative flux ratio of [Ca II]/[O I] > 1 at nebular epochs. These events support the notion that jet activity at various energy scales may be present in a wide range of supernovae.

  8. Simulation

    CERN Document Server

    Ross, Sheldon

    2006-01-01

    Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist
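
    To make the book's theme concrete, here is a minimal sketch of the chain it describes: a linear congruential generator produces uniform random numbers, inverse-transform sampling turns them into exponential interarrival times, and those times drive a simple Poisson arrival process over time. The LCG constants are the classic Numerical Recipes values; the rate and horizon are illustrative assumptions.

    ```python
    # From raw random numbers to a stochastic model: LCG uniforms -> inverse
    # transform -> exponential interarrival times -> Poisson arrival process.
    import math

    M, A, C = 2**32, 1664525, 1013904223   # classic LCG constants
    state = 12345                           # arbitrary seed

    def uniform():
        """Next pseudo-random uniform in (0, 1) from the LCG."""
        global state
        state = (A * state + C) % M
        return (state + 0.5) / M

    RATE = 2.0      # arrivals per unit time (assumed)
    T_END = 10.0    # simulated horizon

    t, arrivals = 0.0, 0
    while True:
        t += -math.log(1.0 - uniform()) / RATE   # inverse-transform exponential
        if t > T_END:
            break
        arrivals += 1

    print(f"{arrivals} arrivals in {T_END} time units (expected ~{RATE * T_END:.0f})")
    ```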

  9. On methodology

    DEFF Research Database (Denmark)

    Cheesman, Robin; Faraone, Roque

    2002-01-01

    This is an English version of the methodology chapter in the authors' book "El caso Berríos: Estudio sobre información errónea, desinformación y manipulación de la opinión pública".

  10. Methodological guidelines

    International Nuclear Information System (INIS)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-01-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objectives of the project have been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs

  11. Methodological guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-04-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objectives of the project have been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs.

  12. Requirements to be made on the methodology of staff training on a nuclear power plant simulator of the new Konvoi generation

    International Nuclear Information System (INIS)

    Reinartz, S.J.; Reinartz, G.

    1984-01-01

    This report is based on a review of the literature published on simulator training and on discussions with representatives from the German nuclear power plant operator training schools. A brief description is given of the organisation and content of the simulator training of control room operators in a number of countries, together with a categorisation of the various types of simulators in use. The concepts of the systems approach to training and of simulator fidelity are discussed. Some general training principles considered important for simulator training are summarised. From the available descriptions and analyses of control room operator tasks, the skills (in the most general terms) that can be trained on simulators have been identified. Methods for training these skills, as used in the simulator training programmes of various industries and as developed in research on the psychology of training, are summarised. Using these methods as a basis, the instructor facilities that should be included in the design of a full simulator for the Konvoi generation of nuclear power plants have been derived. (orig.) [de

  13. Distributed Mission Operations Within-Simulator Training Effectiveness Baseline Study. Volume 5. Using the Pathfinder Methodology to Assess Pilot Knowledge Structure Changes

    National Research Council Canada - National Science Library

    Schreiber, Brian T; DiSalvo, Pam; Stock, William A; Bennett, Jr., Winston

    2006-01-01

    ...) Within Simulator Training Effectiveness Baseline Study as described in Volume I, Summary Report, of AFRL-HE-AZ-TR-2006-0015, the current work examined pilots who participated in a Pathfinder data...

  14. Methodology for the development and the UML (Unified Modeling Language) simulation of data acquisition and data processing systems dedicated to high energy physics experiments

    International Nuclear Information System (INIS)

    Anvar, S.

    2002-09-01

    The increasing complexity of the real-time data acquisition and processing systems (TDAQ: the so-called Trigger and Data AcQuisition systems) in high energy physics calls for an appropriate evolution of development tools. This work is about the interplay between the in-principle specifications of TDAQ systems and their actual design and realization on a concrete hardware and software platform. The basis of our work is to define a methodology for the development of TDAQ systems that meets the specific demands of such systems. The result is the detailed specification of a 'methodological framework' based on the Unified Modeling Language (UML) and designed to manage a development process. The use of this UML-based methodological framework progressively leads to the setting up of a 'home-made' framework, i.e. a development tool that comprises reusable components and generic architectural elements adapted to TDAQ systems. The main parts of this dissertation are sections II to IV. Section II is devoted to the characterization and evolution of TDAQ systems. In section III, we review the main technologies relevant to our problem, namely software reuse techniques such as design patterns and frameworks, especially in the real-time and embedded systems domain. Our original conceptual contribution is presented in section IV, where we give a detailed, formalized and example-driven specification of our development model. Our final conclusions are presented in section V, where we present the MORDICUS project, devoted to a concrete realization of our UML methodological framework, and the deep affinities between our work and the emerging 'Model Driven Architecture' (MDA) paradigm developed by the Object Management Group. (author)

  15. Finite Element Simulation and Assessment of Single-Degree-of-Freedom Prediction Methodology for Insulated Concrete Sandwich Panels Subjected to Blast Loads

    Science.gov (United States)

    2011-02-01

    Precast/prestressed components, along with their connections to the structure, should be designed to withstand the blast to prevent falling or...response of the component. Connections used for precast components subjected to blast are normally designed with small to zero dynamic increase...methodology considers fixed boundary condition to be more similar to continuous beams or columns. Figure 71 and Table 14 present the comparisons

  16. MIRD methodology

    International Nuclear Information System (INIS)

    Rojo, Ana M.; Gomez Parada, Ines

    2004-01-01

    The MIRD (Medical Internal Radiation Dose) system was established by the Society of Nuclear Medicine of the USA in 1960 to assist the medical community in estimating the dose to organs and tissues due to the incorporation of radioactive materials. Since then, the 'MIRD Dose Estimate Reports' (Nos. 1 to 12) and 'Pamphlets', of great utility for dose calculations, have been published. The MIRD system was planned essentially for the calculation of doses received by patients during nuclear medicine diagnostic procedures. The MIRD methodology for absorbed dose calculations in different tissues is explained.
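
    At its core, the MIRD schema computes the mean absorbed dose to a target organ as D(target) = Σ_sources Ã(source) × S(target ← source), where Ã is the cumulated (time-integrated) activity in each source organ and S is the tabulated dose per unit cumulated activity. The sketch below applies this sum; the organ names, Ã values and S factors are placeholders, not values from any MIRD pamphlet.

    ```python
    # MIRD-style absorbed dose: D(target) = sum over sources of A_tilde * S.
    # Cumulated activities and S factors below are illustrative placeholders.

    cumulated_activity_mbq_h = {      # A_tilde per source organ [MBq*h]
        "liver": 120.0,
        "kidneys": 40.0,
        "remainder": 300.0,
    }

    s_factor_mgy_per_mbq_h = {        # S(target <- source) for target = "liver"
        "liver": 3.0e-2,
        "kidneys": 1.0e-3,
        "remainder": 2.0e-4,
    }

    dose_mgy = sum(cumulated_activity_mbq_h[src] * s_factor_mgy_per_mbq_h[src]
                   for src in cumulated_activity_mbq_h)
    print(f"absorbed dose to liver: {dose_mgy:.2f} mGy")
    ```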

  17. PSA methodology

    Energy Technology Data Exchange (ETDEWEB)

    Magne, L

    1997-12-31

    The purpose of this text is first to ask a number of questions about the methods related to PSAs. Notably, we will explore the positioning of the French methodological approach, as applied in the EPS 1300 and EPS 900 PSAs, compared to other approaches (Part One). This reflection leads to a more general question: what content, for what PSA? This is why, in Part Two, we try to offer a framework for defining the criteria a PSA should satisfy to meet clearly identified needs. Finally, Part Three quickly summarizes the questions raised in the first two parts, as an introduction to the debate. 15 refs.

  18. PSA methodology

    International Nuclear Information System (INIS)

    Magne, L.

    1996-01-01

    The purpose of this text is first to ask a number of questions about the methods related to PSAs. Notably, we will explore the positioning of the French methodological approach, as applied in the EPS 1300 and EPS 900 PSAs, compared to other approaches (Part One). This reflection leads to a more general question: what content, for what PSA? This is why, in Part Two, we try to offer a framework for defining the criteria a PSA should satisfy to meet clearly identified needs. Finally, Part Three quickly summarizes the questions raised in the first two parts, as an introduction to the debate. 15 refs

  19. Research on the theory and methodology of integrating GIS and MAS and its application in simulating pedestrian flows in a crowd activity centre of Shanghai metropolitan

    Science.gov (United States)

    Liu, Miaolong; Chen, Peng

    2006-10-01

    Based on the trend in research on urban morphology and its evolution from the macro to the micro scale, a new tight-coupling method for integrating GIS and MAS is briefly discussed in this paper. After analyzing the characteristics and mechanisms of pedestrian flows in a crowd activity center of a metropolis, a prototype and a mathematical formulation of pedestrian-flow simulation are put forward. A few key expressions and techniques for treating specific pedestrian-flow behaviors are explored and discussed in detail, in particular how individuals decide to follow an originally planned direction, and how they decide whether to stop, change their movement, or select a new direction when they meet an obstacle. Using tools provided by general GIS systems (such as ArcGIS 9) and a few specific programming languages, a new software system integrating GIS and MAS, applicable to simulating pedestrian flows in a crowd activity centre, has been developed. Under the environment supported by this software, as an application case, the dynamic evolution of pedestrian flows (the dispersal of spectators) in a crowd activity center, the Shanghai Stadium, has been simulated successfully. The simulation of emergency cases, in which accidents occur at one or more exits, will be very useful for managing crowd safety in assembly centers. At the end of the paper, some new research problems are pointed out for future work.
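
    A minimal agent-based sketch of the movement decision described above: each pedestrian moves one cell per tick toward the exit, descending a breadth-first-search distance field so that it automatically routes around obstacles; collisions between agents are not modeled. The grid layout, wall and exit position are illustrative assumptions, not the paper's actual model.

    ```python
    # Toy pedestrian-flow step on a grid: agents descend a BFS distance field
    # toward the exit, which makes them route around obstacles. Layout and
    # parameters are illustrative assumptions.
    from collections import deque
    import random

    WIDTH, HEIGHT = 20, 10
    EXIT = (19, 5)
    obstacles = {(10, y) for y in range(3, 8)}           # a wall with gaps

    # Breadth-first search from the exit gives walking distance around obstacles
    dist = {EXIT: 0}
    queue = deque([EXIT])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT
                    and nxt not in obstacles and nxt not in dist):
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)

    def step(pos):
        """Move to the reachable neighbour closest to the exit, else wait."""
        x, y = pos
        options = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        options = [p for p in options if p in dist]
        return min(options, key=dist.get, default=pos)

    random.seed(3)
    agents = [(0, random.randrange(HEIGHT)) for _ in range(15)]
    for _ in range(60):                                  # simulate 60 ticks
        agents = [a if a == EXIT else step(a) for a in agents]

    print(f"{sum(a == EXIT for a in agents)} of {len(agents)} agents reached the exit")
    ```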

  20. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    Science.gov (United States)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of the radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS, in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to get accurate dosimetric estimations. Due to the easiness of implementation of GAMOS simulations, case-specific geometries and nuclides can be adopted and results can be obtained in less than about ten minutes of computation time with a common workstation.
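
    The dose-depth profiles reported above are, in post-processing terms, histograms of Monte Carlo energy deposits binned by depth and divided by the mass of each depth shell. The sketch below illustrates that bookkeeping on synthetic step data; the attenuation length, bin layout and slab geometry are invented, and this is generic post-processing, not GAMOS itself.

    ```python
    # Post-processing sketch: bin (depth, energy-deposit) Monte Carlo steps into
    # a dose-depth profile and read off the dose near 70 um. Inputs are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic "simulation output": deposit depths [cm], ~0.1 cm attenuation
    depths = rng.exponential(scale=0.1, size=200_000)
    edep_mev = rng.exponential(scale=0.05, size=depths.size)   # MeV per step

    AREA_CM2, RHO = 1.0, 1.0            # slab cross-section [cm^2], density [g/cm^3]
    bins = np.linspace(0.0, 0.5, 51)    # 100 um depth bins down to 5 mm
    edep_per_bin, _ = np.histogram(depths, bins=bins, weights=edep_mev)

    MEV_TO_J = 1.602e-13
    mass_kg = AREA_CM2 * np.diff(bins) * RHO * 1e-3   # g -> kg per depth shell
    dose_gy = edep_per_bin * MEV_TO_J / mass_kg

    print(f"dose in first 100 um shell (contains 70 um): {dose_gy[0]:.3e} Gy")
    ```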

  1. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, volume 2, part 1. Appendix A: Software documentation

    Science.gov (United States)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.
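
    The three-function structure under a program executive, as described above, maps naturally onto a small dispatch skeleton. The sketch below is only a structural illustration of that organization, with invented class names and stand-in computations; it is not the documented software.

    ```python
    # Structural sketch of an executive coordinating the three major functions:
    # system definition, analysis tools, and post-processing. Names are invented.

    class SystemDefinition:
        def run(self, state):
            state["manipulator"] = {"joints": 6}       # stand-in for user input
            print("system definition: manipulator configured")

    class AnalysisTools:
        def run(self, state):
            joints = state["manipulator"]["joints"]
            state["results"] = [0.1 * j for j in range(joints)]  # stand-in analysis
            print("analysis tools: computation finished")

    class PostProcessing:
        def run(self, state):
            print("post-processing: peak result =", max(state["results"]))

    def executive():
        """Program executive: invoke the major functions in order on shared state."""
        state = {}
        for phase in (SystemDefinition(), AnalysisTools(), PostProcessing()):
            phase.run(state)

    executive()
    ```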

  2. Testing methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes at the DNA level, while presently less well developed, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.

  3. Testing methodologies

    International Nuclear Information System (INIS)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes at the DNA level, while presently less well developed, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs

  4. Numerical simulations of forest fire propagation and smoke transport as an external hazard assessment methodology development for a nuclear power plant

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamano, Hidemasa

    2016-01-01

    A new method has been developed to assess the potential challenge posed by forest fire smoke to the cooling function of the decay heat removal system (DHRS) of a sodium-cooled fast reactor. Combined numerical simulations of forest fire propagation and smoke transport were performed to evaluate the cumulative amount of smoke captured on the air filters of the DHRS. The forest fire propagation simulations were performed using the FARSITE code to evaluate the temporal growth of the forest fire spread area, the frontal fireline location, the reaction intensity, and the fireline intensity. The peripheral boundary of the forest fire spread area is shaped like an ellipse on the terrain, and the active forest fire area, from which smoke is produced as a forest fire product, increases as the fire spreads. The smoke transport simulations were performed using the ALOFT-FT code, in which the spatial distribution of smoke density, especially of particulate matter (PM), is evaluated. The snapshot (i.e. at a certain time step) outputs by FARSITE for the reaction intensity and the fireline intensity were utilized as input data for ALOFT-FT, while it was conservatively assumed that the smoke generated from the active forest fire area along the peripheral boundary rises up from the frontal fireline location nearest to the nuclear power plant (NPP) and that the prevailing wind transports all smoke to the NPP on the leeward side. The evaluated time-dependent changes of spatial PM density were utilized to calculate the cumulative amount of PM captured on the air filters of the DHRS. A sensitivity analysis was performed on the prevailing wind speed, to which both the fireline intensity and the smoke transport behavior are sensitive. The total amount of PM on the air filters was conservatively estimated at around several hundred grams per m^2, which is well below the utilization limit. (author)
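
    The filter-loading figure quoted above (grams of PM per m^2 of filter) is, in essence, a time integral: the ambient PM concentration at the intake multiplied by the volumetric air flow through the filters, accumulated over the fire duration and divided by the filter area. The sketch below performs this integration for an assumed concentration history; all numbers are illustrative, not values from the FARSITE/ALOFT-FT study.

    ```python
    # Cumulative PM load on air filters: integrate concentration * flow over
    # time, then divide by filter area. The concentration history, flow rate,
    # capture efficiency and filter area are illustrative assumptions.

    DT_S = 600.0                      # time step of the smoke-transport output [s]
    FLOW_M3_S = 50.0                  # intake volumetric air flow [m^3/s]
    FILTER_AREA_M2 = 100.0            # total filter face area [m^2]
    CAPTURE_EFF = 0.9                 # fraction of PM retained by the filters

    # PM density at the intake for successive time steps [g/m^3] (assumed)
    pm_g_m3 = [0.0, 0.001, 0.004, 0.008, 0.006, 0.003, 0.001, 0.0]

    loading = (sum(c * FLOW_M3_S * DT_S for c in pm_g_m3)
               * CAPTURE_EFF / FILTER_AREA_M2)
    print(f"cumulative PM load: {loading:.2f} g/m^2")
    ```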

  5. Methodological study of the diffusion of interacting cations through clays. Application: experimental tests and simulation of coupled chemistry-diffusion transport of alkaline ions through a synthetic bentonite

    International Nuclear Information System (INIS)

    Melkior, Th.

    2000-01-01

    The subject of this work deals with the project of underground disposal of radioactive wastes in deep geological formations. It concerns the study of the migration of radionuclides through clays. In these materials, the main transport mechanism under natural conditions is assumed to be diffusion; therefore, diffusion experiments are conducted. With interacting solutes which present a strong affinity for the material, the duration of these tests would be too long for the range of concentrations of interest. An alternative is to determine, on the one hand, the geochemical retention properties using batch tests on crushed rock samples and, on the other hand, to deduce the transport parameters from diffusion tests realised with a non-interacting tracer, tritiated water. These data are then used to simulate the migration of the reactive elements with a numerical code which can deal with coupled chemistry-diffusion equations. The validity of this approach is tested by comparing the numerical simulations with the results of diffusion experiments of cations through a clay. The subject is investigated for the diffusion of cesium, lithium and sodium through a compacted sodium bentonite. The diffusion tests are realised with the through-diffusion method. The comparison between the experimental results and the simulations shows that the latter tend to underestimate the propagation of the considered species. The differences could be attributed to surface diffusion and to a decrease in the accessibility of the fixation sites of the bentonite, from the conditions of clay suspensions in batch tests to the situation of compacted samples. The influence of the experimental apparatus used during the diffusion tests on the measurement results has also been tested; it showed that the apparatus has to be taken into consideration when the experimental data are interpreted. A specific model has therefore been developed with the numerical code CASTEM 2000. (author)
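
    A minimal numerical sketch of the transport problem in its simplest linear form: one-dimensional diffusion through a clay slab with sorption lumped into a retardation factor R, so that ∂C/∂t = (De/R) ∂²C/∂x², solved by explicit finite differences between a constant-concentration upstream boundary and a zero-concentration downstream boundary (a through-diffusion configuration). The parameter values are illustrative, not measured bentonite data, and this linear model is a simplification of the coupled chemistry-diffusion treatment discussed above.

    ```python
    # 1D through-diffusion with linear sorption (retardation factor R):
    #   dC/dt = (De / R) * d2C/dx2
    # Explicit finite differences; all parameter values are illustrative.
    import numpy as np

    DE = 1e-10        # effective diffusion coefficient [m^2/s]
    R = 50.0          # retardation factor (sorbing cation); R = 1 for HTO
    LENGTH = 0.01     # slab thickness [m]
    NX = 51

    x = np.linspace(0.0, LENGTH, NX)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 * R / DE            # stable step for the explicit scheme

    c = np.zeros(NX)
    c[0] = 1.0                            # upstream cell held at C0 = 1 (normalized)

    t_end = 300 * 24 * 3600.0             # 300 days of simulated time
    for _ in range(int(t_end / dt)):
        c[1:-1] += (DE / R) * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0            # through-diffusion boundary conditions

    print(f"downstream flux ~ {DE * (c[-2] - c[-1]) / dx:.3e} (normalized units)")
    ```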

  6. Spatially-Explicit Simulation Modeling of Ecological Response to Climate Change: Methodological Considerations in Predicting Shifting Population Dynamics of Infectious Disease Vectors

    Directory of Open Access Journals (Sweden)

    Justin V. Remais

    2013-07-01

    Full Text Available Poikilothermic disease vectors can respond to altered climates through spatial changes in both population size and phenology. Quantitative descriptors to characterize, analyze and visualize these dynamic responses are lacking, particularly across large spatial domains. In order to demonstrate the value of a spatially explicit, dynamic modeling approach, we assessed spatial changes in the population dynamics of Ixodes scapularis, the Lyme disease vector, using a temperature-forced population model simulated across a grid of 4 × 4 km cells covering the eastern United States, using both modeled (Weather Research and Forecasting (WRF) 3.2.1) baseline/current (2001–2004) and projected (Representative Concentration Pathway (RCP) 4.5 and RCP 8.5; 2057–2059) climate data. Ten dynamic population features (DPFs) were derived from simulated populations and analyzed spatially to characterize the regional population response to current and future climate across the domain. Each DPF under the current climate was assessed for its ability to discriminate observed Lyme disease risk and known vector presence/absence, using data from the US Centers for Disease Control and Prevention. Peak vector population and month of peak vector population were the DPFs that performed best as predictors of current Lyme disease risk. When examined under baseline and projected climate scenarios, the spatial and temporal distributions of DPFs shift and the seasonal cycle of key questing life stages is compressed under some scenarios. Our results demonstrate the utility of spatial characterization, analysis and visualization of dynamic population responses, including altered phenology, of disease vectors to altered climate.

  7. Spatially-Explicit Simulation Modeling of Ecological Response to Climate Change: Methodological Considerations in Predicting Shifting Population Dynamics of Infectious Disease Vectors.

    Science.gov (United States)

    Dhingra, Radhika; Jimenez, Violeta; Chang, Howard H; Gambhir, Manoj; Fu, Joshua S; Liu, Yang; Remais, Justin V

    2013-09-01

    Poikilothermic disease vectors can respond to altered climates through spatial changes in both population size and phenology. Quantitative descriptors to characterize, analyze and visualize these dynamic responses are lacking, particularly across large spatial domains. In order to demonstrate the value of a spatially explicit, dynamic modeling approach, we assessed spatial changes in the population dynamics of Ixodes scapularis, the Lyme disease vector, using a temperature-forced population model simulated across a grid of 4 × 4 km cells covering the eastern United States, using both modeled (Weather Research and Forecasting (WRF) 3.2.1) baseline/current (2001-2004) and projected (Representative Concentration Pathway (RCP) 4.5 and RCP 8.5; 2057-2059) climate data. Ten dynamic population features (DPFs) were derived from simulated populations and analyzed spatially to characterize the regional population response to current and future climate across the domain. Each DPF under the current climate was assessed for its ability to discriminate observed Lyme disease risk and known vector presence/absence, using data from the US Centers for Disease Control and Prevention. Peak vector population and month of peak vector population were the DPFs that performed best as predictors of current Lyme disease risk. When examined under baseline and projected climate scenarios, the spatial and temporal distributions of DPFs shift and the seasonal cycle of key questing life stages is compressed under some scenarios. Our results demonstrate the utility of spatial characterization, analysis and visualization of dynamic population responses, including altered phenology, of disease vectors to altered climate.
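
    A schematic of the temperature-forced population idea behind the DPFs: per-capita growth varies with a seasonal temperature cycle, and simple features such as peak abundance and month of peak are extracted from the resulting trajectory. The rate functions and parameters below are invented for illustration and are not the Ixodes scapularis model used in the study.

    ```python
    # Schematic temperature-forced population model: seasonal temperature drives
    # growth; peak abundance and month of peak are read off the trajectory.
    # Rate functions and parameters are illustrative, not the paper's tick model.
    import math

    def temperature(day):
        """Sinusoidal annual temperature cycle [deg C] (assumed)."""
        return 12.0 + 14.0 * math.sin(2.0 * math.pi * (day - 110) / 365.0)

    def growth_rate(temp_c):
        """Per-capita daily growth: positive within a thermal window (assumed)."""
        return 0.03 * max(0.0, 1.0 - ((temp_c - 22.0) / 12.0) ** 2) - 0.01

    pop = [1.0]
    for day in range(365):
        r = growth_rate(temperature(day))
        pop.append(pop[-1] * math.exp(r))      # daily exponential update

    peak = max(pop)
    peak_month = pop.index(peak) // 30 + 1
    print(f"peak population {peak:.2f} (relative), month of peak ~ {peak_month}")
    ```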

  8. Application of response surface methodology in the determination of the PCT in the simulation of a LOFT; Aplicacion de la metodologia de superficies de respuesta en la determinacion del PCT en la simulacion de un LOFT

    Energy Technology Data Exchange (ETDEWEB)

    Alva N, J. [IPN, Escuela Superior de Fisica y Matematicas, Departamento de Ingenieria Nuclear, Av. IPN s/n, Col. Lindavista, Mexico 07738 D.F. (Mexico); Ortiz V, J.; Amador G, R. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico)]. e-mail: neriaesfm@gmail.com

    2008-07-01

    This article summarizes the main features of response surface methodology (RSM) and its connections with linear regression analysis. It also gives an example of the application of RSM to the prediction of the peak cladding temperature (PCT) of a fuel assembly of a nuclear reactor, using data taken from the simulation of a LOFT (Loss Of Fluid Test) during an experts' course. The prediction is used as a first approximation of the PCT behaviour, in order to reduce the computation time required when running best-estimate thermal hydraulic codes. The present work is part of the theoretical basis of a project whose goal is to outline an uncertainty analysis methodology for best-estimate codes used in the thermal hydraulic and safety analysis of nuclear plants and reactors. The institutions participating in this project are ININ, CFE, IPN and CNSNS; the project is sponsored by the IAEA. (Author)

  9. A methodology to relate octane numbers of binary and ternary n-heptane, iso-octane and toluene mixtures with simulated ignition delay times

    KAUST Repository

    Badra, Jihad A.

    2015-08-11

    Predicting the octane numbers (ON) of gasoline surrogate mixtures is of significant importance to the optimization and development of internal combustion (IC) engines. Most ON predictive tools utilize blending rules wherein measured octane numbers are fitted using linear or non-linear mixture fractions on a volumetric or molar basis. In this work, the octane numbers of various binary and ternary n-heptane/iso-octane/toluene blends, referred to as toluene primary reference fuel (TPRF) mixtures, are correlated with a fundamental chemical kinetic parameter: the homogeneous gas-phase fuel/air ignition delay time. Ignition delay times for stoichiometric fuel/air mixtures are calculated at various constant-volume conditions (835 K and 20 atm, 825 K and 25 atm, 850 K and 50 atm (research octane number, RON-like) and 980 K and 45 atm (motor octane number, MON-like)), and for variable volume profiles calculated from cooperative fuel research (CFR) engine pressure and temperature simulations. Compression ratio (or ON) dependent variable-volume-profile ignition delay times are investigated as well. The constant-volume RON-like ignition delay times correlated best with RON among the studied conditions. The variable volume ignition delay times correlate better with MON than the ignition delay times at the other tested conditions. The best correlation is achieved when using compression ratio dependent variable volume profiles to calculate the ignition delay times. Most of the predicted research octane numbers (RON) have uncertainties lower than the repeatability and reproducibility limits of the measurements. The motor octane number (MON) correlation generally has larger uncertainties than the RON correlation.
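
    The correlation exercise described above can be reduced to a simple regression: compute (or look up) a constant-volume ignition delay for each TPRF blend at the RON-like condition, and regress the known octane number against log(τ). The (ON, τ) pairs below are fabricated for illustration; only the fitting machinery is the point.

    ```python
    # Correlating octane number with simulated ignition delay: linear regression
    # of ON against log10(tau). The (ON, tau) pairs below are fabricated numbers
    # standing in for kinetic-model results at a RON-like condition.
    import numpy as np

    # (research octane number, ignition delay [ms]) for hypothetical TPRF blends
    data = [(0.0, 0.8), (20.0, 1.1), (40.0, 1.6), (60.0, 2.4),
            (80.0, 3.9), (90.0, 5.2), (100.0, 7.0)]

    on = np.array([d[0] for d in data])
    log_tau = np.log10([d[1] for d in data])

    slope, intercept = np.polyfit(log_tau, on, 1)
    print(f"ON ~ {slope:.1f} * log10(tau) + {intercept:.1f}")

    # Predict the ON of a new blend from its simulated ignition delay
    tau_new = 3.0   # ms, hypothetical
    print(f"predicted ON for tau = {tau_new} ms: "
          f"{slope * np.log10(tau_new) + intercept:.1f}")
    ```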

  10. METHODOLOGICAL PROBLEMS OF E-LEARNING DIDACTICS

    Directory of Open Access Journals (Sweden)

    Sergey F. Sergeev

    2015-01-01

    Full Text Available The article is devoted to the discussion of the methodological problems of e-learning and of didactic issues in the use of advanced networking and Internet technologies to create training systems and simulators, based on the methodological principles of non-classical and post-non-classical psychology and pedagogy.

  11. Clinical trial methodology

    National Research Council Canada - National Science Library

    Peace, Karl E; Chen, Ding-Geng

    2011-01-01

    ... in the pharmaceutical industry, Clinical trial methodology emphasizes the importance of statistical thinking in clinical research and presents the methodology as a key component of clinical research...

  12. Ares I-X Launch Abort System, Crew Module, and Upper Stage Simulator Vibroacoustic Flight Data Evaluation, Comparison to Predictions, and Recommendations for Adjustments to Prediction Methodology and Assumptions

    Science.gov (United States)

    Smith, Andrew; Harrison, Phil

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Constellation Program (CxP) has identified a series of tests to provide insight into the design and development of the Crew Launch Vehicle (CLV) and Crew Exploration Vehicle (CEV). Ares I-X was selected as the first suborbital development flight test to help meet CxP objectives. The Ares I-X flight test vehicle (FTV) is an early operational model of the CLV, with specific emphasis on the CLV and ground operation characteristics necessary to meet Ares I-X flight test objectives. The in-flight part of the test includes a trajectory to simulate maximum dynamic pressure during flight and a stage separation of the Upper Stage Simulator (USS) from the First Stage (FS); it also includes recovery of the FS. The random vibration response from the Ares I-X flight will be reconstructed for a few specific locations that were instrumented with accelerometers. This recorded data will be helpful in validating and refining vibration prediction tools and methodology. Measured vibroacoustic environments associated with the liftoff and ascent phases of the Ares I-X mission will be compared with pre-flight vibration predictions. The measured flight data were given as time histories, which will be converted into power spectral density plots for comparison with the maximum predicted environments documented in the Vibroacoustics and Shock Environment Data Book, AI1-SYS-ACOv4.10. Vibration predictions made using the statistical energy analysis (SEA) program VAOne will also be incorporated in the comparisons. Ascent and liftoff measured acoustics will also be compared to predictions to assess whether any discrepancies between the predicted vibration levels and measured vibration levels are attributable to inaccurate acoustic predictions. These comparisons will also be helpful in assessing whether adjustments to prediction methodologies are needed to improve agreement between the predicted and measured environments.
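    The time-history-to-PSD conversion described above can be sketched with Welch's method (the signal, sample rate and segment length below are illustrative assumptions, not Ares I-X data):

      # Hedged sketch: convert an acceleration time history to a power
      # spectral density estimate. Signal and parameters are illustrative.
      import numpy as np
      from scipy.signal import welch

      fs = 10240.0                                   # assumed sample rate (Hz)
      t = np.arange(0.0, 10.0, 1.0 / fs)
      accel = np.random.default_rng(0).normal(size=t.size)  # stand-in for flight data (g)

      f, pxx = welch(accel, fs=fs, nperseg=4096)     # pxx in g^2/Hz
      print(f[np.argmax(pxx)], pxx.max())            # dominant frequency and peak PSD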

  13. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator; Uso de una metodologia de calibracion de camaras gamma para la vigilancia de trabajadores usando un simulador de tiroides

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O., E-mail: mercedes.alfaro@inin.gob.m [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2010-09-15

    In Mexico there are a significant number of nuclear medicine centers in operation, so accident risks related to the transport and handling of the open sources used in nuclear medicine exist. The National Institute of Nuclear Research (ININ) aims to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, the intention is to use the equipment available in nuclear medicine centers, together with the neck-thyroid simulators developed by ININ, to calibrate the gamma cameras. Gamma cameras contain components that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurement; therefore, besides their use for diagnostic imaging, they can be calibrated with anthropomorphic simulators, and also with point sources, for the quantification of radionuclide activity distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII -Harmonization of internal dosimetry procedures- in which 9 countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America for emergency response, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve the appropriate radiological protection of workers, essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)

  14. Probabilistic methodology for turbine missile risk analysis

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.; Frank, R.A.

    1984-01-01

    A methodology has been developed for estimation of the probabilities of turbine-generated missile damage to nuclear power plant structures and systems. Mathematical models of the missile generation, transport, and impact events have been developed and sequenced to form an integrated turbine missile simulation methodology. Probabilistic Monte Carlo techniques are used to estimate the plant impact and damage probabilities. The methodology has been coded in the TURMIS computer code to facilitate numerical analysis and plant-specific turbine missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and probabilities have been estimated for a hypothetical nuclear power plant case study. (orig.)
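    The generation-transport-impact sequence lends itself to a compact Monte Carlo sketch (all geometry, distributions and probabilities below are invented placeholders, not TURMIS models or values):

      # Hedged sketch: Monte Carlo estimate of turbine missile damage
      # probability. Every number here is an illustrative placeholder.
      import numpy as np

      rng = np.random.default_rng(1)
      N = 100_000
      azimuth = rng.uniform(0.0, 2 * np.pi, N)       # missile ejection azimuth (rad)
      distance = rng.normal(60.0, 20.0, N)           # travel distance (m)

      # Target: a building footprint 40-70 m away within a 30-degree sector
      hit = (np.abs(azimuth - np.pi / 4) < np.pi / 12) & (distance > 40) & (distance < 70)
      p_hit = hit.mean()

      p_generation = 1e-4     # assumed missile generation probability per demand
      p_damage_hit = 0.3      # assumed damage probability given a strike
      print("P(damage) ~", p_generation * p_hit * p_damage_hit)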

  15. USGS Methodology for Assessing Continuous Petroleum Resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
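    In the same spirit, a toy Monte Carlo aggregation of per-cell well productivities might look like this (the cell counts, success ratio and lognormal parameters are invented for illustration, not USGS inputs):

      # Hedged sketch: probabilistic total for a continuous accumulation as
      # the sum of sampled per-well recoveries. Parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(42)
      trials = 10_000
      totals = np.empty(trials)
      for i in range(trials):
          n_cells = rng.integers(500, 1500)          # untested cells (assumed)
          success = rng.uniform(0.6, 0.9)            # success ratio (assumed)
          n_prod = rng.binomial(n_cells, success)
          eur = rng.lognormal(np.log(0.4), 0.8, n_prod)  # per-well recovery (assumed)
          totals[i] = eur.sum()

      f95, f50, f5 = np.percentile(totals, [5, 50, 95])
      print(f"F95={f95:.0f}  F50={f50:.0f}  F5={f5:.0f}")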

  16. Scenario development methodologies

    International Nuclear Information System (INIS)

    Eng, T.; Hudson, J.; Stephansson, O.

    1994-11-01

    In the period 1981-1994, SKB has studied several methodologies to systematize and visualize all the features, events and processes (FEPs) that can influence a repository for radioactive waste in the future. All the work performed is based on the terminology and basic findings of the joint SKI/SKB work on scenario development presented in SKB Technical Report 89-35. The methodologies studied are a) Event tree analysis, b) Influence diagrams and c) Rock Engineering Systems (RES) matrices. Each methodology is explained in this report, together with examples of applications. One chapter is devoted to a comparison between the two most promising methodologies, namely Influence diagrams and the RES methodology. In conclusion, a combination of parts of the Influence diagram and RES methodologies is likely to be a promising approach. 26 refs

  17. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  18. Theory of mind and Verstehen (understanding) methodology.

    Science.gov (United States)

    Kumazaki, Tsutomu

    2016-09-01

    Theory of mind is a prominent, but highly controversial, field in psychology, psychiatry, and philosophy of mind. Simulation theory, theory-theory and other views have been presented in recent decades, none of which are monolithic. In this article, various views on theory of mind are reviewed, and methodological problems within each view are investigated. The relationship between simulation theory and Verstehen (understanding) methodology in traditional human sciences is an intriguing issue, although the latter is not a direct ancestor of the former. From that perspective, lessons for current clinical psychiatry are drawn. © The Author(s) 2016.

  19. Simplified methodology for Angra 1 containment analysis

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1991-08-01

    A simplified analysis methodology was developed to simulate a Large Break Loss of Coolant Accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5 and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The data obtained were compared with the Angra 1 Final Safety Analysis Report and with those calculated by a detailed model. The results obtained by this new methodology, as well as the small computational simulation time, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author)

  20. Simulation model and methodology for calculating the damage by internal radiation in a PWR reactor; Modelo de simulacion y metodologia para el calculo del dano por irradiacion en los internos de un reactor PWR

    Energy Technology Data Exchange (ETDEWEB)

    Cadenas Mendicoa, A. M.; Benito Hernandez, M.; Barreira Pereira, P.

    2012-07-01

    This study covers the development of a methodology and three-dimensional models to estimate the irradiation damage to the vessel internals of a commercial PWR reactor from the irradiation history of its operating cycles.

  1. Introduction to LCA Methodology

    DEFF Research Database (Denmark)

    Hauschild, Michael Z.

    2018-01-01

    In order to offer the reader an overview of the LCA methodology in the preparation of the more detailed description of its different phases, a brief introduction is given to the methodological framework according to the ISO 14040 standard and the main elements of each of its phases. Emphasis...

  2. Methodologies, languages and tools

    International Nuclear Information System (INIS)

    Amako, Katsuya

    1994-01-01

    This is a summary of the "Methodologies, Languages and Tools" session of the CHEP'94 conference. All the contributions on methodologies and languages are related to the object-oriented approach. Other topics presented are related to various software tools in the down-sized computing environment

  3. Archetype modeling methodology.

    Science.gov (United States)

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although there exist many experiences about using archetypes in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can be also a reference for CIMs development using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Menopause and Methodological Doubt

    Science.gov (United States)

    Spence, Sheila

    2005-01-01

    Menopause and methodological doubt begins by making a tongue-in-cheek comparison between Descartes' methodological doubt and the self-doubt that can arise around menopause. A hermeneutic approach is taken in which Cartesian dualism and its implications for the way women are viewed in society are examined, both through the experiences of women…

  5. VEM: Virtual Enterprise Methodology

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a virtual enterprise methodology (VEM) that outlines activities to consider when setting up and managing virtual enterprises (VEs). As a methodology the VEM helps companies to ask the right questions when preparing for and setting up an enterprise network, which works...

  6. Data Centric Development Methodology

    Science.gov (United States)

    Khoury, Fadi E.

    2012-01-01

    Data centric applications, an important effort of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies could work for structural, procedural, or object-oriented applications, but fail to capture…

  7. The Methodology of Magpies

    Science.gov (United States)

    Carter, Susan

    2014-01-01

    Arts/Humanities researchers frequently do not explain methodology overtly; instead, they "perform" it through their use of language, textual and historic cross-reference, and theory. Here, methodologies from literary studies are shown to add to Higher Education (HE) an exegetical and critically pluralist approach. This includes…

  8. Chemical migration and risk assessment methodology

    International Nuclear Information System (INIS)

    Onishi, Y.; Brown, S.M.; Olsen, A.R.; Parkhurst, M.A.

    1981-01-01

    To provide a scientific basis for risk assessment and decision making, the Chemical Migration and Risk Assessment (CMRA) Methodology was developed to simulate overland and instream toxic contaminant migration and fate, and to predict the probability of acute and chronic impacts on aquatic biota. The simulation results indicated that the time between the pesticide application and the subsequent runoff-producing event was the most important factor determining the amount of alachlor transported. The study also revealed that sediment transport has important effects on contaminant migration when sediment concentrations in receiving streams are high or contaminants are highly susceptible to adsorption by sediment. Although the capabilities of the CMRA methodology were only partially tested in this study, the results demonstrate that the methodology can be used as a scientific decision-making tool for toxic chemical regulations, a research tool to evaluate the relative significance of various transport and degradation phenomena, and a tool to examine the effectiveness of toxic chemical control practices
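    The sensitivity to the application-to-runoff interval can be illustrated with a first-order decay sketch (the half-life and applied mass are illustrative values, not numbers from the CMRA study):

      # Hedged sketch: pesticide mass still available when runoff occurs,
      # assuming simple first-order decay. Values are illustrative only.
      import math

      half_life_days = 15.0                 # assumed soil half-life
      k = math.log(2) / half_life_days
      m0 = 1.0                              # assumed applied mass (kg/ha)

      for days in (1, 5, 15, 30, 60):
          available = m0 * math.exp(-k * days)
          print(f"{days:>2} days to runoff: {available:.3f} kg/ha available")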

  9. Design Methodology - Design Synthesis

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup

    2003-01-01

    Design Methodology shall be seen as our understanding of how to design; it is an early (emerging in the late 1960s) and original articulation of teachable and learnable methodics. The insight is based upon two sources: the nature of the designed artefacts and the nature of human designing. Design Methodology is part of our practice and our knowledge about designing, and it has been strongly supported by the establishment and work of a design research community. The aim of this article is to broaden the reader's view of designing and Design Methodology. This is done by sketching the development of Design Methodology through time and sketching some important approaches and methods. The development is mainly forced by changing industrial conditions, by the growth of IT support for designing, but also by the growth of insight into designing created by design researchers. Today…

  10. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  11. Nonlinear Image Denoising Methodologies

    National Research Council Canada - National Science Library

    Yufang, Bao

    2002-01-01

    In this thesis, we propose a theoretical as well as practical framework to combine geometric prior information to a statistical/probabilistic methodology in the investigation of a denoising problem...

  12. Clinical trial methodology

    National Research Council Canada - National Science Library

    Peace, Karl E; Chen, Ding-Geng

    2011-01-01

    "Now viewed as its own scientific discipline, clinical trial methodology encompasses the methods required for the protection of participants in a clinical trial and the methods necessary to provide...

  13. Methodology of sustainability accounting

    Directory of Open Access Journals (Sweden)

    O.H. Sokil

    2017-03-01

    Full Text Available Modern challenges of the theory and methodology of accounting are addressed through the formation and implementation of new concepts, the purpose of which is to meet users' needs for standard and unique information. The development of a methodology for sustainability accounting is a key aspect of the management of an economic entity. The purpose of the article is to form the methodological bases of accounting for sustainable development and determine its goals, objectives, object, subject, methods, functions and key aspects. The author analyzes the theoretical bases of the definition and considers the components of the traditional accounting methodology. A generalized structural diagram of the methodology for accounting for sustainable development is offered in the article. The complex of methods and principles of sustainable development accounting, covering both standard and non-standard provisions, has been systematized. The new system of theoretical and methodological provisions of accounting for sustainable development is justified in the context of determining its purpose, objective, subject, object, methods, functions and key aspects.

  14. Optimization of economic investment in fire protection (PCI) using the design benefits methodology in the analysis of fire spread with FDS (Fire Dynamics Simulator) in nuclear fire areas

    International Nuclear Information System (INIS)

    Salellas, J.

    2015-01-01

    Fire simulation analysis allows knowing the evolution and spread of fire in areas of interest within a NPP, such as the control room, cable room and multi-zone compartments, among others. Fires are a main concern in the safety analysis of NPPs. IDOM has the capability to carry out fire simulations, taking into account smoke control, fire spread, toxicity levels, ventilation and all relevant physical phenomena. As a result, appropriate fire protection measures can be assessed in each scenario. CFD tools applied to fire simulations can determine with higher resolution all damage caused during the fire. Furthermore, such tools can reduce costs due to a lower impact of design modifications. (Author)

  15. Global Methodology to Integrate Innovative Models for Electric Motors in Complete Vehicle Simulators Méthodologie générale d’intégration de modèles innovants de moteurs électriques dans des simulateurs véhicules complets

    Directory of Open Access Journals (Sweden)

    Abdelli A.

    2011-11-01

    Full Text Available How can the greenhouse gas emissions of passenger cars be reduced to 120 g/km in 2012 and 95 g/km in 2020, as stated by the European Commission and the automotive manufacturers? This question currently preoccupies the whole automotive world. One of the most promising solutions receiving attention is the electrification of the vehicle. It is this idea that has prompted automobile manufacturers to envisage increasingly innovative hybrid vehicles. However, this theoretically interesting solution makes the powertrain more complex, which requires the use of simulation tools in order to reduce the cost and time of system development. System simulation, which is already a crucial tool in the design process of internal combustion engines, becomes indispensable in the development of the Hybrid Electric Vehicle (HEV). To study the complex structures of HEVs, following the example of the physical models developed for the internal combustion engine, system simulation has to provide itself with equally predictive models of electric machines. By specification, these models have to respect the strict constraint on simulation time. This constraint guarantees the wide use of the simulators, notably to help the development and validation of control strategies. This paper presents a global methodology to develop innovative models of electrical machines, whose final objective is to be integrated in a global vehicle simulator. The methodology includes several types of models and tools, such as Finite Element Models (FEM), characterization models and simulating models. It was applied successfully to model an interior permanent magnet synchronous motor. At the end of the modelling process, the electric motor was integrated in a complete hybrid vehicle simulator, which guarantees its good operation and integration in the global design process of a new vehicle concept
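    As a flavor of the kind of fast electric machine model such a simulator needs, the dq-frame torque of an interior permanent magnet motor reduces to one line (the machine parameters are illustrative placeholders, not those of the motor modelled in the paper):

      # Hedged sketch: electromagnetic torque of an interior PMSM in the
      # dq frame. All parameters are illustrative placeholders.
      def ipmsm_torque(i_d, i_q, p=4, psi_f=0.08, L_d=0.3e-3, L_q=0.6e-3):
          """Torque (N*m): p pole pairs, magnet flux psi_f (Wb),
          inductances L_d/L_q (H), stator currents i_d/i_q (A)."""
          return 1.5 * p * (psi_f * i_q + (L_d - L_q) * i_d * i_q)

      print(ipmsm_torque(i_d=-50.0, i_q=120.0))  # magnet plus reluctance torque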

  16. Regional Shelter Analysis Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dennison, Deborah [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Jave [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Miller, Paul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.
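    The core combination, building protection weighted by where people are, can be reduced to a toy calculation (all protection factors and occupancy fractions below are invented for illustration):

      # Hedged sketch: population-weighted fallout protection. Protection
      # factors (PF) and occupancy fractions are illustrative placeholders.
      shelter = {                      # location: (PF, fraction of population)
          "wood-frame house":     (3.0, 0.45),
          "masonry upper floor": (10.0, 0.25),
          "basement":            (40.0, 0.20),
          "outdoors":             (1.0, 0.10),
      }

      # Population-averaged fraction of the outdoor dose actually received
      dose_fraction = sum(frac / pf for pf, frac in shelter.values())
      print(f"population-average dose fraction: {dose_fraction:.3f}")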

  17. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  18. The policy trail methodology

    DEFF Research Database (Denmark)

    Holford, John; Larson, Anne; Melo, Susana

    In recent years, the "policy trail" has been proposed as a methodology appropriate to the shifting and fluid governance of lifelong learning in the late modern world (Holford et al. 2013, Cort 2014). The contemporary environment is marked by multi-level governance (global… The author develops the notion of 'policy trail', arguing that it can overcome 'methodological nationalism' and link structure and agency in research on the 'European educational space'. The 'trail' metaphor, she suggests, captures the intentionality and the erratic character of policy. The trail connects sites and brings about change, but – although policy may be intended to be linear, with specific outcomes – policy often has to bend, and sometimes meets insurmountable obstacles. This symposium outlines and develops the methodology, but also reports on research undertaken within a major FP7 project (LLLIght'in'Europe, 2012-15) which made use…

  19. A study on methodological of software development for HEP

    International Nuclear Information System (INIS)

    Ding Yuzheng; Dai Guiliang

    1999-01-01

    The HEP related software system is a large one. It comprises mainly detector simulation software, DAQ software and an offline system. The author discusses the advantages of applying object-oriented (OO) methodologies to such a software system, and gives the basic strategy for the use of OO methodologies, languages and tools in the development of HEP related software.

  20. Changing methodologies in TESOL

    CERN Document Server

    Spiro, Jane

    2013-01-01

    Covering core topics from vocabulary and grammar to teaching, writing speaking and listening, this textbook shows you how to link research to practice in TESOL methodology. It emphasises how current understandings have impacted on the language classroom worldwide and investigates the meaning of 'methods' and 'methodology' and the importance of these for the teacher: as well as the underlying assumptions and beliefs teachers bring to bear in their practice. By introducing you to language teaching approaches, you will explore the way these are influenced by developments in our understanding of l

  1. Creativity in phenomenological methodology

    DEFF Research Database (Denmark)

    Dreyer, Pia; Martinsen, Bente; Norlyk, Annelise

    2014-01-01

    Nursing research is often concerned with lived experiences in human life, using phenomenological and hermeneutic approaches. These empirical studies may use different creative expressions and art-forms to describe and enhance an embodied and personalised understanding of lived experiences. Drawing on the methodologies of van Manen, Dahlberg, Lindseth & Norberg, the aim of this paper is to argue that the increased focus on creativity and arts in research methodology is valuable to gain a deeper insight into lived experiences. We illustrate this point through examples from empirical nursing studies, and discuss how such creative approaches may support a respectful renewal of phenomenological research traditions in nursing research.

  2. Computer Network Operations Methodology

    Science.gov (United States)

    2004-03-01

    …means of their computer information systems. Disrupt: this type of attack focuses on disruption, as attackers might surreptitiously reprogram enemy systems…for example by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits effective…between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that…

  3. SCI Hazard Report Methodology

    Science.gov (United States)

    Mitchell, Michael S.

    2010-01-01

    This slide presentation reviews the methodology in creating a Source Control Item (SCI) Hazard Report (HR). The SCI HR provides a system safety risk assessment for the following Ares I Upper Stage Production Contract (USPC) components (1) Pyro Separation Systems (2) Main Propulsion System (3) Reaction and Roll Control Systems (4) Thrust Vector Control System and (5) Ullage Settling Motor System components.

  4. A Functional HAZOP Methodology

    DEFF Research Database (Denmark)

    Liin, Netta; Lind, Morten; Jensen, Niels

    2010-01-01

    A HAZOP methodology is presented where a functional plant model assists in a goal oriented decomposition of the plant purpose into the means of achieving the purpose. This approach leads to nodes with simple functions from which the selection of process and deviation variables follow directly...

  5. Complicating Methodological Transparency

    Science.gov (United States)

    Bridges-Rhoads, Sarah; Van Cleave, Jessica; Hughes, Hilary E.

    2016-01-01

    A historical indicator of the quality, validity, and rigor of qualitative research has been the documentation and disclosure of the behind-the-scenes work of the researcher. In this paper, we use what we call "methodological data" as a tool to complicate the possibility and desirability of such transparency. Specifically, we draw on our…

  6. Methodological Advances in Dea

    NARCIS (Netherlands)

    L. Cherchye (Laurens); G.T. Post (Thierry)

    2001-01-01

    textabstractWe survey the methodological advances in DEA over the last 25 years and discuss the necessary conditions for a sound empirical application. We hope this survey will contribute to the further dissemination of DEA, the knowledge of its relative strengths and weaknesses, and the tools

  7. NUSAM Methodology for Assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Leach, Janice [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Snell, Mark K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-07-01

    This document provides a methodology for the performance-based assessment of security systems designed for the protection of nuclear and radiological materials and the processes that produce and/or involve them. It is intended for use with both relatively simple installations and with highly regulated complex sites with demanding security requirements.

  8. MIRD methodology. Part 1

    International Nuclear Information System (INIS)

    Rojo, Ana M.

    2004-01-01

    This lecture develops the MIRD (Medical Internal Radiation Dose) methodology for the evaluation of the internal dose due to the administration of radiopharmaceuticals. In this first part, the basic concepts and the main equations are presented. The ICRP Dosimetric System is also explained. (author)

  9. MIRD methodology. Part 2

    International Nuclear Information System (INIS)

    Gomez Parada, Ines

    2004-01-01

    This paper develops the MIRD (Medical Internal Radiation Dose) methodology for the evaluation of the internal dose due to the administration of radiopharmaceuticals. In this second part, different methods for the calculation of the accumulated activity are presented, together with the effective half life definition. Different forms of Retention Activity curves are also shown. (author)
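    For reference, the usual relations behind these quantities, for instantaneous uptake and mono-exponential retention, can be written compactly (a sketch of the standard expressions, not a derivation from this lecture):

      \frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{biol}}},
      \qquad
      \tilde{A} = \int_0^\infty A_0\, e^{-\lambda_{\mathrm{eff}} t}\, dt
                = \frac{A_0}{\lambda_{\mathrm{eff}}}
                = \frac{A_0\, T_{\mathrm{eff}}}{\ln 2}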

  10. Methodology for evaluation of industrial CHP production

    International Nuclear Information System (INIS)

    Pavlovic, Nenad V.; Studovic, Milovan

    2000-01-01

    At the end of the century, industry switched from being an exclusive power consumer to a consumer-producer, one of the players on the deregulated power market. Consequently, the goals of industrial plant optimization have to be changed, creating new challenges that industrial management has to face. The paper reviews the authors' own methodology for the evaluation of industrial power production on the deregulated power market. The methodology takes the economic efficiency of industrial CHP facilities as the main criterion for evaluation. Energy and ecological efficiency are used as additional criteria, in which social goals can be found implicitly. The methodology also identifies key and limiting factors for CHP production in industry. It can be successfully applied by using available commercial software for energy simulation of CHP plants and for economic evaluation. (Authors)

  11. Coupling methodology within the software platform alliances

    Energy Technology Data Exchange (ETDEWEB)

    Montarnal, Ph; Deville, E; Adam, E; Bengaouer, A [CEA Saclay, Dept. de Modelisation des Systemes et Structures 91 - Gif-sur-Yvette (France); Dimier, A; Gaombalet, J; Loth, L [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France); Chavant, C [Electricite de France (EDF), 92 - Clamart (France)

    2005-07-01

    CEA, ANDRA and EDF are jointly developing the software platform ALLIANCES, whose aim is to produce a tool for the simulation of nuclear waste storage and disposal repositories. This type of simulation deals with highly coupled thermo-hydro-mechanical and chemical (T-H-M-C) processes. A key objective of Alliances is to provide the capability to develop coupling algorithms between existing codes. The aim of this paper is to present the coupling methodology used in the context of this software platform. (author)

  12. Coupling methodology within the software platform alliances

    International Nuclear Information System (INIS)

    Montarnal, Ph.; Deville, E.; Adam, E.; Bengaouer, A.; Dimier, A.; Gaombalet, J.; Loth, L.; Chavant, C.

    2005-01-01

    CEA, ANDRA and EDF are jointly developing the software platform ALLIANCES, whose aim is to produce a tool for the simulation of nuclear waste storage and disposal repositories. This type of simulation deals with highly coupled thermo-hydro-mechanical and chemical (T-H-M-C) processes. A key objective of Alliances is to provide the capability to develop coupling algorithms between existing codes. The aim of this paper is to present the coupling methodology used in the context of this software platform. (author)

  13. Soft Systems Methodology

    Science.gov (United States)

    Checkland, Peter; Poulter, John

    Soft systems methodology (SSM) is an approach for tackling problematical, messy situations of all kinds. It is an action-oriented process of inquiry into problematic situations in which users learn their way from finding out about the situation, to taking action to improve it. The learning emerges via an organised process in which the situation is explored using a set of models of purposeful action (each built to encapsulate a single worldview) as intellectual devices, or tools, to inform and structure discussion about a situation and how it might be improved. This paper, written by the original developer Peter Checkland and practitioner John Poulter, gives a clear and concise account of the approach that covers SSM's specific techniques, the learning cycle process of the methodology and the craft skills which practitioners develop. This concise but theoretically robust account nevertheless includes the fundamental concepts, techniques, core tenets described through a wide range of settings.

  14. Transparent Guideline Methodology Needed

    DEFF Research Database (Denmark)

    Lidal, Ingeborg; Norén, Camilla; Mäkelä, Marjukka

    2013-01-01

    As part of learning at the Nordic Workshop of Evidence-based Medicine, we have read with interest the practice guidelines for central venous access, published in your Journal in 2012.1 We appraised the quality of this guideline using the checklist developed by The Evidence-Based Medicine Working Group.2 Similar criteria for guideline quality have been suggested elsewhere.3 Our conclusion was that this much needed guideline is currently unclear about several aspects of the methodology used in developing the recommendations. This means potential users cannot be certain that the recommendations are based on the best currently available evidence. Our concerns are in two main categories: the rigor of development, including the methodology of searching, evaluating, and combining the evidence; and editorial independence, including funding and possible conflicts of interest.

  15. Web survey methodology

    CERN Document Server

    Callegaro, Mario; Vehovar, Vasja

    2015-01-01

    Web Survey Methodology guides the reader through the past fifteen years of research in web survey methodology. It both provides practical guidance on the latest techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. Core topics from preparation to questionnaire design, recruitment testing to analysis and survey software are all covered in a systematic and insightful way. The reader will be exposed to key concepts and key findings in the literature, covering measurement, non-response, adjustments, paradata, and cost issues. The book also discusses the hottest research topics in survey research today, such as internet panels, virtual interviewing, mobile surveys and the integration with passive measurements, e-social sciences, mixed modes and business intelligence. The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics and customer satisfaction research.

  16. Steganography: LSB Methodology

    Science.gov (United States)

    2012-08-02

    Progress report on Steganography: LSB Methodology. In computer science, steganography is the science… The report discusses the J. Fridrich, M. Goljan and R. Du paper titled "Reliable detection of LSB steganography in grayscale and color images" (in J. Dittmann, K. Nahrstedt, and P. Wohlmacher, editors, Proceedings of the ACM, Special…).
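    For context, the LSB technique itself is simple to sketch (a minimal, illustrative embed/extract over a list of pixel values; this is not the report's implementation):

      # Hedged sketch: hide a bit string in the least significant bits of
      # pixel values, then recover it. Illustrative only.
      def embed(pixels, bits):
          return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

      def extract(pixels, n_bits):
          return [p & 1 for p in pixels[:n_bits]]

      cover = [137, 52, 203, 88, 19, 240]   # toy grayscale pixel values
      message = [1, 0, 1, 1]
      stego = embed(cover, message)
      assert extract(stego, len(message)) == message
      print(stego)                           # visually indistinguishable from cover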

  17. Soil Radiological Characterisation Methodology

    International Nuclear Information System (INIS)

    Attiogbe, Julien; Aubonnet, Emilie; De Maquille, Laurence; De Moura, Patrick; Desnoyers, Yvon; Dubot, Didier; Feret, Bruno; Fichet, Pascal; Granier, Guy; Iooss, Bertrand; Nokhamzon, Jean-Guy; Ollivier Dehaye, Catherine; Pillette-Cousin, Lucien; Savary, Alain

    2014-12-01

    This report presents the general methodology and best practice approaches which combine proven existing techniques for sampling and characterisation to assess the contamination of soils prior to remediation. It is based on feedback from projects conducted by the main French nuclear stakeholders involved in the field of remediation and dismantling (EDF, CEA, AREVA and IRSN). The application of this methodology will enable project managers to obtain the elements necessary for drawing up the files associated with remediation operations, as required by the regulatory authorities. It is applicable to each of the steps necessary for the piloting of remediation work-sites, depending on the objectives targeted (release into the public domain, re-use, etc.). The main part describes the applied statistical methodology, with exploratory analysis and variogram data, and the identification of singular points and their location. The results obtained permit the production of a map identifying the contaminated surface and subsurface areas. The report covers radiological site characterisation from the initial investigations, based on historical and functional analysis, through to checking that the remediation objectives have been met. An example application follows, drawn from feedback on the remediation of a contaminated site at the Fontenay aux Roses facility. It is supplemented by a glossary of the main terms used in the field, taken from various publications and international standards. This technical report supports the ISO Standard ISO/TC 85/SC 5 N 18557 'Sampling and characterisation principles for soils, buildings and infrastructures contaminated by radionuclides for remediation purposes'. (authors) [fr]

  18. Demonstration of an infiltration evaluation methodology

    International Nuclear Information System (INIS)

    Smyth, J.D.; Gee, G.W.; Kincaid, C.T.; Nichols, W.M.; Bresler, E.

    1990-07-01

    An Infiltration Evaluation Methodology (IEM) was developed for the US Nuclear Regulatory Commission (NRC) by Pacific Northwest Laboratory (PNL) to provide a consistent, well formulated approach for evaluating drainage through engineered covers at low-level radioactive waste (LLW) sites. The methodology is designed to help evaluate the ability of proposed waste site covers to minimize drainage for LLW site license applications and for sites associated with the Uranium Mill Tailings Remedial Action (UMTRA) program. The objective of this methodology is to estimate the drainage through an engineered burial site cover system. The drainage estimate can be used as an input to a broader performance assessment methodology currently under development by the NRC. The methodology is designed to simulate, at the field scale, significant factors and hydrologic conditions which determine or influence estimates of infiltration, long-term moisture content profiles, and drainage from engineered covers and barriers. The IEM developed under this study acknowledges the uncertainty inherent in soil properties and quantifies the influence of such uncertainty on the estimates of drainage in engineered cover systems at waste disposal sites. 6 refs., 1 fig

  19. Methodology applied to develop the DHIE: applied methodology

    CSIR Research Space (South Africa)

    Herselman, Marlien

    2016-12-01

    Full Text Available This section will address the methodology that was applied to develop the South African Digital Health Innovation Ecosystem (DHIE). Each chapter under Section B represents a specific phase in the methodology....

  20. Using Simulation as an Investigational Methodology to Explore the Impact of Technology on Team Communication and Patient Management: A Pilot Evaluation of the Effect of an Automated Compression Device.

    Science.gov (United States)

    Gittinger, Matthew; Brolliar, Sarah M; Grand, James A; Nichol, Graham; Fernandez, Rosemarie

    2017-06-01

    This pilot study used a simulation-based platform to evaluate the effect of an automated mechanical chest compression device on team communication and patient management. Four-member emergency department interprofessional teams were randomly assigned to perform manual chest compressions (control, n = 6) or automated chest compressions (intervention, n = 6) during a simulated cardiac arrest with 2 phases: phase 1 baseline (ventricular tachycardia), followed by phase 2 (ventricular fibrillation). Patient management was coded using an Advanced Cardiovascular Life Support-based checklist. Team communication was categorized in the following 4 areas: (1) teamwork focus; (2) huddle events, defined as statements focused on re-establishing situation awareness, reinforcing existing plans, and assessing the need to adjust the plan; (3) clinical focus; and (4) profession of team member. Statements were aggregated for each team. At baseline, groups were similar with respect to total communication statements and patient management. During cardiac arrest, the total number of communication statements was greater in teams performing manual compressions (median, 152.3; interquartile range [IQR], 127.6-181.0) as compared with teams using an automated compression device (median, 105; IQR, 99.5-123.9). Huddle events were more frequent in teams performing automated chest compressions (median, 4.0; IQR, 3.1-4.3 vs. 2.0; IQR, 1.4-2.6). Teams randomized to the automated compression intervention had a delay to initial defibrillation (median, 208.3 seconds; IQR, 153.3-222.1 seconds) as compared with control teams (median, 63.2 seconds; IQR, 30.1-397.2 seconds). Use of an automated compression device may impact both team communication and patient management. Simulation-based assessments offer important insights into the effect of technology on healthcare teams.

  1. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  2. Case Study Research Methodology

    Directory of Open Access Journals (Sweden)

    Mark Widdowson

    2011-01-01

    Full Text Available Commenting on the lack of case studies published in modern psychotherapy publications, the author reviews the strengths of case study methodology and responds to common criticisms, before providing a summary of types of case studies including clinical, experimental and naturalistic. Suggestions are included for developing systematic case studies and brief descriptions are given of a range of research resources relating to outcome and process measures. Examples of a pragmatic case study design and a hermeneutic single-case efficacy design are given and the paper concludes with some ethical considerations and an exhortation to the TA community to engage more widely in case study research.

  3. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  4. Geotechnical site assessment methodology

    International Nuclear Information System (INIS)

    Tunbridge, L.W.; Richards, L.R.

    1985-09-01

    The reports comprising this volume concern the research conducted on geotechnical site assessment methodology at the Carwynnen test mine in granites in Cornwall, with particular reference to the effect of structures imposed by discontinuities on the engineering behaviour of rock masses. The topics covered are: in-situ stress measurements using (a) the hydraulic fracturing method, or (b) the US Bureau of Mines deformation probe; scanline discontinuity survey - coding form and instructions, and data; applicability of geostatistical estimation methods to scalar rock properties; comments on in-situ stress at the Carwynnen test mine and the state of stress in the British Isles. (U.K.)

  5. Microphysics evolution and methodology

    International Nuclear Information System (INIS)

    Dionisio, J.S.

    1990-01-01

    A few general features of the evolution of microphysics and its relationship with microphysics methodology are briefly surveyed. Several pluri-disciplinary and interdisciplinary aspects of microphysics research are also discussed in the present scientific context. The need for an equilibrium between individual tendencies and the collective constraints required by team work, already formulated thirty years ago by Frederic Joliot, is particularly stressed in the present conjuncture of nuclear research, which favours very large team projects and discourages individual initiatives. The increasing importance of the science of science (due to its multiple social, economical and ecological aspects) and the stronger competition between national and international tendencies of scientific (and technical) cooperation are also discussed. (author)

  6. MIRD methodology; Metodologia MIRD

    Energy Technology Data Exchange (ETDEWEB)

    Rojo, Ana M [Autoridad Regulatoria Nuclear, Buenos Aires (Argentina); Gomez Parada, Ines [Sociedad Argentina de Radioproteccion, Buenos Aires (Argentina)

    2004-07-01

    The MIRD (Medical Internal Radiation Dose) system was established by the Society of Nuclear Medicine of the USA in 1960 to assist the medical community in the estimation of the dose to organs and tissues due to the incorporation of radioactive materials. Since then, 'MIRD Dose Estimate Reports' (numbers 1 to 12) and 'Pamphlets', of great utility for dose calculations, have been published. The MIRD system was designed essentially for the calculation of doses received by patients during nuclear medicine diagnostic procedures. The MIRD methodology for absorbed dose calculations in different tissues is explained.
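    The core MIRD relation, absorbed dose as cumulated activity times an S factor summed over source regions, reduces to a few lines (the organ names and numbers below are illustrative placeholders, not tabulated MIRD data):

      # Hedged sketch: D(target) = sum over source regions of cumulated
      # activity * S factor. All values are illustrative placeholders.
      cumulated_activity = {"thyroid": 3.2e5, "remainder": 1.1e5}  # Bq*s (assumed)
      s_to_thyroid = {                     # Gy per Bq*s, source -> thyroid (assumed)
          "thyroid":   2.0e-11,
          "remainder": 4.0e-14,
      }

      dose = sum(cumulated_activity[src] * s_to_thyroid[src] for src in s_to_thyroid)
      print(f"absorbed dose to thyroid: {dose:.2e} Gy")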

  7. Beam optimization: improving methodology

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.

    2004-01-01

    Different optimization techniques commonly used in biology and food technology allow a systematic and complete analysis of response functions. In spite of the great interest in medical and nuclear physics in the problem of optimizing mixed beams, little attention has been given to sophisticated mathematical tools. Indeed, many techniques are perfectly suited to the typical problem of beam optimization. This article is intended as a guide to the use of two methods, namely Response Surface Methodology and Simplex, which are expected to speed up the optimization process and, at the same time, give more insight into the relationships among the dependent variables controlling the response.
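    As a minimal illustration of the Simplex-type search mentioned above, a Nelder-Mead optimization of a toy two-parameter response can be run with scipy (the objective below is an invented stand-in, not a real mixed-beam dose-response model):

      # Hedged sketch: Nelder-Mead simplex search on a toy response surface.
      import numpy as np
      from scipy.optimize import minimize

      def negative_response(x):
          a, b = x                          # two beam parameters (illustrative)
          return -(5.0 - (a - 1.2) ** 2 - 2.0 * (b - 0.7) ** 2)

      result = minimize(negative_response, x0=np.array([0.0, 0.0]),
                        method="Nelder-Mead")
      print(result.x, -result.fun)          # optimum near (1.2, 0.7), response 5.0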

  8. Literacy research methodologies

    CERN Document Server

    Duke, Nell K

    2012-01-01

    The definitive reference on literacy research methods, this book serves as a key resource for researchers and as a text in graduate-level courses. Distinguished scholars clearly describe established and emerging methodologies, discuss the types of questions and claims for which each is best suited, identify standards of quality, and present exemplary studies that illustrate the approaches at their best. The book demonstrates how each mode of inquiry can yield unique insights into literacy learning and teaching and how the methods can work together to move the field forward.   New to This Editi

  9. Methodology of site studies

    International Nuclear Information System (INIS)

    Caries, J.C.; Hugon, J.; Grauby, A.

    1980-01-01

    This methodology consists of an essentially dynamic, predictive and follow-up analysis of the impact of discharges on all the environmental compartments, whether natural or not, that play a part in the protection of man and his environment. It applies at two levels, to wit: the choice of site, or the detailed study of the site selected. Two examples of its application are developed, namely: at the choice-of-site level in the case of marine sites, and at the detailed-study level in the case of a riverside site [fr]

  10. Alternative pricing methodologies

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    With the increased interest in competitive market forces and growing recognition of the deficiencies in current practices, FERC and others are exploring alternatives to embedded cost pricing. A number of these alternatives are discussed in this chapter. Marketplace pricing, discussed briefly here, is the subject of the next chapter. Obviously, the pricing formula may combine several of these methodologies. One utility of which the authors are aware is seeking a price equal to the sum of embedded costs, opportunity costs, line losses, value of service, FERC's percentage adder formula and a contract service charge

  11. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses

  12. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses

  13. RHIC Data Correlation Methodology

    International Nuclear Information System (INIS)

    Michnoff, R.; D'Ottavio, T.; Hoff, L.; MacKay, W.; Satogata, T.

    1999-01-01

    A requirement for RHIC data plotting software and physics analysis is the correlation of data from all accelerator data gathering systems. Data correlation provides the capability for a user to request a plot of multiple data channels vs. time, and to make meaningful time-correlated data comparisons. The task of data correlation for RHIC requires careful consideration because data acquisition triggers are generated from various asynchronous sources including events from the RHIC Event Link, events from the two Beam Sync Links, and other unrelated clocks. In order to correlate data from asynchronous acquisition systems a common time reference is required. The RHIC data correlation methodology will allow all RHIC data to be converted to a common wall clock time, while still preserving native acquisition trigger information. A data correlation task force team, composed of the authors of this paper, has been formed to develop data correlation design details and provide guidelines for software developers. The overall data correlation methodology will be presented in this paper
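    The essence of the scheme, converting each channel's native timestamps to a common wall clock and interpolating to shared times, can be sketched briefly (timestamps and values below are invented placeholders):

      # Hedged sketch: correlate two asynchronously sampled channels on a
      # common wall-clock time base. Data are invented placeholders.
      import numpy as np

      t_a = np.array([0.00, 0.45, 1.02, 1.48, 2.01])   # channel A wall-clock times (s)
      v_a = np.array([1.0, 1.2, 1.1, 1.4, 1.3])
      t_b = np.array([0.10, 0.60, 1.10, 1.60, 2.10])   # channel B wall-clock times (s)
      v_b = np.array([10.0, 10.4, 10.2, 10.9, 10.6])

      t_common = np.linspace(0.1, 2.0, 20)             # shared time base
      a_i = np.interp(t_common, t_a, v_a)
      b_i = np.interp(t_common, t_b, v_b)
      print(np.corrcoef(a_i, b_i)[0, 1])               # time-correlated comparison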

  14. Intelligent systems engineering methodology

    Science.gov (United States)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  15. SMART performance analysis methodology

    International Nuclear Information System (INIS)

    Lim, H. S.; Kim, H. C.; Lee, D. J.

    2001-04-01

    To ensure the required and desired operation over the plant lifetime, the performance analysis for the SMART NSSS design is done by means of specified analysis methodologies for the performance-related design basis events (PRDBE). A PRDBE is an occurrence (event) that shall be accommodated in the design of the plant and whose consequence would be no more severe than the normal service effects of the plant equipment. The performance analysis methodology, which systematizes the methods and procedures to analyze the PRDBEs, is as follows. Based on the operation modes suited to the characteristics of the SMART NSSS, the corresponding PRDBEs and the allowable ranges of process parameters for these events are deduced. With the control logic developed for each operation mode, the system thermal-hydraulics are analyzed for the chosen PRDBEs using the system analysis code. In particular, because the system characteristics of SMART differ from those of existing commercial nuclear power plants, the operation modes, PRDBEs, control logic, and analysis code should be consistent with the SMART design. This report presents the categories of the PRDBEs chosen based on each operation mode, the transitions among them, and the acceptance criteria for each PRDBE. It also includes the analysis methods and procedures for each PRDBE and the concept of the control logic for each operation mode. Therefore this report, in which the overall details for SMART performance analysis are specified based on the current SMART design, can be used as a guide for the detailed performance analysis.

  16. Relative Hazard Calculation Methodology

    International Nuclear Information System (INIS)

    DL Strenge; MK White; RD Stenner; WB Andrews

    1999-01-01

    The methodology presented in this document was developed to provide a means of calculating the RH (relative hazard) ratios used in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation).
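
    As a rough illustration of the ratio form described above, the sketch below forms an RH value as a product of hazard-relevant factors for an activity divided by the same product for a fixed baseline. The factor names and values are hypothetical placeholders; the actual RH equation collects a different, carefully qualified set of factors.

        def relative_hazard(activity_factors, baseline_factors):
            """Ratio of the product of hazard factors for an activity to that of a
            fixed baseline; the factor names here are hypothetical placeholders."""
            num = den = 1.0
            for key in baseline_factors:
                num *= activity_factors[key]
                den *= baseline_factors[key]
            return num / den

        baseline = {"source_term": 1.0e3, "release_fraction": 0.10, "exposure": 1.0}
        activity = {"source_term": 2.0e2, "release_fraction": 0.05, "exposure": 1.0}

        # RH < 1 indicates a lower estimated hazard than the baseline activity.
        print(relative_hazard(activity, baseline))   # 0.1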

  17. Insights into PRA methodologies

    International Nuclear Information System (INIS)

    Gallagher, D.; Lofgren, E.; Atefi, B.; Liner, R.; Blond, R.; Amico, P.

    1984-08-01

    Probabilistic Risk Assessments (PRAs) for six nuclear power plants were examined to gain insight into how the choice of analytical methods can affect the results of PRAs. The PRA scope considered was limited to internally initiated accident sequences through core melt. For twenty methodological topic areas, a baseline or minimal methodology was specified. The choice of methods for each topic in the six PRAs was characterized in terms of the incremental level of effort above the baseline. A higher level of effort generally reflects a higher level of detail or a higher degree of sophistication in the analytical approach to a particular topic area. The impact on results was measured in terms of how additional effort beyond the baseline level changed the relative importance and ordering of dominant accident sequences compared to what would have been observed had methods corresponding to the baseline level of effort been employed. This measure of impact is a more useful indicator of how methods affect perceptions of plant vulnerabilities than changes in core melt frequency would be. However, the change in core melt frequency was used as a secondary measure of impact for nine topics where availability of information permitted. Results are presented primarily in the form of effort-impact matrices for each of the twenty topic areas. A suggested effort-impact profile for future PRAs is presented.

  18. Scrum methodology in banking environment

    OpenAIRE

    Strihová, Barbora

    2015-01-01

    The bachelor thesis "Scrum methodology in banking environment" is focused on one of the agile methodologies, Scrum, and a description of the methodology as used in a banking environment. Its main goal is to introduce the Scrum methodology and outline a real project placed in a bank and focused on software development through a case study, address the problems of the project, propose solutions to the addressed problems, and identify anomalies of Scrum in software development constrained by the banking environmen...

  19. Experimental Economics: Some Methodological Notes

    OpenAIRE

    Fiore, Annamaria

    2009-01-01

    The aim of this work is to present, in a self-contained paper, some methodological aspects as they are received in the current experimental literature. The purpose has been to make a critical review of some very influential papers dealing with methodological issues. In other words, the idea is to have a single paper where people first approaching experimental economics can find summarised (some of) the most important methodological issues. In particular, the focus is on some methodological prac...

  20. Using Modern Methodologies with Maintenance Software

    Science.gov (United States)

    Streiffert, Barbara A.; Francis, Laurie K.; Smith, Benjamin D.

    2014-01-01

    Jet Propulsion Laboratory uses multi-mission software produced by the Mission Planning and Sequencing (MPS) team to process, simulate, translate, and package the commands that are sent to a spacecraft. MPS works under the auspices of the Multi-Mission Ground Systems and Services (MGSS). This software consists of nineteen applications that are in maintenance. The MPS software is classified as either class B (mission critical) or class C (mission important). Scheduling tasks is difficult because mission needs must be addressed before any other tasks, and those needs often spring up unexpectedly. Keeping track of what everyone is working on is also difficult because each person is working on a different software component. Recently the group adopted the Scrum methodology for planning and scheduling tasks. Scrum is one of the newer methodologies typically used in agile development. In the Scrum development environment, teams pick the tasks to be completed within a sprint based on priority. The team specifies the sprint length, usually a month or less. Scrum is typically used for new development of one application. In the Scrum methodology there is a scrum master, a facilitator who tries to make sure that everything moves smoothly; a product owner, who represents the user(s) of the software; and the team. MPS is not the traditional environment for the Scrum methodology: it has many software applications in maintenance, team members who work on disparate applications, many users, and work that is interruptible based on mission needs, issues, and requirements. In order to use Scrum, the methodology needed adaptation to MPS. Scrum was chosen because it is adaptable. This paper is about the development of the process for using Scrum, a new development methodology, with a team that works on disparate, interruptible tasks on multiple software applications.

  1. Methodology of site protection studies

    International Nuclear Information System (INIS)

    Farges, L.

    1980-01-01

    Preliminary studies preceding the building of a nuclear facility aim at assessing the choice of a site and establishing operating and control procedures. These studies are of two types: studies on the impact of the environment on the nuclear facility to be constructed, and studies on the impact of the nuclear facility on the environment. A methodology providing a framework for studies of the second type is presented. These studies are undertaken to choose suitable sites for nuclear facilities. After a preliminary selection of a site based on a first estimate, a detailed site study is undertaken. The procedure consists of five successive phases, namely: (1) an inquiry assessing the initial state of the site; (2) an initial synthesis of accumulated information for assessing the health and safety consequences of releases; (3) laboratory and field studies simulating the movement of waste products for a quantitative assessment of effects; (4) a final synthesis for laying down the release limits and radiological control methods; and (5) conclusions based on comparing the data of the final synthesis to the limits prescribed by regulations. These five phases are outlined. The role of periodic reassessments after the facility has been in operation for some time is explained. (M.G.B.)

  2. Geotechnical site assessment methodology

    International Nuclear Information System (INIS)

    Tunbridge, L.W.; Richards, L.R.

    1985-09-01

    A final report summarizing the research conducted on geotechnical site assessment methodology at the Carwynnen test mine in Cornwall. The geological setting of the test site in the Cornubian granite batholith is described. The effect of structure imposed by discontinuities on the engineering behaviour of rock masses is discussed and the scanline survey method of obtaining data on discontinuities in the rock mass is described. The applicability of some methods of statistical analysis for discontinuity data is reviewed. The requirement for remote geophysical methods of characterizing the mass is discussed and experiments using seismic and ultrasonic velocity measurements are reported. Methods of determining the in-situ stresses are described and the final results of a programme of in-situ stress measurements using the overcoring and hydrofracture methods are reported. (author)

  3. UNCOMMON SENSORY METHODOLOGIES

    Directory of Open Access Journals (Sweden)

    Vladimír Vietoris

    2015-02-01

    Sensory science is a young but rapidly developing field of the food industry. Currently, great emphasis is placed on developing rapid techniques of data collection; the difference between consumers and trained panels is becoming blurred, and the role of sensory methodologists is to prepare evaluation methods by which a lay panel (consumers) can achieve results identical to those of a trained panel. There are several conventional methods of sensory evaluation of food (ISO standards), but more and more sensory laboratories are developing methodologies that are less strict in the selection of evaluators, whose mechanisms are easily understandable and whose results are easily interpretable. This paper deals with the mapping of marginal methods used in sensory evaluation of food (new types of profiles, CATA, TDS, napping).

  4. Safety class methodology

    International Nuclear Information System (INIS)

    Donner, E.B.; Low, J.M.; Lux, C.R.

    1992-01-01

    DOE Order 6430.1A, General Design Criteria (GDC), requires that DOE facilities be evaluated with respect to ''safety class items.'' Although the GDC defines safety class items, it does not provide a methodology for selecting them. The methodology described in this paper was developed to assure that Safety Class Items (SCIs) at the Savannah River Site (SRS) are selected in a consistent and technically defensible manner. Safety class items are those in the highest of four categories determined to be of special importance to nuclear safety and merit appropriately higher-quality design, fabrication, and industrial test standards and codes. The identification of safety class items is approached using a cascading strategy that begins at the ''safety function'' level (i.e., a cooling function, ventilation function, etc.) and proceeds down to the system, component, or structure level. Thus, the items that are required to support a safety function are SCIs. The basic steps in this procedure apply to the determination of SCIs both for new project activities and for operating facilities. The GDC lists six characteristics of SCIs to be considered as a starting point for safety item classification. They are as follows: 1. Those items whose failure would produce exposure consequences that would exceed the guidelines in Section 1300-1.4, ''Guidance on Limiting Exposure of the Public,'' at the site boundary or nearest point of public access. 2. Those items required to maintain operating parameters within the safety limits specified in the Operational Safety Requirements during normal operations and anticipated operational occurrences. 3. Those items required for nuclear criticality safety. 4. Those items required to monitor the release of radioactive material to the environment during and after a Design Basis Accident. 5. Those items required to achieve and maintain the facility in a safe shutdown condition. 6. Those items that control Safety Class Items listed above.

  5. Methodology for the accelerated simulation of the deterioration caused by atmospheric corrosion in electronic equipment; Metodologia para la simulacion acelerada del deterioro que por corrosion atmosferica se presenta en equipo electronico

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz Prado, A.; Schouwenaars, R.; Cerrud Sanchez, S.M. [Facultad de Ingenieria, UNAM, Mexico, D.F. (Mexico)

    2002-12-01

    The corrosion resistance of systems and electronic parts designed to work in atmospheric conditions has been tested for decades; among these methods were the Cyclic Humidity Test, field tests, and Salt Spray (Fog) Testing, the latter being one of the most popular. However, the salt spray test and most of the other existing methods bear little relationship to the real conditions of service. For this reason, it is necessary to develop appropriate methods and equipment for the accelerated simulation of real atmospheric corrosion phenomena. This article seeks to demonstrate the need to develop a test, and the necessary equipment, to reproduce in accelerated form the damage that atmospheric corrosion causes in electrical and electronic systems and equipment.

  6. Situating methodology within qualitative research.

    Science.gov (United States)

    Kramer-Kile, Marnie L

    2012-01-01

    Qualitative nurse researchers are required to make deliberate and sometimes complex methodological decisions about their work. Methodology in qualitative research is a comprehensive approach in which theory (ideas) and method (doing) are brought into close alignment. It can be difficult, at times, to understand the concept of methodology. The purpose of this research column is to: (1) define qualitative methodology; (2) illuminate the relationship between epistemology, ontology and methodology; (3) explicate the connection between theory and method in qualitative research design; and (4) highlight relevant examples of methodological decisions made within cardiovascular nursing research. Although there is no "one set way" to do qualitative research, all qualitative researchers should account for the choices they make throughout the research process and articulate their methodological decision-making along the way.

  7. A methodology to enlarge narrow stability windows

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Ewerton M.P.; Pastor, Jorge A.S.C.; Fontoura, Sergio A.B. [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Civil. Grupo de Tecnologia e Engenharia de Petroleo

    2004-07-01

    The stability window in wellbore design is defined by the difference between the fracture pressure and the collapse pressure. Deepwater environments typically present narrow stability windows, because the rocks have low strength due to the under-compaction process. Horizontal wells are also often drilled to obtain better development of reservoirs placed in thin layers of sandstone. In this scenario, several challenges are faced when drilling in deep water. The traditional approach for predicting instabilities is to determine collapses and fractures at the borehole wall. However, the initiation of rupture does not mean that the borehole fails to perform its function as a wellbore. Thus, a methodology by which the stability window may be enlarged is desirable. This paper presents a practical analytical methodology that consists of allowing wellbore pressures smaller than the conventional collapse pressure, i.e., the pressure based upon failure at the borehole wall. This means that a collapse region (shear failure) will develop around the borehole wall. This collapse region is pre-defined, and a failure criterion is used to estimate its size. The methodology is implemented in user-friendly software that can perform analyses of stress, pore pressure, formation failure, mud weight, and mud salinity design for drilling in shale formations. Simulations of wellbore drilling in a narrow-stability-window environment are performed to demonstrate the improvements of using the methodology. (author)
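
    A minimal numerical sketch of the idea: the window is the gap between the fracture and collapse bounds, and tolerating a pre-defined breakout region lowers the usable collapse bound. The pressures below are illustrative placeholders; in the methodology itself the relaxed bound comes from a failure-criterion calculation, not from a fixed offset.

        def stability_window(collapse_p, fracture_p):
            """Mud-weight window bounded below by collapse and above by fracture."""
            return max(0.0, fracture_p - collapse_p)

        # Equivalent mud weights in ppg; values are invented for illustration.
        fracture = 13.2
        collapse_at_wall = 12.6     # conventional: rupture initiation at the wall
        collapse_relaxed = 11.9     # pre-defined collapse region tolerated, size
                                    # estimated from a failure criterion

        print(stability_window(collapse_at_wall, fracture))   # narrow:   0.6 ppg
        print(stability_window(collapse_relaxed, fracture))   # enlarged: 1.3 ppg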

  8. Study of methodology diversification in diagnostics

    International Nuclear Information System (INIS)

    Suda, Kazunori; Yonekawa, Tsuyoshi; Yoshikawa, Shinji; Hasegawa, Makoto

    1999-03-01

    There are several research activities aimed at enhancing the safety and reliability of nuclear power plant operation and maintenance. We are developing a concept of an autonomous operation system in which the role of operators is replaced with artificial intelligence. The purpose of the study described in this report is to develop an operator support system for abnormal plant situations. Conventionally, diagnostic modules based on individual methodologies, such as expert systems, have been developed and verified. In this report, methodology diversification is considered in order to integrate diagnostic modules whose performance has been confirmed using information processing techniques. Technical issues to be considered in diagnostic methodology diversification are: (1) reliability of input data; (2) diversification of knowledge models, algorithms, and reasoning schemes; (3) mutual complement and robustness. The diagnostic modules utilizing the different approaches defined by the diversification strategy were evaluated using a fast breeder plant simulator. As a result, we confirmed that no single diagnostic module can meet the accuracy criteria for the entire set of anomaly events. In contrast, we confirmed that every abnormality could be precisely diagnosed by combining the modules. In other words, the legitimacy of the approach selected by the diversification strategy was shown, and methodology diversification attained clear efficiency for abnormality diagnosis. It has also been confirmed that the diversified diagnostic system implemented in this study is able to maintain its accuracy even in cases where the scale of the encountered abnormality differs from the reference cases embedded in the knowledge base. (author)

  9. Optimization of the economic investment in fire protection (PCI) using performance-based design methodology in the analysis of fire spread with FDS (Fire Dynamics Simulator) in fire areas of nuclear power plants; Optimizacion de la inversion economica en PCI mediante la metodologia de diseno prestacional en el analisis de la propagacion de incendios con FDS (Fire Dynamics Simulator) en areas de fuego de centrales nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Salellas, J.

    2015-07-01

    Fire simulation analysis provides knowledge of the evolution and spread of fire in areas of interest within an NPP, such as the control room, cable room, and multi-zone compartments, among others. Fires are a main concern in the safety analysis of NPPs. IDOM has the capability to carry out fire simulations, taking into account smoke control, fire spread, toxicity levels, ventilation, and all related physical phenomena. As a result, appropriate fire protection measures can be assessed for each scenario. CFD tools applied to fire simulations can determine with higher resolution all damage caused during the fire. Furthermore, such tools can reduce costs owing to the lower impact of design modifications. (Author)

  10. Laser Tracker Utilization Methodology in Measuring Truth Trajectories for INS Testing on 6 Degree of Freedom Table at the Marshall Space Flight Center's Contact Dynamics Simulation Laboratory with Lessons Learned

    Science.gov (United States)

    Leggett, Jared O.; Bryant, Thomas C.; Cowen, Charles T.; Clifton, Billy W.

    2018-01-01

    When performing Inertial Navigation System (INS) testing at the Marshall Space Flight Center's (MSFC) Contact Dynamics Simulation Laboratory (CDSL) early in 2017, a Leica Geosystems AT901 Laser Tracker system (LLT) measured the twist & sway trajectories as generated by the 6 Degree Of Freedom (6DOF) Table in the CDSL. These LLT measured trajectories were used in the INS software model validation effort. Several challenges were identified and overcome during the preparation for the INS testing, as well as numerous lessons learned. These challenges included determining the position and attitude of the LLT with respect to an INS-shared coordinate frame using surveyed monument locations in the CDSL and the accompanying mathematical transformation, accurately measuring the spatial relationship between the INS and a 6DOF tracking probe due to lack of INS visibility from the LLT location, obtaining the data from the LLT during a test, determining how to process the results for comparison with INS data in time and frequency domains, and using a sensitivity analysis of the results to verify the quality of the results. While many of these challenges were identified and overcome before or during testing, a significant lesson on test set-up was not learned until later in the data analysis process. It was found that a combination of trajectory-dependent gimbal locking and environmental noise introduced non-negligible noise in the angular measurements of the LLT that spanned the evaluated frequency spectrum. The lessons learned in this experiment may be useful for others performing INS testing in similar testing facilities.
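
    The first challenge above — expressing the tracker's position and attitude in an INS-shared frame from surveyed monument locations — is a classic matched-point-set problem. The paper does not spell out its transformation; one standard solution is the Kabsch (SVD) best-fit rigid transform, sketched here with invented monument coordinates.

        import numpy as np

        def rigid_transform(P, Q):
            """Best-fit rotation R and translation t with Q ~ R @ P + t (Kabsch
            algorithm), from matched 3xN point sets expressed in the two frames."""
            p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (P - p0) @ (Q - q0).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T            # D guards against a reflection solution
            return R, q0 - R @ p0

        rng = np.random.default_rng(0)
        P = rng.uniform(-5.0, 5.0, size=(3, 6))        # monuments, tracker frame
        R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        t_true = np.array([[10.0], [2.0], [0.5]])
        Q = R_true @ P + t_true + rng.normal(0.0, 1e-4, size=(3, 6))  # survey noise

        R, t = rigid_transform(P, Q)
        print(np.allclose(R, R_true, atol=1e-3), t.ravel())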

  11. Methodological issues regarding power of classical test theory (CTT) and item response theory (IRT)-based approaches for the comparison of patient-reported outcomes in two groups of patients - a simulation study

    Directory of Open Access Journals (Sweden)

    Boyer François

    2010-03-01

    Background: Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT is the most appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods: Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent for item parameters (from good to poor precision), or unknown and therefore estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results: When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion: Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula.
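
    The CTT arm of such a power simulation is easy to reproduce in outline: generate dichotomous item responses from a Rasch-type model, sum them into observed scores, and estimate the power of a two-sample t-test by Monte Carlo. The settings below are assumptions, and the IRT-based analysis arm (re-estimating person parameters in each replication) is omitted for brevity.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def simulate_scores(n, item_difficulty, shift):
            """Per-person sum scores from Rasch-type dichotomous responses."""
            theta = rng.normal(shift, 1.0, size=n)[:, None]
            p = 1.0 / (1.0 + np.exp(-(theta - item_difficulty[None, :])))
            return (rng.uniform(size=p.shape) < p).sum(axis=1)

        def ctt_power(n_per_group=100, n_items=10, effect=0.5,
                      n_sim=1000, alpha=0.05):
            diffs = rng.normal(0.0, 1.0, size=n_items)   # fixed item difficulties
            hits = 0
            for _ in range(n_sim):
                g0 = simulate_scores(n_per_group, diffs, 0.0)
                g1 = simulate_scores(n_per_group, diffs, effect)
                hits += stats.ttest_ind(g0, g1).pvalue < alpha
            return hits / n_sim

        print(ctt_power())   # compare against the classical sample size formula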

  12. Methodology for the hybrid solution of systems of differential equations

    International Nuclear Information System (INIS)

    Larrinaga, E.F.; Lopez, M.A.

    1993-01-01

    This work shows a general methodology of solution to systems of differential equations in hybrid computers. Taking into account this methodology, a mathematical model was elaborated. It offers wide possibilities of recording and handling the results on the basis of using the hybrid system IBM-VIDAC 1224 which the ISCTN has. It also presents the results gained when simulating a simple model of a nuclear reactor, which was used in the validation of the results of the computational model

  13. Simulation of sea water intrusion in coastal aquifers

    Indian Academy of Sciences (India)

    A density-dependent miscible flow and transport modelling approach for simulation of seawater intrusion in coastal aquifers. A nonlinear optimization-based simulation methodology was used in this study. Various steady state simulations are performed for a ...

  14. Vertical Footbridge Vibrations: The Response Spectrum Methodology

    DEFF Research Database (Denmark)

    Georgakis, Christos; Ingólfsson, Einar Thór

    2008-01-01

    In this paper, a novel, accurate and readily codifiable methodology for the prediction of vertical footbridge response is presented. The methodology is based on the well-established response spectrum approach used in the majority of the world’s current seismic design codes of practice. The concept of a universally applicable reference response spectrum is introduced, from which the pedestrian-induced vertical response of any footbridge may be determined, based on a defined “event” and the probability of occurrence of that event. A series of Monte Carlo simulations are undertaken for the development… period is introduced and its implication on the calculation of footbridge response is discussed. Finally, a brief comparison is made between the theoretically predicted pedestrian-induced vertical response of an 80 m long RC footbridge (as an example) and actual field measurements. The comparison shows…

  15. Scenario aggregation and analysis via Mean-Shift Methodology

    International Nuclear Information System (INIS)

    Mandelli, D.; Yilmaz, A.; Metzroth, K.; Aldemir, T.; Denning, R.

    2010-01-01

    A new generation of dynamic methodologies is being developed for nuclear reactor probabilistic risk assessment (PRA) which explicitly account for the time element in modeling the probabilistic system evolution and use numerical simulation tools to account for possible dependencies between failure events. The dynamic event tree (DET) approach is one of these methodologies. One challenge with dynamic PRA methodologies is the large amount of data they produce, which may be difficult to analyze without appropriate software tools. The concept of 'data mining' is well known in the computer science community, and several methodologies have been developed to extract useful information from a dataset with a large number of records. Using the dataset generated by the DET analysis of the reactor vessel auxiliary cooling system (RVACS) of an ABR-1000 for an aircraft crash recovery scenario, and the mean-shift methodology for data mining, it is shown how clusters of transients with common characteristics can be identified and classified. (authors)
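
    The clustering step itself is compact with off-the-shelf tools. The sketch below runs scikit-learn's MeanShift on hypothetical per-scenario features (e.g., time to peak temperature and peak temperature); the feature definitions and values are invented, not taken from the RVACS dataset.

        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        rng = np.random.default_rng(1)

        # Three synthetic groups of transients with common characteristics.
        features = np.vstack([
            rng.normal([ 600.0, 900.0], [30.0, 15.0], size=(50, 2)),
            rng.normal([1200.0, 750.0], [40.0, 20.0], size=(50, 2)),
            rng.normal([ 300.0, 980.0], [20.0, 10.0], size=(50, 2)),
        ])

        bandwidth = estimate_bandwidth(features, quantile=0.2)
        labels = MeanShift(bandwidth=bandwidth).fit_predict(features)
        print("clusters found:", len(np.unique(labels)))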

  16. Methodological Problems of Nanotechnoscience

    Science.gov (United States)

    Gorokhov, V. G.

    Recently, we have reported on the definition of nanotechnology as a new type of NanoTechnoScience and on nanotheory as a cluster of different natural and engineering theories. Nanotechnology is not only a new type of scientific-engineering discipline; it also evolves in a “nonclassical” way. Nanoontology, or the nano scientific world view, serves as the methodological orientation for the choice of theoretical means and methods toward a solution of scientific and engineering problems. This allows one to change from one explanation and scientific world view to another without difficulty. Thus, nanotechnology is both a field of scientific knowledge and a sphere of engineering activity; in other words, NanoTechnoScience is similar to Systems Engineering, understood as the analysis and design of large-scale, complex man/machine systems, but for micro- and nanosystems. Nano systems engineering, like macro systems engineering, includes not only systems design but also complex research. The design orientation influences the change of priorities in complex research and the relation to knowledge: not only “knowledge about something”, but also knowledge as a means of activity, since from the beginning the control and restructuring of matter at the nanoscale is a necessary element of nanoscience.

  17. Methodological themes and variations

    International Nuclear Information System (INIS)

    Tetlock, P.E.

    1989-01-01

    This paper reports on the tangible progress that has been made in clarifying the underlying processes that affect both the likelihood of war in general and of nuclear war in particular. It also illustrates how difficult it is to make progress in this area. Nonetheless, what has been achieved should not be minimized. We have learned a good deal on both the theoretical and the methodological fronts and, perhaps, most important, we have learned a good deal about the limits of our knowledge. Knowledge of our ignorance---especially in a policy domain where confident, even glib, causal assertions are so common---can be a major contribution in itself. The most important service the behavioral and social sciences can currently provide to the policy making community may well be to make thoughtful skepticism respectable: to sensitize those who make key decisions to the uncertainty surrounding our understanding of international conflict and to the numerous qualifications that now need to be attached to simple causal theories concerning the origins of war

  18. Engineering radioecology: Methodological considerations

    International Nuclear Information System (INIS)

    Nechaev, A.F.; Projaev, V.V.; Sobolev, I.A.; Dmitriev, S.A.

    1995-01-01

    The term ''radioecology'' has been widely recognized in scientific and technical societies. At the same time, this scientific school (radioecology) does not have a precise, generally acknowledged structure, unified methodical basis, fixed subjects of investigation, etc. In other words, radioecology is a vast, important but rather amorphous conglomerate of various ideas, amalgamated mostly by their involvement in biospheric effects of ionizing radiation and by some conceptual stereotypes. This paradox was acceptable up to a certain time. However, with the termination of the Cold War and because of remarkable political changes in the world, it has become possible to recast the problem of environmental restoration from the scientific sphere into particularly practical terms. Already the first steps clearly showed the imperfection of existing technologies, managerial and regulatory schemes; the lack of qualified specialists, relevant methods and techniques; uncertainties in the methodology of decision-making, etc. Thus, building up (or maybe structuring) a special scientific and technological basis, which the authors call ''engineering radioecology'', seems to be an important task. In this paper they endeavored to substantiate this thesis and to suggest some preliminary ideas concerning the subject matter of engineering radioecology.

  19. A 3D Hybrid Integration Methodology for Terabit Transceivers

    DEFF Research Database (Denmark)

    Dong, Yunfeng; Johansen, Tom Keinicke; Zhurbenko, Vitaliy

    2015-01-01

    This paper presents a three-dimensional (3D) hybrid integration methodology for terabit transceivers. The simulation methodology for multi-conductor structures is explained. The effect of ground vias on the RF circuitry and the preferred interposer substrate material for large-bandwidth 3D hybrid integration are described. An equivalent circuit model of the via-throughs connecting the RF circuitry to the modulator is proposed and its lumped element parameters are extracted. Wire bonding transitions between the driving and RF circuitry were designed and simulated. An optimized 3D interposer design…

  20. Simplified methodology for analysis of Angra-1 containment

    International Nuclear Information System (INIS)

    Neves Conti, T. das; Souza, A.L. de; Sabundjian, G.

    1988-01-01

    A simplified methodology of analysis was developed to simulate a large-break loss-of-coolant accident in the Angra 1 Nuclear Power Station. Using the RELAP5/MOD1, RELAP4/MOD5, and CONTEMPT-LT codes, the time variation of pressure and temperature in the containment was analysed. The data obtained were compared with the Angra 1 Final Safety Analysis Report, and with those calculated by a detailed model. The results obtained by this new methodology, such as the small computational simulation time, were satisfactory for a preliminary evaluation of the Angra 1 global parameters. (author) [pt]

  1. Research in Modeling and Simulation for Airspace Systems Innovation

    Science.gov (United States)

    Ballin, Mark G.; Kimmel, William M.; Welch, Sharon S.

    2007-01-01

    This viewgraph presentation provides an overview of some of the applied research and simulation methodologies at the NASA Langley Research Center that support aerospace systems innovation. Risk assessment methodologies, complex systems design and analysis methodologies, and aerospace operations simulations are described. Potential areas for future research and collaboration using interactive and distributed simulations are also proposed.

  2. Bolometer Simulation Using SPICE

    Science.gov (United States)

    Jones, Hollis H.; Aslam, Shahid; Lakew, Brook

    2004-01-01

    A general model is presented that assimilates the thermal and electrical properties of the bolometer; this block model demonstrates the Electro-Thermal Feedback (ETF) effect on the bolometer's performance. This methodology is used to construct a SPICE model that, by way of analogy, combines the thermal and electrical phenomena into one simulation session. The resulting circuit diagram is presented and discussed.
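
    The electro-thermal balance that such a SPICE model encodes can also be written directly as an ordinary differential equation: Joule heating through a temperature-dependent resistance competes with conduction to the heat sink, and the feedback enters through R(T). The sketch below uses a linearized R(T) and illustrative parameter values that are not taken from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        C = 1e-9                          # heat capacity [J/K] (illustrative)
        G = 1e-7                          # thermal conductance to the bath [W/K]
        T_bath = 0.1                      # bath temperature [K]
        R0, T0, alpha = 1.0, 0.12, 50.0   # R(T) = R0*(1 + alpha*(T - T0))
        V = 2e-4                          # constant bias voltage [V]

        def resistance(T):
            return R0 * (1.0 + alpha * (T - T0))

        def dTdt(t, y):
            T = y[0]
            joule = V**2 / resistance(T)               # electrical power deposited
            return [(joule - G * (T - T_bath)) / C]    # thermal balance with ETF

        sol = solve_ivp(dTdt, (0.0, 0.1), [T0])
        print("steady-state T =", sol.y[0, -1], "K")   # heating settles via ETF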

  3. Dosimetric methodology of the ICRP

    International Nuclear Information System (INIS)

    Eckerman, K.F.

    1994-01-01

    Establishment of guidance for the protection of workers and members of the public from radiation exposures necessitates estimation of the radiation dose to tissues of the body at risk. The dosimetric methodology formulated by the International Commission on Radiological Protection (ICRP) is intended to be responsive to this need. While developed for radiation protection, elements of the methodology are often applied in addressing other radiation issues; e.g., risk assessment. This chapter provides an overview of the methodology, discusses its recent extension to age-dependent considerations, and illustrates specific aspects of the methodology through a number of numerical examples

  4. Transmission pricing: paradigms and methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Shirmohammadi, Dariush [Pacific Gas and Electric Co., San Francisco, CA (United States); Vieira Filho, Xisto; Gorenstin, Boris [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil); Pereira, Mario V.P. [Power System Research, Rio de Janeiro, RJ (Brazil)

    1994-12-31

    In this paper we describe the principles of several paradigms and methodologies for pricing transmission services. The paper outlines some of the main characteristics of these paradigms and methodologies, such as where each may be used to best effect. Due to their popularity, power-flow-based MW-mile and short-run marginal cost pricing methodologies are covered in some detail. We conclude the paper with examples of the application of these two pricing methodologies for pricing transmission services in Brazil. (author) 25 refs., 2 tabs.
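
    A toy version of the power-flow-based MW-mile allocation mentioned above: each line's annual cost is shared among transactions in proportion to the flow each transaction causes on that line. The two-line network, flows, and costs are invented for illustration.

        def mw_mile_charge(txn_flows, length_km, cost_per_km, total_flows):
            """Allocate line costs to one transaction by its MW-mile share."""
            charge = 0.0
            for line, flow in txn_flows.items():
                line_cost = cost_per_km[line] * length_km[line]
                if total_flows[line] != 0.0:
                    charge += line_cost * abs(flow) / abs(total_flows[line])
            return charge

        length = {"L1": 100.0, "L2": 60.0}    # line lengths [km]
        cost = {"L1": 1000.0, "L2": 1200.0}   # annual cost [$ per km]
        txn = {"L1": 20.0, "L2": 5.0}         # flows caused by the transaction [MW]
        total = {"L1": 80.0, "L2": 50.0}      # total flows from a load-flow study

        print(mw_mile_charge(txn, length, cost, total))   # 32200.0 $/year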

  5. Analytical methodology for nuclear safeguards

    International Nuclear Information System (INIS)

    Ramakumar, K.L.

    2011-01-01

    This paper attempts to briefly describe the analytical methodologies available and also highlight some of the challenges, expectations from nuclear material accounting and control (NUMAC) point of view

  6. Country report: a methodology

    International Nuclear Information System (INIS)

    Colin, A.

    2013-01-01

    This paper describes a methodology which could be applicable to establishing a country report. In the framework of nuclear non-proliferation appraisal and IAEA safeguards implementation, it is important to be able to assess the potential existence of undeclared nuclear materials and activities, as well as undeclared facilities, in the country under review. In our view a country report should aim at providing detailed information on nuclear-related activities for each country examined taken 'as a whole', such as nuclear development, scientific and technical capabilities, etc. In order to study a specific country, we need to know whether there is already an operating civil nuclear programme or not. In the first case, we have to check carefully whether nuclear material could be diverted, whether declared facilities are misused, or whether undeclared facilities are operated and undeclared activities conducted with the aim of manufacturing a nuclear weapon. In the second case, we should pay attention to the development of a civil nuclear project. A country report is based on a wide span of information (most of the time coming from open sources, but sometimes also from confidential or private ones). Therefore, it is important to carefully check the nature and the credibility (reliability) of these sources through cross-check examination. Eventually, it is necessary to merge information from different sources and apply an expertise filter. We have at our disposal many powerful tools to help us assess, understand, and evaluate the situation (cartography, imagery, bibliometry, etc.). These tools allow us to draw the best conclusions possible. The paper is followed by the slides of the presentation. (author)

  7. Microbiological Methodology in Astrobiology

    Science.gov (United States)

    Abyzov, S. S.; Gerasimenko, L. M.; Hoover, R. B.; Mitskevich, I. N.; Mulyukin, A. L.; Poglazova, M. N.; Rozanov, A. Y.

    2005-01-01

    Searching for life in astromaterials to be delivered from future missions to extraterrestrial bodies is undoubtedly related to studies of the properties and signatures of living microbial cells and microfossils on Earth. The Antarctic glacier and Earth permafrost habitats, where living microbial cells have preserved viability for millennia by entering the anabiotic state, are often regarded as model terrestrial analogs of Martian polar subsurface layers. For future findings of viable microorganisms in samples from extraterrestrial objects, it is important to use a combined methodology that includes classical microbiological methods, plating onto nutrient media, direct epifluorescence and electron microscopy examinations, detection of the elemental composition of cells, radiolabeling techniques, and PCR and FISH methods. It is of great importance to ensure the authenticity of microorganisms (if any) in studied samples and to standardize the protocols used, so as to minimize the risk of external contamination. Although convincing evidence of extraterrestrial microbial life may well come from the discovery of living cells in astromaterials, biomorphs and microfossils must also be regarded as targets in the search for evidence of life, bearing in mind a scenario in which living microorganisms were not preserved and underwent mineralization. Under laboratory conditions, the processes that accompany the fossilization of cyanobacteria were reconstructed, and artificially produced cyanobacterial stromatolites resemble, in their morphological properties, those found in natural Earth habitats. Regarding the vital importance of distinguishing between biogenic and abiogenic signatures and between living and fossil microorganisms in analyzed samples, it is worthwhile to use some previously developed approaches based on electron microscopy examinations and analysis of the elemental composition of biomorphs in situ and comparison with the analogous data obtained for laboratory microbial cultures and…

  8. Kaupapa Maori Methodology: Trusting the Methodology through Thick and Thin

    Science.gov (United States)

    Hiha, Anne Aroha

    2016-01-01

    Kaupapa Maori is thoroughly theorised in academia in Aotearoa, and those wishing to use it as their research methodology can find support through the writing of a number of Maori academics. What is not so well articulated is the experiential voice of those who have used Kaupapa Maori as a research methodology. My identity as a Maori woman…

  9. Airfoil selection methodology for Small Wind Turbines

    DEFF Research Database (Denmark)

    Salgado Fuentes, Valentin; Troya, Cesar; Moreno, Gustavo

    2016-01-01

    In wind turbine technology, aerodynamic performance is fundamental to increasing efficiency. Nowadays there are several databases with airfoils designed and simulated for different applications; that is why it is necessary to select those suitable for a specific application. This work presents a new methodology for airfoil selection used in the feasibility analysis and optimization of small wind turbines with low cut-in speed. In the first stage, airfoil data are tested in the XFOIL software to check compatibility with the simulator; then an arithmetic-mean criterion is used recursively to discard underperforming airfoils, and the best airfoil data are exported to Matlab for deeper analysis. In the second part, data points are interpolated using splines to calculate glide ratio and stability across multiple angles of attack, and those presenting greater steadiness are retained. As a result, 3 airfoils…
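
    The second-stage screening can be sketched with hypothetical polar data: spline-interpolate lift and drag, form the glide ratio across angles of attack, and score steadiness as the spread of the resulting curve. Both the data and the steadiness measure below are assumptions, not the paper's actual criteria.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hypothetical polar data for one airfoil: angle of attack [deg], cl, cd.
        alpha = np.array([-4.0, 0.0, 4.0, 8.0, 12.0])
        cl = np.array([0.10, 0.50, 0.90, 1.20, 1.30])
        cd = np.array([0.012, 0.009, 0.011, 0.018, 0.035])

        a_fine = np.linspace(alpha.min(), alpha.max(), 200)
        glide = CubicSpline(alpha, cl)(a_fine) / CubicSpline(alpha, cd)(a_fine)

        # "Steadiness" read here as a flat glide-ratio curve: high mean and low
        # spread across the evaluated angles of attack.
        print("max L/D:", round(glide.max(), 1),
              "spread (std):", round(glide.std(), 1))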

  10. A methodology for social experimentation

    DEFF Research Database (Denmark)

    Ravn, Ib

    A methodology is outlined whereby one may improve the performance of a social system to the satisfaction of its stakeholders, that is, facilitate desirable social and organizational transformations.

  11. Workshops as a Research Methodology

    Science.gov (United States)

    Ørngreen, Rikke; Levinsen, Karin

    2017-01-01

    This paper contributes to knowledge on workshops as a research methodology, and specifically on how such workshops pertain to e-learning. A literature review illustrated that workshops are discussed according to three different perspectives: workshops as a means, workshops as practice, and workshops as a research methodology. Focusing primarily on…

  12. Methodological Pluralism and Narrative Inquiry

    Science.gov (United States)

    Michie, Michael

    2013-01-01

    This paper considers how the integral theory model of Nancy Davis and Laurie Callihan might be enacted using a different qualitative methodology, in this case the narrative methodology. The focus of narrative research is shown to be on "what meaning is being made" rather than "what is happening here" (quadrant 2 rather than…

  13. Building ASIPs: the Mescal methodology

    CERN Document Server

    Gries, Matthias

    2006-01-01

    A number of system designers use ASIPs rather than ASICs to implement their system solutions. This book gives a comprehensive methodology for the design of these application-specific instruction processors (ASIPs). It includes demonstrations of applications of the methodologies using the Tipi research framework.

  14. A methodology for software documentation

    OpenAIRE

    Torres Júnior, Roberto Dias; Ahlert, Hubert

    2000-01-01

    With the growing complexity of window-based software and the use of object orientation, the development of software is getting more complex than ever. Based on that, this article presents a methodology for software documentation and analyzes our experience and how this methodology can aid software maintenance.

  15. Terminological Ambiguity: Game and Simulation

    Science.gov (United States)

    Klabbers, Jan H. G.

    2009-01-01

    Since its introduction in academia and professional practice during the 1950s, gaming has been linked to simulation. Although both fields have a few important characteristics in common, they are distinct in their form and underlying theories of knowledge and methodology. Nevertheless, in the literature, hybrid terms such as "gaming/simulation" and…

  16. AEGIS methodology and a perspective from AEGIS methodology demonstrations

    International Nuclear Information System (INIS)

    Dove, F.H.

    1981-03-01

    Objectives of AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) are to develop the capabilities needed to assess the post-closure safety of waste isolation in geologic formation; demonstrate these capabilities on reference sites; apply the assessment methodology to assist the NWTS program in site selection, waste package and repository design; and perform repository site analyses for the licensing needs of NWTS. This paper summarizes the AEGIS methodology, the experience gained from methodology demonstrations, and provides an overview in the following areas: estimation of the response of a repository to perturbing geologic and hydrologic events; estimation of the transport of radionuclides from a repository to man; and assessment of uncertainties

  17. Warship Combat System Selection Methodology Based on Discrete Event Simulation

    Science.gov (United States)

    2010-09-01

    Area of operations (Figure 1): average depth in fathoms; ambient temperature range 24–34 °C (75.2–93.2 °F). The following four OPSITs will be…

  18. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    …technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited… system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and…

  19. Response Surface Methodology's Steepest Ascent and Step Size Revisited

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; den Hertog, D.; Angun, M.E.

    2002-01-01

    Response Surface Methodology (RSM) searches for the input combination maximizing the output of a real system or its simulation. RSM is a heuristic that locally fits first-order polynomials and estimates the corresponding steepest ascent (SA) paths. However, SA is scale-dependent, and its step size is…
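
    A minimal sketch of the local fit and the resulting path: fit the first-order polynomial to a small factorial design and move proportionally to the estimated gradient. The design, responses, and step normalization are assumptions; the scale-dependence the abstract refers to shows up in the arbitrary choice of that normalization.

        import numpy as np

        # Coded 2^2 factorial with a center point, and noisy observed outputs.
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
        y = np.array([52.1, 58.3, 55.0, 61.2, 56.8])

        # Fit y = b0 + b1*x1 + b2*x2 by least squares.
        A = np.column_stack([np.ones(len(X)), X])
        b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

        # Steepest ascent moves along (b1, b2); here the step is normalized so the
        # largest coordinate changes by one coded unit -- one choice among many.
        grad = np.array([b1, b2])
        step = grad / np.abs(grad).max()
        for k in range(1, 4):
            print("step", k, ":", k * step)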

  20. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.
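
    A uniform-grid sketch of the core idea (the adaptive-octree machinery of the talk is well beyond a few lines): the interface is carried implicitly as the zero level set of a signed-distance function, here advected by a constant velocity with first-order upwind differences. The periodic boundaries via np.roll are a simplification.

        import numpy as np

        n, L = 128, 1.0
        x = np.linspace(0.0, L, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.2   # signed distance, circle

        u, v = 0.5, 0.25                      # constant advection velocity (> 0)
        dx = x[1] - x[0]
        dt = 0.5 * dx / (abs(u) + abs(v))     # CFL-limited time step
        for _ in range(50):
            dpx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward (upwind) in x
            dpy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward (upwind) in y
            phi -= dt * (u * dpx + v * dpy)

        # The tracked interface is the zero level set {phi = 0}.
        print("cells near the interface:", int((np.abs(phi) < dx).sum()))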

  1. A new methodology for sizing hybrid photovoltaic-wind energy system using simulation and optimization tools = Uma nova metodologia para dimensionamento de sistemas híbridos de energia (solar-eólica utilizando ferramentas de simulação e otimização

    Directory of Open Access Journals (Sweden)

    Samuel Nelson Melegari de Souza

    2005-01-01

    This paper presents a new methodology for sizing an autonomous photovoltaic-wind hybrid energy system with battery storage, using simulation and optimization tools. The developed model is useful for energizing remote rural areas and produces a system with minimum cost and high reliability, based on the concept of Loss of Power Supply Probability (LPSP) applied over consecutive hours. Several scenarios are calculated and compared, using different numbers of consecutive hours and different LPSP values. As a result, a complete sizing of the system and a long-term cost evaluation are presented.
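
    The reliability side of such a sizing loop can be sketched directly. Below, LPSP is reduced to the plain fraction of deficit hours under an idealized battery (the paper's consecutive-hours refinement is omitted), and the hourly profiles and candidate capacities are invented.

        import numpy as np

        def lpsp(generation, load, batt_kwh):
            """Loss of Power Supply Probability: fraction of hours with unmet
            load, given an ideal battery of capacity batt_kwh."""
            soc, deficit_hours = batt_kwh, 0
            for g, d in zip(generation, load):
                soc = min(batt_kwh, soc + g - d)     # hourly energy balance
                if soc < 0.0:
                    deficit_hours += 1
                    soc = 0.0
            return deficit_hours / len(load)

        rng = np.random.default_rng(2)
        hours = 8760
        pv = np.clip(rng.normal(0.8, 0.5, hours), 0.0, None)    # kWh/h, invented
        wind = np.clip(rng.normal(0.6, 0.6, hours), 0.0, None)
        load = np.full(hours, 1.2)

        # Pick the smallest battery meeting a target LPSP, e.g. 1%.
        for batt in (2.0, 5.0, 10.0, 20.0, 40.0):
            print(batt, "kWh ->", round(lpsp(pv + wind, load, batt), 4))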

  2. An Innovative Fuzzy-Logic-Based Methodology for Trend Identification

    International Nuclear Information System (INIS)

    Wang Xin; Tsoukalas, Lefteri H.; Wei, Thomas Y.C.; Reifman, Jaques

    2001-01-01

    A new fuzzy-logic-based methodology for on-line signal trend identification is introduced. The methodology may be used for detecting the onset of nuclear power plant (NPP) transients at the earliest possible time and could be of great benefit to diagnostic, maintenance, and performance-monitoring programs. Although signal trend identification is complicated by the presence of noise, fuzzy methods can help capture important features of on-line signals, integrate the information included in these features, and classify incoming NPP signals into increasing, decreasing, and steady-state trend categories. A computer program named PROTREN was developed and tested to verify this methodology using NPP and simulation data. The results indicate that the new fuzzy-logic-based methodology is capable of detecting transients accurately; it identifies trends reliably and does not misinterpret a steady-state signal as a transient one.
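
    One simple way to realize such a classifier — not PROTREN's actual rule base, which the abstract does not detail — is to fuzzify the slope of a least-squares line fitted over a short window of the noisy signal:

        import numpy as np

        def trend_memberships(signal, window=20, slope_scale=0.01):
            """Fuzzy memberships (decreasing, steady, increasing) from the slope
            of a regression line over the last `window` samples; slopes at or
            beyond slope_scale per sample count as fully in-/decreasing."""
            t = np.arange(window)
            slope = np.polyfit(t, signal[-window:], 1)[0]
            inc = float(np.clip(slope / slope_scale, 0.0, 1.0))
            dec = float(np.clip(-slope / slope_scale, 0.0, 1.0))
            return {"decreasing": dec,
                    "steady": 1.0 - max(inc, dec),
                    "increasing": inc}

        rng = np.random.default_rng(3)
        noisy_ramp = 0.01 * np.arange(100) + rng.normal(0.0, 0.01, 100)
        print(trend_memberships(noisy_ramp))   # increasing membership near 1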

  3. Methodological practicalities in analytical generalization

    DEFF Research Database (Denmark)

    Halkier, Bente

    2011-01-01

    In this article, I argue that the existing literature on qualitative methodologies tends to discuss analytical generalization at a relatively abstract and general theoretical level. It is, however, not particularly straightforward to “translate” such abstract epistemological principles into more… operative methodological strategies for producing analytical generalizations in research practices. Thus, the aim of the article is to contribute to the discussions among qualitatively working researchers about generalizing by way of exemplifying some of the methodological practicalities in analytical… generalization. Theoretically, the argumentation in the article is based on practice theory. The main part of the article describes three different examples of ways of generalizing on the basis of the same qualitative data material. There is a particular focus on describing the methodological strategies…

  4. Nanotoxicology materials, methodologies, and assessments

    CERN Document Server

    Durán, Nelson; Alves, Oswaldo L; Zucolotto, Valtencir

    2014-01-01

    This book begins with a detailed introduction to engineered nanostructures, followed by a section on methodologies used in research on cytotoxicity and genotoxicity, and concluding with evidence for the cyto- and genotoxicity of specific nanoparticles.

  5. Reflective Methodology: The Beginning Teacher

    Science.gov (United States)

    Templeton, Ronald K.; Siefert, Thomas E.

    1970-01-01

    Offers a variety of specific techniques which will help the beginning teacher to implement reflective methodology and create an inquiry-centered classroom atmosphere, at the same time meeting the many more pressing demands of first-year teaching. (JES)

  6. Methodologies used in Project Management

    OpenAIRE

    UNGUREANU, Adrian; UNGUREANU, Anca

    2014-01-01

    Undoubtedly, a properly defined and strictly followed methodology for project management provides a firm guarantee that the work will be done on time, within budget, and according to specifications. A project management methodology, in simple terms, is a "must-have" to avoid failure and reduce risks, because it is one of the critical success factors, along with the basic skills of the management team. This is the simple way to guide the team through the design and execution phases, processes, and tasks throughout…

  7. Methodology for ranking restoration options

    DEFF Research Database (Denmark)

    Jensen, Per Hedemann

    1999-01-01

    The project includes analyses of existing remediation methodologies and contaminated sites, and is structured in the following steps: characterisation of relevant contaminated sites; identification and characterisation of relevant restoration techniques, as a function of contamination and site characteristics; assessment of the radiological impact; development and application of a selection methodology for restoration options; and formulation of generic conclusions and development of a manual. The project is intended to apply to situations in which sites with nuclear installations have been contaminated…

  8. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. Introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeli...

  9. Evaluation of methodologies for remunerating wind power's reliability in Colombia

    International Nuclear Information System (INIS)

    Botero B, Sergio; Isaza C, Felipe; Valencia, Adriana

    2010-01-01

    Colombia strives to have enough firm capacity available to meet unexpected power shortages and peak demand; this is clear from mechanisms currently in place that provide monetary incentives (of nearly US$14/MWh) to power producers that can guarantee electricity provision during scarcity periods. Yet wind power in Colombia cannot currently guarantee firm power, because an accepted methodology for calculating its potential firm capacity does not exist. In this paper we argue that developing such a methodology would give potential investors an incentive to enter into this low-carbon technology. The paper analyzes three methodologies currently used in energy markets around the world to calculate firm wind energy capacity: PJM, NYISO, and Spain. These methodologies were selected for their ability to accommodate Colombian energy regulations. The objective of this work is to determine which of these methodologies makes most sense from an investor's perspective, to ultimately shed light on developing a methodology to be used in Colombia. To this end, the authors developed a wind model using Monte-Carlo simulation, based on known wind-behaviour statistics of a region of Colombia with adequate wind potential. The simulation returns random generation data representing the resource's inherent variability, simulating the historical record required to evaluate the mentioned methodologies and thus providing the technology's theoretical generation data. The document concludes that the evaluated methodologies are easy to implement and do not require historical data (important for Colombia, where there is almost no historical wind power data). It is also found that the Spanish methodology provides a higher capacity value (and therefore a higher return to investors). The financial assessment results show that it is crucial that these types of incentives exist to make viable...
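
    The capacity-value calculation sketched in the abstract lends itself to a compact illustration. The following is a minimal sketch of that idea, assuming a Weibull wind-speed model, a generic turbine power curve, and a PJM/NYISO-style average over a peak window; every constant and the peak-window choice are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not from the paper): Weibull wind-speed model
# and a generic turbine power curve with cut-in/rated/cut-out speeds.
K_SHAPE, C_SCALE = 2.0, 8.5          # Weibull shape / scale [m/s]
V_IN, V_RATED, V_OUT = 3.0, 12.0, 25.0
P_RATED = 2.0                        # MW per turbine

def power_output(v):
    """Piecewise turbine power curve: cubic ramp between cut-in and rated."""
    p = np.zeros_like(v)
    ramp = (v >= V_IN) & (v < V_RATED)
    p[ramp] = P_RATED * (v[ramp]**3 - V_IN**3) / (V_RATED**3 - V_IN**3)
    p[(v >= V_RATED) & (v <= V_OUT)] = P_RATED
    return p

# Simulate hourly wind speeds for many synthetic years to stand in for the
# historical generation record the paper derives by Monte-Carlo simulation.
hours = 20 * 8760
v = rng.weibull(K_SHAPE, hours) * C_SCALE
gen = power_output(v)

# Simple capacity-value proxy analogous to PJM/NYISO peak-window rules:
# mean output over the peak-load hours (here a random 5% sample stands in
# for the peak window) as a fraction of rated power.
peak_window = rng.choice(hours, size=hours // 20, replace=False)
capacity_value = gen[peak_window].mean() / P_RATED
print(f"Capacity factor: {gen.mean() / P_RATED:.3f}")
print(f"Capacity value : {capacity_value:.3f} (fraction of rated power)")
```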

  10. A methodology for the parametric modelling of the flow coefficients and flow rate in hydraulic valves

    International Nuclear Information System (INIS)

    Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas

    2014-01-01

    Highlights: • We develop a methodology for the parametric modelling of flow in hydraulic valves. • We characterize the flow coefficients with a generic function with two parameters. • The parameters are derived from CFD simulations of the generic geometry. • We apply the methodology to two cases from the automotive brake industry. • We validate by comparing with CFD results varying the original dimensions. - Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve model facilitates implementation in dynamic simulation models of complex hydraulic systems.
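
    As an illustration of the kind of two-parameter square-root law described above, here is a minimal sketch; the functional form Cd(Re) = Cd_inf * sqrt(Re / (Re + Re_t)), the fixed-point solution, and all fluid properties are assumptions for demonstration, not the paper's fitted model.

```python
import math

# Assumed two-parameter square-root law for the discharge coefficient
# (illustrative form; the paper's exact expression may differ):
#   Cd(Re) = Cd_inf * sqrt(Re / (Re + Re_t))
# Cd_inf: asymptotic (turbulent) coefficient; Re_t: transition parameter.
# Both would be fitted to CFD results for the critical restriction.

def discharge_coefficient(re, cd_inf=0.72, re_t=150.0):
    return cd_inf * math.sqrt(re / (re + re_t))

def flow_rate(dp, area, rho=850.0, nu=3.2e-5, d_h=1.5e-3):
    """Orifice-style flow rate with a Reynolds-dependent Cd.

    dp: pressure drop [Pa]; area: restriction area [m^2];
    rho: density [kg/m^3]; nu: kinematic viscosity [m^2/s];
    d_h: hydraulic diameter [m]. Solved by fixed-point iteration
    because Re depends on the unknown flow velocity.
    """
    v = math.sqrt(2.0 * dp / rho)          # initial guess: ideal velocity
    for _ in range(50):
        re = v * d_h / nu
        v_new = discharge_coefficient(re) * math.sqrt(2.0 * dp / rho)
        if abs(v_new - v) < 1e-9:
            break
        v = v_new
    return v * area                         # Q = Cd * A * sqrt(2*dp/rho)

print(f"Q = {flow_rate(dp=5e5, area=2e-6) * 6e4:.2f} L/min")
```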

  11. Managerial Methodology in Public Institutions

    Directory of Open Access Journals (Sweden)

    Ion VERBONCU

    2010-10-01

    One of the most important ways of making public institutions more efficient is applying managerial methodology: promoting management tools and modern, sophisticated methodologies, and designing/redesigning and maintaining the management process and its components. Their implementation bears the imprint of the structural and functional particularities of public institutions, decentralized and devolved, and, of course, of the expertise of these organizations' managers. Managerial methodology is addressed through three important instruments: diagnosis, management by objectives, and the scoreboard. Its presence in the performance management process should be mandatory, given its favorable influence on managerial and economic performance and on the rigor of managers' approach to performance.

  12. Blanket safety by GEMSAFE methodology

    International Nuclear Information System (INIS)

    Sawada, Tetsuo; Saito, Masaki

    2001-01-01

    The General Methodology of Safety Analysis and Evaluation for Fusion Energy Systems (GEMSAFE) has been applied to a number of fusion system designs, such as the R-tokamak, the Fusion Experimental Reactor (FER), and the International Thermonuclear Experimental Reactor (ITER) in both its Conceptual Design Activities (CDA) and Engineering Design Activities (EDA) stages. Although the major objective of GEMSAFE is the reasoned selection of design basis events (DBEs), it is also useful for elucidating related safety functions and the requirements needed to ensure safety. In this paper, we apply the methodology to fusion systems with future tritium breeding blankets and clarify which points of the system are of concern from the standpoint of ensuring safety. In this context, we have obtained five DBEs that are related to the blanket system. We have also clarified the safety functions required to prevent accident propagation initiated by those blanket-specific DBEs. The outline of the methodology is also reviewed. (author)

  13. The NLC Software Requirements Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Shoaee, Hamid

    2002-08-20

    We describe the software requirements and development methodology developed for the NLC control system. Given the longevity of that project, and the likely geographical distribution of the collaborating engineers, the planned requirements management process is somewhat more formal than the norm in high energy physics projects. The short term goals of the requirements process are to accurately estimate costs, to decompose the problem, and to determine likely technologies. The long term goal is to enable a smooth transition from high level functional requirements to specific subsystem and component requirements for individual programmers, and to support distributed development. The methodology covers both ends of that life cycle. It covers both the analytical and documentary tools for software engineering, and project management support. This paper introduces the methodology, which is fully described in [1].

  14. Progressive design methodology for complex engineering systems based on multiobjective genetic algorithms and linguistic decision making

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    This work focuses on a design methodology that aids in the design and development of complex engineering systems. The methodology consists of simulation, optimization and decision making. Within this work a framework is presented in which modelling, multi-objective optimization and multi...

  15. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however... The methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements.

  16. Vibration-Driven Microrobot Positioning Methodologies for Nonholonomic Constraint Compensation

    Directory of Open Access Journals (Sweden)

    Kostas Vlachos

    2015-03-01

    This paper presents the formulation and practical implementation of positioning methodologies that compensate for the nonholonomic constraints of a mobile microrobot that is driven by two vibrating direct-current (DC) micromotors. The open-loop and closed-loop approaches described here add the capability for net sidewise displacements of the microrobotic platform. A displacement is achieved by the execution of a number of repeating steps that depend on the desired displacement, the speed of the micromotors, and the elapsed time. Simulation and experimental results verified the performance of the proposed methodologies.

  17. Methodology, theoretical framework and scholarly significance: An ...

    African Journals Online (AJOL)

    Methodology, theoretical framework and scholarly significance: An overview ... Keywords: Legal Research, Methodology, Theory, Pedagogy, Legal Training, Scholarship ...

  18. Methodology for reliability based condition assessment

    International Nuclear Information System (INIS)

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period
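
    The time-dependent reliability calculation described above can be illustrated with plain (non-adaptive) Monte Carlo; the distributions, the linear degradation law, and the Poisson load-event model below are illustrative assumptions, not the report's models.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20_000           # Monte Carlo samples (the report uses adaptive sampling)
T = 40.0             # anticipated service period [years]
LOAD_RATE = 0.5      # mean extreme-load events per year (Poisson arrivals)

# Illustrative random initial strength and linear degradation rate.
s0 = rng.lognormal(mean=np.log(100.0), sigma=0.10, size=N)
deg = rng.uniform(0.2, 0.6, size=N)              # strength loss per year

failed = np.zeros(N, dtype=bool)
n_events = rng.poisson(LOAD_RATE * T, size=N)
for i in range(N):
    t = np.sort(rng.uniform(0.0, T, n_events[i]))            # event times
    loads = rng.gumbel(loc=55.0, scale=8.0, size=n_events[i])
    strength = s0[i] - deg[i] * t                 # degraded strength at t
    failed[i] = np.any(loads > strength)

print(f"P(failure within {T:.0f} yr) ≈ {failed.mean():.4f}")
```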

  19. Urban metabolism: A review of research methodologies

    International Nuclear Information System (INIS)

    Zhang, Yan

    2013-01-01

    Urban metabolism analysis has become an important tool for the study of urban ecosystems. The problems of large metabolic throughput, low metabolic efficiency, and disordered metabolic processes are a major cause of unhealthy urban systems. In this paper, I summarize the international research on urban metabolism, and describe the progress that has been made in terms of research methodologies. I also review the methods used in accounting for and evaluating material and energy flows in urban metabolic processes, simulation of these flows using a network model, and practical applications of these methods. Based on this review of the literature, I propose directions for future research, and particularly the need to study the urban carbon metabolism because of the modern context of global climate change. Moreover, I recommend more research on the optimal regulation of urban metabolic systems. Highlights: •Urban metabolic processes can be analyzed by regarding cities as superorganisms. •Urban metabolism methods include accounting, assessment, modeling, and regulation. •Research methodologies have improved greatly since this field began in 1965. •Future research should focus on carbon metabolism and optimal regulation. -- The author reviews research progress in the field of urban metabolism, and based on her literature review, proposes directions for future research

  20. Methodological Guidelines for Advertising Research

    DEFF Research Database (Denmark)

    Rossiter, John R.; Percy, Larry

    2017-01-01

    In this article, highly experienced advertising academics and advertising research consultants John R. Rossiter and Larry Percy present and discuss what they believe to be the seven most important methodological guidelines that need to be implemented to improve the practice of advertising research. Their focus is on methodology, defined as first choosing a suitable theoretical framework to guide the research study and then identifying the advertising responses that need to be studied. Measurement of those responses is covered elsewhere in this special issue in the article by Bergkvist and Langner. Most...

  1. Acoustic emission methodology and application

    CERN Document Server

    Nazarchuk, Zinoviy; Serhiyenko, Oleh

    2017-01-01

    This monograph analyses in detail the physical aspects of elastic-wave radiation during the deformation or fracture of materials. It presents the methodological bases for the practical use of acoustic emission devices, and describes the results of theoretical and experimental research on evaluating the crack growth resistance of materials and selecting the useful AE signals. The efficiency of this methodology is shown through the diagnostics of industrial objects of various purposes. The authors present results of experimental research obtained with the help of new methods and facilities.

  2. An LWR design decision Methodology

    International Nuclear Information System (INIS)

    Leahy, T.J.; Rees, D.C.; Young, J.

    1982-01-01

    While all parties involved in nuclear plant regulation endeavor to make decisions which optimize the considerations of plant safety and financial impacts, these decisions are generally made without the benefit of a systematic and rigorous approach to the questions confronting the decision makers. A Design Decision Methodology has been developed which provides such a systematic approach. By employing this methodology, which makes use of currently accepted probabilistic risk assessment techniques and cost estimation, informed decisions may be made against a background of comparisons between the relative levels of safety and costs associated with various design alternatives

  3. Qualitative methodology in developmental psychology

    DEFF Research Database (Denmark)

    Demuth, Carolin; Mey, Günter

    2015-01-01

    Qualitative methodology presently is gaining increasing recognition in developmental psychology. Although the founders of developmental psychology to a large extent already used qualitative procedures, the field was long dominated by a (post)positivistic quantitative paradigm. The increasing recognition ... in qualitative research offers a promising avenue to advance the field in this direction.

  4. Qualitative Methodology in Unfamiliar Cultures

    DEFF Research Database (Denmark)

    Svensson, Christian Franklin

    2014-01-01

    This case study discusses qualitative fieldwork in Malaysia. The trends in higher education led to investigating how and why young Indians and Chinese in Malaysia are using the university to pursue a life strategy. Given the importance of field context in designing and analysing research based on a qualitative methodology, conscious reflection on research design and objectivity is important when doing fieldwork. This case study discusses such reflections. Emphasis throughout is given to applied qualitative methodology and its contributions to the social sciences, in particular having to do...

  5. Observational methodology in sport sciences

    Directory of Open Access Journals (Sweden)

    M. Teresa Anguera

    2013-11-01

    This paper reviews the conceptual framework, the key literature and the methods (observation tools, such as category systems and field formats, coding software, etc.) that should be followed when conducting research from the perspective of observational methodology. The observational designs used by the authors' research group over the last twenty years are discussed, and the procedures for analysing data and assessing their quality are described. Mention is also made of the latest methodological trends in this field, such as the use of mixed methods.

  6. Reflections on Design Methodology Research

    DEFF Research Database (Denmark)

    2011-01-01

    We shall reflect on the results of Design Methodology research and their impact on design practice. In the past 50 years the number of researchers in the field has expanded enormously – as has the number of publications. During the same period design practice has changed, and the products designed and produced are also now far more complex and distributed, putting designers under ever increasing pressure. We shall address the question: are the results of Design Methodology research appropriate, and are they delivering the expected results in design practice? In our attempt to answer this question we...

  7. Introducing an ILS methodology into research reactors

    International Nuclear Information System (INIS)

    Lorenzo, N. de; Borsani, R.C.

    2003-01-01

    ... subsequent design stages. Staff should be allocated to operate the system after assessments based on workload and safety issues. A methodology for a Plant Task Analysis (used as the input to a Manning Analysis Assessment) to define a cost-effective organisational structure is presented. Training is a key issue in supporting a well-designed plant. This paper describes general training aspects to be considered in the ILS approach. General considerations for tailoring a Training Plan are presented, as well as for developing training tools such as plant simulators and 3D electronic models. Manuals, procedures and instructions (relevant to system operation and maintenance) are generally developed by designers or operators focusing on technical characteristics rather than on the documentation framework and training needs. Methodology and general recommendations regarding document structure and scope to achieve world-class plant documents are also presented. Plant maintenance should be consistent with in-house capabilities regarding the appropriate level of repair of each plant item. A Reliability, Availability, Maintainability and Supportability assessment methodology is presented in order to focus maintenance activities on relevant issues. Spare parts management is a critical issue and hence is also included in this logistical approach. References regarding optimisation of these and related issues are included. All the mentioned factors are optimally integrated from the beginning of the process application in order to achieve the major outcomes with the available resources. (author)

  8. Methodologic frontiers in environmental epidemiology.

    OpenAIRE

    Rothman, K J

    1993-01-01

    Environmental epidemiology comprises the epidemiologic study of those environmental factors that are outside the immediate control of the individual. Exposures of interest to environmental epidemiologists include air pollution, water pollution, and occupational exposure to physical and chemical agents, as well as psychosocial elements of environmental concern. The main methodologic problem in environmental epidemiology is exposure assessment, a problem that extends through all of epidemiologic research...

  9. IMSF: Infinite Methodology Set Framework

    Science.gov (United States)

    Ota, Martin; Jelínek, Ivan

    Software development is usually an integration task in enterprise environments: few software applications now work autonomously. It is usually a collaboration of heterogeneous and unstable teams. One serious problem is lack of resources, a popular result being outsourcing and 'body shopping', and indirectly team and team-member fluctuation. Outsourced sub-deliveries easily become black boxes with no clear development method used, which has a negative impact on supportability. Such environments then often face the problems of quality assurance and enterprise know-how management. The methodology used is one of the key factors. Each methodology was created as a generalization of a number of solved projects, and each methodology is thus more or less connected with a set of task types. When the task type is not suitable, problems arise that usually result in an undocumented ad-hoc solution. This was the motivation behind formalizing a simple process for collaborative software engineering. The Infinite Methodology Set Framework (IMSF) defines the ICT business process of adaptive use of methods for classified types of tasks. The article introduces IMSF and briefly comments on its meta-model.

  10. Unattended Monitoring System Design Methodology

    International Nuclear Information System (INIS)

    Drayer, D.D.; DeLand, S.M.; Harmon, C.D.; Matter, J.C.; Martinez, R.L.; Smith, J.D.

    1999-01-01

    A methodology for designing Unattended Monitoring Systems starting at a systems level has been developed at Sandia National Laboratories. This proven methodology provides a template that describes the process for selecting and applying appropriate technologies to meet unattended-system requirements, as well as providing a framework for developing both training courses and workshops associated with unattended monitoring. The design and implementation of unattended monitoring systems is generally intended to respond to some form of policy-based requirements resulting from international agreements or domestic regulations. Once the monitoring requirements are established, a review of the associated process and its related facilities enables identification of strategic monitoring locations and development of a conceptual system design. The detailed design effort results in the definition of detection components as well as the supporting communications network and data management scheme. The data analysis then enables a coherent display of the knowledge generated during the monitoring effort. The resultant knowledge is then compared to the original system objectives to ensure that the design adequately addresses the fundamental principles stated in the policy agreements. Implementation of this design methodology will ensure that comprehensive unattended monitoring system designs provide appropriate answers to those critical questions imposed by specific agreements or regulations. This paper describes the main features of the methodology and discusses how it can be applied in real world situations.

  11. Environmental Zoning: Some methodological implications

    NARCIS (Netherlands)

    Ike, Paul; Voogd, Henk

    1991-01-01

    The purpose of this article is to discuss some methodological problems of environmental zoning. The principle of environmental zoning will be elaborated. In addition an overview is given of a number of approaches that have been followed in practice to arrive at an integral judgement. Finally some...

  12. A methodology for string resolution

    International Nuclear Information System (INIS)

    Karonis, N.T.

    1992-11-01

    In this paper we present a methodology, not a tool. We present this methodology with the intent that it be adopted, on a case by case basis, by each of the existing tools in EPICS. In presenting this methodology, we describe each of its two components in detail and conclude with an example depicting how the methodology can be used across a pair of tools. The task of any control system is to provide access to the various components of the machine being controlled, for example, the Advanced Photon Source (APS). By access, we mean the ability to monitor the machine's status (reading) as well as the ability to explicitly change its status (writing). The Experimental Physics and Industrial Control System (EPICS) is a set of tools, designed to act in concert, that allows one to construct a control system. EPICS provides the ability to construct a control system that allows reading and writing access to the machine. It does this through the notion of databases. Each of the components of the APS that is accessed by the control system is represented in EPICS by a set of named database records. Once this abstraction is made, from physical device to named database records, the process of monitoring and changing the state of that device becomes the simple process of reading and writing information from and to its associated named records

  13. Counting stem cells : methodological constraints

    NARCIS (Netherlands)

    Bystrykh, Leonid V.; Verovskaya, Evgenia; Zwart, Erik; Broekhuis, Mathilde; de Haan, Gerald

    The number of stem cells contributing to hematopoiesis has been a matter of debate. Many studies use retroviral tagging of stem cells to measure clonal contribution. Here we argue that methodological factors can impact such clonal analyses. Whereas early studies had low resolution, leading to...

  14. Test reactor risk assessment methodology

    International Nuclear Information System (INIS)

    Jennings, R.H.; Rawlins, J.K.; Stewart, M.E.

    1976-04-01

    A methodology has been developed for the identification of accident-initiating events and the fault modeling of systems, including common-mode identification, as these methods are applied in overall test reactor risk assessment. The methods are exemplified by a determination of the risks due to a loss of primary coolant flow in the Engineering Test Reactor.

  15. METHODOLOGICAL ELEMENTS OF SITUATIONAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Tetyana KOVALCHUK

    2016-07-01

    The article deals with the theoretical and methodological principles of situational analysis, and the necessity of situational analysis in modern conditions is argued. The notion of “situational analysis” is defined. We conclude that situational analysis is a continuous, systematic study whose purpose is to identify the signs of dangerous situations, to evaluate such signs comprehensively as influenced by a system of objective and subjective factors, to search for motivated, targeted actions that eliminate the adverse effects of the situation on the system now and in the future, and to develop the managerial actions needed to bring the system back to normal. A methodological approach to situational analysis is developed, its goal is substantiated, and the expediency of its diagnostic, evaluative and search functions is demonstrated. The basic methodological elements of situational analysis are grounded. Substantiating these principal methodological elements of system analysis will enable the analyst to develop adaptive methods able to take into account the peculiar features of a unique object, that is, a situation that has emerged in a complex system; to diagnose such a situation and subject it to systematic, in-depth analysis; to identify risks and opportunities; and to make timely management decisions as required by a particular period.

  16. Offsetting deficit conceptualisations: methodological ...

    African Journals Online (AJOL)

    uses the concepts of literacy practices and knowledge recontextualisation to ... 1996, 2000) theory of 'knowledge recontextualisation' in the development of curricula .... cognitive, social and cultural abilities needed to fit in and thrive in the HE learning .... this argument, that a methodology and analytic framework that seeks to ...

  17. Methodology for the case studies

    NARCIS (Netherlands)

    Smits, M.J.W.; Woltjer, G.B.

    2017-01-01

    This document is about the methodology and selection of the case studies. It is meant as a guideline for the case studies, and together with the other reports in this work package can be a source of information for policy officers, interest groups and researchers evaluating or performing impact...

  18. Safety at Work : Research Methodology

    NARCIS (Netherlands)

    Beurden, van K. (Karin); Boer, de J. (Johannes); Brinks, G. (Ger); Goering-Zaburnenko, T. (Tatiana); Houten, van Y. (Ynze); Teeuw, W. (Wouter)

    2012-01-01

    In this document, we provide the methodological background for the Safety at Work project. This document combines several project deliverables as defined in the overall project plan: validation techniques and methods (D5.1.1), performance indicators for safety at work (D5.1.2), personal protection...

  19. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for improving it through integration with other approaches deserve further investigation. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). To that end, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for implementing the KANBAN system. An analysis of combining simulation with this methodology follows. The paper concludes with a practical example showing that, by understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created that serves as a basis for a variety of experiments conducted within a short period of time, resulting in production process optimization.
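
    To make the KANBAN-plus-DES idea concrete, here is a minimal sketch of a two-stage kanban loop in SimPy, a common open-source discrete-event simulation library (the paper does not name its simulation tool; card counts and processing times are invented for illustration).

```python
import simpy

N_KANBAN = 3                     # cards circulating between the two stages
PROC_UP, PROC_DOWN = 4.0, 5.0    # processing times [min]

def upstream(env, cards, buffer):
    """Produce only when a free kanban card authorizes production."""
    while True:
        card = yield cards.get()          # wait for authorization
        yield env.timeout(PROC_UP)        # process one part
        yield buffer.put(card)            # part (with card) to the buffer

def downstream(env, cards, buffer, done):
    """Consume parts; returning the card authorizes more production."""
    while True:
        card = yield buffer.get()
        yield env.timeout(PROC_DOWN)
        yield cards.put(card)             # free the kanban card
        done.append(env.now)

env = simpy.Environment()
cards = simpy.Store(env, capacity=N_KANBAN)
buffer = simpy.Store(env, capacity=N_KANBAN)
for _ in range(N_KANBAN):
    cards.put(object())                   # all cards start free
done = []
env.process(upstream(env, cards, buffer))
env.process(downstream(env, cards, buffer, done))
env.run(until=480)                        # one 8-hour shift
print(f"parts completed: {len(done)}, throughput: {len(done)/480:.3f}/min")
```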

  20. Integrated building (and) airflow simulation: an overview

    NARCIS (Netherlands)

    Hensen, J.L.M.

    2002-01-01

    This paper aims to give a broad overview of building airflow simulation, and advocates that the essential ingredients for quality assurance are: domain knowledge; selection of an appropriate level of resolution; calibration and validation; and a correct performance assessment methodology. Directions...

  1. Simulation of investment returns of toll projects.

    Science.gov (United States)

    2013-08-01

    This research develops a methodological framework to illustrate the key stages in simulating the investment returns of toll projects, serving as an example process to help agencies conduct numerical risk analysis by taking certain uncertain...
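
    A sketch of what such a simulation of investment returns might look like, assuming Monte Carlo sampling of traffic, growth and operating-cost uncertainty and a simple discounted-cash-flow model; all distributions and figures are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

N = 100_000
CAPEX = 250e6            # construction cost [$]
TOLL = 2.5               # $ per vehicle
YEARS = 30
DISCOUNT = 0.06

# Uncertain inputs (illustrative distributions):
base_aadt = rng.normal(40_000, 6_000, N)       # opening-year daily traffic
growth = rng.normal(0.02, 0.01, N)             # annual traffic growth
opex_frac = rng.uniform(0.15, 0.25, N)         # O&M as a share of revenue

years = np.arange(1, YEARS + 1)
traffic = base_aadt[:, None] * (1 + growth[:, None]) ** years   # veh/day
revenue = traffic * TOLL * 365 * (1 - opex_frac[:, None])       # net $/yr
npv = (revenue / (1 + DISCOUNT) ** years).sum(axis=1) - CAPEX

print(f"P(NPV < 0) = {(npv < 0).mean():.1%}")
print(f"median NPV = ${np.median(npv)/1e6:,.0f}M, "
      f"5th pct = ${np.percentile(npv, 5)/1e6:,.0f}M")
```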

  2. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  3. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  4. Simulators IV

    International Nuclear Information System (INIS)

    Fairchild, B.T.

    1987-01-01

    These proceedings contain papers on simulators with artificial intelligence, and the human decision making process; visuals for simulators: human factors, training, and psycho-physical impacts; the role of institutional structure on simulation projects; maintenance trainers for economic value and safety; biomedical simulators for understanding nature, for medical benefits, and the physiological effects of simulators; the mathematical models and numerical techniques that drive today's simulators; and the demography of simulators, with census papers identifying the population of real-time simulator training devices for nuclear reactors.

  5. Environmental impact statement analysis: dose methodology

    International Nuclear Information System (INIS)

    Mueller, M.A.; Strenge, D.L.; Napier, B.A.

    1981-01-01

    Standardized sections and methodologies are being developed for use in environmental impact statements (EIS) for activities to be conducted on the Hanford Reservation. Five areas for standardization have been identified: routine operations dose methodologies, accident dose methodology, Hanford Site description, health effects methodology, and socioeconomic environment for Hanford waste management activities

  6. A Critique of Methodological Dualism in Education

    Science.gov (United States)

    Yang, Jeong A.; Yoo, Jae-Bong

    2018-01-01

    This paper aims to critically examine the paradigm of methodological dualism and explore whether methodologies in social science currently are appropriate for educational research. There are two primary methodologies for studying education: quantitative and qualitative methods. This is what we mean by "methodological dualism". Is…

  7. Feminist Methodologies and Engineering Education Research

    Science.gov (United States)

    Beddoes, Kacey

    2013-01-01

    This paper introduces feminist methodologies in the context of engineering education research. It builds upon other recent methodology articles in engineering education journals and presents feminist research methodologies as a concrete engineering education setting in which to explore the connections between epistemology, methodology and theory.…

  8. Gas Turbine Blade Damper Optimization Methodology

    Directory of Open Access Journals (Sweden)

    R. K. Giridhar

    2012-01-01

    Full Text Available The friction damping concept is widely used to reduce resonance stresses in gas turbines. A friction damper has been designed for high pressure turbine stage of a turbojet engine. The objective of this work is to find out effectiveness of the damper while minimizing resonant stresses for sixth and ninth engine order excitation of first flexure mode. This paper presents a methodology that combines three essential phases of friction damping optimization in turbo-machinery. The first phase is to develop an analytical model of blade damper system. The second phase is experimentation and model tuning necessary for response studies while the third phase is evaluating damper performance. The reduced model of blade is developed corresponding to the mode under investigation incorporating the friction damper then the simulations were carried out to arrive at an optimum design point of the damper. Bench tests were carried out in two phases. Phase-1 deals with characterization of the blade dynamically and the phase-2 deals with finding optimal normal load at which the blade resonating response is minimal for a given excitation. The test results are discussed, and are corroborated with simulated results, are in good agreement.

  9. New quickest transient detection methodology. Nuclear engineering applications

    International Nuclear Information System (INIS)

    Wang, Xin; Jevremovic, Tatjana; Tsoukalas, Lefteri H.

    2003-01-01

    A new intelligent systems methodology for quickest online transient detection is presented. Based on information that includes, but is not limited to, statistical features, energy of frequency components and wavelet coefficients, the new methodology decides whether a transient has emerged. A fuzzy system makes the final decision, the membership functions of which are obtained by artificial neural networks and adjusted in an online manner. Comparisons are performed with conventional methods for transient detection using simulated and plant data. The proposed methodology could be useful in power plant operations, diagnostic and maintenance activities. It is also considered as a design tool for quick design modifications in a virtual design environment aimed at next generation University Research and Training Reactors (URTRs). (The virtual design environment is pursued as part of the Big-10 Consortium sponsored by the new Innovations in Nuclear Infrastructure and Education (INIE) program sponsored by the US Department of Energy.) (author)
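
    The decision step of such a methodology can be illustrated with a toy example: statistical features of a signal window feed fixed triangular membership functions, and a fuzzy OR yields a transient score. In the paper the membership functions are obtained by artificial neural networks and adjusted online; the features, membership parameters and test signal below are illustrative assumptions.

```python
import numpy as np

def window_features(x):
    """Features of a signal window: variance and mean shift between halves."""
    half = len(x) // 2
    return x.var(), abs(x[half:].mean() - x[:half].mean())

def tri(u, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    return float(np.clip(min((u - a) / (b - a), (c - u) / (c - b)), 0.0, 1.0))

def transient_score(x):
    var, shift = window_features(x)
    # Fixed, illustrative membership parameters; the paper adapts these
    # online with artificial neural networks.
    m_var = tri(var, 0.5, 2.0, 4.0)
    m_shift = tri(shift, 0.2, 1.0, 2.5)
    return max(m_var, m_shift)        # fuzzy OR over the feature evidence

rng = np.random.default_rng(1)
steady = rng.normal(0.0, 0.5, 256)                 # steady-state noise
step = np.where(np.arange(256) > 200, 3.0, 0.0)    # emerging transient
print(f"steady score    = {transient_score(steady):.2f}")
print(f"transient score = {transient_score(steady + step):.2f}")
```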

  10. Implementing the cost-optimal methodology in EU countries

    DEFF Research Database (Denmark)

    Atanasiu, Bogdan; Kouloumpi, Ilektra; Thomsen, Kirsten Engelund

    This study presents three cost-optimal calculations. The overall aim is to provide a deeper analysis and additional guidance on how to properly implement the cost-optimality methodology in Member States. Without proper guidance and lessons from exemplary case studies using realistic input data (reflecting likely future developments), there is a risk that the cost-optimal methodology may be implemented at sub-optimal levels. This could lead to a misalignment between the defined cost-optimal levels and the long-term goals, leaving a significant energy-saving potential unexploited. Therefore, this study provides more evidence on the implementation of the cost-optimal methodology and highlights the implications of choosing different values for key factors (e.g. discount rates, simulation variants/packages, costs, energy prices) at national levels. The study demonstrates how existing...
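
    The core of the cost-optimal comparison is a global-cost calculation per renovation package, swept over discount rates. A minimal sketch in the spirit of the EPBD methodology follows; packages, prices and the 30-year horizon are illustrative stand-ins for the study's input data.

```python
# Global-cost comparison across renovation packages, in the spirit of the
# EU cost-optimal methodology (EPBD). All numbers are illustrative.
PACKAGES = {
    # name: (investment €/m², annual energy demand kWh/m²·yr)
    "baseline":        (  0.0, 180.0),
    "roof+walls":      (120.0, 110.0),
    "deep renovation": (260.0,  60.0),
}
ENERGY_PRICE = 0.11      # €/kWh, assumed constant in real terms
HORIZON = 30             # years (EPBD uses 30 yr for residential buildings)

def global_cost(investment, demand, discount_rate=0.03):
    """Investment plus discounted running energy costs over the horizon."""
    annuity = sum((1 + discount_rate) ** -t for t in range(1, HORIZON + 1))
    return investment + demand * ENERGY_PRICE * annuity

for rate in (0.01, 0.03, 0.06):     # sensitivity to the discount rate
    best = min(PACKAGES, key=lambda k: global_cost(*PACKAGES[k], rate))
    print(f"discount rate {rate:.0%}: cost-optimal package = {best}")
```

    Sweeping the discount rate shows the point the study makes: the cost-optimal package shifts from deep renovation at low rates toward lighter measures at high rates.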

  11. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
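
    Step (2) of the methodology, parameter screening, is often done with elementary-effects (Morris-style) sampling. The sketch below shows that idea on a stand-in model; it is a simplified one-at-a-time variant, not the PSUADE implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    """Stand-in simulation response; x has 5 inputs scaled to [0, 1]."""
    return 4*x[0] + x[1]**2 + 0.1*x[2] + 5*x[0]*x[3] + 0.01*x[4]

# Morris-style elementary-effects screening: for each of R random base
# points, perturb one input at a time and record the response change.
R, D, DELTA = 50, 5, 0.1
effects = np.zeros((R, D))
for r in range(R):
    base = rng.uniform(0, 1 - DELTA, D)
    f0 = model(base)
    for j in range(D):
        pert = base.copy()
        pert[j] += DELTA
        effects[r, j] = (model(pert) - f0) / DELTA

mu_star = np.abs(effects).mean(axis=0)   # mean |elementary effect|
sigma = effects.std(axis=0)              # spread: interactions/nonlinearity
for j in range(D):
    print(f"x{j}: mu* = {mu_star[j]:5.2f}, sigma = {sigma[j]:5.2f}")
```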

  12. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principles and the practical difficulties of uncertainty propagation methodologies, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It is therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he generalised, treating variability with probability theory and lack of knowledge with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiable and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory.
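
    A minimal sketch of the combined treatment: variability is propagated by Monte Carlo sampling, lack of knowledge by alpha-cuts of a fuzzy (interval-valued) input. The model, distributions and fuzzy number are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(a, b):
    """Stand-in model: a carries variability, b carries lack of knowledge."""
    return a * np.sqrt(b)

# Variability: propagate by Monte Carlo sampling of a known distribution.
a_samples = rng.normal(10.0, 1.0, 100_000)

# Lack of knowledge: b only known to lie in [2.0, 4.0]; represent it as a
# triangular fuzzy number and propagate by alpha-cuts (interval analysis).
B_LO, B_MODE, B_HI = 2.0, 3.0, 4.0
for alpha in (0.0, 0.5, 1.0):
    b_lo = B_LO + alpha * (B_MODE - B_LO)        # alpha-cut lower bound
    b_hi = B_HI - alpha * (B_HI - B_MODE)        # alpha-cut upper bound
    # The response is monotone in b, so the cut maps to [f(b_lo), f(b_hi)].
    q05 = np.percentile(response(a_samples, b_lo), 5)
    q95 = np.percentile(response(a_samples, b_hi), 95)
    print(f"alpha={alpha:.1f}: 5-95% envelope = [{q05:.1f}, {q95:.1f}]")
```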

  13. Multicriteria methodology for decision aiding

    CERN Document Server

    Roy, Bernard

    1996-01-01

    This is the first comprehensive book to present, in English, the multicriteria methodology for decision aiding. In the foreword the distinctive features and main ideas of the European School of MCDA are outlined. The twelve chapters are essentially expository in nature, but scholarly in treatment. Some questions, which are too often neglected in the literature on decision theory, such as how is a decision made, who are the actors, what is a decision aiding model, how to define the set of alternatives, are discussed. Examples are used throughout the book to illustrate the various concepts. Ways to model the consequences of each alternative and building criteria taking into account the inevitable imprecisions, uncertainties and indeterminations are described and illustrated. The three classical operational approaches of MCDA: synthesis in one criterion (including MAUT), synthesis by outranking relations, interactive local judgements, are studied. This methodology tries to be a theoretical or intellectual framework dire...

  14. AGR core safety assessment methodologies

    International Nuclear Information System (INIS)

    McLachlan, N.; Reed, J.; Metcalfe, M.P.

    1996-01-01

    To demonstrate the safety of its gas-cooled graphite-moderated AGR reactors, nuclear safety assessments of the cores are based upon a methodology which demonstrates no component failures, geometrical stability of the structure and material properties bounded by a database. All AGRs continue to meet these three criteria. However, predictions of future core behaviour indicate that the safety case methodology will eventually need to be modified to deal with new phenomena. A new approach to the safety assessment of the cores is currently under development, which can take account of these factors while at the same time providing the same level of protection for the cores. This approach will be based on the functionality of the core: unhindered movement of control rods, continued adequate cooling of the fuel and the core, continued ability to charge and discharge fuel. (author). 5 figs

  15. Methodological update in Medicina Intensiva.

    Science.gov (United States)

    García Garmendia, J L

    2018-04-01

    Research in the critically ill is complicated by the heterogeneity of patients, the difficulty of achieving representative sample sizes, and the number of variables simultaneously involved. However, the quantity and quality of records are high, as is the relevance of the variables used, such as survival. Methodological tools have evolved, offering new perspectives and analysis models that allow relevant information to be extracted from the data that accompany the critically ill patient. The need for training in methodology and the interpretation of results is an important challenge for intensivists who wish to stay current with research developments and clinical advances in Intensive Medicine. Copyright © 2017 Elsevier España, S.L.U. y SEMNIM. All rights reserved.

  16. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source-document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
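
    The combination step of such a methodology can be sketched with two small functions: one fusing independent source reliabilities, one weighting elements by importance. The formulas are common choices assumed for illustration, not necessarily those used in the paper.

```python
from functools import reduce

def combine_independent(reliabilities):
    """P(claim true | several independent sources assert it), modeled as
    the complement of all sources being wrong simultaneously."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

def weighted_confidence(element_probs, weights):
    """Importance-weighted confidence of a composite conclusion."""
    total = sum(weights)
    return sum(p * w for p, w in zip(element_probs, weights)) / total

# Two documents (70% and 60% reliable) assert the same element:
p_elem = combine_independent([0.7, 0.6])          # -> 0.88
# A conclusion built from three elements of unequal importance:
conf = weighted_confidence([p_elem, 0.5, 0.9], weights=[3, 1, 2])
print(f"element: {p_elem:.2f}, overall confidence: {conf:.2f}")
```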

  17. Mo(ve)ment-methodology

    DEFF Research Database (Denmark)

    Mørck, Line Lerche; Christian Celosse-Andersen, Martin

    2018-01-01

    This paper describes the theoretical basis for and development of a moment-movement research methodology, based on an integration of critical psychological practice research and critical ethnographic social practice theory. Central theoretical conceptualizations, such as human agency, life conditions and identity formation, are discussed in relation to criminological theories of gang desistance. The paper illustrates how the mo(ve)ment methodology was applied in a study of comprehensive processes of identity (re)formation and gang exit processes. This study was conducted with Martin, a former... This is a moment which captures Martin's complex and ambiguous feelings of conflictual concerns, frustration, anger, and a new feeling of insecurity in his masculinity, as well as engagement and a sense of deep meaningfulness as he becomes a more reflective academic. All these conflicting feelings also give...

  18. Design methodology of Dutch banknotes

    Science.gov (United States)

    de Heij, Hans A. M.

    2000-04-01

    Since the introduction of a design methodology for Dutch banknotes, the quality of Dutch paper currency has improved in more than one way. The methodology in question provides for (i) a design policy, which helps fix clear objectives; (ii) design management, to ensure smooth cooperation between the graphic designer, printer, papermaker and central bank; and (iii) a program of requirements, a banknote development guideline for all parties involved. This systematic approach enables an objective selection of design proposals, including security features. Furthermore, the project manager obtains regular feedback from the public by conducting market surveys. Each new design of a Netherlands Guilder banknote issued by the Nederlandsche Bank in the past 50 years has been an improvement on its predecessor in terms of value recognition, security and durability.

  19. Workshops as a Research Methodology

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Levinsen, Karin Tweddell

    2017-01-01

    This paper contributes to knowledge on workshops as a research methodology, and specifically on how such workshops pertain to e-learning. A literature review illustrated that workshops are discussed according to three different perspectives: workshops as a means, workshops as practice, and workshops as a research methodology. Focusing primarily on the latter, this paper presents five studies on upper secondary and higher education teachers' professional development and on teaching and learning through video conferencing. Through analysis and discussion of these studies' findings, we argue that workshops provide a platform that can aid researchers in identifying and exploring relevant factors in a given domain by providing means for understanding complex work and knowledge processes that are supported by technology (for example, e-learning). The approach supports identifying factors...

  20. Methodological challenges and lessons learned

    DEFF Research Database (Denmark)

    Nielsen, Poul Erik; Gustafsson, Jessica

    2017-01-01

    Taking as its point of departure three recently conducted empirical studies, the aim of this article is to theoretically and empirically discuss the methodological challenges of studying the interrelations between media and social reality, and to critically reflect on the methodologies used in the studies. By deconstructing the studies, the article draws attention to the fact that different methods are able to grasp different elements of social reality. Moreover, by analysing the power relations at play, the article demonstrates that the interplay between interviewer and interviewee, and how both parties fit into present power structures, greatly influences the narratives that are co-produced during interviews. The article thus concludes that in order to fully understand complex phenomena it is not enough to use a mixture of methods; the makeup of the research team is also imperative, as a diverse team...

  1. Quantitative assessments of distributed systems methodologies and techniques

    CERN Document Server

    Bruneo, Dario

    2015-01-01

    Distributed systems employed in critical infrastructures must fulfill dependability, timeliness, and performance specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. This need gave birth to an abundant literature devoted to formal modeling languages combined with analytical and simulative solution techniques. The aim of the book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed systems.

  2. Implementation impacts of PRL methodology

    International Nuclear Information System (INIS)

    Caudill, J.A.; Krupa, J.F.; Meadors, R.E.; Odum, J.V.; Rodrigues, G.C.

    1993-02-01

    This report responds to a DOE-SR request to evaluate the impacts from implementation of the proposed Plutonium Recovery Limit (PRL) methodology. The PRL Methodology is based on cost minimization for decisions to discard or recover plutonium contained in scrap, residues, and other plutonium bearing materials. Implementation of the PRL methodology may result in decisions to declare as waste certain plutonium bearing materials originally considered to be a recoverable plutonium product. Such decisions may have regulatory impacts, because any material declared to be waste would immediately be subject to provisions of the Resource Conservation and Recovery Act (RCRA). The decision to discard these materials will have impacts on waste storage, treatment, and disposal facilities. Current plans for the de-inventory of plutonium processing facilities have identified certain materials as candidates for discard based upon the economic considerations associated with extending the operating schedules for recovery of the contained plutonium versus potential waste disposal costs. This report evaluates the impacts of discarding those materials as proposed by the F Area De-Inventory Plan and compares the De-Inventory Plan assessments with conclusions from application of the PRL. The impact analysis was performed for those materials proposed as potential candidates for discard by the De-Inventory Plan. The De-Inventory Plan identified 433 items, containing approximately 1% of the current SRS Pu-239 inventory, as not appropriate for recovery as the site moves to complete the mission of F-Canyon and FB-Line. The materials were entered into storage awaiting recovery as product under the Department's previous Economic Discard Limit (EDL) methodology which valued plutonium at its incremental cost of production in reactors. An application of Departmental PRLs to the subject 433 items revealed that approximately 40% of them would continue to be potentially recoverable as product plutonium

  3. ISE System Development Methodology Manual

    Energy Technology Data Exchange (ETDEWEB)

    Hayhoe, G.F.

    1992-02-17

    The Information Systems Engineering (ISE) System Development Methodology Manual (SDM) is a framework of life-cycle management guidelines that provide ISE personnel with direction, organization, consistency, and improved communication when developing and maintaining systems. These guidelines were designed to allow ISE to build and deliver Total Quality products, and to meet the goals and requirements of the US Department of Energy (DOE), Westinghouse Savannah River Company, and Westinghouse Electric Corporation.

  4. Environmental Testing Methodology in Biometrics

    OpenAIRE

    Fernández Saavedra, Belén; Sánchez Reíllo, Raúl; Alonso Moreno, Raúl; Miguel Hurtado, Óscar

    2010-01-01

    Contributed to the 1st International Biometric Performance Conference (IBPC 2010, NIST, Gaithersburg, MD, US, Mar 1-5, 2010). Recently, biometrics has been used in many security systems, and these systems can be located in different environments. As many experts claim and previous works have demonstrated, environmental conditions influence biometric performance. Nevertheless, there is not a specific methodology for testing this influence at the moment...

  5. Soft systems methodology: other voices

    OpenAIRE

    Holwell, Sue

    2000-01-01

    This issue of Systemic Practice and Action Research, celebrating the work of Peter Checkland, in the particular nature and development of soft systems methodology (SSM), would not have happened unless the work was seen by others as being important. No significant contribution to thinking happens without a secondary literature developing. Not surprisingly, many commentaries have accompanied the ongoing development of SSM. Some of these are insightful, some full of errors, and some include both...

  6. Systems engineering agile design methodologies

    CERN Document Server

    Crowder, James A

    2013-01-01

    This book examines the paradigm of the engineering design process. The authors discuss agile systems and engineering design. The book captures the entire design process (function bases), context, and requirements to effect real reuse. It provides a methodology for an engineering design process foundation for modern and future systems design. This book captures design patterns with context for actual Systems Engineering Design Reuse and contains a new paradigm in Design Knowledge Management.

  7. Methodological remarks on contraction theory

    DEFF Research Database (Denmark)

    Jouffroy, Jerome; Slotine, Jean-Jacques E.

    Because contraction analysis stems from a differential and incremental framework, the nature and methodology of contraction-based proofs are significantly different from those of their Lyapunov-based counterparts. This paper specifically studies this issue, and illustrates it by revisiting some classical examples traditionally addressed using Lyapunov theory. Even in these cases, contraction tools can often yield significantly simplified analysis. The examples include adaptive control, robotics, and a proof of convergence of the deterministic Extended Kalman Filter.

  8. Artificial Intelligence Techniques and Methodology

    OpenAIRE

    Carbonell, Jaime G.; Sleeman, Derek

    1982-01-01

    Two closely related aspects of artificial intelligence that have received comparatively little attention in the recent literature are research methodology, and the analysis of computational techniques that span multiple application areas. We believe both issues to be increasingly significant as Artificial Intelligence matures into a science and spins off major application efforts. It is imperative to analyze the repertoire of AI methods with respect to past experience, utility in new domains,...

  9. Developing Foucault's Discourse Analytic Methodology

    Directory of Open Access Journals (Sweden)

    Rainer Diaz-Bone

    2006-01-01

    A methodological position for a FOUCAULTian discourse analysis is presented. A sequence of analytical steps is introduced and an illustrating example is offered. It is emphasized that discourse analysis has to discover the system-level of discursive rules and the deeper structure of the discursive formation; otherwise the analysis will be unfinished. Michel FOUCAULT's work is theoretically grounded in French structuralism and (so-called) post-structuralism. In this paper, post-structuralism is conceived not as a means of overcoming structuralism, but as a way of critically continuing the structural perspective. In this way, discursive structures can be related to discursive practices and the concept of structure can be disclosed (e.g. to inter-discourse or DERRIDA's concept of structurality). The structural methodology is thereby continued and radicalized, but not given up. In this paper, FOUCAULT's theory is combined with the works of Michel PÊCHEUX and (especially for the sociology of knowledge and the sociology of culture) Pierre BOURDIEU. The practice of discourse analysis is theoretically grounded: this practice can be conceived as a reflexive coupling of deconstruction and reconstruction in the material to be analyzed. This methodology can therefore be characterized as a reconstructive qualitative methodology. At the end of the article, forms of discourse analysis are criticized that do not intend to recover the system level of discursive rules and the deeper structure of the discursive formation (i.e. episteme, socio-episteme). These forms are merely commentaries on discourses (not analyses of them); they remain phenomenological and are therefore pre-structuralist. URN: urn:nbn:de:0114-fqs060168

  10. Power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hacking, D [Marconi Simulation (United Kingdom)]

    1992-09-01

    Over many years in the field of simulation, Marconi has developed and adopted a number of procedures and methodologies for the management, design and development of an extensive range of training equipment, from desktop computer-based training systems to generic training devices. The procurement of a training simulator is dictated by the perceived training requirement or problem, and should preferably involve or follow a detailed training needs analysis. Although the cost benefits of training are often difficult to quantify, a simulator is frequently easier to justify if plant familiarisation and training can be provided in advance of on-the-job experience. This is particularly true if the target operators have little hands-on experience of similar plant, either in terms of processes or the operator interface. (author).

  11. Energy Efficiency Indicators Methodology Booklet

    Energy Technology Data Exchange (ETDEWEB)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

    2010-05-01

    This Methodology Booklet provides a comprehensive review of, and guiding principles for, constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers assess changes in energy efficiency over time. Building on past OECD experience and best practices, and on the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of hierarchy of indicators, spanning from aggregate macro-level to disaggregated end-use-level metrics, is presented to help shape the understanding of assessing energy efficiency. For each sector of activity (industry, commercial, residential, agriculture and transport), indicators are presented and recommendations to distinguish the different factors affecting energy use are highlighted. The booklet specifically addresses issues relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

  12. Software engineering methodologies and tools

    Science.gov (United States)

    Wilcox, Lawrence M.

    1993-01-01

    Over the years many engineering disciplines have developed, including chemical, electronic, etc. Common to all engineering disciplines is the use of rigor, models, metrics, and predefined methodologies. Recently, a new engineering discipline has appeared on the scene, called software engineering. For over thirty years computer software has been developed, and the track record has not been good. Software development projects often miss schedules, run over budget, do not give users what they want, and produce defects. One estimate is that there are one to three defects per 1000 lines of deployed code. More and more systems require larger and more complex software for support, and as this requirement grows, the software development problems grow exponentially. It is believed that software quality can be improved by applying engineering principles. Another compelling reason to bring the engineering disciplines to software development is productivity: it has been estimated that the productivity of producing software has increased only one to two percent a year over the last thirty years. Ironically, the computer and its software have contributed significantly to industry-wide productivity, but computer professionals have done a poor job of using the computer to do their own job. Engineering disciplines and methodologies are now emerging, supported by software tools that address the problems of software development. This paper addresses some of the current software engineering methodologies as a backdrop for a general evaluation of computer-assisted software engineering (CASE) tools, based on actual installation of and experimentation with some specific tools.

  13. Methodology for technical risk assessment

    International Nuclear Information System (INIS)

    Waganer, L.M.; Zuckerman, D.S.

    1983-01-01

    A methodology has been developed for, and applied to, the assessment of the technical risks associated with an evolving technology. This methodology, originally developed for fusion by K. W. Billman and F. R. Scott at EPRI, has been applied to assess the technical risk of a fuel system for a fusion reactor. Technical risk is defined as the risk that a particular technology or component which is currently under development will not achieve a set of required technical specifications (i.e. probability of failure). The individual steps in the technical risk assessment are summarized. The first step in this methodology is to clearly and completely quantify the technical requirements for the particular system being examined. The next step is to identify and define subsystems and the various options which appear capable of achieving the required technical performance. The subsystem options are then characterized regarding subsystem functions, interface requirements with other subsystems and with the overall system, important components, developmental obstacles and technical limitations. Key technical subsystem performance parameters are identified which directly or indirectly relate to the system technical specifications. Past, existing and projected technical performance data are obtained from subsystem experts using a Bayesian interrogation technique. The input data are solicited in the form of probability functions, so the output performance of the system is also expressed as probability functions.
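
    The record above describes propagating expert-elicited probability functions through to a system-level probability of failure. As a minimal sketch of that idea (not the EPRI/Billman-Scott procedure itself), the Monte Carlo fragment below assumes each subsystem's key performance parameter has been elicited as a normal distribution and that the system meets its specification only if every subsystem meets its own; all names and numbers are hypothetical.

```python
import random

# Hypothetical illustration: each subsystem's key performance parameter is
# elicited as a probability distribution (here normal); the system meets its
# specification only if every subsystem reaches its required value.
random.seed(1)

# (mu, sigma, required minimum) for each invented subsystem parameter.
subsystems = {
    "fueling_rate":     (1.05, 0.10, 1.00),
    "tritium_recovery": (0.97, 0.03, 0.95),
    "pellet_speed":     (1.10, 0.15, 1.00),
}

N = 100_000
failures = 0
for _ in range(N):
    ok = all(random.gauss(mu, sd) >= spec for mu, sd, spec in subsystems.values())
    failures += not ok

print(f"Estimated technical risk (P[fail spec]) = {failures / N:.3f}")
```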

  14. Methodology for astronaut reconditioning research.

    Science.gov (United States)

    Beard, David J; Cook, Jonathan A

    2017-01-01

    Space medicine offers some unique challenges, especially in terms of research methodology. A specific challenge for astronaut reconditioning is identifying which aspects of terrestrial research methodology hold and which require modification. This paper reviews this area and presents appropriate solutions where possible. It is concluded that spaceflight rehabilitation research should remain question/problem driven and is broadly similar to terrestrial research on small populations, such as those with rare diseases or in various sports. Astronauts and Medical Operations personnel should be involved at all levels to ensure the feasibility of research protocols. There is room for creative and hybrid methodology, but careful systematic observation is likely to be more achievable and fruitful than complex trial-based comparisons. Multi-space-agency collaboration will be critical to pool data from small groups of astronauts, with the accepted use of standardised outcome measures across all agencies. Systematic reviews will be an essential component. Most limitations relate to the inherently small sample size available for human spaceflight research. Early adoption of a co-operative model for spaceflight rehabilitation research is therefore advised.

  15. Methodology for ranking restoration options

    International Nuclear Information System (INIS)

    Hedemann Jensen, Per

    1999-04-01

    The work described in this report has been performed as part of the RESTRAT Project FI4P-CT95-0021a (PL 950128), co-funded by the Nuclear Fission Safety Programme of the European Commission. The RESTRAT project has the overall objective of developing generic methodologies for ranking restoration techniques as a function of contamination and site characteristics. The project includes analyses of existing remediation methodologies and contaminated sites, and is structured in the following steps: characterisation of relevant contaminated sites; identification and characterisation of relevant restoration techniques; assessment of the radiological impact; development and application of a selection methodology for restoration options; and formulation of generic conclusions and development of a manual. The project is intended to apply to situations in which sites with nuclear installations have been contaminated with radioactive materials as a result of the operation of those installations. The areas considered for remedial measures include contaminated land areas, rivers and sediments in rivers, lakes and sea areas. Five contaminated European sites have been studied. Various remedial measures have been envisaged with respect to optimising the protection of the populations exposed to the radionuclides at the sites. Cost-benefit analysis and multi-attribute utility analysis have been applied for the optimisation. Health, economic and social attributes have been included, and weighting factors for the different attributes have been determined by the use of scaling constants. (au)
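
    A minimal sketch of the multi-attribute utility ranking mentioned above, assuming an additive utility with normalised attribute scores and weights standing in for the scaling constants; the options, scores and weights below are invented for illustration, not RESTRAT data.

```python
# Hypothetical restoration options with attribute scores normalised to
# [0, 1] (1 = best); the weights play the role of the scaling constants.
options = {
    "topsoil removal": {"health": 0.9, "economic": 0.3, "social": 0.6},
    "deep ploughing":  {"health": 0.6, "economic": 0.7, "social": 0.7},
    "no action":       {"health": 0.1, "economic": 1.0, "social": 0.4},
}
weights = {"health": 0.5, "economic": 0.3, "social": 0.2}  # sum to 1

def utility(scores):
    # Additive multi-attribute utility: U = sum_k w_k * u_k
    return sum(weights[k] * v for k, v in scores.items())

for name, scores in sorted(options.items(), key=lambda kv: -utility(kv[1])):
    print(f"{name:16s} U = {utility(scores):.2f}")
```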

  16. Supply chain simulation tools and techniques: a survey

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    The main contribution of this paper is twofold: it surveys different types of simulation for supply chain management, and it discusses several methodological issues. The types of simulation covered are spreadsheet simulation, system dynamics, discrete-event simulation and business games. Which ...

  17. Survey of Dynamic PSA Methodologies

    International Nuclear Information System (INIS)

    Lee, Hansul; Kim, Hyeonmin; Heo, Gyunyoung; Kim, Taewan

    2015-01-01

    Event Tree (ET)/Fault Tree (FT) analysis is a significant methodology in Probabilistic Safety Assessment (PSA) for Nuclear Power Plants (NPPs). The ET/FT methodology has the advantage that users can easily learn and model with it, and it enables better communication between engineers engaged in the same field. However, conventional methodologies cope poorly with dynamic behavior (e.g. operation mode changes or sequence-dependent failures) and with the integrated treatment of mechanical failures and human errors. Meanwhile, new possibilities for improved PSA are emerging by virtue of the dramatic developments in digital hardware, software, information technology and data analysis. More specifically, the computing environment has greatly improved compared to the past, so risk analysis can be conducted with the large amount of data actually available. One method which can exploit these technological advances is dynamic PSA, in which conventional ET/FT acquires time- and condition-dependent behaviors in accident scenarios. In this paper, we investigated the various enabling techniques for dynamic PSA. Even though its history and academic achievements are substantial, it has attracted less interest from industrial and regulatory viewpoints. The authors expect this survey to contribute to a better understanding of dynamic PSA in terms of algorithms, practice and applicability. An overview of dynamic PSA was conducted; most methodologies share similar concepts. Among them, the Discrete Dynamic Event Tree (DDET) appears to be the backbone of most methodologies, since it can be applied to large problems. The common characteristics of approaches sharing the DDET concept are as follows: • both deterministic and stochastic approaches; • improved identification of PSA success criteria; • limitation of the detrimental effects of sequence binning (normally adopted in PSA); • avoidance of non-optimal success criteria that may distort the risk; • a framework for comprehensively considering
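
    To make the DDET concept concrete, the sketch below grows a discrete dynamic event tree for an invented one-component system: deterministic dynamics advance between branch points, each pump demand branches on success/failure, and branches below a probability cutoff are pruned. It illustrates the mechanics only, not any particular dynamic PSA tool.

```python
# Minimal discrete dynamic event tree (DDET) sketch with invented physics:
# a tank level rises unless a pump starts; the pump is demanded at every
# step and may fail with probability P_FAIL, producing new branches.
P_FAIL, CUTOFF, DT, T_END, LIMIT = 0.1, 1e-4, 1.0, 5.0, 8.0

branches = [{"t": 0.0, "level": 0.0, "pump_on": False, "p": 1.0}]
outcomes = []
while branches:
    b = branches.pop()
    if b["t"] >= T_END or b["level"] >= LIMIT:
        outcomes.append(b)            # end state reached
        continue
    # Deterministic dynamics between branchings.
    rate = 0.0 if b["pump_on"] else 2.0
    nxt = {**b, "t": b["t"] + DT, "level": b["level"] + rate * DT}
    if b["pump_on"]:
        branches.append(nxt)          # no further branching once pump runs
    else:
        # Stochastic branching on the pump demand, with probability pruning.
        ok = {**nxt, "pump_on": True,  "p": b["p"] * (1 - P_FAIL)}
        ko = {**nxt, "pump_on": False, "p": b["p"] * P_FAIL}
        branches += [s for s in (ok, ko) if s["p"] > CUTOFF]

p_damage = sum(b["p"] for b in outcomes if b["level"] >= LIMIT)
print(f"Sequences: {len(outcomes)}, P(level >= limit) = {p_damage:.4f}")
```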

  18. Survey of Dynamic PSA Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hansul; Kim, Hyeonmin; Heo, Gyunyoung [Kyung Hee University, Yongin (Korea, Republic of); Kim, Taewan [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2015-05-15

    Event Tree (ET)/Fault Tree (FT) analysis is a significant methodology in Probabilistic Safety Assessment (PSA) for Nuclear Power Plants (NPPs). The ET/FT methodology has the advantage that users can easily learn and model with it, and it enables better communication between engineers engaged in the same field. However, conventional methodologies cope poorly with dynamic behavior (e.g. operation mode changes or sequence-dependent failures) and with the integrated treatment of mechanical failures and human errors. Meanwhile, new possibilities for improved PSA are emerging by virtue of the dramatic developments in digital hardware, software, information technology and data analysis. More specifically, the computing environment has greatly improved compared to the past, so risk analysis can be conducted with the large amount of data actually available. One method which can exploit these technological advances is dynamic PSA, in which conventional ET/FT acquires time- and condition-dependent behaviors in accident scenarios. In this paper, we investigated the various enabling techniques for dynamic PSA. Even though its history and academic achievements are substantial, it has attracted less interest from industrial and regulatory viewpoints. The authors expect this survey to contribute to a better understanding of dynamic PSA in terms of algorithms, practice and applicability. An overview of dynamic PSA was conducted; most methodologies share similar concepts. Among them, the Discrete Dynamic Event Tree (DDET) appears to be the backbone of most methodologies, since it can be applied to large problems. The common characteristics of approaches sharing the DDET concept are as follows: • both deterministic and stochastic approaches; • improved identification of PSA success criteria; • limitation of the detrimental effects of sequence binning (normally adopted in PSA); • avoidance of non-optimal success criteria that may distort the risk; • a framework for comprehensively considering

  19. Prioritization methodology for chemical replacement

    Science.gov (United States)

    Cruit, Wendy; Goldberg, Ben; Schutzenhofer, Scott

    1995-01-01

    Since United States federal legislation required ozone-depleting chemicals (Class 1 and 2) to be banned from production, the National Aeronautics and Space Administration (NASA) and industry have been required to find other chemicals and methods to replace these target chemicals. This project was initiated to develop a prioritization methodology suitable for assessing and ranking existing processes by replacement 'urgency'. The methodology was produced in the form of a workbook (NASA Technical Paper 3421). The final workbook contains two tools, one for evaluation and one for prioritization. The two tools are interconnected in that they were developed from one central theme: chemical replacement due to imposed laws and regulations. The workbook provides matrices, detailed explanations of how to use them, and a detailed methodology for prioritization of replacement technology. The main objective is to provide a guideline to help direct the research for replacement technology. The approach to prioritization called for a system which would result in a numerical rating for the chemicals and processes being assessed. A Quality Function Deployment (QFD) technique was used to determine numerical values corresponding to the concerns raised and their respective importance to the process. The workbook defines the approach and the application of the QFD matrix. This technique (1) provides a standard database for technology that can be easily reviewed, and (2) provides a standard format for information when requesting resources for further research on chemical replacement technology. Originally, this workbook was to be used for Class 1 and Class 2 chemicals, but it was specifically designed to be flexible enough to be used for any chemical used in a process (if the chemical and/or process needs to be replaced). The methodology consists of comparison matrices (and the smaller comparison components) which allow replacement technology
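
    A minimal sketch of a QFD-style rating, assuming the usual 9/3/1 relationship strengths; the concerns, processes, weights and matrix entries are hypothetical, not those of NASA TP 3421.

```python
import numpy as np

# Rows are concerns ("whats") with importance weights; columns are processes
# that use a target chemical. The matrix holds QFD relationship strengths
# (9 strong, 3 medium, 1 weak, 0 none). Column totals give the urgency rating.
concerns = ["regulatory deadline", "worker exposure", "usage volume"]
weights = np.array([9, 5, 3])

processes = ["precision cleaning", "foam blowing", "paint stripping"]
relationship = np.array([
    [9, 3, 9],   # regulatory deadline
    [3, 9, 3],   # worker exposure
    [9, 1, 3],   # usage volume
])

scores = weights @ relationship  # weighted column sums
for name, s in sorted(zip(processes, scores), key=lambda t: -t[1]):
    print(f"{name:18s} urgency score = {s}")
```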

  20. The fractional scaling methodology (FSM) Part 1. methodology development

    International Nuclear Information System (INIS)

    Novak Zuber; Ivan Catton; Upendra S Rohatgi; Wolfgang Wulff

    2005-01-01

    Full text of publication follows: A quantitative methodology is developed, based on the concepts of hierarchy and synthesis, to integrate and organize information and data. The methodology uses scaling to synthesize experimental data and analytical results, and to provide quantitative criteria for evaluating the effects of various design and operating parameters that influence processes in a complex system such as a nuclear power plant or a related test facility. Synthesis and scaling are performed on three hierarchical levels: the process, component and system levels. Scaling on the process level determines the effect of a selected process on a particular state variable during a selected scenario. At the component level, this scaling determines the effects various processes have on a state variable, and it ranks the processes according to their importance by the magnitude of the fractional change they cause in that state variable. At the system level, the scaling determines the governing processes and corresponding components, ranking these in order of importance according to their effect on the fractional change of system-wide state variables. The scaling methodology reveals on all levels the fractional change of state variables and is therefore called the Fractional Scaling Methodology (FSM). FSM synthesizes process parameters and assigns to each thermohydraulic process a dimensionless effect metric Ω = ωt, the product of the specific rate of fractional change ω and the characteristic time t. The rate of fractional change ω is the ratio of the process transport rate to the content of a preserved quantity in a component. The effect metric Ω quantifies the contribution of the process to the fractional change of a state variable in a given component. Ordering of component effect metrics provides the hierarchy of processes in a component, then in all components and the system. FSM separates quantitatively dominant from minor processes and components and
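
    A minimal numerical sketch of the FSM effect metric, computing ω = (transport rate)/(content) and Ω = ωt for a few invented processes and ranking them; the values are toy numbers, not taken from the paper.

```python
# Toy numbers only: each process in a component is characterised by its
# transport rate of the preserved quantity (here energy, J/s) and the
# content of that quantity in the component (J).
processes = {
    "break flow":    {"rate": 5.0e6, "content": 2.0e8},
    "core heat-up":  {"rate": 1.2e6, "content": 2.0e8},
    "ECC injection": {"rate": 3.0e5, "content": 2.0e8},
}
t = 100.0  # characteristic time of the scenario phase (s)

# omega = rate / content  (specific rate of fractional change, 1/s)
# Omega = omega * t       (dimensionless effect metric)
ranked = sorted(processes.items(),
                key=lambda kv: kv[1]["rate"] / kv[1]["content"], reverse=True)
for name, p in ranked:
    omega = p["rate"] / p["content"]
    print(f"{name:14s} omega = {omega:.2e} 1/s, Omega = {omega * t:.3f}")
```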

  1. Multi-agent systems simulation and applications

    CERN Document Server

    Uhrmacher, Adelinde M

    2009-01-01

    Methodological Guidelines for Modeling and Developing MAS-Based Simulations. The intersection of agents, modeling, simulation, and application domains has been the subject of active research for over two decades. Although agents and simulation have been used effectively in a variety of application domains, much of the supporting research remains scattered in the literature, too often leaving scientists to develop multi-agent system (MAS) models and simulations from scratch. Multi-Agent Systems: Simulation and Applications provides an overdue review of the wide-ranging facets of MAS simulation ...

  2. Clustering Measurements of broad-line AGNs: Review and Future

    Directory of Open Access Journals (Sweden)

    Mirko Krumpe

    2014-12-01

    Despite substantial effort, the precise physical processes that lead to the growth of super-massive black holes in the centers of galaxies are still not well understood. These phases of black hole growth are thought to be of key importance for understanding galaxy evolution. Forthcoming missions such as eROSITA, HETDEX, eBOSS, BigBOSS, LSST, and Pan-STARRS will compile by far the largest ever Active Galactic Nuclei (AGN) catalogs, which will allow us to measure the spatial distribution of AGNs in the universe with unprecedented accuracy. For the first time, AGN clustering measurements will reach a level of precision that will not only allow for an alternative approach to answering open questions in AGN and galaxy co-evolution, but will open a new frontier, allowing us to precisely determine cosmological parameters. This paper reviews large-scale clustering measurements of broad-line AGNs. We summarize how clustering is measured and which constraints can be derived from AGN clustering measurements, we discuss recent developments, and we briefly describe future projects that will deliver extremely large AGN samples enabling clustering measurements of unprecedented accuracy. In order to maximize the scientific return in the research fields of AGN and galaxy evolution and cosmology, we advise that the community develop a full understanding of the systematic uncertainties, which will, in contrast to today's measurements, be the dominant source of uncertainty.
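
    As an illustration of how such clustering is commonly measured, the sketch below evaluates the Landy-Szalay two-point correlation estimator, xi = (DD - 2DR + RR)/RR, on toy 3-D positions; a real AGN analysis would use the survey geometry, weights and far larger catalogs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-D positions standing in for an AGN catalog and a random catalog
# tracing the same selection function.
data = rng.random((500, 3))
rand = rng.random((2000, 3))

bins = np.linspace(0.01, 0.3, 16)

def pair_counts(a, b, bins, same=False):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if same:
        d = d[np.triu_indices_from(d, k=1)]  # unique pairs, no self-pairs
    else:
        d = d.ravel()
    return np.histogram(d, bins=bins)[0].astype(float)

nd, nr = len(data), len(rand)
dd = pair_counts(data, data, bins, same=True) / (nd * (nd - 1) / 2)
rr = pair_counts(rand, rand, bins, same=True) / (nr * (nr - 1) / 2)
dr = pair_counts(data, rand, bins) / (nd * nr)

xi = (dd - 2 * dr + rr) / rr  # Landy-Szalay estimator
print(np.round(xi, 3))
```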

  3. Argument for a non-standard broad-line region

    International Nuclear Information System (INIS)

    Collin, S.

    1987-01-01

    The region emitting the broad lines (BLR) in quasars and AGN has a 'standard status'. It is shown that this status raises serious problems concerning the energy budget and the thermal state of the BLR. A possible solution is proposed. [fr]

  4. Clues to quasar broad-line region geometry and kinematics

    NARCIS (Netherlands)

    Vestergaard, M; Wilkes, BJ; Barthel, PD

    2000-01-01

    We present evidence that the high-velocity C IV lambda 1549 emission-line gas of radio-loud quasars may originate in a disklike configuration, in close proximity to the accretion disk often assumed to emit the low-ionization lines. For a sample of 36 radio-loud z ≈ 2 quasars, we find ...

  5. Development of a General Modelling Methodology for Vacuum Residue Hydroconversion

    Directory of Open Access Journals (Sweden)

    Pereira de Oliveira L.

    2013-11-01

    This work concerns the development of a methodology for the kinetic modelling of refining processes, and more specifically of vacuum residue conversion. The proposed approach overcomes the lack of molecular detail of the petroleum fractions and simulates the transformation of feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte Carlo (kMC) method is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that the two approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides a high amount of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
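
    For readers unfamiliar with Gillespie's Stochastic Simulation Algorithm mentioned above, here is a minimal sketch for a single lumped first-order cracking reaction; the rate constant and molecule count are invented, and a real VR model would track many lumps or molecules and many reaction channels.

```python
import math
import random

# Minimal Gillespie SSA for one lumped reaction VR --k--> products,
# the stochastic counterpart of a deterministic first-order lump.
random.seed(0)
k = 0.05          # 1/s, hypothetical rate constant
n_vr = 1000       # initial number of vacuum-residue "molecules"
t, t_end = 0.0, 60.0

while n_vr > 0 and t < t_end:
    a = k * n_vr                                 # total propensity
    t += -math.log(1.0 - random.random()) / a    # exponential waiting time
    n_vr -= 1                                    # fire the (only) channel

frac = 1 - n_vr / 1000
# Rough check against the deterministic first-order solution 1 - exp(-k*t).
print(f"t = {t:.1f} s, conversion = {frac:.2%} "
      f"(deterministic: {1 - math.exp(-k * min(t, t_end)):.2%})")
```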

  6. Radioisotope methodology course radioprotection aspects

    International Nuclear Information System (INIS)

    Bergoc, R.M.; Caro, R.A.; Menossi, C.A.

    1996-01-01

    The advancement of knowledge in molecular and cell biology, biochemistry, medicine and pharmacology during the last 50 years, since the end of World War II, has been truly outstanding, and it can safely be said that this is principally due to the application of radioisotope techniques. Research on metabolism, the biodistribution of pharmaceuticals, pharmacodynamics, etc., is mostly carried out by means of techniques employing radioactive materials. Radioisotopes and radiation are frequently used in medicine, both as diagnostic and as therapeutic tools. Radioimmunoassay is today a routine method in endocrinology and in general clinical medicine. Receptor determination and characterization is a steadily growing methodology used in clinical biochemistry, pharmacology and medicine. The use of radiopharmaceuticals and radiation of different origins for therapeutic purposes should not be overlooked. For these reasons, the importance of teaching radioisotope methodology is steadily growing, principally for specialization at the postgraduate level, but it is also worthwhile to give some elementary theoretical and practical notions in the pregraduate curriculum. These observations are justified by more than 30 years of teaching experience at both levels at the School of Pharmacy and Biochemistry of the University of Buenos Aires, Argentina. In 1960 we began to teach Physics III, an obligatory pregraduate course for biochemistry students, in which some elementary notions of radioactivity and measurement techniques were given. Successive modifications of the biochemistry pregraduate curriculum incorporated radiochemistry as an elective subject and, since 1978, radioisotope methodology as an obligatory subject for biochemistry students. This subject is given at the radioisotope laboratory during the first semester of each year, and its objective is to provide theoretical and practical knowledge to the biochemistry students, even

  7. Methodologies for tracking learning paths

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth; Gilje, Øystein; Lindstrand, Fredrik

    2009-01-01

    ... filmmakers: what furthers their interest and/or hinders it, and what learning patterns emerge. The aim of this article is to present and discuss issues regarding the methodology and methods of the study, such as developing a relationship with interviewees when conducting interviews online (using MSN). We suggest two considerations about using online interviews: how the interviewees value the given subject of conversation, and their familiarity with being online. The benefit of online communication with the young filmmakers is ease, because it is both practical and appropriates a meeting ...

  8. Continuous culture apparatus and methodology

    International Nuclear Information System (INIS)

    Conway, H.L.

    1975-01-01

    At present, we are investigating the sorption of potentially toxic trace elements by phytoplankton under controlled laboratory conditions. Continuous culture techniques were used to study the mechanism of the sorption of the trace elements by unialgal diatom populations and the factors influencing this sorption. Continuous culture methodology has been used extensively to study bacterial kinetics. It is an excellent technique for obtaining a known physiological state of phytoplankton populations. An automated method for the synthesis of continuous culture medium for use in these experiments is described
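
    A minimal sketch of the continuous-culture (chemostat) kinetics underlying such experiments, assuming standard Monod growth; the parameter values are invented for illustration and are not from the record above.

```python
# Chemostat with Monod growth kinetics, integrated by forward Euler.
mu_max, Ks, Y = 1.2, 0.05, 0.5   # 1/d, g/L, biomass yield (g/g) - invented
D, S_in = 0.6, 1.0               # dilution rate (1/d), feed substrate (g/L)

X, S, dt = 0.01, 1.0, 0.001      # biomass, substrate (g/L), step (d)
for _ in range(int(30 / dt)):    # integrate 30 days
    mu = mu_max * S / (Ks + S)           # Monod specific growth rate
    dX = (mu - D) * X                    # growth minus washout
    dS = D * (S_in - S) - mu * X / Y     # inflow/outflow minus consumption
    X, S = X + dX * dt, S + dS * dt

# At steady state mu = D, so S* = Ks*D/(mu_max - D) and X* = Y*(S_in - S*).
S_star = Ks * D / (mu_max - D)
print(f"X = {X:.4f} g/L, S = {S:.4f} g/L")
print(f"theory: S* = {S_star:.4f}, X* = {Y * (S_in - S_star):.4f}")
```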

  9. International Meeting on Simulation in Healthcare

    Science.gov (United States)

    2010-02-01

    ... into healthcare. Lean principles originate from Japanese manufacturing, particularly the Toyota production system. Lean methodology is a series of principles derived from Japanese manufacturing aimed at creating value for customers through the elimination of wasteful ...

  10. Introducing Simulation via the Theory of Records

    Science.gov (United States)

    Johnson, Arvid C.

    2011-01-01

    While spreadsheet simulation can be a useful method for helping students understand some of the more advanced concepts in an introductory statistics course, introducing the simulation methodology at the same time as these concepts can result in student cognitive overload. This article describes a spreadsheet model that has been…
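
    A sketch of the records-based simulation idea, in code rather than a spreadsheet: for an i.i.d. continuous sequence the n-th observation is a record with probability 1/n, so the expected record count over n observations is the harmonic sum, which the simulation reproduces.

```python
import random

# Simulate record counts in i.i.d. sequences and compare with theory.
random.seed(42)
n, trials = 50, 10_000

total_records = 0
for _ in range(trials):
    best = float("-inf")
    for x in (random.random() for _ in range(n)):
        if x > best:          # new record observed
            best = x
            total_records += 1

harmonic = sum(1 / k for k in range(1, n + 1))
print(f"simulated mean records:  {total_records / trials:.3f}")
print(f"theoretical (harmonic):  {harmonic:.3f}")
```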

  11. Mesoscopic simulations of crosslinked polymer networks

    NARCIS (Netherlands)

    Megariotis, G.; Vogiatzis, G.G.; Schneider, L.; Müller, M.; Theodorou, D.N.

    2016-01-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn
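
    A minimal Brownian Dynamics sketch in the spirit of the bead model described above (in Python rather than the study's C++, and without the kinetic Monte Carlo part or crosslinks): overdamped Langevin dynamics of a harmonic bead-spring chain in reduced units, with all parameter values invented.

```python
import numpy as np

# Overdamped Langevin (Brownian Dynamics) for a chain of soft beads joined
# by harmonic springs; reduced units, invented parameters.
rng = np.random.default_rng(0)
n_beads, k_spring, gamma, kT, dt = 20, 10.0, 1.0, 1.0, 1e-3

x = np.cumsum(rng.normal(0, 0.5, (n_beads, 3)), axis=0)  # initial chain
for _ in range(20_000):
    # Harmonic bond forces between neighbours: F = -k * (r_i - r_j)
    bond = x[1:] - x[:-1]
    f = np.zeros_like(x)
    f[:-1] += k_spring * bond
    f[1:] -= k_spring * bond
    # Euler-Maruyama step: dx = F/gamma dt + sqrt(2 kT dt / gamma) * N(0, 1)
    x += f / gamma * dt + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=x.shape)

r_ee = np.linalg.norm(x[-1] - x[0])
print(f"end-to-end distance after run: {r_ee:.2f} (reduced units)")
```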

  12. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  13. Convergence studies of deterministic methods for LWR explicit reflector methodology

    International Nuclear Information System (INIS)

    Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.

    2013-01-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  14. Chapter three: methodology of exposure modeling

    CSIR Research Space (South Africa)

    Moschandreas, DJ

    2002-12-01

    Methodologies and models are reviewed. Three exposure/measurement methodologies are assessed. Estimation methods focus on source evaluation and attribution; sources include those outdoors and indoors, as well as in occupational and in-transit environments. Fate ...

  15. Methodological Issues and Practices in Qualitative Research.

    Science.gov (United States)

    Bradley, Jana

    1993-01-01

    Discusses methodological issues concerning qualitative research and describes research practices that qualitative researchers use to address these methodological issues. Topics discussed include the researcher as interpreter, the emergent nature of qualitative research, understanding the experience of others, trustworthiness in qualitative…

  16. Audit Methodology for IT Governance

    Directory of Open Access Journals (Sweden)

    Mirela GHEORGHE

    2010-01-01

    The continuous development of new IT technologies has been followed by their rapid integration at the organization level. The management of organizations faces a new challenge: structural redefinition of the IT component in order to create added value and to minimize IT risks through efficient management of all the organization's IT resources. These changes have had a great impact on the governance of the IT component. The paper proposes an audit methodology for IT Governance at the organization level. The audit strategy developed is risk-based, enabling the IT auditor to examine from the best angle the efficiency and effectiveness of the IT Governance structure. The evaluation of the risks associated with IT Governance is a key process in planning the audit mission, allowing the identification of the segments with increased risk. With no ambition for completeness, the proposed methodology provides the auditor with a useful tool for accomplishing his mission.

  17. Methodology of Credit Analysis Development

    Directory of Open Access Journals (Sweden)

    Slađana Neogradi

    2017-12-01

    The subject of the research presented in this paper is the definition of a methodology for the development of credit analysis in companies and its application in lending operations in the Republic of Serbia. With the developing credit market, there is a growing need for a well-developed risk and loss prevention system. In the introduction, the bank's analysis of the loan applicant is presented, aimed at minimizing and managing credit risk. The processing of the credit application is then described, together with the procedure for analyzing financial statements in order to gain insight into the borrower's creditworthiness. In the second part of the paper, the theoretical and methodological framework is presented as applied in a concrete company. In the third part, models are presented which banks should use to protect against risk exposure, i.e. whose goal is to reduce losses on loan operations in our country as well as to adjust to market conditions in an optimal way.

  18. Safeguarding the fuel cycle: Methodologies

    International Nuclear Information System (INIS)

    Gruemm, H.

    1984-01-01

    The effectiveness of IAEA safeguards is characterized by the extent to which they achieve their basic purpose: credible verification that no nuclear material is diverted from peaceful uses. This effectiveness depends, inter alia but significantly, on manpower in terms of the number and qualifications of inspectors. Staff increases will be required to improve effectiveness further, if this is requested by Member States, as well as to take into account new facilities expected to come under safeguards in the future. However, they are difficult to achieve owing to the financial constraints set by the IAEA budget. As a consequence, much has been done and is being undertaken to improve the utilization of available manpower, including standardization of inspection procedures; improvement of management practices and training; rationalization of the planning, reporting and evaluation of inspection activities; and development of new equipment. This article focuses on certain aspects of the verification methodology presently used and asks whether any modifications of this methodology are conceivable that would lead to economies of manpower without loss of effectiveness. It has been stated in this context that present safeguards approaches are 'facility-oriented' and that the adoption of a 'fuel-cycle-oriented approach' might bring about the desired savings. Many studies have been devoted to this very interesting suggestion. No definite answer is yet available, and further studies will be necessary to reach a conclusion. In what follows, the essentials of the problem are explained and some possible paths to a solution are discussed.

  19. Methodology for combining dynamic responses

    International Nuclear Information System (INIS)

    Cudlin, R.; Hosford, S.; Mattu, R.; Wichman, K.

    1978-09-01

    The NRC has historically required that the structural/mechanical responses due to various accident loads and to loads caused by natural phenomena (such as earthquakes) be combined when analyzing structures, systems, and components important to safety. Several approaches to account for the potential interaction of loads resulting from accidents and natural phenomena have been used. One approach, the so-called absolute or linear summation (ABS) method, linearly adds the peak structural responses due to the individual dynamic loads. In general, the ABS method has reflected the staff's conservative preference for the combination of dynamic load responses. A second approach, referred to as SRSS, yields a combined response equal to the square root of the sum of the squares of the peak responses due to the individual dynamic loads. The lack of a physical relationship between some of the loads has raised questions as to the proper methodology to be used in the design of nuclear power plants. An NRR Working Group was constituted to examine load combination methodologies and to develop a recommendation concerning criteria or conditions for their application. Evaluations of and recommendations on the use of the ABS and SRSS methods are provided in the report.
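
    The two combination rules discussed above reduce to simple formulas; the sketch below contrasts them for invented peak responses, showing why ABS is the more conservative choice.

```python
import math

# Peak responses (e.g. kN) of one support to individual dynamic loads;
# the load names and numbers are invented for illustration.
peaks = {"SSE seismic": 120.0, "LOCA pool swell": 80.0, "SRV discharge": 50.0}

abs_sum = sum(abs(r) for r in peaks.values())          # linear summation (ABS)
srss = math.sqrt(sum(r * r for r in peaks.values()))   # SRSS combination

print(f"ABS  combined response: {abs_sum:.1f}")
print(f"SRSS combined response: {srss:.1f}")
```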

  20. Developing a Simulated-Person Methodology Workshop: An Experiential Education Initiative for Educators and Simulators

    Science.gov (United States)

    Peisachovich, Eva Hava; Nelles, L. J.; Johnson, Samantha; Nicholson, Laura; Gal, Raya; Kerr, Barbara; Celia, Popovic; Epstein, Iris; Da Silva, Celina

    2017-01-01

    Numerous forecasts suggest that professional-competence development depends on human encounters. Interactions between organizations, tasks, and individual providers influence human behaviour, affect organizations' or systems' performance, and are a key component of professional-competence development. Further, insufficient or ineffective…

  1. Information technology security system engineering methodology

    Science.gov (United States)

    Childs, D.

    2003-01-01

    A methodology is described for engineering security into large information technology systems under development. The methodology is an integration of a risk management process and a generic system development life cycle process. It is to be used by Security System Engineers to effectively engineer and integrate information technology security into a target system as it progresses through the development life cycle. The methodology can also be used to re-engineer security into a legacy system.

  2. Turbine trip transient analysis in peach bottom NPP with TRAC-BF1 code and Simtab-1D methodology

    International Nuclear Information System (INIS)

    Barrachina, T.; Miro, R.; Verdu, G.; Collazo, I.; Gonzalez, P.; Concejal, A.; Ortego, P.; Melara, J.

    2010-01-01

    In TRAC-BF1, nuclear cross-sections are specified in the input deck as a polynomial expansion, so it is necessary to obtain the coefficients of this polynomial function. One of the methods proposed in the literature is the KINPAR methodology, which uses the results from different perturbations of the original state to obtain the coefficients of the polynomial expansion; the simulations are performed using the SIMULATE3 code. In this work, a new methodology to obtain the cross-section sets in 1D is presented. The first step consists of the application of the SIMTAB methodology, developed at UPV, to obtain the 3D cross-section sets from CASMO4/SIMULATE3. These 3D cross-section sets are collapsed to 1D, using as weighting factors the 3D thermal and fast neutron fluxes obtained from SIMULATE3. The 1D cross-sections obtained are in the same format as the 3D sets; hence, it has been necessary to modify the TRAC-BF1 code so that it can read and interpolate between these tabulated 1D cross-sections. With this new methodology it is not necessary to perform simulations of different perturbations of the original state, and the variation range of the moderator density can be wider than with the former KINPAR methodology. This is important for simulating severe accidents, in which the variables vary over a wide range. The new methodology is applied to the simulation of the turbine trip transient benchmark in the Peach Bottom NPP using the TRAC-BF1 code. The results of the transient simulation in TRAC-BF1 using the KINPAR methodology and the new SIMTAB-1D methodology are compared. (author)
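
    A minimal sketch of the flux-weighted 3D-to-1D collapse described above, assuming the usual homogenization formula (cross-sections averaged over the radial nodes of each axial plane with the group flux as weight); the array shapes and values are random stand-ins, not CASMO/SIMULATE data.

```python
import numpy as np

# Collapse few-group cross-sections from a 3-D nodal mesh to a 1-D axial
# model: each axial plane keeps its group structure, radial nodes are
# averaged with the flux as weight.
nz, nxy, ng = 4, 9, 2                           # axial planes, radial nodes, groups
rng = np.random.default_rng(1)
sigma = rng.uniform(0.01, 0.1, (nz, nxy, ng))   # 3-D cross-section sets
phi = rng.uniform(0.5, 1.5, (nz, nxy, ng))      # 3-D group fluxes

# sigma_1D(z, g) = sum_xy sigma(z, xy, g) * phi(z, xy, g) / sum_xy phi(z, xy, g)
sigma_1d = (sigma * phi).sum(axis=1) / phi.sum(axis=1)
print(sigma_1d)   # shape (nz, ng): one value per plane and group
```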

  3. Reconciling Anti-essentialism and Quantitative Methodology

    DEFF Research Database (Denmark)

    Jensen, Mathias Fjællegaard

    2017-01-01

    Quantitative methodology has a contested role in feminist scholarship, which remains almost exclusively qualitative. Considering Irigaray's notion of mimicry, Spivak's strategic essentialism, and Butler's contingent foundations, the essentialising implications of quantitative methodology may prove ... the potential to reconcile anti-essentialism and quantitative methodology, and thus to make peace in the quantitative/qualitative Paradigm Wars.

  4. 42 CFR 441.472 - Budget methodology.

    Science.gov (United States)

    2010-10-01

    42 CFR 441.472 (Public Health), Budget methodology. (a) The State shall set forth a budget methodology that ensures service authorization resides with the State and meets the ...

  5. Application of agile methodologies in software development

    Directory of Open Access Journals (Sweden)

    Jovanović Aca D.

    2016-01-01

    The paper presents the potential for the development of software using agile methodologies. Special consideration is devoted to the potential and advantages of the Scrum methodology in software development, and to the relationship between the implementation of agile methodologies and software development projects.

  6. Monitoring and diagnosis for sensor fault detection using GMDH methodology

    International Nuclear Information System (INIS)

    Goncalves, Iraci Martinez Pereira

    2006-01-01

    The fault detection and diagnosis system is an Operator Support System dedicated to specific functions that alerts operators to sensor and actuator faults and guides them in the diagnosis before the normal alarm limits are reached. Operator Support Systems have appeared in order to reduce the panel complexity caused by the increase of available information in nuclear power plant control rooms. In this work, a monitoring and diagnosis system was developed based on the GMDH (Group Method of Data Handling) methodology and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing values calculated by the GMDH model with measured values. The methodology was first applied to theoretical models: a heat exchanger model and a theoretical model of the IPEN reactor. The results obtained with the theoretical models provided the basis for applying the methodology to actual reactor operation data. Three GMDH models were developed for monitoring actual operation data: the first using just the thermal process variables, the second also considering some nuclear variables, and the third considering all the reactor variables. The three models presented excellent results, showing the viability of using the methodology to monitor operation data. The comparison between the three models' results also shows the methodology's capacity to choose by itself the best set of input variables for model optimization. For the diagnosis implementation, faults were simulated in the measured temperature values by adding a step change. The fault values correspond to a typical temperature sensor decalibration, and the result of monitoring the faulty data was then used to build a simple diagnosis system based on fuzzy logic. (author)
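
    A minimal sketch of one GMDH selection layer as usually described (pairwise quadratic Ivakhnenko polynomials fitted by least squares and ranked by error on a separate validation split); the data here are synthetic, whereas the work above used actual plant variables. In a monitor, the selected model's estimate is compared with the measured value and a persistent residual is flagged as a possible sensor fault.

```python
import itertools
import numpy as np

# One GMDH layer: every pair of inputs feeds an Ivakhnenko polynomial
# y = a0 + a1*u + a2*v + a3*u*v + a4*u^2 + a5*v^2, fitted on a training
# split and ranked by RMSE on a validation split (the external criterion).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                                 # 4 candidate sensors
y = 2 * X[:, 0] + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=400)  # monitored sensor

train, val = slice(0, 300), slice(300, 400)

def design(u, v):
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

results = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    A = design(X[train, i], X[train, j])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = design(X[val, i], X[val, j]) @ coef
    rmse = np.sqrt(np.mean((pred - y[val])**2))
    results.append((rmse, (i, j)))

for rmse, pair in sorted(results)[:3]:
    print(f"inputs {pair}: validation RMSE = {rmse:.3f}")
```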

  7. Fully Stochastic Distributed Methodology for Multivariate Flood Frequency Analysis

    Directory of Open Access Journals (Sweden)

    Isabel Flores-Montoya

    2016-05-01

    An adequate estimation of the extreme behavior of basin response is essential both for designing river structures and for evaluating their risk. The aim of this paper is to develop a new methodology to generate extreme hydrograph series spanning thousands of years using an event-based model. To this end, a spatial-temporal synthetic rainfall generator (RainSimV3) is combined with a distributed, physically-based, event-based rainfall-runoff model (RIBS). The use of an event-based model allows longer hydrograph series to be simulated with fewer computational and data requirements, but requires characterizing the initial basin state, which depends on the initial basin moisture distribution. To overcome this problem, this paper proposes a probabilistic calibration-simulation approach, which treats the initial state and the model parameters as random variables characterized by probability distributions through a Monte Carlo simulation. This approach is compared with two other approaches, the deterministic and the semi-deterministic approach, both of which use a unique initial state. The deterministic approach also uses a unique value of the model parameters, while the semi-deterministic approach obtains these values from their probability distributions through a Monte Carlo simulation, considering the basin variability. The methodology has been applied to the Corbès and Générargues basins, in the southeast of France. The results show that the probabilistic approach offers the best fit. This means that the proposed methodology can successfully characterize the extreme behavior of the basin, considering the basin variability and overcoming the initial-state problem.
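
    A compact sketch of the probabilistic calibration-simulation idea, assuming a toy event model and one event per year: initial moisture and an uncertain runoff parameter are sampled per event, and empirical return periods follow from the Weibull plotting position T = (N + 1)/rank. Neither the model nor the numbers correspond to RainSimV3/RIBS.

```python
import numpy as np

# Monte Carlo over initial basin state and model parameters for a toy
# event-based rainfall-runoff model (all distributions are invented).
rng = np.random.default_rng(7)
N = 5000                                            # synthetic events, one per "year"

rain = rng.gamma(shape=2.0, scale=30.0, size=N)     # event rainfall (mm)
moisture = rng.beta(2, 5, size=N)                   # sampled initial soil moisture (-)
runoff_coef = rng.uniform(0.3, 0.7, size=N)         # sampled uncertain parameter

peak = runoff_coef * (0.4 + 0.6 * moisture) * rain  # toy peak flow (m3/s)

# Empirical return periods from the Weibull plotting position T = (N+1)/rank.
ranked = np.sort(peak)[::-1]
for T in (10, 100, 1000):
    rank = int(round((N + 1) / T))
    print(f"T = {T:>4d} yr  ->  peak ~ {ranked[rank - 1]:.1f} m3/s")
```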

  8. Simulating Vito

    CERN Document Server

    Fragapane, Alexander

    2013-01-01

    This paper discusses the techniques used to simulate the proposed upgrade to the ASPIC line at ISOLDE, VITO. It discusses the process used in the program SIMION, explaining how to start with an Autodesk Inventor drawing and import it into SIMION to get a working simulation. It then goes on to discuss the pieces of VITO which have been simulated in the program and how they were simulated. Finally, it explains a little about the simulations of the full beamline which have been done and discusses what still needs to be done.

  9. Advances in social simulation 2015

    CERN Document Server

    Verbrugge, Rineke; Flache, Andreas; Roo, Gert; Hoogduin, Lex; Hemelrijk, Charlotte

    2017-01-01

    This book highlights recent developments in the field, presented at the Social Simulation 2015 conference in Groningen, The Netherlands. It covers advances both in applications and methods of social simulation. Societal issues addressed range across complexities in economic systems, opinion dynamics and civil violence, changing mobility patterns, different land-use, transition in the energy system, food production and consumption, ecosystem management and historical processes. Methodological developments cover how to use empirical data in validating models in general, formalization of behavioral theory in agent behavior, construction of artificial populations for experimentation, replication of models, and agent-based models that can be run in a web browser. Social simulation is a rapidly evolving field. Social scientists are increasingly interested in social simulation as a tool to tackle the complex non-linear dynamics of society. Furthermore, the software and hardware tools available for social simulation ...

  10. Methodology for flammable gas evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases and releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: the steady-state flammable-gas concentration resulting from continuous release, and the concentration resulting from an episodic gas release.
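
    For the first area of concern, a generic well-mixed headspace balance gives the steady-state concentration directly; the sketch below uses that textbook model with invented flows, and is not the document's actual evaluation criteria.

```python
# Well-mixed headspace balance: dC/dt = Q_gas/V - (Q_vent/V)*C gives
# C_ss = Q_gas/(Q_vent + Q_gas) when the release adds to the vented flow;
# for Q_gas << Q_vent this reduces to C_ss ~ Q_gas/Q_vent.
Q_gas = 0.02    # continuous flammable gas release (m3/h), invented value
Q_vent = 10.0   # headspace ventilation flow (m3/h), invented value
LFL = 0.04      # lower flammability limit of the mixture (volume fraction)

C_ss = Q_gas / (Q_vent + Q_gas)
print(f"steady-state concentration = {C_ss:.4%} of headspace "
      f"({C_ss / LFL:.1%} of LFL)")
```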

  11. Methodological Reflections: Inter- ethnic Research

    DEFF Research Database (Denmark)

    Singla, Rashmi

    2010-01-01

    This article reflects on the methodological and epistemological aspects of the ethical issues involved in encounters between the researcher and research participants with ethnic minority background in contexts of diversity. The primary basis for these reflections is a follow-up study concerning young people, conducted with both youth and the parental generation with ethnic minority background in Denmark. The reflections include implications and challenges related to the researcher's national and ethnic background and educational and professional position in encounters with diverse 'researched persons', such as youth ... Specific challenges involved in longitudinal research (10-15 years) are also considered, as are issues related to the social relevance of the research deriving from psychopolitical validity, implying consideration of power dynamics in the personal, relational and collective domains.

  12. Sustainable Innovation and Entrepreneurship Methodology

    DEFF Research Database (Denmark)

    Celik, Sine; Joore, Peter; Christodoulou, Panayiotis

    The objective of the InnoLabs project is to facilitate cross-sectoral, multidisciplinary solutions to complex social problems in various European settings. InnoLabs are university-driven physical and/or organizational spaces that function as student innovation laboratories and operate as a local or regional "co-creation platform for sustainable solutions" to promote structural innovation. In this manual, the Sustainable Innovation and Entrepreneurship Methodology is described. The organisational guidelines mainly take their point of departure in how Aalborg University (AAU) in Denmark has organised this in daily practice. In line with the objectives of the InnoLabs project (output 05), the project partners have reflected on, evaluated and drawn conclusions from the project experiences, which are described in this report. The InnoLabs project was developed for the 2014 call of Erasmus+ funds KA2 Cooperation ...

  13. Methodologies for 2011 economic reports

    DEFF Research Database (Denmark)

    Nielsen, Rasmus

    STECF's Expert Working Group 11-03 convened in Athens (28 March - 1 April 2011) to discuss and seek agreement on the content, indicators, methodologies and format of the 2011 Annual Economic Reports (AER) on the EU fishing fleet, the fish processing sector and the aquaculture sector. Proposals for improved content and overall structure were discussed, and templates for the national and EU overview chapters of the fish processing and aquaculture sector reports were produced. Indicators for the EU fishing fleet and fish processing reports were reviewed; new indicators for the fish processing and aquaculture sector reports were proposed; and topics of special interest were proposed for all three reports.

  14. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  15. Stakeholder analysis methodologies resource book

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, W.M.; Farhar, B.C.

    1994-03-01

    Stakeholder analysis allows analysts to identify how parties might be affected by government projects. This process involves identifying the likely impacts of a proposed action and the stakeholder groups affected by that action. Additionally, the process involves assessing how these groups might be affected and suggesting measures to mitigate any adverse effects. Evidence suggests that the efficiency and effectiveness of government actions can be increased, and adverse social impacts mitigated, when officials understand how a proposed action might affect stakeholders. This report discusses how to conduct useful stakeholder analyses for government officials making decisions on energy-efficiency and renewable-energy technologies and their commercialization. It discusses methodological issues that may affect the validity and reliability of findings, including sampling, generalizability, validity, 'uncooperative' stakeholder groups, the use of social indicators, and the effect of government regulations. The Appendix contains resource directories and a list of specialists in stakeholder analysis and involvement.

  16. Inventory differences: An evaluation methodology

    International Nuclear Information System (INIS)

    Heinberg, C.L.; Roberts, N.J.

    1987-01-01

    This paper discusses an evaluation methodology which is used for inventory differences at the Los Alamos National Laboratory. It is recognized that there are various methods which can be, and are being, used to evaluate process inventory differences at DOE facilities. The purpose of this paper is to share our thoughts on the subject and our techniques with those who are responsible for the evaluation of inventory differences at their facility. One of the most dangerous aspects of any evaluation technique, especially one as complex as most inventory difference evaluations tend to be, is to fail to look at the tools being used as indicators. There is a tendency to look at the results of an evaluation by one technique as an absolute. At the Los Alamos National Laboratory, several tools are used and the final evaluation is based on a combination of the observed results of a many-faceted evaluation. The tools used and some examples are presented

  17. Methodology of formal software evaluation

    International Nuclear Information System (INIS)

    Tuszynski, J.

    1998-01-01

    Sydkraft AB, the major Swedish utility, owner of ca. 6000 MW(e) installed in nuclear (NPP Barsebaeck and NPP Oskarshamn), fossil-fuel and hydro power plants, is facing modernization of the plants' control systems. The applicable standards require structured, formal methods for the implementation of control functions in modern, real-time software systems. This presentation introduces the implementation methodology as presently discussed within the Sydkraft organisation. The suggested approach is based upon the co-operation of three parties taking part in the implementation: the owner of the plant, the vendor, and the Quality Assurance (QA) organisation. QA will be based on tools for formal software validation and on the systematic gathering by the owner of validated and proved-by-operation control modules for concern-wide utilisation. (author)

  18. Butterfly valve torque prediction methodology

    International Nuclear Information System (INIS)

    Eldiwany, B.H.; Sharma, V.; Kalsi, M.S.; Wolfe, K.

    1994-01-01

    As part of the Motor-Operated Valve (MOV) Performance Prediction Program, the Electric Power Research Institute has sponsored the development of methodologies for predicting thrust and torque requirements of gate, globe, and butterfly MOVs. This paper presents the methodology that will be used by utilities to calculate the dynamic torque requirements for butterfly valves. The total dynamic torque at any disc position is the sum of the hydrodynamic torque, the bearing torque (which is induced by the hydrodynamic force), and other small torque components (such as packing torque). The hydrodynamic torque on the valve disc, caused by the fluid flow through the valve, depends on the disc angle, flow velocity, upstream flow disturbances, disc shape, and the disc aspect ratio. The butterfly valve model provides sets of nondimensional flow and torque coefficients that can be used to predict flow rate and hydrodynamic torque throughout the disc stroke, and to calculate the required actuation torque and the maximum transmitted torque throughout the opening and closing stroke. The scope of the model includes symmetric and nonsymmetric discs of different shapes and aspect ratios in compressible and incompressible fluid applications, under both choked and nonchoked flow conditions. The model features were validated against test data from a comprehensive flow-loop and in situ test program. These tests were designed to systematically address the effect of the following parameters on the required torque: valve size, disc shape and disc aspect ratio, upstream elbow orientation and its proximity, and flow conditions. The applicability of the nondimensional coefficients to valves of different sizes was validated by performing tests on a 42-in. valve and a precisely scaled 6-in. model. The butterfly valve model torque predictions were found to bound test data from the flow-loop and in situ testing, as shown in the examples provided in this paper
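
    A minimal sketch of the torque decomposition named in the abstract (hydrodynamic plus bearing plus packing), assuming the common nondimensional form T_hydro = Ct * D^3 * dP and a bearing friction torque driven by the pressure load on the disc; the coefficient values are invented and are not the EPRI model's actual correlations.

```python
import math

# Torque components at one disc angle; all numeric values are assumptions.
D = 0.3          # disc diameter (m)
dP = 5.0e5       # pressure drop across the valve (Pa)
Ct = 0.04        # hydrodynamic torque coefficient at this angle (assumed)
mu_b = 0.25      # bearing friction coefficient (assumed)
d_shaft = 0.05   # shaft diameter (m)
T_pack = 30.0    # packing torque (N*m, assumed)

T_hydro = Ct * D**3 * dP                 # hydrodynamic torque
F_disc = dP * math.pi * D**2 / 4         # net fluid force on the disc (approx.)
T_bear = mu_b * F_disc * d_shaft / 2     # friction torque at the bearing
T_total = T_hydro + T_bear + T_pack

print(f"T_hydro = {T_hydro:.0f} N*m, T_bearing = {T_bear:.0f} N*m, "
      f"T_total = {T_total:.0f} N*m")
```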

  19. System Anthropological Psychology: Methodological Foundations

    Directory of Open Access Journals (Sweden)

    Vitaliy Y. Klochko

    2012-01-01

    The article considers the methodological foundations of system anthropological psychology (SAP), a scientific branch developed by a well-represented group of Siberian scientists. SAP is a theory based on the axiomatics of the cultural-historical psychology of L.S. Vygotsky and on transspective analysis, a specially developed means to define the tendencies of science developing as a self-organizing system. Transspective analysis has revealed regularities in the constantly growing complexity of professional-psychological thinking along the course of the emergence of scientific cognition. It has proved that the field of modern psychology is shaped by theories constructed with ideation of different grades of complexity. The concept "dynamics of the paradigm of science" is introduced; it allows transitions to be acknowledged from the ordinary-binary logic characteristic of classical science to a binary-ternary logic, adequate to non-classical science, and then to a ternary-multidimensional logic, which is now at the stage of emergence. The latter is employed in the construction of SAP. It involves the following basic methodological principles: the principle of directed (selective) interaction and the principle of the generative effect of selective interaction. The concept of "complementary interaction", applied in the natural as well as the humanitarian sciences, is reconsidered in the context of psychology. The conclusion is made that the principle of selectivity and directedness of interaction is relevant to the whole Universe, embracing all kinds of systems including living ones. Different levels of matter organization, representing semantic structures of various complexity, use one and the same principle of meaning making, through which the Universe ensures its sustainability as a self-developing phenomenon. This methodology provides an explanation for the nature and stages of emergence of the multidimensional life space of an individual, which comes as a foundation for the generation of such features of

  20. CIAU methodology and BEPU applications

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.

    2009-01-01

    Best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore the use of best-estimate codes within reactor technology, whether for design or safety purposes, implies understanding and accepting the limitations and deficiencies of those codes. Uncertainties may have different origins, ranging from the approximation of the models, to the approximation of the numerical solution, to the lack of precision of the values adopted for boundary and initial conditions. The amount of uncertainty that affects a calculation may strongly depend upon the codes and the modeling techniques (i.e. the code's users). A consistent and robust uncertainty methodology must be developed taking all the above aspects into consideration. The CIAU (Code with the capability of Internal Assessment of Uncertainty) and the UMAE (Uncertainty Methodology based on Accuracy Evaluation) methods have been developed by the University of Pisa (UNIPI) in the framework of long-lasting research activities started in the 1980s and involving several researchers. CIAU is extensively discussed in the available technical literature, Refs. [1, 2, 3, 4, 5, 6, 7], and tens of additional relevant papers, providing comprehensive details about the method, can be found in the bibliography lists of the above references. Therefore, the present paper supplies only 'spot-information' about CIAU and focuses mostly on applications to some cases of industrial interest. In particular, the application of CIAU to the OECD BEMUSE (Best Estimate Methods Uncertainty and Sensitivity Evaluation, [8, 9]) project is discussed, together with a critical comparison with respect to other uncertainty methods (in relation to items like: sources of uncertainties, selection of the input parameters and quantification of
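
    A minimal sketch of the accuracy-extrapolation idea behind CIAU is given below, collapsed to a single driving quantity: the current best-estimate value selects a "hypercube" whose stored, accuracy-derived uncertainty widens into a band around the trajectory. The partition and uncertainty values are assumed; CIAU proper works in a multi-dimensional plant-state space and applies a separate time uncertainty as well.

```python
import bisect

# Illustrative-only sketch of attaching CIAU-style uncertainty bands to
# a best-estimate trajectory. The plant state is reduced here to one
# driving quantity (pressure); each interval ("hypercube") stores a
# relative quantity uncertainty derived from accuracy evaluation
# against experiments. All numbers are assumed, not CIAU data.

P_EDGES = [0.1, 2.0, 8.0, 16.0]        # hypercube edges in pressure (MPa)
REL_UNCERTAINTY = [0.12, 0.08, 0.05]   # assumed value per hypercube

def band(p_best):
    """Return the (lower, upper) uncertainty band for a best-estimate value."""
    i = bisect.bisect_right(P_EDGES, p_best) - 1
    i = max(0, min(i, len(REL_UNCERTAINTY) - 1))
    du = REL_UNCERTAINTY[i] * p_best
    return p_best - du, p_best + du

if __name__ == "__main__":
    # A toy best-estimate pressure trajectory (MPa) vs time (s).
    trajectory = [(0.0, 15.5), (5.0, 9.0), (20.0, 4.2), (60.0, 1.1)]
    for t, p in trajectory:
        lo, hi = band(p)
        print(f"t={t:5.1f} s  P={p:5.2f} MPa  band=[{lo:5.2f}, {hi:5.2f}]")
```

    The design point this sketch tries to convey is that the uncertainty is internal to the calculation: the bands come from the stored accuracy database selected by the plant state, not from rerunning the code with perturbed inputs as sampling-based methods do.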