WorldWideScience

Sample records for computational thermo-hydraulic model

  1. Dynamic thermo-hydraulic model of district cooling networks

    International Nuclear Information System (INIS)

    Oppelt, Thomas; Urbaneck, Thorsten; Gross, Ulrich; Platzer, Bernd

    2016-01-01

    Highlights: • A dynamic thermo-hydraulic model for district cooling networks is presented. • The thermal modelling is based on water segment tracking (Lagrangian approach). • Thus, numerical errors and balance inaccuracies are avoided. • Verification and validation studies proved the reliability of the model. - Abstract: In the present paper, the dynamic thermo-hydraulic model ISENA is presented, which can be applied to answer questions arising in the design and operation of district cooling networks, e.g. those related to economic and energy efficiency. The network model consists of a quasistatic hydraulic model and a transient thermal model based on tracking water segments through the whole network (Lagrangian method). With this approach, numerical errors and balance inaccuracies can be avoided, which leads to a higher quality of results compared with other network models. Verification and validation calculations are presented in order to show that ISENA provides reliable results and is suitable for practical application.
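
    The segment-tracking idea can be illustrated with a minimal sketch (not the ISENA implementation, which handles whole networks, heat losses and a coupled hydraulic solution): each water parcel entering a pipe is stored with its temperature and pushed through at the bulk velocity, so a temperature front is never numerically smeared.

```python
from collections import deque

# Minimal Lagrangian water-segment tracking through a single pipe.
# Each segment is [length_m, temperature_C]; new segments enter at the
# inlet and the same total length is popped at the outlet each step.

def advance(pipe, inlet_temp, inflow_len):
    """Advance the pipe contents by inflow_len metres of new water."""
    pipe.appendleft([inflow_len, inlet_temp])
    out_len, out_energy = inflow_len, 0.0
    while out_len > 1e-12:
        seg = pipe[-1]                      # segment at the outlet end
        take = min(seg[0], out_len)
        out_energy += take * seg[1]
        seg[0] -= take
        out_len -= take
        if seg[0] <= 1e-12:
            pipe.pop()
    return out_energy / inflow_len          # length-averaged outlet temperature

pipe = deque([[10.0, 6.0]])                 # 10 m pipe, initially at 6 degC
velocity, dt = 1.0, 1.0                     # m/s, s
for step in range(15):                      # inlet steps to 12 degC at t = 0
    t_out = advance(pipe, 12.0, velocity * dt)
    print(f"t = {step + 1:2d} s   T_out = {t_out:.2f} degC")
```

    The outlet temperature stays at 6 degC for exactly the 10 s transit time and then jumps sharply, with no artificial smearing of the front.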

  2. Thermo-Hydraulic Modelling of Buffer and Backfill

    International Nuclear Information System (INIS)

    Pintado, X.; Rautioaho, E.

    2013-09-01

    The temporal evolution of saturation, liquid pressure and temperature in the components of the engineered barrier system was studied using numerical methods. A set of laboratory tests was conducted to calibrate the parameters employed in the models. The modelling consisted of thermal, hydraulic and thermo-hydraulic analyses in which the significant thermo-hydraulic processes, parameters and features were identified. CODE_BRIGHT was used for the finite element modelling, and supplementary calculations were conducted with analytical methods. The main objective of this report is to improve understanding of the thermo-hydraulic processes and material properties that affect buffer behaviour in the Olkiluoto repository and to determine the parametric requirements of models for the accurate prediction of this behaviour. The analyses consisted of evaluating the influence of the initial canister temperature and of gaps in the buffer, and the role played by fractures and the rock mass located between fractures in supplying water for buffer and backfill saturation. In the thermo-hydraulic analysis, the primary processes examined were the effects of buffer drying near the canister on the temperature evolution and the manner in which heat flow affects the buffer saturation process. Uncertainties in parameters and variations in the boundary conditions, modelling geometry and thermo-hydraulic phenomena were assessed with a sensitivity analysis. The material parameters, constitutive models and assumptions made were carefully selected for all the modelling cases. The reference parameters selected for the simulations were compared and evaluated against laboratory measurements. The modelling results highlight the importance of understanding groundwater flow through the rock mass and from fractures in the rock in order to achieve reliable predictions regarding buffer saturation, since saturation times could range from a few years to tens of thousands of years depending on the hydrogeological conditions.

  3. Temperature control of CMS Barrel ECAL (EB) : computational thermo-hydraulic model for dynamic behaviour, control aspects

    CERN Document Server

    Wertelaers, P

    2010-01-01

    The current design foresees a central heat exchanger followed by a controlled post-heater for all of ECAL. We discuss the scheme and try to assess its performance from a Barrel viewpoint, based on computational work. The coolant transfer pipes play an essential role in building a dynamical model. After some studies of the behaviour of the cooling circuit itself, a robust yet simple controller is proposed. The system with feedback control is then scrutinized, with emphasis on disturbance rejection. The most relevant disturbances are cooling ripple, pipe heat attack, and the switching of the electronics.
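
    The disturbance-rejection setup can be sketched generically as a PI controller acting on a first-order plant with an additive sinusoidal disturbance standing in for cooling ripple. This is only an illustrative toy loop; none of the time constants, gains or temperatures below are CMS ECAL values.

```python
import math

# Discrete PI temperature control of a post-heater modelled as a
# first-order lag, with a sinusoidal disturbance (cooling ripple).
tau, dt = 30.0, 0.5            # plant time constant [s], controller period [s]
kp, ki = 3.0, 0.3              # PI gains, hand-tuned for this toy plant
T_base = 17.0                  # temperature reached with zero heater power
setpoint = 18.0
T, integ = T_base, 0.0

for k in range(1200):          # simulate 10 minutes
    t = k * dt
    ripple = 0.3 * math.sin(2 * math.pi * t / 60.0)   # disturbance [degC]
    err = setpoint - T
    integ += err * dt
    u = kp * err + ki * integ                         # heater command [degC]
    T += dt / tau * (T_base + u + ripple - T)         # first-order response
    if k % 240 == 0:
        print(f"t = {t:5.1f} s   T = {T:.3f} degC   u = {u:+.3f}")
```

    The integral term holds the mean offset at zero while the proportional term attenuates the ripple; sharpening this attenuation is exactly the disturbance-rejection question the note examines.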

  4. On three-dimensional nuclear thermo-hydraulic computation techniques for ATR

    International Nuclear Information System (INIS)

    1997-08-01

    The three-dimensional coupled nuclear thermo-hydraulic computation code LAYMON-2A is used for calculating the power distribution and the control rod reactivity worth of the ATR. This code provides various functions required for planning core operation, such as the search for the critical boric acid concentration, and can perform various simulations such as core burnup calculations. Further, the three-dimensional code for xenon dynamics in the core, LAYMON-2C, in which the xenon-samarium dynamics equations were incorporated into the LAYMON-2A code, can take into account the time variation of the xenon-samarium concentration accompanying changes of power level and power distribution, and it is used for analyzing the spatial oscillation characteristics of the power and the regional power control characteristics due to xenon in the core. For LAYMON-2A, the computation flow, the power distribution and thermo-hydraulic computation models, and the criticality search function are explained. For LAYMON-2C, the computation flow is described. A comparison of values calculated with the LAYMON-2A code against operation data from Fugen is reported. (K.I.)
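
    The xenon-samarium dynamics that LAYMON-2C adds can be illustrated with the standard iodine-135/xenon-135 balance equations. The sketch below integrates them through a power step using textbook one-group constants; all values are generic, none of them is ATR data.

```python
# I-135 / Xe-135 balance equations integrated explicitly through a power step.
lam_I, lam_X = 2.87e-5, 2.09e-5      # decay constants [1/s]
gam_I, gam_X = 0.064, 0.003          # cumulative fission yields
sig_X = 2.6e-18                      # Xe-135 absorption cross-section [cm^2]
Sf = 0.05                            # macroscopic fission cross-section [1/cm]
phi = 3e13                           # flux [n/cm^2/s] at full power

# start from full-power equilibrium concentrations
I = gam_I * Sf * phi / lam_I
X = (gam_X + gam_I) * Sf * phi / (lam_X + sig_X * phi)

phi = 1.5e13                         # step down to 50 % power
dt = 60.0                            # time step [s]
for h in range(48 * 60):             # follow 48 hours
    dI = gam_I * Sf * phi - lam_I * I
    dX = gam_X * Sf * phi + lam_I * I - lam_X * X - sig_X * phi * X
    I += dI * dt
    X += dX * dt
    if h % 360 == 359:               # print every 6 h
        print(f"t = {(h + 1) / 60:5.1f} h   Xe = {X:.3e} atoms/cm^3")
```

    The xenon concentration first rises (the familiar post-shutdown peak, driven by the decaying iodine inventory) before relaxing to the new, lower equilibrium; this is the time variation the code tracks in three dimensions.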

  5. Computer code HYDRO-ACE for analyzing thermo-hydraulic phenomena in the BWR core

    International Nuclear Information System (INIS)

    Abe, Kiyoharu; Naito, Yoshitaka

    1979-10-01

    A computer code, HYDRO-ACE, has been developed for analyzing thermo-hydraulic phenomena in the BWR core under forced or natural circulation of cooling water. The code is composed of two main calculation routines: one for single channels such as the riser, separator and downcomer, and one for multiple channels such as the reactor core with a heated zone. Functionally, the code is divided into many subroutines that can be connected straightforwardly, so that the user can freely choose a given calculation course by simply arranging the subroutines. In the program, the void fraction is calculated by Maurer's method, the two-phase frictional pressure drop by Martinelli-Nelson's, and the critical heat flux ratio by Hench-Levy's. The coolant flow distributions in the JPDR-II core calculated by the code are in good agreement with those measured. (author)
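
    For illustration, a generic slip-ratio void-fraction model of the kind such codes evaluate is sketched below. It is not Maurer's correlation itself (whose coefficients are not reproduced here), and the densities are rough saturation values at 7 MPa.

```python
# Generic slip-ratio void-fraction model for a boiling channel.
def void_fraction(x, rho_l, rho_g, slip=2.0):
    """Void fraction from flow quality x with a constant slip ratio."""
    if x <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l) * slip)

# BWR-like conditions at ~7 MPa: rho_l ~ 740 kg/m^3, rho_g ~ 37 kg/m^3
for x in (0.01, 0.05, 0.10, 0.20):
    print(f"x = {x:.2f}   alpha = {void_fraction(x, 740.0, 37.0):.3f}")
```

    Because the vapour is so much lighter than the liquid, even a few percent quality produces a large void fraction, which is why the choice of void correlation dominates the channel pressure-drop and flow-distribution results.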

  6. A novel thermo-hydraulic coupling model to investigate the crater formation in electrical discharge machining

    Science.gov (United States)

    Tang, Jiajing; Yang, Xiaodong

    2017-09-01

    A novel thermo-hydraulic coupling model was proposed in this study to investigate crater formation in electrical discharge machining (EDM). The temperature distribution of the workpiece material was included, and the crater formation process was explained from the perspective of the hydrodynamic characteristics of the molten region. To better track the morphology of the crater and the movement of debris, the level-set method was introduced. Simulation results showed that the crater appears shortly after the ignition of the discharge, and the molten material is removed by vaporization in the initial stage and subsequently by splashing. The driving force for the detachment of debris in the splashing removal stage comes from the extremely large pressure difference in the upper part of the molten region, and the morphology of the crater is also influenced by the shearing flow of molten material. It was found that the removal ratio of molten material is only about 7.63% under the studied conditions, leaving most of it to form the re-solidification layer on the surface of the crater. The size of the crater reaches its maximum at the end of the discharge duration and then experiences a slight reduction because of the reflux of molten material after the discharge. The results of single-pulse discharge experiments showed that the morphologies and sizes of the simulated and actual craters are in good agreement, verifying the feasibility of the proposed thermo-hydraulic coupling model in explaining the mechanisms of crater formation in EDM.

  7. The delay function in finite difference models for nuclear channels thermo-hydraulic transients

    International Nuclear Information System (INIS)

    Agazzi, A.

    1977-01-01

    The study of thermo-hydraulic transients in a nuclear reactor core often requires a two- or three-dimensional mathematical simulation of a reactor channel. The equations involved are generally solved by means of finite-difference methods. The determination of the spatial mesh-width and the time interval is strongly constrained by the need for good accuracy in describing the delay function, which defines the transfer of thermal perturbations along the cooling channel. In this paper the effects of both space and time discretization on the delay function are considered, and for the classical cases of an inlet temperature step and ramp, universal functions and diagrams are given which allow the optimal spatial mesh-width and time interval to be determined once the required accuracy of the model has been fixed in advance.
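
    The discretization effect the paper studies can be demonstrated directly: a first-order upwind scheme transporting an inlet temperature step smears the front that the exact delay function would keep sharp. The sketch below is generic, not the paper's scheme, and simply counts how many cells the front spreads over.

```python
import numpy as np

# First-order upwind advection of an inlet temperature step: with Courant
# number C < 1 the front, which should stay perfectly sharp, is diffused
# over several cells by numerical diffusion ~ u*dx*(1 - C)/2.
n, u, dx = 100, 1.0, 0.01        # cells, velocity [m/s], mesh width [m]
dt = 0.5 * dx / u                # Courant number C = 0.5
C = u * dt / dx
T = np.zeros(n)                  # normalized channel temperature

for step in range(int(0.5 * n / C)):      # advance half the transit time
    T[1:] = T[1:] - C * (T[1:] - T[:-1])  # upwind update
    T[0] = 1.0                            # inlet step held at 1

# Exact solution: a sharp front at x = u*t. Count the smeared cells.
smeared = np.sum((T > 0.01) & (T < 0.99))
print(f"front smeared over {smeared} cells (exact: 0-1 cells)")
```

    Refining dx and dt, or pushing C toward 1, sharpens the computed delay function; this trade-off between mesh size, time step and accuracy is exactly what the paper's universal diagrams quantify.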

  8. Analytical model for calculation of the thermo hydraulic parameters in a fuel rod assembly

    Energy Technology Data Exchange (ETDEWEB)

    Cesna, B., E-mail: benas@mail.lei.l [Lithuanian Energy Institute, Laboratory of Nuclear Installation Safety, Breslaujos g. 3, LT-44403 Kaunas (Lithuania)

    2010-11-15

    Research highlights: • The proposed calculation model can be used for rapid calculation of bundles with rods spaced by wire wrapping or honeycomb-type spacer grids. • The model accounts for three cross-flow mixing mechanisms. • The DARS program enables the analysis of experimental results. - Abstract: The paper presents a procedure for the cell-wise calculation of the thermo-hydraulic parameters of a single-phase gas flow in a fuel rod assembly. The procedure is implemented in the DARS program. The program is intended for calculating the distribution of the gaseous coolant parameters and wall temperatures for an arbitrary, geometrically specified arrangement of the rods in the fuel assembly and an arbitrary heat release in the rods, functionally specified in space. In the mathematical model, the flow cross-section of the intricately shaped channel is conventionally divided into elementary cells formed by straight lines connecting the centers of the rods. Within a single cell, the coolant parameters and the temperature of the corresponding part of the rod surface are assumed constant. The entire fuel assembly is viewed as a system of parallel interconnected channels. The DARS program is illustrated by the calculation of the temperature regime of an 85-rod assembly with wire-wrap spacers on the rods.

  9. ASCOT-1: a computer program for analyzing the thermo-hydraulic behavior in a PWR core during a LOCA

    International Nuclear Information System (INIS)

    Kobayashi, Kensuke; Sato, Kazuo

    1978-09-01

    A digital computer code, ASCOT-1, has been developed to analyze the thermo-hydraulic behavior in a PWR core during a loss-of-coolant accident. The core is assumed to be axisymmetric and two-dimensional, and the conservation laws are solved by the method of characteristics. For the temperature response of the representative fuel rods of the concentric annular subregions into which the core is divided, the heat conduction equations are solved by the explicit method using the averaged flow conditions determined above. The boundary conditions at the upper and lower plenum are given as inputs. The program uses adjustable dimensions, so there are no restrictions on the number of meshes. ASCOT-1 is written in FORTRAN-IV for the FACOM230-75. (author)
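
    The explicit conduction step can be illustrated with a minimal radial solver for a fuel pellet with uniform heat generation. Properties and power below are rough UO2-like values, not ASCOT-1 input data, and the result is checked against the analytical steady-state centre-to-surface temperature difference.

```python
import numpy as np

# Explicit finite-difference solution of 1-D radial heat conduction in a
# fuel pellet with uniform volumetric heating and a fixed surface temperature.
R, n = 0.0047, 30                 # pellet radius [m], radial nodes
k, rho, cp = 3.0, 10400.0, 300.0  # W/m/K, kg/m^3, J/kg/K
qv = 3.0e8                        # volumetric heat rate [W/m^3]
T_surf = 400.0                    # imposed surface temperature [degC]

r = np.linspace(1e-6, R, n)       # avoid the r = 0 singularity at the centre
dr = r[1] - r[0]
alpha = k / (rho * cp)
dt = 0.4 * dr**2 / alpha          # below the explicit limit dr^2/(2*alpha)
T = np.full(n, T_surf)

for _ in range(20000):            # march to steady state
    Tn = T.copy()
    # d2T/dr2 + (1/r) dT/dr + qv/k on interior nodes, centred differences
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
        + qv / k)
    Tn[0] = Tn[1]                 # symmetry at the centreline
    Tn[-1] = T_surf               # fixed surface temperature
    T = Tn

# Analytical steady state: T(0) - T(R) = qv * R^2 / (4 k)
print(f"centre T = {T[0]:.0f} degC, analytic {T_surf + qv * R**2 / (4 * k):.0f} degC")
```

    The explicit update is simple but stability-limited: the time step must shrink with the square of the mesh width, which is why such conduction solvers are usually the cheapest part of a coupled code.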

  10. Assessment of TRAC-PD2 reflood core thermo-hydraulic model by CCTF Test C1-16

    International Nuclear Information System (INIS)

    Sugimoto, Jun

    1982-11-01

    The TRAC-PD2 reflood core thermo-hydraulic model was assessed by CCTF Test C1-16. The measured data were utilized as core boundary conditions in the TRAC calculations. The results indicate that the core inlet liquid temperature and the core heater rod temperatures are in reasonable agreement with data, but the pressure distribution in the core and water pool formation in the upper plenum are not in good agreement. The parametric effects of the droplet critical Weber number, the material properties of the heater rod, the noding of the upper plenum, and the minimum stable film boiling temperature are also discussed. (author)

  11. Development of a model of a NSSS of the PWR reactor with thermo-hydraulic code GOTHIC

    International Nuclear Information System (INIS)

    Gomez Garcia-Torano, I.; Jimenez, G.

    2013-01-01

    The thermo-hydraulic code GOTHIC is often used in the nuclear industry for licensing transient analyses inside the containment of Generation II plants (PWR, BWR) as well as Gen III and III+ designs (AP1000, ESBWR, APWR). After the mass and energy released to the containment, previously calculated by other codes (e.g. TRACE), have been entered, GOTHIC allows the evolution of the basic containment parameters to be calculated in detail.

  12. Neutron and thermo-hydraulic model of a reactivity transient in a nuclear power plant fuel element

    International Nuclear Information System (INIS)

    Oliva, Jose de Jesus Rivero

    2012-01-01

    A reactivity transient without reactor scram was modeled and calculated using analytical expressions for the space distributions of the temperature fields, combined with discrete numerical calculations for the time dependences of the thermal power and temperatures. The transient analysis covered the time dependences of the reactivity, global thermal power, fuel heat flux, and temperatures in the fuel, cladding and cooling water. The model was implemented in Microsoft Office Excel, dividing the Excel file into several separate worksheets for input data, initial steady-state calculations, parameters not depending on eigenvalues, eigenvalue determination, parameters depending on eigenvalues, the transient calculation, and graphical representation of intermediate and final results. The results show how the thermal power reaches a new equilibrium state due to the negative reactivity feedback derived from the fuel temperature increment. Nevertheless, the reactor mean power increases by 40% during the first second and, in the hottest channel, the maximum fuel temperature reaches a significantly high value, slightly above 2100 °C, after 8 seconds of the transient. Consequently, the results confirm that a certain degree of fuel damage could be expected in case of a reactor scram failure. Once the basic model has been established, the scope of accidents for future analyses can be extended by modifying the nuclear power behavior (reactivity) during the transient and the boundary conditions for the coolant temperature. A more complex model for an annular fuel element is underway. (author)
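
    The feedback mechanism described (power rising after a reactivity step until the fuel-temperature, or Doppler, feedback cancels the inserted reactivity) can be reproduced with a one-delayed-group point-kinetics sketch. All coefficients below are generic illustrative values, not the paper's data.

```python
# Point kinetics with one delayed-neutron group and a Doppler-type fuel
# temperature feedback; explicit Euler integration of a step insertion
# without scram. Power settles where alpha_f * dTf cancels rho_ext.
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay [1/s], gen. time [s]
alpha_f = -2.0e-5                     # fuel temperature coefficient [1/K]
a, tau = 150.0, 4.0                   # fuel heat-up gain [K per unit power], lag [s]
rho_ext = 0.003                       # inserted reactivity (step, no scram)

P, C, dTf = 1.0, beta / (lam * Lam), 0.0   # equilibrium initial condition
dt = 1e-4
for k in range(int(30.0 / dt)):
    rho = rho_ext + alpha_f * dTf          # net reactivity with feedback
    dP = (rho - beta) / Lam * P + lam * C
    dC = beta / Lam * P - lam * C
    P += dP * dt
    C += dC * dt
    dTf += dt / tau * (a * (P - 1.0) - dTf)
    if k % int(5.0 / dt) == 0:
        print(f"t = {k * dt:5.1f} s   P = {P:6.3f}   dTf = {dTf:6.1f} K   rho = {rho:+.5f}")
```

    With these toy numbers the power first jumps promptly and then levels off near twice the initial value, where the temperature-driven negative reactivity exactly offsets the insertion; the qualitative shape matches the behaviour the abstract describes.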

  13. Thermo-hydraulic modelling of the South East Gas Pipeline System - an integrated model; Modelagem termo-hidraulica do Sistema de Gasodutos do Sudeste : um modelo integrado

    Energy Technology Data Exchange (ETDEWEB)

    Vianna Neto, Armando M.; Santos, Arnaldo M.; Mercon, Eduardo G. [TRANSPETRO - PETROBRAS Transportes, Rio de Janeiro, RJ (Brazil)

    2003-07-01

    This paper presents the development of an integrated simulation model for the numerical calculation of the thermal-hydraulic behavior of the Brazilian southeast onshore gas pipeline flow system, remotely operated by TRANSPETRO's Gas Pipeline Control Centre (CCG). In its final application, this model is expected to provide simulated results as close to reality as possible, in order to improve gas pipeline simulation studies and evaluations for the system in question. Since numerical thermo-hydraulic simulation has become the CCG's most important tool for analyzing the boundary conditions used to adjust the gas flow system, this paper aims at optimizing the following prime attributions of a gas pipeline control centre: verification of the system behavior in the face of unit maintenance stops or procedures, programmed or not, or of new gas outlet or inlet connections to the system; daily operational compatibility analysis between programmed and realized gas volumes; and analysis of gas dispatch and delivery. All of this work was conceived and carried out within the one-phase (dry gas) flow domain. (author)

  14. Several new thermo-hydraulic test facilities in NPIC

    International Nuclear Information System (INIS)

    Ye Shurong; Sun Yufa; Ji Fuyun; Zong Guifang; Guo Zhongchuan

    1997-01-01

    Several new thermo-hydraulic test facilities are under construction at the Nuclear Power Institute of China (NPIC) in Chengdu. These facilities include: 1. a nuclear power component comprehensive test facility; 2. a reactor hydraulic modeling test facility; 3. a control rod drive line hydraulic test facility; 4. a large-scale thermo-hydraulic test facility. The construction of these facilities will greatly advance the research and development capability of nuclear power technology in China. The author presents a brief description of the design parameters, flowcharts and test programs of these facilities.

  15. The applicability of CFD to simulate and study the mixing process and the thermo-hydraulic consequences of a main steam line break in a PWR model

    Directory of Open Access Journals (Sweden)

    Farkas Istvan

    2017-01-01

    This paper focuses on the validation and applicability of CFD to simulate and analyze the thermo-hydraulic consequences of a main steam line break. Extensive validation data come from experiments performed using the Rossendorf coolant mixing model facility. For the calculation, meshes of 9 to 12 million hexahedral cells were constructed to capture all details in the interrogation domain of the system. The analysis was performed by running a time-dependent calculation. Detailed analyses were made at different cross-sections in the system to evaluate not only the maximum and minimum temperature values but also the location and the time at which they occur during the transient, which is considered an indicator of the quality of mixing in the system. CFD and experimental results were qualitatively compared; mixing in the cold legs with emergency core cooling injection was overestimated. This could be explained by the sensitivity to the boundary conditions. In the downcomer, the experiments displayed higher mixing; by our assumption this is related to the dense measurement grids, which were not modelled. The temperature distribution in the core inlet plane agreed with the measurement results. Minor deviations were seen in the quantitative comparisons: the maximum temperature difference was 2 °C.

  16. Thermo-hydraulic and structural analysis for finger-based concept of ITER blanket first wall

    International Nuclear Information System (INIS)

    Kim, Byoung-Yoon; Ahn, Hee-Jae

    2011-01-01

    The blanket first wall is one of the main plasma-facing components in the ITER tokamak. A finger-type first wall was proposed in the course of the current design progress by the ITER Organization. In this concept, each first wall module is composed of a beam and twenty fingers. The main function of the first wall is to efficiently remove the high heat flux loading from the fusion plasma during operation. Therefore, the thermal and structural performance of the proposed finger-based first wall design concept should be investigated. Various case studies were performed for a unit finger model considering different loading conditions. The finite element model was built for half of a module, using symmetric boundary conditions to reduce the computational effort. A thermo-hydraulic analysis was performed to obtain the pressure drop and temperature profiles. Then a structural analysis was carried out using the maximum temperature distribution obtained in the thermo-hydraulic analysis. Finally, a transient thermo-hydraulic analysis was performed for the generic first wall module to obtain the temperature evolution history considering cyclic heat flux loading with nuclear heating. After that, a thermo-mechanical analysis was performed at the time step when the maximum temperature gradient occurred. In addition, a stress analysis was performed for the component with a finger and a beam to check the residual stress of the component after thermal shrinkage assembly.

  17. Sensitivity analysis for thermo-hydraulics model of a Westinghouse type PWR. Verification of the simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Farahani, Aref Zarnooshe [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Nuclear Engineering, Science and Research Branch; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club

    2017-07-15

    Development of a steady-state model is the first step in nuclear safety analysis. The developed model should first be analyzed qualitatively; then a sensitivity analysis on the number of nodes is required for the models of the different systems to ensure the reliability of the obtained results. This contribution aims to show, through sensitivity analysis, the independence of the modeling results from the number of nodes in a qualified MELCOR model of a Westinghouse-type pressurized water reactor plant. For this purpose, and to minimize user error, the nuclear analysis software SNAP is employed. Different sensitivity cases were developed by modifying the existing model and refining the nodes of the simulated systems, including the steam generators, the reactor coolant system, and the reactor core with its connecting flow paths. Comparing the obtained results with those of the original model, no significant difference is observed, which indicates the independence of the model from finer nodalization.

  18. A Semi-implicit Numerical Scheme for a Two-dimensional, Three-field Thermo-Hydraulic Modeling

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Jeong, Jaejoon

    2007-07-01

    The behavior of two-phase flow is modeled, depending on the purpose, by a homogeneous model, a drift flux model, or a separated flow model. Among these, in the separated flow model, the behavior of each phase is described by its own governing equations, together with interphase models which describe the thermal and mechanical interactions between the phases involved. In this study, a semi-implicit numerical scheme for a two-dimensional, transient, two-fluid, three-field model is derived. The work extends a previous study of the staggered, semi-implicit numerical scheme in one-dimensional geometry (KAERI/TR-3239/2006). The two-dimensional extension is performed by specifying the relevant set of governing equations and applying the related finite differencing method. The procedure for employing the semi-implicit scheme is also described in detail. Verifications are performed for a two-dimensional vertical plate for single-phase and two-phase flows. The calculations verify mass and energy conservation. The symmetric flow behavior in the verification problem also confirms the momentum conservation of the numerical scheme.
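
    The essence of a semi-implicit scheme, treating the pressure-velocity coupling implicitly so the time step is not limited by the acoustic CFL condition, can be shown on a much simpler problem: linear one-dimensional acoustics on a staggered grid. The sketch below is a single-phase illustration of the idea, not the KAERI three-field scheme.

```python
import numpy as np

# Semi-implicit staggered-grid scheme for linear 1-D acoustics: density rho
# at cell centres, velocity u at cell faces, pressure-velocity coupling
# implicit. Eliminating u^{n+1} from the continuity equation leaves a
# tridiagonal (Helmholtz-like) system for rho^{n+1}.
n, dx, c, rho0 = 100, 0.01, 100.0, 1.0
dt = 5 * dx / c                          # 5x the explicit acoustic CFL limit
sig = (c * dt / dx) ** 2

rho = np.ones(n)
rho[40:60] += 0.01                       # small pressure pulse in the middle
u = np.zeros(n + 1)                      # face velocities; closed ends (walls)
mass0 = rho.sum() * dx

A = np.zeros((n, n))                     # implicit operator (I - sig * Laplacian)
for i in range(n):
    A[i, i] = 1 + 2 * sig
    if i > 0:
        A[i, i - 1] = -sig
    if i < n - 1:
        A[i, i + 1] = -sig
A[0, 0] -= sig                           # wall cells see only one neighbour
A[-1, -1] -= sig

for step in range(200):
    rhs = rho - dt * rho0 * (u[1:] - u[:-1]) / dx
    rho = np.linalg.solve(A, rhs)                              # implicit mass eq.
    u[1:-1] -= dt * c**2 / rho0 * (rho[1:] - rho[:-1]) / dx    # new-time gradient

print(f"total mass: initial {mass0:.6f}, final {rho.sum() * dx:.6f}")
```

    Because every row and column of the implicit matrix sums to one and the wall velocities stay zero, the total mass is conserved to round-off, the same discrete conservation property the paper verifies for its two-dimensional scheme.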

  19. Simplified model for the thermo-hydraulic simulation of the hot channel of a PWR type nuclear reactor

    International Nuclear Information System (INIS)

    Belem, J.A.T.

    1993-09-01

    The present work deals with the thermal-hydraulic analysis of the hot channel of a standard PWR reactor using a simplified mathematical model that assumes a constant water mass flux during single-phase flow and a reduction of the flow as the steam quality increases along the channel (two-phase flow). The model has been applied to the Angra-1 reactor and has proved satisfactory when compared with other models. (author). 25 refs, 15 figs, 3 tabs

  20. Application of CFD methods in research of SCWR thermo-hydraulics

    International Nuclear Information System (INIS)

    Zeng Xiaokang; Li Yongliang; Yan Xiao; Xiao Zejun; Huang Yanping

    2013-01-01

    The CFD method has become an important tool in research on SCWR thermo-hydraulics. Currently, CFD methods commonly use subcritical turbulence models, which cannot accurately simulate the effects of gravity and thermal-expansion acceleration, and the CFD numerical method is not applicable when the heat flux is large. The paper summarizes the application status of CFD methods in research on SCWR thermo-hydraulics in RETH. (authors)

  1. Coupled nuclear-thermo-hydraulic analysis of the parallel channels of a BWR reactor using distributed computing; Analisis acoplado nuclear-termohidraulico de canales paralelos de un reactor BWR empleando computacion distribuida

    Energy Technology Data Exchange (ETDEWEB)

    Campos Gonzalez, Rina Margarita

    2007-07-15

    This work consists of the integration of three previously developed models that are widely described in the literature: a thermo-hydraulic channel model, a modal neutronics model, and a recirculation loop model. The tool used for this coupling is the PVM (Parallel Virtual Machine) system, which allowed the model to be parallelized by means of distributed computation. The purpose of coupling these models is to obtain a more complete tool that better represents the real configuration and the phenomenology of the core of a BWR reactor, thus obtaining better results, while keeping the flexibility to improve the resulting model at any time. Very complex or sophisticated models are difficult to improve, since it may be impossible to modify the equations they use, and they can include variables that are not of primary importance for the problem at hand, or that mask relations among variables due to the excess of results. The approach also keeps the flexibility to add further component models or systems of the BWR reactor, following the modeling needs. The Swedish Ringhals power plant was chosen to characterize the resulting coupled model, because a stability benchmark offering real plant data is available for it. Moreover, in case 9 of cycle 14 of this benchmark, out-of-phase oscillations appeared, which are of great interest because detection systems that register the core-average power do not detect them. Additionally, in this work the recirculation loop model is obtained individually as an independent module, since this model belongs to another work and operates coupled to the reactor vessel. The recirculation loop model is able to model several transients of interest, as shown in Appendix A of this work, among which are the tripping of the recirculation pumps and their transfer to low or high speed. The scope of the

  2. Development of LILAC-meltpool for the thermo-hydraulic analysis of core melt relocated in a reactor vessel

    International Nuclear Information System (INIS)

    Kim, Jong Tae; Kim, Sang Baik; Kim, Hee Dong

    2002-03-01

    LILAC-meltpool has been developed to study the thermo-hydraulic behavior of a molten pool and the thermal behavior of the vessel wall during a severe accident. To validate the LILAC-meltpool code, several two- and three-dimensional thermo-hydraulic problems were selected and solved. The benchmark problems have experimental results or verified numerical results. Through this validation it was found that LILAC-meltpool reproduces very accurate numerical results. A two-layered semicircular pool was solved to study the thermal and hydraulic characteristics of pool stratification. The LAVA experiment using an alumina/ferrite molten pool was calculated and compared with the computed results. Cooling of the alumina/ferrite two-layered pool was affected by stratification. In the numerical results, the inner vessel temperature was highest at a location just below the interface. Crust developed from the upper surface and the lower outer surface, but in the area near the interface the corium simulant remained molten for a long time. The LAVA-4 experiment was studied using the gap-cooling model in the LILAC-meltpool code. The temperature increase of the LAVA vessel after the alumina melt relocation was strongly dependent on the gap formation mechanism. The calculated cooling rates of the vessel were very similar to the experimental results. For LAVA experiments without heat generation, the coolant penetrates easily into the gap, and gap cooling is found to be very effective for cooling the vessel; however, it is thought that coolant penetration could be limited to near the upper part of the gap because of the decay heat and the high temperature of the corium crust.

  3. Development of a model of a NSSS of the PWR reactor with thermo-hydraulic code GOTHIC; Desarrollo de un modelo del NSSS de un reactor PWR con el codigo termo-hidraulico GOTHIC

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Garcia-Torano, I.; Jimenez, G.

    2013-07-01

    The thermo-hydraulic code GOTHIC is often used in the nuclear industry for licensing transient analyses inside the containment of Generation II plants (PWR, BWR) as well as Gen III and III+ designs (AP1000, ESBWR, APWR). After the mass and energy released to the containment, previously calculated by other codes (e.g. TRACE), have been entered, GOTHIC allows the evolution of the basic containment parameters to be calculated in detail.

  4. Study on thermo-hydraulic behavior during reflood phase of a PWR-LOCA

    International Nuclear Information System (INIS)

    Sugimoto, Jun

    1989-01-01

    This paper describes the thermo-hydraulic behavior during the reflood phase of a postulated large-break loss-of-coolant accident (LOCA) of a PWR. In order to better predict the reflood transient in nuclear safety analysis, specific analytical models have been developed for saturated film boiling heat transfer in inverted slug flow, the effect of grid spacers on core thermo-hydraulics, the overall system thermo-hydraulic behavior, and the thermal response similarity between nuclear fuel rods and simulated rods. A heat transfer correlation has been newly developed for saturated film boiling based on a 4 x 4-rod experiment conducted at JAERI. The correlation provides good agreement with existing experiments except in the vicinity of grid spacer locations. An analytical model has then been developed addressing the effect of grid spacers. The thermo-hydraulic behavior near the grid spacers was found to be predicted well by this model when the breakup of droplets in dispersed flow and the water accumulation above the grid spacers in inverted slug flow are considered. A system analysis code has been developed which couples the one-dimensional core and multi-loop primary system component models. It provides fairly good agreement with the system behavior observed in a large-scale integral reflood experiment with active primary system components. An analytical model for the radial temperature distribution in a rod has been developed and verified against data from existing experiments. It was found that a nuclear fuel rod has a lower cladding temperature and an earlier quench time than an electrically heated rod under typical reflood conditions. (author)

  5. Thermo-hydraulic design of earth-air heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    Paepe, M. de [Ghent University (Belgium). Department of Flow, Heat and Combustion Mechanics; Janssens, A. [Ghent University (Belgium). Department of Architecture and Urbanism

    2003-05-01

    Earth-air heat exchangers, also called ground-tube heat exchangers, are an interesting technique for reducing the energy consumption of a building. They can cool or heat the ventilation air, using the cold or heat accumulated in the soil. Several papers have been published in which a design method is described. Most of them are based on a discretisation of the one-dimensional heat transfer problem in the tube. Complex three-dimensional models, solving conduction and moisture transport in the soil, are also found. These methods are of high complexity and often not ready for use by designers. In this paper, a one-dimensional analytical method is used to analyse the influence of the design parameters of the heat exchanger on its thermo-hydraulic performance. A relation is derived for the specific pressure drop, linking the thermal effectiveness with the pressure drop of the air inside the tube. The relation is used to formulate a design method which can be used to determine the characteristic dimensions of the earth-air heat exchanger in such a way that optimal thermal effectiveness is reached with an acceptable pressure loss. The choice of the characteristic dimensions thus becomes independent of the soil and climatological conditions. This allows designers to choose the earth-air heat exchanger configuration with the best performance. (author)
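
    The kind of one-dimensional trade-off the paper formalizes (thermal effectiveness versus pressure drop) can be sketched with the NTU relation for a constant-temperature wall and the Darcy-Weisbach equation. The correlations used here (Dittus-Boelter, Blasius) and all numbers are generic illustrations, not the paper's specific-pressure-drop relation.

```python
import math

# One-dimensional sizing sketch for an earth-air heat exchanger tube:
# effectiveness from NTU with a constant soil temperature, pressure drop
# from Darcy-Weisbach with the Blasius friction factor.
def tube_performance(L, D, Q, T_in=30.0, T_soil=12.0):
    rho, mu, cp, k = 1.2, 1.8e-5, 1007.0, 0.026   # air properties ~25 degC
    A = math.pi * D**2 / 4
    v = Q / A
    Re = rho * v * D / mu
    Pr = mu * cp / k
    Nu = 0.023 * Re**0.8 * Pr**0.4                # Dittus-Boelter, turbulent
    h = Nu * k / D
    NTU = h * math.pi * D * L / (rho * Q * cp)
    eff = 1.0 - math.exp(-NTU)                    # constant-wall-T effectiveness
    T_out = T_in - eff * (T_in - T_soil)
    f = 0.316 * Re**-0.25                         # Blasius friction factor
    dp = f * L / D * 0.5 * rho * v**2             # Darcy-Weisbach
    return T_out, dp

for D in (0.15, 0.20, 0.25):
    T_out, dp = tube_performance(L=30.0, D=D, Q=0.05)   # 50 L/s per tube
    print(f"D = {D:.2f} m   T_out = {T_out:.1f} degC   dp = {dp:.1f} Pa")
```

    Scanning the diameter makes the coupling visible: a narrower tube improves the heat transfer (higher effectiveness) at the cost of a rapidly growing pressure drop, which is the balance the paper's design method resolves.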

  6. Thermo-hydraulic design of earth-air heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    De Paepe, M. [Department of Flow, Heat and Combustion Mechanics, Ghent University, Ghent (Belgium); Janssens, A. [Department of Architecture and Urbanism, Ghent University, Ghent (Belgium)

    2003-07-01

    Earth-air heat exchangers, also called ground-tube heat exchangers, are an interesting technique for reducing the energy consumption of a building. They can cool or heat the ventilation air, using the cold or heat accumulated in the soil. Several papers have been published in which a design method is described. Most of them are based on a discretisation of the one-dimensional heat transfer problem in the tube. Complex three-dimensional models, solving conduction and moisture transport in the soil, are also found. These methods are of high complexity and often not ready for use by designers. In this paper, a one-dimensional analytical method is used to analyse the influence of the design parameters of the heat exchanger on its thermo-hydraulic performance. A relation is derived for the specific pressure drop, linking the thermal effectiveness with the pressure drop of the air inside the tube. The relation is used to formulate a design method which can be used to determine the characteristic dimensions of the earth-air heat exchanger in such a way that optimal thermal effectiveness is reached with an acceptable pressure loss. The choice of the characteristic dimensions thus becomes independent of the soil and climatological conditions. This allows designers to choose the earth-air heat exchanger configuration with the best performance. (author)

  7. Thermo-hydraulic-mechanical analysis of the SS-050 sodium loop during a thermal shock of 200 °C/s

    International Nuclear Information System (INIS)

    Jesus Miranda, C.A. de; Gebrin, A.N.

    1988-01-01

    An analytical thermo-hydraulic model was developed to obtain the temperature of the sodium flowing between the constant-volume mixing tank TM and the drain tank of the SS-050 sodium test facility. The piping connecting these two tanks is considered in the analysis. The sodium enters the TM through a tube with lateral holes immersed in the TM's sodium. The model and the corresponding computer program were tested, and a typical situation was studied: a thermal shock with a -200 °C/s thermal gradient in the test section. The sodium temperature time-histories along the piping length are presented. For the thermal shock situation, the temperature field in the TM bottom and outlet nozzle was calculated and the stresses were evaluated. The final thermal stresses will allow a detailed verification of the circuit design. (author) [pt

  8. Fundamental study on thermo-hydraulic behaviors during power transient, 2

    International Nuclear Information System (INIS)

    Shinano, M.; Inoue, A.

    1988-01-01

    Thermo-hydraulic behaviors during power transients of nuclear reactors are studied. Boiling around a transiently heated test rod forces liquid out of the test section and generates a high pressure pulse. In this study, the magnitude of the pressure pulse and the efficiency of energy conversion into mechanical work are investigated experimentally and analytically for cases with and without fragmentation. In the analysis, the effects of the increase in heat transfer and in interaction area due to fragmentation are considered. Consequently, (1) the magnitude of the pressure pulse with fragmentation is about 10 times greater than that without fragmentation, and (2) the analytical model can qualitatively reproduce the characteristics of the fragmentation process. (author)

  9. Parametric study of the stability properties of a thermo-hydraulic channel coupled to point kinetics

    International Nuclear Information System (INIS)

    Cecenas F, M.; Campos G, R.M.

    2005-01-01

    The decay ratio is the stability indicator usually used in the literature to evaluate the stability of boiling water reactors; however, in the operation of this type of reactor the boiling length is considered an auxiliary parameter for the evaluation of stability. In this work, the variation of these two indicators is studied when a given operating parameter is modified in a model of a thermo-hydraulic channel coupled to point kinetics, keeping all other input variables constant. The parameters selected for study are the axial power profile, the subcooling, the coolant flow, and the thermal power. The study is supplemented with real plant data from the Ringhals stability benchmark, and the results for the decay ratio are compared with the decay ratios obtained by means of autoregressive models of the local neutron flux instrumentation. (Author)
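
    The decay-ratio estimation from autoregressive models mentioned at the end can be sketched as follows: fit an AR(2) model to a noisy flux signal, take the dominant pole pair, and convert its radius and angle into a decay ratio. Here the signal is synthetic; real plant instrumentation data would replace it.

```python
import numpy as np

# Decay-ratio estimation by autoregressive fitting of a noisy signal with a
# damped 0.5 Hz resonance (typical of BWR density-wave oscillations).
fs, f0, dr_true = 25.0, 0.5, 0.7          # sampling [Hz], resonance, true DR
n = 4096
theta = 2 * np.pi * f0 / fs
r = dr_true ** (theta / (2 * np.pi))      # pole radius giving that decay ratio
a1, a2 = 2 * r * np.cos(theta), -r**2     # AR(2) with the desired pole pair

rng = np.random.default_rng(0)
e = rng.standard_normal(n)
y = np.zeros(n)
for k in range(2, n):                     # synthesize the "measured" signal
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + e[k]

# least-squares AR(2) fit: y[k] ~ c1*y[k-1] + c2*y[k-2]
X = np.column_stack([y[1:-1], y[:-2]])
c, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
poles = np.roots([1.0, -c[0], -c[1]])
z = poles[np.argmax(np.abs(poles))]       # dominant oscillatory pole
dr_est = np.abs(z) ** (2 * np.pi / abs(np.angle(z)))
print(f"estimated decay ratio: {dr_est:.3f} (true {dr_true})")
```

    The conversion uses the fact that consecutive oscillation peaks are one period (2*pi/theta samples) apart, so their amplitude ratio is the pole radius raised to that power.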

  10. Parallelization of TWOPORFLOW, a Cartesian Grid based Two-phase Porous Media Code for Transient Thermo-hydraulic Simulations

    Science.gov (United States)

    Trost, Nico; Jiménez, Javier; Imke, Uwe; Sanchez, Victor

    2014-06-01

    TWOPORFLOW is a thermo-hydraulic code based on a porous media approach to simulate single- and two-phase flow, including boiling. It is under development at the Institute for Neutron Physics and Reactor Technology (INR) at KIT. The code features a 3D transient solution of the mass, momentum and energy conservation equations for two inter-penetrating fluids with a semi-implicit, continuous Eulerian-type solver. The application domain of TWOPORFLOW includes flow in standard porous media and in structured porous media such as micro-channels and cores of nuclear power plants. In the latter case, the fluid domain is coupled to a fuel rod model describing the heat flow inside the solid structure. In this work, detailed profiling tools have been utilized to determine the optimization potential of TWOPORFLOW. As a result, bottlenecks were identified and reduced in the most feasible way, leading for instance to an optimization of the water-steam property computation. Furthermore, an OpenMP implementation addressing the routines in charge of inter-phase momentum, energy and mass coupling delivered good performance together with high scalability on shared-memory architectures. In contrast, the approach for distributed-memory systems was to solve sub-problems resulting from the decomposition of the initial Cartesian geometry. Communication for the sub-problem boundary updates was accomplished with the Message Passing Interface (MPI) standard.
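
    The boundary-update pattern described for the distributed-memory version can be sketched with mpi4py (chosen here for brevity; TWOPORFLOW itself is not written in Python, and this only illustrates the communication structure). Each rank owns a strip of cells plus one ghost cell per side, and neighbours exchange ghosts before every update.

```python
# Run with e.g.:  mpiexec -n 4 python halo_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 50                              # interior cells per rank
u = np.full(n_local + 2, float(rank))     # one ghost cell at each end

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # exchange ghost cells with neighbours (PROC_NULL makes edges no-ops)
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # simple diffusion update on interior cells using the fresh ghosts
    u[1:-1] = u[1:-1] + 0.25 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"rank {rank}: boundary values {u[1]:.3f} ... {u[-2]:.3f}")
```

    Only the single layer of boundary cells travels over the network each step, which is why this decomposition scales well as long as the sub-domains stay much larger than their halos.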

  11. Numerical study of the thermo-hydraulic behavior for the Candu type fuel channel

    International Nuclear Information System (INIS)

    Lazaro, Pavel Gabriel; Balas Ghizdeanu, Elena Nineta

    2008-01-01

    Candu-type reactors use fuel channels in a horizontal lattice. The fuel bundles are positioned inside two Zircaloy tubes: the pressure tube, surrounded by the calandria tube. The heavy-water coolant flows inside the pressure tube and reaches high temperatures and pressures. Due to the non-uniform spatial neutron distribution, the fuel channel stress differs from one channel to another. In the improbable event of a severe accident, each fuel channel behaves differently according to its normal operating history. Over the years, many research projects have tried to analyze the thermal-hydraulic performance of the design and to add operational improvements in order to achieve an efficient thermal-hydraulic distribution. This paper discusses the thermo-hydraulic behavior (the influence of the temperature and velocity distributions) of the most highly loaded channel, simulated with the Fluent 6.X code. The results obtained using the different models and meshes applied are also commented on. (authors)

  12. Thermo-hydraulic consequence of pressure suppression containment vessel during blowdown, 2

    International Nuclear Information System (INIS)

    Aya, Izuo; Nariai, Hideki; Kobayashi, Michiyuki

    1980-01-01

    As part of the safety research for the integral-type marine reactor, an analytical code, SUPPAC-2V, was developed to simulate the thermo-hydraulic behavior of a pressure suppression containment system during blowdown, and the code was applied to the Model Experimental Facility for the Safety of Integral-Type Marine Reactors (described in Part 1). SUPPAC-2V differs from existing codes in the following points: a non-homogeneous model for the gaseous region in the drywell, a new correlation for the condensing heat transfer coefficient at the drywell wall based on existing data, and an approximation of the air bubbles in the wetwell water by a one-dimensional bubble-rise model are adopted in this code. In comparing calculational results with experimental results, the values of the predominant input parameters were evaluated and discussed. Moreover, the new code was also applied to the NSR-7 marine reactor, conceptually designed at the Shipbuilding Research Association of Japan, whose suppression system had already been analysed with CONTEMPT-PS. (author)

  13. Research related to thermo-hydraulic safety by means of the Trace code

    International Nuclear Information System (INIS)

    Chaparro V, F. J.; Del Valle G, E.; Rodriguez H, A.; Gomez T, A. M.; Sanchez E, V. H.; Jager, W.

    2014-10-01

    In this article, the results of the design of a pressure vessel of a BWR/5, similar to that of the Laguna Verde NPP, are presented using the Trace code. A thermo-hydraulic Vessel component capable of simulating the fluid behavior and heat transfer that occur within the reactor vessel was created. The Vessel component consists of a three-dimensional cylinder divided into 19 axial sections, 4 azimuthal sections, and two concentric radial rings. The inner ring is used to contain the core and the central part of the reactor, while the outer ring is used as a downcomer. The axial and azimuthal divisions were made so that the dimensions of the internal components and the heights and orientations of the external connections match the reference values of a BWR/5-type reactor. The model includes internal components such as fuel assemblies, steam separators, jet pumps, and guide tubes, as well as the main external connections such as the steam lines, the feedwater, and the penetrations of the recirculation system. The model involves significant simplifications, since the aim is to keep symmetry between the azimuthal sections of the vessel. For most internal components, a detailed description of the geometry and of the initial values of temperature, pressure, fluid velocity, etc. is lacking, as only the most representative data were considered; nevertheless, with these simplifications the simulations give acceptable results for important parameters such as the total flow through the core, the vessel pressure, the void fraction, and the pressure drops across the core and the steam separators. (Author)

  14. Thermo-hydraulic analysis of the cool-down of the EDIPO test facility

    Science.gov (United States)

    Lewandowska, Monika; Bagnasco, Maurizio

    2011-09-01

    The first cool-down of the EDIPO (European DIPOle) test facility is foreseen to take place in 2011 by means of the existing 1.2 kW cryoplant at EPFL-CRPP Villigen. In this work, a thermo-hydraulic analysis of the EDIPO cool-down is performed in order both to assess its duration and to optimize the procedure. The cool-down is driven by the helium flowing in the outer cooling channel and in the windings, which are connected hydraulically in parallel. We take into account the limitations due to the pressure drop in the cooling circuit and the refrigerator capacity, as well as heat conduction in the iron yoke. Two schemes for the hydraulic cooling circuit in the EDIPO windings are studied (coils connected in series and coils connected in parallel). The analysis is performed by means of an analytical model complemented by a numerical model. The results indicate that the cool-down to 5 K can be achieved in about 12 days.
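
    The order of magnitude of such a cool-down estimate can be reproduced with a lumped-capacitance balance: the cold-mass heat capacity follows a Debye model, and the helium inlet is held a fixed 50 K below the mass temperature, a common thermal-stress limit. The mass and flow below are assumptions for illustration, not the actual EDIPO parameters.

```python
import numpy as np

# Lumped cool-down estimate for an iron cold mass cooled by helium whose
# inlet tracks 50 K below the mass temperature.
def debye_c(T, theta=470.0, M_mol=0.0558):
    """Specific heat of iron [J/kg/K] from the Debye model (theta_D ~ 470 K)."""
    x = np.linspace(1e-6, theta / T, 2000)
    f = x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2
    integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))     # trapezoid rule
    return 9 * 8.314 / M_mol * (T / theta) ** 3 * integral

M = 20000.0                      # cold mass [kg] (assumed)
mdot, cp_he = 0.006, 5200.0      # helium flow [kg/s] and cp [J/kg/K] (assumed)
T, t = 300.0, 0.0
while T > 5.0:
    dT_he = min(50.0, T - 4.5)           # helium warms from T - 50 K up to T
    Q = mdot * cp_he * dT_he             # extractable heat [W]
    c = debye_c(T)
    dt = min(3600.0, M * c / Q)          # cap the step so dT <= 1 K
    T -= Q * dt / (M * c)
    t += dt
print(f"estimated cool-down to 5 K: ~{t / 86400.0:.1f} days")
```

    Because the Debye heat capacity collapses at low temperature, almost all of the cool-down time is spent in the warm region, which is why the choice of circuit scheme and flow distribution at the start of the cool-down matters most.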

  15. Simplified model for the thermo-hydraulic simulation of the hot channel of a PWR type nuclear reactor; Modelo simplificado para simulacao do comportamento termohidraulico do canal quente de reator nuclear do tipo PWR

    Energy Technology Data Exchange (ETDEWEB)

    Belem, J A.T.

    1993-09-01

    The present work deals with the thermal-hydraulic analysis of the hot channel of a standard PWR reactor using a simplified mathematical model that assumes a constant water mass flux during single-phase flow and a reduction of the flow as the steam quality increases along the channel (two-phase flow). The model has been applied to the Angra-1 reactor and has proved satisfactory when compared with other models. (author). 25 refs, 15 figs, 3 tabs.

  16. Numerical Analysis of Thermo Hydraulic Conditions in Car Fog Lamp

    Science.gov (United States)

    Ramšak, M.; Žunič, Z.; Škerget, L.; Jurejevčič, T.

    2009-08-01

    In this article, the coupled heat transfer in the solid and the fluid inside a car fog lamp is presented, computed with the CFD software CFX [1]. All three basic modes of heat transfer are dealt with: conduction, convection and radiation. Two different approaches to radiation modeling are compared. Laminar and turbulent flow modeling are also compared, since the computed Rayleigh number indicates a transitional flow regime. The results are in good agreement with the measurements.
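
    The regime check mentioned, computing a Rayleigh number to decide between laminar and turbulent modelling, looks like this for assumed lamp-scale values (the cavity size and temperature difference below are illustrative, not the actual lamp data):

```python
# Rayleigh number for natural convection in an enclosed air cavity.
g, beta = 9.81, 1.0 / 333.0        # gravity, ideal-gas expansion coeff. [1/K]
nu, alpha = 1.9e-5, 2.7e-5         # kinematic viscosity, thermal diffusivity
dT, L = 40.0, 0.1                  # bulb-to-housing temperature difference [K], size [m]

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")            # ~1e6 to 1e9 is commonly taken as transitional
```

    Values of a few million, as here, fall into the transitional band, which is why both laminar and turbulent closures were run and compared.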

  17. Preliminary study of the thermo-hydraulic behaviour of the binary breeder reactor

    International Nuclear Information System (INIS)

    Silveira Luz, M. da; Ferreira, W.J.

    1984-06-01

    Continuing the development of the Binary Breeder Reactor, its physical configuration and the advantages of different types of spacers are analysed. In order to simulate the thermo-hydraulic behaviour and obtain data for a preliminary evaluation of the core geometry, the COBRA III C code was used to study the effects of the length and diameter of the fuel element, the coolant inlet temperature, the system pressure, the helicoidal pitch, and the pitch-to-diameter ratio. (Author) [pt

  18. Quench characterization and thermo hydraulic analysis of SST-1 TF magnet busbar

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, A.N., E-mail: ansharma@ipr.res.in [Institute for Plasma Research, Gandhinagar (India); Pradhan, S. [Institute for Plasma Research, Gandhinagar (India); Duchateau, J.L. [CEA Cadarache, 13108 St Paul lez Durance Cedex (France); Khristi, Y.; Prasad, U.; Doshi, K.; Varmora, P.; Tanna, V.L.; Patel, D.; Panchal, A. [Institute for Plasma Research, Gandhinagar (India)

    2015-01-15

    Highlights: • Details of SST-1 TF busbar quench detection. • Simulation of a slowly propagating normal zone. • Thermo-hydraulic analyses of the TF busbar in the current feeder system. - Abstract: The toroidal field (TF) magnet system of the steady-state superconducting tokamak-1 (SST-1) has 16 superconducting coils. The TF coils are cooled with forced-flow supercritical helium at 0.4 MPa and 4.5 K and operate at a nominal current of 10,000 A. Prior to the TF magnet system assembly in the SST-1 tokamak, each TF coil was tested individually in a test cryostat. During these tests, the TF coil was connected to a pair of conventional helium-vapour-cooled current leads. The connecting busbar was made from the same base cable-in-conduit conductor (CICC) as the SST-1 superconducting magnet system. Quenches experimentally observed in the busbar sections of the single-coil test setups are analyzed in this paper. A steady-state thermo-hydraulic analysis of the TF magnet busbar in the actual SST-1 tokamak assembly has been carried out. The experimental observations of quench and the results of the relevant thermo-hydraulic analyses have been used to predict the safe operating regime of the TF magnet system busbar during actual SST-1 tokamak operational scenarios.

  19. Thermo-hydraulic stability study of a steam generator

    International Nuclear Information System (INIS)

    Magni, M C; Marcel, C P; Delmastro, D F

    2012-01-01

    In this work, a mathematical model developed to investigate the thermal-hydraulic stability of a helically coiled steam generator is presented. Such a steam generator is prone to density wave oscillations. The model is therefore used to analyze the stability of the CAREM-25 reactor steam generators. The model is linear, numerically non-diffusive, and nodal. In addition, it is able to represent non-uniform heat fluxes between the primary and secondary coolant circuits. Using this model, the marginal stability condition is found by varying the inlet friction coefficient for different conditions. The results are then compared with those obtained with a different model in which a simple uniform heat flux profile is assumed. It is found that with such a simplification the density wave instability mechanism is overestimated over a wide range of operating powers. For very low powers, on the contrary, the so-called uniform model underestimates the stabilizing inlet friction and therefore gives non-conservative results. With the more realistic non-uniform power profile model, it was possible to determine that, for a CAREM-25 steam generator, the most stable condition is found at 60 MW when the reactor operates at nominal pressure. Moreover, it is found that at high power levels the stability performance is dominated by the two-phase friction component, while at low power levels the friction component originating in the superheated steam region prevails. (author)

  20. THERMIT, 3-D Thermo-Hydraulics of BWR and PWR

    International Nuclear Information System (INIS)

    Kazimi, M.S.; Kao, S.P.; Kelly, J.E.

    1984-01-01

    1 - Description of program or function: THERMIT2, the most recent release of THERMIT, is intended for the thermal-hydraulic analysis of both boiling and pressurized water reactor cores. It solves the three-dimensional two-fluid equations describing the two-phase flow and heat transfer dynamics in rectangular coordinates. The two-fluid model uses separate partial differential equations expressing conservation of mass, momentum and energy for each fluid. THERMIT2 offers the choice of either pressure or velocity boundary conditions at the top and bottom of the core. THERMIT2 includes a two-phase turbulent mixing model which provides a subchannel analysis capability. THERMIT2 also solves the radial heat conduction equations for the fuel pin temperatures and calculates the heat flux from fuel pin to coolant with appropriate heat transfer models described by a boiling curve. 2 - Method of solution: By expressing the exchange of mass, momentum and energy between the fluids with physically based mathematical models, the relative motion and thermal non-equilibrium between the fluids can be represented.
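
    For reference, the phasic mass balance such a two-fluid model solves has the standard textbook form below (written here from general two-fluid theory, not copied from the THERMIT documentation):

```latex
% Two-fluid mass conservation for phase k (liquid l or vapour v):
%   alpha_k : phasic volume fraction     rho_k   : phasic density
%   u_k     : phasic velocity            Gamma_k : interphase mass transfer
\frac{\partial}{\partial t}\bigl(\alpha_k \rho_k\bigr)
  + \nabla\cdot\bigl(\alpha_k \rho_k \mathbf{u}_k\bigr) = \Gamma_k ,
\qquad \Gamma_l + \Gamma_v = 0
```

    Analogous momentum and energy equations carry interfacial drag and heat transfer terms; it is these physically based exchange models that let the two phases move and depart from thermal equilibrium independently.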

  1. A Thermo-Hydraulic Tool for Automatic Virtual Hazop Evaluation

    Directory of Open Access Journals (Sweden)

    Pugi L.

    2014-12-01

    Development of complex lubrication systems in the Oil&Gas industry has reached high levels of competitiveness in terms of requested performance and reliability. In particular, the use of HazOp analysis (from Hazard and Operability analysis) represents a decisive factor in evaluating the safety and reliability of plants. A HazOp analysis is a structured and systematic examination of a planned or existing operation in order to identify and evaluate problems that may represent risks to personnel or equipment. In particular, P&ID schemes (Piping and Instrument Diagrams, according to the regulation in force, ISO 14617) are used to evaluate the design of the plant in order to increase its safety and reliability under different operating conditions. The use of a simulation tool can drastically increase the speed, efficiency and reliability of the design process. In this work, a tool called TTH lib (from Transient Thermal Hydraulic Library) for the 1-D simulation of thermal-hydraulic plants is presented. The proposed tool is applied to the analysis of safety-relevant components of compressor and pumping units, such as lubrication circuits. As opposed to known commercial products, TTH lib has been customized to ease the simulation of complex interactions with digital logic components and plant controllers, including their sensors and measurement systems. In particular, the proposed tool is optimized for fixed-step execution and fast prototyping of real-time code, both for testing and production purposes. TTH lib can be used as a standard SimScape-Simulink library of components optimized and specifically designed in accordance with the P&ID definitions. Finally, an automatic code generation procedure has been developed, so that TTH simulation models can be assembled directly from the P&ID schemes and the technical documentation, including detailed information on the sensors and measurement systems.

  2. Experimental study on thermo-hydraulic instability on reduced-moderation natural circulation BWR concept

    International Nuclear Information System (INIS)

    Watanabe, Noriyuki; Subki, M.H.; Kikura, Hiroshige; Aritomi, Masanori

    2003-01-01

    The reduced-moderation natural-circulation BWR has been promoted to address recent challenges in BWR nuclear power technology, as one of the advanced small and medium-sized reactors equipped with passive safety features in conformity with natural law. However, the elimination of the recirculation pumps and the high-density core resulting from the increased conversion ratio could cause various thermo-hydraulic instabilities, especially during the start-up stage. The occurrence of these thermo-hydraulic instabilities is not desirable, and avoiding them is one of the main challenges in establishing the reduced-moderation natural-circulation BWR as a commercial reactor. The purpose of the present study is to experimentally investigate the driving mechanism of the thermo-hydraulic instabilities and the effect of the system pressure on the unstable flow patterns. Hence, as fundamental research for this study, a natural circulation loop carrying boiling fluid in parallel boiling channels has been constructed. The channel gap has been set at 2 mm in order to simulate a reduced-moderation reactor core. Pressures of 0.1 up to 0.7 MPa, input heat fluxes of 0 up to 577 kW/m2, and inlet subcooling temperatures of 5, 10, and 15 K are imposed in the experiments. The experiments clarify that the changes in the unstable flow patterns with increasing heat flux can be classified into two groups according to the system pressure range. At atmospheric pressure, the unstable flow patterns appear in the following order: (1) in-phase geysering, (2) transition oscillation combining features of both in-phase geysering and natural circulation oscillation, (3) natural circulation oscillation induced by hydrostatic head fluctuation, (4) density wave oscillation, and finally (5) stable boiling two-phase flow. On the other hand, in the system pressure range from 0.2 to 0.7 MPa, the unstable patterns change in the following order: (1) out-of-phase geysering, (2

  3. Basic researches on thermo-hydraulic non-equilibrium phenomena related to nuclear reactor safety

    International Nuclear Information System (INIS)

    Sakurai, Akira; Kataoka, Isao; Aritomi, Masanori.

    1989-01-01

    A review was made of recent developments in fundamental research on thermo-hydraulic non-equilibrium phenomena related to light water reactor safety, in relation to problems to be solved for the improvement of safety analysis codes. As for the problems related to flow conditions, fundamental research on basic conservation equations and constitutive equations for transient two-phase flow was reviewed. Regarding the problems related to thermal non-equilibrium phenomena, fundamental research on film boiling in pool and forced convection, transient boiling heat transfer, and flow behavior caused by pressure transients was reviewed. (author)

  4. Thermo-hydraulic Analysis of a Water-cooled Printed Circuit Heat Exchanger in a Small-scale Nitrogen Loop

    International Nuclear Information System (INIS)

    Kim, Chan Soo; Hong, Sung Deok; Kim, Min Hwan; Shim, Jaesool; Lee, Gyung Dong

    2013-01-01

    The development of high-temperature heat exchangers is very important because their operating temperatures and pressures are higher than those of common light water reactors and industrial process plants. In particular, the intermediate heat exchanger is a key, challenging high-temperature component of a Very High Temperature gas-cooled Reactor (VHTR). A printed circuit heat exchanger is one of the candidates for an intermediate heat exchanger in a VHTR. The printed circuit heat exchanger (PCHE) was developed and commercialized by HEATRIC. Its compactness is better than that of any other heat exchanger type, because its core matrices are fabricated by diffusion bonding of photo-chemically etched micro-channels. Various tests and analyses have been performed to verify the performance of the PCHE. Thermal stress analysis of the high-temperature PCHE is necessary because of the extreme operating conditions of the IHX. In this study, a thermo-hydraulic analysis of the laboratory-scale PCHE is performed to provide the input data for the boundary conditions of a structural analysis. The results from the first-principles calculation are compared with those from a computational fluid dynamics code analysis. The COMSOL 4.3a analysis is successfully performed at the uniform pressure drop condition in a set of flow channel stacks. The heat-exchange region concentrated near the nitrogen inlet causes a uniform mass velocity distribution in the channels; therefore, there is little difference between the two analytical results
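
    As a rough illustration of the first-principles side of such a comparison, the sketch below applies the standard effectiveness-NTU relation for a counterflow channel pair; the capacity rates, conductance and inlet temperatures are assumed values, not the data of this study.

    ```python
    import math

    def counterflow_effectiveness(UA, C_hot, C_cold):
        """Standard effectiveness-NTU relation for a counterflow exchanger."""
        C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
        Cr, NTU = C_min / C_max, UA / C_min
        if abs(Cr - 1.0) < 1e-9:                 # balanced-flow limit
            return NTU / (1.0 + NTU)
        e = math.exp(-NTU * (1.0 - Cr))
        return (1.0 - e) / (1.0 - Cr * e)

    C_hot, C_cold = 12.0, 10.0      # capacity rates m*cp [W/K], assumed
    UA = 25.0                       # overall conductance [W/K], assumed
    eff = counterflow_effectiveness(UA, C_hot, C_cold)
    q = eff * min(C_hot, C_cold) * (900.0 - 300.0)   # inlet temperatures [K]
    print(f"effectiveness = {eff:.3f}, duty = {q:.0f} W per channel pair")
    ```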

  5. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include, in their specifications, the explicit requirement of use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than design and analysis applications do. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners, who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  6. IAEA coordinated research programme on heat transfer behavior and thermo-hydraulics code testing for super critical water cooled reactors

    International Nuclear Information System (INIS)

    Bilbao y Leon, Sama; Aksan, Nusret

    2009-01-01

    One of the key roles of the IAEA is to foster collaboration among Member States on the development of advanced technology for nuclear power plants. There is high international interest, both in developing and in industrialized countries, in innovative supercritical water-cooled reactors (SCWRs), primarily because such concepts will achieve high thermal efficiencies (44-45%) and promise improved economic competitiveness, utilizing and building upon the recent developments in highly efficient fossil power plants. The SCWR has been selected as one of the promising concepts for development by the Generation-IV International Forum. Following the advice of the IAEA Nuclear Energy Department's Technical Working Groups on Advanced Technologies for LWRs and HWRs (the TWG-LWR and TWG-HWR), with feedback from the Gen-IV SCWR Steering Committee, and in coordination with the OECD-NEA, the IAEA has recently started a Coordinated Research Programme (CRP) in the areas of heat transfer behaviour and testing of thermo-hydraulic computer methods for supercritical water-cooled reactors. The first Research Coordination Meeting (RCM) of the CRP was held at the IAEA Headquarters in Vienna, Austria, in July 2008. This paper summarizes the current status of the CRP, including the Integrated Research Plan and the general schedule of the CRP. (author)

  7. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    International Nuclear Information System (INIS)

    Arndt, S.A.

    1997-01-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include, in their specifications, the explicit requirement of use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than design and analysis applications do. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners, who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  8. Thermo-hydraulic characteristics of ship propulsion reactor in the conditions of ship motions and safety assessment

    International Nuclear Information System (INIS)

    Kobayashi, Michiyuki; Aya, Izuo; Inasaka, Fujio; Murata, Hiroyuki; Odano, Naoteru; Shiozaki, Koki

    1998-01-01

    A research project conducted from 1995 to 1999 planned experimental studies on (1) the safety of a nuclear ship loaded with an integral ship propulsion reactor, (2) the effects of pulsating flow on the thermo-hydraulic characteristics of the ship propulsion reactor, and (3) the thermo-hydraulic behavior of the reactor container at the time of an accident in a passively safe ship propulsion reactor. The development of a database for the ship propulsion reactor was attempted using previous experimental data on the thermo-hydraulic characteristics of the reactor at the institute, in addition to the present results, with the aim of a general analytical evaluation of the safety of the engineering-simulation system for the nuclear ship. A general database was obtained by integrating the data list and the analytical program for static characteristics. Test equipment which allows visualization of the pulsating flow was produced, and visualization experiments have started. (M.N.)

  9. Neutronic and thermo-hydraulic design of LEU core for Japan Research Reactor 4

    International Nuclear Information System (INIS)

    Arigane, Kenji; Watanabe, Shukichi; Tsuruta, Harumichi

    1988-04-01

    As a part of the Reduced Enrichment Research and Test Reactor (RERTR) program in JAERI, the enrichment reduction for Japan Research Reactor 4 (JRR-4) is in progress. A fuel element using a 19.75 % enriched UAlx-Al dispersion type with a uranium density of 2.2 g/cm³ was designed as the LEU fuel and the neutronic and thermo-hydraulic performances of the LEU core were compared with those of the current HEU core. The results of the neutronic design are as follows: (1) the excess reactivity of the LEU core becomes about 1 % Δk/k less, (2) the thermal neutron flux in the fuel region decreases about 25 % on the average, (3) the thermal neutron fluxes in the irradiation pipes are almost the same and (4) the core burnup lifetime becomes about 20 % longer. The thermo-hydraulic design also shows that: (1) the fuel plate surface temperature decreases about 10 °C due to the increase of the number of fuel plates and (2) the temperature margin with respect to the ONB temperature increases. Therefore, it is confirmed that the same utilization performance as the HEU core is attainable with the LEU core. (author)

  10. Thermo-hydraulic analysis of the generic equatorial port plug design

    International Nuclear Information System (INIS)

    Rodríguez, E.; Guirao, J.; Ordieres, J.; Cortizo, J.L.; Iglesias, S.

    2012-01-01

    Highlights: ► Thermo-hydraulic transient performance evaluation and optimization of the GEPP structure cooling/heating system under neutronic heating and baking conditions. ► The optimization of the GEPP box structure's cooling system includes positioning and minimization of the number and size of gun-drilled channels, complying with the flow and functional requirements during operating and baking conditions. - Abstract: The port-based ITER diagnostic systems are housed primarily in two locations, the equatorial and upper port plugs. The port plug structure provides a confinement function, maintains ultra-high vacuum quality, and constitutes the first confinement barrier for radioactive materials at the ports. The port plug structure design, from the ITER International Organisation (IO), is cooled and heated by pressurized water which flows through a series of gun-drilled water channels and water pipes. The cooling function is required to remove nuclear heating due to radiation during operation of ITER, while the heating function is intended to heat up the machine uniformly during the baking condition. The work presented provides a coupled thermo-hydraulic analysis and optimization of a Generic Equatorial Port Plug (GEPP) structure cooling and heating system. The optimization performed includes positioning and minimization of the number and size of gun-drilled channels, complying with the flow and functional requirements during operating and baking conditions.

  11. Thermo-Hydraulic Analysis of Heat Storage Filled with the Ceramic Bricks Dedicated to the Solar Air Heating System.

    Science.gov (United States)

    Nemś, Magdalena; Nemś, Artur; Kasperski, Jacek; Pomorski, Michał

    2017-08-12

    This article presents the results of a study into a packed bed filled with ceramic bricks. The designed storage installation is supposed to become part of a heating system installed in a single-family house and eventually to be integrated with a concentrated solar collector adapted to climate conditions in Poland. The system's working medium is air. The investigated temperature ranges and air volume flow rates in the ceramic bed were dictated by the planned integration with a solar air heater. Designing a packed bed of sufficient parameters first required a mathematical model to be constructed and heat exchange to be analyzed, since heat accumulation is a complex process influenced by a number of material properties. The cases discussed in the literature are based on differing assumptions and different formulas are used in calculations. This article offers a comparison of various mathematical models and of system operating parameters obtained from these models. The primary focus is on the Nusselt number. Furthermore, in the article, the thermo-hydraulic efficiency of the investigated packed bed is presented. This part is based on a relationship used in solar air collectors with internal storage.
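
    The kind of correlation comparison the article describes can be sketched as follows; the snippet contrasts two published Nusselt correlations (the Wakao packed-bed correlation and the Ranz-Marshall single-sphere correlation) over an assumed range of particle Reynolds numbers, with air as the working medium. The conditions are illustrative, not the study's data.

    ```python
    def nu_wakao(Re, Pr):
        """Wakao & Kaguei packed-bed correlation: Nu = 2 + 1.1 Pr^(1/3) Re^0.6."""
        return 2.0 + 1.1 * Pr**(1 / 3) * Re**0.6

    def nu_ranz_marshall(Re, Pr):
        """Ranz-Marshall single-sphere correlation: Nu = 2 + 0.6 Re^0.5 Pr^(1/3)."""
        return 2.0 + 0.6 * Re**0.5 * Pr**(1 / 3)

    Pr_air = 0.7
    for Re in (50, 200, 1000):        # particle Reynolds numbers, assumed range
        print(f"Re={Re:5d}  Nu_Wakao={nu_wakao(Re, Pr_air):6.1f}  "
              f"Nu_RanzMarshall={nu_ranz_marshall(Re, Pr_air):6.1f}")
    ```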

  12. Thermo-Hydraulic Analysis of Heat Storage Filled with the Ceramic Bricks Dedicated to the Solar Air Heating System

    Science.gov (United States)

    Nemś, Magdalena; Nemś, Artur; Kasperski, Jacek; Pomorski, Michał

    2017-01-01

    This article presents the results of a study into a packed bed filled with ceramic bricks. The designed storage installation is supposed to become part of a heating system installed in a single-family house and eventually to be integrated with a concentrated solar collector adapted to climate conditions in Poland. The system’s working medium is air. The investigated temperature ranges and air volume flow rates in the ceramic bed were dictated by the planned integration with a solar air heater. Designing a packed bed of sufficient parameters first required a mathematical model to be constructed and heat exchange to be analyzed, since heat accumulation is a complex process influenced by a number of material properties. The cases discussed in the literature are based on differing assumptions and different formulas are used in calculations. This article offers a comparison of various mathematical models and of system operating parameters obtained from these models. The primary focus is on the Nusselt number. Furthermore, in the article, the thermo-hydraulic efficiency of the investigated packed bed is presented. This part is based on a relationship used in solar air collectors with internal storage. PMID:28805703

  13. Thermo-hydraulic behavior of saturated steam-water mixture in pressure vessel during injection of cold water

    International Nuclear Information System (INIS)

    Aya, Izuo; Kobayashi, Michiyuki; Inasaka, Fujio; Nariai, Hideki.

    1983-01-01

    The thermo-hydraulic behavior of a saturated steam-water mixture in a pressure vessel during injection of cold water was experimentally investigated with the Facility for Mixing Effect of Emergency Core Cooling Water. The pressure vessel used in the experiments was 284 mm in inner diameter and 1,971 mm in height. Eleven experiments were conducted without blowdown in order to understand the basic process, excluding the effect of blowdown during injection of cold water. The initial pressure and water level, the injection flow rate and the size of the injection nozzle were chosen as experimental parameters. Temperatures and void fractions at six elevations, as well as the pressure in the vessel, were measured, and new data were obtained, especially on the pressure undershoot just after the initiation of water injection and on the vertical distribution of temperature and void fraction. The transients of pressure, average temperature and void fraction were calculated using the single-volume analysis code BLODAC-1V, which is based on thermal equilibrium and a so-called bubble gradient model. Some input parameters included in the analysis code were evaluated through comparison of the analysis with the experimental data. Moreover, the observed pressure undershoot, which is evaluated to be induced by a time lag of vaporization in the water due to thermal non-equilibrium, is also discussed with the aid of another simple analysis model. (author)

  14. Research related to thermo-hydraulic safety by means of the TRACE code; Investigaciones relacionadas con seguridad termohidraulica con el codigo TRACE

    Energy Technology Data Exchange (ETDEWEB)

    Chaparro V, F. J.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico); Rodriguez H, A.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Sanchez E, V. H.; Jager, W., E-mail: evalle@esfm.ipn.mx [Karlsruhe Institute of Technology, Hermann-von-Helmholtz Platz I, D-76344 Eggenstein - Leopoldshafen (Germany)

    2014-10-15

    In this article the results of the design of a pressure vessel of a BWR/5, similar to that of the Laguna Verde NPP, are presented using the TRACE code. A thermo-hydraulic Vessel component capable of simulating the fluid behavior and heat transfer that occur within the reactor vessel was created. The Vessel component consists of a three-dimensional cylinder divided into 19 axial sections, 4 azimuthal sections and two concentric radial rings. The inner ring is used to contain the core and the central part of the reactor, while the outer ring is used as a downcomer. The axial and azimuthal divisions were made so that the dimensions of the internal components, the heights and the orientation of the external connections match the reference values of a BWR/5 reactor. The model includes internal components such as fuel assemblies, steam separators, jet pumps and guide tubes, and the main external connections such as the steam lines, the feedwater line and the penetrations of the recirculation system. The model presents significant simplifications because the objective is to keep symmetry between the azimuthal sections of the vessel. A detailed description of the geometry and of the initial values of temperature, pressure, fluid velocity, etc. is lacking for most internal components, since only the most representative data were considered; nevertheless, these simulations give acceptable results for important parameters such as the total flow through the core, the vessel pressure, the void fraction, and the pressure drops across the core and the steam separators. (Author)

  15. Fundamental study on thermo-hydraulics during start-up in natural circulation boiling water reactors, (1)

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Chiang Jing-Hsien; Takahashi, Tohru; Wataru, Masumi; Mori, Michitsugu.

    1992-01-01

    Recently, many concepts in which passive and simplified functions are actively adopted have been proposed for the next generation of LWRs. The natural circulation BWR is one such concept, considered in light of the requirements for next-generation LWRs as compared with current BWRs. It is pointed out from this consideration that a thermo-hydraulic instability which may appear during start-up greatly influences concept feasibility, because its occurrence makes the operation of raising power output difficult. Thermo-hydraulic instabilities are investigated experimentally under conditions simulating normal and abnormal start-up processes. It is clarified that three kinds of thermo-hydraulic instabilities may occur during start-up in the natural circulation BWR, according to its procedure and reactor configuration: (1) geysering induced by condensation, (2) natural circulation instability induced by hydrostatic head fluctuation in steam separators, and (3) density wave instability. The driving mechanisms of geysering and of the natural circulation instability, which have never been sufficiently understood, are inferred from the results. Finally, the difference in thermo-hydraulic behavior during start-up between thermal natural circulation boilers and the Dodewaard reactor is discussed. (author)

  16. Thermo-hydraulic Quench Propagation at the LHC Superconducting Magnet String

    CERN Document Server

    Rodríguez-Mateos, F; Serio, L

    1998-01-01

    The superconducting magnets of the LHC are protected by heaters and cold by-pass diodes. If a magnet quenches, the heaters on this magnet are fired and the magnet chain is de-excited in about two minutes by opening dump switches in parallel to a resistor. During the time required for the discharge, adjacent magnets might quench due to thermo-hydraulic propagation in the helium bath and/or heat conduction via the bus bar. The number of quenching magnets depends on the mechanisms for the propagation. In this paper we report on quench propagation experiments from a dipole magnet to an adjacent magnet. The mechanism for the propagation is hot helium gas expelled from the first quenching magnet. The propagation changes with the pressure opening settings of the quench relief valves.

  17. Modifications in Compacted MX-80 Bentonite Due to Thermo-Hydraulic Treatment

    International Nuclear Information System (INIS)

    Gomez-Espina, R.; Villar, M. V.

    2013-01-01

    The thermo-hydraulic tests reproduce the thermal and hydraulic conditions to which bentonite is subjected in the engineered barrier of a deep geological repository for radioactive waste. The results of thermo-hydraulic test TBT1500, which ran for approximately 1500 days, are presented. This is a continuation of Technical Report Ciemat 1199, which presented the results of test TBT500, performed under similar conditions but with a duration of 500 days. In both tests the MX-80 bentonite was used, with initial density and water content similar to those of the large-scale test TBT. The bentonite column was heated at the bottom at 140 °C and hydrated on top with deionized water. At the end of the test a sharp water content gradient was observed along the column, as well as an inverse dry density gradient. Hydration also modified the bentonite microstructure. Besides, an overall decrease of the smectite content with respect to the initial value took place, especially in the most hydrated areas, where the percentage of interstratified illite increased, and in the longer test. On the other hand, the content of cristobalite, feldspars and calcite increased. Smectite dissolution processes (probably colloidal) occurred, particularly in the more hydrated areas and in the longer test. Due to the dissolution of low-solubility species and to the loss of exchangeable positions in the smectite, the content of soluble salts in the pore water increased with respect to the original one, especially in the longer test. The solubilized ions were transported; sodium, calcium, magnesium and sulphate had a similar mobility, which was in turn lower than that of potassium and chloride. The cationic exchange complex was also modified. (Author)

  18. Response of Compacted Bentonites to Thermal and Thermo-Hydraulic Loadings at High Temperatures

    Directory of Open Access Journals (Sweden)

    Snehasis Tripathy

    2017-07-01

    Full Text Available The final disposal of high-level nuclear waste in many countries is preferred to be in deep geological repositories. Compacted bentonites are proposed for use as the buffer surrounding the waste canisters, which may be subjected to both thermal and hydraulic loadings. A significant increase in temperature is anticipated within the buffer, particularly during the early phase of the repository lifetime. In this study, several non-isothermal (thermal) and non-isothermal hydraulic (thermo-hydraulic) tests were carried out on compacted MX80 bentonite. Compacted bentonite specimens (water content = 15.2%, dry density = 1.65 Mg/m³) were subjected to a temperature of either 85 or 150 °C at one end, whereas the temperature at the opposite end was maintained at 25 °C. During the non-isothermal hydraulic tests, water was supplied from the end opposite the heat source. The temperature and relative humidity were monitored at predetermined depths of the specimens. The profiles of water content, dry density, and degree of saturation were established after termination of the tests. The test results showed that thermal gradients caused redistribution of the water content, whereas thermo-hydraulic gradients caused both redistribution and an increase in the water content within the compacted bentonites, both leading to the development of axial stress of various magnitudes. The applied water injection pressures (5 and 600 kPa) and temperature gradients appeared to have very minimal impact on the magnitude of the axial stress developed. The thickness of the thermal insulation layer surrounding the testing devices was found to influence the temperature and relative humidity profiles, thereby impacting the redistribution of water content within the compacted bentonites. Under the influence of both the applied thermal and thermo-hydraulic gradients, the dry density of the bentonite specimens increased near the heat source, whereas it decreased at the opposite end. The test results emphasized the influence of

  19. Development of gas-cooled fast reactor and its thermo-hydraulics

    International Nuclear Information System (INIS)

    Kawamura, Hiroshi

    1977-10-01

    Development, thermo-hydraulics and safety of the GCFR are reviewed. The development of the Gas-Cooled Fast Reactor (GCFR) utilizes the helium technology of the HTGR and the fuel technology of the LMFBR. The breeding ratio of the GCFR will be larger than that of the LMFBR by about 0.2. Features of the GCFR are a fuel with a roughened surface to enhance heat transfer, and a vent system for pressure equalization in the fuel rod. Helium, as the coolant of the GCFR, is chemically stable and stays in a single phase, so there is no fuel-coolant interaction, unlike the case of the LMFBR. Since the helium must be pressurized, the possibility of a depressurization accident is not negligible. In the United States, a 300 MWe demonstration plant program is about to start; collaboration with European countries is now quite active in this field. Though the development of the GCFR started behind that of the LMFBR, the GCFR is equally promising as a fast breeder reactor. When realized, it will offer the possibility of a choice between these two. (auth.)

  20. Thermo-hydraulic characteristics of ship propulsion reactor in the conditions of ship motions and safety assessment

    International Nuclear Information System (INIS)

    Kobayashi, Michiyuki; Murata, Hiroyuki; Sawada, Kenichi; Inasaka, Fujio; Aya, Izuo; Shiozaki, Koki

    1999-01-01

    Experimental data and other information on the thermo-hydraulic characteristics of the integrated ship propulsion reactor, accumulated by the Ship Research Institute together with some recent cooperative results, were input into the nuclear ship engineering simulation system, not only to contribute to improvement studies on the next ship reactor through general analysis and evaluation of the motion characteristics under ship-motion conditions and the safety at accidents of the integrated ship reactor, but also to investigate and prepare measures for applying the fundamental experimental results to the safety countermeasures of nuclear ships. In fiscal year 1997, concerning the safety of a nuclear ship loaded with an integrated ship propulsion reactor, development towards an intelligent database program was undertaken by adding experimental data on unstable flow analysis, and related information, to the general database program. Concerning the effect of pulsating flow on the thermo-hydraulic characteristics of the ship propulsion reactor, following the pulsating flow visualization experiment, the experimental equipment was converted to a heat transfer type, and numerical analysis of pulsating flow was conducted after confirming the validity of the numerical analysis code by comparison with the visualization results. Concerning the thermo-hydraulic behavior in the containment at an accident of an active safety type ship propulsion reactor, a flashing vibration test was carried out using the new apparatus, whose pressurization was completed last fiscal year, to examine the effects of parameters such as the radius and length of the exhaust nozzle and the pool water temperature. (G.K.)

  1. Thermo-hydraulic characterization of a self-pumping corrugated wall heat exchanger

    International Nuclear Information System (INIS)

    Schmidmayer, Kevin; Kumar, Prashant; Lavieille, Pascal; Miscevic, Marc; Topin, Frédéric

    2017-01-01

    Compactness, efficiency and thermal control of the heat exchanger are of critical significance for many electronics industry applications. With this in view, a new concept of heat exchanger at millimeter scale is proposed and studied numerically. It consists in dynamically deforming at least one of its walls with a progressive wave in order to create an active corrugated channel. Systematic single-phase studies were performed on the different deformation parameters in order to obtain the thermo-hydraulic characteristics of the system. It was observed that the dynamic wall deformation induces a significant pumping effect. The intensification of heat transfer remains very important even for highly degraded waveforms, although the pumping efficiency is reduced in that case. The mechanical power applied to the upper wall to deform it dynamically is linked to the wave shape, amplitude, frequency and outlet-inlet pressure difference. The overall performance of the proposed system was evaluated and compared to existing static channels. For a given wall deformation, the performance of the proposed heat exchanger evolves in two steps: it declines slightly up to a critical value of the mechanical power applied to the wall; when this critical value is exceeded, it deteriorates significantly, reaching the performance of existing conventional systems. - Highlights: • A new concept of heat exchanger with a channel at millimeter scale is proposed. • The upper wall is deformed dynamically by applying external mechanical power. • A pumping effect is observed and is linked to the wave shape, amplitude and frequency. • The proposed system is efficient in the low Reynolds number range. • The overall performance is significantly high compared to static corrugated and straight channels.

  2. Computational model and performance optimization methodology of a compact design heat exchanger used as an IHX in HTGR

    International Nuclear Information System (INIS)

    De la Torre V, R.; Francois L, J. L.

    2017-09-01

    The intermediate heat exchangers (IHX) present in high-temperature gas-cooled reactors (HTGR) operate under complex conditions, characterized by temperatures higher than 1073 K. Conventional tube-and-shell designs have shown disadvantages with respect to compact designs. In this work, computational models of a compact heat exchanger design, the printed circuit type, were built under IHX conditions in an HTGR installation. In these models, a detailed three-dimensional geometry was considered, corresponding to one transfer unit of the heat exchanger. Computational fluid dynamics techniques and finite element methods were used to study the thermo-hydraulic and mechanical behavior of the equipment, respectively. The properties of the materials were defined as functions of temperature. The thermo-hydraulic results obtained were imposed as operating conditions in the structural calculations. A methodology was developed based on the analysis of capital and operating costs, which combines the heat transfer, the pressure drop and the mechanical behavior of the structure into a single optimization variable. By analyzing the experimental results of other authors, a relationship was obtained between the operating time of the equipment and the maximum stress in the structure, which was used in the model. The results show that the model that yields the greatest thermal efficiency differs from the one with the lowest total cost per year. (Author)
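
    A hedged sketch of the single-variable cost objective described above: the annualised capital cost grows with heat-transfer area, while the operating cost grows with the pumping power implied by the pressure drop. All coefficients and the three design points below are assumed placeholders, not values from the study.

    ```python
    def total_annual_cost(area_m2, dp_pa, m_dot=1.0, rho=5.0, hours=8000.0,
                          c_area=150.0, crf=0.1, c_elec=0.1e-3, eta_pump=0.7):
        """Single optimization variable: annualised capital plus pumping cost."""
        capital = crf * c_area * area_m2              # annualised capital [$/y]
        pump_w = m_dot * dp_pa / (rho * eta_pump)     # pumping power [W]
        operating = c_elec * pump_w * hours           # electricity cost [$/y]
        return capital + operating

    # Smaller channels give more area per volume but a higher pressure drop:
    for area, dp in [(20.0, 5e3), (35.0, 2e4), (50.0, 8e4)]:
        cost = total_annual_cost(area, dp)
        print(f"A={area:4.0f} m2  dp={dp:8.0f} Pa  cost={cost:8.0f} $/y")
    ```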

  3. Implementation of the optimization of the methodology for neutronic and thermo-hydraulic calculation in the IEA-R1 reactor

    International Nuclear Information System (INIS)

    Stefani, Giovanni Laranjo de; Conti, Thadeu das Neves; Fedorenko, Giuliana G.; Castro, Vinicius A.; Maio, Mireia F.; Santos, Thiago Augusto dos

    2011-01-01

    The objective of this work was to create a manager program to automate the programs and computer codes in use for neutronic and thermo-hydraulic calculation of the IEA-R1 reactor, thus making the process of calculating safety parameters and configuration changes up to 98% faster than the one used at the reactor today. This process was tested together with the reactor operators and is being implemented by the quality department. The main codes and programs involved in the configuration-change calculations are Leopard, Hammer-Technion, Twodb, Citation and Cobra. The delayed neutron and criticality coefficients required in the safety parameter calculation are given by Hammer-Technion and Citation, in a process that involves about eleven repetitions so that all the necessary conditions are met (such as different temperatures of the moderator and fuel). The results are entirely consistent with expectations and identical to those given by the manual process. The work thus demonstrates its reliability as well as the advantage of saving time, since a process that could take up to four hours was turned into one that takes around five minutes on a home computer. Much of this advantage is due to the subprograms created to process the output of each code and transform it into the input of the next, extracting the essential intermediate data and thereby also avoiding possible human error in handling the various data involved. (author)
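
    The manager-program idea can be sketched as a small pipeline driver: run one code, parse the data the next code needs from its output listing, and write the next input deck. The executable names, file names and output keys below are hypothetical; the real Leopard/Hammer-Technion/Citation/Cobra decks differ.

    ```python
    import subprocess

    def run(code, deck):
        """Run one code on its input deck; abort the whole chain on failure."""
        subprocess.run([code, deck], check=True)

    def extract(path, key):
        """Pull a 'key value' line out of a code's output listing."""
        with open(path) as f:
            for line in f:
                if key in line:
                    return float(line.split()[-1])
        raise KeyError(key)

    run("leopard", "cell.inp")                 # cell cross sections (hypothetical deck)
    xs = extract("leopard.out", "SIGMA_A")     # hypothetical output key
    with open("citation.inp", "w") as f:       # feed the core code automatically
        f.write(f"XS_ABSORPTION {xs}\n")
    run("citation", "citation.inp")            # core criticality calculation
    print(f"k-eff = {extract('citation.out', 'K-EFF'):.5f}")
    ```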

  4. 3. Workshop for IAEA ICSP on Integral PWR Design Natural Circulation Flow Stability and Thermo-hydraulic Coupling of Containment and Primary System during Accidents. Presentations

    International Nuclear Information System (INIS)

    2012-04-01

    Most advanced nuclear power plant designs adopt several kinds of passive systems. Natural circulation is used as a key driving force for many passive systems, and even for core heat removal during normal operation, as in the NuScale, CAREM, ESBWR and Indian AHWR designs. Simulation of natural circulation phenomena is very challenging, since its driving force is weak compared to forced circulation and, for an integral-type reactor, it involves a coupling between the primary system and the containment. The IAEA ICSP (International Collaborative Standard Problem) on 'Integral PWR Design Natural Circulation Flow Stability and Thermo-hydraulic Coupling of Containment and Primary System during Accidents' was proposed within the CRP on 'Natural Circulation Phenomena, Modelling, and Reliability of Passive Systems that utilize Natural Circulation'. Oregon State University (OSU) of the USA offered to host this ICSP. This ICSP plans to conduct the following experiments and blind/open simulations with system codes: 1. Quasi-steady-state operation with different core power levels: conduct quasi-steady-state operation with a step-wise increase of the core power level in order to observe single-phase natural circulation flow as a function of power level. The experimental facility and operating conditions for an integral PWR will be used. 2. Thermo-hydraulic coupling between the primary system and the containment: conduct a loss-of-feedwater transient with subsequent ADS blowdown and long-term cooling to determine the progression of a loss-of-feedwater transient by natural circulation through the primary and containment systems. These tests would examine the blowdown phase as well as long-term cooling using sump natural circulation by coupling the primary to the containment systems. These data could be used for the evaluation of system codes, to determine whether they model specific phenomena in an accurate manner. OSU completed the two planned ICSP tests in July 2011 and real initial and boundary conditions measured from the

  5. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below them) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modelling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to use controlled experiments for validation. Such inter-code comparisons can propel discussions that improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercise consists of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. They
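
    The 1D analytical validation mentioned above can be illustrated with the classical one-phase Stefan (Neumann-type) solution for a thaw front advancing into frozen ground, the type of closed-form result used to verify numerical phase-change schemes. The soil properties below are illustrative assumptions, not benchmark values.

    ```python
    import math

    def stefan_lambda(stefan, lo=1e-9, hi=5.0, iters=80):
        """Solve lam*exp(lam^2)*erf(lam) = St/sqrt(pi) by bisection."""
        f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - stefan / math.sqrt(math.pi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    alpha = 1.0e-6     # thermal diffusivity of thawed soil [m^2/s], assumed
    stefan = 0.2       # Stefan number c*(Ts - Tm)/L, assumed
    lam = stefan_lambda(stefan)
    for days in (10, 100, 365):
        x = 2.0 * lam * math.sqrt(alpha * days * 86400.0)   # front position X(t)
        print(f"after {days:3d} d: thaw depth = {x:.2f} m")
    ```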

  6. Thermo-hydraulic instability of natural circulation BWRs at low pressure start-up. Experimental estimation of instability region with test facility considering scaling law

    International Nuclear Information System (INIS)

    Inada, F.; Furuya, M.; Yasuo, A.; Tabata, H.; Yoshioka, Y.; Kim, H.T.

    1995-01-01

    In natural circulation BWRs, developed as advanced light water reactors with simplified passive safety systems, thermo-hydraulic stability should be confirmed, especially at low-pressure start-up. In this paper, nondimensional parameters to estimate the hydrodynamic stability of reactors at low-pressure start-up were obtained by transforming the basic equations of the drift-flux model in the two-phase region into nondimensional form. A test facility based on these parameters was then constructed. The height of the test facility is 70% of the SBWR, and many nondimensional parameters of the test facility are almost the same as those of the reactor. Reactor stability was estimated experimentally. Stability maps below 0.5 MPa were obtained on the heat flux versus channel inlet subcooling plane. It was found that there were two stability boundaries, between which the flow became unstable. The flow was stable in the high and low channel inlet subcooling regions. Typical conditions of the SBWR at low-pressure start-up lay in the high channel inlet subcooling stable region. The heat flux at typical SBWR start-up was about one fifth that of the stability boundary. Though some nondimensional parameters of the test facility did not exactly agree with those of the SBWR, it was suggested that the flow in the SBWR is stable below 0.5 MPa because of the large margin. (author)
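
    The nondimensionalisation step can be illustrated with the phase-change (Zuber) and subcooling numbers commonly derived from the drift-flux equations to map two-phase stability boundaries. The channel power, flow and subcooling below are assumed, not the facility's, and the fluid properties are rounded steam-table values at about 0.3 MPa.

    ```python
    def n_pch(Q, W, h_fg, rho_f, rho_g):
        """Phase-change (Zuber) number: heating power scaled by latent heat
        and the liquid-to-vapour density ratio."""
        return (Q / (W * h_fg)) * (rho_f - rho_g) / rho_g

    def n_sub(dh_sub, h_fg, rho_f, rho_g):
        """Subcooling number: inlet subcooling enthalpy scaled the same way."""
        return (dh_sub / h_fg) * (rho_f - rho_g) / rho_g

    # Rounded steam-table values for saturated water at ~0.3 MPa:
    rho_f, rho_g, h_fg = 931.0, 1.65, 2.164e6   # kg/m3, kg/m3, J/kg
    print(f"N_pch = {n_pch(10e3, 0.05, h_fg, rho_f, rho_g):.1f}")  # 10 kW, 0.05 kg/s
    print(f"N_sub = {n_sub(42e3, h_fg, rho_f, rho_g):.1f}")        # ~10 K subcooling
    ```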

  7. Thermo-hydraulic test of the moderator cell of liquid hydrogen cold neutron source for the Budapest research reactor

    International Nuclear Information System (INIS)

    Grosz, Tamas; Rosta, Laszlo; Hargitai, Tibor; Mityukhlyaev, V.A.; Serebrov, A.P.; Zaharov, A.A.

    1999-01-01

    A thermo-hydraulic experiment was carried out in order to test the performance of the directly cooled liquid hydrogen moderator cell to be installed at the research reactor of the Budapest Neutron Center. Two electric heaters of up to 300 W each imitated the nuclear heat release in the liquid hydrogen as well as in the construction material. The test moderator cell was also equipped with temperature gauges to measure the hydrogen temperature at different positions as well as the inlet and outlet temperatures of the cooling He gas. The hydrogen pressure in the connected buffer volume was also controlled. At the expected total heat load of 140 W, the moderator cell was filled with liquid hydrogen within 4 hours. The heat load and hydrogen pressure characteristics of the moderator cell are also presented. (author)

  8. Sensitivity Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic Parameters in the CAREM-25 Reactor

    International Nuclear Information System (INIS)

    Serra, Oscar

    2000-01-01

    Studies were performed on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. By using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that, concerning the total power, the effects were not very important, but they were much bigger for the pressure. Furthermore, they were hardly significant in the presence of perturbations of the void fraction calculation and the fuel temperature. The reactivity and the power peaking factor showed highly important changes in the case of the coolant flow. We conclude that the use of this procedure is adequate and useful for our purpose.

  9. Parametric study of the stability properties of a thermo-hydraulic channel coupled to point kinetics; Estudio parametrico de las propiedades de estabilidad de un canal termohidraulico acoplado a cinetica puntual

    Energy Technology Data Exchange (ETDEWEB)

    Cecenas F, M.; Campos G, R.M. [Instituto de Investigaciones Electricas, Reforma 113, Col. Palmira, Temixco, Morelos (Mexico)]. e-mail: mcf@iie.org.mx

    2005-07-01

    The decay ratio is the stability indicator usually used in the literature to evaluate the stability of boiling water reactors; however, in the operation of this type of reactor, the boiling length is considered an auxiliary parameter for the evaluation of stability. In this work, the variation of these two indicators is studied when a given operating parameter is modified in a model of a thermo-hydraulic channel coupled to point kinetics, keeping all the other input variables constant. The parameters selected for study are the axial power profile, the subcooling, the coolant flow and the thermal power. The study is supplemented with real plant data using the Ringhals benchmark, and the results for the decay ratio are compared with the decay ratios obtained by means of autoregressive models of the local neutron flux instrumentation. (Author)
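
    The autoregressive decay-ratio estimate mentioned at the end can be sketched as follows: fit an AR model to flux noise, locate the dominant oscillatory pole z = r·exp(iθ) of its characteristic polynomial, and take DR = r^(2π/θ). The synthetic AR(2) signal below stands in for real local-instrumentation data; all numbers are illustrative.

    ```python
    import numpy as np

    def decay_ratio(x, order=10):
        """Least-squares AR(order) fit; return r**(2*pi/theta) of the
        dominant oscillatory pole r*exp(i*theta)."""
        x = (x - x.mean()) / x.std()
        cols = [x[order - k - 1:len(x) - k - 1] for k in range(order)]
        a, *_ = np.linalg.lstsq(np.column_stack(cols), x[order:], rcond=None)
        poles = np.roots(np.r_[1.0, -a])                # AR characteristic roots
        z = max((p for p in poles if p.imag > 1e-6), key=abs)
        return abs(z) ** (2.0 * np.pi / abs(np.angle(z)))

    # Synthetic stand-in for flux noise: an AR(2) process whose true pole
    # (r = 0.98, theta = 0.25 rad) corresponds to a decay ratio near 0.60.
    rng = np.random.default_rng(0)
    r, th = 0.98, 0.25
    x = np.zeros(20000)
    for n in range(2, x.size):
        x[n] = 2 * r * np.cos(th) * x[n - 1] - r**2 * x[n - 2] + rng.standard_normal()
    print(f"estimated decay ratio: {decay_ratio(x):.2f}")
    ```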

  10. Using statistical sensitivities for adaptation of a best-estimate thermo-hydraulic simulation model

    International Nuclear Information System (INIS)

    Liu, X.J.; Kerner, A.; Schaefer, A.

    2010-01-01

    On-line adaptation of best-estimate simulations of NPP behaviour to time-dependent measurement data can be used to ensure that simulations performed in parallel to plant operation develop synchronously with the real plant behaviour, even over extended periods of time. This opens a range of applications, including operator support in non-standard situations and improving diagnostics and validation of measurements in real plants or experimental facilities. A number of adaptation methods have been proposed and successfully applied to control problems. However, these methods are difficult to apply to best-estimate thermal-hydraulic codes, such as TRACE and ATHLET, with their large nonlinear differential equation systems and sophisticated time integration techniques. This paper presents techniques that use statistical sensitivity measures to overcome those problems by reducing the number of parameters subject to adaptation. It describes how to identify the most significant parameters for adaptation and how this information can be used by combining: decomposition techniques, which split the system into a small set of component parts with clearly defined interfaces where boundary conditions can be derived from the measurement data; filtering techniques, which ensure that the time frame for adaptation is meaningful; and numerical sensitivities to find minimal error conditions. The suitability of combining these techniques is shown by application to an adaptive simulation of the PKL experiment.
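
    A minimal sketch of the parameter-screening step, assuming a toy algebraic model in place of a thermal-hydraulic code: normalised finite-difference sensitivities of an output of interest are computed for each parameter and ranked, so that only the most significant parameters are retained for adaptation. All names and values are invented for illustration.

    ```python
    import numpy as np

    def model(p):
        """Toy steady-state output depending unevenly on four parameters."""
        htc, area, cp, leak = p
        return htc * area * 25.0 / (1.0 + 0.01 * leak) + 0.001 * cp

    p0 = np.array([500.0, 2.0, 4180.0, 1.0])     # nominal values, assumed
    names = ["htc", "area", "cp", "leak"]
    y0 = model(p0)
    sens = []
    for i in range(len(p0)):
        p = p0.copy()
        p[i] *= 1.01                             # +1 % perturbation
        sens.append(abs((model(p) - y0) / y0) / 0.01)   # normalised sensitivity
    for n, s in sorted(zip(names, sens), key=lambda t: -t[1]):
        print(f"{n:5s}  S = {s:.3f}")            # keep only the top-ranked ones
    ```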

  11. Preliminary Thermo-hydraulic Core Design Analysis of Korea Advanced Nuclear Thermal Engine Rocket for Space Application

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Seung Hyun; Lee, Jeong Ik; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

    {sub th} power and electricity generation with 100 kWth idle power. Consequently, KANUTER has the characteristics of a compact and lightweight system, excellent propellant efficiency, bimodal capability, and mission versatility, as indicated in the reference design parameters. This thermo-hydraulic design analysis was carried out to estimate the optimum FWT of the unique SLHC fuel design in the core, and thereby the maximum rocket performance. The FWT affects the mechanical strength of the SLHC fuel assembly as well as its thermo-hydraulic capability, which depends mainly on the heat transfer area of the fuel. A thicker fuel wafer is mechanically strong with a low pressure drop, while a thinner fuel wafer is thermally robust, but with less mechanical strength and a higher shear stress in the core.

  12. Computational model and performance optimization methodology of a compact design heat exchanger used as an IHX in HTGR; Modelo computacional y metodologia de optimizacion del funcionamiento de un intercambiador de calor de diseno compacto empleado como IHX en HTGR

    Energy Technology Data Exchange (ETDEWEB)

    De la Torre V, R.; Francois L, J. L., E-mail: delatorrevaldes@gmail.com [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    The intermediate heat exchangers (IHX) present in high-temperature gas-cooled reactors (HTGR) operate under complex conditions, characterized by temperatures higher than 1073 K. Conventional tube-and-shell designs have shown disadvantages with respect to compact designs. In this work, computational models of a compact heat exchanger design, the printed circuit type, were built under IHX conditions in an HTGR installation. In these models, a detailed three-dimensional geometry was considered, corresponding to one transfer unit of the heat exchanger. Computational fluid dynamics techniques and finite element methods were used to study the thermo-hydraulic and mechanical behavior of the equipment, respectively. The properties of the materials were defined as functions of temperature. The thermo-hydraulic results obtained were imposed as operating conditions in the structural calculations. A methodology was developed based on the analysis of capital and operating costs, which combines the heat transfer, the pressure drop and the mechanical behavior of the structure into a single optimization variable. By analyzing the experimental results of other authors, a relationship was obtained between the operating time of the equipment and the maximum stress in the structure, which was used in the model. The results show that the model that yields the greatest thermal efficiency differs from the one with the lowest total cost per year. (Author)

  13. Thermal, thermo-hydraulic and thermo-mechanical analysis for fuel elements of IEA-R1 reactor at 5 MW

    International Nuclear Information System (INIS)

    Teixeira e Silva, A.; Silva Macedo, L.V. da

    1989-01-01

    In connection with the ongoing conversion of the IEA-R1 Research Reactor, operated by IPEN-CNEN/SP, from the use of highly enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel, steady-state thermal and thermo-hydraulic analyses of both the existing HEU core and the proposed LEU core under 2 MW operating conditions have been carried out. Keeping in mind the possibility of power upgrading, steady-state thermal, thermo-hydraulic and thermo-mechanical analyses of the proposed LEU core under 5 MW operating conditions have also been carried out. The thermal and thermo-hydraulic analyses at 2 MW show that the conversion of the existing HEU core to the proposed LEU core will not change the reactor safety margins. Although upgrading the reactor power to 5 MW will result in safety margins lower than in the 2 MW case, these will still be sufficient for optimum operation and safe behaviour. The thermo-mechanical analyses at 5 MW show that the thermal stresses induced in the fuel element satisfy the design limits for mechanical strength and elastic stability. (author)

  14. A friendly man-machine interface for thermo-hydraulic simulation codes of nuclear installations

    International Nuclear Information System (INIS)

    Araujo Filho, F. de; Belchior Junior, A.; Barroso, A.C.O.; Gebrim, A.

    1994-01-01

    This work presents the development of a man-machine interface for the TRAC-PF1 code, a computer program that performs best-estimate analysis of transients and accidents at nuclear power plants. The results were considered satisfactory, and a considerable productivity gain was achieved in the activity of preparing and analyzing simulations. (author)

  15. Thermo-Hydraulic Optimisation of the EURISOL-DS MMW Hg target

    CERN Document Server

    M. Ashrafi-Nik

    The present document describes the thermal and stress analysis of the final design of the EURISOL DS target. The preliminary design by Q. Prétet, R. Milenkovic and B. Smith was used as a starting point for further improvements to reduce stresses in the hull; the results of these computations are summarised in this document. All variants studied to attain this objective are documented, using CFD to assess the effects of different flow configurations on the temperature distribution in the target liquid metal, and structural analysis to determine the stresses and temperatures in the target structure.

  16. Thermo-Hydraulic behaviour of dual-channel superconducting Cable-In-Conduit Conductors for ITER

    International Nuclear Information System (INIS)

    Renard, B.

    2006-09-01

    In an effort to optimise the cryogenics of large superconducting coils for fusion applications (ITER), dual-channel Cable-In-Conduit Conductors (CICC) are designed with a central channel spiral to provide low hydraulic resistance and faster helium circulation. The qualitative and economic rationale of the conductor central channel is justified here as a means of limiting the superconductor temperature increase, but it adds complexity to the conductor cooling characteristics. The pressure drop of spirals is evaluated experimentally in nitrogen and water, and an explicit hydraulic friction model is proposed. Temperatures in the cable must be quantified to guarantee the superconductor margin during coil operation under heat disturbance and to set an adequate inlet temperature. Analytical one-dimensional thermal models, in steady state and in transient, allow a better understanding of the thermal coupling of the CICC central and annular channels. The measurement of characteristic space and time constants of heat transfer provides cross-checked experimental estimates of the internal thermal homogenisation. A simple explicit model of the global inter-channel heat exchange coefficient is proposed. The risk of a thermosyphon between the two channels is considered, since vertical portions of fusion coils are subject to gravity. The new hydraulic model, the heat exchange model and the gravitational risk ratio allow the thermo-hydraulic improvement of CICC central spirals. (author)

  17. Thermo-hydraulics of the Peruvian accretionary complex at 12°S

    Science.gov (United States)

    Kukowski, Nina; Pecher, Ingo

    1999-01-01

    Coupled heat and fluid transport at the Peruvian convergent margin at 12°S was studied with finite element modelling. Structural information was available from two seismic reflection lines. Heat production in the oceanic plate, the metamorphic basement, and sediments was estimated from the literature. Porosity, permeability, and thermal conductivity for the models were partly available from Ocean Drilling Program (ODP) Leg 112; otherwise we used empirical relations. Our models accounted for a possible permeability anisotropy. The decollement was best modelled as a highly permeable zone (10⁻¹³ m²). Permeabilities of the Peruvian accretionary wedge adopted from the model calculations fall within the range of 2 to 7×10⁻¹⁶ m² at the ocean bottom to a few 10⁻¹⁸ m² at the base, and need to be anisotropic. Fluid expulsion at the sea floor decreases gradually with distance from the deformation front and is structure controlled. Small-scale variations of heat flux, reflected by fluctuations of BSR depths across major faults, could be modelled assuming high permeability in the faults, which allows for efficient advective transport along those faults.

  18. Influence of Bypass on Thermo-Hydraulics of VVER 440 Fuel Assembly

    Directory of Open Access Journals (Sweden)

    Jakubec Jakub

    2017-04-01

    Full Text Available The paper deals with CFD modelling and simulation of coolant flow within the VVER 440 nuclear reactor fuel assembly. The influence of the coolant flow in the bypass on the temperature distribution at the outlet of the fuel assembly and on the pressure drop was investigated. Only steady-state analyses were performed. The boundary conditions are based on operating conditions. ANSYS CFX was chosen as the main CFD software tool, in which all analyses were performed.

  19. ECOSIM - Applied to a study on the thermo-hydraulic behaviour of feedwater heaters

    International Nuclear Information System (INIS)

    Huelamo Martinez, E.; Casado Flores, E.; Bosch Aparicio, F.

    1998-01-01

    In order to carry out a study of the behaviour of the secondary circuit of a nuclear power plant operating at a load level higher than originally planned, it is essential to know whether the cycle heaters are valid from the thermodynamic point of view. This paper describes the models which were used for the study of certain heaters; these models were validated by checking that they faithfully reproduced the behaviour of the equipment (TTD and DCA) in areas where data from the manufacturer were available. The behaviour of said equipment was then obtained over the foreseen operating range. The calculations necessary for these studies were carried out by building ECOSIM models, taking into account that the behaviour of the feedwater heaters depends both on the inlet conditions of the extraction steam and on the remaining mass and energy inputs. For this reason the actual plant layout was taken into consideration, as it was different from the original design. This paper describes the starting hypotheses, the correlations used, the results obtained, an analysis of said results, and a comparison with the manufacturer's data where available. (Author)
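
    For reference, the two validation indicators mentioned above reduce to simple temperature differences; the sketch below computes them from illustrative temperatures (not plant or manufacturer data).

    ```python
    def ttd(t_sat_extraction, t_fw_out):
        """Terminal temperature difference: saturation temperature of the
        extraction steam minus the feedwater outlet temperature."""
        return t_sat_extraction - t_fw_out

    def dca(t_drain_out, t_fw_in):
        """Drain cooler approach: drain outlet temperature minus the
        feedwater inlet temperature."""
        return t_drain_out - t_fw_in

    print(f"TTD = {ttd(195.0, 192.2):.1f} K")   # temperatures in deg C, assumed
    print(f"DCA = {dca(168.5, 163.0):.1f} K")
    ```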

  20. Experimental Study of Thermo-hydraulic Characteristics of Surfaces with In-line Dimple Arrangement

    Directory of Open Access Journals (Sweden)

    S. A. Burtsev

    2015-01-01

    Full Text Available The paper presents an experimental study of heat exchange intensification on surfaces covered with a regular vortex-generating relief, namely an in-line array of shallow hemispherical dimples. Using 12 configuration options with Reynolds numbers in the range of (0.2-7.0)×10⁶ as an example, it analyses how the longitudinal and cross steps of the in-line dimple array (the dimple density) affect the processes of heat exchange intensification and resistance. A mono-component strain-gauge balance allows the resistance coefficient to be determined by direct weighing of the studied "relief" models and the smooth "reference" models located in parallel in the flow. Distribution fields of the heat-transfer factor are determined by recording the cooling process of the surface of the studied models with high spatial and temporal resolution. All investigations were conducted with simultaneous recording of the thermal and hydraulic measurements for the smooth (reference) surfaces and the studied surfaces covered with a regular vortex-generating relief (dimples). The error of the determined parameters was no more than ±5%. An oil-soot method was used to visualize the flow around the regular relief and to obtain flow patterns for the 12 options of dimple configuration. The analysis was carried out and the compliance of the flow patterns with the heat-transfer factor fields was established. It was found that, for the in-line configuration, the Reynolds analogy factor for most models depends nonlinearly on the Reynolds number. The friction intensification at first falls (up to some Reynolds number) and then starts increasing, tending to the friction intensification value of the self-similar flow regime. Thus, with increasing Reynolds number, the heat-transfer factor intensification falls (more slowly than the resistance intensification).
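
    The figure of merit behind this conclusion can be sketched directly: the Reynolds analogy factor compares the heat-transfer intensification against the friction intensification relative to the smooth reference surface. The three (Re, Nu/Nu0, Cf/Cf0) tuples below are invented to mimic the reported trend, not measured data.

    ```python
    def reynolds_analogy_factor(nu_ratio, cf_ratio):
        """(Nu/Nu0) / (Cf/Cf0): heat-transfer gain per unit friction penalty."""
        return nu_ratio / cf_ratio

    cases = [  # (Re, Nu/Nu0, Cf/Cf0) - assumed: friction grows faster with Re
        (2e5, 1.35, 1.20),
        (1e6, 1.25, 1.30),
        (5e6, 1.18, 1.45),
    ]
    for re, nu_ratio, cf_ratio in cases:
        fa = reynolds_analogy_factor(nu_ratio, cf_ratio)
        print(f"Re = {re:.0e}: Reynolds analogy factor = {fa:.2f}")
    ```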

  1. Parametric thermo-hydraulic analysis of the TF system of JT-60SA during fast discharge

    International Nuclear Information System (INIS)

    Polli, Gian Mario; Lacroix, Benoit; Zani, Louis; Besi Vetrella, Ugo; Cucchiaro, Antonio

    2013-01-01

    Highlights: • We modeled the central clock-wise pancake of the JT-60SA TF magnet at the EOB. • We simulated a quench followed by a fast discharge. • We evaluated the temperature and pressure rises in the nominal configuration. • We evaluated the effect of several parameter changes on the thermal-hydraulic response of the system. -- Abstract: The evolution of the conductor temperature and of the helium pressure in the central pancake of the TF superconducting magnet of the JT-60SA tokamak during a quench scenario is discussed here. The quench is triggered by a heat disturbance applied at the end of burn and is followed by a fast safety discharge. A parametric study aimed at assessing the robustness of the calculation is also presented, with special regard to the voltage threshold used to define the occurrence of the quench, and to the time delay, which covers all possible delays in the fast discharge after quench detection. Finally, sensitivity analyses assessed the influence of several parameters: the material properties of the strands (RRR, copper fraction), the magnitude and spatial length of the triggering disturbance, and the magnetic field distribution. The numerical evaluations were performed in the framework of the Broader Approach Agreement, in collaboration with CEA, ENEA and the JT-60SA European Home Team, using the 1D code Gandalf [1].
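
    A common hand check against such 1D quench simulations is the adiabatic hot-spot bound: Joule heating from the exponentially decaying discharge current is integrated in the copper stabilizer with no cooling. The sketch below assumes constant material properties and uses invented magnet parameters; it is not the Gandalf model used in the study.

```python
# Adiabatic hot-spot bound for a quenched conductor during a fast
# discharge I(t) = I0*exp(-t/tau): integrate Joule heating in the copper
# stabilizer with no cooling. Constant material properties; every
# parameter is an invented placeholder, not a JT-60SA datum.

import math

I0 = 25.0e3       # initial current [A] (placeholder)
TAU = 8.0         # discharge time constant [s] (placeholder)
A_CU = 3.0e-4     # copper cross-section [m^2] (placeholder)
RHO_CU = 3.0e-10  # copper resistivity [ohm*m] (placeholder, cryogenic)
C_VOL = 1.0e6     # volumetric heat capacity [J/(m^3 K)] (placeholder)

dt, t, temp_rise = 1.0e-3, 0.0, 0.0
while t < 10.0 * TAU:
    j = I0 * math.exp(-t / TAU) / A_CU         # current density [A/m^2]
    temp_rise += RHO_CU * j * j / C_VOL * dt   # dT = rho*J^2/C * dt
    t += dt

print(f"adiabatic hot-spot temperature rise ~ {temp_rise:.1f} K")
```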

  2. Pressurized thermal shock. Thermo-hydraulic conditions in the CNA-I reactor pressure vessel

    International Nuclear Information System (INIS)

    Ventura, Mirta A.; Rosso, Ricardo D.

    2002-01-01

    In this paper we analyze several reports issued by the utility (Nucleo Electrica S.A.) related to Reactor Pressure Vessel (RPV) phenomena in the CNA-I Nuclear Power Plant. These analyses are aimed at drawing conclusions and establishing criteria to ensure RPV integrity. Special attention was given to the effects of ECCS cold-water injection at the RPV downcomer leading to pressurized thermal shock scenarios. The results deal with hypothetical primary-system pipe breaks of different sizes, the inadvertent opening of the pressurizer safety valve, the double guillotine break of a live steam line in the containment, and the inadvertent actuation of the pressurizer heaters. Modelling conditions were set up to represent experiments performed at the UPTF, under the hypothesis that they are representative of those that may hypothetically occur at the CNA-I. No system scaling analysis was performed, so this assertion and the inferred conclusions are not fully justified, at least in principle. The above-mentioned studies indicate that the RPV internal wall surface temperature will be nearly 40 degrees. It was concluded that they allowed a better approximation to PTS phenomena in the RPV of the CNA-I. Special emphasis was placed on the influence of the ECCS systems on the attained RPV wall temperature, particularly the low-pressure TJ water injection system. Some conservative hypotheses made are discussed in this report. (author)

  3. RAP-3A Computer code for thermal and hydraulic calculations in steady state conditions for fuel element clusters

    International Nuclear Information System (INIS)

    Popescu, C.; Biro, L.; Iftode, I.; Turcu, I.

    1975-10-01

    The RAP-3A computer code is designed to calculate the main steady-state thermo-hydraulic parameters of multirod fuel clusters with liquid-metal cooling. The programme provides a double-precision computation of temperature and axial enthalpy distributions, of pressure losses and of axial heat flux distributions in fuel clusters before boiling conditions occur. The physical and mathematical models as well as a sample problem are presented. The code is written in FORTRAN-4 and runs on an IBM-370/135 computer
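
    The kind of steady-state channel calculation described here amounts to axial marching: enthalpy rises with the integrated wall heat flux and pressure falls with friction. A minimal single-channel sketch under constant-property assumptions; every value is a placeholder for illustration, not a RAP-3A input, and the actual code's models are more detailed.

```python
# Axial marching for a single coolant channel: enthalpy from the local
# wall heat flux, pressure from wall friction. Constant properties and a
# chopped-sine axial flux shape; every number is a placeholder.

import math

N = 50                 # axial nodes
L = 1.0                # heated length [m]
mdot = 0.5             # channel mass flow [kg/s]
q_avg = 5.0e5          # average wall heat flux [W/m^2]
perim = 0.03           # heated perimeter [m]
d_h = 0.005            # hydraulic diameter [m]
rho, f = 850.0, 0.02   # coolant density [kg/m^3], friction factor
area = 1.0e-4          # flow area [m^2]

dz = L / N
h, p = 4.0e5, 5.0e5    # inlet enthalpy [J/kg], inlet pressure [Pa]
v = mdot / (rho * area)
for i in range(N):
    z = (i + 0.5) * dz
    q = q_avg * (math.pi / 2) * math.sin(math.pi * z / L)  # flux shape
    h += q * perim * dz / mdot                  # dh = q'' P dz / mdot
    p -= f * (dz / d_h) * 0.5 * rho * v**2      # friction pressure drop

print(f"outlet enthalpy = {h/1e3:.1f} kJ/kg, outlet pressure = {p/1e3:.1f} kPa")
```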

  4. Thermo-hydraulic analysis of trajectories linked to Emergency Operating Procedures (POE). Application to the loss of an RHR train; Analisis termohidraulico de trayectorias vinculadas a Procedimientos de Operacion de emergencia (POE). Aplicacion a la perdida de un tren de RHR

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Saez, F.; Martorell Alsina, S.; Carlos Alberola, A.; Villanueva Lopez, J. F.; Martorell Aygues, P.

    2012-07-01

    This work explores the different sequences that may follow the loss of an RHR train while the plant is lowering power. The study of the different possible trajectories has been carried out with the "collapse" tool, and the thermo-hydraulic study of each of these paths is performed with the TRACE thermal-hydraulic code.

  5. Modifications in Compacted MX-80 Bentonite Due to Thermo-Hydraulic Treatment; Modificaciones en la Bentonita MX-80 Compactada Sometida a Tratamiento Termo-Hidraulico

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Espina, R.; Villar, M. V.

    2013-09-01

    The thermo-hydraulic tests reproduce the thermal and hydraulic conditions to which bentonite is subjected in the engineered barrier of a deep geological repository for radioactive waste. The results of thermo-hydraulic test TBT1500, which ran for approximately 1500 days, are presented. This is a continuation of Technical Report Ciemat 1199, which presented the results of test TBT500, performed under similar conditions but with a duration of 500 days. In both tests the MX-80 bentonite was used, with initial density and water content similar to those of the large-scale test TBT. The bentonite column was heated at the bottom at 140 °C and hydrated on top with deionized water. At the end of the test a sharp water content gradient was observed along the column, as well as an inverse dry density gradient. Hydration also modified the bentonite microstructure. Besides, an overall decrease of the smectite content with respect to the initial value took place, especially in the most hydrated areas, where the percentage of interstratified illite increased, and in the longer test. On the other hand, the content of cristobalite, feldspars and calcite increased. Smectite dissolution processes (probably colloidal) occurred, particularly in the more hydrated areas and in the longer test. Due to the dissolution of low-solubility species and to the loss of exchangeable positions in the smectite, the content of soluble salts in the pore water increased with respect to the original one, especially in the longer test. The solubilized ions were transported; sodium, calcium, magnesium and sulphate showed a similar mobility, which was in turn lower than that of potassium and chloride. The cationic exchange complex was also modified. (Author)

  6. Interaction forces model on a bubble growing for nuclear best estimate computer codes

    International Nuclear Information System (INIS)

    Espinosa-Paredes, Gilberto; Nunez-Carrera, Alejandro; Martinez-Mendez, Elizabeth J.

    2005-01-01

    This paper presents a mathematical model that takes into account the bubble radius variation that takes place in a boiling water nuclear reactor during transients involving changes in vessel pressure, changes in the inlet core mass flow rate, density-wave phenomena or flow regime instability. The model with expansion effects was developed by considering the interaction force between a dilute dispersion of gas bubbles and a continuous liquid phase. The closure relationships were formulated as an associated problem, with the spatial deviation around averaged variables expressed as a function of known variables. In order to solve the closure problem, a geometric model given by an eccentric unit cell was applied as an approximation to the heterogeneous structure of the two-phase flow. The closure relationship includes additional terms that represent combined effects of translation and pulsation, due to displacement and size variation of the bubbles, respectively. This result can be implemented straightforwardly in best-estimate thermo-hydraulic models. As an example, the implementation of the closure relationships in the TRAC best-estimate computer code is presented
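
    The flavour of such an interfacial-force closure can be illustrated with the textbook drag and virtual-mass forces on a dilute bubble whose radius varies in time; the d(volume)/dt term is what bubble pulsation adds to the usual translation terms. This is a generic form for illustration only, not the closure derived in the paper.

```python
# Generic interfacial-force sketch for a dilute bubble of time-varying
# radius: drag plus virtual mass, where d(volume)/dt contributes an
# extra term. Textbook form for illustration, not the paper's closure.

import math

def bubble_volume(r):
    return 4.0 / 3.0 * math.pi * r**3

def interfacial_force(rho_l, r, drdt, u_rel, durel_dt, cd=0.44, cvm=0.5):
    """Force per bubble [N]: drag + virtual mass with expansion term."""
    area = math.pi * r**2                      # projected area
    drag = 0.5 * cd * rho_l * area * abs(u_rel) * u_rel
    vol = bubble_volume(r)
    dvol_dt = 4.0 * math.pi * r**2 * drdt      # chain rule on V(R(t))
    # virtual mass: Cvm * rho_l * d(V u_rel)/dt = Cvm*rho_l*(V du + u dV)
    virtual_mass = cvm * rho_l * (vol * durel_dt + u_rel * dvol_dt)
    return drag + virtual_mass

print(interfacial_force(rho_l=740.0, r=1e-3, drdt=5e-3,
                        u_rel=0.2, durel_dt=0.5))
```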

  7. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling to investigate the properties of plant cell walls, which are the source of biofuels and biomaterials. Quantum mechanical models are applied to study chemical and electronic properties and processes to reduce barriers.

  8. CFD Application and OpenFOAM on the 2-D Model for the Moderator System of Heavy-Water Reactors

    International Nuclear Information System (INIS)

    Chang, Se Myong; Park, A. Y.; Kim, Hyoung Tae

    2011-01-01

    The flow in the complex pipeline system of a CANDU reactor calandria tank passes through a distribution of heat sources, which also imposes a pressure drop on the coolant flow. The phenomena should therefore be treated as multi-physics, from the viewpoints of both heat transfer and fluid dynamics. In this study, we have preliminarily modelled the calandria tank system as a simplified two-dimensional one that is still far from the real object, but which captures the essential physics and tests the applicability of the present CFD (computational fluid dynamics) methods to the thermo-hydraulic problem in the moderator system of heavy-water reactors

  9. Computational Investigations in Rectangular Convergent and Divergent Ribbed Channels

    Science.gov (United States)

    Sivakumar, Karthikeyan; Kulasekharan, N.; Natarajan, E.

    2018-05-01

    Computational investigations were carried out on rib-turbulated flow inside convergent and divergent rectangular channels with square ribs of different rib heights and at different Reynolds numbers (Re = 20,000, 40,000 and 60,000). The ribs were arranged in a staggered fashion between the upper and lower surfaces of the test section. The investigations used the computational fluid dynamics software ANSYS Fluent 14.0. Suitable solver settings, such as turbulence models, were identified from the literature, and the boundary conditions were applied to a grid-independent solution. Computations were carried out for both convergent and divergent channels with 0 (smooth duct), 1.5, 3, 6, 9 and 12 mm rib heights to identify the ribbed channel with optimal performance, assessed using a thermo-hydraulic performance parameter. The convergent and divergent rectangular channels show higher Nu values than the standard correlation values.
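
    A thermo-hydraulic performance parameter of the kind used to rank these channels is commonly taken as the Nusselt gain normalized by the cube root of the friction penalty. A minimal sketch with Dittus-Boelter and Blasius smooth-duct baselines; the ribbed-channel values below are invented, not results from the paper.

```python
# Thermo-hydraulic performance parameter for a ribbed channel:
# eta = (Nu/Nu0) / (f/f0)**(1/3), with smooth-duct baselines from
# Dittus-Boelter and Blasius. Ribbed values below are invented.

def nu_smooth(re, pr=0.7):
    return 0.023 * re**0.8 * pr**0.4           # Dittus-Boelter

def f_smooth(re):
    return 0.316 * re**-0.25                   # Blasius (Darcy)

def performance(nu_ribbed, f_ribbed, re):
    return (nu_ribbed / nu_smooth(re)) / (f_ribbed / f_smooth(re))**(1.0/3.0)

for re, nu_r, f_r in [(20000, 120.0, 0.10), (40000, 205.0, 0.095),
                      (60000, 280.0, 0.092)]:
    print(f"Re={re}: eta = {performance(nu_r, f_r, re):.2f}")
```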

  10. Pipe fracture evaluations for leak-rate detection: Probabilistic models

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1993-01-01

    This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for the stochastic performance evaluation of degraded nuclear piping systems. This was accomplished in three distinct stages. First, a statistical analysis was conducted to characterize the various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as the material properties of the pipe, crack morphology variables, and the location of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate the performance of degraded piping systems. It is based on the accurate deterministic models for thermo-hydraulic and fracture-mechanics analyses described in the first paper, the statistical characterization of the input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of the leak-rate detection capability of the piping system can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. The results clearly showed that the model provides satisfactory estimates of the conditional failure probability with much less computational effort than Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping systems in boiling water reactor and pressurized water reactor plants for leak-rate detection applications
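
    At its crudest, the conditional failure probability in such a model can be estimated by Monte Carlo sampling over the stochastic inputs, which is exactly the expensive route the paper's structural-reliability methods avoid. A toy sketch with an invented limit-state function and invented distributions:

```python
# Crude Monte Carlo estimate of a conditional failure probability
# P(failure | leak detected) for a toy limit-state function. The limit
# state, detection criterion and distributions are all invented.

import random

def limit_state(crack_length, toughness):
    """g < 0 means failure. Purely illustrative."""
    return toughness - 12.0 * crack_length

def conditional_failure_probability(n=100_000, seed=42):
    rng = random.Random(seed)
    failures = detected = 0
    for _ in range(n):
        crack = rng.lognormvariate(1.0, 0.4)   # crack length [mm]
        tough = rng.normalvariate(60.0, 8.0)   # toughness measure
        if crack > 2.0:                        # toy leak-detection rule
            detected += 1
            if limit_state(crack, tough) < 0:
                failures += 1
    return failures / detected

print(f"P(failure | leak detected) ~ {conditional_failure_probability():.3f}")
```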

  11. Physical models and numerical methods of the reactor dynamic computer program RETRAN

    International Nuclear Information System (INIS)

    Kamelander, G.; Woloch, F.; Sdouz, G.; Koinig, H.

    1984-03-01

    This report describes the physical models and numerical methods of the reactor dynamics code RETRAN, which simulates reactivity transients in light-water reactors. The neutron-physics part of RETRAN is based on the two-group diffusion equations, which are solved by a discretization similar to the TWIGL method. An exponential transformation is applied, and the inner iterations are accelerated by a coarse-mesh rebalancing procedure. The thermo-hydraulic model approximates the equation of state by a built-in steam-water table and provides options for the calculation of heat-conduction coefficients and heat-transfer coefficients. (Author) [de

  12. Comparison of thermo-hydraulic analysis with measurements for HELIOS. The scaled integral test loop for PEACER

    International Nuclear Information System (INIS)

    Cho, Jae Hyun; Lim, Jun; Kim, Ji Hak; Hwang, Il Soon

    2009-01-01

    A scaled-down lead-bismuth eutectic circulating integral test loop named HELIOS (Heavy Eutectic liquid metal Loop for Integral test of Operability and Safety of PEACER) has been employed to characterize the steady-state isothermal forced-circulation behaviour and the non-isothermal natural-circulation capability of lead and lead-alloy cooled advanced nuclear energy systems (LACANES). Thermal-hydraulic experiments have now been carried out in HELIOS following rigorous calibration campaigns for the temperature and pressure sensors, in particular isothermal steady-state forced convection driven by the pump. The isothermal steady-state forced-convection test was performed to obtain pressure loss information, including friction loss coefficients and form loss coefficients. The data were then compared with a multi-approach analysis including hand-calculation results and computer simulation codes (MARS-LBE, CFX). We report the results of the comparison between analysis and measurements. (author)
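
    The hand calculations mentioned for the loop pressure loss typically chain friction (f·L/D) and form (K) losses over the circuit segments. A minimal sketch of that bookkeeping; the segment data below are invented, not HELIOS geometry.

```python
# Loop pressure-loss hand calculation: sum friction (f*L/D) and form (K)
# losses over the circuit segments at each segment's dynamic head.
# Segment data are invented placeholders, not HELIOS geometry.

segments = [  # (name, f, L [m], D [m], K, velocity [m/s])
    ("core",       0.025, 1.2, 0.020, 1.5, 1.0),
    ("hot leg",    0.020, 3.0, 0.050, 0.8, 0.6),
    ("heat exch.", 0.030, 2.0, 0.015, 2.5, 1.2),
    ("cold leg",   0.020, 3.5, 0.050, 0.8, 0.6),
]
RHO_LBE = 10500.0  # lead-bismuth eutectic density [kg/m^3], ~200 C

total = 0.0
for name, f, length, diam, k, v in segments:
    dyn = 0.5 * RHO_LBE * v**2                 # dynamic head [Pa]
    dp = (f * length / diam + k) * dyn
    total += dp
    print(f"{name:>10}: {dp/1e3:7.2f} kPa")
print(f"{'total':>10}: {total/1e3:7.2f} kPa")
```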

  13. Mini-channel flow experiments and CFD validation analyses with the IFMIF Thermo- Hydraulic Experimental facility (ITHEX)

    International Nuclear Information System (INIS)

    Arbeiter, F.; Heinzel, V.; Leichtle, D.; Stratmanns, E.; Gordeev, S.

    2006-01-01

    The design of the IFMIF High Flux Test Module (HFTM) is based on predictions of the heat transfer in narrow channels conducting helium flow at 50 °C inlet temperature and 0.3 MPa. The resulting helium flow conditions lie in the laminar-turbulent transition regime. The rectangular cooling channels are too short for the coolant flow to develop fully, and relaminarization along the cooling passage is expected. At the shorter sides of the channels secondary flow occurs, which may affect the temperature field inside the irradiation specimen stack. As these conditions are not covered by available experimental data, the dedicated gas loop ITHEX has been constructed to operate at pressures up to 0.42 MPa and temperatures up to 200 °C. Its objective is to conduct experiments for the validation of the STAR-CD CFD code used for the design of the HFTM. As a first stage, two annular test sections with a hydraulic diameter of 1.2 mm have been used; the experiments were varied with respect to gas species (N2, He), inlet pressure, dimensionless heating span and Reynolds number, encompassing the range of operational parameters of the HFTM. Local friction factors and Nusselt numbers have been obtained, giving evidence that the transition regime extends to Reynolds numbers of about 10,000. For heating rates comparable to the HFTM filled with RAFM steels, local heat transfer coefficients are consistent with the measured friction data. To validate local velocity profiles, the ITHEX facility was further equipped with a flat rectangular test section and a Laser Doppler Anemometry (LDA) system. An appropriate optical system has been developed and tested for the tiny observation volume of 40 μm diameter. Velocity profiles induced by the transition from a wide inlet plenum to the flat mini-channels have been measured. Whereas the CFD models were able to reproduce the patterns far away from the nozzle, they show some disagreement for the conditions at the

  14. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  15. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  16. Coupled Model of channels in parallel and neutron kinetics in two dimensions

    International Nuclear Information System (INIS)

    Cecenas F, M.; Campos G, R.M.; Valle G, E. del

    2004-01-01

    In this work an arrangement of thermo-hydraulic channels representing the four quadrants of a BWR reactor core is presented. The channels are coupled to a two-dimensional neutron kinetics model that generates the radial power profile of the reactor. Although the neutronics model is two-dimensional, it is supplemented with additional axial information by considering the axial power profiles of each thermo-hydraulic channel. The steady state is obtained by imposing the same pressure drop across all channels as a boundary condition; this condition is satisfied by iterating on the coolant flow in each channel until the pressure drops in all channels are equal. This steady state is later perturbed by modifying the cross-section values of one assembly. The parallel calculation of the neutronics and the thermo-hydraulics is carried out with PVM (Parallel Virtual Machine) using a master-slave scheme on a local network of computers. (Author)
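
    The equal-pressure-drop boundary condition described here can be satisfied with a simple iteration: guess the common pressure drop, invert each channel's hydraulic characteristic for its flow, and adjust until the channel flows sum to the total. A minimal sketch with quadratic channel characteristics and invented coefficients:

```python
# Flow split among parallel channels sharing one pressure drop.
# Each channel obeys dp = a_i * w_i**2 (coefficients invented); bisect
# on the common dp until the channel flows sum to the total core flow.

def flows_for_dp(dp, coeffs):
    return [(dp / a) ** 0.5 for a in coeffs]

def solve_split(w_total, coeffs, tol=1e-10):
    lo, hi = 0.0, 1e9
    while hi - lo > tol * hi:
        dp = 0.5 * (lo + hi)
        if sum(flows_for_dp(dp, coeffs)) > w_total:
            hi = dp          # too much flow -> lower the pressure drop
        else:
            lo = dp
    return dp, flows_for_dp(dp, coeffs)

coeffs = [120.0, 150.0, 150.0, 180.0]   # one per quadrant channel
dp, flows = solve_split(w_total=40.0, coeffs=coeffs)
print(f"common dp = {dp/1e3:.2f} kPa")
print("channel flows:", [f"{w:.2f}" for w in flows])
```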

  17. Sensitivity Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic Parameters in the CAREM-25 Reactor; Analisis de Sensibilidad de los Parametros Neutronicos ante Incertezas en los Parametros Termohidraulicos en el Reactor CAREM-25

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Oscar [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor was studied. Using the coupled codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects were not very important for the total power, but were much larger for the pressure. The effects were also barely significant for perturbations of the void fraction calculation and of the fuel temperature. The reactivity and the power peaking factor changed markedly in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose.

  18. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  19. Coupled Model of channels in parallel and neutron kinetics in two dimensions; Modelo acoplado de canales en paralelo y cinetica neutronica en dos dimensiones

    Energy Technology Data Exchange (ETDEWEB)

    Cecenas F, M.; Campos G, R.M. [Instituto de Investigaciones Electricas, Av. Reforma 113, Col. Palmira, 62490 Cuernavaca, Morelos (Mexico)]. E-mail: mcf@iie.org.mx; Valle G, E. del [IPN, ESFM, 07738 Mexico D.F. (Mexico)

    2004-07-01

    In this work an arrangement of thermo-hydraulic channels representing the four quadrants of a BWR reactor core is presented. The channels are coupled to a two-dimensional neutron kinetics model that generates the radial power profile of the reactor. Although the neutronics model is two-dimensional, it is supplemented with additional axial information by considering the axial power profiles of each thermo-hydraulic channel. The steady state is obtained by imposing the same pressure drop across all channels as a boundary condition; this condition is satisfied by iterating on the coolant flow in each channel until the pressure drops in all channels are equal. This steady state is later perturbed by modifying the cross-section values of one assembly. The parallel calculation of the neutronics and the thermo-hydraulics is carried out with PVM (Parallel Virtual Machine) using a master-slave scheme on a local network of computers. (Author)

  20. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  1. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The modelled overhead crane consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. With the differential equations of motion of these mechanisms, derived from Lagrange's equations of the second kind, an overhead crane computer model can be built. The computer model was implemented in Matlab. Transients of coordinate, linear speed and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload sway were obtained with respect to the vertical axis. The paper presents a trajectory of the trolley mechanism operating simultaneously with the crane mechanism, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is a valuable means for studying positioning control and anti-sway control systems.
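
    For small sway angles, the Lagrange-derived payload dynamics reduce to a pendulum driven by the trolley acceleration. The sketch below integrates that simplified model; the parameters are invented and the model is far cruder than the paper's Matlab implementation.

```python
# Simplified trolley + payload sway model: for small angles, Lagrange's
# equations give theta_dd = -(g/l)*theta - a_trolley/l. Semi-implicit
# (symplectic) Euler integration; parameters are invented placeholders.

import math

g, l = 9.81, 10.0          # gravity [m/s^2], rope length [m]
a_trolley = 0.3            # constant trolley acceleration [m/s^2]
theta, omega = 0.0, 0.0    # sway angle [rad], angular rate [rad/s]
dt, t_end = 0.001, 10.0

t, max_sway = 0.0, 0.0
while t < t_end:
    alpha = -(g / l) * theta - a_trolley / l   # angular acceleration
    omega += alpha * dt
    theta += omega * dt                        # symplectic Euler step
    max_sway = max(max_sway, abs(theta))
    t += dt

print(f"max sway angle ~ {math.degrees(max_sway):.2f} deg")
```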

  2. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to the practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction must be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inference) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes

  3. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we discuss the changes planned for the restart of the LHC program in 2015. We discuss the changes planned in the use and definition of the computing tiers that were defined within the MONARC project. We present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  4. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  5. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2013-12-01

    Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naïveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.

  6. The off-line computation system for supervising performance of JOYO: JOYPAC system, 2

    International Nuclear Information System (INIS)

    Suzuki, Tomoo; Hasegawa, Akira; Akimoto, Masayuki; Miyamoto, Yoshiaki; Katsuragi, Satoru

    1976-10-01

    HONEYCOMB is a code for detailed calculations in analyzing the nuclear characteristics of the reactor. It performs criticality calculations in a diffusion model and burn-up calculations for 3-dimensional hexagonal-z geometry. It can predict the critical insertion depth of control rods and calculate the 3-dimensional power distribution required by the thermo-hydraulic calculation. Power distribution and burn-up are also obtained for fuel pins, if necessary, as well as for assemblies. FDCAL-2 predicts the coolant flow distribution in every coolant channel between the inlet and outlet plenums in the reactor vessel. In calculating the flow distribution in the assemblies, the subchannel model is used, and the thermal mixing effect is expressed in terms of an apparent heat transfer coefficient. FATEC-3 calculates the temperature distribution within selected assemblies, optionally specified in the given core matrix. At the same time, it estimates the hot-spot temperature, one of the pieces of information needed for confirming safe operation. FDCAL-2 and FATEC-3 have been combined so as to remove their unnecessary overlapping parts, forming a detailed calculation code for analyzing the thermo-hydraulic characteristics of the reactor, FDCAL-3. FDCAL-3 has been linked to HONEYCOMB as a segment of an overlay structure, and this combination of HONEYCOMB and FDCAL-3 forms the detailed calculation subsystem of the JOYPAC system. The detailed calculation subsystem produces a data file of detailed fundamental information, such as the distributions of neutron flux, power etc., for the reactor under stationary operation. This file is required by the quick and simple calculation subsystem SMART and the recording subsystem MASTOR described in Part I. In this way, resort to the time-consuming detailed calculation is reduced as far as possible, and supervision of reactor performance is achieved with both practically sufficient accuracy and reasonable computer cost. (JPN)

  7. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos Modelling with Computers: Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy, T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp 29-39.

  8. Recent advances in modeling and validation of nuclear thermal-hydraulics applications with NEPTUNE CFD - 15471

    International Nuclear Information System (INIS)

    Guingo, M.; Baudry, C.; Hassanaly, M.; Lavieville, J.; Mechitouna, N.; Merigoux, N.; Mimouni, S.; Bestion, D.; Coste, P.; Morel, C.

    2015-01-01

    NEPTUNE CFD is a Computational Multi-(Fluid) Dynamics code dedicated to the simulation of multiphase flows, primarily targeting nuclear thermo-hydraulics applications such as departure from nucleate boiling (DNB) or the two-phase Pressurized Thermal Shock (PTS). It has been co-developed within the joint research/development project NEPTUNE (AREVA, CEA, EDF, IRSN) since 2001. Over the years, to address the aforementioned applications, dedicated physical models and numerical methods have been developed and implemented in the code, including specific sets of models for turbulent boiling flows and two-phase non-adiabatic stratified flows. This paper summarizes the current main modeling capabilities of the code and gives an overview of the associated validation database. A brief summary of emerging applications of the code, such as containment simulation during a potential severe accident or in-vessel retention, is also provided. (authors)

  9. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks behaves as a complicated system: it shows non-linear features, and simulating the behaviour of such systems is difficult. Before installing network equipment, users want to know the capability of their computer network; they do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. We also focus on measuring the bottleneck of the network, defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the network modelled.

  10. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10⁶ MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10⁶ MIPS) and physics analysis (0.5 × 10⁶ MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  11. Computer systems and nuclear industry

    International Nuclear Information System (INIS)

    Nkaoua, Th.; Poizat, F.; Augueres, M.J.

    1999-01-01

    This article deals with computer systems in the nuclear industry. In most nuclear facilities it is necessary to handle a great deal of data and actions in order to help the plant operator to drive and control physical processes and to assure safety. The design of reactors requires reliable computer codes able to simulate neutronic, mechanical or thermo-hydraulic behaviour. Calculations and simulations play an important role in safety analysis. In each of these domains, computer systems have progressively emerged as efficient tools to challenge and master complexity. (A.C.)

  12. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  13. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  14. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  15. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  16. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs

  17. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics and scale-coupling methods.

  18. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  19. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  20. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  1. Experimental and parametric studies of a louvered fin and flat tube compact heat exchanger using computational fluid dynamics

    Directory of Open Access Journals (Sweden)

    P. Karthik

    2015-12-01

    Full Text Available The present study aimed to perform a parametric analysis of the thermo-hydraulic performance of a compact heat exchanger using computational fluid dynamics (CFD). The analysis was carried out at different frontal air velocities by varying geometrical parameters such as fin pitch, transverse tube pitch, longitudinal tube pitch, louver pitch and louver angle. The air-side performance of the heat exchanger was evaluated by calculating the Colburn factor (j) and the Fanning friction factor (f). The comparison of CFD results with the experimental data showed good agreement, and the influence of the various geometrical parameters, over the selected range of values, on the pressure drop, heat transfer coefficient and goodness factor was analyzed. The results obtained from the analysis will be very useful for optimizing the louvered fin and flat tube compact heat exchanger for better thermo-hydraulic performance without the need for time-consuming and expensive experimentation.
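
    The air-side data reduction behind j and f follows directly from their definitions: the Colburn factor is the Stanton number scaled by Pr^(2/3), and the Fanning factor is recovered from the core pressure drop. A minimal sketch with invented operating values; the area-ratio treatment is simplified here, neglecting entrance, exit and flow-acceleration terms.

```python
# Air-side data reduction for a compact heat exchanger: Colburn factor
# j = St * Pr**(2/3) and Fanning friction factor from the core pressure
# drop. Operating values are invented placeholders.

def colburn_j(h, g_core, cp, pr):
    """St = h / (G cp); j = St * Pr^(2/3)."""
    st = h / (g_core * cp)
    return st * pr ** (2.0 / 3.0)

def fanning_f(dp, g_core, rho, area_ratio):
    """f = dp * rho / (2 G^2) * (A_c / A_total), core friction only."""
    return dp * rho / (2.0 * g_core**2) * area_ratio

h = 85.0        # air-side heat transfer coefficient [W/(m^2 K)]
g_core = 6.0    # core mass velocity [kg/(m^2 s)]
cp, pr = 1006.0, 0.707
dp, rho = 300.0, 1.16          # core pressure drop [Pa], air density
area_ratio = 0.008             # free-flow to total transfer area ratio

print(f"j = {colburn_j(h, g_core, cp, pr):.4f}")
print(f"f = {fanning_f(dp, g_core, rho, area_ratio):.4f}")
```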

  2. RFQ modeling computer program

    International Nuclear Information System (INIS)

    Potter, J.M.

    1985-01-01

    The mathematical background for a multiport-network-solving program is described. A method for accurately numerically modeling an arbitrary, continuous, multiport transmission line is discussed. A modification to the transmission-line equations to accommodate multiple rf drives is presented. An improved model for the radio-frequency quadrupole (RFQ) accelerator that corrects previous errors is given. This model permits treating the RFQ as a true eight-port network for simplicity in interpreting the field distribution and ensures that all modes propagate at the same velocity in the high-frequency limit. The flexibility of the multiport model is illustrated by simple modifications to otherwise two-dimensional systems that permit modeling them as linear chains of multiport networks

  3. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    General Article. Computer Based Modelling and Simulation. ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are .... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  4. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  5. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of most environmental or industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented here is now a tool complementary to experimental facilities for studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr

  6. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos is one of the major scientific discoveries of our times. In fact many scientists ... But there are other natural phenomena that are not predictable though ... characteristics of chaos. ... The position and velocity are all that are needed to determine the motion of a .... a system of equations that modelled the earth's weather ...

  7. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing and it is expected to develop further in the near future.

  8. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J/sub Ic/. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak

  9. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  10. Trust models in ubiquitous computing.

    Science.gov (United States)

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  11. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  12. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 6; Issue 3. Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article Volume 6 Issue 3 March 2001 pp 46-54. Fulltext. Click here to view fulltext PDF. Permanent link:

  13. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of the numerical modeling of dynamic problems are summarized in this article. These problems are characteristic of various areas of human activity, in particular problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  14. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  15. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  16. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this paper proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  17. Getting computer models to communicate

    International Nuclear Information System (INIS)

    Caremoli, Ch.; Erhard, P.

    1999-01-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution; to couple the existing validated numerical models together so that they work as one. (authors)

  18. Computational Modeling in Liver Surgery

    Directory of Open Access Journals (Sweden)

    Bruno Christ

    2017-11-01

    Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.

  19. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  20. Cosmic logic: a computational model

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  1. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
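    As a concrete illustration of the class of models described above, the sketch below evaluates a second-order maximum noise entropy response function: a logistic function whose argument is a quadratic form in the stimulus. This is a minimal sketch in Python; the dimensions and all parameter values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def logistic_response(x, a, b, C):
    """Second-order maximum-noise-entropy model for a binary output:
    P(y=1|x) is a logistic function of a quadratic form in the stimulus,
    the form consistent with first- and second-order moment constraints."""
    z = a + b @ x + x @ C @ x
    return 1.0 / (1.0 + np.exp(-z))

# Toy usage in a two-dimensional stimulus space, echoing the reduced
# retina/thalamus models (all numbers here are made up).
rng = np.random.default_rng(0)
a = -0.5
b = rng.normal(size=2)        # first-order (linear) weights
C = rng.normal(size=(2, 2))
C = 0.5 * (C + C.T)           # symmetrize the second-order weights
x = rng.normal(size=2)        # one stimulus sample
print(logistic_response(x, a, b, C))
```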

  2. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or have not been proven at all. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers/risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  3. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

    The results obtained by simulating three cases from the CANON depressurization experiment, using version 7.6 of the TRAC-PF1 computer code implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first thermo-hydraulic standard problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss of primary coolant accident in pressurized water reactors is evaluated. (M.C.K.) [pt

  4. Development of the computer code system for the analyses of PWR core

    International Nuclear Information System (INIS)

    Tsujimoto, Iwao; Naito, Yoshitaka.

    1992-11-01

    This report is one of the materials for the work titled 'Development of the computer code system for the analyses of PWR core phenomena', which is performed under contracts between Shikoku Electric Power Company and JAERI. In this report, the numerical methods adopted in our computer code system are described, that is, 'The basic course and the summary of the analysing method', 'Numerical method for solving the Boltzmann equation', 'Numerical method for solving the thermo-hydraulic equations' and 'Description on the computer code system'. (author)

  5. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  6. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  7. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  8. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models ICC3 2015, organized by PSG College of Technology, Coimbatore, India, during December 17–19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.

  9. Computer modeling of liquid crystals

    International Nuclear Information System (INIS)

    Al-Barwani, M.S.

    1999-01-01

    In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some of the liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture; studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest energy states and investigate if they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)

  10. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  11. Physical model of lean suppression pressure oscillation phenomena: steam condensation in the light water reactor pressure suppression system (PSS)

    International Nuclear Information System (INIS)

    McCauley, E.W.; Holman, G.S.; Aust, E.; Schwan, H.; Vollbrandt, J.

    1980-01-01

    Using the results of large scale multivent tests conducted by GKSS, a physical model of chugging is developed. The unique combination of accurate digital data and cinematic data has enabled the derivation of a detailed, quantified correlation between the dynamic physical variables and the associated two-phase thermo-hydraulic phenomena occurring during the lean suppression (chugging) phases of a loss-of-coolant accident in a boiling water reactor pressure suppression system.

  12. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  13. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  14. NET model coil test possibilities in the TOSKA TWIN configuration

    International Nuclear Information System (INIS)

    Gruenhagen, A.; Heller, R.; Herz, W.; Hofmann, A.; Jentzsch, K.; Kapulla, H.; Kneifel, B.; Komarek, P.; Lehmann, W.; Maurer, W.; Ulbricht, A.; Vogt, A.; Zahn, G.

    1989-07-01

    This report continues an earlier one on the possibilities of NET model coil testing in the TOSKA Upgrade facility at KfK. The investigation of a 'Cluster Test Facility' and a 'Solenoid Test Facility' is followed by the investigation of two further test arrangements, called 'Twin Configurations'. One common feature of both arrangements is that the EURATOM LCT coil delivers a background magnetic field. This coil would be operated at a temperature of 1.8 K and an enhanced current of up to 20 kA, compared to the LCT test where 3.5 K and up to 16 kA were the operating conditions. In one configuration the NET model test coil is adjacent to the LCT coil (ATC = Adjacent Twin Configuration); in the other the NET model coil is inserted into the bore of the LCT coil (ITC = Inserted Twin Configuration), either upright or with a 60° slope. The configurations are investigated with respect to their electromagnetic, mechanical and thermo-hydraulic properties. The requirements for the necessary mechanical support structure of the LCT coil were computed. Installation and cooling of the whole system are discussed. The time schedule and the costs for the test facility modification were estimated. Advantages and disadvantages of the configurations are discussed with respect to the feasibility of the test arrangement and operation. (orig.) [de

  15. Computer modeling of the gyrocon

    International Nuclear Information System (INIS)

    Tallerico, P.J.; Rankin, J.E.

    1979-01-01

    A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity wall, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds megawatt output at 450 MHz with up to 86% overall efficiency

  16. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  17. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  18. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
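    The relaxation dynamics described above can be illustrated with a generic thermal-annealing loop. The toy below anneals a one-dimensional ferromagnetic Ising ring, a deliberately simple stand-in for the paper's planar vertex model; the energy function, cooling schedule and sizes are illustrative assumptions, not the construction used in the paper.

```python
import math
import random

def anneal(n=200, sweeps=2000, t_hi=2.0, t_lo=0.01):
    """Thermal annealing of a 1-D ferromagnetic Ising ring toward its
    ground state; a stand-in for relaxing a vertex model whose ground
    state encodes the result of a computation."""
    s = [random.choice((-1, 1)) for _ in range(n)]
    for sweep in range(sweeps):
        # geometric cooling schedule from t_hi down to t_lo
        t = t_hi * (t_lo / t_hi) ** (sweep / (sweeps - 1))
        for _ in range(n):
            i = random.randrange(n)
            # energy change for flipping spin i (periodic boundaries)
            de = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])
            if de <= 0 or random.random() < math.exp(-de / t):
                s[i] = -s[i]
    return -sum(s[i] * s[(i + 1) % n] for i in range(n))

print("final energy (ground state is -200):", anneal())
```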

  19. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Full Text Available Based on the fact that a computer can be infected by infected computers and exposed computers, and that some computers in susceptible or exposed status can gain immunity through antivirus ability, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spreading of computer viruses on the internet, is determined. Second, this model has a virus-free equilibrium P0, which means that the infected part of the computer population disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, this model has only one viral equilibrium P*, which means that the computer population persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
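    A minimal numerical sketch of such a compartment model is given below. The abstract fixes only the ingredients (susceptible, exposed and infected computers, antivirus-induced immunity, and the threshold role of R0); the specific rate equations, parameter names and values here are illustrative assumptions rather than the paper's equations.

```python
def virus_step(s, e, i, beta, sigma, gamma, mu, dt=0.01):
    """One Euler step of a generic SEIR-style computer-virus model:
    susceptible -> exposed -> infected, with antivirus cure of infected
    machines (gamma) and immunization of susceptible machines (mu)."""
    ds = -beta * s * i - mu * s
    de = beta * s * i - sigma * e - mu * e
    di = sigma * e - gamma * i
    return s + dt * ds, e + dt * de, i + dt * di

s, e, i = 0.99, 0.0, 0.01          # initial fractions of computers
for _ in range(100_000):
    s, e, i = virus_step(s, e, i, beta=0.5, sigma=0.2, gamma=0.3, mu=0.05)
print(f"long-run infected fraction: {i:.4f}")  # dies out below threshold
```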

  20. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet the demands on processing and storage requirements of those experiments. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  1. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  2. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term 'pervasive', e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. It points to the emergence in the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one... Whereas Weiser's original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown's (1997) terms, 'invisible...

  3. Climate Ocean Modeling on Parallel Computers

    Science.gov (United States)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  4. Computational Intelligence. Mortality Models for the Actuary

    NARCIS (Netherlands)

    Willemse, W.J.

    2001-01-01

    This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's

  5. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  8. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    where x increases from zero to N, the saturation value. Box 1. Matrix Methods ... such as Laplace transforms and non-linear differential equations with ... atomic bomb project in the US in the early ... his work on game theory and computers.

  9. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  10. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  11. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...
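    To convey the flavor of such a model, here is a drastically simplified sketch: a repertoire of bit-string "behaviors" is emitted at random, reinforcement is arranged on a random-interval (RI) schedule, and each new generation is bred (with mutation) from a reinforced behavior when one occurs. Population size, mutation rate, target class and schedule value are invented for illustration and do not reproduce McDowell's actual selection and reproduction rules.

```python
import random

BITS, POP = 10, 100
TARGET = set(range(41))             # the reinforced response class

def mutate(b, p=0.05):
    """Point mutation: flip each bit of a behavior with probability p."""
    for k in range(BITS):
        if random.random() < p:
            b ^= 1 << k
    return b

def run(mean_interval=20.0, ticks=5000):
    repertoire = [random.getrandbits(BITS) for _ in range(POP)]
    next_setup = random.expovariate(1.0 / mean_interval)
    reinforcers = 0
    for t in range(1, ticks + 1):
        emitted = random.choice(repertoire)
        if emitted in TARGET and t >= next_setup:
            reinforcers += 1            # RI schedule delivers reinforcement
            next_setup = t + random.expovariate(1.0 / mean_interval)
            parent = emitted            # selection by consequences
        else:
            parent = random.choice(repertoire)
        repertoire = [mutate(parent) for _ in range(POP)]
    return reinforcers, sum(b in TARGET for b in repertoire) / POP

print(run())                            # (reinforcers earned, target share)
```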

  12. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to provide information support for the works related to the 'Ukrytie' object stabilization and its conversion into an ecologically safe system, and for analyzing, forecasting and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model.

  13. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  14. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  15. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  16. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
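    The sketch below conveys the general idea of ranked model retrieval: each model is indexed as a bag of annotation terms and scored against a query with a plain TF-IDF weighting. The corpus, the model identifiers and the single-field indexing are illustrative assumptions; they mimic, rather than reproduce, the ranking implemented in BioModels Database.

```python
import math
from collections import Counter

# Toy index: each model is described by MIRIAM-style annotations and
# free-text metadata flattened into a bag of terms (identifiers invented).
MODELS = {
    "MODEL_A": "glycolysis kinetics yeast sbml michaelis menten",
    "MODEL_B": "calcium oscillation signalling hepatocyte",
    "MODEL_C": "glycolysis oscillation yeast atp",
}

def rank(query):
    """Score each model by TF-IDF overlap with the query terms and
    return the models sorted by decreasing relevance."""
    docs = {m: Counter(text.split()) for m, text in MODELS.items()}
    n = len(docs)
    df = Counter(t for d in docs.values() for t in set(d))
    scores = {}
    for m, d in docs.items():
        total = sum(d.values())
        scores[m] = sum((d[t] / total) * math.log(n / df[t])
                        for t in query.split() if t in d)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank("yeast glycolysis"))   # MODEL_A and MODEL_C outrank MODEL_B
```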

  17. Computational challenges in modeling gene regulatory events.

    Science.gov (United States)

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  18. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
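    As a sketch of how such aspect-based similarities might be combined, the snippet below computes a Jaccard index per aspect and blends the results with problem-specific weights. The aspect names follow the list above; the feature encodings, example annotations and weights are illustrative assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def model_similarity(m1, m2, weights):
    """Weighted combination of per-aspect similarities."""
    return sum(w * jaccard(m1[aspect], m2[aspect])
               for aspect, w in weights.items())

m1 = {"annotations": {"GO:0006096", "CHEBI:15422"},
      "structure": {("PFK", "ATP"), ("HK", "glucose")},
      "equations": {"michaelis_menten"},
      "dynamics": {"oscillatory"}}
m2 = {"annotations": {"GO:0006096"},
      "structure": {("PFK", "ATP")},
      "equations": {"mass_action"},
      "dynamics": {"oscillatory"}}

weights = {"annotations": 0.4, "structure": 0.3,
           "equations": 0.2, "dynamics": 0.1}
print(f"similarity: {model_similarity(m1, m2, weights):.2f}")
```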

  19. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  20. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  1. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  2. Validation and comparison of two-phase flow modeling capabilities of CFD, sub channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu

    2013-10-15

    Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify difference between the code predictions. • All three scales predict results in good agreement to experiment. • Sub cooled boiling models are identified as field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermo hydraulic simulations tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE{sub C}FD as fine mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub channels codes and TRACE as system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used data base provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvements.

  3. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
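    A minimal sketch of the symmetric hopping picture is given below: a single particle on a ring of sites whose links carry random energy barriers, with Arrhenius acceptance probabilities. The rejection-style dynamics and all numbers are illustrative assumptions; the large-scale simulations reported in the paper are far bigger and are aimed at the frequency-dependent conductivity.

```python
import numpy as np

rng = np.random.default_rng(1)
L, BETA, STEPS = 64, 5.0, 100_000    # sites, inverse temperature, attempts

# Symmetric hopping: the barrier is a property of the link, so the jump
# probability is the same in both directions across a given link.
barriers = rng.random(L)             # link i connects site i and site i+1
hop_prob = np.exp(-BETA * barriers)  # Arrhenius factor per link

pos, x = 0, 0                        # ring position and unwrapped coordinate
for _ in range(STEPS):
    step = 1 if rng.random() < 0.5 else -1
    link = pos if step == 1 else (pos - 1) % L
    if rng.random() < hop_prob[link]:
        pos = (pos + step) % L
        x += step

print(f"mean squared displacement per attempt: {x * x / STEPS:.4f}")
```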

  4. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,

  5. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)
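    The trade-off behind convective differencing schemes can be illustrated on a textbook problem. The sketch below solves steady one-dimensional advection-diffusion with central and hybrid differencing: above a cell Peclet number of 2 the central scheme produces unbounded oscillations, while the hybrid scheme falls back to pure upwind and stays bounded at the cost of extra numerical diffusion, the kind of qualitative error the paper warns about. This is a generic finite-volume demonstration, not the paper's multiphase premixing equations.

```python
import numpy as np

def solve_adv_diff(n=41, peclet=200.0, scheme="central"):
    """Steady 1-D advection-diffusion with phi(0)=0, phi(1)=1 on a
    uniform grid, discretized with central or hybrid differencing."""
    dx = 1.0 / (n - 1)
    pe_cell = peclet * dx                # cell Peclet number u*dx/Gamma
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0
    for i in range(1, n - 1):
        if scheme == "central" or pe_cell <= 2.0:
            aw = 1.0 + 0.5 * pe_cell     # west coefficient
            ae = 1.0 - 0.5 * pe_cell     # east coefficient (can go negative!)
        else:                            # hybrid: upwind, diffusion dropped
            aw, ae = pe_cell, 0.0
        A[i, i - 1], A[i, i], A[i, i + 1] = -aw, aw + ae, -ae
    return np.linalg.solve(A, b)

for scheme in ("central", "hybrid"):
    phi = solve_adv_diff(scheme=scheme)
    # central: min/max escape [0, 1]; hybrid: bounded but smeared
    print(f"{scheme}: min = {phi.min():.3f}, max = {phi.max():.3f}")
```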

  6. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    particle flow, and numerical techniques allowing the modeling of particles suspended in a fluid. The general concept behind each family of techniques is described. Pros and cons for each technique are given along with examples and references to applications to fresh cementitious materials....

  7. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
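    The following stand-in illustrates the ingredients named above (a transient heat-transfer solve, laser heating applied as a boundary condition, and the periodic addition of powder layers) with a plain explicit finite-difference scheme in one dimension. It does not use MSC SINDA, and every material and process number below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Illustrative steel-like material and process numbers.
k, rho, cp = 20.0, 8000.0, 500.0     # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)               # thermal diffusivity, m^2/s
dz, dt = 50e-6, 1e-5                 # layer height (m), time step (s)
q_laser, T_base = 2e6, 300.0         # absorbed flux (W/m^2), baseplate (K)

assert alpha * dt / dz**2 < 0.5      # explicit-scheme stability limit

T = np.full(4, T_base)               # temperatures of the starting layers
peak = T_base
for step in range(5000):
    if step % 1000 == 999:
        T = np.append(T, T_base)     # deposit a fresh powder layer
    Tn = T.copy()
    # interior nodes: 1-D conduction along the build direction
    Tn[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T_base                   # baseplate held at fixed temperature
    # top node: conduction downward plus the absorbed laser flux
    Tn[-1] += (alpha * dt / dz**2 * (T[-2] - T[-1])
               + q_laser * dt / (rho * cp * dz))
    T = Tn
    peak = max(peak, T.max())

print(f"peak temperature during the build: {peak:.0f} K")
```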

  8. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    Full Text Available The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
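    For readers unfamiliar with the underlying game, a minimal classic Minority Game loop is sketched below; the UAREI modelling of the paper's variants sits on top of mechanics like these. Agent count, memory length and strategy count are illustrative choices, and the payoff is the plain (non-variable) one.

```python
import random

N, M, S, ROUNDS = 101, 3, 2, 500   # agents, memory, strategies/agent, rounds

def new_strategy():
    """A strategy maps each of the 2**M possible histories to a side."""
    return [random.randint(0, 1) for _ in range(2 ** M)]

agents = [{"strats": [new_strategy() for _ in range(S)],
           "scores": [0] * S,
           "wealth": 0} for _ in range(N)]
history = 0                         # last M minority outcomes, bit-packed

for _ in range(ROUNDS):
    choices = []
    for a in agents:
        best = max(range(S), key=lambda k: a["scores"][k])
        choices.append(a["strats"][best][history])
    minority = 0 if sum(choices) > N / 2 else 1
    for a, c in zip(agents, choices):
        if c == minority:
            a["wealth"] += 1        # payoff goes to the minority side
        for k in range(S):          # update virtual strategy scores
            if a["strats"][k][history] == minority:
                a["scores"][k] += 1
    history = ((history << 1) | minority) & (2 ** M - 1)

print("mean wealth:", sum(a["wealth"] for a in agents) / N)
```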

  9. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  10. Transient computational fluid dynamics analysis of emergency core cooling injection at natural circulation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Scheuerer, Martina, E-mail: Martina.Scheuerer@grs.de [Gesellschaft fuer Anlagen- und Reaktorsicherheit, Forschungsinstitute, 85748 Garching (Germany); Weis, Johannes, E-mail: Johannes.Weis@grs.de [Gesellschaft fuer Anlagen- und Reaktorsicherheit, Forschungsinstitute, 85748 Garching (Germany)

    2012-12-15

    Highlights: ► Pressurized thermal shocks are important phenomena for plant life extension and aging. ► The thermal-hydraulics of PTS have been studied experimentally and numerically. ► In the Large Scale Test Facility a loss of coolant accident was investigated. ► CFD software is validated to simulate the buoyancy driven flow after ECC injection. - Abstract: Within the framework of the European Nuclear Reactor Integrated Simulation Project (NURISP), computational fluid dynamics (CFD) software is validated for the simulation of the thermo-hydraulics of pressurized thermal shocks. A proposed validation experiment is the test series performed within the OECD ROSA V project in the Large Scale Test Facility (LSTF). The LSTF is a 1:48 volume-scaled model of a four-loop Westinghouse pressurized water reactor (PWR). ROSA V Test 1-1 investigates temperature stratification under natural circulation conditions. This paper describes calculations which were performed with the ANSYS CFD software for emergency core cooling injection into one loop at single-phase flow conditions. Following the OECD/NEA CFD Best Practice Guidelines (Mahaffy, 2007) the influence of grid resolution, discretisation schemes, and turbulence models (shear stress transport and Reynolds stress model) on the mixing in the cold leg were investigated. A half-model was used for these simulations. The transient calculations were started from a steady-state solution at natural circulation conditions. The final calculations were obtained in a complete model of the downcomer. The results are in good agreement with data.

  11. Transient computational fluid dynamics analysis of emergency core cooling injection at natural circulation conditions

    International Nuclear Information System (INIS)

    Scheuerer, Martina; Weis, Johannes

    2012-01-01

    Highlights: ► Pressurized thermal shocks are important phenomena for plant life extension and aging. ► The thermal-hydraulics of PTS have been studied experimentally and numerically. ► In the Large Scale Test Facility a loss of coolant accident was investigated. ► CFD software is validated to simulate the buoyancy driven flow after ECC injection. - Abstract: Within the framework of the European Nuclear Reactor Integrated Simulation Project (NURISP), computational fluid dynamics (CFD) software is validated for the simulation of the thermo-hydraulics of pressurized thermal shocks. A proposed validation experiment is the test series performed within the OECD ROSA V project in the Large Scale Test Facility (LSTF). The LSTF is a 1:48 volume-scaled model of a four-loop Westinghouse pressurized water reactor (PWR). ROSA V Test 1-1 investigates temperature stratification under natural circulation conditions. This paper describes calculations which were performed with the ANSYS CFD software for emergency core cooling injection into one loop at single-phase flow conditions. Following the OECD/NEA CFD Best Practice Guidelines (Mahaffy, 2007) the influence of grid resolution, discretisation schemes, and turbulence models (shear stress transport and Reynolds stress model) on the mixing in the cold leg were investigated. A half-model was used for these simulations. The transient calculations were started from a steady-state solution at natural circulation conditions. The final calculations were obtained in a complete model of the downcomer. The results are in good agreement with data.

  12. Computational modeling of epiphany learning.

    Science.gov (United States)

    Chen, Wei James; Krajbich, Ian

    2017-05-02

    Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
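
    As a rough illustration of the model class described above, the sketch below implements a generic sequential-sampling ("epiphany") process in Python: latent evidence for the optimal strategy accumulates noisily across trials, and the agent commits once a threshold is crossed. The drift, noise, and threshold values are illustrative assumptions, not the paper's fitted parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def trial_of_commitment(drift=0.15, noise=1.0, threshold=3.0, max_trials=100):
            """Return the trial at which the agent commits, or None."""
            evidence = 0.0
            for t in range(1, max_trials + 1):
                evidence += drift + noise * rng.normal()   # noisy feedback signal
                if evidence >= threshold:                  # epiphany: commit now
                    return t
            return None                                    # never committed

        commits = [trial_of_commitment() for _ in range(1000)]
        committed = [t for t in commits if t is not None]
        print(f"committed in {100 * len(committed) / len(commits):.0f}% of runs, "
              f"median commitment trial = {int(np.median(committed))}")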

  13. Analysis of the three dimensional core kinetics NESTLE code coupling with the advanced thermo-hydraulic code systems, RELAP5/SCDAPSIM and its application to the Laguna Verde Central reactor

    International Nuclear Information System (INIS)

    Salazar C, J.H.; Nunez C, A.; Chavez M, C.

    2004-01-01

    The objective of this work is to propose a methodology for coupling the RELAP5/SCDAPSIM and NESTLE codes. The development of this coupling is being carried out within a doctoral program in Energy Engineering with a nuclear profile at the Faculty of Engineering of the UNAM, together with the National Commission of Nuclear Safety and Safeguards (CNSNS). The general purpose of this type of development is to have tools composed of multiple programs or codes, such that the three-dimensional kinetics of the core and the dynamics of the reactor (thermal-hydraulics) can be simulated together. In the past, because computing the complete response of both systems was impractical, the models were developed separately, putting much emphasis on one while neglecting the other. These methodologies, called best estimate, will serve the nuclear industry to evaluate, with a higher degree of detail, nuclear power plant designs (for modifications to existing plants or for new concepts in advanced reactor designs), besides analysing events (transients and accidents), among other applications. The coupled system was applied to design and research studies of the Laguna Verde Nuclear Power Plant (CNLV). (Author)
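
    The sketch below outlines, in runnable Python, the explicit operator-splitting loop that neutronics/thermal-hydraulics couplings of this kind typically use. The solver objects and their methods (solve, update_feedback) are hypothetical stand-ins for illustration only, not the actual NESTLE or RELAP5/SCDAPSIM interfaces.

        def coupled_step(neutronics, thermal_hydraulics, dt):
            """One explicit coupling step (skeleton only; interfaces are hypothetical)."""
            # 1) advance the 3-D core kinetics using the latest feedback state
            power = neutronics.solve(dt)
            # 2) advance thermal-hydraulics with that power as the heat source
            th_state = thermal_hydraulics.solve(dt, power)
            # 3) return fuel temperature and coolant density to the kinetics
            #    (Doppler and moderator feedback for the next step)
            neutronics.update_feedback(fuel_temperature=th_state.fuel_temperature,
                                       coolant_density=th_state.coolant_density)
            return power, th_state

        def run_transient(neutronics, thermal_hydraulics, t_end, dt):
            t = 0.0
            while t < t_end:
                coupled_step(neutronics, thermal_hydraulics, dt)
                t += dt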

  14. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J.; Talbot, Paul W.; Rinaldi, Ivan; Maljovec, Dan; Wang, Bei; Pascucci, Valerio; Zhao, Haihua

    2015-01-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
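
    The sketch below illustrates the surrogate-model idea on a toy problem: an inexpensive least-squares polynomial is trained on a handful of runs of a stand-in "expensive" function and then queried at many points. The function and the polynomial form are illustrative assumptions; RAVEN's actual reduced order models are more sophisticated.

        import numpy as np

        def expensive_simulation(x):
            """Stand-in for a long-running physics code (a scalar figure of merit)."""
            return 1500.0 + 200.0 * np.tanh(3.0 * (x - 0.5))

        # 1) run the expensive code at a small number of training points
        x_train = np.linspace(0.0, 1.0, 12)
        y_train = expensive_simulation(x_train)

        # 2) fit a cheap surrogate (least-squares cubic polynomial)
        surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

        # 3) query the surrogate at many points (microseconds instead of hours)
        x_query = np.linspace(0.0, 1.0, 100_000)
        err = np.max(np.abs(surrogate(x_query) - expensive_simulation(x_query)))
        print(f"max surrogate error over the query grid: {err:.1f}")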

  15. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rinaldi, Ivan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Dan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  16. Computational models of airway branching morphogenesis.

    Science.gov (United States)

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach are demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.
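
    For readers unfamiliar with cohesive behavior with damage, the sketch below implements a generic bilinear traction-separation law of the kind commonly used for grain-boundary elements. The stiffness and separation limits are illustrative values, not those of the cited model.

        def cohesive_traction(separation, damage, k0=1.0e6, delta0=1.0e-3, delta_f=1.0e-2):
            """Bilinear traction-separation law; returns (traction, updated damage)."""
            if separation <= 0.0:
                return k0 * separation, damage       # contact/compression: no damage
            if separation > delta0:                  # irreversible damage growth
                d_new = min(1.0, (delta_f / separation)
                                 * (separation - delta0) / (delta_f - delta0))
                damage = max(damage, d_new)
            return (1.0 - damage) * k0 * separation, damage

        # monotonic opening: traction rises to a peak at delta0, then softens to zero
        d = 0.0
        for sep in (0.5e-3, 1.0e-3, 4.0e-3, 1.0e-2):
            t, d = cohesive_traction(sep, d)
            print(f"separation = {sep:.1e} m -> traction = {t:8.1f} Pa, damage = {d:.2f}")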

  18. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  19. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  20. Generating Computational Models for Serious Gaming

    NARCIS (Netherlands)

    Westera, Wim

    2018-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  1. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

    In cloud computing environments, the number of cloud virtual machines (VMs) keeps growing, and virtual machine security and management face enormous challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling deployment, with a study of the virtual machine security architecture based on AHP (Analytic Hierarchy Process) virtual machine de...

  2. Ewe: a computer model for ultrasonic inspection

    International Nuclear Information System (INIS)

    Douglas, S.R.; Chaplin, K.R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues

  3. Light reflection models for computer graphics.

    Science.gov (United States)

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.

  4. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
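
    The book's case study reduces to a few lines of code. The sketch below solves u'(t) = -a u(t), u(0) = I, with the theta-rule (theta = 0 gives Forward Euler, 0.5 Crank-Nicolson, 1 Backward Euler); the parameter values are illustrative, not necessarily those used in the text.

        import numpy as np

        def theta_rule_decay(I=1.0, a=2.0, T=4.0, dt=0.1, theta=0.5):
            n = int(round(T / dt))
            u = np.empty(n + 1)
            u[0] = I
            # u[i+1] = u[i] * (1 - (1-theta)*a*dt) / (1 + theta*a*dt)
            factor = (1 - (1 - theta) * a * dt) / (1 + theta * a * dt)
            for i in range(n):
                u[i + 1] = factor * u[i]
            return np.linspace(0, n * dt, n + 1), u

        t, u = theta_rule_decay()
        print(f"max error vs. exact solution: {np.max(np.abs(u - np.exp(-2.0 * t))):.2e}")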

  5. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  6. Quantum Vertex Model for Reversible Classical Computing

    Science.gov (United States)

    Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng

    We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
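
    The sketch below shows the thermal-annealing procedure mentioned above, applied to a toy Ising-like energy with random couplings rather than the paper's vertex model: Metropolis spin flips under a geometrically decreasing temperature schedule. All sizes and rates are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 40
        J = rng.normal(size=(n, n))
        J = (J + J.T) / 2.0                           # symmetric random couplings
        np.fill_diagonal(J, 0.0)
        spins = rng.choice([-1, 1], size=n)

        for T in np.geomspace(5.0, 0.01, 20_000):     # slowly decreasing temperature
            i = rng.integers(n)
            dE = 2.0 * spins[i] * (J[i] @ spins)      # energy cost of flipping spin i
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                spins[i] = -spins[i]                  # Metropolis acceptance

        print(f"final energy: {-0.5 * spins @ J @ spins:.2f}")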

  7. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  8. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  9. Thermo-hydraulic simulations of the experimental fast reactor core

    International Nuclear Information System (INIS)

    Silveira Luz, M. da; Braz Filho, F.A.; Borges, E.M.

    1985-01-01

    A study of the core and of the metallic fuel performance of the experimental fast reactor, from the thermal-hydraulic point of view, was carried out employing the COBRA IV-I code. The good safety characteristics of this reactor and the feasibility of using metallic fuel in an experimental fast reactor were demonstrated. (Author) [pt

  10. Thermo hydraulics of a steam boiler forced circulation

    International Nuclear Information System (INIS)

    Tucakovic, Dragan; Zivanovic, Titoslav; Stevanovic, Vladimir

    2006-01-01

    In order to minimize dryout in the steam boiler furnace of the Thermal Power Plant Kolubara B, internally rifled wall tubes were specified. This type of tube, with many spiral grooves cut into the bore, prevents film boiling and enables nucleate boiling to be maintained even at vapour qualities approaching 1. To verify that the choice of rifled tubes over the cheaper smooth tubes is justified, the variation of the actual and critical vapour quality with furnace height is analysed under uniform and non-uniform heat flux through the evaporator walls. Furthermore, hydraulic calculations are made for various steam boiler loads, for both rifled and smooth tube types, to check the influence of the rifling on the pressure drop increase in comparison with smooth tubes. The selection of the circulation pump is also checked. Key words: evaporator, forced circulation, rifled tubes, critical vapour quality, pressure drop
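
    The heat balance behind the "actual vapour quality vs. furnace height" check is simple; the sketch below evaluates it for uniform heat flux with illustrative numbers. A real dryout assessment would compare against a critical-quality correlation, which is not reproduced here, so the constant x_crit below is a placeholder assumption.

        def vapour_quality(z, x_in=0.0, q_flux=2.0e5, perimeter=0.15, m_dot=0.5, h_fg=1.0e6):
            """Thermodynamic quality at height z [m]: x(z) = x_in + q'' P z / (m_dot h_fg)."""
            return x_in + q_flux * perimeter * z / (m_dot * h_fg)

        x_crit = 0.9   # placeholder; rifled tubes sustain nucleate boiling to x near 1
        for z in range(0, 31, 5):
            x = vapour_quality(z)
            flag = "  <-- dryout risk" if x > x_crit else ""
            print(f"z = {z:2d} m: x = {x:.2f}{flag}")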

  11. Thermo-hydraulic performance enhancement of solar air heater ...

    African Journals Online (AJOL)

    Keywords: Solar air heater; Nusselt number; thermal efficiency; multiple arcs with …

  12. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models in the low pressure region. It is possible to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations in cases where the particle distribution function is not Maxwellian. We follow this route to enhance the fluid model and then use it in a hybrid plasma model. (paper)

  13. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  14. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under the mechanical stimulus up to optimizing the performance of sports equipment, through the patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  15. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
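
    The sketch below reproduces the flavour of this analysis with SymPy (instead of Maple) on a standard SIR model with demography: a Groebner basis of the steady-state equations and the corresponding basic reproductive number. The paper's Schistosomiasis and Dengue models are more elaborate; the model below is a generic stand-in.

        import sympy as sp

        S, I_, beta, gamma, mu = sp.symbols('S I beta gamma mu', positive=True)

        # steady state of an SIR model with birth/death rate mu (N normalized to 1)
        eqs = [mu - beta * S * I_ - mu * S,        # dS/dt = 0
               beta * S * I_ - (gamma + mu) * I_]  # dI/dt = 0

        print(sp.groebner(eqs, S, I_, order='lex'))
        print("R0 =", beta / (gamma + mu))         # basic reproductive number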

  16. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and Refrigeration system performance models in these simulations tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-state and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  17. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. These proceedings contain refereed papers contributed by the participants of AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  18. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed, describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives

  19. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process and reasons are given for using probability-based modeling with the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
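
    One common way to model dependent inputs of the kind discussed above is a Gaussian copula: draw correlated standard normals, uniformize them, and push them through the desired marginals. The sketch below does this with illustrative marginals and correlation; names such as flow_rate and conductivity are hypothetical.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        corr = np.array([[1.0, 0.7],
                         [0.7, 1.0]])                       # target dependence
        z = rng.standard_normal((10_000, 2)) @ np.linalg.cholesky(corr).T
        u = stats.norm.cdf(z)                               # uniformize each column

        # push the uniform scores through the desired marginals
        flow_rate = stats.lognorm.ppf(u[:, 0], s=0.4, scale=100.0)
        conductivity = stats.uniform.ppf(u[:, 1], loc=1.0, scale=4.0)

        print(f"sample correlation: {np.corrcoef(flow_rate, conductivity)[0, 1]:.2f}")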

  20. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  1. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  2. Computer Modelling of Photochemical Smog Formation

    Science.gov (United States)

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  3. A Computational Model of Fraction Arithmetic

    Science.gov (United States)

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  4. Model Checking - Automated Verification of Computational Systems

    Indian Academy of Sciences (India)

    General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681.

  5. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  6. Computer Modeling of Platinum Reforming Reactors | Momoh ...

    African Journals Online (AJOL)

    This paper, instead of using a theoretical approach, has considered a computer model as a means of assessing the reformate composition for three-stage fixed bed reactors in a platforming unit. This is done by identifying many possible hydrocarbon transformation reactions that are peculiar to the process unit, identifying the ...

  7. Particle modeling of plasmas computational plasma physics

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1991-01-01

    Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged: computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand what the important processes are. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments; it allows them to determine if they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. There are a wide range of plasma models that are in use. There are particle models, fluid models, hybrid particle fluid models. These can come in many forms, such as explicit models, implicit models, reduced dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and almost an endless variety of other models. Here the author discusses only particle models. He gives a few examples of the use of such models, taken from work done by the Plasma Modeling Group at UCLA, with which he is most familiar. However, this gives only a small view of the wide range of work being done around the US, or for that matter around the world.
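
    At the heart of such particle models is a simple update kernel. The sketch below shows a leapfrog push of many particles in a prescribed one-dimensional electric field; a full particle-in-cell code would additionally deposit charge on a grid and solve Poisson's equation self-consistently each step, which is omitted here. All values are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        L = 2 * np.pi                        # periodic domain length
        qm, dt = -1.0, 0.05                  # charge-to-mass ratio, time step

        x = rng.uniform(0.0, L, 10_000)      # particle positions
        v = rng.normal(0.0, 0.2, 10_000)     # thermal velocities

        def E_field(x):                      # prescribed field, standing in for
            return 0.1 * np.sin(x)           # the self-consistent Poisson solution

        for _ in range(200):                 # leapfrog: kick, then drift
            v += qm * E_field(x) * dt
            x = (x + v * dt) % L             # periodic boundary

        print(f"mean kinetic energy: {0.5 * np.mean(v ** 2):.4f}")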

  8. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; and (4) model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  9. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science outstanding results are yielded by advanced simulation methods, based on state of the art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extractions from big data, better understanding and predicting of social behaviour and modelling health and environment changes.

  10. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹

  11. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
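
    The idea of computer calculus can be conveyed with a toy forward-mode dual-number class: values and exact derivatives propagate together through the model. This is only an illustration of the principle; GRESS itself works by source transformation of FORTRAN and, as described above, emphasizes the adjoint method.

        import math

        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.value * other.deriv + self.deriv * other.value)
            __rmul__ = __mul__

        def dual_exp(x):
            return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

        k = Dual(0.5, 1.0)              # seed the derivative of the input
        r = 3 * k * dual_exp(k)         # model response r(k) = 3 k exp(k)
        print(f"r = {r.value:.4f}, dr/dk = {r.deriv:.4f}")   # exact, not finite diff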

  12. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  13. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  14. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory... attacker remains somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world and it defines a model that captures the abilities of the attacker. Furthermore a quantitative implementation

  15. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) allow us to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computing power, and with it the improvement of resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh dimension falls below a few kilometers: we shall have to find other models. The expertise gained in numerical analysis at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale.

  16. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research owing to the richness of the huge amounts of medical information about the symptoms of diseases and how to distinguish between them to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle on treatment decisions. This paper introduces four hybrid Rough – Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of the various knowledge discovery models, which use different knowledge discovery techniques for data pre-processing, reduction, and data mining, supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.
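
    The core Rough Sets operation used by such models is easy to state in code: partition objects into indiscernibility classes over the condition attributes, then form the lower and upper approximations of a decision class. The sketch below does this for a toy decision table; the attribute values are invented for illustration.

        from collections import defaultdict

        # toy decision table: object id -> condition-attribute values
        conditions = {1: ('fever', 'high'), 2: ('fever', 'high'),
                      3: ('fever', 'low'), 4: ('none', 'low'), 5: ('none', 'low')}
        target = {1, 3}                 # objects with a positive decision

        classes = defaultdict(set)      # indiscernibility classes
        for obj, attrs in conditions.items():
            classes[attrs].add(obj)

        lower, upper = set(), set()
        for c in classes.values():
            if c <= target:             # class certainly inside the target set
                lower |= c
            if c & target:              # class possibly inside the target set
                upper |= c

        print("lower approximation:", lower)        # certainly positive
        print("upper approximation:", upper)        # possibly positive
        print("boundary region:", upper - lower)    # undecidable from the data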

  17. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  18. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    Science.gov (United States)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  19. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  20. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  1. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  2. Computational fluid dynamics modelling in cardiovascular medicine.

    Science.gov (United States)

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited. For permission

  3. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  4. The deterministic computational modelling of radioactivity

    International Nuclear Information System (INIS)

    Damasceno, Ralf M.; Barros, Ricardo C.

    2009-01-01

    This paper describes a computational application (software) that models simple radioactive decay, decay to stable nuclei, and chain decay with up to thirteen directly coupled radioactive decays. It includes an internal data bank with the decay constants of the various existing decays, which considerably simplifies use of the program and makes it accessible to people who are not specialists in the nuclear area. The paper presents numerical results for typical model problems.
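
    The mathematics underneath such a program is the Bateman problem. The sketch below solves a three-member chain A → B → C with the matrix exponential of the decay-rate matrix; the decay constants are illustrative, not taken from the paper.

        import numpy as np
        from scipy.linalg import expm

        lam_a, lam_b = 0.05, 0.01            # decay constants [1/s]; C is stable
        A = np.array([[-lam_a,    0.0, 0.0],
                      [ lam_a, -lam_b, 0.0],
                      [   0.0,  lam_b, 0.0]])
        n0 = np.array([1000.0, 0.0, 0.0])    # initial atoms of A, B, C

        for t in (0.0, 30.0, 120.0, 600.0):
            n = expm(A * t) @ n0             # n(t) = exp(A t) n(0)
            print(f"t = {t:5.0f} s: A = {n[0]:7.1f}, B = {n[1]:7.1f}, C = {n[2]:7.1f}")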

  5. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and what a SWOT analysis of the cloud shows. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  6. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  7. ADGEN: ADjoint GENerator for computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs
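
    ADGEN itself instruments FORTRAN source, but the economics quoted above (one backward sweep yielding derivatives with respect to all parameters, versus one perturbed run per parameter) can be seen in a toy reverse-mode example. The model function below is hypothetical and only illustrates the principle:

        import math

        # Toy reverse-mode (adjoint) differentiation: for a scalar response
        # y = f(p0, p1, p2), one backward sweep yields dy/dp for ALL
        # parameters, whereas forward perturbation needs one extra model
        # evaluation per parameter.
        def model(p):
            # hypothetical response: y = sin(p0)*p1 + p1*p2
            a = math.sin(p[0])
            return a * p[1] + p[1] * p[2]

        def model_adjoint(p):
            # forward pass (store intermediates), then reverse sweep
            a = math.sin(p[0])
            ybar = 1.0                               # seed: d y / d y
            return [ybar * p[1] * math.cos(p[0]),    # dy/dp0
                    ybar * (a + p[2]),               # dy/dp1
                    ybar * p[1]]                     # dy/dp2

        p = [0.3, 2.0, 1.5]
        print(model_adjoint(p))
        # check against finite differences (the "parameter perturbation" route)
        eps = 1e-6
        fd = [(model([pi + eps if i == j else pi for j, pi in enumerate(p)])
               - model(p)) / eps for i in range(3)]
        print(fd)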

  8. Computational Design Modelling : Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts discuss the computational processes within this field, with the aim of developing a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches developed and studied over recent years. The outcome is new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  9. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis) in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous in vitro microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  10. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    Science.gov (United States)

    Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung, with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted for the rabbit.

  11. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
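
    For orientation, one common form of the MNSE in the micropolar fluid literature (the dissertation's exact notation and coefficients may differ) couples u, w and p as

        u_t + (u\cdot\nabla)u - (\nu+\nu_r)\Delta u + \nabla p = 2\nu_r\,\nabla\times w + f, \qquad \nabla\cdot u = 0,
        j\,[w_t + (u\cdot\nabla)w] - c_1\Delta w - c_2\,\nabla(\nabla\cdot w) + 4\nu_r w = 2\nu_r\,\nabla\times u + g,

    where \nu is the kinematic viscosity, \nu_r the vortex (micro-rotation) viscosity, j the microinertia density, and c_1, c_2 angular viscosity coefficients.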

  12. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    The development of selective agonists of the δ-opioid receptor, as well as models of the interaction of ligands with this receptor, is a subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations for the tested models of DOR were found between the efficacy (e_rel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with e_rel of a series of enkephalin analogues calculated from in vitro experiments. This investigation therefore allows us to suggest a reliable model of DOR. The newly generated model of the DOR receptor can be used for further in silico experiments and will enable faster and more accurate design of selective and effective ligands for the δ-opioid receptor.

  13. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  14. Computer models for optimizing radiation therapy

    International Nuclear Information System (INIS)

    Duechting, W.

    1998-01-01

    The aim of this contribution is to outline how methods of systems analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatment such as radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is represented by the so-called linear-quadratic model describing the survival fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed. Thus, it is possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation for an optimal patient-specific treatment schedule prior to clinical therapy. (orig.)
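
    The linear-quadratic model named above has the standard closed form

        S(D) = \exp\left(-\alpha D - \beta D^2\right),

    where D is the absorbed dose and \alpha, \beta are tissue-specific radiosensitivity parameters; for a fractionated schedule of n fractions of dose d, the surviving fraction becomes S = \exp\left(-n(\alpha d + \beta d^2)\right). This is standard radiobiology; the record does not give the parameter values used in the simulation runs.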

  15. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology simulates this phenomenon with high-resolution dynamic models in which numerical weather prediction (NWP) models are coupled to fire behavior models. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data are available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment.

  16. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  17. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
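
    The papers' actual formulation is not reproduced in the record, but the first stage can be sketched generically as a quadratic-assignment-style layout search driven by a simple mutate-and-accept heuristic. The cost function, flows and distances below are invented placeholders for the real objective (radiation exposure plus material movement):

        import random

        # Sketch of stage 1: search glovebox-to-location assignments.
        N = 6                                   # gloveboxes == locations
        flow = [[random.randint(0, 9) for _ in range(N)] for _ in range(N)]
        dist = [[abs(i - j) for j in range(N)] for i in range(N)]

        def cost(perm):
            # quadratic-assignment-style cost: material flow x distance
            return sum(flow[a][b] * dist[perm[a]][perm[b]]
                       for a in range(N) for b in range(N))

        def mutate(perm):
            # swap the locations of two gloveboxes
            p = perm[:]
            i, j = random.sample(range(N), 2)
            p[i], p[j] = p[j], p[i]
            return p

        best = list(range(N))
        for _ in range(2000):                   # greedy stochastic search
            cand = mutate(best)
            if cost(cand) < cost(best):
                best = cand
        print(best, cost(best))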

  18. Computer models in the design of FXR

    International Nuclear Information System (INIS)

    Vogtlin, G.; Kuenning, R.

    1980-01-01

    Lawrence Livermore National Laboratory is developing a 15 to 20 MeV electron accelerator with a beam current goal of 4 kA. This accelerator will be used for flash radiography and has a requirement of high reliability. Components being developed include spark gaps, Marx generators, water Blumleins and oil insulation systems. A SCEPTRE model was developed that takes into consideration the non-linearity of the ferrite and the time dependency of the emission from a field emitter cathode. This model was used to predict an optimum charge time to obtain maximum magnetic flux change from the ferrite. This model and its application will be discussed. JASON was used extensively to determine optimum locations and shapes of supports and insulators. It was also used to determine stress within bubbles adjacent to walls in oil. Computer results will be shown and bubble breakdown will be related to bubble size

  19. Computational modeling of a forward lunge

    DEFF Research Database (Denmark)

    Alkjær, Tine; Wieland, Maja Rose; Andersen, Michael Skipper

    2012-01-01

    during forward lunging. Thus, the purpose of the present study was to establish a musculoskeletal model of the forward lunge to computationally investigate the complete mechanical force equilibrium of the tibia during the movement to examine the loading pattern of the cruciate ligaments. A healthy female...... was selected from a group of healthy subjects who all performed a forward lunge on a force platform, targeting a knee flexion angle of 90°. Skin-markers were placed on anatomical landmarks on the subject and the movement was recorded by five video cameras. The three-dimensional kinematic data describing...... the forward lunge movement were extracted and used to develop a biomechanical model of the lunge movement. The model comprised two legs including femur, crus, rigid foot segments and the pelvis. Each leg had 35 independent muscle units, which were recruited according to a minimum fatigue criterion...

  20. Computational fluid dynamic modelling of cavitation

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  1. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time significant advancements have taken place in the requirements for safety assessment as well as in computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which requires basically the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/ whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  2. Computational model of a whole tree combustor

    Energy Technology Data Exchange (ETDEWEB)

    Bryden, K.M.; Ragland, K.W. [Univ. of Wisconsin, Madison, WI (United States)

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs 15 cm in diameter are burned in a 4 m deep fuel bed. Solid and gas temperature, solid and gas velocity, CO, CO₂, H₂O, HC and O₂ profiles are calculated. This deep, fixed-bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the overall bed is an oxidizing region and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m²-hr, which matches well the consumption rate of 3,770 kg/m²-hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m² (5.0×10⁶ Btu/hr-ft²).
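
    The quoted heat-release figure can be checked with a quick unit conversion; the constants below are standard conversion factors, not values from the paper.

        # Check: is 16 MW/m^2 really about 5.0e6 Btu/hr-ft^2?
        W_TO_BTU_PER_HR = 3.412142      # 1 W  = 3.412142 Btu/hr
        M2_TO_FT2 = 10.7639             # 1 m^2 = 10.7639 ft^2

        q_si = 16.0e6                   # W/m^2
        q_imperial = q_si * W_TO_BTU_PER_HR / M2_TO_FT2
        print(f"{q_imperial:.2e} Btu/hr-ft^2")   # ~5.07e6, consistent with 5.0e6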

  3. THYDE-B1/MOD1: a computer code for analysis of small-break loss-of-coolant accident of boiling water reactors

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Akimoto, Masayuki

    1982-08-01

    THYDE-B1/MOD1 is a computer code to analyze thermo-hydraulic transients of the reactor cooling system of a BWR, mainly during a small-break loss-of-coolant accident (SB-LOCA), with special emphasis on the behavior of pressure and mixture level in the pressure vessel. The coolant behavior is simulated with a volume-and-junction method based on assumptions of thermal equilibrium and homogeneous conditions for two-phase flow. A characteristic feature of this code is a three-region representation of the state of the coolant in a control volume, in which three regions, i.e., subcooled liquid, saturated mixture and saturated steam regions, are allowed to exist. The regions are separated by moving boundaries, tracked by mass and energy balances for each region. The interior of the pressure vessel is represented by two volumes with three regions: one for the inside of the shroud and the other for the outside, while other portions of the system are treated with a homogeneous model. This method, although very simple, has been verified to be adequate for BWR SB-LOCAs, in which the hydraulic transient is relatively slow and the cooling of the core strongly depends on the mixture level behavior in the vessel. In order to simulate the system behavior, THYDE-B1 is provided with analytical models for reactor kinetics, heat generation and conduction in fuel rods and structures, heat transfer between coolant and solid surfaces, coolant injection systems, breaks and discharge systems, jet pumps, recirculation pumps, and so on. Verification of the code has been conducted; good predictive capability has been indicated through comparison of calculated results with experimental data from ROSA-III small-break tests. This report presents the analytical models, solution method, and input data requirements of the THYDE-B1/MOD1 code. (author)
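
    THYDE-B1's mixture-level tracking is far more detailed than this, but the core idea (a moving boundary driven by lumped mass balances) can be sketched as follows; all geometry and flow numbers are invented for illustration, not taken from the code.

        # Sketch: mixture level in one vessel region from a lumped mass
        # balance, dM/dt = w_in - w_break, level = M / (rho_mix * A).
        rho_mix = 500.0              # kg/m^3, homogeneous two-phase density
        A = 10.0                     # m^2, flow area inside the shroud
        M = rho_mix * A * 5.0        # start with a 5 m mixture level
        w_in, w_break = 30.0, 80.0   # kg/s, injection vs. break flow
        dt = 1.0                     # s

        for t in range(600):         # 10 minutes of transient
            M += dt * (w_in - w_break)
            level = M / (rho_mix * A)
            if level <= 2.0:         # hypothetical top-of-core elevation
                print(f"core uncovery predicted at t ~ {(t + 1) * dt:.0f} s")
                break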

  4. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
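
    A tiny instance in the spirit of the book's case studies: choose custom instructions subject to an area budget so as to maximize speedup, i.e. a 0/1 knapsack expressible as a MILP. The data are invented, and the brute-force solver below stands in for a real MILP solver, which is what the book's methods would use.

        from itertools import product

        # maximize sum(speedup[i]*x[i])  s.t.  sum(area[i]*x[i]) <= budget,
        # with x[i] binary ("include custom instruction i or not").
        speedup = [1.2, 0.8, 2.5, 1.0]   # hypothetical cycles saved
        area    = [3.0, 1.0, 4.0, 2.0]   # hypothetical silicon area cost
        budget  = 6.0

        best_x, best_val = None, -1.0
        for x in product([0, 1], repeat=len(speedup)):   # exhaustive: 2^4 cases
            if sum(a * xi for a, xi in zip(area, x)) <= budget:
                val = sum(s * xi for s, xi in zip(speedup, x))
                if val > best_val:
                    best_x, best_val = x, val
        print(best_x, best_val)   # picks instructions 2 and 3 (area 6.0, gain 3.5)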

  5. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial on modeling computer virus propagation dynamics relates it to other notable events occurring in the network, permitting preventive policies to be established in network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.

  6. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents that permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  7. Getting computer models to communicate; Faire communiquer les modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, Ch. [Electricite de France (EDF), 75 - Paris (France). Dept. Mecanique et Modeles Numeriques; Erhard, P. [Electricite de France (EDF), 75 - Paris (France). Dept. Physique des Reacteurs

    1999-07-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  8. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, computers on the network are equipped with antivirus software. The computer virus model is established. Through analysis of the model, disease-free and endemic equilibrium points are calculated, and the stability conditions of the equilibria are derived. To illustrate the theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
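
    The paper's own compartments and parameters are not given in the record; as a generic stand-in, a classic SIR-style virus model with node turnover shows where "disease-free versus endemic" comes from (all rates below are hypothetical):

        # SIR-type computer virus model with node turnover rate mu:
        #   dS/dt = mu*N - beta*S*I/N - mu*S
        #   dI/dt = beta*S*I/N - (gamma + mu)*I     (gamma: cleaning rate)
        #   dR/dt = gamma*I - mu*R
        beta, gamma, mu, N = 0.5, 0.2, 0.05, 10_000.0
        R0 = beta / (gamma + mu)      # endemic equilibrium exists iff R0 > 1
        print("R0 =", R0)

        S, I, R = N - 10.0, 10.0, 0.0
        dt = 0.1
        for _ in range(int(200 / dt)):          # 200 time units, explicit Euler
            dS = mu * N - beta * S * I / N - mu * S
            dI = beta * S * I / N - (gamma + mu) * I
            dR = gamma * I - mu * R
            S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        print(f"infected fraction near the endemic state: {I / N:.3f}")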

  9. Modeling Reality: How Computers Mirror Life

    International Nuclear Information System (INIS)

    Inoue, J-I

    2005-01-01

    Modeling Reality: How Computers Mirror Life covers a wide range of modern subjects in complex systems, suitable not only for undergraduate students who want to learn about modelling 'reality' by using computer simulations, but also for researchers who want to learn something about subjects outside of their majors and need a simple guide. Readers are not required to have specialized training before they start the book. Each chapter is organized so as to train the reader to grasp the essential idea of simulating phenomena and guide him/her towards more advanced areas. The topics presented in this textbook fall into two categories. The first is at graduate level, namely probability, statistics, information theory, graph theory, and the Turing machine, which are standard topics in courses offered by information science and information engineering departments. The second addresses more advanced topics, namely cellular automata, deterministic chaos, fractals, game theory, neural networks, and genetic algorithms. Several topics included here (neural networks, game theory, information processing, etc.) are now among the main subjects of statistical mechanics, and many papers related to these interdisciplinary fields are published in Journal of Physics A: Mathematical and General, so readers of this journal will be familiar with the subject areas of this book. However, each area is restricted to an elementary level, and if readers wish to know more about the topics they are interested in, they will need more advanced books. For example, on neural networks, the text deals with the back-propagation algorithm for perceptron learning. Nowadays, however, this is a rather old topic, so the reader might well choose, for example, Introduction to the Theory of Neural Computation by J Hertz et al (Perseus Books, 1991) or Statistical Physics of Spin Glasses and Information Processing by H Nishimori (Oxford University Press, 2001) for further reading. Nevertheless, this book is worthwhile.

  10. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  11. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
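
    GeantV's kernels are C++ with explicit SIMD back-ends, but the contrast it exploits can be shown in miniature with NumPy, where a vectorized kernel processes a whole "basket" of tracks in one array expression. The energy-loss formula here is a made-up placeholder, not a GeantV physics model.

        import numpy as np

        def eloss_scalar(energies, step):
            # one track at a time: an interpreted Python loop, no SIMD benefit
            return [e - 0.02 * step * (e ** 0.5) for e in energies]

        def eloss_basket(energies, step):
            # whole basket at once: one array expression, SIMD/SIMT-friendly
            return energies - 0.02 * step * np.sqrt(energies)

        tracks = np.random.uniform(1.0, 100.0, size=1_000_000)
        fast = eloss_basket(tracks, step=0.5)
        slow = eloss_scalar(tracks[:5], step=0.5)   # same result, per element
        print(fast[:5], slow)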

  12. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365

  13. A computational model of motor neuron degeneration.

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations.

  14. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  15. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas of pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial model computability is presented with algebraic trees realizing intuitionistic type...

  16. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
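
    A minimal data-structure sketch of the encoding the method describes; the names, fields and proximity rule below are invented for illustration and are not taken from the patent record.

        from dataclasses import dataclass, field

        @dataclass
        class ConnectionElement:
            kind: str      # predetermined connection type, e.g. "stud"
            pos: tuple     # position in the construction element's frame

        @dataclass
        class ConstructionElement:
            name: str
            connectors: list = field(default_factory=list)

        def close_pairs(a, b, tol=0.5):
            """Connection elements of a and b within a predetermined proximity."""
            return [(ca, cb) for ca in a.connectors for cb in b.connectors
                    if sum((x - y) ** 2 for x, y in zip(ca.pos, cb.pos)) <= tol ** 2]

        brick1 = ConstructionElement("brick1", [ConnectionElement("stud", (0, 0, 1))])
        brick2 = ConstructionElement("brick2", [ConnectionElement("tube", (0, 0, 1.2))])
        print(close_pairs(brick1, brick2))   # candidates for connectivity lookup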

  17. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  18. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  19. Assessment of some interfacial shear correlations in a model of ECC bypass flow in PWR reactor downcomer

    International Nuclear Information System (INIS)

    Popov, N.K.; Rohatgi, U.S.

    1987-01-01

    The bypass/refill process in the PWR reactor downcomer, following a large rupture of a cold-leg coolant supply pipe, is a complicated thermo-hydraulic two-phase flow phenomenon. Mathematical modeling of such phenomena is always accompanied by the difficult task of selecting suitable constitutive correlations. When a predominantly hydrodynamic phenomenon such as the ECC refill process of the reactor lower plenum is considered, the phasic interfacial friction is the most influential constitutive correlation. Therefore, assessment of the well-known, widely used interfacial friction constitutive correlations in the model of the ECC bypass/refill process is the subject of this paper.
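
    For reference, interfacial shear in such models is typically closed as \tau_i = \tfrac{1}{2} f_i \rho_g (u_g - u_l)\,\lvert u_g - u_l \rvert, and one widely assessed annular-flow choice is the Wallis correlation

        f_i = 0.005\,\left[1 + 75\,(1 - \alpha)\right],

    where \alpha is the void fraction. Wallis is quoted here only as a representative example; the record does not state which correlations the paper actually assesses.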

  20. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The...

  1. Thermohydraulic analysis of pressurized water reactors

    International Nuclear Information System (INIS)

    Veloso, M.A.

    1980-01-01

    The computer program PANTERA is applied to the thermo-hydraulic analysis of pressurized water reactor (PWR) cores. It is a version of COBRA-IIIC in which a new thermal conduction model for fuel rods was introduced. The results calculated by this program are compared with experimental data obtained from bundles of fuel rods simulating reactor conditions, and the validity of the new thermal model is checked as well. The PANTERA code, through a simplified calculation procedure, is used in the thermo-hydraulic analysis of the Indian Point Unit 2 reactor core under stationary conditions. The results are discussed and compared with design data. (author)
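
    The fuel-rod conduction side of such a subchannel code rests on the standard steady radial solution: for a cylindrical pellet with uniform volumetric heating q''' and constant conductivity k, the centreline-to-surface temperature rise is ΔT = q''' R² / (4k). A quick worked number with illustrative PWR-like values (not taken from the PANTERA report):

        # Steady 1-D radial conduction in a fuel pellet:
        #   dT = q''' * R^2 / (4 k)   (uniform heating, constant conductivity)
        q3 = 3.0e8        # W/m^3, illustrative volumetric heat rate
        R = 4.1e-3        # m, pellet radius (illustrative)
        k = 3.0           # W/m-K, UO2 conductivity (illustrative)

        dT = q3 * R**2 / (4.0 * k)
        print(f"centreline-to-surface rise: {dT:.0f} K")   # ~420 K here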

  2. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  3. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher-level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  4. Computational modeling of intraocular gas dynamics

    International Nuclear Information System (INIS)

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-01-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF_6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF_6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF_6 was used, no significant expansion was observed. Also, head positioning for the treatment of a retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear gave a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF_6 is 1.4 times greater than that when using SF_6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. (paper)

  5. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
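
    The Landau-Lifshitz-Gilbert equation that the report's phase-field models solve has the standard form

        \partial_t \mathbf{m} = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \partial_t \mathbf{m},

    where \mathbf{m} = \mathbf{M}/M_s is the unit magnetization, \gamma the gyromagnetic ratio, \alpha the Gilbert damping parameter, and \mathbf{H}_{\mathrm{eff}} the effective field collecting exchange, anisotropy, magnetostatic and applied contributions. This is the textbook form; the report's specific material parameters and boundary treatments are not reproduced here.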

  6. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind its counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO2. The influence of terrestrial ecosystems on atmospheric CO2 can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem carbon models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO2 concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adopted simple box-flow exchange models, in which photosynthetic CO2 uptake and respiratory CO2 release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality, etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change
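
    A minimal sketch of the box-flow structure of the early TECMs described above is given below, assuming just two carbon pools (vegetation and soil) with first-order transfer; the pool sizes and rate constants are hypothetical round numbers of roughly global magnitude, not parameters of any cited model.

```python
# Minimal two-pool terrestrial carbon box model (illustrative only; pool
# sizes and rate constants are hypothetical, not from any cited TECM).
def step(veg, soil, npp, dt):
    """Advance vegetation and soil carbon pools by one explicit Euler step."""
    litterfall = 0.10 * veg   # 1/yr turnover of vegetation carbon (assumed)
    resp_soil = 0.03 * soil   # 1/yr heterotrophic respiration (assumed)
    return veg + dt * (npp - litterfall), soil + dt * (litterfall - resp_soil)

veg, soil = 550.0, 1500.0     # Pg C, rough global magnitudes
for _ in range(100):          # 100 years at a constant NPP of 60 Pg C/yr
    veg, soil = step(veg, soil, npp=60.0, dt=1.0)
print(f"vegetation: {veg:.0f} Pg C, soil: {soil:.0f} Pg C")
```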

  7. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of the plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and, finally, errors of commission in nuclear power plants. Quantitative or prescriptive models that predict an operator's situation assessment in a given situation, i.e., the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, few computational situation assessment models for NPP operators have been proposed, and those that exist insufficiently embed human cognitive characteristics. We therefore proposed a new computational situation assessment model of nuclear power plant operators. The proposed model incorporates significant cognitive factors and uses a Bayesian belief network (BBN) as its architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior; this paper shows the details
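
    The paper does not publish its network, but the Bayesian-update mechanism at the core of any BBN-based situation assessment can be sketched as follows; the plant states, indicator and probabilities below are invented for illustration only.

```python
# Toy Bayesian update over hypothetical plant states, illustrating the
# mechanism a BBN-based situation assessment model builds on. States,
# indicator and likelihoods are invented for the example.
def update(prior, likelihood, observation):
    """Posterior P(state | observation) by Bayes' rule with normalization."""
    posterior = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

prior = {"normal": 0.90, "small_leak": 0.08, "large_leak": 0.02}
likelihood = {  # P(indicator reading | state)
    "normal":     {"pressure_low": 0.05, "pressure_ok": 0.95},
    "small_leak": {"pressure_low": 0.60, "pressure_ok": 0.40},
    "large_leak": {"pressure_low": 0.98, "pressure_ok": 0.02},
}
belief = update(prior, likelihood, "pressure_low")
print(belief)  # belief mass shifts toward the leak states
```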

  8. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends towards strengthening the general-education and worldview functions of computer science call for additional research on the…

  9. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  10. Computer modelling of eddy current probes

    International Nuclear Information System (INIS)

    Sullivan, S.P.

    1992-01-01

    Computer programs have been developed for modelling impedance and transmit-receive eddy current probes in two-dimensional axisymmetric configurations. These programs, which are based on analytic equations, simulate bobbin probes in infinitely long tubes and surface probes on plates. They calculate the probe signal due to uniform variations in conductor thickness, resistivity and permeability. These signals depend on probe design and frequency. A finite element numerical program has been procured to calculate magnetic permeability in non-linear ferromagnetic materials. Permeability values from these calculations can be incorporated into the above analytic programs to predict signals from eddy current probes with permanent magnets in ferromagnetic tubes. These programs were used to test various probe designs for new testing applications. Measurements of magnetic permeability in magnetically biased ferromagnetic materials have been performed by superimposing experimental signals, from special laboratory ET probes, on impedance plane diagrams calculated using these programs. (author). 3 refs., 2 figs

  11. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned; it will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  12. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  13. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models, developed exclusively in MATLAB/Simulink and including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a two-stream turbofan engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics modeling, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. The reduction of computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  14. Model of the containment building of Almaraz NPP and the system of recombiners PARs, with the GOTHIC code, for the study of the diffusion of combustible gases

    International Nuclear Information System (INIS)

    Garcia Gonzalez, M.; Huelamo, E.; Martinez, M.; Perez, J. R.

    2014-01-01

    This paper presents an analysis of the distribution of gases within the containment building, carried out with a simulation model built with the thermo-hydraulic code GOTHIC, in which the gas control system based on passive autocatalytic recombiners (PARs) has been evaluated. The model considers severe accident scenarios with the specific conditions that produce the highest hydrogen generation rates. The aim is to verify the effectiveness of the gas control system expected to be installed in the Almaraz Nuclear Power Plant, so that the number and location of the recombiner units fulfil their function of preventing the formation of explosive atmospheres that would impair the integrity of the containment, reducing and limiting the concentration of combustible gases during the postulated accidents. (Author)

  15. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a blackboard architecture and a qualitative model, an expert system was developed to assist the user in defining the computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with at least 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  16. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, accumulation of scientific knowledge and performing analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections of knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize the knowledge. Each document can be represented as a whole as well as by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.

  17. The use of conduction model in laser weld profile computation

    Science.gov (United States)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep-penetration laser beam welding of a flat workpiece of carbon steel were computed. A semi-analytical conduction model solved with Green's function method was used in the computations. In the model, the moving heat source was attenuated exponentially in accordance with the Beer-Lambert law. Computational results were compared with those of the experiment.
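
    For reference, Beer-Lambert attenuation of the moving source means the beam intensity decays with depth $z$ as

$$I(z) = I_0\, e^{-\mu z},$$

    so the heat deposited per unit depth is proportional to $\mu I_0 e^{-\mu z}$, with $I_0$ the surface intensity and $\mu$ the absorption coefficient of the material.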

  18. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  19. Computational and Organotypic Modeling of Microcephaly ...

    Science.gov (United States)

    Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computational and experimental models to probe the underlying molecular targets, cellular consequences, and biological processes. We describe an Adverse Outcome Pathway (AOP) framework for microcephaly derived from literature on all gene, chemical, or viral effects and brain development. Overlap with NTDs is likely, although the AOP connections identified here focused on microcephaly as the adverse outcome. A query of the Mammalian Phenotype Browser database for 'microcephaly' (MP:0000433) returned 85 gene associations; several function in microtubule assembly and the centrosome cycle regulated by microcephalin (MCPH1), a gene for primary microcephaly in humans. The developing ventricular zone is the likely target. In this zone, neuroprogenitor cells (NPCs) self-replicate during the 1st trimester, setting brain size, followed by neural differentiation of the neocortex. Recent studies with human NPCs confirmed infectivity with Zika virions invoking critical cell loss (apoptosis) of precursor NPCs; similar findings have been shown with fetal alcohol or methylmercury exposure in rodent studies, leading to mathematical models of NPC dynamics in size determination of the ventricular zone. A key event

  20. Computer modeling of the Cabriolet Event

    International Nuclear Information System (INIS)

    Kamegai, M.

    1979-01-01

    Computer modeling techniques are described for calculating the results of underground nuclear explosions at depths shallow enough to produce cratering. The techniques are applied to the Cabriolet Event, a well-documented nuclear excavation experiment, and the calculations give good agreement with the experimental results. It is concluded that, given data obtainable by outside observers, these modeling techniques are capable of verifying the yield and depth of underground nuclear cratering explosions, and that they could thus be useful in monitoring another country's compliance with treaty agreements on nuclear testing limitations. Several important facts emerge from the study: (1) seismic energy is produced by only a fraction of the nuclear yield, a fraction depending strongly on the depth of shot and the mechanical properties of the surrounding rock; (2) temperature of the vented gas can be predicted accurately only if good equations of state are available for the rock in the detonation zone; and (3) temperature of the vented gas is strongly dependent on the cooling effect, before venting, of mixing with melted rock in the expanding cavity and, to a lesser extent, on the cooling effect of water in the rock

  1. Random matrix model of adiabatic quantum computing

    International Nuclear Information System (INIS)

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-01-01

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
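
    For reference, the Brody parameter $\beta$ used above enters through the nearest-neighbor spacing distribution, which interpolates between the Poissonian ($\beta = 0$) and Wigner ($\beta = 1$) limits:

$$P_\beta(s) = (\beta+1)\, b\, s^{\beta} \exp\!\left(-b\, s^{\beta+1}\right), \qquad b = \left[\Gamma\!\left(\tfrac{\beta+2}{\beta+1}\right)\right]^{\beta+1}.$$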

  2. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  3. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute

  4. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-the core and reflector graphite structure. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to a reactor safety. Computational fluid dynamic model developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of results of the density-driven stratified flow in the inlet pipe will be compared with results of the experimental results.

  5. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  6. Editorial: Modelling and computational challenges in granular materials

    OpenAIRE

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss the current progress and latest advancements in the field of advanced numerical methods and modelling of granular materials. The focus will be on computational methods, improved algorithms and the m...

  7. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
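
    Biocellion's actual callback API is not reproduced here; the following stdlib-only Python sketch shows just the generic discrete agent-on-a-grid update pattern the abstract describes, with invented rules (agents random-walk on a periodic lattice and merge on collision).

```python
# Generic discrete agent-based update on a grid (illustrative pattern only;
# this is not the Biocellion API, and the rules are invented).
import random

GRID = 50                      # periodic square lattice
random.seed(1)
cells = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(200)}

def step(cells):
    """One update: every agent random-walks one site; agents landing on the
    same site merge (a deliberate simplification to keep the sketch short)."""
    new = set()
    for (x, y) in cells:
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        new.add(((x + dx) % GRID, (y + dy) % GRID))
    return new

for _ in range(100):
    cells = step(cells)
print(len(cells), "agents after 100 steps")
```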

  8. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  9. Vehicle - Bridge interaction, comparison of two computing models

    Science.gov (United States)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with 1 degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.
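
    For reference, the first variant corresponds to the classical moving-load problem for a Bernoulli-Euler beam (damping terms omitted for brevity),

$$EI\,\frac{\partial^4 w(x,t)}{\partial x^4} + \mu\,\frac{\partial^2 w(x,t)}{\partial t^2} = \delta(x - vt)\,F(t),$$

    with $EI$ the bending stiffness, $\mu$ the mass per unit length, $w$ the deflection, $v$ the vehicle velocity and $F(t)$ the interaction force; the lumped-mass variant collapses this to a single-degree-of-freedom oscillator.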

  10. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  11. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  12. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  13. PETRI NET MODELING OF COMPUTER VIRUS LIFE CYCLE

    African Journals Online (AJOL)

    Dr Obe

    Dynamic system analysis is applied to model the virus life cycle, and simulation of the derived model is carried out by creating Matlab files for five different computer viruses. Keywords: virus lifecycle, Petri nets, modeling, simulation.

  14. Regenerating computer model of the thymus

    International Nuclear Information System (INIS)

    Lumb, J.R.

    1975-01-01

    This computer model simulates the cell population kinetics of the development and later degeneration of the thymus. Nutritional factors are taken into account by the growth of blood vessels in the simulated thymus. The stem cell population is kept at its maximum by allowing some stem cells to divide into two stem cells until the population reaches its maximum, thus regenerating the thymus after an insult such as irradiation. After a given number of population doublings, the maximum allowed stem cell population is gradually decreased in order to simulate the degeneration of the thymus. Results show that the simulated thymus develops and degenerates in a pattern similar to that of the natural thymus. This simulation is used to evaluate cellular kinetic data for the thymus. The results from testing the internal consistency of available data are reported. The number of generations which most represents the natural thymus includes seven dividing generations of lymphocytes and one mature, nondividing generation of small lymphocytes. The size of the resulting developed thymus can be controlled without affecting other variables by changing the maximum stem cell population allowed. In addition, recovery from irradiation is simulated

  15. Computational modeling of epidural cortical stimulation

    Science.gov (United States)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2008-12-01

    Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
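
    The orientation dependence described above is often summarized by the activating function of extracellular stimulation (after Rattay): for a straight fiber with axial coordinate $x$ in an extracellular potential $V_e$, depolarization is driven where

$$f(x) = \frac{\partial^2 V_e}{\partial x^2} > 0,$$

    so fibers with different orientations beneath the electrode sample different second derivatives of the same potential field, and reversing the stimulus polarity flips the sign of $f$.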

  16. Geometric modeling for computer aided design

    Science.gov (United States)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual-level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes, resulting in a streamlined exchange of data between programs that reduces errors and improves efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  17. Review of computational thermal-hydraulic modeling

    International Nuclear Information System (INIS)

    Keefer, R.H.; Keeton, L.W.

    1995-01-01

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix

  18. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science, and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Computational Intelligence Agent-Oriented Modelling

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2006-01-01

    Roč. 5, č. 2 (2006), s. 430-433 ISSN 1109-2777 R&D Projects: GA MŠk 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : multi-agent systems * adaptive agents * computational intelligence Subject RIV: IN - Informatics, Computer Science

  20. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  1. Deployment Models: Towards Eliminating Security Concerns From Cloud Computing

    OpenAIRE

    Zhao, Gansen; Chunming, Rong; Jaatun, Martin Gilje; Sandnes, Frode Eika

    2010-01-01

    Cloud computing has become a popular choice as an alternative to investing in new IT systems. When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security-related features to address different requirements and scenarios and can serve as reference models for deployment. D

  2. Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?

    OpenAIRE

    Bogataj Habjan, Kristina; Pucihar, Andreja

    2017-01-01

    This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact and importance for cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in

  3. The complete guide to blender graphics computer modeling and animation

    CERN Document Server

    Blain, John M

    2014-01-01

    Smoothly Leads Users into the Subject of Computer Graphics through the Blender GUIBlender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments

  4. Computational Models for Nonlinear Aeroelastic Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  5. Editorial: Modelling and computational challenges in granular materials

    NARCIS (Netherlands)

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss

  6. Security Issues Model on Cloud Computing: A Case of Malaysia

    OpenAIRE

    Komeil Raisian; Jamaiah Yahaya

    2015-01-01

    With the development of cloud computing, many people's views regarding infrastructure architectures, software distribution and improvement models have changed significantly. Cloud computing is associated with a pioneering deployment architecture, which can be realized through grid computing, utility computing and autonomic computing. The fast transition towards it has increased worries regarding a critical issue for the effective transition to cloud computing. From the security v

  7. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing has a very important significance for fulfilling intelligent information processing and harmonious communication between human beings and computers. A new model for emotional agents is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it is efficient in handling simple emotions.

  8. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
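
    A minimal sketch of the embarrassingly parallel model-building pattern such studies exploit is shown below, using only the Python standard library; train() is a stand-in for fitting one ligand-based model, and the hyperparameter grid is hypothetical.

```python
# Minimal sketch of embarrassingly parallel model building, the pattern
# exploited on cloud instances. train() is a placeholder for a real learner;
# the hyperparameter grid is hypothetical.
from concurrent.futures import ProcessPoolExecutor
import random

def train(params):
    """Stand-in for fitting one model; returns (params, validation score)."""
    random.seed(hash(params) & 0xFFFF)      # deterministic fake score
    return params, random.random()

grid = [(depth, n) for depth in (4, 8, 16) for n in (100, 300, 1000)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:     # one model per worker/instance
        results = list(pool.map(train, grid))
    best = max(results, key=lambda r: r[1])
    print("best hyperparameters:", best[0])
```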

  9. Subcooled decompression analysis of the ROSA and the LOFT semiscale blowdown test data with the digital computer code DEPCO-MULTI

    International Nuclear Information System (INIS)

    Namatame, Ken; Kobayashi, Kensuke

    1975-12-01

    In the ROSA (Rig of Safety Assessment) program, the digital computer codes DEPCO-SINGLE and DEPCO-MULTI (Subcooled Decompression Process in Loss-of-Coolant Accident - Single Pipe and - Multiple Pipe Network) were prepared to study the thermo-hydraulic behavior of the primary coolant during the subcooled decompression phase of a PWR LOCA. The analytical results obtained with DEPCO-MULTI for the subcooled decompression phenomena are presented for the ROSA-I, ROSA-II and LOFT 500, 600, 700 and 800 series experiments. The effects of space mesh length, elasticity of the pressure boundary materials, and simplification of the computational piping system on the computed results are described. This is the final work on subcooled decompression analysis within the ROSA program, and the authors hope that the present code will be further examined against data from more advanced experiments. (auth.)

  10. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  12. Model of the containment building of Almaraz NPP and the system of recombiners PARs, with the GOTHIC code, for the study of the diffusion of combustible gases; Modelo del edificio de contencion de C.N. Almaraz y del sistema de recombinadores PARs, con el codigo GOTHIC, para el estudio de la difusion de gases combustibles

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Gonzalez, M.; Huelamo, E.; Martinez, M.; Perez, J. R.

    2014-07-01

    This paper presents an analysis of the distribution of gases within the containment building, carried out with a simulation model built with the thermo-hydraulic code GOTHIC, in which the gas control system based on passive autocatalytic recombiners (PARs) has been evaluated. The model considers severe accident scenarios with the specific conditions that produce the highest hydrogen generation rates. The aim is to verify the effectiveness of the gas control system expected to be installed in the Almaraz Nuclear Power Plant, so that the number and location of the recombiner units fulfil their function of preventing the formation of explosive atmospheres that would impair the integrity of the containment, reducing and limiting the concentration of combustible gases during the postulated accidents. (Author)

  13. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: on the one hand, it is a mathematical expression which can be used in computational simulations (intrinsic meaning); on the other hand, the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework, we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: first, it specifies and structures the information which biologists have to take into account if they build, use and exchange models; secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  14. Ocean Modeling and Visualization on Massively Parallel Computer

    Science.gov (United States)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  15. Computer model for economic study of unbleached kraft paperboard production

    Science.gov (United States)

    Peter J. Ince

    1984-01-01

    Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...

  16. Airfoil Computations using the γ - Reθ Model

    DEFF Research Database (Denmark)

    Sørensen, Niels N.

    computations. Based on this, an estimate of the error in the computations is determined to be approximately one percent in the attached region. Following the verification of the implemented model, the model is applied to four airfoils, NACA64- 018, NACA64-218, NACA64-418 and NACA64-618 and the results...

  17. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
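
    In the spirit of that methodology, a compact, self-contained Python example for the single-phase case (an M/M/1 queue) is sketched below; the multiphase systems used as learning objects in the paper generalize this, and the rates chosen here are arbitrary.

```python
# Compact stochastic simulation of an M/M/1 queue, in the spirit of the
# model-centered approach described above (single phase; arrival and
# service rates are arbitrary illustrative values).
import random

def simulate(lam, mu, n_customers, seed=0):
    """Mean time spent waiting in queue for an M/M/1 system, by simulation."""
    rng = random.Random(seed)
    t_arrival = 0.0       # time of the current customer's arrival
    server_free_at = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)        # Poisson arrival stream
        start = max(t_arrival, server_free_at)   # wait if the server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(mu)
    return total_wait / n_customers

# M/M/1 theory gives Wq = rho/(mu - lam) with rho = lam/mu; here Wq = 4.0.
print(simulate(lam=0.8, mu=1.0, n_customers=200_000))
```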

  18. Computational intelligence applications in modeling and control

    CERN Document Server

    Vaidyanathan, Sundarapandian

    2015-01-01

    The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of intelligent activity of human being and nature. The essence of the systems based on computational intelligence is to process and interpret data of various nature so that that CI is strictly connected with the increase of available data as well as capabilities of their processing, mutually supportive factors. Developed theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine and others. They are used in images and sounds processing and identifying, signals processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, requesting systems in banking, diagnostic systems, expert systems and many other practical implementations. This book consists of 16 contributed chapters by subject experts who are specialized in the various topics addressed in this book. The special chapters have been brought ...

  19. COMPUTATIONAL MODELING OF AIRFLOW IN NONREGULAR SHAPED CHANNELS

    Directory of Open Access Journals (Sweden)

    A. A. Voronin

    2013-05-01

    Full Text Available The basic approaches to computational modeling of airflow in the human nasal cavity are analyzed. Different models of turbulent flow which may be used in order to calculate air velocity and pressure are discussed. Experimental measurement results of airflow temperature are illustrated. A geometrical model of the human nasal cavity reconstructed from computed tomography scans and numerical simulation results of airflow inside this model are also given. Spatial distributions of velocity and temperature for inhaled and exhaled air are shown.

  20. Model Checking Quantified Computation Tree Logic

    NARCIS (Netherlands)

    Rensink, Arend; Baier, C; Hermanns, H.

    2006-01-01

    Propositional temporal logic is not suitable for expressing properties on the evolution of dynamically allocated entities over time. In particular, it is not possible to trace such entities through computation steps, since this requires the ability to freely mix quantification and temporal

  1. Computational compliance criteria in water hammer modelling

    Directory of Open Access Journals (Sweden)

    Urbanowicz Kamil

    2017-01-01

    Full Text Available Among many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to in-house and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.

  2. Computational compliance criteria in water hammer modelling

    Science.gov (United States)

    Urbanowicz, Kamil

    2017-10-01

    Among many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to in-house and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
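
    As a concrete illustration of the scheme the record above describes, the following minimal Python sketch runs a method-of-characteristics water hammer simulation on a rectangular grid with CFL = 1 and a single pipe divided into 10 reaches. The reservoir-pipe-valve layout, all physical parameters, and the instant-closure scenario are illustrative assumptions, not values taken from the paper.

        import numpy as np

        # Illustrative MOC water hammer solver: rectangular grid, CFL = 1,
        # pipe split into n = 10 reaches (the paper's recommended minimum).
        a, L, D, f, g = 1200.0, 600.0, 0.5, 0.02, 9.81   # wave speed, length, diameter, friction factor, gravity
        n = 10
        A = np.pi * D**2 / 4.0
        dx = L / n
        dt = dx / a                                      # CFL = a*dt/dx = 1
        B = a / (g * A)                                  # characteristic impedance term
        R = f * dx / (2.0 * g * D * A**2)                # friction coefficient per reach

        H_res, Q0 = 50.0, 0.2                            # reservoir head (m), initial flow (m^3/s)
        H = H_res - R * Q0**2 * np.arange(n + 1)         # steady-state hydraulic grade line
        Q = np.full(n + 1, Q0)

        for _ in range(int(2.0 / dt)):                   # ~one wave period (4L/a = 2 s) after closure
            Cp = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])   # C+ compatibility values from upstream nodes
            Cm = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])       # C- compatibility values from downstream nodes
            Hn, Qn = H.copy(), Q.copy()
            Hn[1:-1] = 0.5 * (Cp[:-1] + Cm[1:])
            Qn[1:-1] = (Cp[:-1] - Cm[1:]) / (2.0 * B)
            Hn[0], Qn[0] = H_res, (H_res - Cm[0]) / B    # upstream boundary: constant-head reservoir
            Qn[-1], Hn[-1] = 0.0, Cp[-1]                 # downstream boundary: valve closed instantly at t = 0
            H, Q = Hn, Qn

        print(f"head at the valve after one period: {H[-1]:.1f} m")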

  3. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering (automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering) in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both the fundamentals and real-problem applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  4. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to freeform curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  5. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. As the paper argues, the new categories of services being introduced will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics) tools, UG, CATIA, and so on.

  6. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  7. Computer modeling of ORNL storage tank sludge mobilization and mixing

    International Nuclear Information System (INIS)

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.

  8. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  9. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  10. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state-of-the-art parallel computational model research is reviewed. We will introduce various models that were developed during the past decades. According to their targeting architecture features, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed based on the three-generation classification. We believe that with the ever-increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  11. Patentability aspects of computational cancer models

    Science.gov (United States)

    Lishchuk, Iryna

    2017-07-01

    Multiscale cancer models, implemented in silico, simulate tumor progression at various spatial and temporal scales. Because such models are innovative in substance and have the potential to be applied as decision-support tools in clinical practice, patenting them and obtaining patent rights in cancer models seems prima facie possible. In this paper we inquire what legal hurdles cancer models need to overcome in order to be patented.

  12. Introduction to computation and modeling for differential equations

    CERN Document Server

    Edsberg, Lennart

    2008-01-01

    An introduction to scientific computing for differential equations. Introduction to Computation and Modeling for Differential Equations provides a unified and integrated view of numerical analysis, mathematical modeling in applications, and programming to solve differential equations, which is essential in problem-solving across many disciplines, such as engineering, physics, and economics. This book successfully introduces readers to the subject through a unique "Five-M" approach: Modeling, Mathematics, Methods, MATLAB, and Multiphysics. This approach facilitates a thorough understanding of h…

  13. Computational model of cellular metabolic dynamics

    DEFF Research Database (Denmark)

    Li, Yanjun; Solomon, Thomas; Haus, Jacob M

    2010-01-01

    …of the cytosol and mitochondria. The model simulated skeletal muscle metabolic responses to insulin corresponding to human hyperinsulinemic-euglycemic clamp studies. Insulin-mediated rate of glucose disposal was the primary model input. For model validation, simulations were compared with experimental data…: intracellular metabolite concentrations and patterns of glucose disposal. Model variations were simulated to investigate three alternative mechanisms to explain insulin enhancements: Model 1 (M.1), simple mass action; M.2, insulin-mediated activation of key metabolic enzymes (i.e., hexokinase, glycogen synthase…); … by application of mechanism M.3, the model predicts metabolite concentration changes and glucose partitioning patterns consistent with experimental data. The reaction rate fluxes quantified by this detailed model of insulin/glucose metabolism provide information that can be used to evaluate the development…

  14. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR, and free standing clad, as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project "Light water reactor fuel rod modelling code evaluation" and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs.

  15. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR and free standing clad as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely Physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/ stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of EPRI project ''Light water reactor fuel rod modelling code evaluation'' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models, on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs

  16. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  17. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    Science.gov (United States)

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services. PMID:28112020

  18. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    Science.gov (United States)

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  19. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    Directory of Open Access Journals (Sweden)

    Jiunn-Woei Lian PhD

    2017-01-01

    Full Text Available The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  20. Computer modeling of Cannabinoid receptor type 1

    Directory of Open Access Journals (Sweden)

    Sapundzhi Fatima

    2018-01-01

    Full Text Available Cannabinoid receptors are an important class of receptors, as they are involved in various physiological processes such as appetite, pain sensation, mood, and memory. It is important to design receptor-selective ligands in order to treat a particular disorder. The aim of the present study is to model the structure of the cannabinoid receptor CB1 and to perform docking between the obtained models and known ligands. Two models of CB1 were prepared with two different methods (the Modeller interface of Chimera, and MOE). They were used for docking with GOLD 5.2. A high correlation was established between the inhibitory constant Ki of CB1 cannabinoid ligands and the ChemScore scoring function of GOLD for both models. This suggests that the CB1 receptor models obtained could be used for docking studies and in further investigation and design of new potent, selective and active cannabinoids with the desired effects.
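
    A minimal sketch of the kind of validation statistic the abstract reports: the correlation between experimental binding affinity (expressed as pKi) and GOLD's ChemScore. The ligand values below are invented placeholders, not the paper's data.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical ligand set: experimental Ki (nM) and docking scores.
        Ki_nM = np.array([1.2, 5.4, 40.0, 120.0, 900.0])      # invented
        pKi = 9.0 - np.log10(Ki_nM)                           # pKi = -log10(Ki in mol/L)
        chemscore = np.array([38.5, 35.1, 31.0, 28.4, 24.9])  # invented

        r, p = pearsonr(pKi, chemscore)                       # linear correlation
        print(f"Pearson r = {r:.2f} (p = {p:.3f})")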

  1. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    OpenAIRE

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality...

  2. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  3. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters

    Directory of Open Access Journals (Sweden)

    Pietro Cipresso

    2017-08-01

    Full Text Available Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  4. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.
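
    The module described above can be caricatured as a three-dimensional two-state cellular automaton. The toy sketch below uses a simple neighbour-count threshold rule in place of the actual quantum-dot, spin, or molecular interactions; the grid size and the rule itself are purely illustrative.

        import numpy as np

        # Toy stand-in for one module: a cube of two-state cells whose next
        # state depends on neighbours in all three dimensions.
        rng = np.random.default_rng(42)
        n = 8
        cells = rng.integers(0, 2, size=(n, n, n))

        def step(c):
            # sum the six face-adjacent neighbours with periodic boundaries
            s = sum(np.roll(c, d, axis=ax) for ax in range(3) for d in (-1, 1))
            return (s >= 3).astype(int)        # threshold rule: at least 3 of 6 neighbours active

        for _ in range(10):
            cells = step(cells)
        print(f"fraction of cells in state 1: {cells.mean():.2f}")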

  5. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived, respectively, from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil (~6000 km², wavy relief, heights varying from 600 m to 1340 m), located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example for the study area shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m, minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by geometric levelling supported by GNSS positioning. The results were also better than those of the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m, minimum = -0.040 m).
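
    The remove-compute-restore pipeline itself is compact enough to sketch. The schematic Python code below shows the three steps (remove the global-model and terrain signals, compute a residual geoid via a crude discrete Stokes integral, restore the removed parts). The stokes_kernel closed form is the standard Stokes function; rcr_geoid, its gridded inputs, and the toy numbers are illustrative stand-ins for what a package such as GRAVTool does with real data.

        import numpy as np

        def stokes_kernel(psi):
            """Closed-form Stokes function S(psi), psi in radians (psi > 0)."""
            s = np.sin(psi / 2.0)
            return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
                    - 3.0 * np.cos(psi) * np.log(s + s**2))

        def rcr_geoid(dg_obs, dg_ggm, dg_rtm, N_ggm, N_ind, psi,
                      R=6371e3, gamma=9.81, dsigma=1e-6):
            # 1) REMOVE long (global model) and short (terrain) wavelengths
            dg_res = dg_obs - dg_ggm - dg_rtm
            # 2) COMPUTE the residual co-geoid by a discrete Stokes integral
            N_res = R / (4.0 * np.pi * gamma) * np.sum(dg_res * stokes_kernel(psi) * dsigma)
            # 3) RESTORE the global-model geoid and the indirect terrain effect
            return N_ggm + N_res + N_ind

        # toy usage: anomalies in m/s^2 (1 mGal = 1e-5 m/s^2), values invented
        psi = np.array([0.01, 0.02, 0.03])          # spherical distances (rad)
        dg_obs = np.array([30e-5, 25e-5, 28e-5])
        dg_ggm = np.array([20e-5, 18e-5, 22e-5])
        dg_rtm = np.array([4e-5, 3e-5, 2e-5])
        N = rcr_geoid(dg_obs, dg_ggm, dg_rtm, N_ggm=-15.0, N_ind=0.02, psi=psi)
        print(f"restored geoid height: {N:.3f} m")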

  6. Computational Modeling of Turbulent Spray Combustion

    NARCIS (Netherlands)

    Ma, L.

    2016-01-01

    The objective of the research presented in this thesis is development and validation of predictive models or modeling approaches of liquid fuel combustion (spray combustion) in hot-diluted environments, known as flameless combustion or MILD combustion. The goal is to combine good physical insight,

  7. Computer Aided Modelling – Opportunities and Challenges

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    …-based solutions to significant problems? The important issues of workflow and data flow are discussed together with fit-for-purpose model development. As well, the lack of tools around multiscale modelling provides opportunities for the development of efficient tools to address such challenges. The ability...

  8. Computing broadband accelerograms using kinematic rupture modeling

    International Nuclear Information System (INIS)

    Ruiz Paredes, J.A.

    2007-05-01

    In order to make broadband kinematic rupture modeling more realistic with respect to dynamic modeling, physical constraints are added to the rupture parameters. To improve the modeling of the slip velocity function (SVF), an evolution of the k^-2 source model is proposed, which consists in decomposing the slip as a sum of sub-events by band of k. This model yields SVFs close to the solution proposed by Kostrov for a crack, while preserving the spectral characteristics of the radiated wave field, i.e. an omega-squared (ω^2) model with spectral amplitudes at high frequency scaled to the coefficient of directivity C_d. To better control the directivity effects, a composite source description is combined with a scaling law defining the extent of the nucleation area for each sub-event. The resulting model allows the apparent coefficient of directivity to be reduced to a fraction of C_d, as well as reproducing the standard deviation of the new empirical attenuation relationships proposed for Japan. To make source models more realistic, a variable rupture velocity in agreement with the physics of the rupture must be considered. The approach followed, based on an analytical relation between the fracture energy, the slip and the rupture velocity, leads to higher values of the peak ground acceleration in the vicinity of the fault. Finally, to better account for the interaction of the wave field with the geological medium, a semi-empirical methodology is developed combining a composite source model with empirical Green functions, and is applied to the Yamaguchi M_w 5.9 earthquake. The modeled synthetics reproduce the observed main characteristics of the ground motions satisfactorily well. (author)
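
    A common way to realise the k^-2 spectral behaviour described above is to shape a random-phase spectrum on the fault plane and invert it, as in the hedged sketch below. The fault dimensions, corner wavenumber and mean slip are assumed values, and the paper's actual sub-event decomposition is more elaborate than this single random field.

        import numpy as np

        # Illustrative k^-2 stochastic slip distribution on a fault plane:
        # flat amplitude spectrum below the corner wavenumber, k^-2 decay above.
        rng = np.random.default_rng(0)
        nx, nz, Lx, Lz = 128, 64, 30e3, 15e3        # grid and fault size (m), assumed
        kx = np.fft.fftfreq(nx, d=Lx / nx)          # wavenumbers (cycles/m)
        kz = np.fft.fftfreq(nz, d=Lz / nz)
        K = np.sqrt(kx[None, :]**2 + kz[:, None]**2)
        kc = 1.0 / np.sqrt(Lx * Lz)                 # corner wavenumber ~ 1/fault dimension
        amp = 1.0 / (1.0 + (K / kc)**2)             # asymptotic k^-2 spectral decay
        phase = np.exp(2j * np.pi * rng.random((nz, nx)))
        slip = np.real(np.fft.ifft2(amp * phase))   # random-phase realisation
        slip -= slip.min()                          # keep slip non-negative
        slip *= 2.0 / slip.mean()                   # scale to a 2 m mean slip (assumed)
        print(slip.shape, f"max slip = {slip.max():.2f} m")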

  9. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  10. Computer Simulation (Microcultures): An Effective Model for Multicultural Education.

    Science.gov (United States)

    Nelson, Jorge O.

    This paper presents a rationale for using high-fidelity computer simulation in planning for and implementing effective multicultural education strategies. Using computer simulation, educators can begin to understand and plan for the concept of cultural sensitivity in delivering instruction. The model promises to emphasize teachers' understanding…

  11. The European computer model for optronic system performance prediction (ECOMOS)

    NARCIS (Netherlands)

    Kessler, S.; Bijl, P.; Labarre, L.; Repasi, E.; Wittenstein, W.; Bürsing, H.

    2017-01-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The

  12. Computer-based modeling in exact sciences research - III ...

    African Journals Online (AJOL)

    Molecular modeling techniques have been of great applicability in the study of the biological sciences and other exact science fields like agriculture, mathematics, computer science and the like. In this write up, a list of computer programs for predicting, for instance, the structure of proteins has been provided. Discussions on ...

  13. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  14. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature-calculating abilities of this model was performed by comparing temperature calculations under specified conditions to experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and to calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance.

  15. Transport modeling and advanced computer techniques

    International Nuclear Information System (INIS)

    Wiley, J.C.; Ross, D.W.; Miner, W.H. Jr.

    1988-11-01

    A workshop was held at the University of Texas in June 1988 to consider the current state of transport codes and whether improved user interfaces would make the codes more usable and accessible to the fusion community. Also considered was the possibility that a software standard could be devised to ease the exchange of routines between groups. It was noted that two of the major obstacles to exchanging routines now are the variety of geometrical representations and choices of units. While the workshop formulated no standards, it was generally agreed that good software engineering would aid in the exchange of routines, and that a continued exchange of ideas between groups would be worthwhile. It seems that before we begin to discuss software standards we should review the current state of computer technology, both hardware and software, to see what influence recent advances might have on our software goals. This is done in this paper.

  16. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  17. Computational Models of Human Organizational Dynamics

    National Research Council Canada - National Science Library

    Courand, Gregg

    2000-01-01

    … This is the final report for our Phase II SBIR project, conducted over three years. Our research program has contributed theory, methodology, and technology for organizational modeling and analysis...

  18. Computational Model for Spacecraft/Habitat Volume

    Data.gov (United States)

    National Aeronautics and Space Administration — Please note that funding to Dr. Simon Hsiang, a critical co-investigator for the development of the Spacecraft Optimization Layout and Volume (SOLV) model, was...

  19. Computational modeling and engineering in pediatric and congenital heart disease.

    Science.gov (United States)

    Marsden, Alison L; Feinstein, Jeffrey A

    2015-10-01

    Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single-ventricle patients, and provide an overview of emerging areas. Multiscale modeling combining patient-specific hemodynamics with reduced-order (i.e., mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and 'global' circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid-structure interaction and uncertainty quantification), which lend realism both computationally and clinically to results, highlight novel computationally derived surgical methods for single-ventricle patients, and discuss areas in which modeling has begun to exert its influence, including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Computational modeling is emerging as a crucial tool for clinical decision-making and the evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases.
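
    As a flavour of the reduced-order circulatory models mentioned above, the sketch below integrates a two-element Windkessel (compliance C, resistance R) driven by an assumed half-sine ejection waveform. The parameter values are illustrative, not patient-specific, and real multiscale solvers couple such models to 3D domains rather than running them stand-alone.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two-element Windkessel: C dP/dt = Q_in(t) - P/R.
        # Units assumed mmHg, mL/s, s; values illustrative.
        R, C, T = 1.0, 1.2, 0.8                     # resistance, compliance, cardiac period

        def inflow(t):
            tc = t % T                              # half-sine systolic ejection (assumed waveform)
            return 400.0 * np.sin(np.pi * tc / (0.35 * T)) if tc < 0.35 * T else 0.0

        def dPdt(t, P):
            return [(inflow(t) - P[0] / R) / C]

        sol = solve_ivp(dPdt, (0.0, 10 * T), [80.0], max_step=1e-3)
        last_beat = sol.y[0][sol.t > 9 * T]         # pressures over the final cycle
        print(f"diastolic/systolic ~ {last_beat.min():.0f}/{last_beat.max():.0f} mmHg")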

  20. Computational numerical modelling of plasma focus

    International Nuclear Information System (INIS)

    Brollo, Fabricio

    2005-01-01

    Several models for calculating the dynamics of the plasma focus have been developed. All of them start from the same physical principle: the current sheath runs down the anode length, ionizing and collecting the gas it finds in its way. This is known as the snow-plow model. Concerning the pinch compression, an MHD model is proposed: the plasma is treated as a fluid, specifically as a highly ionized gas. However, there are not many models that, taking thermal equilibrium inside the plasma into account, make approximate calculations of the maximum temperatures reached in the pinch. Moreover, there are no models which use those temperatures to estimate the thermonuclear fusion neutron yield for the Deuterium or Deuterium-Tritium gas-filled cases. In the PLADEMA network (Dense Magnetized Plasmas) a code was developed with the objective of describing the plasma focus dynamics at a conceptual engineering stage. The code calculates the principal variables (currents, time to focus, etc.) and estimates the neutron yield in Deuterium-filled plasma focus devices. The code's experimental validation, in its axial and radial stages, was very successful. However, it was accepted that the compression stage should be reformulated to find a solution for the large variation of a parameter related to the velocity profiles of the particles trapped inside the pinch. The objectives of this work can be stated as follows: check the compression model hypotheses and develop a new model; implement the new model in the code; and compare results against experimental data of plasma focus devices from around the world. [es]
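
    The snow-plow idea lends itself to a compact numerical sketch: the sheath momentum balance d/dt[m(z)v] = F is integrated with the swept-up mass m(z) proportional to the axial position. The geometry, fill density and prescribed sinusoidal current below are illustrative assumptions; an actual plasma focus code couples the sheath motion to the external circuit instead of prescribing I(t).

        import numpy as np
        from scipy.integrate import solve_ivp

        # Snow-plow sketch of the axial rundown: the sheath sweeps up the gas
        # it encounters, so d/dt[m(z) v] = F_mag with m(z) = m_per_z * z.
        mu0 = 4e-7 * np.pi
        ra, rb, z_anode = 0.01, 0.03, 0.15          # anode/cathode radii, anode length (m), assumed
        rho = 4e-4                                  # fill-gas mass density (kg/m^3), assumed
        I0, w = 3e5, 2.0 * np.pi / 6e-6             # 300 kA peak, ~6 us current period, assumed
        fgeo = mu0 * np.log(rb / ra) / (4.0 * np.pi)  # magnetic driving factor
        m_per_z = rho * np.pi * (rb**2 - ra**2)     # swept-up mass per unit length

        def rhs(t, y):
            z, v = y
            I = I0 * np.sin(w * t)
            # m z v' = F - (dm/dt) v, with dm/dt = m_per_z * v
            dv = (fgeo * I**2 - m_per_z * v**2) / (m_per_z * max(z, 1e-4))
            return [v, dv]

        sol = solve_ivp(rhs, (0.0, 3e-6), [1e-3, 0.0], max_step=1e-9)
        print(f"sheath position after 3 us: {100 * sol.y[0, -1]:.1f} cm "
              f"(anode length {100 * z_anode:.0f} cm)")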

  1. Computer models and output, Spartan REM: Appendix B

    Science.gov (United States)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.

  2. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  3. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  4. Model and Computing Experiment for Research and Aerosols Usage Management

    Directory of Open Access Journals (Sweden)

    Daler K. Sharipov

    2012-09-01

    Full Text Available The article presents a mathematical model for the study and management of aerosols released into the atmosphere, together with the numerical algorithm implemented as a hardware and software system for conducting computing experiments.

  5. Computational Models for Nonlinear Aeroelastic Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  6. Computational modelling of the impact of AIDS on business.

    Science.gov (United States)

    Matthews, Alan P

    2007-07-01

    An overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling of the epidemic as a whole, and of its impact on a company. This paper gives an overview of epidemiological modelling, with an introduction to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality on a company, based on the anonymous HIV testing of company employees, and projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and then the ASSA model projection for each category of employees is adjusted so that it matches the measured prevalence in the year of testing. FURTHER WORK: Further techniques that could be developed are microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models for the business environment, such as models of entire sectors, and mapping of HIV prevalence in time and space, based on workplace and community data.
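
    The calibration step described above (adjusting a national projection so it matches the measured workplace prevalence in the testing year) reduces to a per-category scaling, sketched below with invented numbers; assa_projection stands in for output of the ASSA model, and none of the figures are real data.

        import numpy as np

        # Hypothetical prevalence projections by employee category.
        years = np.arange(2000, 2011)
        assa_projection = {
            "skilled":   0.04 + 0.004 * (years - 2000),
            "unskilled": 0.10 + 0.008 * (years - 2000),
        }
        measured = {"skilled": 0.06, "unskilled": 0.14}   # company test results, 2005 (invented)
        test_year_idx = np.where(years == 2005)[0][0]

        company_projection = {}
        for cat, series in assa_projection.items():
            scale = measured[cat] / series[test_year_idx]  # match measured prevalence in test year
            company_projection[cat] = np.clip(scale * series, 0.0, 1.0)

        print({c: round(float(p[-1]), 3) for c, p in company_projection.items()})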

  7. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    Science.gov (United States)

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  8. Computational quench model applicable to the SMES/CICC

    Science.gov (United States)

    Luongo, Cesar A.; Chang, Chih-Lien; Partain, Kenneth D.

    1994-07-01

    A computational quench model accounting for the hydraulic peculiarities of the 200 kA SMES cable-in-conduit conductor has been developed. The model is presented and used to simulate the quench on the SMES-ETM. Conclusions are drawn concerning quench detection and protection. A plan for quench model validation is presented.

  9. Petri Net Modeling of Computer Virus Life Cycle | Ikekonwu ...

    African Journals Online (AJOL)

    The virus life cycle, which refers to the stages of development of a computer virus, is presented as a suitable area for the application of Petri nets. Petri nets, a powerful modeling tool in the field of dynamic system analysis, are applied to model the virus life cycle. Simulation of the derived model is also presented. The intention of ...

  10. A Situative Space Model for Mobile Mixed-Reality Computing

    DEFF Research Database (Denmark)

    Pederson, Thomas; Janlert, Lars-Erik; Surie, Dipak

    2011-01-01

    This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch, at any given point in time.

  11. Computational model of miniature pulsating heat pipes

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Mario J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Givler, Richard C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-01-01

    The modeling work described herein represents the Sandia National Laboratories (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP): a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  12. Computer Models and Automata Theory in Biology and Medicine

    CERN Document Server

    Baianu, I C

    2004-01-01

    The applications of computers to biological and biomedical problem solving goes back to the very beginnings of computer science, automata theory [1], and mathematical biology [2]. With the advent of more versatile and powerful computers, biological and biomedical applications of computers have proliferated so rapidly that it would be virtually impossible to compile a comprehensive review of all developments in this field. Limitations of computer simulations in biology have also come under close scrutiny, and claims have been made that biological systems have limited information processing power [3]. Such general conjectures do not, however, deter biologists and biomedical researchers from developing new computer applications in biology and medicine. Microprocessors are being widely employed in biological laboratories both for automatic data acquisition/processing and modeling; one particular area, which is of great biomedical interest, involves fast digital image processing and is already established for rout...

  13. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  14. Computational Models for Analysis of Illicit Activities

    DEFF Research Database (Denmark)

    Nizamani, Sarwat

    …been explored in this thesis by considering them as epidemic-like processes. A mathematical model has been developed based on differential equations, which studies the dynamics of the issues from the very beginning until the issues cease. This study extends classical models of the spread of epidemics… to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following a disclosure of some unpopular political decisions and/or activity. The results shed a new light on terror activity and provide some hint on how to curb the spreading of violence within…

  15. Infrastructure Model and Management of a Cloud-Computing-Based Server Platform

    Directory of Open Access Journals (Sweden)

    Mulki Indana Zulfa

    2017-11-01

    Full Text Available Cloud computing is a new technology that is still growing very rapidly. This technology makes the Internet the main medium for managing data and applications remotely. Cloud computing allows users to run an application without having to think about the infrastructure and its platforms. Other technical aspects, such as memory, storage, and backup and restore, can be handled very easily. This research models the infrastructure and management of the server platform in the computer network of the Faculty of Engineering, University of Jenderal Soedirman. The first stage of this research is a literature study, identifying implementation models from previous research. The results are then combined with a new approach to the existing resources and implemented directly on the existing server network. The results show that the implementation of cloud computing technology is able to replace the existing platform network.

  16. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  17. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played the critical role of theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena. We call such a situation an "open-ended situation". In this study, MAC/FAC ("many are called, but few are chosen"), proposed by [Forbus 95], which models the two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented a cue story and retrieved cases that had been learned in their everyday life. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of the MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support the MAC/FAC's theoretical assumption that different similarities are involved at the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognitions.
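
    The cheap first stage of MAC/FAC can be sketched in a few lines: content vectors summarise each memory item, and a fast vector similarity pre-selects candidates for the expensive structural stage. The vocabulary, the cases, and the use of cosine similarity below are illustrative simplifications of the published algorithm.

        import numpy as np

        # Toy MAC stage: count-based content vectors plus a fast similarity filter.
        vocab = ["flow", "pressure", "pipe", "wire", "current", "voltage"]

        def content_vector(predicates):
            # count how often each known predicate occurs in a case description
            return np.array([predicates.count(w) for w in vocab], dtype=float)

        memory = {
            "water-flow case": content_vector(["flow", "pressure", "pipe", "pipe"]),
            "circuit case":    content_vector(["current", "voltage", "wire"]),
        }
        probe = content_vector(["current", "wire", "wire"])

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        scores = {name: cosine(vec, probe) for name, vec in memory.items()}
        best = max(scores, key=scores.get)   # candidate passed on to the structural (FAC) stage
        print(scores, "->", best)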

  18. Global Stability of an Epidemic Model of Computer Virus

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2014-01-01

    Full Text Available With the rapid popularization of the Internet, computers can enter or leave the Internet increasingly frequently. In fact, no antivirus software can detect and remove all sorts of computer viruses. This implies that viruses would persist on the Internet. To better understand the spread of computer viruses in these situations, a new propagation model is established and analyzed. The unique equilibrium of the model is globally asymptotically stable, in accordance with reality. A parameter analysis of the equilibrium is also conducted.
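
    A minimal model in this spirit is an SIS-type system with node turnover (computers joining and leaving the network), sketched below. The equations and parameters are a generic illustration, not necessarily the exact model of the paper, but they exhibit the same qualitative behaviour: a unique endemic equilibrium that trajectories approach.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Generic SIS model with node turnover: machines join at rate b and
        # leave at rate mu, so the virus can persist at an endemic level.
        b, mu, beta, gamma = 0.1, 0.1, 0.8, 0.3     # join, leave, infection, cure rates (assumed)

        def rhs(t, y):
            S, I = y
            return [b - beta * S * I + gamma * I - mu * S,
                    beta * S * I - (gamma + mu) * I]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.99, 0.01], max_step=0.1)
        S_star = (gamma + mu) / beta                # endemic equilibrium of this toy model
        I_star = b / mu - S_star
        print(f"I(200) = {sol.y[1, -1]:.3f}, predicted I* = {I_star:.3f}")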

  19. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  20. Computational Modeling Develops Ultra-Hard Steel

    Science.gov (United States)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treatment, shot peening, lubricants, and other factors on a gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high-throughput applications.

  1. A Computational Model of Spatial Development

    Science.gov (United States)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest that experience of self-locomotion with visual tracking is an important factor. Yet the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when it is outside the field of view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environmental state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model mapping sensory input to desired motor commands, and a forward model mapping motor commands to predicted sensory input (goals). The robot was tested on the `three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric (self-centered). When given the ability of self-locomotion, the robot responds allocentrically.
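
    The forward-model component can be illustrated with a purely kinematic stand-in: given an object's previous egocentric position and the motor command (rotation plus translation), predict its new egocentric position. In the paper the robot learns this mapping; the closed-form version below is only a sketch of what is being learned:

        import numpy as np

        def forward_model(obj_pos, rotation, translation):
            """Predict an object's egocentric position after the agent turns by
            `rotation` (radians) and moves by `translation` (x, y in the old
            body frame). In egocentric coordinates the world moves opposite to
            the agent, so shift first, then counter-rotate."""
            shifted = np.asarray(obj_pos, float) - np.asarray(translation, float)
            c, s = np.cos(-rotation), np.sin(-rotation)
            R = np.array([[c, -s], [s, c]])
            return R @ shifted

        # A cup 1 m ahead; the robot turns 90 degrees left without translating:
        # the cup should now lie 1 m to its right (positive x, zero y).
        print(forward_model([0.0, 1.0], np.pi / 2, [0.0, 0.0]))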

  2. Electricity load modelling using computational intelligence

    NARCIS (Netherlands)

    Ter Borg, R.W.

    2005-01-01

    As a consequence of the liberalisation of the electricity markets in Europe, market players have to continuously adapt their future supply to match their customers' demands. This poses the challenge of obtaining a predictive model that accurately describes electricity loads, which is the subject of this thesis.

  3. Computational Modeling of Fluorescence Loss in Photobleaching

    DEFF Research Database (Denmark)

    Hansen, Christian Valdemar; Schroll, Achim; Wüstner, Daniel

    2015-01-01

    sequences as reaction–diffusion systems on segmented cell images. The cell geometry is extracted from microscopy images using the Chan–Vese active contours algorithm [8]. The PDE model is subsequently solved by the automated finite element software package FEniCS [20]. The flexibility of FEniCS allows...
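
    A minimal sketch of this kind of solve in legacy FEniCS (implicit Euler for a diffusion equation with a first-order bleaching term on a unit square standing in for the segmented cell geometry; all parameters are illustrative, not the paper's):

        from fenics import *

        mesh = UnitSquareMesh(32, 32)               # stand-in for the segmented cell
        V = FunctionSpace(mesh, "P", 1)

        # Initial fluorophore distribution: a Gaussian blob
        u_n = interpolate(
            Expression("exp(-50*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)))", degree=2), V)

        D, k, dt = 0.01, 0.5, 0.01                  # diffusion, bleaching rate, time step
        u, v = TrialFunction(V), TestFunction(V)
        F = (u - u_n) * v * dx \
            + dt * D * dot(grad(u), grad(v)) * dx \
            + dt * k * u * v * dx
        a, L = lhs(F), rhs(F)

        u_sol = Function(V)
        for step in range(100):                     # implicit Euler time stepping
            solve(a == L, u_sol)
            u_n.assign(u_sol)
            print(assemble(u_sol * dx))             # total fluorescence decays in time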

  4. Radiation enhanced conduction in insulators: computer modelling

    International Nuclear Information System (INIS)

    Fisher, A.J.

    1986-10-01

    The report describes the implementation of the Klaffky–Rose–Goland–Dienes [Phys. Rev. B 21, 3610 (1980)] model of radiation-enhanced conduction and the codes used. The approach is demonstrated on the alumina data of Pells, Buckley, Hill and Murphy [AERE R.11715, 1985]. (author)

  5. GPSS and Modeling of Computer Communication Networks.

    Science.gov (United States)

    1982-04-01

    action block in a flowchart of the system being modeled. For instance, the process of capturing a facility for some length of time and then... because of the abundance of tutorial material available; whereas, far less complete tutorial material is available to beginners learning SIMSCRIPT

  6. Life system modeling and intelligent computing. Pt. I. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Queen's Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science]; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation]

    2010-07-01

    This book is part I of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010, and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The 55 papers in this volume are organized in topical sections on intelligent modeling, monitoring, and control of complex nonlinear systems; autonomy-oriented computing and intelligent agents; advanced theory and methodology in fuzzy systems and soft computing; computational intelligence in utilization of clean and renewable energy resources; intelligent modeling, control and supervision for energy saving and pollution reduction; intelligent methods in developing vehicles, engines and equipment; computational methods and intelligence in modeling genetic and biochemical networks and regulation. (orig.)

  7. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

    Full Text Available This paper focuses on current trends in the brain research domain and the current stage of development of research on software and hardware solutions, communication capabilities between human beings and machines, new technologies, nano-science and Internet of Things (IoT) devices. The proposed model of the human brain assumes a close similarity between human intelligence and the thinking process in the game of chess. Tactical and strategic reasoning and the need to follow the rules of the game are all very similar to the activities of the human brain. The main objectives of a living being and of a chess player are the same: securing a position, surviving and eliminating adversaries. The brain pursues these goals, and in addition a being's movement, actions and speech are sustained by the five vital senses and equilibrium. The chess game strategy helps us understand the human brain better and replicate it more easily in the proposed 'Software and Hardware' (SAH) Model.

  8. Computational Modeling of Supercritical and Transcritical Flows

    Science.gov (United States)

    2017-01-09

    Liquid rocket and gas turbine engines operate at high pressures; for gas turbines, the combustor pressure can be 60-100... equation of state for several reduced pressures. The model captures the high density at very low temperatures and the supercritical behavior at high reduced... physical meaning. The temperature range over which the three roots are present is bounded by TL on the low side and TH on the high side. (Figure 2: Roots)
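
    The three-root behaviour of a cubic equation of state is straightforward to reproduce numerically; the sketch below uses the Peng-Robinson form with standard textbook coefficients (the fluid constants are illustrative and are not taken from the report):

        import numpy as np

        R = 8.314  # J/(mol K)

        def pr_Z_roots(T, P, Tc, Pc, omega):
            """Real compressibility-factor roots of the Peng-Robinson EOS.
            Between the bounding temperatures there are three real roots
            (liquid, unstable middle, vapour); outside, only one."""
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
            a = 0.45724 * (R * Tc)**2 / Pc * alpha
            b = 0.07780 * R * Tc / Pc
            A = a * P / (R * T)**2
            B = b * P / (R * T)
            coeffs = [1.0, B - 1.0, A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
            z = np.roots(coeffs)
            return np.sort(z[np.abs(z.imag) < 1e-10].real)

        # Nitrogen-like constants, scanning temperature at fixed pressure
        for T in (95.0, 110.0, 130.0):
            print(T, pr_Z_roots(T, P=1.0e6, Tc=126.2, Pc=3.39e6, omega=0.037))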

  9. Computational Modeling of Lipid Metabolism in Yeast

    Directory of Open Access Journals (Sweden)

    Vera Schützhold

    2016-09-01

    Full Text Available Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism, which is characterized by many different species with only slight differences as well as by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes or the provision of different precursors. Applied to yeast metabolism during one cell cycle period, we could analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach allows the complexity of cellular lipid metabolism to be treated efficiently and conclusions to be drawn on the time- and location-dependent distributions of lipid species and their properties, such as saturation. It is widely applicable, easily extendable and will provide further insights into healthy and diseased states of cell metabolism.
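
    A toy version of the rule-based, object-oriented stochastic scheme: lipids are mutable objects and rules fire with Gillespie-style waiting times. The two rules and all rate constants below are invented for illustration and bear no relation to the paper's network:

        import random
        from dataclasses import dataclass

        @dataclass
        class Lipid:                      # each lipid species instance is an object
            headgroup: str
            double_bonds: int

        # Rules: (rate constant, predicate selecting applicable objects, action)
        RULES = [
            (0.05, lambda l: l.double_bonds < 2,
                   lambda l: setattr(l, "double_bonds", l.double_bonds + 1)),
            (0.01, lambda l: l.headgroup == "PA",
                   lambda l: setattr(l, "headgroup", "PC")),
        ]

        pool = [Lipid("PA", 0) for _ in range(1000)]
        t = 0.0
        while t < 50.0:
            matches = [[l for l in pool if pred(l)] for _, pred, _ in RULES]
            props = [k * len(m) for (k, _, _), m in zip(RULES, matches)]
            total = sum(props)
            if total == 0.0:
                break
            t += random.expovariate(total)                 # Gillespie waiting time
            i = random.choices(range(len(RULES)), weights=props)[0]
            RULES[i][2](random.choice(matches[i]))         # fire rule on one object

        print(sum(l.headgroup == "PC" for l in pool),
              sum(l.double_bonds for l in pool) / len(pool))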

  10. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu/~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  11. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
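
    The embarrassingly parallel pattern the tutorial advocates looks like this in Python (the risk model inside each replication is a placeholder; the same structure carries over to the MATLAB and R examples the article uses):

        import numpy as np
        from multiprocessing import Pool

        def one_replication(seed):
            """One independent simulation run. Replications share nothing,
            so they can be farmed out to workers without communication."""
            rng = np.random.default_rng(seed)        # per-task seed => reproducible
            losses = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)
            return float(np.quantile(losses, 0.99))  # e.g. a 99% loss quantile

        if __name__ == "__main__":
            with Pool() as pool:                     # defaults to one worker per core
                estimates = pool.map(one_replication, range(200))
            print(np.mean(estimates), np.std(estimates))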

  12. Airfoil computations using the gamma-Retheta model; Wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, Niels N.

    2009-05-15

    The present work addresses the validation of the implementation of the Menter, Langtry et al. gamma-theta correlation-based transition model [1, 2, 3] in the EllipSys2D code. First, the second-order accuracy of the code is verified using a grid refinement study for laminar, turbulent and transitional computations. Based on this, the error in the computations is estimated to be approximately one percent in the attached region. Following the verification of the implemented model, the model is applied to four airfoils, NACA64-018, NACA64-218, NACA64-418 and NACA64-618, and the results are compared to measurements [4] and to computations using the Xfoil code by Drela et al. [5]. In the linear pre-stall region good agreement is observed for both lift and drag, while differences from both measurements and Xfoil computations are observed in stalled conditions. (au)

  13. Assessment of weld thickness loss in offshore pipelines using computed radiography and computational modeling

    International Nuclear Information System (INIS)

    Correa, S.C.A.; Souza, E.M.; Oliveira, D.F.; Silva, A.X.; Lopes, R.T.; Marinho, C.; Camerini, C.S.

    2009-01-01

    In order to guarantee the structural integrity of oil plants it is crucial to monitor the amount of weld thickness loss in offshore pipelines. However, in spite of its relevance, this parameter is very difficult to determine, due to both the large diameter of most pipes and the complexity of the multi-variable system involved. In this study, a computational model based on the Monte Carlo MCNPX code is combined with computed radiography to estimate the weld thickness loss in large-diameter offshore pipelines. Results show that computational modeling is a powerful tool for estimating intensity variations in radiographic images generated by weld thickness variations, and it can be combined with computed radiography to assess weld thickness loss in offshore and subsea pipelines.

  14. Comparisons with measured data of the simulated local core parameters by the coupled code ATHLET-BIPR-VVER applying a new enhanced model of the reactor pressure vessel

    International Nuclear Information System (INIS)

    Nikonov, S.; Pasichnyk, I.; Velkov, K.; Pautz, A.

    2011-01-01

    The paper describes comparisons of measured and simulated local core data based on the OECD/NEA Benchmark on the Kalinin-3 NPP: 'Switching off one of the four operating main circulation pumps at nominal reactor power'. The local measurements of in-core self-powered neutron detectors (SPND) in 64 fuel assemblies on 7 axial levels are used for comparisons of the assemblies' axial power distributions, and the thermocouple readings at 93 fuel assembly heads are used for fuel assembly coolant temperature comparisons. The analyses are done on the basis of benchmark transient calculations performed with the coupled system code ATHLET/BIPR-VVER. In order to describe the fluid mixing phenomena in the reactor pressure vessel more realistically, a new enhanced nodalization scheme is being developed. It takes into account asymmetric flow behaviour in the reactor pressure vessel structures such as the downcomer, reactor core inlet and outlet, control rod guide tubes, support grids, etc. For this purpose, details of the core geometry are modelled: about 58,000 control volumes and junctions are applied, and cross connections are used to describe the interaction between the fluid objects. The performed comparisons are of great interest because they show some of the advantages of performing coupled-code production pseudo-3D analyses of NPPs applying the parallel thermo-hydraulic channel methodology (or 1D thermo-hydraulic system code modeling). (Authors)

  15. The role of computer modelling in participatory integrated assessments

    International Nuclear Information System (INIS)

    Siebenhuener, Bernd; Barth, Volker

    2005-01-01

    In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the involved non-scientists to develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experiences made in three projects with the use of computer models from a participatory and a risk management perspective. Our cross-cutting analysis of the objectives, the employed project designs and moderation schemes and the observed learning processes in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk management phases, computer models serve best the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes

  16. Computer modeling of road bridge for simulation moving load

    Directory of Open Access Journals (Sweden)

    Miličić Ilija M.

    2016-01-01

    Full Text Available This paper presents the computational modelling of a single-span road truss bridge with the roadway on the upper chord. The calculation models were treated as planar and spatial girders made up of 1D finite elements in the CAA applications Tower and Bridge Designer 2016 (2nd Edition). The computer simulations compare the effects of moving load according to the recommendations of two standards, SRPS and AASHTO. Bridge Designer 2016 (2nd Edition) thus provides a variant of bridge structure modelling identical to that modelled in the Tower environment. As important information for the selection of a computer application, we point out that in Bridge Designer 2016 (2nd Edition) we were not able to treat the effects of the moving load model of the national standard V600.

  17. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with the problems amenable to each and examples relevant to emergency care. Also discussed are an introduction to available software modeling platforms, how to explore their use for research, and a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
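
    As a flavour of the discrete-event approach, a few lines of SimPy suffice for a toy triage queue (capacities and rates below are invented for illustration and are not from the article):

        import random
        import simpy

        def patient(env, triage, waits):
            arrival = env.now
            with triage.request() as req:                   # join the triage queue
                yield req
                yield env.timeout(random.expovariate(1/10)) # ~10 min of service
            waits.append(env.now - arrival)

        def arrivals(env, triage, waits):
            while True:
                yield env.timeout(random.expovariate(1/6))  # ~6 min between arrivals
                env.process(patient(env, triage, waits))

        waits = []
        env = simpy.Environment()
        triage = simpy.Resource(env, capacity=2)            # two triage nurses
        env.process(arrivals(env, triage, waits))
        env.run(until=8 * 60)                               # one 8-hour shift
        print(f"{len(waits)} patients, mean time in triage "
              f"{sum(waits) / len(waits):.1f} min")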

  18. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  20. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  2. Computer-assisted modeling: Contributions of computational approaches to elucidating macromolecular structure and function: Final report

    International Nuclear Information System (INIS)

    Walton, S.

    1987-01-01

    The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field

  3. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50-megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs

  4. Paradox of integration-A computational model

    Science.gov (United States)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of the integration of a social group was highlighted by Blau (1964). During the integration process, group members simultaneously compete for social status and play the role of the audience. Here we show that when competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as described by Blau, people with high status are inclined to care more about the acceptance of others; this is achieved by praising others and revealing one's own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  5. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and placing input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
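
    A minimal sketch of the node-selection step (a least-loaded policy; one simple algorithm such a balancer could plug in, not necessarily the one studied in the paper). A production balancer would also decrement loads as tasks complete, using the monitoring data described above:

        import heapq

        class LeastLoadedBalancer:
            """Keep nodes in a heap ordered by current load and route each
            incoming request to the least-loaded node."""
            def __init__(self, nodes):
                self.heap = [(0.0, n) for n in nodes]    # (load, node id)
                heapq.heapify(self.heap)

            def dispatch(self, cost):
                load, node = heapq.heappop(self.heap)    # least-loaded node
                heapq.heappush(self.heap, (load + cost, node))
                return node

        lb = LeastLoadedBalancer(["node-a", "node-b", "node-c"])
        print([lb.dispatch(c) for c in (3.0, 1.0, 2.0, 5.0, 1.0)])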

  6. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper illustrates different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows the sensitivity indices of each scalar model input to be estimated, while the 'dispersion model' allows the total sensitivity index of the functional model inputs to be derived. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
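
    For the scalar inputs, first-order Sobol indices can be estimated directly by Monte Carlo; this is the quantity the metamodel is built to approximate cheaply. Below is a standard pick-freeze (Saltelli-style) estimator, with a toy test function rather than the industrial fuel code:

        import numpy as np

        def first_order_sobol(f, d, n=100_000, seed=0):
            """Pick-freeze Monte Carlo estimate of the first-order Sobol
            indices of f, with d independent U(0,1) inputs."""
            rng = np.random.default_rng(seed)
            A, B = rng.random((n, d)), rng.random((n, d))
            yA, yB = f(A), f(B)
            var = np.var(yA, ddof=1)
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                  # resample only input i
                S[i] = np.mean(yB * (f(ABi) - yA)) / var
            return S

        # Toy model: input 2 is inert, so S[2] should come out near zero
        f = lambda X: np.sin(2*np.pi*X[:, 0]) + 0.5*np.sin(2*np.pi*X[:, 1])**2
        print(first_order_sobol(f, d=3))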

  7. COMPUTER MODELLING OF ENERGY SAVING EFFECTS

    Directory of Open Access Journals (Sweden)

    Marian JANCZAREK

    2016-09-01

    Full Text Available The paper presents an analysis of the dynamics of heat transfer through the outer wall of a thermal technical space, taking into account the sinusoidal nature of changes in atmospheric temperature. These temporal variations at the outer surface of the chamber wall produce a sinusoidal change at the inner wall of the room, suitably damped and shifted in phase. A properly selected phase shift is clearly important for saving the energy used to maintain a specific heat regime inside the thermal technical chamber. Laboratory tests of the model and of the actual object allowed an optimal design of the chamber with respect to the structure of the partition as well as the geographical orientation of the chamber.
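
    The damping and phase shift can be estimated from the classical periodic-conduction solution, in which a surface temperature wave decays as exp(-x/delta) with penetration depth delta = sqrt(2*alpha/omega). A sketch under a semi-infinite-wall approximation with illustrative brick properties (not the paper's measured chamber):

        import numpy as np

        alpha = 5e-7                      # thermal diffusivity of brick, m^2/s (illustrative)
        L = 0.3                           # wall thickness, m
        omega = 2 * np.pi / 86400.0       # daily outdoor temperature cycle, rad/s

        delta = np.sqrt(2 * alpha / omega)        # penetration depth of the wave
        damping = np.exp(-L / delta)              # inner/outer amplitude ratio
        phase_lag_h = (L / delta) / omega / 3600  # time shift of the inner wave

        print(f"delta = {delta:.3f} m, amplitude ratio = {damping:.1%}, "
              f"phase lag = {phase_lag_h:.1f} h")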

  8. Computational modeling of Metal-Organic Frameworks

    Science.gov (United States)

    Sung, Jeffrey Chuen-Fai

    In this work, the metal-organic frameworks MIL-53(Cr), DMOF-2,3-NH 2Cl, DMOF-2,5-NH2Cl, and HKUST-1 were modeled using molecular mechanics and electronic structure. The effect of electronic polarization on the adsorption of water in MIL-53(Cr) was studied using molecular dynamics simulations of water-loaded MIL-53 systems with both polarizable and non-polarizable force fields. Molecular dynamics simulations of the full systems and DFT calculations on representative framework clusters were utilized to study the difference in nitrogen adsorption between DMOF-2,3-NH2Cl and DMOF-2,5-NH 2Cl. Finally, the control of proton conduction in HKUST-1 by complexation of molecules to the Cu open metal site was investigated using the MS-EVB methodology.

  9. Computer Forensics Field Triage Process Model

    Directory of Open Access Journals (Sweden)

    Marcus K. Rogers

    2006-06-01

    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model's forensic soundness, investigative support capabilities and practical considerations.

  10. Modeling soft factors in computer-based wargames

    Science.gov (United States)

    Alexander, Steven M.; Ross, David O.; Vinarskai, Jonathan S.; Farr, Steven D.

    2002-07-01

    Computer-based wargames have seen much improvement in recent years due to rapid increases in computing power. Because these games have been developed for the entertainment industry, most of these advances have centered on the graphics, sound, and user interfaces integrated into these wargames with less attention paid to the game's fidelity. However, for a wargame to be useful to the military, it must closely approximate as many of the elements of war as possible. Among the elements that are typically not modeled or are poorly modeled in nearly all military computer-based wargames are systematic effects, command and control, intelligence, morale, training, and other human and political factors. These aspects of war, with the possible exception of systematic effects, are individually modeled quite well in many board-based commercial wargames. The work described in this paper focuses on incorporating these elements from the board-based games into a computer-based wargame. This paper will also address the modeling and simulation of the systemic paralysis of an adversary that is implied by the concept of Effects Based Operations (EBO). Combining the fidelity of current commercial board wargames with the speed, ease of use, and advanced visualization of the computer can significantly improve the effectiveness of military decision making and education. Once in place, the process of converting board wargames concepts to computer wargames will allow the infusion of soft factors into military training and planning.

  11. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  12. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field.

  13. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusions of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis... The computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological purposes.

  14. Modelling Emission from Building Materials with Computational Fluid Dynamics

    DEFF Research Database (Denmark)

    Topp, Claus; Nielsen, Peter V.; Heiselberg, Per

    This paper presents a numerical model that by means of computational fluid dynamics (CFD) is capable of dealing with both pollutant transport across the boundary layer and internal diffusion in the source without prior knowledge of which is the limiting process. The model provides the concentration...

  15. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.; Myrvoll-Nilsen, Eirik; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood
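
    The long-memory structure behind that computational cost is visible in the standard fGn autocovariance function; exact likelihood evaluation requires factorizing the resulting dense n-by-n covariance matrix, which motivates approximations. A sketch of the textbook formula (sigma^2 and the Hurst parameter H are free parameters):

        import numpy as np

        def fgn_acov(k, H, sigma2=1.0):
            """Autocovariance of fractional Gaussian noise at integer lags k.
            For H > 0.5 it decays like k^(2H-2), i.e. long memory."""
            k = np.abs(np.asarray(k, dtype=float))
            return 0.5 * sigma2 * (np.abs(k - 1)**(2*H)
                                   - 2*k**(2*H) + (k + 1)**(2*H))

        lags = np.arange(6)
        print(fgn_acov(lags, H=0.9))   # slow decay: long memory
        print(fgn_acov(lags, H=0.6))   # much faster decay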

  16. Computational 3-D Model of the Human Respiratory System

    Science.gov (United States)

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  17. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  18. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not always adequate, serve a useful role in product-process design. In this case, the use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...

  19. Computer-Aided Template for Model Reuse, Development and Maintenance

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2014-01-01

    A template-based approach for model development is presented in this work. Based on a model decomposition technique, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps...

  20. A Novel Computer Virus Propagation Model under Security Classification

    Directory of Open Access Journals (Sweden)

    Qingyi Zhu

    2017-01-01

    Full Text Available In reality, some computers have a specific security classification. For the sake of safety and cost, the security level of computers will be upgraded as threats in networks increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to raise the security of a fraction of computers with a low security level. In some specific realistic environments, the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model considering the impact of security classification in a fully interconnected network. Using the theory of dynamic stability, the existence of equilibria and the stability conditions are analysed and proved, and the above optimal threshold value is given analytically. Then, some numerical experiments are made to justify the model. Besides, some discussions and antivirus measures are given.

  1. Basic definitions for discrete modeling of computer worms epidemics

    Directory of Open Access Journals (Sweden)

    Pedro Guevara López

    2015-01-01

    Full Text Available Information technologies have evolved in such a way that communication between computers or hosts has become common, so much so that worldwide organizations (governments and corporations) depend on it; the consequences if these computers stopped working for a long time would be catastrophic. Unfortunately, networks are attacked by malware such as viruses and worms that could collapse the system. This has served as motivation for the formal study of computer worms and epidemics to develop strategies for prevention and protection; this is why in this paper, before analyzing epidemiological models, a set of formal definitions based on set theory and functions is proposed for describing 21 concepts used in the study of worms. These definitions provide a basis for future qualitative research on the behavior of computer worms, and quantitative research on their epidemiological models.

  2. Computer modeling of flow induced in-reactor vibrations

    International Nuclear Information System (INIS)

    Turula, P.; Mulcahy, T.M.

    1977-01-01

    An assessment of the reliability of finite element method computer models, as applied to the computation of flow induced vibration response of components used in nuclear reactors, is presented. The prototype under consideration was the Fast Flux Test Facility reactor being constructed for US-ERDA. Data were available from an extensive test program which used a scale model simulating the hydraulic and structural characteristics of the prototype components, subjected to scaled prototypic flow conditions as well as to laboratory shaker excitations. Corresponding analytical solutions of the component vibration problems were obtained using the NASTRAN computer code. Modal analyses and response analyses were performed. The effect of the surrounding fluid was accounted for. Several possible forcing function definitions were considered. Results indicate that modal computations agree well with experimental data. Response amplitude comparisons are good only under conditions favorable to a clear definition of the structural and hydraulic properties affecting the component motion. 20 refs

  3. Computational modelling of memory retention from synapse to behaviour

    Science.gov (United States)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  4. Computational fluid dynamics modelling of displacement natural ventilation.

    OpenAIRE

    Ji, Yingchun

    2005-01-01

    Natural ventilation is widely recognised as contributing towards low-energy building design. The requirement to reduce energy usage in new buildings has rejuvenated interest in natural ventilation. This thesis deals with computer modelling of natural displacement ventilation driven either by buoyancy or buoyancy combined with wind forces. Two benchmarks have been developed using computational fluid dynamics (CFD) in order to evaluate the accuracy with which CFD is able to mo...

  5. Using Computational and Mechanical Models to Study Animal Locomotion

    OpenAIRE

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locom...

  7. Actors: A Model of Concurrent Computation in Distributed Systems.

    Science.gov (United States)

    1985-06-01

    Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, MIT Artificial Intelligence Laboratory (report AD-A157 917). Support for the laboratory's artificial intelligence research is... This document has been approved for public release and sale.

  8. Computational modeling of turn-taking dynamics in spoken conversations

    OpenAIRE

    Chowdhury, Shammur Absar

    2017-01-01

    The study of human interaction dynamics has been at the center of multiple research disciplines, including computer and social sciences, conversational analysis and psychology, for decades. Recent interest has been shown with the aim of designing computational models to improve human-machine interaction systems as well as to support humans in their decision-making process. Turn-taking is one of the key aspects of conversational dynamics in dyadic conversations and is an integral part of hu...

  9. Single-server blind quantum computation with quantum circuit model

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has few quantum technologies, to delegate her quantum computation to a server, who has strong quantum computational abilities and learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol with the quantum circuit model by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, such that the server learns nothing about the quantum algorithms. The client only needs to perform operations X and Z, while the server honestly performs rotation operators.

  10. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer... ...-format and COM-objects, are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks.

  11. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  12. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented....... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  13. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, Yu. A.

    2018-03-01

    The paper considers the acquisition of a spline hob geometric model. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of spline hob mathematical model development. The paper defines correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, taking helical movement in two dimensions into consideration. An application for acquiring the 3D model of a spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application makes it possible to use the acquired model for milling process imitation. An example of evaluation, analytical representation and computer modeling of the proposed geometrical model is reviewed. In this example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.
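
    A parametric vector function of the kind described can be written down directly; a minimal sketch of a helical cutting edge (radius, lead and phase are illustrative placeholders, not the paper's hob geometry):

        import numpy as np

        def helical_edge(t, r, lead, phi0=0.0):
            """Points of a helical cutting edge as a parametric vector
            function: radius r, axial advance `lead` per full turn,
            angular phase phi0."""
            return np.stack([r * np.cos(t + phi0),
                             r * np.sin(t + phi0),
                             lead * t / (2.0 * np.pi)], axis=-1)

        t = np.linspace(0.0, 4.0 * np.pi, 200)       # two turns of the edge
        pts = helical_edge(t, r=40.0, lead=12.0)     # mm, illustrative
        print(pts[:3])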

  14. Soft Tissue Biomechanical Modeling for Computer Assisted Surgery

    CERN Document Server

    2012-01-01

      This volume focuses on the biomechanical modeling of biological tissues in the context of Computer Assisted Surgery (CAS). More specifically, deformable soft tissues are addressed since they are the subject of the most recent developments in this field. The pioneering works on this CAS topic date from the 1980's, with applications in orthopaedics and biomechanical models of bones. More recently, however, biomechanical models of soft tissues have been proposed since most of the human body is made of soft organs that can be deformed by the surgical gesture. Such models are much more complicated to handle since the tissues can be subject to large deformations (non-linear geometrical framework) as well as complex stress/strain relationships (non-linear mechanical framework). Part 1 of the volume presents biomechanical models that have been developed in a CAS context and used during surgery. This is particularly new since most of the soft tissues models already proposed concern Computer Assisted Planning, with ...

  15. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas

    2016-01-01

    . To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient......'s CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns......Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging...

  16. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close co-operation of the defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given, as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  17. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

    Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from the transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons, so the initial effort at ER process optimization is made with computer models. A number of models have been developed for this purpose, but as their validation is incomplete and often problematic, the simulation results from these models are inherently uncertain. In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, the ERAD code. Several candidate surrogate materials are selected for the experiment in consideration of their chemical and physical properties.

  18. The Effect of Computer Models as Formative Assessment on Student Understanding of the Nature of Models

    Science.gov (United States)

    Park, Mihwa; Liu, Xiufeng; Smith, Erica; Waight, Noemi

    2017-01-01

    This study reports the effect of computer models as formative assessment on high school students' understanding of the nature of models. Nine high school teachers integrated computer models and associated formative assessments into their yearlong high school chemistry course. A pre-test and post-test of students' understanding of the nature of…

  19. Computer Aided Multi-Data Fusion Dismount Modeling

    Science.gov (United States)

    2012-03-22

    dependent on a particular environmental condition. They are costly, cumbersome, and involve dedicated software practices and particular knowledge to operate... allow manipulation of 2D matrices, like Microsoft Excel or Libre Office. The second alternative is to modify an already created model (MEM). The model... software. Therefore, with the described computer aided multi-data dismount model the researcher will be able to attach signatures to any desired

  20. Computer modelling for ecosystem service assessment: Chapter 4.4

    Science.gov (United States)

    Dunford, Robert; Harrison, Paula; Bagstad, Kenneth J.

    2017-01-01

    Computer models are simplified representations of the environment that allow biophysical, ecological, and/or socio-economic characteristics to be quantified and explored. Modelling approaches differ from mapping approaches (Chapter 5) as (i) they are not necessarily spatial (although many models do produce spatial outputs); (ii) they focus on understanding and quantifying the interactions between different components of social and/or environmental systems and (iii)

  1. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrology-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
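
    For readers who script against HydroShare, a hedged sketch of this workflow via the hs_restclient Python package follows; the resource type, file name, and metadata below are illustrative placeholders and should be checked against the current API documentation.

      from hs_restclient import HydroShare, HydroShareAuthBasic

      auth = HydroShareAuthBasic(username='user', password='password')
      hs = HydroShare(auth=auth)

      # publish a model run as one resource: raw data, scripts, inputs, results
      resource_id = hs.createResource(
          'CompositeResource',                       # assumed resource type
          'SUMMA test case: inputs, scripts, and results',
          resource_file='summa_run.zip',             # hypothetical archive
          keywords=['hydrology', 'SUMMA', 'reproducibility'],
          abstract='All digital objects needed to reproduce the model run.')

      # later, anyone can retrieve the same resource to replicate the results
      hs.getResource(resource_id, destination='.', unzip=True)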

  2. A computer simulation model to compute the radiation transfer of mountainous regions

    Science.gov (United States)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. By using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of the adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences from remotely sensed data can be made.
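
    One building block of such a simulation is a terrain shadow test. The toy Monte Carlo sketch below ray-marches from random surface points toward the sun over a synthetic heightfield; the paper's full model also traces scattering and adjacent-slope reflections, which this omits.

      import numpy as np

      rng = np.random.default_rng(1)

      def shadowed(height, x0, y0, sun_dir, step=1.0, n_steps=200):
          # march a ray from a terrain point toward the sun and report
          # whether intervening terrain blocks it (binary shadow test)
          nx, ny = height.shape
          x, y = float(x0), float(y0)
          z = height[int(x0), int(y0)] + 1e-3
          for _ in range(n_steps):
              x += sun_dir[0] * step
              y += sun_dir[1] * step
              z += sun_dir[2] * step
              if not (0 <= x < nx - 1 and 0 <= y < ny - 1):
                  return False              # ray left the scene unobstructed
              if z < height[int(x), int(y)]:
                  return True               # terrain blocks the sun
          return False

      # synthetic ridged heightfield and a low sun in the +x direction
      u = np.linspace(0, 4 * np.pi, 128)
      height = 20.0 * np.abs(np.sin(u))[:, None] * np.ones((1, 128))
      sun = np.array([1.0, 0.0, 0.3])
      sun /= np.linalg.norm(sun)

      # Monte Carlo estimate of the shadowed fraction over random points
      pts = rng.integers(0, 127, size=(2000, 2))
      frac = np.mean([shadowed(height, px, py, sun) for px, py in pts])
      print(f"shadowed fraction of the scene: {frac:.2f}")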

  3. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual "walk-throughs" for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use

  4. Computational model for dosimetric purposes in dental procedures

    International Nuclear Information System (INIS)

    Kawamoto, Renato H.; Campos, Tarcisio R.

    2013-01-01

    This study aims to develop a computational model of the oral region for dosimetric purposes, based on the computational tools SISCODES and MCNP-5, to predict deterministic effects and minimize stochastic effects caused by ionizing radiation in radiodiagnosis. Based on a set of digital information provided by computed tomography, a three-dimensional voxel model was created, with its tissues represented. The model was exported to the MCNP code. In association with SISCODES, we used the Monte Carlo N-Particle Transport Code (MCNP-5) to reproduce, as a statistical process, the corresponding interactions of nuclear particles with human tissues. The study will serve as a source of data for dosimetric studies in the oral region, helping to predict deterministic effects and to minimize the stochastic effects of ionizing radiation

  5. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop

  6. Computational Models for Calcium-Mediated Astrocyte Functions.

    Science.gov (United States)

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus
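
    A common biophysical core of the calcium models surveyed in these two records is the Li-Rinzel description of IP3 receptor kinetics. The minimal sketch below uses commonly quoted parameter values as an illustrative baseline, not the settings of any one reviewed model.

      import numpy as np

      c0, c1 = 2.0, 0.185          # total Ca (uM), ER/cytosol volume ratio
      v1, v2, v3 = 6.0, 0.11, 0.9  # channel, leak, SERCA pump rates
      k3 = 0.1                     # pump affinity (uM)
      d1, d2, d3, d5 = 0.13, 1.049, 0.9434, 0.08234
      a2, ip3 = 0.2, 0.4           # IP3R binding rate, IP3 level (uM)

      def rhs(ca, h):
          ca_er = (c0 - ca) / c1
          m_inf = ip3 / (ip3 + d1)
          n_inf = ca / (ca + d5)
          j_chan = c1 * v1 * (m_inf * n_inf * h) ** 3 * (ca_er - ca)
          j_leak = c1 * v2 * (ca_er - ca)
          j_pump = v3 * ca**2 / (k3**2 + ca**2)
          q2 = d2 * (ip3 + d1) / (ip3 + d3)
          dh = a2 * (q2 * (1.0 - h) - ca * h)    # IP3R inactivation gate
          return j_chan + j_leak - j_pump, dh

      ca, h, dt = 0.1, 0.6, 1e-3
      trace = []
      for _ in range(int(120 / dt)):             # 120 s, explicit Euler
          dca, dh = rhs(ca, h)
          ca, h = ca + dt * dca, h + dt * dh
          trace.append(ca)
      print(f"Ca oscillation range: {min(trace):.3f}-{max(trace):.3f} uM")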

  7. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
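
    This emulation workflow is straightforward to reproduce with modern libraries. Below is a minimal sketch using scikit-learn in place of the authors' tooling, with a cheap stand-in function playing the role of the expensive computer code.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def simulator(x):
          # stand-in for an expensive deterministic code y = f(x)
          return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

      rng = np.random.default_rng(0)
      X_train = rng.uniform(0, 1, size=(30, 2))   # small design of code runs
      y_train = simulator(X_train)

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                    normalize_y=True)
      gp.fit(X_train, y_train)

      # cheap predictions with uncertainty, usable for sensitivity and
      # uncertainty analysis in place of further runs of the real code
      X_new = rng.uniform(0, 1, size=(5, 2))
      mean, std = gp.predict(X_new, return_std=True)
      print(np.c_[mean, std])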

  8. One-dimensional computational modeling on nuclear reactor problems

    International Nuclear Information System (INIS)

    Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos

    2013-01-01

    In this article, we present a computational modeling approach which gives us a dynamic view of some applications of Nuclear Engineering, specifically the power distribution and effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, with the linearized Boltzmann equation in the discrete ordinates (SN) formulation, independent of time, with isotropic scattering, and we built a software simulator for modeling the computational problems used in typical calculations. The program used in the implementation of the simulator was Matlab, version 7.0. (author)
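
    A textbook version of such a calculation, in Python rather than the authors' Matlab, is sketched below: a one-group slab with isotropic scattering, solved by diamond-difference sweeps and source iteration. Geometry and cross sections are invented for illustration.

      import numpy as np

      L, nx, N = 10.0, 200, 8                    # slab width, cells, SN order
      sig_t, sig_s, q = 1.0, 0.5, 1.0            # cross sections, source
      dx = L / nx
      mu, w = np.polynomial.legendre.leggauss(N) # ordinates and weights

      phi = np.zeros(nx)
      for it in range(500):                      # source iteration
          phi_new = np.zeros(nx)
          src = 0.5 * (sig_s * phi + q)          # isotropic emission density
          for m in range(N):
              psi_in = 0.0                       # vacuum boundaries
              cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
              a = abs(mu[m]) / dx
              for i in cells:
                  psi_cell = (src[i] + 2 * a * psi_in) / (sig_t + 2 * a)
                  psi_in = 2 * psi_cell - psi_in # diamond-difference closure
                  phi_new[i] += w[m] * psi_cell
          err = np.max(np.abs(phi_new - phi))
          phi = phi_new
          if err < 1e-8:
              break
      print(f"converged in {it} iterations, midplane flux {phi[nx // 2]:.4f}")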

  9. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  10. Validation of Computer Models for Homeland Security Purposes

    International Nuclear Information System (INIS)

    Schweppe, John E.; Ely, James; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-01-01

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources

  11. Computational Modelling of Piston Ring Dynamics in 3D

    Directory of Open Access Journals (Sweden)

    Dlugoš Jozef

    2014-12-01

    Full Text Available Advanced computational models of a piston assembly at the level of virtual prototypes require a detailed description of piston ring behaviour. Considering these requirements, the piston rings operate in regimes that cannot, in general, be simplified into an axisymmetric model. The piston and the cylinder liner do not have a perfectly round shape, mainly due to machining tolerances and external thermo-mechanical loads. If the ring cannot follow the liner deformations, a local loss of contact occurs, resulting in blow-by and increased consumption of lubricant oil in the engine. Current computational models are unable to implement such effects. The paper focuses on the development of a flexible 3D piston ring model based on the Timoshenko beam theory using the multibody system (MBS) approach. The MBS model is compared to the finite element method (FEM) solution.

  12. Cloud Computing Adoption Model for Universities to Increase ICT Proficiency

    Directory of Open Access Journals (Sweden)

    Safiya Okai

    2014-08-01

    Full Text Available Universities around the world, especially those in developing countries, face the problem of delivering the level of information and communications technology (ICT) needed to facilitate the teaching, learning, research, and development activities of a typical university, in line with advances in technology and the growing dependence on IT. This is mainly due to the high cost involved in providing and maintaining the needed hardware and software. A technology such as cloud computing, which delivers on-demand provisioning of IT resources on a pay-per-use basis, can be used to address this problem. Cloud computing promises better delivery of IT services as well as availability whenever and wherever needed, at reduced costs, with users paying only as much as they consume through the services of cloud service providers. The cloud technology reduces complexity while increasing the speed and quality of the IT services provided; however, despite these benefits, the challenges that come with its adoption have left many sectors, especially higher education, skeptical about committing to this technology. This article identifies the reasons for the slow rate of adoption of cloud computing at university level, discusses the challenges faced, and proposes a cloud computing adoption model that contains strategic guidelines to overcome the major challenges identified and a roadmap for the successful adoption of cloud computing by universities. The model was tested in one of the universities and found to be both useful and appropriate for adopting cloud computing at university level.

  13. Lattice Boltzmann model capable of mesoscopic vorticity computation

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM) which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with single relaxation parameter.
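
    The mesoscopic evaluation of gradients that the paper generalizes can already be seen in the standard strain-rate computation. The sketch below uses a plain D2Q9 BGK model (not the paper's D3Q27 MRT scheme) and checks the locally computed S_xy against a finite difference of the velocity field.

      import numpy as np

      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                    [1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)
      cs2, tau, N = 1/3, 0.8, 64

      def feq(rho, u):
          cu = np.einsum('ia,xya->ixy', c, u)
          usq = np.einsum('xya,xya->xy', u, u)
          return w[:, None, None] * rho * (1 + cu/cs2
                                           + cu**2/(2*cs2**2) - usq/(2*cs2))

      y = np.arange(N)                           # decaying shear wave u_x(y)
      u = np.zeros((N, N, 2))
      u[:, :, 0] = 0.05 * np.sin(2*np.pi*y/N)[None, :]
      rho = np.ones((N, N))
      f = feq(rho, u)

      for _ in range(50):                        # collide and stream
          rho = f.sum(axis=0)
          u = np.einsum('ixy,ia->xya', f, c) / rho[:, :, None]
          f += -(f - feq(rho, u)) / tau
          for i in range(9):
              f[i] = np.roll(f[i], tuple(c[i]), axis=(0, 1))

      rho = f.sum(axis=0)
      u = np.einsum('ixy,ia->xya', f, c) / rho[:, :, None]
      fneq = f - feq(rho, u)
      # mesoscopic strain rate: S_ab = -sum_i fneq c_ia c_ib / (2 rho cs2 tau)
      S_xy = -np.einsum('ixy,i->xy', fneq, c[:, 0]*c[:, 1]) / (2*rho*cs2*tau)
      dudy = 0.5 * (np.roll(u[:, :, 0], -1, 1) - np.roll(u[:, :, 0], 1, 1)) / 2
      print(f"max |meso - FD| = {np.max(np.abs(S_xy - dudy)):.2e}, "
            f"max |S_xy| = {np.max(np.abs(S_xy)):.2e}")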

  14. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structures of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.
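
    As an illustration of the coarse-grained Langevin machinery such models rest on, here is a minimal overdamped bead-spring chain with harmonic bonds; the real SOP force field (FENE bonds, native-contact attractions) is more elaborate.

      import numpy as np

      rng = np.random.default_rng(3)
      n, k, r0 = 20, 100.0, 1.0      # beads, spring constant, bond length
      gamma, kT, dt = 1.0, 1.0, 1e-4

      x = np.cumsum(np.full((n, 3), [r0, 0, 0]), axis=0)  # straight start

      def forces(x):
          d = x[1:] - x[:-1]
          r = np.linalg.norm(d, axis=1, keepdims=True)
          fbond = k * (r - r0) * d / r   # harmonic bonds (FENE in real SOP)
          f = np.zeros_like(x)
          f[:-1] += fbond
          f[1:] -= fbond
          return f

      for _ in range(5000):              # Euler-Maruyama Langevin updates
          noise = rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), x.shape)
          x += dt * forces(x) / gamma + noise

      print(f"end-to-end distance after relaxation: "
            f"{np.linalg.norm(x[-1] - x[0]):.2f}")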

  15. Simulation models for computational plasma physics: Concluding report

    International Nuclear Information System (INIS)

    Hewett, D.W.

    1994-01-01

    In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways; they are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low frequency model, called the Streamlined Darwin Field (SDF) model (Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in the fully non-neutral SDF code BEAGLE (Larson, Ph.D. dissertation, 1993) and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating internal boundary conditions into this model with the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer -- thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993)

  16. Life system modeling and intelligent computing. Pt. II. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen's Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science]; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation]

    2010-07-01

    This book is part II of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010 and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The 56 papers in this volume are organized in topical sections on advanced evolutionary computing theory and algorithms; advanced neural network and fuzzy system theory and algorithms; modeling and simulation of societies and collective behavior; biomedical signal processing, imaging, and visualization; intelligent computing and control in distributed power generation systems; intelligent methods in power and energy infrastructure development; intelligent modeling, monitoring, and control of complex nonlinear systems. (orig.)

  17. An ODP computational model of a cooperative binding object

    Science.gov (United States)

    Logé, Christophe; Najm, Elie; Chen, Ken

    1997-12-01

    The next generation of systems will have to manage several geographically distributed users simultaneously. These systems belong to the class of computer-supported cooperative work systems (CSCW). The development of such complex systems requires rigorous development methods and flexible open architectures. Open distributed processing (ODP) is a standardization effort that aims at providing such architectures. ODP features appropriate abstraction levels and a clear articulation between requirements, programming and infrastructure support. ODP advocates the use of formal methods for the specification of systems and components. The computational model, an object-based model, one of the abstraction levels identified within ODP, plays a central role in the global architecture. In this model, basic objects can be composed with communication and distribution abstractions (called binding objects) to form a computational specification of distributed systems, or applications. Computational specifications can then be mapped (in a mechanism akin to compilation) onto an engineering solution. We use an ODP-inspired method to computationally specify a cooperative system. We start from a general purpose component that we progressively refine into a collection of basic and binding objects. We focus on two issues of a co-authoring application, namely, dynamic reconfiguration and multiview synchronization. We discuss solutions for these issues and formalize them using the MT-LOTOS specification language that is currently studied in the ISO standardization formal description techniques group.

  18. FCJ-131 Pervasive Computing and Prosopopoietic Modelling – Notes on computed function and creative action

    Directory of Open Access Journals (Sweden)

    Anders Michelsen

    2011-12-01

    Full Text Available This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term ‘pervasive’, e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser’s original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown’s (1997) terms, ‘invisible’, on the horizon, ‘calm’, it also points to a much more important and slightly different perspective: that of creative action upon novel forms of artifice. Most importantly for this article, ubiquity and pervasive computing are seen to point to the continuous existence throughout the computational heritage since the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one into the other. It also indicates a generative creation that itself points to important issues of ontology with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges’ introduction of the classical rhetoric term of ‘prosopopoeia’ into the debate on large technological systems. First, the paper introduces the paradoxical distinction/complicity by debating Gilbert Simondon’s notion of a ‘margin of indeterminacy’ vis-a-vis computing. Second, it debates the idea of prosopopoietic modeling, pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. Prosopopoietic

  19. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    Full Text Available As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is further compounded by the increasing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emissions on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes to execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction, and is better suited for compute-intensive execution models, where up to 30% higher profit can be achieved due to reduced energy consumption.
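
    The energy argument can be made concrete with the common cubic power-speed model P proportional to s**3; the numbers below are invented for illustration only.

      work = 1.0                       # normalized work units
      s_main, s_shadow = 1.0, 0.5      # execution speeds

      def energy(speed, work):
          # P(s) * time = s**3 * (work / speed), the usual cubic power model
          return speed**3 * (work / speed)

      classic = 2 * energy(s_main, work)               # two full-speed replicas
      shadow = energy(s_main, work) + energy(s_shadow, work)
      print(f"classic replication: {classic:.2f}, shadow: {shadow:.2f}")
      # the shadow only accelerates (re-executing remaining work) if the
      # main process actually fails, so its expected cost is lower still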

  20. Computer models for fading channels with applications to digital transmission

    Science.gov (United States)

    Loo, Chun; Secord, Norman

    1991-11-01

    The authors describe computer models for Rayleigh, Rician, log-normal, and land-mobile-satellite fading channels. All computer models for the fading channels are based on the manipulation of a white Gaussian random process. This process is approximated by a sum of sinusoids with random phase angle. These models compare very well with analytical models in terms of their probability distribution of envelope and phase of the fading signal. For the land mobile satellite fading channel, results of level crossing rate and average fade duration are given. These results show that the computer models can provide a good coarse estimate of the time statistic of the faded signal. Also, for the land-mobile-satellite fading channel, the results show that a 3-pole Butterworth shaping filter should be used with the model. An example of the application of the land-mobile-satellite fading-channel model to predict the performance of a differential phase-shift keying signal is described.
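
    A minimal sum-of-sinusoids Rayleigh fading generator in this spirit is sketched below, with illustrative parameters; the level-crossing count connects directly to the time statistics discussed above.

      import numpy as np

      rng = np.random.default_rng(7)
      f_d = 100.0                      # maximum Doppler shift (Hz)
      M = 32                           # number of sinusoids
      fs = 10_000.0                    # sample rate (Hz)
      t = np.arange(0, 1.0, 1.0 / fs)

      alpha = 2 * np.pi * rng.random(M)        # arrival angles
      phi = 2 * np.pi * rng.random(M)          # random phases
      g = np.sum(np.exp(1j * (2 * np.pi * f_d * np.cos(alpha)[:, None] * t
                              + phi[:, None])), axis=0) / np.sqrt(M)

      envelope = np.abs(g)                     # Rayleigh-distributed envelope
      level = np.sqrt(np.mean(envelope**2) / 2)          # -3 dB threshold
      crossings = np.sum((envelope[:-1] > level) & (envelope[1:] <= level))
      print(f"mean power {np.mean(envelope**2):.2f}, "
            f"downward crossings per second at -3 dB: {crossings}")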

  1. On turbulence models for rod bundle flow computations

    International Nuclear Information System (INIS)

    Hazi, Gabor

    2005-01-01

    In commercial computational fluid dynamics codes there is more than one turbulence model built in. It is the user's responsibility to choose a model suitable for the problem studied. In the last decade, several computations were presented using computational fluid dynamics for the simulation of various problems of the nuclear industry. A common feature of a number of those simulations is that they were performed using the standard k-ε turbulence model without justifying the choice of the model. The simulation results were rarely satisfactory. In this paper, we consider the flow in a fuel rod bundle as a case study and discuss why the application of the standard k-ε model fails to give reasonable results in this situation. We also show that a turbulence model based on the Reynolds stress transport equations can provide qualitatively correct results. Generally, our aim is pedagogical: we would like to call the reader's attention to the fact that turbulence models have to be selected based on theoretical considerations and/or adequate information obtained from measurements

  2. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation...... are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately...... to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...

  3. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology due to abilities like ensuring scalable services and reducing the burden of local hardware and software management, while increasing flexibility and scalability. A key trait of cloud services is the remote processing of data. Even though this technology offers many services, there are a few concerns, such as misbehavior with data stored on the server side, the data owner's loss of control over their own data, and the lack of access control over outsourced data as desired by the data owner. To handle these issues, we propose a new model that ensures data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data, helps to achieve cloud data integrity, supports the data owner in keeping control of their own data through tracking, and improves the access control of outsourced data.

  4. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  5. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  6. SPLAI: Computational Finite Element Model for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ruzana Ishak

    2006-01-01

    Full Text Available A wireless sensor network refers to a group of sensors linked by a wireless medium to perform a distributed sensing task. The primary interest is their capability to monitor the physical environment through the deployment of numerous tiny, intelligent, wirelessly networked sensor nodes. Our interest is in a sensor network that includes a few specialized nodes, called processing elements, with some limited computational capability. In this paper, we propose a model called SPLAI that allows the network to compute a finite element problem, where the processing elements are modeled as the nodes in a linear triangular approximation problem. Our model also considers the case of some sensor failures. A simulation model to visualize this network has been developed using C++ in the Windows environment.
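
    The per-element computation such processing elements would carry out is small; for the Laplace operator on a linear (P1) triangle it reduces to the classic stiffness formula sketched below (generic FEM textbook material, not SPLAI's own code).

      import numpy as np

      def p1_stiffness(xy):
          # local stiffness of a linear triangle for the Laplace operator;
          # xy is a (3, 2) array of vertex coordinates
          x, y = xy[:, 0], xy[:, 1]
          b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
          c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
          area = 0.5 * abs(x @ b)
          return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

      K = p1_stiffness(np.array([[0., 0.], [1., 0.], [0., 1.]]))
      print(K)
      print(K.sum(axis=1))   # rows sum to zero, as they must for Laplace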

  7. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reacting gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and composition and flow rate of the influent.
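
    A toy one-dimensional plug-flow version of the gas-to-liquid transfer balance is sketched below; all coefficients are invented placeholders rather than VRA design values.

      import numpy as np

      kLa = 0.5        # volumetric mass-transfer coefficient (1/s), assumed
      k_rxn = 0.2      # effective first-order reaction rate (1/s), assumed
      C_sat = 8.0      # oxygen solubility at reactor conditions (mg/L)
      v = 0.01         # superficial liquid velocity (m/s)
      L, n = 0.5, 500  # reactor length (m), grid points

      dz = L / n
      C = np.zeros(n)                    # dissolved O2 along the reactor
      for i in range(1, n):
          # transfer in minus reaction out, marched along the flow direction
          dCdz = (kLa * (C_sat - C[i-1]) - k_rxn * C[i-1]) / v
          C[i] = C[i-1] + dz * dCdz
      print(f"outlet dissolved O2: {C[-1]:.2f} mg/L "
            f"(steady balance at {kLa * C_sat / (kLa + k_rxn):.2f})")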

  8. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  9. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
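
    GRESS itself instruments FORTRAN source, but the underlying idea of both records, propagating derivatives alongside values, can be sketched with forward-mode dual numbers in a few lines (an illustration of the principle, not of GRESS).

      class Dual:
          # forward-mode AD value: carries f and df/dx together
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)  # product rule
          __rmul__ = __mul__

      def model(k):
          # toy "code output": y = k * (k + 2)
          return k * (k + 2.0)

      x = Dual(3.0, 1.0)           # seed derivative dk/dk = 1
      y = model(x)
      print(y.val, y.der)          # 15.0 and dy/dk = 2k + 2 = 8.0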

  10. Computer modeling of inelastic wave propagation in porous rock

    International Nuclear Information System (INIS)

    Cheney, J.A.; Schatz, J.F.; Snell, C.

    1979-01-01

    Computer modeling of wave propagation in porous rock has several important applications. Among them are prediction of fragmentation and permeability changes to be caused by chemical explosions used for in situ resource recovery, and the understanding of nuclear explosion effects such as seismic wave generation, containment, and site hardness. Of interest in all these applications are the distance from the source to which inelastic effects persist and the amount of porosity change within the inelastic region. In order to study phenomena related to these applications, the Cam Clay family of models developed at Cambridge University was used to develop a similar model that is applicable to wave propagation in porous rock. That model was incorporated into a finite-difference wave propagation computer code SOC. 10 figures, 1 table

  11. Computer modeling of lung cancer diagnosis-to-treatment process.

    Science.gov (United States)

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model for the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure for deriving closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
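
    As a sketch of the Markov chain approach, the absorbing-chain calculation below computes expected steps to treatment; the states and transition probabilities are invented for illustration, not taken from the paper's data.

      import numpy as np

      states = ["referral", "imaging", "biopsy", "staging", "treatment"]
      P = np.array([
          [0.2, 0.8, 0.0, 0.0, 0.0],
          [0.0, 0.1, 0.7, 0.0, 0.2],
          [0.0, 0.0, 0.1, 0.8, 0.1],
          [0.0, 0.0, 0.0, 0.2, 0.8],
          [0.0, 0.0, 0.0, 0.0, 1.0],   # treatment = absorbing state
      ])

      Q = P[:4, :4]                     # transitions among transient states
      N = np.linalg.inv(np.eye(4) - Q)  # fundamental matrix
      steps = N @ np.ones(4)            # expected steps until absorption
      for s, m in zip(states[:4], steps):
          print(f"expected steps from {s}: {m:.2f}")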

  12. Advances in engineering turbulence modeling. [computational fluid dynamics

    Science.gov (United States)

    Shih, T.-H.

    1992-01-01

    Some new developments in two-equation models and second-order closure models are presented. In this paper, modified two-equation models are proposed to remove shortcomings in computing flows over complex geometries and in the ad hoc treatment near separation and reattachment points. The calculations using various two-equation models are compared with direct numerical solutions of channel flows and flat-plate boundary layers. The development of second-order closure models is also discussed, with emphasis on the modeling of pressure-related correlation terms and dissipation rates in the second-moment equations. All existing models poorly predict the normal stresses near the wall and fail to predict the three-dimensional effect of mean flow on the turbulence. The newly developed second-order near-wall turbulence model described in this paper is capable of capturing the near-wall behavior of turbulence as well as the effect of three-dimensional mean flow on the turbulence.

  13. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  14. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    International Nuclear Information System (INIS)

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
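
    The idea of a cell-wise equivalent hydraulic diameter can be illustrated on a laminar plane channel, where an analytic profile lets each cell's diameter be chosen so that the standard duct drag law reproduces the true pressure gradient; in practice the profile would come from CFD or experiment, and the construction here is a simplified stand-in for the paper's method.

      import numpy as np

      H, mu, u_mean = 0.01, 1e-3, 1.0            # gap (m), viscosity, mean velocity
      n = 8                                      # cells across the gap
      y = (np.arange(n) + 0.5) * H / n           # cell-centre positions
      u = 6.0 * u_mean * (y / H) * (1.0 - y / H) # plane Poiseuille profile
      dpdx = 12.0 * mu * u_mean / H**2           # exact pressure gradient

      # choose D_eq per cell so dp/dx = 32*mu*u/D**2 (i.e. f = 64/Re) holds
      # with the *local* cell velocity, instead of one channel-wide D_h
      D_eq = np.sqrt(32.0 * mu * u / dpdx)
      D_h = 2.0 * H                              # conventional 4A/P value
      print("conventional D_h:", D_h)
      print("cell-wise D_eq :", np.round(D_eq, 5))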

  15. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  16. Computer modelling as a tool for understanding language evolution

    NARCIS (Netherlands)

    de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D

    2006-01-01

    This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the

  17. Scratch as a Computational Modelling Tool for Teaching Physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  18. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  19. Computer-based modeling in extract sciences research - I ...

    African Journals Online (AJOL)

    Specifically, in the discipline of chemistry, it has been of great utility. Its use dates back to the 17th Century and includes such wide areas as computational chemistry, chemoinformatics, molecular mechanics, chemical dynamics, molecular dynamics, molecular graphics and algorithms. Modeling has been employed ...

  20. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...